There are many ways to implement an event bus on AWS. Whenever we talk about an event bus and AWS, the first service that comes to mind is EventBridge, and it is indeed a natural fit for this kind of architecture. However, the former engineers at my current company decided to use a different service: S3. So what is an event bus in general?
An event bus is a central hub: all producers publish events to it, and all consumers retrieve them from it. It also needs a filtering mechanism so that each event reaches the correct consumers. Now that we know what an event bus is, how can we use S3 to implement one?
S3 has event notifications: every time an object is created or updated, S3 can send an event to other AWS services. Mapped onto the event bus concepts:
- Publish: create or update an object.
- Consume: the created/updated event is delivered to other AWS services acting as consumers.
- Filter: the prefix of the object key (the "folder"); see the configuration sketch below.
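For example, the prefix filtering can be configured directly on the bucket with Event Notifications. A minimal sketch with boto3; the bucket name, queue ARN, and `sensors/temperature/` prefix are placeholders for whatever your setup uses:

```python
import boto3

s3 = boto3.client("s3")

# Route only objects under sensors/temperature/ to this consumer's queue.
# (The queue policy must allow s3.amazonaws.com to SendMessage.)
s3.put_bucket_notification_configuration(
    Bucket="sensor-event-bus",  # hypothetical bucket
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "Id": "temperature-events",
                "QueueArn": "arn:aws:sqs:ap-southeast-1:123456789012:temperature-consumer",
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [
                            {"Name": "prefix", "Value": "sensors/temperature/"}
                        ]
                    }
                },
            }
        ]
    },
)
```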
We can put SQS or SNS in front of the consumers so that several services consume the same event, or different ones. The S3 notification message always contains the object key, so we can also implement filtering inside any consumer, as in the sketch below.
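For instance, with an S3 → SNS → Lambda chain, each Lambda can inspect the object key and skip events it does not care about. A rough sketch; the prefix is an assumption, not our actual layout:

```python
import json

INTERESTING_PREFIX = "sensors/temperature/"  # hypothetical prefix this consumer cares about

def handler(event, context):
    # In an S3 -> SNS -> Lambda chain, the S3 notification is the SNS message body.
    for sns_record in event["Records"]:
        s3_event = json.loads(sns_record["Sns"]["Message"])
        for s3_record in s3_event.get("Records", []):
            key = s3_record["s3"]["object"]["key"]
            if not key.startswith(INTERESTING_PREFIX):
                continue  # filter out events meant for other consumers
            bucket = s3_record["s3"]["bucket"]["name"]
            print(f"Processing s3://{bucket}/{key}")
```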
Challenges:
- If we use SNS to fan the event out, messages that reach the wrong Lambda function are just filtered out and dropped there, which wastes invocations. It is better to filter events up front with S3 Event Notifications (prefix rules).
- We can’t pass message attributes directly. The event only carries the object event type (Created/Updated) and the object key; we can’t add any more attributes.
Despite the challenges, we are still using it as an event bus, simply for collecting sensor data. True, we can’t add message attributes directly, but we put the message inside the object itself, in XML format. S3 is also a very good event store: we can go back to the objects whenever we need to investigate an issue or use the data for analysis. Our logic is very simple, but it handles a large volume of sensor data: one event object is created, and one consumer, a Lambda function, writes the set of sensor readings to DynamoDB.
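A minimal sketch of that consumer, assuming the Lambda is triggered directly by S3, a hypothetical `SensorReadings` table, and a simple `<readings><reading .../></readings>` XML layout:

```python
import boto3
import xml.etree.ElementTree as ET
from urllib.parse import unquote_plus

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("SensorReadings")  # hypothetical table

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes the key in the notification payload.
        key = unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Assumed layout: <readings><reading sensorId=".." timestamp=".." value=".."/></readings>
        root = ET.fromstring(body)
        with table.batch_writer() as batch:
            for reading in root.iter("reading"):
                batch.put_item(Item={
                    "sensorId": reading.get("sensorId"),
                    "timestamp": reading.get("timestamp"),
                    "value": reading.get("value"),
                })
```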
In the first step, the producers can be IoT devices, or a hub that collects the sensor data, processes it, and either creates the S3 object directly or sends the data to an AWS gateway, where another Lambda function writes the objects. There are various ways to build the producers. Because our filtering is very simple, this architecture is good enough. This way, we can store a large batch of sensor messages in one XML file at one-minute intervals.
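On the hub side, a producer could look roughly like this; the bucket name, prefix, and the `poll_sensors()` collection step are placeholders:

```python
import boto3
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

s3 = boto3.client("s3")
BUCKET = "sensor-event-bus"      # hypothetical bucket
PREFIX = "sensors/temperature/"  # the prefix doubles as the routing key

def flush(readings):
    """Serialize buffered readings to XML and publish them as one S3 object."""
    root = ET.Element("readings")
    for r in readings:
        ET.SubElement(root, "reading", sensorId=r["sensorId"],
                      timestamp=r["timestamp"], value=str(r["value"]))
    key = f"{PREFIX}{datetime.now(timezone.utc).strftime('%Y/%m/%d/%H%M%S')}.xml"
    s3.put_object(Bucket=BUCKET, Key=key, Body=ET.tostring(root))

# A hub process could buffer readings and flush once per minute, e.g.:
#     buffer.extend(poll_sensors())  # hypothetical collection step
#     flush(buffer); buffer.clear()
#     time.sleep(60)
```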
If we wanted higher throughput and lower latency, we might consider using EventBridge. However, EventBridge allows a maximum payload size of 256 KB per event, and this limit covers the entire event, including all attributes and metadata. If your event data exceeds that size, you’ll need to reduce the payload or use an additional service, such as Amazon S3 as above, to store the larger data and reference it from your event.
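If we ever moved to EventBridge, the usual workaround is a claim check: store the large payload in S3 and publish only a small reference through the bus, staying well under 256 KB. A sketch; the source and detail-type names are assumptions:

```python
import json
import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

def publish(bucket, key, payload: bytes):
    # Store the heavy payload in S3, then send only a reference through EventBridge
    # so the event itself stays far below the 256 KB limit.
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    events.put_events(Entries=[{
        "Source": "sensors.ingest",          # hypothetical source name
        "DetailType": "SensorBatchStored",   # hypothetical detail type
        "Detail": json.dumps({"bucket": bucket, "key": key}),
        "EventBusName": "default",
    }])
```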
Each solution has its pros and cons. There is no best one, only a suitable one.