As we know, Ruby on Rails is a full-stack framework, typically deployed as a monolith. That is fine: there is no best architecture, only a “fit for purpose” architecture. Much has been written about the pros and cons of monoliths versus microservices, but here I’d like to discuss event-driven architecture. How can we apply it to Ruby on Rails to decouple services? Services? There are no microservices here. We simply run our application as a distributed system: a typical Rails app already has web servers plus separate instances for background jobs. Let’s say we use Sidekiq, so any heavy workload is pushed to the background servers. That is already a kind of distributed system, since distribution is ultimately about physical separation.
Going straight to the problem: a company has many robots, and users open a tracking page on their devices. They want to see an alert every time a robot enters a geofence, in real time. The company is testing AI to see whether these robots can find a target location in a maze.
It’s just a requirement for fun, but we see this pattern regularly in real development: a responsive frontend providing real-time alerts.
From the monolith perspective, the naive (and worst) solution is a JavaScript function that polls the server to check the condition. However, we have many robots. Let’s say 100 robots, each polled every 10 seconds: that is 100 × 6 = 600 requests per minute.
// Naive polling check (the endpoint path is illustrative)
const isInRedzone = async (robotId) => {
  const res = await fetch(`/robots/${robotId}/redzone_status`);
  return (await res.json()).in_redzone;
};
Therefore, one user opening this page makes 600 requests per minute, and 100 users would make 60,000 requests to the web servers every minute. The same check is unnecessarily repeated, and it slows the web servers down. The fix is to move this heavy workload to the background servers. But how do we do that when the frontend requires real-time alerts?
The solution is not new: event-driven architecture. This implementation targets a monolith. We pick AWS SNS as the event broker, though RabbitMQ or Kafka would work too. Again, the design is fit for purpose.
This is a pub/sub mechanism. Whenever a user opens the page, the server creates an SQS queue and returns the queue name for the client to subscribe to; the client uses long polling to consume from that queue. In the meantime, the background servers run the geofence check every minute for all robots in parallel, and publish a message to an AWS SNS topic whenever a robot enters the redzone. SNS then fans the message out to all the queues (one per user session).
SNS topic: robot1
SQS: user1-robot1, user2-robot1, user3-robot1
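The wiring above can be sketched in plain Ruby. In a real app the clients would be `Aws::SNS::Client` and `Aws::SQS::Client` from the aws-sdk gems, the job would include `Sidekiq::Job` and be scheduled every minute (for example with sidekiq-cron), and the client would long-poll with `receive_message` and a `wait_time_seconds` of up to 20. Here the clients are injected as plain objects so the sketch runs anywhere, and all names (`RobotRedzoneCheckJob`, the topic ARN, the circular geofence) are illustrative, not from a real system:

```ruby
require "json"

# Called when a user opens the tracking page: create a per-session queue,
# subscribe it to the robot's SNS topic, and return the queue name so the
# client can long-poll it.
def subscribe_session(sns, sqs, user_id, robot_id, topic_arn)
  queue_name = "#{user_id}-#{robot_id}" # e.g. "user1-robot1"
  queue_url  = sqs.create_queue(queue_name: queue_name)
  # Real AWS expects the queue's ARN as the endpoint, plus a queue policy.
  sns.subscribe(topic_arn: topic_arn, protocol: "sqs", endpoint: queue_url)
  queue_name
end

# Runs every minute on the background servers, checking all robots.
class RobotRedzoneCheckJob
  REDZONE_CENTER = [0.0, 0.0] # illustrative circular geofence
  REDZONE_RADIUS = 5.0

  def initialize(sns, topic_arn)
    @sns = sns
    @topic_arn = topic_arn
  end

  def perform(robots)
    robots.each do |robot|
      next unless in_redzone?(robot)

      # SNS fans this single publish out to every subscribed session queue.
      @sns.publish(
        topic_arn: @topic_arn,
        message: { robot_id: robot[:id], alert: "redzone" }.to_json
      )
    end
  end

  private

  def in_redzone?(robot)
    dx = robot[:x] - REDZONE_CENTER[0]
    dy = robot[:y] - REDZONE_CENTER[1]
    Math.sqrt(dx * dx + dy * dy) <= REDZONE_RADIUS
  end
end
```

Injecting the clients keeps the job testable without AWS, and it keeps the web tier doing only cheap work: one queue creation per page view, while the expensive checks run once per minute regardless of how many users are watching.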
That was the hard use case. For a simpler one, say a request that just waits on a long-running computation like data analytics, we don’t need SNS fan-out or a scheduled job running every N seconds. The server simply creates a queue, enqueues a Sidekiq job, and returns the queue name to the client for subscribing. Microservices would make this more complex to implement, but in a Rails monolith we do it simply with Sidekiq jobs, and we can run multiple Sidekiq servers for efficiency.
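The simpler flow can be sketched the same way. The names here (`start_analytics`, `AnalyticsJob`) are hypothetical, and the SQS client is again injected as a plain object so the sketch runs without AWS; in a real app it would be an `Aws::SQS::Client` and the job would be dispatched with Sidekiq’s `perform_async`:

```ruby
require "json"
require "securerandom"

class AnalyticsJob
  def initialize(sqs)
    @sqs = sqs
  end

  # The heavy work runs on a background server; the result is pushed to the
  # caller's per-request queue, which the client is long-polling.
  def perform(queue_url, numbers)
    result = numbers.sum # stand-in for the real long-running analytics
    @sqs.send_message(queue_url: queue_url,
                      message_body: { result: result }.to_json)
  end
end

# Roughly what the controller action does: create a per-request queue,
# enqueue the job, and return the queue name for the client to subscribe to.
def start_analytics(sqs, numbers)
  queue_name = "analytics-#{SecureRandom.uuid}"
  queue_url  = sqs.create_queue(queue_name: queue_name)
  AnalyticsJob.new(sqs).perform(queue_url, numbers) # perform_async in real Sidekiq
  queue_name
end
```

The request returns immediately with the queue name; the client then long-polls that queue and receives the result as soon as the background server finishes, with no SNS topic or recurring schedule involved.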