I'm currently studying microservice data replication, and one problem I've run into is designing an architecture that guarantees atomic event publication. As I understand it, the typical flow is:
1. Commit the changes to the database.
2. Publish an event describing the changes to the global message bus.
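To make this concrete, here is a rough sketch of that flow (the `db` and `bus` handles are hypothetical placeholders, not any particular library's API):

```python
import json

# Naive two-step flow: commit first, publish second.
def update_order_naively(db, bus, order_id, new_status):
    with db.transaction():  # step 1: commit the change to the database
        db.execute(
            "UPDATE orders SET status = %s WHERE id = %s",
            (new_status, order_id),
        )
    # A crash right here loses the event: the row is committed,
    # but nothing is ever published to the bus.
    bus.publish("orders", json.dumps({"id": order_id, "status": new_status}))  # step 2
```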
But what if, for example, power fails between steps 1 and 2? In a naively built system, that means the changes are saved but the event describing them is never published. I've considered the following ideas for providing stronger guarantees, but I'm not entirely sure of all the pros and cons of each:
A: Use an embedded database (e.g. SQLite) inside my microservice instance to track each complete transaction, from the commit to the main database through to publishing the event.
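Roughly what I have in mind for A (again, `main_db` and `bus` are placeholder handles; only the SQLite part is real):

```python
import json
import sqlite3

# A local SQLite file journals each intent, so a restarted instance
# can replay anything not yet marked 'published'.
journal = sqlite3.connect("event_journal.db")
journal.execute(
    "CREATE TABLE IF NOT EXISTS journal (id INTEGER PRIMARY KEY, payload TEXT, state TEXT)"
)

def update_with_journal(main_db, bus, order_id, new_status):
    payload = json.dumps({"id": order_id, "status": new_status})
    cur = journal.execute(
        "INSERT INTO journal (payload, state) VALUES (?, 'pending')", (payload,)
    )
    journal.commit()
    entry_id = cur.lastrowid

    with main_db.transaction():  # commit the business change to the main database
        main_db.execute(
            "UPDATE orders SET status = %s WHERE id = %s", (new_status, order_id)
        )

    bus.publish("orders", payload)  # publish, then mark the journal entry done
    # NB: the journal commit and the main-database commit are still two
    # separate commits, so replay after a crash can double-publish.
    journal.execute("UPDATE journal SET state = 'published' WHERE id = ?", (entry_id,))
    journal.commit()
```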
B: Create an events table in my main database and use a database transaction to insert the event and commit the corresponding changes atomically. The service then pushes the event to the bus and makes another commit in the main database to mark the event as published.
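A sketch of B (same hypothetical `db`/`bus` handles; `db.query` is an assumed helper that yields result rows):

```python
import json

# The event row and the business change share one atomic commit;
# publishing and marking as published happen afterwards.
def update_with_outbox(db, bus, order_id, new_status):
    with db.transaction():  # one commit covers both writes
        db.execute(
            "UPDATE orders SET status = %s WHERE id = %s", (new_status, order_id)
        )
        db.execute(
            "INSERT INTO events (topic, payload, published) VALUES (%s, %s, FALSE)",
            ("orders", json.dumps({"id": order_id, "status": new_status})),
        )

    # Second phase: push any unpublished events, then mark them.  A crash
    # between publish and mark leaves the row unpublished, so this loop
    # re-sends it on the next run (at-least-once delivery).
    for event_id, topic, payload in db.query(
        "SELECT id, topic, payload FROM events WHERE published = FALSE"
    ):
        bus.publish(topic, payload)
        db.execute("UPDATE events SET published = TRUE WHERE id = %s", (event_id,))
```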
C: As in B, create an events table in my main database and use a database transaction to insert the event and commit the corresponding changes atomically. Then notify a dedicated EventPusher service that a new event has been added (either explicitly via REST/messaging from the service, or via database hooks). The EventPusher service queries the events table, pushes the events to the bus, and marks each as published once confirmed. If a certain amount of time passes without any notification, EventPusher performs a manual poll of the table.
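The EventPusher loop I'm imagining would look something like this (placeholder handles again; `notifications` stands in for whatever REST endpoint or database hook feeds it, and the timeout value is illustrative):

```python
import queue

POLL_TIMEOUT_SECONDS = 5.0  # fallback polling interval; an assumption

def event_pusher_loop(db, bus, notifications):
    while True:
        try:
            notifications.get(timeout=POLL_TIMEOUT_SECONDS)  # wait for a nudge
        except queue.Empty:
            pass  # no notification in time: fall through to a manual poll
        for event_id, topic, payload in db.query(
            "SELECT id, topic, payload FROM events WHERE published = FALSE ORDER BY id"
        ):
            bus.publish(topic, payload)
            db.execute("UPDATE events SET published = TRUE WHERE id = %s", (event_id,))
```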
What are the pros and cons of each of the options above? Is there another good option I have yet to consider?