Microservices: atomic events

I am currently studying data replication between microservices, and one problem I have run into is choosing the right architecture to make event publication atomic. As I understand it, the usual flow is:

  • Commit changes to the database.
  • Publish an event describing the changes to the global message bus.

But what if, for example, the power fails between steps 1 and 2? In a naively built system this means the changes are saved, but the event describing them is never published. I have considered the following ideas to get stronger guarantees, but I'm not entirely sure of all the pros and cons of each:
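The failure window can be sketched in a few lines; here sqlite3 and a plain list stand in for the real database and message bus (illustrative stand-ins only):

```python
import sqlite3

def make_db():
    # In-memory stand-in for the service's main database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'Alice')")
    db.commit()
    return db

def update_user_naive(db, bus, user_id, name):
    # Step 1: commit the change to the database.
    db.execute("UPDATE users SET name = ? WHERE id = ?", (name, user_id))
    db.commit()
    # Step 2: publish the event. If the process dies between step 1 and
    # here, the row is updated but the event is lost forever.
    bus.append({"type": "UserUpdated", "id": user_id, "name": name})
```

The two steps are separate operations with no shared transaction, which is exactly where the atomicity problem comes from.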

A: Use an embedded database (e.g. SQLite) inside my microservice instance to track the whole transaction, from the commit to the main database through to publishing the event.

B: Create an events table in my main database and use a database transaction to insert the event and commit the corresponding changes atomically. The service would then push the event to the bus and make another commit in the main database to mark the event as published.
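Option B can be sketched roughly as follows, again with sqlite3 and a list as illustrative stand-ins for the main database and the bus (the schema and names are assumptions, not part of the question):

```python
import json
import sqlite3

def make_db():
    # A users table plus an events table living in the same database.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'Alice')")
    db.execute(
        "CREATE TABLE events "
        "(id INTEGER PRIMARY KEY, payload TEXT, published INTEGER DEFAULT 0)"
    )
    db.commit()
    return db

def update_user(db, user_id, name):
    # One local transaction covers both the state change and the event row,
    # so either both become durable or neither does.
    with db:
        db.execute("UPDATE users SET name = ? WHERE id = ?", (name, user_id))
        db.execute(
            "INSERT INTO events (payload) VALUES (?)",
            (json.dumps({"type": "UserUpdated", "id": user_id, "name": name}),),
        )

def relay_events(db, bus):
    # Second step: push unpublished events to the bus, then mark each one
    # as published in a follow-up commit.
    rows = db.execute(
        "SELECT id, payload FROM events WHERE published = 0 ORDER BY id"
    ).fetchall()
    for event_id, payload in rows:
        bus.append(json.loads(payload))  # the real broker publish goes here
        with db:
            db.execute("UPDATE events SET published = 1 WHERE id = ?", (event_id,))
```

If the service crashes between publishing and the mark-as-published commit, the event stays unpublished and will be sent again on the next relay pass, which is why consumers must tolerate duplicates.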

C: As above, create an events table in my main database and use a database transaction to insert the event and commit the corresponding changes atomically. Then notify (either directly via REST/messaging from the service, or through database hooks) a dedicated EventPusher service that a new event has been added. The EventPusher service would query the events table, push events to the bus, and mark each one as published after confirmation. If a certain amount of time passes without any notification, EventPusher falls back to polling the table itself.
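A minimal sketch of the EventPusher idea, assuming the same hypothetical events table as in option B; the class and method names are illustrative, not from any framework:

```python
import json
import sqlite3
import threading

class EventPusher:
    """Option C sketch: drains the events table on notification or on a
    fallback poll interval."""

    def __init__(self, db, bus, poll_interval=5.0):
        self.db = db
        self.bus = bus
        self.poll_interval = poll_interval
        self._wakeup = threading.Event()

    def notify(self):
        # Called via REST/messaging or a database hook when an event is added.
        self._wakeup.set()

    def drain_once(self):
        # Publish every unpublished event, oldest first, then mark it.
        rows = self.db.execute(
            "SELECT id, payload FROM events WHERE published = 0 ORDER BY id"
        ).fetchall()
        for event_id, payload in rows:
            self.bus.append(json.loads(payload))  # real broker publish goes here
            with self.db:
                self.db.execute(
                    "UPDATE events SET published = 1 WHERE id = ?", (event_id,)
                )

    def run_forever(self):
        while True:
            # Wake on notification, or after poll_interval as a safety net
            # for missed notifications.
            self._wakeup.wait(timeout=self.poll_interval)
            self._wakeup.clear()
            self.drain_once()
```

The timeout on `wait` is what implements the "manual request after a certain amount of time" fallback described above.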

What are the pros and cons of each of the above options? Is there another, better option that I have yet to consider?

+6
2 answers

But what if, for example, a power failure occurred between steps 1 and 2?

Consider the following approach:

    using (var scope = new TransactionScope())
    {
        _repository.UpdateUser(data);                  // ORM write enlisted in the scope
        _eventStore.Publish(new UserUpdated { ... });  // publish inside the same scope
        scope.Complete();                              // commit only if both steps succeeded
    }

This pseudo-code assumes you are using something like Entity Framework together with TransactionScope.

Thus, even if your event store is implemented as some external service, your UpdateUser transaction will not be committed until the event store publish succeeds. There is still a small chance of failure when you have already received a response from _eventStore but have not yet completed the ORM transaction. In that worst case you end up with a published event but without the corresponding data in the database, which normally stores the latest snapshot. In essence, the snapshot becomes stale for that aggregate.

If your domain cannot tolerate such a risk, you should not store state/snapshots in a relational database at all. The event store then becomes the single source of truth you can rely on (this is the approach recommended by many CQRS/ES practitioners).
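To make the "event store as single source of truth" idea concrete, here is a minimal fold that derives current state by replaying the stream; the event shapes are invented for illustration:

```python
def rebuild_user(event_stream, user_id):
    # With event sourcing there is no separate snapshot that can drift out
    # of sync: state is recomputed from the events themselves.
    state = None
    for event in event_stream:
        if event["id"] != user_id:
            continue
        if event["type"] == "UserCreated":
            state = {"id": user_id, "name": event["name"]}
        elif event["type"] == "UserUpdated":
            state["name"] = event["name"]
    return state
```

Any read model built this way can be thrown away and rebuilt from the stream, which is why losing the snapshot is no longer a correctness problem.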

B: Create an events table in my main database and use a database transaction to insert the event and commit the corresponding changes atomically. The service would then push the event to the bus and make another commit in the main database to mark the event as published.

This approach will also work; however, you would be reinventing the wheel instead of reusing a battle-tested event store implementation.

Options A and C seem too exotic / over-engineered to be seriously considered viable.

+3

I was wondering the same thing. Apparently, there are several ways to deal with the atomicity of updating the database and publishing the corresponding event.

(Pattern: event-driven architecture)

The approaches described there look like your ideas.
An example would be:

  • The Order service inserts a row into the ORDER table and inserts an OrderCreated event into the EVENT table [within a single local database transaction].
  • An event publisher thread queries the EVENT table for unpublished events, publishes them, and then updates the EVENT table to mark those events as published.

(Event-Driven Data Management for Microservices)

If at some point the event publisher crashes or otherwise fails, the events it did not process are still marked as unpublished.
So when the event publisher comes back online, it immediately publishes those events.

If the event publisher published an event and then crashed before marking it as published, the event may be published more than once.
For this reason, it is important that subscribers de-duplicate the messages they receive.
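Subscriber-side de-duplication can be as simple as tracking event ids that have already been processed; this sketch keeps the seen-set in memory, while a production version would persist it (an assumption, not something the answer prescribes):

```python
class DedupingSubscriber:
    # At-least-once delivery means duplicates are possible; remembering the
    # ids of events already handled makes processing idempotent.

    def __init__(self):
        self._seen = set()
        self.handled = []

    def on_message(self, event):
        if event["event_id"] in self._seen:
            return False  # duplicate delivery; ignore it
        self._seen.add(event["event_id"])
        self.handled.append(event)
        return True
```

This requires every published event to carry a stable unique id, which the publisher should assign when the event row is first inserted.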

In addition, an answer to a Stack Overflow question that may sound completely different, but essentially asks the same thing, links to a couple of relevant blog posts.

+2

Source: https://habr.com/ru/post/1014095/

