Event Sourcing Race Condition

Here is a good article that describes what Event Sourcing (ES) is and how to work with it.

Everything in it makes sense, but one image bothers me. Here it is:

[Image: ES example]

I understand that in distributed event-based systems we can only achieve eventual consistency. Be that as it may... how do we guarantee that we do not book more seats than are available? This is especially a problem when there are many concurrent requests.

It may happen that n aggregate instances are hydrated with the same number of reserved seats, and every one of those instances then allows a reservation.

2 answers

There are several ways to deal with such a scenario.

First, the event stream has a current version, namely the version of the last event appended. This means you cannot, or should not be able to, persist the event stream if it is no longer at the version it was at when you loaded it. Since the first write increments the version of the event stream, the second, concurrent write will not be permitted. And since events are not emitted as such, but rather are the result of the events having been persisted, we would not have the kind of race condition in your example.
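To make that concrete, here is a minimal sketch of such a version check. Everything in it (the InMemoryEventStore class, its method names, the event shape) is illustrative and not taken from any specific library:

```typescript
interface DomainEvent {
  type: string;
  data: unknown;
}

class ConcurrencyError extends Error {}

// Toy event store: a stream's version is simply the count of its events.
class InMemoryEventStore {
  private streams = new Map<string, DomainEvent[]>();

  load(streamId: string): { events: DomainEvent[]; version: number } {
    const events = this.streams.get(streamId) ?? [];
    return { events, version: events.length };
  }

  // The append succeeds only if the stream is still at the version the
  // caller saw when it loaded -- otherwise another writer got there first.
  append(streamId: string, expectedVersion: number, newEvents: DomainEvent[]): void {
    const events = this.streams.get(streamId) ?? [];
    if (events.length !== expectedVersion) {
      throw new ConcurrencyError(
        `stream ${streamId} is at version ${events.length}, expected ${expectedVersion}`
      );
    }
    this.streams.set(streamId, [...events, ...newEvents]);
  }
}
```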

Now, if your commands are processed off a queue, any failures should be retried. If you are still unable to process the request, you need to enter the "I'm sorry, Dave. I'm afraid I can't do that" scenario, telling the user that they should try something else.
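A command handler along these lines might retry a few times on a version conflict before giving up. This builds on the hypothetical InMemoryEventStore sketched above; the capacity constant and the SeatsReserved event are likewise assumptions made for illustration:

```typescript
const TOTAL_SEATS = 100; // assumed capacity, purely for illustration

// Rebuild the current reservation count by folding over the stream's events.
function reservedSeats(events: DomainEvent[]): number {
  return events
    .filter((e) => e.type === "SeatsReserved")
    .reduce((sum, e) => sum + (e.data as { count: number }).count, 0);
}

// Load, check the invariant, append at the loaded version; retry on conflict.
function reserveSeats(store: InMemoryEventStore, streamId: string, count: number): boolean {
  for (let attempt = 0; attempt < 3; attempt++) {
    const { events, version } = store.load(streamId);
    if (reservedSeats(events) + count > TOTAL_SEATS) {
      return false; // the business rule genuinely fails: "I'm sorry, Dave"
    }
    try {
      store.append(streamId, version, [{ type: "SeatsReserved", data: { count } }]);
      return true;
    } catch (e) {
      if (!(e instanceof ConcurrencyError)) throw e;
      // Another command won the write; loop to reload and re-check.
    }
  }
  return false; // still conflicting after retries; ask the user to try again
}
```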

Another option is to start processing by issuing an update against some table row, to serialize any calls to the aggregate. Probably not the most elegant approach, but it does create a system-wide lock on the processing of that aggregate.
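A sketch of that pessimistic variant, assuming a hypothetical minimal SQL client (`db.query`) and an `aggregate_lock` table with one row per aggregate; none of these names come from a particular library:

```typescript
type Db = { query: (sql: string, params?: unknown[]) => Promise<unknown> };

async function withAggregateLock<T>(
  db: Db,
  aggregateId: string,
  body: () => Promise<T>
): Promise<T> {
  await db.query("BEGIN");
  try {
    // FOR UPDATE holds a row lock until COMMIT/ROLLBACK, so concurrent
    // handlers for the same aggregate queue up here one at a time.
    await db.query("SELECT 1 FROM aggregate_lock WHERE id = $1 FOR UPDATE", [aggregateId]);
    const result = await body();
    await db.query("COMMIT");
    return result;
  } catch (err) {
    await db.query("ROLLBACK");
    throw err;
  }
}
```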

I would say that, to a large extent, you cannot trust the read store when it comes to transactional processing.

Hope that helps :)


I understand that in distributed event-based systems we can only achieve eventual consistency. Be that as it may... how do we keep more seats from being booked than we have? Especially with many concurrent requests?

All events are private to the command executing them until the book of record acknowledges a successful write. So we do not publish the events at all, and we do not report back to the caller, without knowing that our version of "what happened next" was accepted by the book of record.

Writing the events is analogous to a compare-and-swap of the tail pointer in the aggregate's history. If another command changed the tail pointer while we were running, our swap fails, and we have to mitigate / retry / fail.

In practice, this is usually implemented by having the write command to the book of record include the expected position of the write. (Example: ExpectedVersion in GES.)
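With Event Store's Node client this looks roughly like the following. Treat it as a hedged sketch: the package (@eventstore/db-client), the connection string, and the stream/event names are assumptions, and exact signatures can vary across client versions, so check the current documentation:

```typescript
import { EventStoreDBClient, jsonEvent } from "@eventstore/db-client";

async function main() {
  const client = EventStoreDBClient.connectionString("esdb://localhost:2113?tls=false");

  const event = jsonEvent({
    type: "SeatsReserved",
    data: { count: 2 },
  });

  // The store rejects this append with a wrong-expected-version error if the
  // stream has moved past revision 41 since we loaded it -- the compare-and-swap
  // on the tail pointer described above.
  await client.appendToStream("show-1234", [event], { expectedRevision: 41n });
}

main().catch(console.error);
```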

The book of record is expected to reject the write if the expected position is wrong. Think of the position as a unique key in an RDBMS table, and you have the right idea.

This means that writes to the event stream are in fact strongly consistent: the book of record only permits the write if the position you are writing to is correct, which means the position has not changed since you loaded the copy of the history you are working from.

It is typical for commands to read event streams directly from the book of record, rather than from the eventually consistent read models.

It may happen that n AggregateRoots are hydrated with the same number of reserved seats, which means the check in the reserve method will not help. Then all n AggregateRoots emit a successful reservation event.

Every bit of state needs to be supervised by a single aggregate root. You can have n different copies of that root running, all competing to write to the same history, but the compare-and-swap operation permits only one winner, which ensures that "the" aggregate has a single, internally consistent history.
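Reusing the hypothetical InMemoryEventStore sketched in the first answer, the "one winner" behavior looks like this:

```typescript
// Two copies of the same aggregate load the same history and race to append.
const store = new InMemoryEventStore();
const copy1 = store.load("show-1234"); // version 0
const copy2 = store.load("show-1234"); // also version 0

// The first writer wins; the stream advances to version 1.
store.append("show-1234", copy1.version, [{ type: "SeatsReserved", data: { count: 2 } }]);

// The second writer still expects version 0, so this throws ConcurrencyError:
// its copy of the history is stale and must be reloaded.
store.append("show-1234", copy2.version, [{ type: "SeatsReserved", data: { count: 3 } }]);
```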


Source: https://habr.com/ru/post/1259765/

