How to update / migrate data when using CQRS and EventStore?

I am currently immersing myself in the CQRS architecture together with the Event Sourcing pattern.

It opens applications up to a new dimension of scalability and flexibility, as well as testability.

However, I am still stuck on how to handle data migration correctly.

Here is a specific use case:

Let's say I want to manage blogs with articles and comments.

On the write side I use MySQL and on the read side ElasticSearch: every time I process a command, I save data on the write side and publish an event that updates the read side.
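
For illustration, here is a minimal sketch of that flow in TypeScript. All names here (handleCreateArticle, ArticleCreated, the db/bus/es parameters) are hypothetical and not taken from any specific library:

```typescript
// Hypothetical write-side command handler: persist to MySQL, then publish an event.
interface ArticleCreated {
  type: "ArticleCreated";
  articleId: string;
  title: string;
  tags: string[];
}

async function handleCreateArticle(
  db: { insertArticle(id: string, title: string): Promise<void> },
  bus: { publish(event: ArticleCreated): Promise<void> },
  cmd: { articleId: string; title: string; tags: string[] }
): Promise<void> {
  await db.insertArticle(cmd.articleId, cmd.title); // write side (MySQL)
  await bus.publish({ type: "ArticleCreated", ...cmd }); // event for the read side
}

// Hypothetical read-side projection: denormalize the event into ElasticSearch.
async function onArticleCreated(
  es: { index(doc: object): Promise<void> },
  event: ArticleCreated
): Promise<void> {
  await es.index({ id: event.articleId, title: event.title });
}
```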

Now let's say that I have a ViewModel named ArticleSummary, which contains an id and a title.

I get a new feature request to include the article's tags in my ArticleSummary, so I would add a collection of tags to my model.
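
The change to the view model itself is trivial; a hypothetical sketch:

```typescript
// Before: the ArticleSummary view model carries only id and title.
interface ArticleSummaryV1 {
  id: string;
  title: string;
}

// After: the new feature adds the article's tags to the summary.
interface ArticleSummaryV2 {
  id: string;
  title: string;
  tags: string[]; // newly added; existing read-side documents won't have this field yet
}
```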

Given that the tags already exist on my write side, I will need to update or rebuild the read-side "table" so that the newly included data is populated correctly.

I am aware of the event-log replay strategy, which replays all events in order to "refresh" all ViewModels, but seriously, is this viable when we have a billion rows?

Are there any proven strategies? Any feedback?

2 answers

I am aware of the event-log replay strategy, which replays all events in order to "refresh" all ViewModels, but seriously, is it viable when we have a billion rows?

I would say yes :)

You are going to write a handler for the new summary feature that will update your query side anyway, so you already have the code. Writing special one-off migration code may not buy you that much. I would go with dedicated migration code when you need to do an initial load of, say, a new system that requires some data to be converted up front; but in this case your infrastructure already exists.

You only need to dispatch the relevant events to the new handler, so you do not have to replay everything either.
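
As a sketch of that idea, assuming the event store exposes a sequential read and events carry a type tag (the names and store interface below are hypothetical):

```typescript
// Replay only the event types the new handler cares about, instead of the full log.
const RELEVANT_TYPES = new Set(["ArticleCreated", "ArticleTagged"]);

async function rebuildArticleSummaries(
  store: { readAll(): AsyncIterable<{ type: string; payload: unknown }> },
  handler: { handle(event: { type: string; payload: unknown }): Promise<void> }
): Promise<void> {
  for await (const event of store.readAll()) {
    if (RELEVANT_TYPES.has(event.type)) {
      await handler.handle(event); // only the new denormalizer sees these events
    }
  }
}
```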

In any case, if you have a billion rows of data, your servers can probably handle the load anyway :)


I am currently using JOliver's NEventStore.

When we started out, we replayed our entire store through our denormalizers / event handlers every time the application started.

We initially stored all our data in memory, but we knew that this approach would not be viable in the long run.

The approach we currently use is that we can replay an individual denormalizer, which is much faster because you are not replaying events through denormalizers that have not changed.

We found that we needed a different view of our commits so that we could query for all the events we process by event type - a query that cannot be run against a normal store.
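
A rough sketch of what such a per-denormalizer replay could look like, assuming you maintain a secondary index of event positions keyed by event type (this index and these interfaces are hypothetical, not NEventStore's built-in API):

```typescript
// Hypothetical secondary index: event types -> ordered positions in the event log.
interface EventTypeIndex {
  positionsFor(eventTypes: string[]): AsyncIterable<number>;
}

interface EventStoreReader {
  readAt(position: number): Promise<{ type: string; payload: unknown }>;
}

// Replay a single denormalizer using only the event types it subscribes to.
async function replayDenormalizer(
  index: EventTypeIndex,
  store: EventStoreReader,
  denormalizer: {
    subscribedTypes: string[];
    handle(e: { type: string; payload: unknown }): Promise<void>;
  }
): Promise<void> {
  for await (const pos of index.positionsFor(denormalizer.subscribedTypes)) {
    await denormalizer.handle(await store.readAt(pos));
  }
}
```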


Source: https://habr.com/ru/post/1501511/

