We currently use a SQL-backed event store (a typical two-table implementation). Although we only use the event store for writes, some people on the team fear that rehydrating aggregates from long event streams may become slow, so it was suggested that, instead of adding snapshots here and there within the streams, we maintain a fully consistent (with the event streams) snapshot of each aggregate's latest state, stored as JSON. All queries in the system are ultimately served by the read side, a typical SQL database that is updated from the event-sourced (write) side.
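For context, here is a minimal sketch of what I mean, using Python's sqlite3 for brevity; the table and column names are illustrative, not our actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Typical two-table event store: one row per stream, one per event.
    CREATE TABLE streams (
        aggregate_id TEXT PRIMARY KEY,
        version      INTEGER NOT NULL          -- last event number written
    );
    CREATE TABLE events (
        aggregate_id TEXT    NOT NULL,
        version      INTEGER NOT NULL,         -- position within the stream
        payload      TEXT    NOT NULL,         -- event body as JSON
        PRIMARY KEY (aggregate_id, version)
    );
    -- Proposed addition: the latest state of every aggregate,
    -- kept fully consistent with its event stream.
    CREATE TABLE snapshots (
        aggregate_id TEXT PRIMARY KEY,
        version      INTEGER NOT NULL,         -- version the state reflects
        state        TEXT    NOT NULL          -- aggregate state as JSON
    );
""")
```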
Such a system would let us keep the benefits of event storage while eliminating the feared performance problems. We do not currently use any time-travel feature, although we will probably want it eventually.
Is this a good approach? Something about it makes me uneasy. For example, if we ever need some kind of time-travel feature, not having periodic snapshots within each aggregate's event stream would be a performance disaster: every historical state would have to be rebuilt by replaying the stream from the beginning. Of course, we could keep both the latest snapshot per aggregate instance and periodic snapshots within the event streams.
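To make the time-travel concern concrete, a sketch of rehydration against the schema above (apply_event stands in for our domain logic and is a placeholder):

```python
import json

def state_as_of(conn, aggregate_id: str, target_version: int) -> dict:
    """Rebuild an aggregate's state as of target_version by full replay.

    Without intermediate snapshots in the stream, this always starts
    from event 1, which is the worry for long-lived aggregates.
    """
    state: dict = {}
    rows = conn.execute(
        "SELECT payload FROM events"
        " WHERE aggregate_id = ? AND version <= ?"
        " ORDER BY version",
        (aggregate_id, target_version),
    )
    for (payload,) in rows:
        state = apply_event(state, json.loads(payload))  # domain-specific
    return state
```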
If we decide to go this route, should we update the aggregate's snapshot in the same transaction that appends its new events, or append the events first and update the snapshot afterwards, eventually?
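If the answer is "same transaction", the write path would look roughly like this under the illustrative schema above (again a sketch, not our actual code; the new state is computed by domain logic before the call):

```python
import json

def append_and_snapshot(conn, aggregate_id, expected_version,
                        new_events, new_state):
    """Append events and refresh the latest-state snapshot atomically."""
    with conn:  # one transaction: commits on success, rolls back on error
        version = expected_version
        for event in new_events:
            version += 1
            conn.execute(
                "INSERT INTO events (aggregate_id, version, payload)"
                " VALUES (?, ?, ?)",
                (aggregate_id, version, json.dumps(event)),
            )
        conn.execute(
            "INSERT INTO streams (aggregate_id, version) VALUES (?, ?)"
            " ON CONFLICT(aggregate_id) DO UPDATE"
            " SET version = excluded.version",
            (aggregate_id, version),
        )
        # Snapshot is written in the same transaction, so it can
        # never lag behind the event stream.
        conn.execute(
            "INSERT INTO snapshots (aggregate_id, version, state)"
            " VALUES (?, ?, ?)"
            " ON CONFLICT(aggregate_id) DO UPDATE"
            " SET version = excluded.version, state = excluded.state",
            (aggregate_id, version, json.dumps(new_state)),
        )
```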
What are the disadvantages of this approach? Has anyone tried something like this?