Reliable work collections

I need to store a large amount of data in reliable dictionaries in Service Fabric. We implement the event repository as a series of reliable dictionaries, so every event emitted by a domain aggregate ends up in a store. I would like to know the performance difference between the following two scenarios:

  • use one (very large) reliable dictionary to store all events for a specific aggregate type: this leads to a small number of dictionaries, each containing millions of events
  • use a small reliable dictionary per aggregate instance: this leads to a large number of small dictionaries (millions, I think), each containing only a handful of events

Given state replication and read/write performance, which approach would be the most efficient?

1 answer

It looks like you should be using Reliable Actors; you can have millions of actors, each holding its own data.
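A minimal sketch of the actor-per-aggregate approach, using the Service Fabric Reliable Actors API. The names (`EventStoreActor`, `AppendAsync`) and the string-list state are illustrative assumptions, not from the question, and the code needs the `Microsoft.ServiceFabric.Actors` package plus a running cluster:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

public interface IEventStoreActor : IActor
{
    Task AppendAsync(string eventData);
    Task<List<string>> GetEventsAsync();
}

// One actor instance per aggregate; the runtime persists and replicates its
// state, and distributes millions of actor ids across the cluster for you.
[StatePersistence(StatePersistence.Persisted)]
internal class EventStoreActor : Actor, IEventStoreActor
{
    public EventStoreActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId) { }

    public async Task AppendAsync(string eventData)
    {
        // Load (or create) this aggregate's event list, append, and save.
        var events = await this.StateManager.GetOrAddStateAsync("events", new List<string>());
        events.Add(eventData);
        await this.StateManager.SetStateAsync("events", events);
    }

    public Task<List<string>> GetEventsAsync()
        => this.StateManager.GetOrAddStateAsync("events", new List<string>());
}
```

With this layout, the "small dictionary per aggregate instance" scenario falls out naturally: each actor's state is small, and placement across nodes is handled by actor-id hashing rather than by you.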

If you need to read a lot of summary information from all your actors, see https://github.com/Azure-Samples/service-fabric-dotnet-data-aggregation/blob/master/README.md

Here are my thoughts if you want to go with stateful services instead: for the first scenario you will have to use partitioning; for the second you would need to create several data services so that your data is distributed across the nodes.
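For the first scenario, the large dictionary would live in a partitioned stateful service, with each partition holding a slice of the data. A rough sketch, assuming a hypothetical `EventStoreService` and a composite key so one aggregate's events sort together (the caller routes to a partition, e.g. by hashing the aggregate id into the service's Int64 partition range):

```csharp
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class EventStoreService : StatefulService
{
    public EventStoreService(StatefulServiceContext context) : base(context) { }

    public async Task AppendAsync(string aggregateId, long sequence, byte[] eventData)
    {
        // Each partition has its own copy of this named dictionary, so the
        // "one very large dictionary" is really N per-partition dictionaries.
        var events = await this.StateManager
            .GetOrAddAsync<IReliableDictionary<string, byte[]>>("events");

        using (var tx = this.StateManager.CreateTransaction())
        {
            // Zero-padded sequence keeps an aggregate's events in order
            // under the dictionary's ordered key enumeration.
            await events.AddAsync(tx, $"{aggregateId}:{sequence:D19}", eventData);
            await tx.CommitAsync();
        }
    }
}
```

Writes are replicated to the partition's secondaries on commit, so write latency grows with replica count, while reads from the primary stay local to one partition.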



Source: https://habr.com/ru/post/1680713/
