Processing a huge number of records and horizontal scaling

First, let me explain what our data looks like:

Right now we use MongoDB: one collection with 10 fields and one composite index. The collection receives a very large number of writes and relatively few reads.
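
For concreteness, a minimal sketch of the setup described above using pymongo; the database, collection, and field names are placeholders of mine, since the post does not give them:

```python
from pymongo import MongoClient

# Placeholder names: the post only says "one collection with 10 fields
# and one composite index".
coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]

# A composite (compound) index over two assumed fields.
coll.create_index([("sensor_id", 1), ("ts", -1)])
```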

Our experience with MongoDB so far:

We write data in chunks of 1000 records, and since MongoDB supports bulk writes, this speeds up the process considerably: we were able to write 60 thousand records/s without an index and 45 thousand records/s with the index.
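
As an illustration, a hedged sketch of such a chunked write using pymongo's bulk insert; the document shape is my assumption:

```python
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["mydb"]["events"]

# One chunk of 1000 records sent to the server in a single round trip;
# this batching is what the throughput figures above rely on.
batch = [{"sensor_id": i % 100, "value": i, "ts": i} for i in range(1000)]
coll.insert_many(batch, ordered=False)  # ordered=False lets the server apply them in parallel
```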

And what is our problem?

It does not scale well horizontally for writes. MongoDB has sharding, but sharding mainly helps reads and truly large datasets. Since we delete old data from the collection, data volume is not our main problem; writes are. And shards actually slow writing down, as far as we can tell, because you lose the benefit of chunked writes: for each record, mongos has to decide which shard to put it on. In fact, even around 20 thousand records/s would be fine if writes scaled horizontally, but the more shards you add to the cluster, the slower writes get, so sharding (as we see it) is not much of a solution.
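
One mitigation often suggested for this (not something the post itself tried) is a hashed shard key, which spreads consecutive inserts across shards instead of funneling them into one hot chunk; a sketch with placeholder names, assuming the client connects to a mongos:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # must point at a mongos, not a shard

client.admin.command("enableSharding", "mydb")
# Hashing the key distributes writes evenly across shards, at the cost
# of turning range queries on that key into scatter-gather reads.
client.admin.command("shardCollection", "mydb.events",
                     key={"sensor_id": "hashed"})
```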

My questions:

  • Are we misusing mongo sharding? Perhaps something about our chunked (1000-record) writes is wrong. Could that be the problem?

  • Is there another NoSQL database better suited to this kind of write-heavy workload?

  • Any other advice or alternative approaches are welcome.


# Answers (2)

Take a look at Riak. It has no master node, so any node can accept writes, and adding a Riak node increases write capacity roughly linearly. Riak is designed for exactly this kind of write-heavy, horizontally scaled workload.

http://basho.com/riak/
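
For reference, a minimal write with the Riak Python client; an untested sketch with placeholder names:

```python
import riak  # the official (Basho-era) Python client

# Any node can coordinate a write, so clients need no single master.
client = riak.RiakClient(pb_port=8087)
bucket = client.bucket("events")
bucket.new("record-1", data={"sensor_id": 1, "value": 42}).store()
```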


Source: https://habr.com/ru/post/1525578/

