Architecture for high-volume data logging: DB or file?

I am working on a Python application that needs to scale to about 150 log records per second, spread across approximately 50 different sources.

Is MongoDB a good candidate for this? I am torn between writing to a database and simply writing a log file per source and processing the files separately.

Any other suggestions for recording large volumes of data?

1 answer

I would say that MongoDB is very well suited for collecting logs, because:

  • MongoDB has very fast writes (see the write sketch after this list).
  • Logs are usually not critical, so losing a few of them in the event of a server failure is acceptable. This means you can run MongoDB without journaling to avoid write overhead.
  • You can also use sharding to increase write throughput, and at the same time move the oldest logs out to a separate collection or to the file system.
  • You can easily export data from the database to JSON/CSV.
  • Once everything is in the database, you can query it to find the log entries you need.

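Below is a minimal sketch of what the write path could look like from Python with pymongo, assuming a local mongod. The database and collection names ("logging", "events"), the record fields ("source", "ts", "msg"), and the log_batch helper are illustrative, not from the original post; the unacknowledged write concern (w=0) simply mirrors the point above about trading a little durability for write speed.

    # Minimal write-path sketch, assuming pymongo and a local mongod.
    # All names here are illustrative.
    from datetime import datetime, timezone

    from pymongo import MongoClient, WriteConcern

    client = MongoClient("mongodb://localhost:27017")

    # w=0 ("fire and forget"): the driver does not wait for acknowledgement,
    # trading a little durability for write throughput.
    events = client["logging"].get_collection(
        "events", write_concern=WriteConcern(w=0)
    )

    def log_batch(source_id, messages):
        """Insert one source's pending log records in a single round trip."""
        now = datetime.now(timezone.utc)
        docs = [{"source": source_id, "ts": now, "msg": m} for m in messages]
        events.insert_many(docs, ordered=False)  # unordered inserts are faster

    # ~150 records/second spread over ~50 sources is comfortably handled
    # if each source buffers a few records and inserts them as one batch.
    log_batch("source-07", ["temp=21.4", "temp=21.5", "temp=21.3"])
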
So I find that MongoDB is a good fit for things like logs. You do not need to manage a large number of log files on the file system; MongoDB does that for you.
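As a sketch of the query and retention side (same illustrative collection and field names as above): a compound index keeps "all logs for one source, newest first" queries fast, and an optional TTL index is one simple way to keep only recent entries in the hot collection; moving old logs to a separate collection or to the file system, as suggested above, would be an alternative to expiring them.

    # Minimal query/retention sketch for the same illustrative collection.
    from datetime import datetime, timedelta, timezone

    from pymongo import ASCENDING, DESCENDING, MongoClient

    events = MongoClient("mongodb://localhost:27017")["logging"]["events"]

    # Compound index so per-source, time-ordered queries stay fast.
    events.create_index([("source", ASCENDING), ("ts", DESCENDING)])

    # Optional TTL index: mongod deletes documents about 7 days after "ts".
    events.create_index("ts", expireAfterSeconds=7 * 24 * 3600)

    # Last 100 records from one source within the past hour.
    since = datetime.now(timezone.utc) - timedelta(hours=1)
    cursor = (
        events.find({"source": "source-07", "ts": {"$gte": since}})
        .sort("ts", DESCENDING)
        .limit(100)
    )
    for doc in cursor:
        print(doc["ts"], doc["msg"])

For the JSON/CSV export mentioned above, the stock mongoexport tool can dump a collection without any custom code (for example with --type=csv and a --fields list).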

