I have seen many companies use MongoDB to store logs. Its schema flexibility suits application logs well, since their schema tends to change over time. The Capped Collections feature is also really useful: it automatically discards the oldest entries, so the working set stays in memory.
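For illustration, a capped collection can be created like this with pymongo (a minimal sketch; the collection name and the 100 MB cap are my assumptions, not values from the answer):

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["logging"]

# A capped collection is a fixed-size circular buffer: once it is full,
# the oldest documents are overwritten, so old logs age out automatically.
db.create_collection("app_logs", capped=True, size=100 * 1024 * 1024)
```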
People usually aggregate logs with the normal group command or with MapReduce, but neither is very fast. In particular, MongoDB's MapReduce runs in a single thread, and its JavaScript overhead is huge. The new aggregation framework can solve this problem.
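As a rough sketch, a query that would otherwise need MapReduce, such as counting log entries per level, can be expressed as an aggregation pipeline (the "level" field and values are assumptions for illustration):

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["logging"]

pipeline = [
    {"$match": {"level": {"$in": ["warn", "error"]}}},    # filter first
    {"$group": {"_id": "$level", "count": {"$sum": 1}}},  # then count per level
]
for row in db["app_logs"].aggregate(pipeline):
    print(row["_id"], row["count"])
```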
Another problem is that, although MongoDB's inserts are fire-and-forget by default, issuing a large number of insert commands causes severe write lock contention. This can hurt application performance and block readers that are collecting or filtering the stored logs.
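A hedged pymongo sketch of both behaviors: a write concern of w=0 reproduces the old fire-and-forget default, and batching many documents into one call reduces round trips and how often the write lock is taken (names and counts are assumptions):

```python
from pymongo import MongoClient, WriteConcern

db = MongoClient("mongodb://localhost:27017")["logging"]

# w=0 sends unacknowledged ("fire and forget") writes.
logs = db.get_collection("app_logs", write_concern=WriteConcern(w=0))

# One batched call instead of 1000 individual inserts.
logs.insert_many([{"level": "info", "msg": "event %d" % i} for i in range(1000)])
```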
One solution is to use a log collector daemon such as Fluentd, Logstash, or Flume. These daemons run on each application node and receive logs from the application processes.

They buffer the logs and write the data asynchronously to other systems such as MongoDB / PostgreSQL / etc. The writes are done in batches, so they are much more efficient than writing directly from the applications. This link describes how to put logs into Fluentd from Perl.
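The linked example is in Perl; a comparable sketch in Python with the fluent-logger package might look like this (the tag "app" and port 24224, Fluentd's conventional forward port, are assumptions for illustration):

```python
from fluent import sender

# Sends events to the local Fluentd daemon, which buffers them and
# flushes batches to the configured backend (MongoDB, PostgreSQL, ...).
logger = sender.FluentSender("app", host="localhost", port=24224)
logger.emit("log", {"level": "info", "msg": "user signed in"})  # tag: app.log
logger.close()
```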