Writing SLF4J logs to a file vs. DB vs. Solr

I need some suggestions regarding SLF4J logging.

We are currently using SLF4J (with the log4j binding) in our Java web application, with a simple ConsoleAppender. The next step is to research where we should store the logs.

Our application processes about 100,000 messages per day, and each message generates about 60-100 log lines. Our goal is to quickly locate failed messages (by messageId) and determine the causes of the failure.
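One common way to make every log line traceable by messageId is SLF4J's MDC (Mapped Diagnostic Context). The sketch below is illustrative (the class and method names are hypothetical, not from the original question); it tags all logging done on the current thread with the id:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class MessageProcessor {
    private static final Logger log = LoggerFactory.getLogger(MessageProcessor.class);

    public void process(String messageId, String payload) {
        // Every log line emitted on this thread now carries the messageId,
        // provided the appender's pattern includes %X{messageId}.
        MDC.put("messageId", messageId);
        try {
            log.info("processing started");
            // ... business logic ...
            log.info("processing finished");
        } catch (RuntimeException e) {
            log.error("processing failed", e);
            throw e;
        } finally {
            // Clear the id so it does not leak to the next message on this pooled thread.
            MDC.remove("messageId");
        }
    }
}
```

With the id in the MDC, "find all lines for a failed message" becomes a single search for that id, whichever storage backend is chosen.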

My question is: which of the following is a good place to store our logs?

  • File(s)
  • DB
  • Solr

Thanks.

2 answers

Consider moving from log4j to Logback, the native implementation of the SLF4J API. Logback has an extensive list of appenders.
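For example, a minimal logback.xml using a time-based rolling file appender (the file paths and pattern here are illustrative, not from the original answer); including `%X{messageId}` in the pattern puts the message id on every line so failures can be found with a simple grep:

```xml
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- roll to a new file each day, keep 30 days -->
      <fileNamePattern>logs/app.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %X{messageId} %logger - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```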

I think your question is really about how to make your logs searchable. The answer depends on what you are looking for.

  • For simple applications, I just use a rolling file appender and grep the files for the messages that interest me.
  • More complex applications will additionally log messages to a database.
  • There are currently no Solr appenders available for log4j or logback. However, one would be easy to write using the SolrJ API.
  • For monitoring log messages there is Lilith, a remote GUI for log events. I don't know how well it scales, but it is certainly interesting for demos and simple monitoring.
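A hedged sketch of what such a Solr appender could look like: a custom Logback `AppenderBase` that indexes each event via SolrJ. The core URL, field names, and use of `HttpSolrClient` are assumptions for illustration; buffering/batching (which you would want at this volume) is omitted for brevity:

```java
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrAppender extends AppenderBase<ILoggingEvent> {
    private SolrClient solr;
    // Hypothetical core name; configure via logback.xml in a real setup.
    private String url = "http://localhost:8983/solr/logs";

    @Override
    public void start() {
        solr = new HttpSolrClient.Builder(url).build();
        super.start();
    }

    // Separated out so the event-to-document mapping can be tested without a server.
    static SolrInputDocument toDocument(ILoggingEvent event) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("timestamp", event.getTimeStamp());
        doc.addField("level", event.getLevel().toString());
        doc.addField("logger", event.getLoggerName());
        doc.addField("message", event.getFormattedMessage());
        String id = event.getMDCPropertyMap().get("messageId");
        if (id != null) {
            doc.addField("messageId", id);
        }
        return doc;
    }

    @Override
    protected void append(ILoggingEvent event) {
        try {
            solr.add(toDocument(event));
        } catch (Exception e) {
            addError("failed to index log event", e);
        }
    }
}
```

Querying `messageId:<id> AND level:ERROR` in Solr would then return the failure trail for a given message.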

Update

As Sebastien suggested, there is also a Graylog2 appender for Logback. It is now available on Maven Central:

    <dependency>
      <groupId>me.moocar</groupId>
      <artifactId>logback-gelf</artifactId>
      <version>0.9.6p2</version>
    </dependency>

Of course, this depends on having a Graylog2 server installed.
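Wiring it into logback.xml might look roughly like the sketch below; the appender class name and property names are assumptions recalled from the logback-gelf 0.9.x README and should be verified against the project's documentation (the host is a placeholder):

```xml
<appender name="GELF" class="me.moocar.logbackgelf.GelfAppender">
  <!-- placeholder host; GELF travels over UDP to Graylog2's input port -->
  <graylog2ServerHost>graylog.example.com</graylog2ServerHost>
  <graylog2ServerPort>12201</graylog2ServerPort>
</appender>
```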


The servlet specification provides no standard way to obtain a file system location for storing logs.

Therefore, the most portable long-term solution is to simply use java.util.logging (via the SLF4J binding) and let the web container manage the resulting logs.
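If you go that route, the SLF4J API can be routed to java.util.logging with the official slf4j-jdk14 binding (the version below is illustrative; use whatever matches your slf4j-api version):

```xml
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-jdk14</artifactId>
  <version>1.7.36</version>
</dependency>
```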

You have roughly 10 million log entries per day, so you need to be careful with resources: writing to a database is far more expensive than appending to a file. I would advise you to benchmark the approaches to find out whether you get the performance you need before considering anything other than flat files rotated nightly.


Source: https://habr.com/ru/post/907270/
