Log4j output to HDFS

Has anyone tried to write the log4j log file directly to the Hadoop Distributed File System (HDFS)?

If so, how? I suspect I will have to create a custom Appender for it.

Is that right? My requirement is to write the logs to a file at specific intervals and query that data at a later stage.

+6
2 answers

I recommend using Apache Flume for this task. There is a Flume appender for Log4j: you send your log events to a Flume agent, and the agent writes them to HDFS. The nice thing about this approach is that Flume becomes the single point of communication with HDFS, so it is easy to add new data sources without writing the HDFS interaction code over and over again.
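For illustration, a minimal log4j.properties fragment wiring up Flume's Log4jAppender could look like the sketch below. The hostname and port are placeholders for wherever your Flume agent's Avro source is listening; the agent itself then needs an HDFS sink configured separately.

```properties
# Send everything at INFO and above to a local Flume agent
log4j.rootLogger=INFO, flume

# Flume's log4j appender (ships in the flume-ng-log4jappender artifact)
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=41414
# Don't let a down Flume agent break the application's logging
log4j.appender.flume.UnsafeMode=true
```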

+8

Standard log4j (1.x) does not support writing to HDFS, but fortunately log4j is very easy to extend. I wrote an HDFS FileAppender to write logs to MapRFS (which is Hadoop-compatible); the file name can be something like "maprfs:///projects/example/root.log". It works well in our projects. I extracted the relevant part of the application code and pasted it below. The fragment may not compile as-is, but it should give you an idea of how to write your own appender. Essentially, you only need to extend org.apache.log4j.AppenderSkeleton and implement append(), close(), and requiresLayout(). For more detail you can also download the log4j 1.2.17 source code and see how AppenderSkeleton is defined; it will give you all the information you need. Good luck!

Note: an alternative way to write to HDFS is to mount HDFS (for example, via an NFS gateway or FUSE) on all of your nodes, so that you can write logs exactly the same way you write to a local directory. That may well be the better practice.

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.Layout;
import org.apache.hadoop.conf.Configuration;

import java.io.*;

public class HDFSFileAppender extends AppenderSkeleton {

    private String filepath = null;

    public HDFSFileAppender(String filePath, Layout layout) {
        this.filepath = filePath;
        this.layout = layout; // the 'layout' field is inherited from AppenderSkeleton
    }

    @Override
    protected void append(LoggingEvent event) {
        String log = this.layout.format(event);
        try {
            InputStream logStream = new ByteArrayInputStream(log.getBytes());
            writeToFile(filepath, logStream, false);
            logStream.close();
        } catch (IOException e) {
            System.err.println("Exception when appending log to log file: " + e.getMessage());
        }
    }

    @Override
    public void close() {}

    @Override
    public boolean requiresLayout() {
        return true;
    }

    // Here we write to HDFS.
    // filePathStr: the file path in MapR, like "maprfs:///projects/aibot/1.log"
    private boolean writeToFile(String filePathStr, InputStream inputStream, boolean overwrite)
            throws IOException {
        boolean success = false;
        int bytesRead = -1;
        byte[] buffer = new byte[64 * 1024 * 1024];

        Configuration conf = new Configuration();
        org.apache.hadoop.fs.FileSystem fs = org.apache.hadoop.fs.FileSystem.get(conf);
        org.apache.hadoop.fs.Path filePath = new org.apache.hadoop.fs.Path(filePathStr);
        org.apache.hadoop.fs.FSDataOutputStream fsDataOutputStream = null;
        if (overwrite || !fs.exists(filePath)) {
            // create(path, overwrite, bufferSize, replication, blockSize)
            fsDataOutputStream = fs.create(filePath, overwrite, 512, (short) 3, 64 * 1024 * 1024);
        } else {
            // append to the existing file
            fsDataOutputStream = fs.append(filePath, 512);
        }
        while ((bytesRead = inputStream.read(buffer)) != -1) {
            fsDataOutputStream.write(buffer, 0, bytesRead);
        }
        fsDataOutputStream.close();
        success = true;
        return success;
    }
}
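Because this appender takes constructor arguments, a properties file cannot instantiate it without adding setters; attaching it programmatically is the simplest route. A minimal sketch, assuming the class above plus the log4j 1.x and hadoop-client jars are on the classpath and a cluster is reachable (so it is not runnable standalone; the path and pattern are illustrative):

```java
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class HDFSLoggingExample {
    public static void main(String[] args) {
        // Layout and target path are placeholders; adjust to your cluster.
        PatternLayout layout = new PatternLayout("%d{ISO8601} %-5p %c - %m%n");
        HDFSFileAppender appender =
                new HDFSFileAppender("maprfs:///projects/example/root.log", layout);

        Logger root = Logger.getRootLogger();
        root.addAppender(appender);
        root.info("This line ends up in the HDFS file, one append per event");
    }
}
```

Note that opening a fresh FSDataOutputStream for every log event, as the appender above does, is expensive; a production version would keep the stream open and flush periodically.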
+1

Source: https://habr.com/ru/post/944449/
