Standard log4j (1.x) does not support writing to HDFS, but fortunately log4j is very easy to extend. I wrote an HDFS FileAppender that writes logs to MapRFS (which is Hadoop-compatible); the file name can be something like "maprfs:///projects/example/root.log". This works well in our projects. I have extracted part of the application code and pasted it below. The fragments may not compile as-is, but they should give you an idea of how to write your own appender. Essentially, you only need to extend org.apache.log4j.AppenderSkeleton and implement append(), close(), and requiresLayout(). For more details, you can also download the log4j 1.2.17 source code and see how AppenderSkeleton is defined; that will give you all the information you need. Good luck!
Note: an alternative way to write to HDFS is to mount the cluster filesystem on all of your nodes (for MapR this is typically done via NFS), so that you can write logs the same way you would write to a local directory. In practice this is probably the better approach.
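For example, with the cluster filesystem NFS-mounted on every node, an ordinary log4j FileAppender is enough and no custom code is needed. A configuration sketch (the mount point /mapr/my.cluster.com below is just an assumed example; adjust it to your setup):

# log4j.properties sketch for the "write as a local file" alternative
log4j.rootLogger=INFO, clusterfile
log4j.appender.clusterfile=org.apache.log4j.FileAppender
log4j.appender.clusterfile.File=/mapr/my.cluster.com/projects/example/root.log
log4j.appender.clusterfile.layout=org.apache.log4j.PatternLayout
log4j.appender.clusterfile.layout.ConversionPattern=%d %-5p %c - %m%n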
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import org.apache.log4j.Layout;
import org.apache.hadoop.conf.Configuration;
import java.io.*;

public class HDFSFileAppender extends AppenderSkeleton {

    private String filepath = null;
    private Layout layout = null;

    public HDFSFileAppender(String filePath, Layout layout) {
        this.filepath = filePath;
        this.layout = layout;
    }

    @Override
    protected void append(LoggingEvent event) {
        // Format the event and append the resulting line to the file on MapRFS/HDFS
        String log = this.layout.format(event);
        try {
            InputStream logStream = new ByteArrayInputStream(log.getBytes());
            writeToFile(filepath, logStream, false);
            logStream.close();
        } catch (IOException e) {
            System.err.println("Exception when appending log to log file: " + e.getMessage());
        }
    }

    @Override
    public void close() {}

    @Override
    public boolean requiresLayout() {
        return true;
    }
}
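The writeToFile() helper is not part of the excerpt above. A minimal sketch of what it could look like, using the standard Hadoop FileSystem API (the implementation in our project may differ), is below; it additionally needs imports for org.apache.hadoop.fs.FileSystem, org.apache.hadoop.fs.Path and org.apache.hadoop.io.IOUtils:

// Illustrative only: opens the target file on MapRFS/HDFS and copies the formatted log line into it.
// Note that append() requires a filesystem that supports appends (MapRFS does; plain HDFS needs append enabled).
// Opening the FileSystem for every event is not efficient, but it mirrors the per-event call in append().
private void writeToFile(String filepath, InputStream content, boolean overwrite) throws IOException {
    Configuration conf = new Configuration();
    Path path = new Path(filepath);               // e.g. "maprfs:///projects/example/root.log"
    FileSystem fs = path.getFileSystem(conf);
    OutputStream out = (!overwrite && fs.exists(path)) ? fs.append(path) : fs.create(path, overwrite);
    IOUtils.copyBytes(content, out, conf, false); // false: do not let copyBytes close the streams
    out.close();
}

Because this appender has no no-argument constructor and no property setters, it cannot be configured from log4j.properties; attach it programmatically instead, for example:

Logger.getRootLogger().addAppender(
        new HDFSFileAppender("maprfs:///projects/example/root.log",
                             new PatternLayout("%d %-5p %c - %m%n")));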