I run this command:
hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp -input "<input dir>" -output "<output dir>" -mapper "grep 20151026" -reducer "wc -l"
where <input dir> is a directory containing many Avro files.
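For reference, the mapper and reducer here are ordinary shell commands, so the job is roughly the distributed equivalent of the local pipeline below (an illustrative sketch only; the input/ path is hypothetical):

    # rough local equivalent of the streaming job: print the records
    # that contain 20151026, then count the matching lines
    grep 20151026 input/* | wc -l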
When I submit the job, I get this error:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.apache.hadoop.hdfs.protocol.DatanodeID.updateXferAddrAndInvalidateHashCode(DatanodeID.java:287)
    at org.apache.hadoop.hdfs.protocol.DatanodeID.<init>(DatanodeID.java:91)
    at org.apache.hadoop.hdfs.protocol.DatanodeInfo.<init>(DatanodeInfo.java:136)
    at org.apache.hadoop.hdfs.protocol.DatanodeInfo.<init>(DatanodeInfo.java:122)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:633)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:793)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convertLocatedBlock(PBHelper.java:1252)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1270)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1413)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1524)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.convert(PBHelper.java:1533)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:…)
    at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.getListing(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1969)
    at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.hasNextNoFilter(DistributedFileSystem.java:…)
    at org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.hasNext(DistributedFileSystem.java:863)
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:267)
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:…)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:624)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:616)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:492)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
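As far as I can tell, every frame in the trace is still in the job-submission path (FileInputFormat.getSplits -> DFSClient.listPaths), so the client JVM seems to run out of heap while listing the input directory, before any mapper starts. Below is a minimal sketch of what I assume might help, raising the submitting JVM's heap via HADOOP_CLIENT_OPTS (the 2g value is an arbitrary example):

    # assumption: the OOM is in the local client JVM that submits the job,
    # not in a task; the bin/hadoop script passes HADOOP_CLIENT_OPTS to that JVM
    export HADOOP_CLIENT_OPTS="-Xmx2g"
    hadoop jar hadoop-streaming.jar -D stream.tmpdir=/tmp \
        -input "<input dir>" -output "<output dir>" \
        -mapper "grep 20151026" -reducer "wc -l"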
How can I solve this problem?