The "launching YARN container failed" exception and mapred-site.xml configuration

I have 7 nodes in my Hadoop cluster [8 GB RAM and 4 vCPUs for each node]: 1 NameNode + 6 DataNodes.

EDIT-1 @ARNON: I followed the link, did the calculations according to the hardware configuration of my nodes, and added the properties in question to the mapred-site.xml and yarn-site.xml files. Still, my application fails with the same exception.

The MapReduce job has 34 input splits with a block size of 128 MB.

mapred-site.xml has the following properties:

    mapreduce.framework.name = yarn
    mapred.child.java.opts = -Xmx2048m
    mapreduce.map.memory.mb = 4096
    mapreduce.map.java.opts = -Xmx2048m
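Spelled out in mapred-site.xml's XML syntax, that is (values exactly as listed above; note the map container is 4096 MB while the JVM heap inside it is only 2048 MB):

    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
      </property>
      <property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx2048m</value>
      </property>
      <property>
        <name>mapreduce.map.memory.mb</name>
        <value>4096</value>
      </property>
      <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx2048m</value>
      </property>
    </configuration>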

yarn-site.xml has the following properties:

    yarn.resourcemanager.hostname = hadoop-master
    yarn.nodemanager.aux-services = mapreduce_shuffle
    yarn.nodemanager.resource.memory-mb = 6144
    yarn.scheduler.minimum-allocation-mb = 2048
    yarn.scheduler.maximum-allocation-mb = 6144
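And the same in yarn-site.xml's XML syntax (again the values from above; 6144 MB of each node's 8 GB is offered to YARN containers):

    <configuration>
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-master</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>6144</value>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>2048</value>
      </property>
      <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>6144</value>
      </property>
    </configuration>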

EDIT-2 @ARNON: Setting yarn.scheduler.minimum-allocation-mb to 4096 leaves all the map tasks stuck in a pending state, and setting it to 3072 crashes with the following:

 Exception from container-launch: ExitCodeException exitCode=134: /bin/bash: line 1: 3876 Aborted (core dumped) /usr/lib/jvm/java-7-openjdk-amd64/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx8192m -Djava.io.tmpdir=/tmp/hadoop-ubuntu/nm-local-dir/usercache/ubuntu/appcache/application_1424264025191_0002/container_1424264025191_0002_01_000011/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 192.168.0.12 50842 attempt_1424264025191_0002_m_000005_0 11 > /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stdout 2> /home/ubuntu/hadoop/logs/userlogs/application_1424264025191_0002/container_1424264025191_0002_01_000011/stderr 
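Exit code 134 is 128 + 6 (SIGABRT), i.e. the JVM aborted and dumped core rather than being killed by YARN's memory checks. Note also that the launched command line shows -Xmx8192m, not the -Xmx2048m configured above, so something appears to be overriding mapreduce.map.java.opts. The abort details end up in the container's stderr; with log aggregation enabled they can be fetched with:

    yarn logs -applicationId application_1424264025191_0002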

How can this be avoided? Any help is appreciated.

Is it possible to limit the number of containers on Hadoop nodes?
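For context, my rough understanding of how the scheduler settings above bound the container count per node (assuming memory is the only limiting resource):

    containers per node = floor(yarn.nodemanager.resource.memory-mb / allocation size)
                        = floor(6144 / 2048) = 3   with the original settings
                        = floor(6144 / 4096) = 1   with EDIT-2; that single container goes
                                                   to the ApplicationMaster, so the map
                                                   tasks never get scheduled and sit pending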

1 answer

It seems you are allocating too much memory to your tasks (even without looking at all the configurations): 8 GB of RAM per node and 8 GB of heap for each map task is simply too much. Try lower allocations, say 2 GB containers with a 1 GB heap or so.
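A minimal sketch of what that could look like in mapred-site.xml (hypothetical values following the 2 GB container / 1 GB heap suggestion, untested against your particular job):

    <!-- 2 GB containers with a 1 GB heap, leaving ~1 GB of non-heap headroom -->
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>2048</value>
    </property>
    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx1024m</value>
    </property>
    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>2048</value>
    </property>
    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx1024m</value>
    </property>

With yarn.scheduler.minimum-allocation-mb lowered back to 1024 or 2048, each 6144 MB node could then run three such containers at once.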


Source: https://habr.com/ru/post/985065/
