I am running a Spark Streaming application on YARN. It worked well for several days, but then it failed with the following error from the YARN application list:
Application application_1449727361299_0049 failed 2 times due to AM Container for appattempt_1449727361299_0049_000002 exited with exitCode: -104
For more detailed output, check application tracking page: https://sccsparkdev03:26001/cluster/app/application_1449727361299_0049 Then, click on links to logs of each attempt.
Diagnostics: Container [pid=25317,containerID=container_1449727361299_0049_02_000001] is running beyond physical memory limits. Current usage: 3.5 GB of 3.5 GB physical memory used; 5.3 GB of 8.8 GB virtual memory used. Killing container.
And here is my memory configuration:
spark.driver.memory = 3g
spark.executor.memory = 3g
mapred.child.java.opts = -Xms1024M -Xmx3584M
mapreduce.map.java.opts = -Xmx2048M
mapreduce.map.memory.mb = 4096
mapreduce.reduce.java.opts = -Xmx3276M
mapreduce.reduce.memory.mb = 4096
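If I understand correctly (and I may be wrong), YARN enforces the limit on the whole container, which for the Spark driver/AM is spark.driver.memory plus a YARN memory overhead, not just the JVM heap; so native and off-heap memory count against the 3.5 GB as well. One thing I have been considering is raising the overhead settings. A sketch of what that might look like in spark-defaults.conf (the 1024 values are guesses I have not tested, not recommendations):

```
# Spark 1.x on YARN: extra off-heap headroom added to the container request.
# The default is reportedly max(384 MB, ~10% of the corresponding memory
# setting); the explicit 1024 MB values below are illustrative guesses.
spark.yarn.driver.memoryOverhead    1024
spark.yarn.executor.memoryOverhead  1024
```

With these set, the driver container request would grow to roughly 3g + 1g, which should keep a modest amount of off-heap usage under the limit, if my understanding of the accounting is right.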
This OOM error is strange to me, because I do not explicitly cache or store any data in memory; it is a streaming program. Has anyone run into the same problem? Does anyone know what the reason might be?