I am running Spark 1.0.0, connected to a Spark standalone cluster with one master and two slaves. I run wordcount.py via spark-submit; it reads data from HDFS and writes the results back to HDFS. So far everything works, and the results are correctly written to HDFS. But what bothers me is that when I check the stdout of each worker, it is empty. I don't know whether it is supposed to be empty. Meanwhile, I got the following in stderr:
Stderr log page for app-20140704174955-0002:
Spark Executor Command: "java" "-cp" "::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar:/usr/local/hadoop/conf" "-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://spark@master:54477/user/CoarseGrainedScheduler" "0" "slave2" "1" "akka.tcp://sparkWorker@slave2:41483/user/Worker" "app-20140704174955-0002"
========================================
14/07/04 17:50:14 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@slave2:33758] -> [akka.tcp://spark@master:54477] disassociated! Shutting down.
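For reference, the script is essentially the standard word count in the Spark 1.0 Python API. A minimal sketch of what it looks like follows; the HDFS paths, host names, and ports here are placeholders, not my actual values:

    from pyspark import SparkContext

    # Plain Spark 1.0-style word count: read text from HDFS, count the
    # words, and write the (word, count) pairs back to HDFS.
    # (Paths and host names below are placeholders.)
    sc = SparkContext(appName="WordCount")

    lines = sc.textFile("hdfs://master:9000/input/data.txt")
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.saveAsTextFile("hdfs://master:9000/output/wordcount")

    sc.stop()

It is submitted to the standalone master with something along the lines of:

    spark-submit --master spark://master:7077 wordcount.py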