On a Spark 1.0.0 standalone cluster with multiple worker nodes, I try to run the Spark shell from two different computers (as the same Linux user).
The documentation says: "By default, applications submitted to the standalone mode cluster will run in FIFO (first-in-first-out) order, and each application will try to use all available nodes."
The number of cores per application is capped at 4 of the 8 available (via SPARK_JAVA_OPTS="-Dspark.cores.max=4"). Memory is also limited, so there should be enough for both.
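For reference, a minimal sketch of the equivalent programmatic configuration (the master URL, app name, and executor memory value here are placeholders, not my actual settings):

    import org.apache.spark.{SparkConf, SparkContext}

    // Cap this application's total core usage so the standalone scheduler
    // has cores left over to offer to a second application.
    val conf = new SparkConf()
      .setMaster("spark://master:7077")     // placeholder master URL
      .setAppName("shell-one")              // placeholder app name
      .set("spark.cores.max", "4")          // same limit as -Dspark.cores.max=4
      .set("spark.executor.memory", "4g")   // assumed value; leave memory for the other app

    val sc = new SparkContext(conf)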
However, looking at the Spark Master WebUI, the shell application that was started second always remains in the "WAITING" state until the first one exits. It is assigned 0 cores, with 10G of memory per node (the same as the application that is already running).
Is there a way to run both shells at the same time without using Mesos?