Hi, I have a Spark standalone cluster running locally, i.e. one Spark master process and three Spark slave (worker) processes all on my laptop (the whole cluster on one machine).
I start the master and the slaves simply by running the scripts Spark_Folder/sbin/start-master.sh and Spark_Folder/sbin/start-slave.sh.
However, when I run Spark_Folder/sbin/stop-all.sh, it stops only one master and one slave; since I have three slaves started, after running stop-all.sh I still have two slaves running.
I looked into the script "stop-slaves.sh" and found the following:
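For context, this is roughly what I run (just a sketch; the SPARK_HOME path, master URL, and instance numbers are illustrative, not exactly what I typed):

    # start the standalone master (its URL is then spark://<host>:7077)
    "$SPARK_HOME"/sbin/start-master.sh

    # start three worker instances on the same laptop, each with its own instance number,
    # pointing them at the local master
    for i in 1 2 3; do
      "$SPARK_HOME"/sbin/spark-daemon.sh start org.apache.spark.deploy.worker.Worker "$i" spark://localhost:7077
    done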
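For example, after stop-all.sh this is roughly what jps still shows (the PIDs here are made up, only to illustrate the two leftover workers):

    $ jps
    23456 Worker
    23789 Worker
    24012 Jps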
if [ "$SPARK_WORKER_INSTANCES" = "" ]; then "$sbin"/spark-daemons.sh stop org.apache.spark.deploy.worker.Worker 1 else for ((i=0; i<$SPARK_WORKER_INSTANCES; i++)); do "$sbin"/spark-daemons.sh stop org.apache.spark.deploy.worker.Worker $(( $i + 1 )) done fi
This script seems to stop the workers based on the SPARK_WORKER_INSTANCES number, i.e. it only stops instances numbered 1 through $SPARK_WORKER_INSTANCES. But what if I started a slave using a non-numeric name?
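If I read it correctly, stopping my three workers by hand would look something like this (a sketch assuming they were registered as instance numbers 1 to 3, which may not match how I actually named them):

    # stop worker instances 1..3 one by one on this machine
    for i in 1 2 3; do
      "$SPARK_HOME"/sbin/spark-daemon.sh stop org.apache.spark.deploy.worker.Worker "$i"
    done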
Also, is there any way to shut down the entire Spark cluster with one command? (I know that running "pkill -f spark" will work, though.)
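The brute-force workaround I have in mind is simply killing every process whose command line mentions Spark:

    # kills every process whose full command line matches "spark" (master, workers, and anything else Spark-related)
    pkill -f spark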
Thank you very much.