Worker nodes do not start correctly on Windows

STEP 3: Extract this file using the following command: tar -xf spark-1.3.1-bin-hadoop2.4.tgz
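
A quick sanity check after extraction, assuming the archive was downloaded to F:\ and a tar binary is available on the PATH (on older Windows versions 7-Zip can be used to unpack the .tgz instead; these check commands are my own addition, not part of the original steps):

   cd /d F:\
   tar -xf spark-1.3.1-bin-hadoop2.4.tgz
   dir spark-1.3.1-bin-hadoop2.4\bin

The bin folder should contain spark-class.cmd and spark-submit.cmd, which are used in the later steps.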

STEP 4: Set the following environment variables to create the Spark environment:

   SET HADOOP_HOME=C:\Hadoop
   SET SCALA_HOME=C:\scala
   SET SPARK_EXECUTOR_MEMORY=512m
   SET SPARK_HOME=F:\spark-1.3.1-bin-hadoop2.4
   SET SPARK_MASTER_IP=synclapn2881
   SET SPARK_WORKER_CORES=2
   SET SPARK_WORKER_DIR=F:\work\sparkdata
   SET SPARK_WORKER_INSTANCES=4
   SET SPARK_WORKER_MEMORY=1g
   SET Path=%SPARK_HOME%\bin;%Path%
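
To confirm the variables are visible, the following checks can be run (my own addition; note that SET only affects the current console, so the later steps must run in the same cmd window):

   echo %SPARK_HOME%
   echo %SPARK_MASTER_IP%
   where spark-class

where spark-class should list %SPARK_HOME%\bin\spark-class.cmd once the PATH change has taken effect.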

STEP 5: Launch the master node using the following command: spark-class org.apache.spark.deploy.master.Master
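
For reference, the standalone master also accepts explicit host and port flags, so the same values can be passed on the command line instead of relying only on SPARK_MASTER_IP (the flags below are the Master's standard command-line options; the host name is the one set in step 4):

   spark-class org.apache.spark.deploy.master.Master --host synclapn2881 --port 7077 --webui-port 8080

Once it is up, the master URL (spark://synclapn2881:7077) and its web UI on port 8080 should be reachable from a browser.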

STEP 6: Launch the worker nodes using the following command: spark-class org.apache.spark.deploy.worker.Worker spark://masternode:7077

Note: here masternode is the local host name.

I want 1 master node and 4 worker nodes, which is why I set

   SET SPARK_WORKER_INSTANCES=4

in step 4.
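
For reference, a worker can also be given its resources explicitly on the command line, mirroring the environment variables from step 4 (the flags below are the Worker's standard command-line options; the paths and host are the ones used above):

   spark-class org.apache.spark.deploy.worker.Worker --cores 2 --memory 1g --work-dir F:\work\sparkdata spark://masternode:7077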

Spark Web UI (screenshot):

   Expected result: 4 worker nodes should be created, since I set SPARK_WORKER_INSTANCES to 4.

Thanks in advance.

To get 4 worker nodes, you need to run the command from step 6 four times:

   spark-class org.apache.spark.deploy.worker.Worker spark://masternode:7077

SPARK_WORKER_INSTANCES only takes effect when the launch scripts are used, and those scripts do not work on Windows. There is no single command that starts several workers at once; the start-slave script that does this only works on Linux.
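
A minimal batch sketch of that manual workaround, assuming the environment variables from step 4 are set and the master from step 5 is already running (the loop, the window titles and the file itself, say start-4-workers.cmd, are my own illustration rather than part of the original answer; Spark is expected to move each additional worker's web UI to the next free port on its own):

   @echo off
   rem Hypothetical helper (start-4-workers.cmd): start 4 standalone workers
   rem by hand, each in its own console window, all registering with the same master.
   for /l %%i in (1,1,4) do (
       start "spark-worker-%%i" "%SPARK_HOME%\bin\spark-class.cmd" org.apache.spark.deploy.worker.Worker spark://masternode:7077
   )

Each window then corresponds to one worker, and all four should show up on the master's web UI.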

From the [Spark standalone documentation][1]:
Note: The launch scripts do not currently support Windows. To run a Spark cluster on Windows, start the master and workers by hand.

[1]: https://spark.apache.org/docs/latest/spark-standalone.html#Cluster

Source: https://habr.com/ru/post/1584027/

