STEP 3: Extract the file using the following command:
tar -xf spark-1.3.1-bin-hadoop2.4.tgz
STEP 4: Set the environment variables using the following commands to configure the Spark environment:
SET HADOOP_HOME=C:\Hadoop
SET SCALA_HOME=C:\scala
SET SPARK_EXECUTOR_MEMORY=512m
SET SPARK_HOME=F:\spark-1.3.1-bin-hadoop2.4
SET SPARK_MASTER_IP=synclapn2881
SET SPARK_WORKER_CORES=2
SET SPARK_WORKER_DIR=F:\work\sparkdata
SET SPARK_WORKER_INSTANCES=4
SET SPARK_WORKER_MEMORY=1g
SET Path=%SPARK_HOME%\bin;%Path%
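A quick way to confirm the variables took effect in the current command prompt is to echo them back (a minimal sketch; the paths shown are the values set above and will differ on your machine):

```shell
REM Verify the Spark-related variables in the current cmd session
echo %SPARK_HOME%
echo %SPARK_WORKER_INSTANCES%
echo %Path%
```

Note that SET only affects the current command prompt window; a new window will not see these values unless they are also defined in the system environment settings.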
STEP 5: Launch the master node using the following command:
spark-class org.apache.spark.deploy.master.Master
STEP 6: Launch the worker nodes using the following command:
spark-class org.apache.spark.deploy.worker.Worker spark://masternode:7077
Note: Replace masternode with your local hostname. Although this runs on a single node, 4 worker instances are created because of the setting SET SPARK_WORKER_INSTANCES=4.
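With the master and workers running, you can point an interactive shell at the cluster to confirm connectivity (a sketch; replace masternode with your actual hostname, i.e. the SPARK_MASTER_IP value set above):

```shell
REM Connect an interactive Spark shell to the standalone master
spark-shell --master spark://masternode:7077
```

If the shell starts and reports the master URL in its startup output, the standalone cluster is accepting applications.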

Expected Result
4 worker nodes are created, as SPARK_WORKER_INSTANCES was set to 4.
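One way to verify that the four workers registered is the master's web UI, which the Spark standalone master serves on port 8080 by default (a sketch; localhost assumes you open it on the master machine itself):

```shell
REM Open the Spark master web UI; the Workers table should list 4 entries
start http://localhost:8080
```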
Advance