I am learning Spark and wanted to run the simplest possible cluster consisting of two physical machines. I have done all the basic setup, and everything seems to be in order. The output of the start script is as follows:
[username@localhost sbin]$ ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.master.Master-1-localhost.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.out
username@192.168.???.??: starting org.apache.spark.deploy.worker.Worker, logging to /home/username/spark-1.6.0-bin-hadoop2.6/logs/spark-username-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
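For context, start-all.sh launches one worker per entry in conf/slaves, so judging from the output above the file presumably contains just these two entries (the second one being the same redacted IP as in the log):

localhost
192.168.???.??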
So there are no errors, and it looks like the Master node is working, as well as two Worker nodes. However, when I open the web UI at 192.168.?.?:8080, it shows only one worker, the local one. My problem is similar to the one described here: Spark Clusters: worker info doesn't show on web UI, but there is nothing of the sort in my /etc/hosts file. All it contains is:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
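In other words, there are no lines mapping the machines' real hostnames to their LAN addresses, nothing of the form (addresses and hostnames purely illustrative):

192.168.0.10   master-machine
192.168.0.11   worker-machine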
What am I missing? Both machines are running Fedora Workstation x86_64.