Spark: why does a worker automatically start on my master host?

I have 3 machines: one is the master and the other two are slaves. The master's address is in spark/conf/master and the two slave hosts are listed in spark/conf/slaves.
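Roughly, the contents of the two files look like this (the hostnames master, slave1 and slave2 are placeholders, not my real ones):

    # spark/conf/master - the master host
    master

    # spark/conf/slaves - one worker host per line
    slave1
    slave2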

I run start-master.sh and the web UI shows no workers; everything is fine so far. Then I run start-slaves.sh and both slaves appear as workers in the web UI; again, everything looks fine.
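The exact commands I run from the Spark installation directory on the master are roughly these (assuming the standard sbin/ scripts):

    # on the master machine
    ./sbin/start-master.sh    # starts the Master daemon and its web UI (port 8080 by default)
    ./sbin/start-slaves.sh    # connects via ssh to every host in conf/slaves and starts a Worker there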

But when I submit a job with --master spark://master, which is the same URL that is in the conf/master file and shown in the master web UI, a worker starts on all machines, including the master.
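The submit command is roughly the following; the class name, jar and memory value are placeholders for illustration, and only the --master URL comes from my setup (7077 is the default standalone port, which I omit above):

    # class, memory and jar are made-up placeholders
    ./bin/spark-submit \
      --master spark://master:7077 \
      --class com.example.MyApp \
      --executor-memory 2g \
      my-app.jar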

Why? I do not want a worker on the master machine, because the driver uses a lot of memory and cannot share the machine with a worker. What am I doing wrong? Should a worker be running on the master? Should I move the master to one of the slave machines and use the current master machine as just a Spark client?
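For reference, this is roughly how I check which Spark daemons are running on each host, by listing the JVM processes (hostnames are placeholders; a standalone Master and Worker show up as separate processes, and executors for a running job appear as CoarseGrainedExecutorBackend):

    # on the master machine: I expect to see only "Master" here
    jps
    # on each slave: I expect to see a "Worker" process
    ssh slave1 jps
    ssh slave2 jps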

