I installed a standalone Spark cluster (1.6), with 1 master and 3 machines added as workers in the conf/slaves file. Even though I allocated 4 GB of memory to each of my Spark workers, why does the application use only 1024 MB when it launches? I would like it to use all 4 GB allocated to it. Help me figure out where I'm going wrong.
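For context, my conf/slaves file just lists one worker hostname per line (the hostnames below are placeholders, not my real machines):

# conf/slaves -- one worker host per line
worker-node-1
worker-node-2
worker-node-3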
Below is a screenshot of the Spark master web UI (taken while the application is running via spark-submit): the Memory column shows 4.0 GB, with 1024.0 MB Used in brackets next to it.
I also tried passing the --executor-memory 4G option to spark-submit, and it does not work either (as suggested in How to change memory on node for apache spark worker).
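For reference, this is roughly the command I am running; the master host, application class, and jar path below are placeholders for my actual job:

./bin/spark-submit \
  --master spark://master-host:7077 \
  --executor-memory 4G \
  --class com.example.MyApp \
  /path/to/my-app.jar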
These are the parameters I set in spark-env.sh:
export SPARK_WORKER_CORES=3
export SPARK_WORKER_MEMORY=4g
export SPARK_WORKER_INSTANCES=2
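One thing I have not tried yet: as I understand it, the default executor memory can also be set in conf/spark-defaults.conf instead of on the command line (spark.executor.memory is a standard Spark property), along these lines:

# conf/spark-defaults.conf -- sketch, value matching what I want per executor
spark.executor.memory  4g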
