Although you can make the driver run on a specific version of Java ( export JAVA_HOME=/path/to/jre/ && spark-submit ... ), the workers will execute the code with whatever Java version is the default in the PATH of the YARN user on each worker machine.
What you can do is configure every Spark instance to use a specific JAVA_HOME by editing its spark-env.sh file ( documentation ).
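For example, adding something like the following to conf/spark-env.sh on every node should make both the driver and the executors pick up the same Java version (the JDK path below is only a placeholder; adjust it to wherever your desired JDK is installed):

    # placeholder path - point this at the JDK Spark should use on this node
    export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
    export PATH="$JAVA_HOME/bin:$PATH"

Since spark-env.sh is sourced when Spark daemons and executors start, this has to be done consistently on all worker machines, not just on the machine you submit from.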