It depends on your cluster manager. I assume you are asking about the local[n] runtime.
If so, the driver and the single executor run in the same JVM, which uses n threads to execute tasks.
DAGScheduler (Spark's stage-level scheduler) submits tasks to that pool of n threads, so at most n tasks run concurrently.
If n exceeds the number of cores, i.e. you run more threads than cores, the OS has to time-slice those threads across the available cores.
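A minimal sketch of starting such a session (the app name is illustrative; local[4] requests 4 task threads):

```scala
import org.apache.spark.sql.SparkSession

// local[4]: driver and executor share this one JVM, with 4 task threads
val spark = SparkSession.builder()
  .master("local[4]")
  .appName("local-mode-demo")
  .getOrCreate()

// In local[n], defaultParallelism reflects n
println(spark.sparkContext.defaultParallelism) // 4
```

Using local[*] instead picks n equal to the number of available cores, which avoids oversubscribing the machine.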