There is not much context in the question, but first: it seems that the Spark Job Server limits the number of concurrently running jobs (unlike Spark itself, which limits the number of tasks, not jobs):
From application.conf
# Number of jobs that can be run simultaneously per context
# If not set, defaults to number of cores on machine where jobserver is running
max-jobs-per-context = 8
If this is not the problem (you set a higher limit or use more than one context), then the total number of cores in the cluster (8 * 20 = 160) is the maximum number of simultaneously running tasks. If each of your jobs creates 16 tasks, Spark will queue the next incoming job, waiting for cores to become available.
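If the job-count limit is what you are hitting, it can be raised in the Job Server's configuration. A minimal sketch (the value 32 is illustrative, not a recommendation):

```
# in the jobserver's application.conf
spark.jobserver {
  # allow more jobs to run concurrently in a single context
  max-jobs-per-context = 32
}
```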
In Spark, the number of tasks per stage can be controlled with repartition and coalesce on an RDD/DataFrame, since each partition is processed by one task. Alternatively, several RDDs can be combined into one (e.g., with union) to reduce the total number of jobs.