I assume you are using std.parallelism? I wrote std.parallelism, so I'll explain the design decision here. There actually was a join
function in some beta versions of std.parallelism. It waited for all tasks to finish and then shut down the task pool. I removed it because I realized it was useless.
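For context, the pattern such a join would have served looks roughly like this: create the Task objects by hand, submit them to the pool, and wait on each one explicitly. This is only a sketch; doSomething is a hypothetical per-element function, not part of the question:

    import std.parallelism;

    double doSomething(double x) { return x * 2; }  // hypothetical per-element work

    void main()
    {
        auto data = new double[](1_000);

        // One Task object per element: O(N) tasks, submitted to the pool
        // and then waited on one by one (a hand-rolled "join").
        auto tasks = new typeof(task!doSomething(0.0))[](data.length);
        foreach (i; 0 .. data.length)
        {
            tasks[i] = task!doSomething(data[i]);
            taskPool.put(tasks[i]);
        }
        foreach (i, t; tasks)
            data[i] = t.yieldForce();   // blocks until that task is done
    }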
The reason is that if you are manually creating a set of O(N) Task
objects to iterate over some range, you are misusing the library. Instead, you should use a parallel foreach loop, which automatically joins before it releases control back to the calling thread. Your example becomes:
foreach(ref elem; parallel(array)) { job1(elem); }
foreach(ref elem; parallel(array)) { job2(elem); }
In this case, job1 and job2 should not start new tasks themselves, because the parallel foreach loop already uses enough tasks to fully utilize all CPU cores.
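Putting it together, a minimal runnable sketch might look like the following; job1 and job2 here are just hypothetical stand-ins for your per-element functions. Each parallel foreach only returns once every element has been processed, so the implicit join acts as a barrier between the two phases:

    import std.parallelism;

    // Hypothetical stand-ins for the job1/job2 mentioned above.
    void job1(ref double elem) { elem += 1; }
    void job2(ref double elem) { elem *= 3; }

    void main()
    {
        auto array = new double[](1_000);

        // The loop body is executed in parallel across the pool, but control
        // only returns here after all elements are done (the implicit join).
        foreach (ref elem; parallel(array))
            job1(elem);

        // At this point every job1 call has finished, so job2 can rely
        // on job1's results for all elements, not just its own.
        foreach (ref elem; parallel(array))
            job2(elem);
    }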