To answer my own question, I found a solution that works the way I hoped:
First, Mygenerator is no longer a generator but a regular function (now called Myfunction). Also, instead of iterating over the x, y, and z segments inside it, I now pass one segment to the function at a time:
def Myfunction(segment):
    x_segment, y_segment, z_segment = segment  # imap calls the function with a single (x, y, z) tuple
Using multiprocessing.Pool with imap (which yields results lazily, like a generator) works. Each call to Myfunction returns a pair of partial results, which the reduce combines element-wise:
import multiprocessing
from functools import reduce

pool = multiprocessing.Pool(ncpus)  # ncpus = number of worker processes
results = pool.imap(Myfunction, ((x[i], y[i], z[i]) for i in range(len(x))))
M1, M2 = reduce(lambda r1, r2: (r1[0] + r2[0], r1[1] + r2[1]), results)
pool.close()
pool.join()
where I changed x and y in the lambda expression to r1 and r2 to avoid confusion with other variables of the same name. When I tried to use a generator with multiprocessing, I ran into problems with pickling.
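As a minimal illustration of that pickling problem (a throwaway snippet, not my actual code): generator objects simply cannot be pickled, and multiprocessing relies on pickle to ship objects between processes.

import pickle

gen = (i * i for i in range(5))  # any generator will do for the demonstration

try:
    pickle.dumps(gen)            # essentially what multiprocessing does when sending work to a process
except TypeError as err:
    print(err)                   # e.g. "cannot pickle 'generator' object"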
The only disappointment with this solution is that it did not really speed up the calculations; I assume this is due to the overhead of the parallelization itself. With 8 cores, the processing speed increased by only about 10%; when I reduced the pool to 4 cores, the speed doubled. This seems to be the best I can do for my specific task, unless there is some other way to parallelize it...
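If the overhead is indeed per-task dispatch, one knob that might help (I have not measured this for my case) is the chunksize argument of imap, which sends the segments to the workers in batches instead of one at a time; the value 100 below is just an arbitrary example and would need tuning:

results = pool.imap(Myfunction,
                    ((x[i], y[i], z[i]) for i in range(len(x))),
                    chunksize=100)  # batch size is a guess, not a measured optimum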
Note that it is essential to use imap here, since map would store all the returned values in memory before the reduce operation, which in my case is not feasible.
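To make that difference concrete, here is a schematic comparison with a trivial worker function (square is just a stand-in for Myfunction): map materializes the complete list of results before reduce ever runs, while imap hands them to reduce as they arrive.

import multiprocessing
from functools import reduce

def square(n):          # trivial stand-in for Myfunction
    return n * n

if __name__ == "__main__":
    with multiprocessing.Pool(2) as pool:
        # map: the full list of results exists in memory before reduce starts
        eager = pool.map(square, range(10))
        total_eager = reduce(lambda a, b: a + b, eager)

        # imap: results are consumed one by one, never collected into a list first
        lazy = pool.imap(square, range(10))
        total_lazy = reduce(lambda a, b: a + b, lazy)

    print(total_eager, total_lazy)  # both 285; only the memory behaviour differs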