I searched and cannot find the answer to this question elsewhere. I hope I haven’t missed anything.
I'm trying to use Python multiprocessing to batch-run some proprietary models in parallel. I have, say, 200 simulations, and I want to run them roughly 10-20 at a time. My problem is that the proprietary software crashes if two models start at the same moment. I need to introduce a delay between the processes spawned by multiprocessing so that each model has a moment to get going before the next one starts.
So far, my solution has been to introduce a random time delay at the start of each child process, before it kicks off the model run. However, this only reduces the probability of two runs starting at the same time, so I still run into problems when processing a large number of models. I therefore believe the delay should be built into the multiprocessing part of the code, but I could not find any documentation or examples of this.
Edit: I am using Python 2.7
This is my code:
from time import sleep
import numpy as np
import subprocess
import multiprocessing
def runmodels(arg):
    # random delay of up to 120 s, to make simultaneous starts unlikely
    sleep(np.random.rand() * 120)  # rand() with no args gives a scalar, as sleep() expects
    subprocess.call(arg)
if __name__ == '__main__':
    arguments = [big list of runs in here
                 ]
    count = 12
    pool = multiprocessing.Pool(processes=count)
    r = pool.imap_unordered(runmodels, arguments)
    pool.close()
    pool.join()
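To make it concrete, here is the sort of behaviour I imagine (a minimal, untested sketch; the init_worker helper and the 5-second gap are my own placeholders, not part of my real code). A shared multiprocessing.Lock is handed to each worker through the pool's initializer, so only one model can start within any given window, while the runs themselves still overlap:

from time import sleep
import subprocess
import multiprocessing

start_lock = None  # filled in per worker by the pool initializer

def init_worker(lock):
    # Runs once in each worker process; stashes the shared lock globally.
    global start_lock
    start_lock = lock

def runmodels(arg):
    # Serialize only the *starts*: hold the lock while the model launches,
    # then release it so the next run may begin while this one continues.
    with start_lock:
        proc = subprocess.Popen(arg)  # launch the model
        sleep(5)                      # assumed safe gap between starts
    proc.wait()                       # runs still execute in parallel

if __name__ == '__main__':
    arguments = []  # big list of runs in here
    lock = multiprocessing.Lock()
    pool = multiprocessing.Pool(processes=12,
                                initializer=init_worker,
                                initargs=(lock,))
    r = pool.imap_unordered(runmodels, arguments)
    pool.close()
    pool.join()

Is something along these lines the right way to do it, or is there a built-in way to stagger process start-up in multiprocessing?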