Python multiprocessing: is it possible to set a fixed time delay between separate processes?

I searched and cannot find the answer to this question elsewhere. I hope I haven’t missed anything.

I'm trying to use Python multiprocessing to batch-run some proprietary models in parallel. I have, say, 200 simulations, and I want to run them roughly 10-20 at a time. My problem is that the proprietary software crashes if two models start at the same moment. I need to introduce a delay between the processes spawned by multiprocessing so that each model has been running for a bit before the next one starts.

So far, my workaround has been to add a random time delay at the start of each child process, before it kicks off the model run. However, this only reduces the probability that any two runs start at the same time, so I still hit the problem when processing a large number of models. I therefore think the delay should be built into the multiprocessing part of the code, but I could not find any documentation or examples of this.

Edit: I am using Python 2.7

This is my code:

from time import sleep
import numpy as np
import subprocess
import multiprocessing

def runmodels(arg):
    sleep(np.random.rand() * 120) # interim solution: a random delay of up to 120 s makes simultaneous starts unlikely, but it isn't a guaranteed solution
    subprocess.call(arg) # this line actually fires off the model run

if __name__ == '__main__':    

    arguments = [big list of runs in here]

    count = 12
    pool = multiprocessing.Pool(processes = count)
    r = pool.imap_unordered(runmodels, arguments)      
    pool.close()
    pool.join()
2 answers

One way to do this is with threads and a semaphore:

from time import sleep
import subprocess
import threading


def runmodels(arg):
    subprocess.call(arg)
    sGlobal.release() # free a slot so the next run can launch


if __name__ == '__main__':
    threads = []
    sGlobal = threading.Semaphore(12) # allow at most 12 model runs at once
    arguments = [big list of runs in here]
    for arg in arguments:
        sGlobal.acquire() # block while 12 runs are already active
        t = threading.Thread(target=runmodels, args=(arg,))
        threads.append(t)
        t.start()
        sleep(1) # guarantee at least one second between consecutive starts

    for t in threads :
        t.join()
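
To convince yourself the spacing works before pointing it at the real models, you can run the same pattern against a stub; everything below (the run_stub function, the fake argument list, the 4-second "model") is illustrative only, not part of the original answer:

from time import sleep, time
import threading

START = time()
gate = threading.Semaphore(3) # small limit so the gating is easy to see

def run_stub(arg):
    print("%5.1fs  started %s" % (time() - START, arg))
    sleep(4) # stand-in for the actual model run
    gate.release() # free a slot for the next launch

if __name__ == '__main__':
    threads = []
    for arg in ['run%d' % i for i in range(8)]:
        gate.acquire() # wait until fewer than 3 stubs are active
        t = threading.Thread(target=run_stub, args=(arg,))
        threads.append(t)
        t.start()
        sleep(1) # at least one second between consecutive starts
    for t in threads:
        t.join()

The printed start times should never be less than a second apart, no matter how long each stub runs.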

multiprocessing.Pool() already limits the number of processes running simultaneously.

You can use a lock shared between the worker processes to make sure that at least a second passes between any two starts (untested):

import threading
import multiprocessing

def init(lock):
    global starting
    starting = lock # make the shared lock visible inside each worker

def run_model(arg):
    starting.acquire() # no other process can get it until it is released
    threading.Timer(1, starting.release).start() # release in a second
    # ... start your simulation here

if __name__ == "__main__":
    arguments = ...
    pool = multiprocessing.Pool(processes=12,
                                initializer=init, initargs=[multiprocessing.Lock()])
    for _ in pool.imap_unordered(run_model, arguments):
        pass
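
For clarity, here is the same idea as a complete runnable sketch with a stub simulation; the sleep-based body of run_model, the 3-worker pool size, and the integer arguments are placeholders, not part of the original answer:

import threading
import multiprocessing
from time import sleep, time

def init(lock, t0):
    global starting, start_time
    starting = lock # shared lock: only one worker may be "starting" at a time
    start_time = t0

def run_model(arg):
    starting.acquire() # wait for our turn to start
    threading.Timer(1, starting.release).start() # hand the lock over in one second
    begun = time() - start_time
    sleep(4) # placeholder for the proprietary model run
    return "%s began at %.1fs" % (arg, begun)

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=3,
                                initializer=init,
                                initargs=[multiprocessing.Lock(), time()])
    for line in pool.imap_unordered(run_model, range(8)):
        print(line)
    pool.close()
    pool.join()

Because a timer releases the lock rather than the simulation itself, the one-second spacing between starts holds even while several runs stay active at once.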

Source: https://habr.com/ru/post/1588939/

