How to use an initializer in a Python ThreadPool

I am trying to perform threaded convolution using PyFFTW to compute a large number of 2D convolutions in parallel. (Separate processes are not required, since NumPy operations release the GIL.) The canonical model for this is: http://code.activestate.com/recipes/577187-python-thread-pool/

(Py)FFTW is so fast because it reuses plans. The plans must be set up separately for each thread to avoid access errors, for example:

    class Worker(Thread):
        """Thread executing tasks from a given tasks queue"""
        def __init__(self, tasks):
            Thread.__init__(self)
            self.tasks = tasks
            self.daemon = True
            # Make separate fftw plans for each thread.
            flag_for_fftw = 'patient'
            self.inputa = np.zeros(someshape, dtype='float32')
            self.outputa = np.zeros(someshape_semi, dtype='complex64')
            # Create a forward plan.
            self.fft = fftw3.Plan(self.inputa, self.outputa, direction='forward',
                                  flags=[flag_for_fftw], nthreads=1)
            # Initialize the arrays for the inverse fft.
            self.inputb = np.zeros(someshape_semi, dtype='complex64')
            self.outputb = np.zeros(someshape, dtype='float32')
            # Create the backward plan.
            self.ifft = fftw3.Plan(self.inputb, self.outputb, direction='backward',
                                   flags=[flag_for_fftw], nthreads=1)
            self.start()

Thus you can pass self.inputa, self.outputa, self.fft, self.inputb, self.outputb, and self.ifft to the actual convolution inside the run method of the Worker class.
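To make the pattern concrete, here is a minimal, runnable sketch of that idea: a Worker thread owns its own "plans" (NumPy's FFT functions stand in for the fftw3.Plan objects, and `convolve` and the queue layout are hypothetical names, not from the recipe) and hands them to each task it pulls off the queue.

```python
import queue
import threading
import numpy as np

class Worker(threading.Thread):
    """Thread executing tasks from a queue, reusing its own per-thread 'plans'."""
    def __init__(self, tasks, shape):
        threading.Thread.__init__(self)
        self.tasks = tasks
        self.daemon = True
        # Stand-ins for per-thread FFTW plans (NumPy's FFT used for this sketch).
        self.fft = np.fft.rfft2
        self.ifft = np.fft.irfft2
        self.shape = shape
        self.start()

    def run(self):
        while True:
            func, args, results = self.tasks.get()
            try:
                # Hand this thread's plans to the task.
                results.append(func(self.fft, self.ifft, self.shape, *args))
            finally:
                self.tasks.task_done()

def convolve(fft, ifft, shape, a, b):
    # Circular convolution via the per-thread transforms.
    return ifft(fft(a) * fft(b), s=shape)

tasks = queue.Queue()
shape = (8, 8)
workers = [Worker(tasks, shape) for _ in range(2)]

results = []
a = np.zeros(shape); a[0, 0] = 1.0   # delta kernel: convolution returns b unchanged
b = np.random.rand(*shape)
tasks.put((convolve, (a, b), results))
tasks.join()
print(np.allclose(results[0], b))  # True
```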

This is all well and good, but we could also import the ThreadPool class:

 from multiprocessing.pool import ThreadPool 

But how should I define the initializer in ThreadPool to get the same result? According to the docs (docs.python.org/library/multiprocessing.html), "if initializer is not None then each worker process will call initializer(*initargs) when it starts" (for ThreadPool, each worker thread). You can easily verify this in the Python source code.
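A quick way to see this behaviour is the sketch below: the initializer runs once in each worker thread and stashes per-thread state in a threading.local() object, which the tasks then read back (the names `tls`, `init_worker`, and `task` are made up for the example).

```python
from multiprocessing.pool import ThreadPool
import threading

# Per-thread storage; each worker thread sees its own attributes.
tls = threading.local()

def init_worker(tag):
    # Runs once in each worker thread when the pool starts.
    tls.name = "%s-%d" % (tag, threading.get_ident())

def task(x):
    # Each task sees the state its own thread initialized.
    return (tls.name, x * 2)

pool = ThreadPool(2, initializer=init_worker, initargs=("worker",))
results = pool.map(task, range(4))
pool.close(); pool.join()
print(results)
```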

However, when you set up the ThreadPool, for example with two threads:

 po = ThreadPool(2,initializer=tobedetermined) 

and then run it, perhaps in a loop:

 po.apply_async(convolver,(some_input,)) 

how can you set up convolver via the initializer? That is, how can you use separate FFTW plans in each thread without rebuilding the FFTW plans for every convolution?

Cheers, Alex.

1 answer

You can wrap the convolver call in a function that uses thread-local storage ( threading.local() ) to initialize the PyFFTW plans on first use and remember the result.
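A minimal sketch of that answer, with NumPy's FFT standing in for the FFTW plans (with PyFFTW you would build fftw3.Plan / pyfftw.FFTW objects inside `init_worker` instead; all names here are illustrative):

```python
from multiprocessing.pool import ThreadPool
import threading
import numpy as np

tls = threading.local()

def init_worker(shape):
    # Stand-in for plan creation: runs once per worker thread, so the
    # (expensive) setup is not repeated for every convolution.
    tls.shape = shape

def convolver(pair):
    a, b = pair
    # Reuse this thread's state; NumPy's FFT stands in for FFTW execution.
    return np.fft.irfft2(np.fft.rfft2(a) * np.fft.rfft2(b), s=tls.shape)

shape = (8, 8)
pool = ThreadPool(2, initializer=init_worker, initargs=(shape,))
a = np.zeros(shape); a[0, 0] = 1.0   # delta kernel
b = np.random.rand(*shape)
res = pool.apply_async(convolver, ((a, b),)).get()
pool.close(); pool.join()
# Circular convolution with a delta at the origin returns b unchanged.
print(np.allclose(res, b))  # True
```

The same pattern works with a lazy check inside `convolver` itself (initialize the plans on first call per thread) if you would rather not use the initializer argument at all.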


Source: https://habr.com/ru/post/1368975/
