I am trying to perform threaded convolution using PyFFTW to compute a large number of 2D convolutions simultaneously. (Separate processes are not required, since the GIL is released during the NumPy operations.) The canonical thread-pool model for this is here: http://code.activestate.com/recipes/577187-python-thread-pool/
PyFFTW is fast because it reuses FFTW plans. The plans must be set up separately for each thread to avoid concurrent-access problems, for example:
class Worker(Thread):
    """Thread executing tasks from a given tasks queue"""
    def __init__(self, tasks):
        Thread.__init__(self)
        self.tasks = tasks
        self.daemon = True
The per-thread plans and aligned arrays (self.inputa, self.outputa, self.fft, self.inputb, self.outputb, self.ifft) can then be handed to the actual convolution inside the Worker class's run method.
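To make this concrete, here is roughly what I mean by per-thread plans and the convolution that uses them (only a sketch: the 256x256 shape, the complex dtype, and the pre-computed kernel_fft are stand-ins for my actual data):

import pyfftw

def make_plans(shape=(256, 256)):
    # One set of aligned buffers and plans; each Worker builds its own set.
    inputa = pyfftw.empty_aligned(shape, dtype='complex128')
    outputa = pyfftw.empty_aligned(shape, dtype='complex128')
    fft = pyfftw.FFTW(inputa, outputa, axes=(0, 1))
    inputb = pyfftw.empty_aligned(shape, dtype='complex128')
    outputb = pyfftw.empty_aligned(shape, dtype='complex128')
    ifft = pyfftw.FFTW(inputb, outputb, axes=(0, 1), direction='FFTW_BACKWARD')
    return inputa, outputa, fft, inputb, outputb, ifft

def convolve(image, kernel_fft, inputa, outputa, fft, inputb, outputb, ifft):
    # Circular 2D convolution via the convolution theorem, reusing the plans.
    inputa[:] = image
    fft()                              # outputa <- FFT(image)
    inputb[:] = outputa * kernel_fft   # pointwise product in the frequency domain
    ifft()                             # outputb <- normalised inverse FFT
    return outputb.copy()              # copy so the buffers can be reused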
This is all well and good, but we could also import the ThreadPool class:
from multiprocessing.pool import ThreadPool
But how should I define the initializer for ThreadPool to get the same result? According to the docs at docs.python.org/library/multiprocessing.html, "each worker process will call initializer(*initargs) when it starts." This is easy to verify in the Python source code.
However, when you set up the ThreadPool, for example with two threads:
po = ThreadPool(2,initializer=tobedetermined)
and then submit jobs to it, perhaps in a loop:
po.apply_async(convolver,(some_input,))
how can the initializer be used to set up convolver? How can each thread use its own FFTW plans, without rebuilding the plans for every convolution?
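To make the question concrete, the shape of what I am after is roughly the sketch below, reusing make_plans and convolve from above. The threading.local object is only my guess at where the initializer could stash the per-thread plans; whether that (or something else entirely) is the intended use of initializer is exactly what I am asking. kernel_fft and images stand in for my actual data.

import threading
from multiprocessing.pool import ThreadPool

tls = threading.local()   # guessed mechanism for per-thread storage

def init_plans():
    # Runs once in every pool thread; give the thread its own plans.
    tls.plans = make_plans()

def convolver(some_input):
    # Reuse this thread's plans instead of rebuilding them for each convolution.
    return convolve(some_input, kernel_fft, *tls.plans)

po = ThreadPool(2, initializer=init_plans)
results = [po.apply_async(convolver, (img,)) for img in images]
outputs = [r.get() for r in results]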
Cheers, Alex.