You may need to build this out of the other moving parts. For example, create a Queue.Queue() into which each waiter puts a new Event() that it then waits on. When it is time to wake up one of the waiting threads, pull the item that has been in the queue the longest (it will be one of those Event objects) and release that thread via event.set() .
Obviously you could also use one semaphore per waiting thread, but I like the semantics of Event: it can only fire once, whereas a semaphore's counter semantics are designed to support many waiters.
To set the system up:
    import Queue
    big_queue = Queue.Queue()
Then to wait:
    import threading
    myevent = threading.Event()
    big_queue.put(myevent)
    myevent.wait()
And to release one of the waiting threads:
    event = big_queue.get()
    event.set()
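Putting the pieces together, here is a minimal runnable sketch of the whole scheme, ported to Python 3 (where the module is queue rather than Queue). The waiter and release_one names are just illustrative:

    import queue
    import threading
    import time

    big_queue = queue.Queue()

    def waiter(name):
        # Each waiter registers its own Event and blocks on it.
        my_event = threading.Event()
        big_queue.put(my_event)
        my_event.wait()
        print(name, "released")

    def release_one():
        # Wake the thread that has been waiting the longest (FIFO order).
        event = big_queue.get()
        event.set()

    threads = [threading.Thread(target=waiter, args=("waiter-%d" % i,)) for i in range(3)]
    for t in threads:
        t.start()

    time.sleep(0.1)   # let the waiters enqueue their events (get() would block anyway)
    for _ in range(3):
        release_one()

    for t in threads:
        t.join()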
I believe the weakness of this approach is that the thread doing the set/release has to wait until a waiting thread appears (big_queue.get() blocks on an empty queue), whereas a true semaphore would allow several releases to go ahead even if nobody is waiting yet.
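As a rough sketch of that alternative, a single counting semaphore lets the releaser proceed immediately, at the cost of the FIFO wake-up order that the queue-of-events approach provides (the names here are again only illustrative):

    import threading

    sem = threading.Semaphore(0)

    def waiter():
        sem.acquire()   # blocks until a release "credit" is available
        # ... continue after being released ...

    def release_one():
        sem.release()   # never blocks; the credit is stored if nobody is waiting yet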