Apparent time travel with the python multiprocessing module: surely I did something wrong

I use Python for video-game-style experiments in cognitive science. I am testing a device that detects eye movements via EOG, and this device talks to the computer over USB. To guarantee that data keep being read from USB while the experiment is doing other things (changing the display, etc.), I thought I would use the multiprocessing module (on a multi-core machine, of course), putting the USB-reading work in a separate worker process and using a Queue to tell the worker when interesting events occur in the experiment. However, I have run into some strange behavior: even when there is about 1 second between sending the worker two different messages, when I look at the worker's output at the end it appears to have received the second message almost immediately after the first. I have surely coded something horribly wrong, but I can't see it, so I would be very grateful for any help anyone can provide.
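Roughly, the pattern I'm describing looks like the sketch below (illustrative only; read_sample() and the event names are placeholders, not my actual EOG code — the real minimal example is in the gist linked further down):

```python
# Minimal sketch of the setup: one worker process keeps reading the device
# while a multiprocessing.Queue carries event messages from the experiment.
import multiprocessing
import time
from queue import Empty


def read_sample():
    time.sleep(0.01)               # stands in for the real USB read (~100 Hz)
    return time.time()


def usb_worker(events, outfile):
    """Tag every sample with the most recent event message received."""
    current_event = None
    with open(outfile, "w") as out:
        while True:
            try:
                current_event = events.get_nowait()   # non-blocking check for a new event
            except Empty:
                pass
            if current_event == "QUIT":
                break
            out.write("%s\t%.6f\n" % (current_event, read_sample()))


if __name__ == "__main__":
    events = multiprocessing.Queue()
    worker = multiprocessing.Process(target=usb_worker, args=(events, "temp.txt"))
    worker.start()

    for event in ("trial_start", "stimulus_on"):      # ~1 s apart, as in the real code
        events.put(event)
        time.sleep(1.0)
    events.put("QUIT")
    worker.join()
```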

I have tried to boil my code down to a minimal example that demonstrates this behavior. If you go to this gist:

https://gist.github.com/914070

you will find "multiprocessing_timetravel.py", which contains the example, and "analysis.R", which analyzes the "temp.txt" file produced by running "multiprocessing_timetravel.py". "analysis.R" is written in R and requires the plyr library, but I have also included sample output of the analysis in the "analysis_results.txt" file in the gist.

+4
2 answers

Ah, I solved it, and it turned out to be much simpler than I expected. Each "trial" involved 5 events, and the last event triggered writing the data out to the hard disk. If this final write takes long enough, the worker may not pick up the next trial's first event until after the second event has already been queued. When that happens, the first event lasts (in the worker's eyes) for only a single one of its loop cycles before it encounters the second event. I will either have to find a faster way to write out the data, or keep the data in memory until a break in the experiment gives me time for a long write.
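Here is a toy sketch of the effect (assumed names and timings, not the actual experiment code): the slow write at the end of a trial keeps the worker busy long enough for the next two events to pile up in the queue, so the first of them is seen for only a single loop iteration:

```python
# The worker prints event_1 and event_2 with nearly identical timestamps,
# even though the main process sent them a full second apart.
import multiprocessing
import time


def worker(events):
    while True:
        event = events.get()                 # blocking get is fine for this toy
        print("%.3f  worker saw %r" % (time.time(), event))
        if event == "QUIT":
            break
        if event == "trial_end":
            time.sleep(2.5)                  # stands in for the slow write to disk


if __name__ == "__main__":
    events = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(events,))
    p.start()
    events.put("trial_end")                  # last event of a trial: triggers the long write
    time.sleep(1.0)
    events.put("event_1")                    # queued while the worker is still writing
    time.sleep(1.0)
    events.put("event_2")                    # also queued before the worker gets back
    time.sleep(1.0)
    events.put("QUIT")
    p.join()
```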

0

Even though you are using multiprocessing, your queue still uses synchronization objects (two locks and a semaphore), and its put method spawns another thread (based on the 2.7 source). So GIL contention (and other fun stuff) can come into play, as BlueRaja suggested. You can try playing with sys.setcheckinterval and see whether that reduces the observed discrepancy, although you wouldn't want to operate normally in that state.
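For concreteness, the knob in question is a Python 2.x API (matching the 2.7 source mentioned above); it was deprecated in 3.2 and later removed in favor of sys.setswitchinterval, so a version check is one way to experiment with it:

```python
import sys

if hasattr(sys, "setcheckinterval"):   # Python 2.x (and early 3.x, where it was deprecated)
    sys.setcheckinterval(1000)         # check for a thread switch every 1000 bytecodes (default 100)
else:                                  # modern Python 3
    sys.setswitchinterval(0.05)        # let a thread run up to 50 ms before switching (default 0.005)
```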

Note that if your USB-reading code releases the GIL (e.g. ctypes code, or a Python extension designed to release the GIL), you get true multithreading, and a threading approach may perform better than multiprocessing.
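A sketch of what that threading approach could look like, with time.sleep() standing in for a blocking ctypes/extension call that releases the GIL while waiting on the device (placeholder names, not a real driver API):

```python
import queue
import threading
import time


def read_block():
    # Placeholder: a real ctypes call into the USB driver would block here
    # with the GIL released, exactly as time.sleep() does.
    time.sleep(0.01)
    return time.time()


def reader(events, stop, outfile):
    current_event = None
    with open(outfile, "w") as out:
        while not stop.is_set():
            sample = read_block()
            try:
                current_event = events.get_nowait()   # plain queue.Queue, no extra process
            except queue.Empty:
                pass
            out.write("%s\t%.6f\n" % (current_event, sample))


if __name__ == "__main__":
    events, stop = queue.Queue(), threading.Event()
    t = threading.Thread(target=reader, args=(events, stop, "temp.txt"))
    t.start()
    events.put("trial_start")
    time.sleep(1.0)                                   # the experiment does its thing here
    stop.set()
    t.join()
```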

+1

Source: https://habr.com/ru/post/1347604/

