Embedded Python synchronization in a multi-threaded program

Here is an example of using an embedded Python interpreter in a multi-threaded program:

#include <Python.h>
#include <boost/thread.hpp>

void f(const char* code)
{
    static volatile auto counter = 0;
    for(; counter < 20; ++counter)
    {
        auto state = PyGILState_Ensure();   // acquire the GIL for this thread
        PyRun_SimpleString(code);
        PyGILState_Release(state);          // give the GIL back

        boost::this_thread::yield();
    }
}

int main()
{
    Py_Initialize();
    PyEval_InitThreads();                   // acquire the GIL (on Python 3.2+ this must not be called before Py_Initialize)
    PyRun_SimpleString("x = 0\n");
    auto mainstate = PyEval_SaveThread();   // release the GIL so the worker threads can take it

    auto thread1 = boost::thread(f, "print('thread #1, x =', x)\nx += 1\n");
    auto thread2 = boost::thread(f, "print('thread #2, x =', x)\nx += 1\n");
    thread1.join();
    thread2.join();

    PyEval_RestoreThread(mainstate);
    Py_Finalize();
}

It looks fine, but it is not actually synchronized: the Python interpreter releases and reacquires the GIL several times during PyRun_SimpleString (see the C API docs on thread state and the GIL).

We could serialize the PyRun_SimpleString calls with a synchronization object of our own, but that seems like the wrong approach.
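
For reference, a minimal sketch of that approach, assuming a process-wide boost::mutex (py_mutex is an illustrative name) and f rewritten to take the lock around the whole embedded-Python section:

#include <boost/thread.hpp>

static boost::mutex py_mutex;   // serializes all embedded-Python sections

void f(const char* code)
{
    static volatile auto counter = 0;
    for(; counter < 20; ++counter)
    {
        boost::mutex::scoped_lock guard(py_mutex);  // one thread at a time past this point
        auto state = PyGILState_Ensure();
        PyRun_SimpleString(code);
        PyGILState_Release(state);
    }
}

Holding py_mutex while acquiring the GIL is safe here because the main thread has already released the GIL via PyEval_SaveThread, so no thread ever waits on py_mutex while holding the GIL.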

Python has its own synchronization modules, _thread and threading, but they do not work in this code:

Py_Initialize();
PyRun_SimpleString(R"(
import _thread
sync = _thread.allocate_lock()

x = 0
)");

auto mainstate = PyEval_SaveThread();

auto thread1 = boost::thread(f, R"(
with sync:
    print('thread #1, x =', x)
    x += 1
)");
It fails with File "<string>", line 3, in <module> NameError: name '_[1]' is not defined and then deadlocks.

What is the most efficient way to synchronize embedded Python code?


The issue occurs with Python 3.1, Python 3.2 and Python 2.7.


CPython can release the GIL in the middle of an operation (so that other Python threads get a chance to run); in particular, print does this (see string_print in stringobject.c). As a general rule, Python may release the GIL around I/O.
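
For illustration, this is roughly the pattern inside string_print; a simplified, paraphrased sketch of the raw-print path in Python 2.7's Objects/stringobject.c, not a verbatim quote:

if (flags & Py_PRINT_RAW) {
    char *data = op->ob_sval;
    Py_ssize_t size = Py_SIZE(op);
    Py_BEGIN_ALLOW_THREADS          /* drop the GIL around the blocking write... */
    fwrite(data, 1, (int)size, fp);
    Py_END_ALLOW_THREADS            /* ...and reacquire it afterwards */
    return 0;
}

So another thread waiting in PyGILState_Ensure can grab the GIL while the print is still in progress.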

Since you are already using Boost, you can guard the calls with a mutex of your own, for example boost::interprocess::interprocess_mutex, as sketched below.
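
A minimal sketch of that variant, assuming the Boost.Interprocess headers are available (guarded_run is a hypothetical helper, not part of the original code):

#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>

// Drop-in replacement for the plain boost::mutex used in the sketch above.
static boost::interprocess::interprocess_mutex py_mutex;

void guarded_run(const char* code)
{
    boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex> lock(py_mutex);
    auto state = PyGILState_Ensure();
    PyRun_SimpleString(code);
    PyGILState_Release(state);
}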

[Update: thanks, Abyx.]


Source: https://habr.com/ru/post/1774263/

