Parallelism in Julia: features and limitations

In their arXiv paper, the original authors of Julia mention the following:

2.14 Parallelism. Parallel execution is provided by a message-based multiprocessing system implemented in Julia in the standard library. The language design supports the implementation of such libraries by providing symmetric coroutines, which can also be thought of as cooperatively scheduled threads. This feature allows asynchronous communication to be hidden inside libraries, rather than requiring the user to set up callbacks. Julia does not currently support native threads, which is a limitation, but has the advantage of avoiding the complexities of synchronized use of shared memory.
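
As far as I can tell, this is roughly what those two mechanisms look like in Julia; the sketch below uses the API of later 1.x releases (where the multiprocessing functions live in the Distributed standard library), so the names may differ from what the paper had at the time, but the ideas seem to be the same: Tasks are the coroutines, and workers are separate processes that communicate by messages.

```julia
# Symmetric coroutines: @async creates a Task that is scheduled cooperatively
# within one process, so asynchrony stays hidden behind ordinary-looking calls.
t = @async begin
    sleep(1)              # yields to other tasks instead of blocking
    "task finished"
end
println(fetch(t))          # wait for the task and get its result

# Message-based multiprocessing: workers are separate OS processes with
# separate memory; results come back over a message channel.
using Distributed
addprocs(2)                         # start two worker processes
r = @spawnat 2 sum(rand(10^6))      # run this expression on worker 2
println(fetch(r))                   # fetch the result from the worker
```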

What do they mean when they say Julia does not support native threads? What is a native thread?

Do other interpreted languages such as Python or R support this kind of parallelism? Is Julia alone in this?

+43
python multithreading parallel-processing r julia-lang
May 7 '13 at 13:50
1 answer

"Native threads" are separate execution contexts controlled by the kernel of the operating system, access to shared memory space and the possibility of simultaneous execution on separate kernels. Contrast this with individual processes that can run simultaneously on multiple cores, but have separate memory spaces. To make sure that the processes interact beautifully, easily, since they can only communicate with each other through the core. Ensuring that threads do not interact in unpredictable, erroneous ways is very difficult, as they can read and write to the same memory without hindrance.

The situation with R is quite simple: R is not multithreaded. Python is a bit more complicated: it does support threads, but because of the global interpreter lock (GIL), no actual concurrent execution of Python code is possible. Other popular open-source dynamic languages are in various mixed states with respect to native threading (Ruby: no/kinda/yes?; Node.js: no), but the general answer is no, they do not support fully concurrent native threading, so Julia is not alone in this.

When we add shared-memory parallelism to Julia, as we plan to, whether with native threads or with multiple processes sharing memory, it will be true concurrency: there will be no GIL preventing simultaneous execution of Julia code. However, this is an incredibly tricky feature to add to a language, as evidenced by the non-existent or limited support for it in other very popular, mature dynamic languages. Adding a shared-memory concurrency model is technically hard, but the real problem is designing a programming model that lets programmers make effective use of hardware concurrency in a productive and safe way. This problem is largely unsolved and is a very active area of research and experimentation; there is no "gold standard" to copy. We could just add POSIX threads support, but that programming model is generally considered dangerous and incredibly difficult to use correctly and effectively. Go has an excellent concurrency story, but it is designed for writing highly concurrent servers, not for operating on large data in parallel, so it is not at all clear that simply copying Go's model is a good idea for Julia.
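
As a point of reference only (this is an illustration, not something the answer proposes): Julia does already expose a Go-like channel primitive for communicating between tasks within one process. A minimal producer/consumer sketch, using the Channel API from later 1.x releases, shows the kind of message-passing model being weighed here.

```julia
# A buffered channel of Ints; put! blocks when the buffer is full,
# take! (or iteration) blocks when it is empty.
ch = Channel{Int}(4)

producer = @async begin
    for i in 1:5
        put!(ch, i^2)      # send a value into the channel
    end
    close(ch)              # signal that no more values are coming
end

for v in ch                # iterating drains the channel until it is closed
    println("got ", v)
end
wait(producer)
```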

+59
May 7, '13 at 16:01