Do benaphores still make sense on modern operating systems?

Back in my days as a BeOS programmer, I read an article by Benoit Schillings describing how to create a "benaphore": a technique that uses an atomic variable to guard a critical section and avoids acquiring/releasing a mutex in the common (uncontended) case.

I thought it was pretty clever, and it seems you could pull off the same trick on any platform that supports atomic increment/decrement.

On the other hand, it looks like something that could just as easily be folded into the standard mutex implementation itself, in which case implementing this logic in my own program would be redundant and bring no benefit.

Does anyone know whether modern locking APIs (e.g. pthread_mutex_lock() / pthread_mutex_unlock()) use this trick internally? And if not, why not?
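
For concreteness, here is roughly the idea as I understand it, sketched in Java (an AtomicInteger plus a java.util.concurrent.Semaphore standing in for the atomic counter and kernel semaphore; the class name is mine, not from the article):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a benaphore: the atomic counter tracks how many threads want the
// lock; the kernel-backed semaphore is only touched under contention.
public class Benaphore {
    private final AtomicInteger count = new AtomicInteger(0);
    private final Semaphore sem = new Semaphore(0); // starts with no permits

    public void lock() {
        // Uncontended case: count goes 0 -> 1 and the semaphore is never used.
        if (count.getAndIncrement() > 0) {
            sem.acquireUninterruptibly(); // someone else holds the lock: block
        }
    }

    public void unlock() {
        // Uncontended case: count goes 1 -> 0 and the semaphore is never signaled.
        if (count.decrementAndGet() > 0) {
            sem.release(); // wake one waiting thread
        }
    }
}
```

In the uncontended case, lock() and unlock() each cost a single atomic operation and never enter the kernel, which is the whole appeal.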

2 answers

What your article describes is in common use today. Most often it is called a "critical section", and it consists of a lock variable, a set of flags, and an internal synchronization object (a mutex, if I remember correctly). Generally, in low-contention scenarios, the critical section executes entirely in user mode without touching the kernel synchronization object, which keeps it fast. When contention is high, the kernel object is used for waiting, which gives up the thread's time slice and makes for faster overall turnaround.
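
As a rough illustration of that shape (this is not how any real CRITICAL_SECTION or pthread mutex is actually implemented; the spin count and names below are arbitrary), sketched in Java:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Two-phase lock sketch: a user-mode fast path (CAS plus a brief spin) and a
// blocking slow path that is only reached under contention.
public class TwoPhaseLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // Fast path: a handful of attempts entirely in user mode.
        for (int i = 0; i < 64; i++) {
            if (locked.compareAndSet(false, true)) return;
        }
        // Slow path: register as a waiter and park (the kernel-assisted wait).
        Thread me = Thread.currentThread();
        waiters.add(me);
        while (!locked.compareAndSet(false, true)) {
            LockSupport.park(this); // re-check the CAS on every wakeup
        }
        waiters.remove(me);
    }

    public void unlock() {
        locked.set(false);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next); // hand off to a waiter, if there is one
        }
    }
}
```

This version is unfair (a late arrival can barge in on the fast path), but it shows the key property: the kernel is only involved when threads actually have to wait.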

As a rule, there is very little point in implementing your own synchronization primitives in this day and age. Operating systems ship with a large number of such objects, and they are optimized and tested across a far wider range of scenarios than a single programmer could imagine. It literally takes ages to invent, implement and test a good synchronization mechanism. That's not to say you shouldn't try, though :)


Java's AbstractQueuedSynchronizer (and its sibling AbstractQueuedLongSynchronizer) works along similar lines, or at least could be implemented along similar lines. These types form the basis for several of the concurrency primitives in the Java library, such as ReentrantLock and FutureTask.

It works by using an atomic integer to represent state. A lock can define the value 0 as unlocked and 1 as locked. Any thread wishing to acquire the lock attempts to change the lock state from 0 to 1 via an atomic compare-and-set operation; if the attempt fails, the current state was not 0, which means the lock is held by another thread.

AbstractQueuedSynchronizer also supports waiting on locks and notification via conditions by maintaining CLH queues: lock-free linked lists whose nodes represent the line of threads waiting either to acquire the lock or to be notified through a condition. Such a notification moves one or all of the threads waiting on that condition over to the head of the queue of those waiting on the associated lock.

Most of this machinery can be implemented with an atomic integer representing the state plus a couple of atomic pointers for each wait queue. The actual scheduling of which threads get to test and change the state variable (via, say, AbstractQueuedSynchronizer#tryAcquire(int)) is outside the scope of such a library and is left to the host system's scheduler.
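
As a minimal sketch of the kind of synchronizer described above (along the lines of the mutex example in the AbstractQueuedSynchronizer documentation; SimpleMutex and its method names are my own):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal, non-reentrant mutex built on AbstractQueuedSynchronizer.
// State 0 = unlocked, 1 = locked.
public class SimpleMutex {

    private static final class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int ignored) {
            // Atomically flip state 0 -> 1; fail if another thread holds the lock.
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int ignored) {
            if (getState() == 0) {
                throw new IllegalMonitorStateException("not locked");
            }
            setExclusiveOwnerThread(null);
            setState(0);  // volatile write publishes the unlock
            return true;  // tells AQS to wake the next queued thread, if any
        }
    }

    private final Sync sync = new Sync();

    public void lock()       { sync.acquire(1); }    // parks in the CLH queue on contention
    public boolean tryLock() { return sync.tryAcquire(1); }
    public void unlock()     { sync.release(1); }
}
```

The subclass only supplies the uncontended state transition; queuing, parking and wakeup of waiting threads are handled by AbstractQueuedSynchronizer itself.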



