Mutex locks - where a collection can be built by merging

From here: stack overflow

If you want to lock several mutex-protected objects from a set of such objects, where sets can be created by merging, you can

use exactly one mutex per object, allowing more threads to work in parallel,

or give each object a link to some, possibly shared, recursive mutex, to reduce the likelihood of failing to lock all the mutexes together,

or give each object a comparable link to some, possibly shared, non-recursive mutex, sidestepping any attempt to lock the same mutex several times.

I just don't understand the quote above. What is it talking about? Please explain it in layman's terms.

multithreading pthreads mutex
May 11 '12 at 8:50
1 answer

Here is my interpretation of the quoted text. I hope it is understandable and actually matches the intent of the person who wrote the original answer.

Suppose you have a data structure that needs to be protected by a mutex. You have several options for how finely to granulate the critical sections that operate on these objects. These options also affect what a thread must do when it needs to hold locks on several objects at the same time:

  • use one mutex per object:

    struct foo {
        mutex mux;
        // important data fields...
    };

    This has the advantage that threads working on different objects will not contend with each other. If a single thread needs to operate on several objects while holding locks on all of them (I think this is what is meant by "merging sets"), there is no need for recursive mutexes. However, care must be taken to avoid deadlock.
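    A common way to get the deadlock avoidance mentioned above is to always acquire the per-object locks in a consistent global order, for example by address. A minimal pthreads sketch of that idea (the `struct foo`, `lock_pair`, and `transfer` names are mine, not from the original answer):

```c
#include <pthread.h>
#include <stdint.h>

/* One non-recursive mutex embedded in each object. */
struct foo {
    pthread_mutex_t mux;
    int data;
};

/* Lock two objects in a globally consistent order (by address), so two
 * threads locking the same pair in opposite order cannot deadlock. */
static void lock_pair(struct foo *a, struct foo *b)
{
    if ((uintptr_t)a < (uintptr_t)b) {
        pthread_mutex_lock(&a->mux);
        pthread_mutex_lock(&b->mux);
    } else {
        pthread_mutex_lock(&b->mux);
        pthread_mutex_lock(&a->mux);
    }
}

static void unlock_pair(struct foo *a, struct foo *b)
{
    pthread_mutex_unlock(&a->mux);
    pthread_mutex_unlock(&b->mux);
}

/* Example critical section spanning two objects. */
static void transfer(struct foo *from, struct foo *to, int amount)
{
    lock_pair(from, to);
    from->data -= amount;
    to->data   += amount;
    unlock_pair(from, to);
}
```

    Any fixed ordering works; addresses are just a convenient one when the objects have no natural key.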

  • each object holds a pointer to a recursive mutex, which may be shared with other objects:

     struct foo {
         recursive_mutex* pmux;
         // important data fields...
     };

    Since two objects can be associated with the same mutex, if thread 1 tries to lock object A while thread 2 simultaneously tries to lock object B, and A and B share a mutex, one of the threads will block until the other releases the mutex. Because the mutex is recursive, a single thread can lock multiple objects even when they share a mutex. Note that the same caveat about deadlock applies.

    The advantage of this scheme over the first is that if a thread needs to lock several objects at once, there is some probability that objects in the set share a mutex. Once the thread has locked one of the objects, the chance of blocking when it tries to lock the next one is, in theory, reduced. In practice, however, I think it would be quite difficult to demonstrate this benefit unless you can characterize the locking behavior of your threads and the sets of objects they will lock (and arrange the mutex sharing to reflect that model).

  • The last bullet of the quote deals with using non-recursive locks in the above scenario. In that case, you need to prevent a thread from trying to "re-lock" a mutex it already holds (which a non-recursive mutex cannot support), so the thread must somehow compare each object's mutex against the locks it has already acquired, to decide whether it still needs to acquire a lock for that object. When more than a few objects are involved, ensuring that the thread acquires exactly the right set of locks can get complicated.

     struct foo {
         mutex* pmux; // pointer to (possibly shared) non-recursive mutex
         // important data fields...
     };

     // a set of objects a thread needs to work on in a critical section;
     // the objects possibly share non-recursive mutexes
     struct foo* pA;
     struct foo* pB;
     struct foo* pC;

     // acquire the necessary locks on all three objects:
     mutex_lock( pA->pmux);
     if (pB->pmux != pA->pmux)
         mutex_lock( pB->pmux);
     if ((pC->pmux != pA->pmux) && (pC->pmux != pB->pmux))
         mutex_lock( pC->pmux);

     // releasing the set of mutexes is similar

    Instead of hand-coding these comparisons inline, it would probably be better to pass the objects to a function that handles the complexity of ignoring duplicate mutexes. And, as in the previous schemes, deadlock must still be avoided.
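    Such a helper function might look like the following pthreads sketch; `lock_set`/`unlock_set` and the object layout are my invention, generalizing the inline pairwise comparisons above to any number of objects:

```c
#include <pthread.h>
#include <stddef.h>

/* Each object points at a (possibly shared) NON-recursive mutex. */
struct foo {
    pthread_mutex_t *pmux;
    int data;
};

/* Returns nonzero if objs[i]'s mutex already appeared earlier in the set. */
static int seen_before(struct foo **objs, size_t i)
{
    for (size_t j = 0; j < i; j++)
        if (objs[j]->pmux == objs[i]->pmux)
            return 1;
    return 0;
}

/* Lock the mutexes of a set of objects, locking each distinct mutex
 * exactly once even when several objects share one. */
static void lock_set(struct foo **objs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!seen_before(objs, i))
            pthread_mutex_lock(objs[i]->pmux);
}

static void unlock_set(struct foo **objs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!seen_before(objs, i))
            pthread_mutex_unlock(objs[i]->pmux);
}
```

    To also avoid deadlock between threads locking overlapping sets, the caller would additionally sort the distinct mutexes into a global order (e.g. by address) before locking; the sketch only shows the duplicate-elimination part.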

May 12 '12 at 9:08 a.m.