How are read / write locks implemented in pthread?

How are read/write locks implemented, particularly in the case of pthreads? What pthread primitives do they use under the hood? Some pseudo-code would be appreciated.

+6
2 answers

I haven't done pthreads programming for a while, but when I did, I never used POSIX read/write locks. The problem is that most of the time a mutex is sufficient: i.e. your critical section is small, and the region is not so performance-critical that the double barrier is worth worrying about.

In cases where performance is an issue, using atomic operations (generally available as compiler extensions) is usually a better option (i.e. the problem is the extra barrier, not the size of the critical section).

By the time you have eliminated all those cases, you are left with the cases that have specific performance/fairness/rw-bias requirements which call for a true rw-lock; and that is when you discover that all the relevant performance/fairness parameters of a POSIX rw-lock are undefined and implementation-specific. At that point it is usually better to implement your own, so that you can meet the exact performance/fairness/rw-bias requirements you have.

The basic algorithm is to keep a count of how many threads are in the critical section and, if a thread is not yet allowed in, to shunt it off to a suitable queue to wait. Most of your effort will go into ensuring appropriate fairness/bias when serving the two queues.

The following pthread-like pseudo-code illustrates what I am trying to say.

 struct rwlock {
     mutex admin;         // used to serialize access to the other admin fields, NOT the critical section.
     int count;           // threads in the critical section: +ve for readers, -ve for writers.
     fifoDequeue dequeue; // acts like a cond_var with FIFO behaviour and both append and prepend operations.
     void *data;          // represents the data covered by the critical section.
 };

 void read(struct rwlock *rw, void (*readAction)(void *)) {
     lock(rw->admin);
     if (rw->count < 0) {
         append(rw->dequeue, rw->admin);
     }
     while (rw->count < 0) {
         prepend(rw->dequeue, rw->admin); // Used to avoid starvation.
     }
     rw->count++;
     // Wake the new head of the dequeue, which may be a reader.
     // If it is a writer it will put itself back at the head of the queue and wait for us to exit.
     signal(rw->dequeue);
     unlock(rw->admin);

     readAction(rw->data);

     lock(rw->admin);
     rw->count--;
     signal(rw->dequeue); // Wake the new head of the dequeue, which is probably a writer.
     unlock(rw->admin);
 }

 void write(struct rwlock *rw, void *(*writeAction)(void *)) {
     lock(rw->admin);
     if (rw->count != 0) {
         append(rw->dequeue, rw->admin);
     }
     while (rw->count != 0) {
         prepend(rw->dequeue, rw->admin);
     }
     rw->count--;
     // As we only allow one writer in at a time, we don't bother signaling here.
     unlock(rw->admin);

     // NOTE: This is the critical section, but it is not covered by the mutex!
     // The critical section is, rather, covered by the rw-lock itself.
     rw->data = writeAction(rw->data);

     lock(rw->admin);
     rw->count++;
     signal(rw->dequeue);
     unlock(rw->admin);
 }

Something like the above is the starting point for any rwlock implementation. Think about the nature of your problem and replace the dequeue with the appropriate logic for deciding which class of thread should be woken up next. It is common to allow a limited number of queue-jumping readers, or queue-jumping writers, or vice versa, depending on the application.
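As a concrete illustration, the counting scheme above can be written against real pthread primitives by replacing the FIFO dequeue with a single condition variable. This is a deliberately simplified sketch of my own: it throws away exactly the fairness control the answer is talking about (a steady stream of readers can starve writers), but it shows which pthread calls do the work:

```c
#include <pthread.h>

/* Simplified rw-lock built from one mutex and one condition variable.
 * count > 0: that many active readers; count == -1: one active writer.
 * No fairness guarantees: readers can starve writers. */
struct rwlock {
    pthread_mutex_t admin;
    pthread_cond_t  wake;
    int count;
};

void rwlock_init(struct rwlock *rw) {
    pthread_mutex_init(&rw->admin, NULL);
    pthread_cond_init(&rw->wake, NULL);
    rw->count = 0;
}

void read_lock(struct rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    while (rw->count < 0)                  /* wait out any active writer */
        pthread_cond_wait(&rw->wake, &rw->admin);
    rw->count++;
    pthread_mutex_unlock(&rw->admin);
}

void read_unlock(struct rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    if (--rw->count == 0)
        pthread_cond_broadcast(&rw->wake); /* a writer may be waiting */
    pthread_mutex_unlock(&rw->admin);
}

void write_lock(struct rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    while (rw->count != 0)                 /* wait for all readers/writer to leave */
        pthread_cond_wait(&rw->wake, &rw->admin);
    rw->count = -1;
    pthread_mutex_unlock(&rw->admin);
}

void write_unlock(struct rwlock *rw) {
    pthread_mutex_lock(&rw->admin);
    rw->count = 0;
    pthread_cond_broadcast(&rw->wake);     /* wake all waiters; they re-check count */
    pthread_mutex_unlock(&rw->admin);
}
```

The broadcast-and-recheck pattern here is the crude stand-in for the pseudocode's dequeue: every waiter wakes and re-tests the predicate, instead of being woken selectively in FIFO order.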

Of course, my usual preference is to avoid rw-locks altogether, typically by using some combination of atomic operations, mutexes, STM, message passing, and persistent data structures. However, there are times when what you really need is a rw-lock, and when you do, it is useful to know how they work, so I hope this helps.

EDIT - In response to a (very reasonable) question about where I wait in the pseudocode above:

I have assumed that the dequeue implementation contains the wait, so that somewhere inside append(dequeue, mutex) or prepend(dequeue, mutex) there is a block of code along the lines of:

 while(!readyToLeaveQueue()) { wait(dequeue->cond_var, mutex); } 

which is why the relevant mutex is passed into the queue operations.

+8

Each implementation can be different, but normally they have to favour readers by default, because of the POSIX requirement that a thread must be able to obtain the read lock on a rwlock multiple times. If they favoured writers, then whenever a writer was waiting, a reader would deadlock on its second read-lock attempt, unless the implementation could determine that the reader already holds a read lock; but the only way to determine that is to store a list of all threads holding read locks, which is very inefficient in time and space.

0

Source: https://habr.com/ru/post/918089/

