I haven't done any pthreads programming for a while, but when I did, I never used POSIX read/write locks. The problem is that most of the time a mutex is enough: i.e. your critical section is small, and the region isn't so performance-critical that the double barrier is worth worrying about.
In cases where performance is an issue, atomic operations (usually available as a compiler extension) are normally the better option (i.e. the problem is the extra barrier, not the size of the critical section).
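For example, a shared counter bumped from many threads is the classic case: a single atomic read-modify-write replaces a lock/unlock pair around a one-line critical section. A minimal C11 sketch (the counter and function name are made up for illustration):

    #include <stdatomic.h>

    /* Shared hit counter updated from many threads. A mutex would cost a
     * lock and an unlock around a one-line critical section; one atomic
     * read-modify-write does the same job. */
    static atomic_long hits;

    void record_hit(void) {
        atomic_fetch_add(&hits, 1);   /* or the GCC extension __atomic_fetch_add */
    }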
By the time you eliminate all those cases, you are left with cases where you have specific performance / fairness / rw-bias requirements that call for a true rw-lock; and it is then that you discover that all the relevant performance / fairness parameters of a POSIX rw-lock are undefined and implementation specific. At that point it's usually better to implement your own, so that you can meet the exact performance / fairness / rw-bias requirements you have.
The basic algorithm is to keep a count of how many of each are in the critical section, and if a thread isn't allowed access yet, to shunt it off to an appropriate queue to wait. Most of your effort will go into ensuring the appropriate fairness / bias between serving the two queues.
The following C-like pseudo-code illustrates what I'm trying to say.
    struct rwlock {
        mutex admin;   // serializes access to the lock's other fields
        ...            // counts of active readers/writers, plus the wait queues
    };
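Filled out, something along these lines is roughly what I have in mind. It's only a sketch, not tested code: the field names, the two dequeues (read_waiters / write_waiters), and the wake_next() hook are placeholders, and append() is assumed to block the caller until it is allowed to enter, as described in the EDIT at the end.

    struct rwlock {
        mutex admin;             // serializes access to the fields below
        int readers;             // readers currently inside the critical section
        int writer;              // 1 if a writer is inside, else 0
        dequeue read_waiters;    // readers waiting their turn
        dequeue write_waiters;   // writers waiting their turn
    };

    void read_lock(rwlock *rw) {
        lock(rw->admin);
        // Wait if a writer is inside or writers are queued; append() releases
        // admin while the thread sleeps and holds it again when it returns.
        if (rw->writer || !empty(rw->write_waiters))
            append(rw->read_waiters, rw->admin);
        rw->readers++;
        unlock(rw->admin);
    }

    void read_unlock(rwlock *rw) {
        lock(rw->admin);
        if (--rw->readers == 0)
            wake_next(rw);           // the fairness / bias policy lives here
        unlock(rw->admin);
    }

    void write_lock(rwlock *rw) {
        lock(rw->admin);
        if (rw->readers || rw->writer)
            append(rw->write_waiters, rw->admin);
        rw->writer = 1;
        unlock(rw->admin);
    }

    void write_unlock(rwlock *rw) {
        lock(rw->admin);
        rw->writer = 0;
        wake_next(rw);               // decide which queue to serve next
        unlock(rw->admin);
    }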
Something like the code above is the starting point for any rwlock implementation. Think about the nature of your problem and replace the dequeue handling with whatever logic decides which class of thread should be woken up next. Typically a limited number of readers are allowed to jump ahead of writers, or vice versa, depending on the application.
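For instance, a write-preferring policy (just one possibility; wake_one() and wake_all() are assumed helpers that signal the waiters on the queue's condition variable) could be as simple as:

    // One possible wake_next(): prefer a queued writer, otherwise admit all
    // waiting readers. Called with rw->admin held.
    void wake_next(rwlock *rw) {
        if (!empty(rw->write_waiters))
            wake_one(rw->write_waiters);    // writers enter one at a time
        else
            wake_all(rw->read_waiters);     // readers can enter together
    }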
Of course, my main preference is to avoid rw-locks altogether, typically by using some combination of atomic operations, mutexes, STM, message passing, and persistent data structures. However, there are times when you really need an rw-lock, and when you do, it's useful to know how they work, so I hope this helps.
EDIT - In response to a (very reasonable) question about where I wait in the pseudo-code above:
I assumed that the dequeue implementation contains the wait, so that somewhere inside append(dequeue, mutex) or prepend(dequeue, mutex) there is a block of code along the lines of:

    while (!readyToLeaveQueue()) {
        wait(dequeue->cond_var, mutex);
    }
which is why I passed the relevant mutex in to the queue operations.
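Spelled out, append() might look something like this (again only a sketch; enqueue_self(), dequeue_self(), and readyToLeaveQueue() are placeholders for however you record and test the waiting threads):

    // Put the calling thread at the tail of the wait queue and block until it
    // is allowed to proceed. The admin mutex is held on entry; wait() releases
    // it while the thread sleeps and holds it again on return, so the caller
    // still owns it when append() returns.
    void append(dequeue, mutex) {
        enqueue_self(dequeue);               // record this thread at the tail
        while (!readyToLeaveQueue()) {
            wait(dequeue->cond_var, mutex);  // loop also guards against spurious wakeups
        }
        dequeue_self(dequeue);               // remove ourselves once admitted
    }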