Here's the pseudocode for a simple read/write lock built from a mutex and a condition variable. The mutex API should be self-explanatory. It is assumed that the condition variable has a wait(Mutex&) member that (atomically!) releases the mutex and waits for the condition to be signaled. The condition is signaled either with signal(), which wakes up one waiter, or with signal_all(), which wakes up all waiters.
read_lock() {
    mutex.lock();
    while (writer)
        unlocked.wait(mutex);
    readers++;
    mutex.unlock();
}

read_unlock() {
    mutex.lock();
    readers--;
    if (readers == 0)
        unlocked.signal_all();
    mutex.unlock();
}

write_lock() {
    mutex.lock();
    while (writer || (readers > 0))
        unlocked.wait(mutex);
    writer = true;
    mutex.unlock();
}

write_unlock() {
    mutex.lock();
    writer = false;
    unlocked.signal_all();
    mutex.unlock();
}
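The pseudocode above translates almost directly to C++11, using std::mutex and std::condition_variable; the member names mirror the pseudocode (signal_all() becomes notify_all()). This is a sketch for illustration, not production code, and the class name is mine:

```cpp
#include <mutex>
#include <condition_variable>

// A direct C++11 translation of the pseudocode above.
class BasicRWLock {
public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        while (writer_)                     // wait until no writer holds the lock
            unlocked_.wait(lk);
        ++readers_;
    }

    void read_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (--readers_ == 0)                // last reader out lets a writer in
            unlocked_.notify_all();
    }

    void write_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        while (writer_ || readers_ > 0)     // wait until the lock is completely free
            unlocked_.wait(lk);
        writer_ = true;
    }

    void write_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        writer_ = false;
        unlocked_.notify_all();             // wake every waiter (see the drawbacks below)
    }

private:
    std::mutex mutex_;
    std::condition_variable unlocked_;
    int readers_ = 0;
    bool writer_ = false;
};
```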
This implementation has many disadvantages.
It wakes up all waiters whenever the lock becomes available
If most of the waiters are waiting for a write lock, this is wasteful: most of them will not be able to acquire the lock and will simply go back to waiting. Just using signal() does not work either, because when a write lock is released you want to wake everyone waiting for a read lock. To fix this, you need separate condition variables for readers and writers.
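A minimal sketch of the two-condition-variable variant, assuming the same std::mutex/std::condition_variable API as above (class and member names are mine): readers and writers wait on different condition variables, so the releaser can wake only the waiters that can actually make progress.

```cpp
#include <mutex>
#include <condition_variable>

// Read/write lock with separate condition variables for readers and writers.
class SplitCVRWLock {
public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        while (writer_)
            readers_cv_.wait(lk);
        ++readers_;
    }

    void read_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (--readers_ == 0)
            writers_cv_.notify_one();   // only a writer can make progress now
    }

    void write_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        while (writer_ || readers_ > 0)
            writers_cv_.wait(lk);
        writer_ = true;
    }

    void write_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        writer_ = false;
        readers_cv_.notify_all();       // every waiting reader may proceed...
        writers_cv_.notify_one();       // ...or one waiting writer
    }

private:
    std::mutex mutex_;
    std::condition_variable readers_cv_;  // readers wait here
    std::condition_variable writers_cv_;  // writers wait here
    int readers_ = 0;
    bool writer_ = false;
};
```

Note that write_unlock() still wakes both a writer and all readers, because without extra bookkeeping it cannot know which side should win; the fairness variants below address that.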
There is no fairness: readers starve writers
You can fix this by tracking the number of pending read and write locks, and then either stop granting read locks while there are pending write locks (although then writers will starve readers!), or randomly wake either all readers or one writer (if you use separate condition variables, see the previous point).
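The first option, writer preference, can be sketched by adding a pending-writer counter to the two-condition-variable variant above (again, names are mine and this is illustrative only):

```cpp
#include <mutex>
#include <condition_variable>

// Writer-preferring read/write lock: new readers yield to waiting writers.
class WriterPreferringRWLock {
public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        // Readers also wait while writers are queued -- this is what
        // prevents readers from starving writers (and can starve readers!).
        while (writer_ || pending_writers_ > 0)
            readers_cv_.wait(lk);
        ++readers_;
    }

    void read_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (--readers_ == 0)
            writers_cv_.notify_one();
    }

    void write_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        ++pending_writers_;
        while (writer_ || readers_ > 0)
            writers_cv_.wait(lk);
        --pending_writers_;
        writer_ = true;
    }

    void write_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        writer_ = false;
        if (pending_writers_ > 0)
            writers_cv_.notify_one();   // hand over to the next writer
        else
            readers_cv_.notify_all();   // no writers waiting: let readers run
    }

private:
    std::mutex mutex_;
    std::condition_variable readers_cv_;
    std::condition_variable writers_cv_;
    int readers_ = 0;
    int pending_writers_ = 0;  // writers that requested but don't yet hold the lock
    bool writer_ = false;
};
```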
Locks are not granted in the order they are requested
To guarantee this, you need an actual wait queue. You could, for example, create one condition variable per waiter and, when the lock is released, signal either all the readers or the single writer at the head of the queue.
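One possible shape for such a queue, sketched under the same assumptions as the earlier variants (all names are mine): each waiter is a queue node with its own condition variable, and the releaser grants the lock to the head of the queue, either one writer or a consecutive run of readers.

```cpp
#include <mutex>
#include <condition_variable>
#include <deque>
#include <memory>

// FIFO read/write lock: one condition variable per waiter, granted in order.
class FairRWLock {
    struct Waiter {
        bool writer;
        bool granted = false;
        std::condition_variable cv;
        explicit Waiter(bool w) : writer(w) {}
    };

public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (writer_ || !queue_.empty()) {   // someone is ahead of us: queue up
            auto w = std::make_shared<Waiter>(false);
            queue_.push_back(w);
            while (!w->granted)
                w->cv.wait(lk);
        } else {
            ++readers_;
        }
    }

    void read_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (--readers_ == 0)
            grant();
    }

    void write_lock() {
        std::unique_lock<std::mutex> lk(mutex_);
        if (writer_ || readers_ > 0 || !queue_.empty()) {
            auto w = std::make_shared<Waiter>(true);
            queue_.push_back(w);
            while (!w->granted)
                w->cv.wait(lk);
        } else {
            writer_ = true;
        }
    }

    void write_unlock() {
        std::unique_lock<std::mutex> lk(mutex_);
        writer_ = false;
        grant();
    }

private:
    // Called with mutex_ held: hand the lock to the head of the queue --
    // either one writer, or every reader up to the next queued writer.
    void grant() {
        while (!queue_.empty()) {
            std::shared_ptr<Waiter> w = queue_.front();
            if (w->writer) {
                if (writer_ || readers_ > 0)
                    break;          // a writer needs the lock completely free
                writer_ = true;
            } else {
                if (writer_)
                    break;
                ++readers_;
            }
            w->granted = true;      // state is updated on the waiter's behalf
            w->cv.notify_one();
            queue_.pop_front();
            if (w->writer)
                break;              // only one writer at a time
        }
    }

    std::mutex mutex_;
    std::deque<std::shared_ptr<Waiter>> queue_;
    int readers_ = 0;
    bool writer_ = false;
};
```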
Even read-only workloads cause contention on the mutex
This is hard to fix. One approach is to use atomic instructions to acquire read or write locks (usually compare-and-swap), falling back to the mutex only when the acquisition fails because the lock is contended. Doing this correctly is quite difficult, though. And even then there is still contention on the lock word: atomic instructions are far from free, especially on machines with many cores.
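To show just the atomic fast path, here is a deliberately simplified sketch that packs the whole lock state into one atomic word (-1 means a writer holds it, n >= 0 means n readers hold it). It spins instead of falling back to a mutex, precisely because the hybrid fallback is the hard part; the class name and encoding are mine:

```cpp
#include <atomic>
#include <thread>

// Atomic-only read/write lock sketch. state_: -1 = writer, 0 = free,
// n > 0 = n readers. A real lock would block on contention instead of
// spinning, and that mutex fallback is where the difficulty lies.
class AtomicRWLock {
public:
    void read_lock() {
        for (;;) {
            int s = state_.load(std::memory_order_relaxed);
            // Only attempt the CAS while no writer holds the lock.
            if (s >= 0 &&
                state_.compare_exchange_weak(s, s + 1,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed))
                return;
            std::this_thread::yield();  // contended: this is where we'd fall back
        }
    }

    void read_unlock() {
        state_.fetch_sub(1, std::memory_order_release);
    }

    void write_lock() {
        for (;;) {
            int expected = 0;  // a writer needs the lock completely free
            if (state_.compare_exchange_weak(expected, -1,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed))
                return;
            std::this_thread::yield();
        }
    }

    void write_unlock() {
        state_.store(0, std::memory_order_release);
    }

private:
    std::atomic<int> state_{0};
};
```

Note that uncontended readers now touch only one cache line with a single CAS, but under contention all cores still hammer that same line, which is the residual cost the paragraph above refers to.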
Conclusion
Implementing synchronization primitives correctly is tough. Implementing efficient and fair synchronization primitives is even harder. And it almost never pays off: pthreads on Linux, for example, contains a read/write lock that uses a combination of futexes and atomic instructions, and is therefore probably superior to anything you could come up with in a few days of work.