How would you implement your own reader/writer lock in C++11?

I have a set of data structures that I need to protect with a reader/writer lock. I am aware of boost::shared_lock, but I would like to have a custom implementation using std::mutex, std::condition_variable and/or std::atomic so that I can better understand how this works (and tune it later).

Each data structure (movable, but not copyable) derives from a Commons class that encapsulates the lock. I'd like the public interface to look something like this:

 class Commons {
 public:
     void read_lock();
     bool try_read_lock();
     void read_unlock();

     void write_lock();
     bool try_write_lock();
     void write_unlock();
 };

... so that it can be inherited by the likes of:

 class DataStructure : public Commons {}; 
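Since raw lock()/unlock() calls are easy to leak on an early return or exception, one design worth considering on top of this interface is a pair of RAII guards in the style of std::lock_guard. A sketch (ReadGuard and WriteGuard are hypothetical names, not part of the interface above):

```cpp
// RAII guards for the Commons-style interface sketched above.
// Hypothetical helpers: lock on construction, unlock on destruction.
template <typename Lockable>
class ReadGuard {
public:
    explicit ReadGuard(Lockable& l) : l_(l) { l_.read_lock(); }
    ~ReadGuard() { l_.read_unlock(); }
    ReadGuard(const ReadGuard&) = delete;
    ReadGuard& operator=(const ReadGuard&) = delete;
private:
    Lockable& l_;
};

template <typename Lockable>
class WriteGuard {
public:
    explicit WriteGuard(Lockable& l) : l_(l) { l_.write_lock(); }
    ~WriteGuard() { l_.write_unlock(); }
    WriteGuard(const WriteGuard&) = delete;
    WriteGuard& operator=(const WriteGuard&) = delete;
private:
    Lockable& l_;
};
```

A reader would then write `ReadGuard<DataStructure> g(ds);` and the matching read_unlock() runs automatically when `g` goes out of scope, even on exceptions.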

I write scientific code and generally avoid data races; this lock is mostly insurance against mistakes I'll probably make later. Thus my priority is low overhead, so that I don't slow down a correctly-running program too much. Each thread will probably run on its own processor core.

Could you show me (pseudocode is fine) a reader/writer lock? What I have now is supposed to be a variant that prevents writer starvation. My main problem so far has been the gap in read_lock between checking whether a read is safe and actually incrementing the reader count, after which write_lock knows to wait:

 void Commons::write_lock() {
     write_mutex.lock();
     reading_mode.store(false);
     while (readers.load() > 0) {}
 }

 bool Commons::try_read_lock() {
     if (reading_mode.load()) {
         // if another thread calls write_lock here, bad things can happen
         ++readers;
         return true;
     }
     else return false;
 }

I'm a little new to multithreading and I'd love to understand it better. Thanks in advance for your help!

+37
c++ multithreading locking c++11 readwritelock
3 answers

Here's pseudocode for a simple read/write lock using a mutex and a condition variable. The mutex API should be self-explanatory. Condition variables are assumed to have a wait(Mutex&) member that (atomically!) releases the mutex and waits for the condition to be signaled. The condition is signaled either with signal(), which wakes one waiter, or signal_all(), which wakes all waiters.

 read_lock() {
     mutex.lock();
     while (writer)
         unlocked.wait(mutex);
     readers++;
     mutex.unlock();
 }

 read_unlock() {
     mutex.lock();
     readers--;
     if (readers == 0)
         unlocked.signal_all();
     mutex.unlock();
 }

 write_lock() {
     mutex.lock();
     while (writer || (readers > 0))
         unlocked.wait(mutex);
     writer = true;
     mutex.unlock();
 }

 write_unlock() {
     mutex.lock();
     writer = false;
     unlocked.signal_all();
     mutex.unlock();
 }
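Translated into C++11 with std::mutex and std::condition_variable, the pseudocode above might look like the following sketch (same algorithm, same weaknesses; the class and member names are mine):

```cpp
#include <mutex>
#include <condition_variable>

// Direct C++11 translation of the pseudocode: one mutex, one
// condition variable, a reader count, and a writer flag.
class RWLock {
public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(m_);
        unlocked_.wait(lk, [this] { return !writer_; });
        ++readers_;
    }
    void read_unlock() {
        std::unique_lock<std::mutex> lk(m_);
        if (--readers_ == 0)
            unlocked_.notify_all();   // signal_all() in the pseudocode
    }
    void write_lock() {
        std::unique_lock<std::mutex> lk(m_);
        unlocked_.wait(lk, [this] { return !writer_ && readers_ == 0; });
        writer_ = true;
    }
    void write_unlock() {
        std::unique_lock<std::mutex> lk(m_);
        writer_ = false;
        unlocked_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable unlocked_;
    int readers_ = 0;
    bool writer_ = false;
};
```

Note that std::condition_variable::wait(lock, pred) atomically releases the mutex while waiting, which is exactly the wait(Mutex&) behavior the answer assumes.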

This implementation has many disadvantages.

It wakes all waiters whenever the lock becomes available.

If most of the waiters are waiting for a write lock, this is wasteful: most of them won't be able to acquire the lock after all and will go back to waiting. Simply using signal() doesn't work, because you do want to wake everyone waiting for a read lock. So to fix this, you need separate condition variables for readers and writers.

There is no fairness; readers starve writers.

You can fix this by tracking the number of pending read and write locks, and either stop granting read locks once there are pending write locks (though then you'll starve the readers!), or randomly waking either all readers or one writer (assuming you use separate condition variables, see the section above).
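For illustration, here is a sketch combining both fixes: separate condition variables for readers and writers, plus a pending-writer count. This variant gives writers preference, so as the answer warns, it can starve readers instead; all names here are mine:

```cpp
#include <mutex>
#include <condition_variable>

// Writer-preferring variant: readers are held back while any writer
// is active *or* waiting, and read_unlock wakes only one writer.
class WriterPreferringRWLock {
public:
    void read_lock() {
        std::unique_lock<std::mutex> lk(m_);
        readers_cv_.wait(lk, [this] {
            return !writer_ && waiting_writers_ == 0;
        });
        ++readers_;
    }
    void read_unlock() {
        std::unique_lock<std::mutex> lk(m_);
        if (--readers_ == 0)
            writers_cv_.notify_one();   // wake one writer, not everybody
    }
    void write_lock() {
        std::unique_lock<std::mutex> lk(m_);
        ++waiting_writers_;             // new readers now stay out
        writers_cv_.wait(lk, [this] {
            return !writer_ && readers_ == 0;
        });
        --waiting_writers_;
        writer_ = true;
    }
    void write_unlock() {
        std::unique_lock<std::mutex> lk(m_);
        writer_ = false;
        if (waiting_writers_ > 0)
            writers_cv_.notify_one();   // prefer the next waiting writer
        else
            readers_cv_.notify_all();   // otherwise release all readers
    }
private:
    std::mutex m_;
    std::condition_variable readers_cv_, writers_cv_;
    int readers_ = 0;
    int waiting_writers_ = 0;
    bool writer_ = false;
};
```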

Locks are not granted in the order they are requested.

To guarantee this, you'll need a real wait queue. You could, for example, create one condition variable per waiter, and signal all readers or a single writer, whichever is at the head of the queue, after the lock is released.

Even read-only workloads cause mutex contention.

This is hard to fix. One way is to use atomic instructions to acquire read or write locks (usually a compare-and-exchange). If the acquisition fails because the lock is taken, you fall back to the mutex. Doing that correctly is quite hard, though. And there will still be contention: atomic instructions are far from free, especially on machines with many cores.
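As a toy illustration of the compare-and-swap idea (deliberately simplified: it spins rather than falling back to a mutex, and has no fairness whatsoever), a reader/writer lock can be packed into a single atomic word whose top bit marks a writer and whose low bits count readers:

```cpp
#include <atomic>
#include <cstdint>

// Minimal spinning reader/writer lock on one atomic word.
// Top bit = writer held; remaining bits = reader count.
class SpinRWLock {
public:
    void read_lock() {
        std::uint32_t s = state_.load(std::memory_order_relaxed);
        for (;;) {
            if (s & kWriter) {                    // writer active: reload, retry
                s = state_.load(std::memory_order_relaxed);
                continue;
            }
            // Try to bump the reader count; on failure, CAS refreshes s.
            if (state_.compare_exchange_weak(s, s + 1,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed))
                return;
        }
    }
    void read_unlock() {
        state_.fetch_sub(1, std::memory_order_release);
    }
    void write_lock() {
        std::uint32_t expected = 0;               // no readers, no writer
        while (!state_.compare_exchange_weak(expected, kWriter,
                                             std::memory_order_acquire,
                                             std::memory_order_relaxed))
            expected = 0;                         // CAS overwrote it; reset
    }
    void write_unlock() {
        state_.store(0, std::memory_order_release);
    }
private:
    static constexpr std::uint32_t kWriter = 1u << 31;
    std::atomic<std::uint32_t> state_{0};
};
```

Under pure read load this touches one cache line with fetch_add/fetch_sub and never blocks, which is the upside; the busy-wait in write_lock is the price of skipping the mutex fallback.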

Conclusion

Implementing synchronization primitives correctly is tough. Implementing efficient and fair synchronization primitives is even tougher. And it hardly ever pays off. pthreads on Linux, for example, contains a read/write lock that uses a combination of futexes and atomic instructions, and which will therefore probably outperform anything you can come up with in a few days of work.
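To make that conclusion concrete: if you can move past C++11, the standard library already provides this lock. std::shared_timed_mutex (C++14; std::shared_mutex in C++17) together with std::shared_lock and std::unique_lock covers the whole Commons interface; in pure C++11, boost::shared_mutex or pthread_rwlock_t play the same role. A minimal usage sketch (the DataStructure layout is mine):

```cpp
#include <shared_mutex>   // C++14: std::shared_timed_mutex, std::shared_lock
#include <mutex>          // std::unique_lock

struct DataStructure {
    int value = 0;
    mutable std::shared_timed_mutex mtx;  // mutable: readers lock a const object
};

int read_value(const DataStructure& d) {
    std::shared_lock<std::shared_timed_mutex> lk(d.mtx);  // many readers at once
    return d.value;
}

void write_value(DataStructure& d, int v) {
    std::unique_lock<std::shared_timed_mutex> lk(d.mtx);  // one writer, no readers
    d.value = v;
}
```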

+45
Sep 29 '12 at 22:59

Take a look at this class:

 //
 // Multi-reader Single-writer concurrency base class for Win32
 //
 // (c) 1999-2003 by Glenn Slayden (glenn@glennslayden.com)
 //
 #include "windows.h"

 class MultiReaderSingleWriter
 {
 private:
     CRITICAL_SECTION m_csWrite;
     CRITICAL_SECTION m_csReaderCount;
     long m_cReaders;
     HANDLE m_hevReadersCleared;

 public:
     MultiReaderSingleWriter()
     {
         m_cReaders = 0;
         InitializeCriticalSection(&m_csWrite);
         InitializeCriticalSection(&m_csReaderCount);
         m_hevReadersCleared = CreateEvent(NULL, TRUE, TRUE, NULL);
     }

     ~MultiReaderSingleWriter()
     {
         WaitForSingleObject(m_hevReadersCleared, INFINITE);
         CloseHandle(m_hevReadersCleared);
         DeleteCriticalSection(&m_csWrite);
         DeleteCriticalSection(&m_csReaderCount);
     }

     void EnterReader(void)
     {
         EnterCriticalSection(&m_csWrite);
         EnterCriticalSection(&m_csReaderCount);
         if (++m_cReaders == 1)
             ResetEvent(m_hevReadersCleared);
         LeaveCriticalSection(&m_csReaderCount);
         LeaveCriticalSection(&m_csWrite);
     }

     void LeaveReader(void)
     {
         EnterCriticalSection(&m_csReaderCount);
         if (--m_cReaders == 0)
             SetEvent(m_hevReadersCleared);
         LeaveCriticalSection(&m_csReaderCount);
     }

     void EnterWriter(void)
     {
         EnterCriticalSection(&m_csWrite);
         WaitForSingleObject(m_hevReadersCleared, INFINITE);
     }

     void LeaveWriter(void)
     {
         LeaveCriticalSection(&m_csWrite);
     }
 };

I haven't had a chance to try it, but the code looks good to me.

+6
May 31 '14 at 4:52

You can implement a readers-writer lock following the Wikipedia algorithm exactly; here is one I wrote:

 #include <iostream>
 #include <thread>
 #include <mutex>
 #include <condition_variable>

 int g_sharedData = 0;
 int g_readersWaiting = 0;
 std::mutex mu;
 bool g_writerWaiting = false;
 std::condition_variable cond;

 void reader(int i)
 {
     std::unique_lock<std::mutex> lg{mu};
     while (g_writerWaiting)
         cond.wait(lg);
     ++g_readersWaiting;

     // reading
     std::cout << "\n reader #" << i << " is reading data = " << g_sharedData << '\n';
     // end reading

     --g_readersWaiting;
     while (g_readersWaiting > 0)
         cond.wait(lg);
     cond.notify_one();
 }

 void writer(int i)
 {
     std::unique_lock<std::mutex> lg{mu};
     while (g_writerWaiting)
         cond.wait(lg);

     // writing
     std::cout << "\n writer #" << i << " is writing\n";
     g_sharedData += i * 10;
     // end writing

     g_writerWaiting = true;
     while (g_readersWaiting > 0)
         cond.wait(lg);
     g_writerWaiting = false;
     cond.notify_all();
 } // lg.unlock()

 int main()
 {
     std::thread reader1{reader, 1};
     std::thread reader2{reader, 2};
     std::thread reader3{reader, 3};
     std::thread reader4{reader, 4};
     std::thread writer1{writer, 1};
     std::thread writer2{writer, 2};
     std::thread writer3{writer, 3};
     std::thread writer4{writer, 4};  // was writer4{reader, 4}, apparently a typo

     reader1.join(); reader2.join(); reader3.join(); reader4.join();
     writer1.join(); writer2.join(); writer3.join(); writer4.join();
     return 0;
 }
0
Sep 19 '19 at 20:34
