boost::unique_lock and boost::shared_lock for reader/writer locking

We have implemented a reader/writer lock with

    typedef boost::unique_lock<boost::shared_mutex> WriterLock;
    typedef boost::shared_lock<boost::shared_mutex> ReadersLock;

where we have many concurrent readers and only a few writers. Readers share access with other readers but block writers; a writer blocks until it has exclusive access to the resource.
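A minimal sketch of how we use these typedefs (sharedMutex, doRead, and doWrite are illustrative names, not our real code):

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>

    typedef boost::unique_lock<boost::shared_mutex> WriterLock;
    typedef boost::shared_lock<boost::shared_mutex> ReadersLock;

    boost::shared_mutex sharedMutex;  // guards the shared resource

    void doRead() {
        ReadersLock lock(sharedMutex);  // shared: runs concurrently with other readers
        // ... read the resource ...
    }

    void doWrite() {
        WriterLock lock(sharedMutex);   // exclusive: waits until no readers or writers hold the lock
        // ... modify the resource ...
    }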

We could not find this in the Boost documentation: what is the policy for preventing writer starvation?
For example, if there are many readers, all taking the lock from a thread pool, is there any guaranteed upper bound on the number of lock attempts before the writer finally gets the lock?

We have seen performance traces that seem to indicate that the writer must wait until there are no readers at all, and in rare cases that can take a long time, because new readers can request the lock while the current readers are still being served. So it appears that in our code a writer can wait a very long time for a moment when there are no readers.

We would prefer some form of fair queueing where, when a writer requests the lock, all current readers are allowed to drain out, but all new incoming readers are blocked behind the write request.

What is the behavior of the upgradable lock concept in Boost.Threads? Is there some way in which it handles writer starvation?
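For reference, a sketch of how the upgradable lock is used, as we understand it from the docs (needToModify and m are illustrative names): only one thread at a time may hold upgrade ownership, sharing the mutex with plain readers, and boost::upgrade_to_unique_lock atomically converts it to exclusive ownership once all readers have released.

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>

    boost::shared_mutex m;

    void readThenMaybeWrite() {
        // Upgrade ownership: coexists with plain readers, but only one
        // thread may hold it at a time.
        boost::upgrade_lock<boost::shared_mutex> readLock(m);
        // ... read ...
        if (needToModify()) {  // hypothetical predicate
            // Blocks until all plain readers release, then converts to
            // exclusive ownership without ever dropping the lock.
            boost::upgrade_to_unique_lock<boost::shared_mutex> writeLock(readLock);
            // ... write ...
        }
    }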

+4
3 answers

A slight improvement on @Guerrero's solution, which increases fairness between multiple readers and multiple writers, so that neither side can starve:

    read() {
        while (atomic-write-requests > 0)
            condition.wait();
        ReadersLock lock(acquireReaderLock());
        doRead();
    }

    write() {
        while (atomic-write-requests > 0)
            condition.wait();
        atomic-write-requests++;
        WritersLock lock(acquireWriterLock());
        doWrite();
        atomic-write-requests--;
        condition.notify();
    }

With this solution, a new, fair race begins every time a writer leaves scope.
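A compilable sketch of that pseudocode, assuming the typedefs from the question; gateMutex, gateCond, writeRequests, doRead, and doWrite are illustrative names:

    #include <boost/thread/shared_mutex.hpp>
    #include <boost/thread/locks.hpp>
    #include <boost/thread/mutex.hpp>
    #include <boost/thread/condition_variable.hpp>

    typedef boost::unique_lock<boost::shared_mutex> WritersLock;
    typedef boost::shared_lock<boost::shared_mutex> ReadersLock;

    boost::shared_mutex rwMutex;         // the reader/writer lock itself
    boost::mutex gateMutex;              // protects writeRequests
    boost::condition_variable gateCond;  // signalled when a writer finishes
    int writeRequests = 0;               // pending or active writers

    void read() {
        {
            boost::unique_lock<boost::mutex> gate(gateMutex);
            while (writeRequests > 0)    // new readers yield to any pending writer
                gateCond.wait(gate);
        }
        ReadersLock lock(rwMutex);       // shared with other readers
        doRead();                        // the actual read operation (assumed)
    }

    void write() {
        {
            boost::unique_lock<boost::mutex> gate(gateMutex);
            while (writeRequests > 0)    // new writers also queue behind the active one
                gateCond.wait(gate);
            ++writeRequests;             // announce intent: new readers now block at the gate
        }
        {
            WritersLock lock(rwMutex);   // waits only for in-flight readers to drain
            doWrite();                   // the actual write operation (assumed)
        }
        {
            boost::unique_lock<boost::mutex> gate(gateMutex);
            --writeRequests;
        }
        gateCond.notify_all();           // fresh, fair race among everyone at the gate
    }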

+1

I don't know the Boost implementation, but perhaps you can prevent writer starvation in your own implementation: readers could wait while writers exist. Maybe something like this pseudocode:

    read() {
        while (atomic-write-requests > 0) {
            condition.wait();
        }
        ReadersLock lock(acquireReaderLock());
        doRead();
    }

    write() {
        atomic-write-requests++;
        WritersLock lock(acquireWriterLock());
        doWrite();
        atomic-write-requests--;
        condition.notify();
    }
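Under the same assumed names as the sketch in the answer above, the difference is only in write(): here writers do not wait at the gate, so they take priority over all new readers:

    void write() {
        {
            boost::unique_lock<boost::mutex> gate(gateMutex);
            ++writeRequests;             // no wait: writers jump ahead of new readers
        }
        {
            WritersLock lock(rwMutex);   // blocks only until current readers drain
            doWrite();
        }
        {
            boost::unique_lock<boost::mutex> gate(gateMutex);
            --writeRequests;
        }
        gateCond.notify_all();           // release readers queued at the gate
    }

With many back-to-back writers this variant can starve readers, which is what the extra writer gate in the answer above addresses.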
0

If you want a FIFO approach, Boost implements several scheduling strategies (including FIFO) in its Statechart library. I assume you would have to adapt a lot of your code to use it. Check out fifo_scheduler and FifoWorker:

http://www.boost.org/doc/libs/1_51_0/libs/statechart/doc/reference.html#FifoWorker

http://www.boost.org/doc/libs/1_51_0/libs/statechart/doc/reference.html#fifo_scheduler.hpp

0

Source: https://habr.com/ru/post/1442166/
