Is this a correct C++11 version of double-checked locking with shared_ptr?

This article by Jeff Preshing claims that the double-checked locking pattern (DCLP) is fixed in C++11. The classic example used for this pattern is a singleton, but I have a different use case and still lack experience in handling "atomic weapons" - maybe someone here can help me.

Is the following piece of code a correct version of the DCLP described by Jeff under "Using C++11 Sequentially Consistent Atomics"?

    class Foo {
        std::shared_ptr<B> data;
        std::mutex mutex;

        void detach()
        {
            if (data.use_count() > 1) {
                std::lock_guard<std::mutex> lock{mutex};
                if (data.use_count() > 1) {
                    data = std::make_shared<B>(*data);
                }
            }
        }

    public:
        // public interface
    };
+6
3 answers

No, this is not a valid DCLP implementation.

The point is that your outer check data.use_count() > 1 accesses an object (of type B together with its reference counter) that can be deleted (dropped to zero references) inside the mutex-protected part executed by another thread. No memory fences can help there.

Why data.use_count() accesses the object:

Assume that the following operations have completed:

    shared_ptr<B> data1 = make_shared<B>(...);
    shared_ptr<B> data = data1;

Then you have the following layout (weak_ptr support is not shown here):

    data1                       [allocated with B::new()]             data

    [pointer type] ref; --> ---------------------------- <-- [pointer type] ref
                            | atomic<int> m_use_count;  |
                            | B obj;                    |
                            ----------------------------

Each shared_ptr object is simply a pointer that points to an allocated memory area. This memory area contains the object of type B plus an atomic counter reflecting the number of shared_ptr instances pointing to this object. When this counter drops to zero, the memory area is freed (and the B object destroyed). It is this counter that is returned by shared_ptr::use_count().
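For illustration, here is a tiny single-threaded sketch of that counter (B is just a stand-in type):

    #include <cassert>
    #include <memory>

    struct B { int value = 0; };

    int main() {
        // make_shared performs one allocation holding both the counter and B
        std::shared_ptr<B> data1 = std::make_shared<B>();
        assert(data1.use_count() == 1);

        std::shared_ptr<B> data = data1;   // second owner of the same memory area
        assert(data.use_count() == 2);

        data1.reset();                     // counter drops back to 1
        assert(data.use_count() == 1);

        return 0;  // last owner goes away: B is destroyed and the area is freed
    }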

UPDATE: an execution that can lead to access to already-freed memory (initially two shared_ptr instances point to the same object, so .use_count() is 2):

    /* Thread 1 */                    /* Thread 2 */                    /* Thread 3 */
    Enter detach()                    Enter detach()
    Found `data.use_count()` > 1
    Enter critical section
    Found `data.use_count()` > 1
    Dereference `data`,
    found old object.
    Unreference old `data`,
    `use_count` becomes 1
                                                                        Delete other shared_ptr,
                                                                        old object is deleted
    Assign new object to `data`
                                      Access old object
                                      (for check `use_count`)
                                      !! But object is freed !!

An outer check should only read the pointer itself, not the object it points to, when deciding whether to take the lock.
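For contrast, here is a sketch in the spirit of the lazy-initialization singleton from Preshing's article (the Widget class and its members are made up for illustration): the outer check reads only an atomic pointer and never touches the state of the pointed-to object.

    #include <atomic>
    #include <mutex>

    class Widget {
    public:
        static Widget& instance() {
            // Outer check: a pure pointer load, nothing behind the pointer is touched.
            Widget* p = s_instance.load(std::memory_order_acquire);
            if (p == nullptr) {
                std::lock_guard<std::mutex> lock{s_mutex};
                // Inner check under the lock.
                p = s_instance.load(std::memory_order_relaxed);
                if (p == nullptr) {
                    p = new Widget();
                    // Publish the fully constructed object.
                    s_instance.store(p, std::memory_order_release);
                }
            }
            return *p;
        }

    private:
        static std::atomic<Widget*> s_instance;
        static std::mutex s_mutex;
    };

    std::atomic<Widget*> Widget::s_instance{nullptr};
    std::mutex Widget::s_mutex;

The acquire load pairs with the release store, so a thread that observes a non-null pointer also observes the completed construction of the object behind it.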

By the way, even if your implementation were correct, it would make little sense:

  • If data (and detach) can be accessed from several threads at the same time, the uniqueness of the object gives you no advantage, since it can still be accessed from several threads. If you want to modify the object, all accesses to data must be protected by an external mutex, and in that case detach() cannot be executed concurrently anyway (see the sketch after this list).

  • If data (and detach) can only be accessed by one thread at a time, the detach implementation does not need any locking at all.
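A minimal sketch of the first bullet's alternative (the accessor names are made up for illustration): every access to data goes through one external mutex, so a separate DCLP-style detach buys nothing.

    #include <memory>
    #include <mutex>

    class B { public: int value = 0; };

    class Foo {
        std::shared_ptr<B> data = std::make_shared<B>();
        mutable std::mutex mutex;

    public:
        int read_value() const {
            std::lock_guard<std::mutex> lock{mutex};  // reads are serialized too
            return data->value;
        }

        void set_value(int v) {
            std::lock_guard<std::mutex> lock{mutex};  // mutation under the same mutex
            data->value = v;
        }
    };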

+2

This is a data race if two threads call detach on the same Foo instance at the same time, because std::shared_ptr<B>::use_count() (a read-only operation) can then execute concurrently with the std::shared_ptr<B> assignment operator (a modifying operation), which is a data race and therefore undefined behavior. If, on the other hand, a Foo instance is never accessed concurrently, there is no data race, but then the std::mutex in your example is useless. This raises the question: how does the pointee of data become shared in the first place? Without that important piece of information, it is hard to tell whether the code is safe, even if a Foo instance is never used concurrently.
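For reference, a small sketch (reusing the names B and data from the question purely for illustration) of the C++11 atomic free functions for shared_ptr, which remove this particular data race on the shared_ptr object itself. They do not make the use_count()-based check meaningful, because the count observed through a local copy includes that copy.

    #include <memory>

    struct B { int value = 0; };

    std::shared_ptr<B> data = std::make_shared<B>();

    void reader() {
        std::shared_ptr<B> local = std::atomic_load(&data);  // race-free read of the pointer
        long count = local.use_count();                       // includes `local` itself
        (void)count;
    }

    void writer() {
        std::atomic_store(&data, std::make_shared<B>());      // race-free replacement
    }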

+1

Judging by your source, I think that you still need to add a thread fence before the first check and after the second check.

    std::shared_ptr<B> data;
    std::mutex mutex;

    void detach()
    {
        std::atomic_thread_fence(std::memory_order_acquire);
        if (data.use_count() > 1) {
            std::lock_guard<std::mutex> lock{mutex};
            if (data.use_count() > 1) {
                std::atomic_thread_fence(std::memory_order_release);
                data = std::make_shared<B>(*data);
            }
        }
    }
0

Source: https://habr.com/ru/post/989337/

