Approach to using std::atomic versus std::condition_variable wrt pausing and resuming std::thread in C++

This is a separate question, but related to the previous question that I asked here.

I use std::thread in my C++ code to constantly poll for some data and add it to a buffer. I start the thread with a C++ lambda as follows:

void StartMyThread() {
    thread_running = true;
    the_thread = std::thread { [this] {
        while (thread_running) {
            GetData();
        }
    }};
}

thread_running is an atomic<bool> declared in the class header. Here is my GetData function:

void GetData() {
    // Some heavy logic
}

Further, I also have a StopMyThread function, where I set thread_running to false so that the while loop in the lambda block exits.

void StopMyThread() {
    thread_running = false;
    the_thread.join();
}
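For context, here is a minimal self-contained sketch that pulls the snippets above together; the class name DataPoller and the buffer details are assumptions for illustration, not taken from the question:

#include <atomic>
#include <thread>
#include <vector>

// Hypothetical class tying the question's snippets together.
class DataPoller {
public:
    void StartMyThread() {
        thread_running = true;
        the_thread = std::thread { [this] {
            while (thread_running) {
                GetData();
            }
        }};
    }

    void StopMyThread() {
        thread_running = false;   // the loop exits after the current GetData() finishes
        the_thread.join();
    }

private:
    void GetData() {
        // Some heavy logic that appends to the buffer.
    }

    std::atomic<bool> thread_running{false};
    std::thread the_thread;
    std::vector<int> buffer;      // placeholder element type
};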

As I understand it, I can pause and resume a thread using std::condition_variable, as stated in my previous question.

But is there a drawback if I just use std::atomic<bool> thread_running to decide whether or not to execute the logic in GetData(), as shown below?

void GetData() {
    if (thread_running == false)
        return;
    // Some heavy logic
}

Will it consume more CPU cycles compared to using std::condition_variable as described here?

3 answers

A condition variable is useful when you want to conditionally halt another thread or not. That way, you can always have a worker thread around, and have it wait when it notices there is nothing for it to do.

The atomic solution requires your UI interaction to synchronize with the worker thread, or very convoluted logic to do it asynchronously.

As a general rule, your UI response thread should never block waiting on a not-yet-ready state from worker threads.

#include <cassert>
#include <condition_variable>
#include <functional>
#include <future>
#include <mutex>

struct worker_thread {
    worker_thread( std::function<void()> t, bool play = true ):
        execute(play),
        task(std::move(t))
    {
        thread = std::async( std::launch::async, [this]{
            work();
        });
    }
    // move is not safe. If you need this movable,
    // use unique_ptr<worker_thread>.
    worker_thread(worker_thread&&) = delete;
    ~worker_thread() {
        if (!exit) finalize();
        wait();
    }
    void finalize() {
        auto l = lock();
        exit = true;
        cv.notify_one();
    }
    void pause() {
        auto l = lock();
        execute = false;
    }
    void play() {
        auto l = lock();
        execute = true;
        cv.notify_one();
    }
    void wait() {
        assert(exit);
        if (thread.valid())
            thread.get();
    }
private:
    void work() {
        while (true) {
            bool done = false;
            {
                auto l = lock();
                cv.wait( l, [&]{
                    return exit || execute;
                });
                done = exit; // have lock here
            }
            if (done) break;
            task();
        }
    }
    std::unique_lock<std::mutex> lock() {
        return std::unique_lock<std::mutex>(m);
    }
    std::mutex m;
    std::condition_variable cv;
    bool exit = false;
    bool execute = true;
    std::function<void()> task;
    std::future<void> thread;
};

or somesuch.

This owns a thread. The thread repeatedly runs the task so long as it is in play() mode. If you pause(), then the next time task() finishes, the worker thread stops. If you play() before the currently executing task() finishes, it never notices the pause().

The only wait is on destruction of worker_thread, where it automatically informs the worker thread that it should exit and waits for it to finish.

You can also manually call .wait() or .finalize(). .finalize() is asynchronous, but if your application is shutting down you can call it early and give the worker thread more time to clean up while the main thread cleans things up elsewhere.

.finalize() cannot be undone.

Code not verified.
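To show how the class above might be driven from a controlling thread, here is a rough usage sketch (not part of the original answer; the PollSensor task and the timings are placeholders, and it assumes the worker_thread sketch above is in the same translation unit):

#include <chrono>
#include <thread>

// Placeholder for one unit of heavy polling work.
void PollSensor() {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

int main() {
    worker_thread poller{ [] { PollSensor(); } };  // starts in play() mode

    std::this_thread::sleep_for(std::chrono::seconds(1));
    poller.pause();  // worker blocks on the condition variable once the current task finishes

    std::this_thread::sleep_for(std::chrono::seconds(1));
    poller.play();   // worker wakes up and resumes running the task repeatedly

    // Leaving main: ~worker_thread() calls finalize() and wait(), so the worker exits cleanly.
}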


Unless I'm missing something, you already answered this in your original question: you would be creating and destroying the worker thread every time it is needed. This may or may not be a problem in your actual application.


These are two different problems being solved, and which approach fits may depend on what you are actually doing. One problem is: "I want my thread to run until I tell it to stop." The other seems to be: "I have a producer/consumer pair and want to be able to notify the consumer when data is ready." The thread_running and join method works well for the first.

For the second, you may want a mutex and a condition variable, because you are doing more than just using the state to trigger work. Suppose you have a vector<Work>. You guard that with the mutex, so the condition becomes [&work](){ return !work.empty(); } or something like it. When the wait returns, you hold the mutex, so you can take things out of work and process them. When you are done, you go back to waiting, releasing the mutex so the producer can add things to the queue.

You can combine these methods. Have a "done processing" atomic that all of your threads periodically check so they know when to exit and can be joined, and use the condition variable to cover the case of delivering data between threads. A rough sketch of this combination follows.
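Here is a minimal sketch of that combination, assuming Work is a simple struct; the names produce, consumer, and shut_down are placeholders for illustration, not from the answer:

#include <atomic>
#include <condition_variable>
#include <mutex>
#include <vector>

struct Work { int payload; };            // assumed shape of a work item

std::mutex m;
std::condition_variable cv;
std::vector<Work> work;                  // guarded by m
std::atomic<bool> done{false};           // "done processing" flag checked by all threads

void consumer() {
    std::unique_lock<std::mutex> lock(m);
    while (true) {
        cv.wait(lock, [&] { return done || !work.empty(); });
        if (done && work.empty()) break;
        Work item = work.back();         // still holding the mutex here
        work.pop_back();
        lock.unlock();                   // release the mutex while doing the heavy processing
        // ... process item ...
        lock.lock();
    }
}

void produce(Work w) {
    {
        std::lock_guard<std::mutex> lock(m);
        work.push_back(w);
    }
    cv.notify_one();                     // wake a waiting consumer
}

void shut_down() {
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;                     // set under the lock so a waiting consumer cannot miss it
    }
    cv.notify_all();                     // wake everyone so they can observe done and exit
}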

