Is an endless busy loop bad practice?

In short: does a loop with no time delay consume significantly more processing power than a similar loop that is slowed down by a delay?

In not-so-short:

I run into this question quite often. When I write the main part of a program (whether for a microcontroller or a desktop application), it usually consists of a semi-infinite while loop that keeps the program alive and watches for events.

Take this example: I have a small application that uses an SDL window and a console. In the while loop I want to listen for events on the SDL window, but I also want to be able to break the loop from command-line input via a global variable. A possible solution (pseudo-code):

    // Global
    bool running = true;

    // ...

    while (running) {
        if (getEvent() == quit) {
            running = false;
        }
    }

    shutdown();

The loop will terminate either from the listened-for event or from something external. However, this loop spins continuously, maybe even 1000 times per second. To go easy on the machine (I don't need that kind of response time), I often add a delay statement:

    while (running) {
        if (getEvent() == quit) {
            running = false;
        }
        delay(50); // wait 50 milliseconds
    }

This limits the refresh rate to 20 iterations per second, which is plenty.

So: is there a real difference between the two? Does it matter? Is it more significant on a microcontroller (where processing power is very limited, but nothing needs to run besides the program)?

+5
7 answers

Well, this is not really a question about C++; the answer depends on the CPU, the host architecture, the OS, and the implementation of delay().

  • If this is a multitasking environment, then delay() can (and probably will) help the OS scheduler do its work more efficiently. However, the real difference may be too small to notice (except under old cooperative multitasking, where delay() is mandatory).

  • If it is a single-tasking environment (perhaps some kind of microcontroller), then delay() can still be useful if the underlying implementation can execute special low-power instructions instead of a normal busy loop. But of course there is no guarantee of that unless your manual says so.

As for performance concerns: obviously you might receive and handle an event with a noticeable delay (or even miss it completely), but if you have decided that is acceptable, then nothing else speaks against delay().

+4

You are making your code much harder to read, and you are doing asynchrony the old way: you explicitly poll for something to happen instead of relying on a mechanism that does the work for you. Besides, you delay for 50 ms; is that always optimal? Does it depend on which other programs are running? In C++11 you can use std::condition_variable. It lets you wait until an event occurs without coding polling loops.

Documentation here: http://en.cppreference.com/w/cpp/thread/condition_variable

I adapted the example below to your context and simplified it so it is easier to understand: it just waits for a single event.

    // Example program
    #include <iostream>
    #include <string>
    #include <thread>
    #include <mutex>
    #include <chrono>
    #include <condition_variable>

    std::mutex m;
    std::condition_variable cv;
    std::string data;
    bool processed = false;

    using namespace std::chrono_literals;

    void worker_thread()
    {
        // Wait until main() sends data
        std::unique_lock<std::mutex> lk(m);

        std::cout << "Worker thread starts processing data\n";
        std::this_thread::sleep_for(10s); // simulates the work
        data += " after processing";

        // Send data back to main()
        processed = true;
        std::cout << "Worker thread signals data processing completed\n";
        std::cout << "Corresponds to your getEvent() == quit\n";

        // Manual unlocking is done before notifying, to avoid waking up
        // the waiting thread only to block again (see notify_one for details)
        lk.unlock();
        cv.notify_one();
    }

    int main()
    {
        data = "Example data";
        std::thread worker(worker_thread);

        // Wait for the worker
        {
            std::unique_lock<std::mutex> lk(m);
            // This waits until processing is finished and wakes up when it
            // is done -- no explicit polling loop.
            cv.wait(lk, []{ return processed; });
        }
        std::cout << "data processed" << std::endl;

        worker.join();
    }
+1

In my experience, you should do something that yields the processor. sleep works fine, and on most Windows systems even Sleep(1) is enough to take a loop's CPU usage down to essentially nothing.

You can get the best of both worlds, however, if you use something like std::condition_variable. With condition variables you can build constructs similar to "events" and WaitForSingleObject in the Windows API.

One thread can wait on a condition variable that is signaled by another thread. So one thread can execute condition_variable.wait_for(lock, some_time), and it will either wait out the timeout (without loading the processor) or resume immediately when another thread signals it.

I use this method when one thread sends messages to another thread. I want the receiving thread to respond as soon as possible, not after a sleep(20) happens to finish. So the receiving thread waits with condition_variable.wait_for(lock, 20ms). The sending thread enqueues the message and calls the corresponding condition_variable.notify_one(). The receiving thread wakes immediately and processes the message.

This solution gives a very quick response to messages and avoids excessive processor load.

If you don't care about portability and you are on Windows, events and WaitForSingleObject do the same job.

Your loop would look something like this:

    while (!done) {
        std::unique_lock<std::mutex> lk(m);
        cond_var.wait_for(lk, std::chrono::milliseconds(20));

        // process messages...
        msg = dequeue_message();
        if (msg == done_message)
            done = true;
        else
            process_message(msg);
    }

And in the other thread...

    void send_message(string msg) {
        enqueue_message(msg);
        cond_var.notify_one();
    }

Your message-processing loop will spend most of its time idle, waiting on the condition variable. When a message is sent and the condition variable is signaled by the sending thread, your receiving thread responds immediately.

This lets your receive loop run at a minimum rate set by the wait timeout and at a maximum rate driven by the sending thread.

+1

What you are asking is how to implement an event loop correctly. Use OS calls: you ask the OS for an event or message, and if there is none, the OS simply puts the process to sleep. In a microcontroller environment you probably have no OS; there you use the concept of interrupts, which are essentially "messages" (or events) at a lower level.

And if your microcontroller has no such concepts as sleep or interrupts, then you are stuck with just spinning the loop.

In your example, a properly implemented getEvent() should block and do nothing until something actually happens, e.g. a keystroke.

+1

The best way to find out is to measure it.

An undelayed loop will produce 100% utilization of the particular core the application runs on. With the delay statement it will be around 0-1% (assuming getEvent itself returns immediately).

0

Well, it depends on several factors. If nothing else needs to run in parallel with this loop, it obviously makes no difference in performance. But a problem that can arise is energy consumption: depending on how long the loop body takes, the second variant can save something like 90% of the power the microcontroller consumes. Calling it bad practice in general doesn't seem right to me; it works in many scenarios.

0

As far as I know, with a busy while loop the process stays active the whole time, so the processor cannot use that resource for anything else. The only difference the second version makes is the number of loop iterations in a given period of time. That helps if the program runs for a long time; otherwise the first version causes problems.

0

Source: https://habr.com/ru/post/1274116/

