An accurate sleep is needed (max error 1 ms)

I have a thread that runs a loop. I need this loop to execute every 5 ms (with at most 1 ms error). I know that the Sleep() function is not exact.

Do you have any suggestions?

Update: I cannot do it differently. At the end of each loop iteration I need to sleep; I do not want the processor to be loaded at 100%.

+5
c++ multithreading loops sleep winapi
Nov 15 '12 at 12:30
5 answers

From the question tags, I suppose you are on Windows. Look at multimedia timers; they advertise accuracy within 1 ms. Another option is to use spin locks, but that basically keeps a processor core at maximum use.
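
A minimal sketch of the multimedia timer route (not part of the original answer; assumes Windows and linking against winmm.lib): timeSetEvent() fires a callback roughly every 5 ms on an OS-provided worker thread, so your own thread stays idle in between.

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>
    #pragma comment(lib, "winmm.lib")

    // Called by the multimedia timer roughly every 5 ms; keep the work short,
    // it runs on a timer thread, not on your own thread.
    static void CALLBACK OnTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR) {
        std::printf("tick\n");
    }

    int main() {
        timeBeginPeriod(1);                                  // 1 ms timer resolution
        MMRESULT id = timeSetEvent(5, 1, OnTick, 0, TIME_PERIODIC);
        Sleep(1000);                                         // let it tick for a second
        timeKillEvent(id);
        timeEndPeriod(1);
    }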

+4
Nov 15 '12 at 12:42

Do not use spinning here. The requested accuracy can be achieved with standard means.

You can use Sleep() for periods down to about 1 ms when the system interrupt period is set to run at that high frequency. See the description of Sleep() for details, in particular the multimedia timer functions, and Obtaining and Setting Timer Resolution for how to set the system interrupt period. The resulting accuracy with this approach is in the range of a few microseconds when implemented correctly.
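
A minimal sketch of that approach (not from the original answer; assumes Windows with winmm.lib linked): raising the interrupt frequency with timeBeginPeriod(1) makes short Sleep() calls wake up close to the requested time.

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    void short_sleeps() {
        timeBeginPeriod(1);            // request a 1 ms system interrupt period
        for (int i = 0; i < 1000; ++i) {
            // ... a short piece of work ...
            Sleep(1);                  // now close to 1 ms instead of ~15.6 ms
        }
        timeEndPeriod(1);              // always undo the request when done
    }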

I suspect your loop does something else too, so you probably need a total period of 5 ms, which is then the sum of the Sleep() time and the time you spend on the other work in the loop.

For this scenario I suggest Waitable Timer Objects; note that these timers also rely on the multimedia timer API to set the resolution. I gave an overview of the relevant functions for more accurate timing here. A deeper look at precision timekeeping can be found here.

For more accurate and reliable timing you may also want to look at process priority classes and thread priorities. Another answer about the accuracy of Sleep() is here.
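
A short sketch of the priority part (not from the original answer): raising the priority class and the thread priority reduces the chance of being preempted right around the wakeup. REALTIME_PRIORITY_CLASS can starve the rest of the system, so HIGH_PRIORITY_CLASS is used here.

    #include <windows.h>

    void raise_timing_priority() {
        SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    }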

However, whether a Sleep() delay of exactly 5 ms is possible depends on the system hardware. Some systems run at 1024 interrupts per second (set via the multimedia timer API), which corresponds to a period of 0.9765625 ms; the closest you can get is therefore 4.8828125 ms. Others allow you to get closer; in particular with Windows 7, timekeeping has improved significantly when running on High Precision Event Timer (HPET) hardware. See About Timers on MSDN and the Precision Timer.

Summary: Set the multimedia timer resolution to its maximum and use a waitable timer.
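
Putting the summary together, a sketch of a 5 ms loop (not part of the original answer; assumes Windows and winmm.lib): maximum multimedia timer resolution plus a periodic waitable timer, so the kernel re-arms the period and the error does not accumulate.

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    void five_ms_loop() {
        timeBeginPeriod(1);                                     // best available resolution
        HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);  // auto-reset timer
        LARGE_INTEGER due;
        due.QuadPart = -50000LL;                                // first tick in 5 ms (100 ns units)
        SetWaitableTimer(timer, &due, 5 /* period, ms */, NULL, NULL, FALSE);

        for (int i = 0; i < 1000; ++i) {
            // ... the work of one 5 ms cycle ...
            WaitForSingleObject(timer, INFINITE);               // sleeps away the rest of the 5 ms
        }

        CancelWaitableTimer(timer);
        CloseHandle(timer);
        timeEndPeriod(1);
    }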

+7
Nov 15 '12 at 12:50

Instead of using sleep, you could try a loop that checks the elapsed time and returns once the difference reaches 5 ms. Such a loop should be more accurate than sleeping.

However, remember that this accuracy is not always achievable. The processor may be busy with another operation during such a short interval and overshoot the 5 ms.
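
A minimal sketch of that busy-wait idea (not from the original answer; plain C++11): spin on a steady clock until 5 ms have passed. It is accurate to microseconds but keeps one core at 100%, which the question explicitly wants to avoid.

    #include <chrono>

    void wait_5ms_spinning() {
        using clk = std::chrono::steady_clock;
        const auto deadline = clk::now() + std::chrono::milliseconds(5);
        while (clk::now() < deadline) {
            // busy wait; optionally add a pause/yield instruction here
        }
    }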

+3
Nov 15 '12 at 12:34

I was looking for a lightweight cross-platform sleep function suitable for real-time applications (i.e. high resolution / high accuracy with reliability). Here are my findings:

Scheduling Basics

Giving up the CPU and then getting it back is expensive. According to this article, scheduler latency can be anywhere between 10 and 30 ms on Linux. So if you need to sleep for less than 10 ms with high accuracy, you need to use OS-specific APIs. Plain C++11 std::this_thread::sleep_for does not sleep with high resolution. For example, on my machine quick tests show that it often sleeps for at least 3 ms when I ask it to sleep for just 1 ms.
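
A quick way to reproduce that observation on your own machine (not part of the original answer; plain C++11): request a 1 ms sleep repeatedly and print how long it really took.

    #include <chrono>
    #include <cstdio>
    #include <thread>

    int main() {
        using namespace std::chrono;
        for (int i = 0; i < 5; ++i) {
            const auto start = steady_clock::now();
            std::this_thread::sleep_for(milliseconds(1));
            const auto us = duration_cast<microseconds>(steady_clock::now() - start);
            std::printf("requested 1000 us, slept %lld us\n",
                        static_cast<long long>(us.count()));
        }
    }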

Linux

The most popular solution is the nanosleep() API. However, if you want high resolution below 2 ms, you also need to call sched_setscheduler to put the thread/process under real-time scheduling. If you do not, nanosleep() behaves much like the obsolete usleep, which has a resolution of ~10 ms. Another possibility is to use alarms.
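
A hedged sketch of the real-time scheduling step (not from the original answer; Linux only, needs root or CAP_SYS_NICE): switch to SCHED_FIFO before relying on nanosleep() for sub-2 ms waits.

    #include <sched.h>
    #include <time.h>
    #include <cstdio>

    int main() {
        sched_param sp{};
        sp.sched_priority = sched_get_priority_max(SCHED_FIFO);
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
            std::perror("sched_setscheduler");   // falls back to normal scheduling

        timespec ts{};
        ts.tv_nsec = 5 * 1000 * 1000;            // 5 ms
        nanosleep(&ts, nullptr);                 // should now be accurate to tens of microseconds
        return 0;
    }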

Windows

The solution here is to use multimedia timers, as others have suggested. If you want to emulate Linux's nanosleep() on Windows, the code below shows how (original ref). Again, note that you do not need to call CreateWaitableTimer() over and over if you call this sleep in a loop (a sketch of reusing the timer follows the code).

    #include <windows.h>   /* WinAPI */

    /* Windows sleep in 100ns units */
    BOOLEAN nanosleep(LONGLONG ns) {
        /* Declarations */
        HANDLE timer;      /* Timer handle */
        LARGE_INTEGER li;  /* Time definition */
        /* Create timer */
        if (!(timer = CreateWaitableTimer(NULL, TRUE, NULL)))
            return FALSE;
        /* Set timer properties */
        li.QuadPart = -ns;
        if (!SetWaitableTimer(timer, &li, 0, NULL, NULL, FALSE)) {
            CloseHandle(timer);
            return FALSE;
        }
        /* Start & wait for timer */
        WaitForSingleObject(timer, INFINITE);
        /* Clean resources */
        CloseHandle(timer);
        /* Slept without problems */
        return TRUE;
    }
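
As mentioned, the timer does not have to be recreated on every call. A sketch of that (not part of the original answer; the class name is illustrative) which creates the timer once and re-arms it for each sleep:

    #include <windows.h>

    class WaitableSleeper {
    public:
        WaitableSleeper()  { timer_ = CreateWaitableTimer(NULL, TRUE, NULL); }
        ~WaitableSleeper() { if (timer_) CloseHandle(timer_); }

        // Sleep for 'ns100' units of 100 ns (e.g. 50000 == 5 ms).
        void sleep(LONGLONG ns100) {
            LARGE_INTEGER due;
            due.QuadPart = -ns100;                    // negative == relative time
            SetWaitableTimer(timer_, &due, 0, NULL, NULL, FALSE);
            WaitForSingleObject(timer_, INFINITE);
        }

    private:
        HANDLE timer_ = NULL;
    };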

Cross platform code

Here is time_util.cc, which implements sleep for Linux, Windows and Apple. Note, however, that it does not enable real-time scheduling with sched_setscheduler as mentioned above, so if you want to use it for sleeps below 2 ms, that is extra work you need to do. Another improvement you can make is to avoid calling CreateWaitableTimer in the Windows version over and over if you call sleep in some kind of loop; see an example of how to do this here.

    #include "time_util.h"

    #ifdef _WIN32
    #  define WIN32_LEAN_AND_MEAN
    #  include <windows.h>
    #else
    #  include <time.h>
    #  include <errno.h>
    #  ifdef __APPLE__
    #    include <mach/clock.h>
    #    include <mach/mach.h>
    #  endif
    #endif  // _WIN32

    /**********************************=> unix ************************************/
    #ifndef _WIN32
    void SleepInMs(uint32 ms) {
        struct timespec ts;
        ts.tv_sec = ms / 1000;
        ts.tv_nsec = ms % 1000 * 1000000;
        while (nanosleep(&ts, &ts) == -1 && errno == EINTR);
    }

    void SleepInUs(uint32 us) {
        struct timespec ts;
        ts.tv_sec = us / 1000000;
        ts.tv_nsec = us % 1000000 * 1000;
        while (nanosleep(&ts, &ts) == -1 && errno == EINTR);
    }

    #ifndef __APPLE__
    uint64 NowInUs() {
        struct timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        return static_cast<uint64>(now.tv_sec) * 1000000 + now.tv_nsec / 1000;
    }
    #else  // mac
    uint64 NowInUs() {
        clock_serv_t cs;
        mach_timespec_t ts;
        host_get_clock_service(mach_host_self(), SYSTEM_CLOCK, &cs);
        clock_get_time(cs, &ts);
        mach_port_deallocate(mach_task_self(), cs);
        return static_cast<uint64>(ts.tv_sec) * 1000000 + ts.tv_nsec / 1000;
    }
    #endif  // __APPLE__
    #endif  // _WIN32
    /************************************ unix <=**********************************/

    /**********************************=> win *************************************/
    #ifdef _WIN32
    void SleepInMs(uint32 ms) {
        ::Sleep(ms);
    }

    void SleepInUs(uint32 us) {
        ::LARGE_INTEGER ft;
        ft.QuadPart = -static_cast<int64>(us * 10);  // '-' using relative time

        ::HANDLE timer = ::CreateWaitableTimer(NULL, TRUE, NULL);
        ::SetWaitableTimer(timer, &ft, 0, NULL, NULL, 0);
        ::WaitForSingleObject(timer, INFINITE);
        ::CloseHandle(timer);
    }

    static inline uint64 GetPerfFrequency() {
        ::LARGE_INTEGER freq;
        ::QueryPerformanceFrequency(&freq);
        return freq.QuadPart;
    }

    static inline uint64 PerfFrequency() {
        static uint64 xFreq = GetPerfFrequency();
        return xFreq;
    }

    static inline uint64 PerfCounter() {
        ::LARGE_INTEGER counter;
        ::QueryPerformanceCounter(&counter);
        return counter.QuadPart;
    }

    uint64 NowInUs() {
        return static_cast<uint64>(
            static_cast<double>(PerfCounter()) * 1000000 / PerfFrequency());
    }
    #endif  // _WIN32
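
A hypothetical usage sketch for the question's 5 ms loop (not from the original answer; uint32/uint64 are assumed to be the typedefs from time_util.h): sleep only for whatever is left of each 5 ms slot after the work.

    void run_5ms_loop() {
        const uint64 kPeriodUs = 5000;                       // 5 ms period
        uint64 next = NowInUs() + kPeriodUs;
        for (int i = 0; i < 1000; ++i) {
            // ... do the work of this cycle ...
            const uint64 now = NowInUs();
            if (now < next)
                SleepInUs(static_cast<uint32>(next - now));  // sleep the remainder only
            next += kPeriodUs;                               // fixed schedule, error does not accumulate
        }
    }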

Another, more complete cross-platform implementation can be found here.

Another quick fix

As you may have noticed, the above code is no longer very lightweight. Among other things it has to include the Windows header, which may not be desirable if you are designing a header-only library. If you need sleeps shorter than 2 ms and you are not keen on using OS-specific code, you can simply use the following simple solution, which is cross-platform and works fine in my tests. Just remember that you are then not using the highly optimized OS code, which can be much better at saving energy and managing processor resources.

    #include <chrono>
    #include <thread>

    // Renamed from 'clock' to avoid clashing with ::clock from <ctime>.
    typedef std::chrono::high_resolution_clock hr_clock;

    template <typename T>
    using duration = std::chrono::duration<T>;

    static void sleep_for(double dt) {
        static constexpr duration<double> MinSleepDuration(0);
        hr_clock::time_point start = hr_clock::now();
        while (duration<double>(hr_clock::now() - start).count() < dt) {
            std::this_thread::sleep_for(MinSleepDuration);
        }
    }

Related questions

  • How to make thread sleep less than a millisecond on Windows
  • Cross-platform sleep function for C++
  • Is there an alternative sleep function in C in milliseconds?
+3
Jan 25 '17 at 22:10

These functions:

allow you to create a waitable timer with a resolution of 100 nanoseconds, wait on it, and have the calling thread execute a specific function when the timer fires.

Here is an example of using such a timer.

Note that WaitForSingleObject takes a timeout measured in milliseconds, which might possibly serve as a rough substitute for the wait, but I would not trust it. See this SO question for more details.
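
A hedged sketch of the completion-routine variant mentioned above (not part of the original answer; Windows): SetWaitableTimer() queues an APC to the calling thread, which must wait in an alertable state (SleepEx) for the routine to run.

    #include <windows.h>
    #include <cstdio>

    static void CALLBACK OnTimer(LPVOID arg, DWORD /*lowValue*/, DWORD /*highValue*/) {
        std::printf("timer fired, arg=%p\n", arg);
    }

    int main() {
        HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);
        LARGE_INTEGER due;
        due.QuadPart = -50000LL;                     // 5 ms in 100 ns units, relative
        SetWaitableTimer(timer, &due, 0, OnTimer, NULL, FALSE);
        SleepEx(INFINITE, TRUE);                     // alertable wait; returns after the APC
        CloseHandle(timer);
    }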

+1
Nov 16 '12 at 8:37


