Timer accuracy: C clock() vs. WinAPI QueryPerformanceCounter (QPC) or timeGetTime()

I would like to characterize the accuracy of software timers. I'm not so much concerned with HOW accurate they are, but I need to know WHAT the accuracy actually is.

I have looked at the C clock() function, and the WinAPI functions QueryPerformanceCounter (QPC) and timeGetTime(), and I know that they are all hardware-dependent.

I am measuring a process that takes about 5-10 seconds, and my requirements are simple: I only need 0.1 second accuracy (resolution). But I need to know what the worst-case accuracy is.

While higher accuracy would be nice, I would rather know that the accuracy is poor (say, 500 ms) and account for it than believe that the accuracy is better (say, 1 ms) without being able to document it.

Does anyone have any suggestions on how to characterize the accuracy of software clocks?

thanks

+4
3 answers

You will need to distinguish between accuracy, resolution, and latency.

clock(), GetTickCount(), and timeGetTime() are derived from a calibrated hardware clock. Their resolution is poor: they are driven by the clock tick interrupt, which by default ticks 64 times per second, or once every 15.625 ms. You can use timeBeginPeriod() to drive that down to 1.0 ms. Accuracy, however, is very good: the clock is calibrated from an NTP server, so you can typically count on it not being off by more than a second over a month.
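
As a rough way to see that in practice, something along these lines will show the smallest observable step of timeGetTime() before and after raising the interrupt rate (a minimal sketch; Windows only, link against winmm.lib):

    // Sketch: observe the smallest step of timeGetTime() before and after
    // raising the timer interrupt rate with timeBeginPeriod().
    #include <windows.h>
    #include <mmsystem.h>
    #include <iostream>
    #pragma comment(lib, "winmm.lib")

    static DWORD smallest_step(int samples)
    {
        DWORD smallest = ~0u;
        DWORD prev = timeGetTime();
        for (int i = 0; i < samples; ++i)
        {
            DWORD now;
            while ((now = timeGetTime()) == prev)
                ;                            // spin until the counter advances
            if (now - prev < smallest)
                smallest = now - prev;
            prev = now;
        }
        return smallest;
    }

    int main()
    {
        std::cout << "default step:     " << smallest_step(20) << " ms\n";

        timeBeginPeriod(1);                  // request a 1 ms interrupt period
        std::cout << "with 1 ms period: " << smallest_step(20) << " ms\n";
        timeEndPeriod(1);                    // always pair with timeBeginPeriod()
        return 0;
    }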

QPC has a much higher resolution, always better than one microsecond and as little as half a nanosecond on some machines. It has poor accuracy, however: the clock source is a frequency picked up from the chipset somewhere. It is not calibrated and has typical electronic tolerances. Use it only to time short intervals.
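
The usual pattern for timing such a short interval with QPC looks roughly like this (a minimal sketch, Windows only; the Sleep() call just stands in for the work being measured):

    // Sketch: time a short interval with QueryPerformanceCounter().
    // QueryPerformanceFrequency() reports the counter rate in ticks per second.
    #include <windows.h>
    #include <iostream>

    int main()
    {
        LARGE_INTEGER freq, begin, finish;
        QueryPerformanceFrequency(&freq);    // fixed at boot, in ticks/second
        QueryPerformanceCounter(&begin);

        Sleep(100);                          // placeholder for the measured work

        QueryPerformanceCounter(&finish);
        double elapsed = double(finish.QuadPart - begin.QuadPart) / double(freq.QuadPart);
        std::cout << "tick length: " << 1.0e9 / double(freq.QuadPart) << " ns\n";
        std::cout << "elapsed:     " << elapsed << " s\n";
        return 0;
    }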

Latency is the most important factor when you deal with timing. You have no use for a highly accurate timing source if you cannot read it fast enough, and that is always a problem when running user-mode code on a protected-mode operating system, which always has code that runs at a higher priority than yours. Device drivers in particular are trouble-makers, video and audio drivers especially. Your code can also be swapped out of RAM, requiring a page fault to load it back. On a heavily loaded machine, not being able to run your code for hundreds of milliseconds is not unusual. You will need to factor this failure mode into your design. If you need guaranteed sub-millisecond accuracy, then only a kernel thread with real-time priority can give you that.

A pretty decent timer is the multimedia timer you get from timeSetEvent(). It was designed to provide good service for programs that require a reliable timer. You can make it tick at 1 ms, and it will catch up with delays when it can. Do note that it is an asynchronous timer: the callback is made on a separate worker thread, so you have to be careful about proper thread synchronization.
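
A bare-bones sketch of such a periodic 1 ms timer might look like this (Windows only, link against winmm.lib; the tick counter is atomic because the callback fires on a separate worker thread):

    // Sketch: a 1 ms periodic multimedia timer via timeSetEvent().
    #include <windows.h>
    #include <mmsystem.h>
    #include <atomic>
    #include <iostream>
    #pragma comment(lib, "winmm.lib")

    static std::atomic<long> g_ticks{0};

    static void CALLBACK onTick(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
    {
        ++g_ticks;                           // keep the callback work minimal
    }

    int main()
    {
        timeBeginPeriod(1);                  // 1 ms timer resolution

        MMRESULT id = timeSetEvent(1,        // period in ms
                                   0,        // best possible resolution
                                   onTick, 0,
                                   TIME_PERIODIC | TIME_CALLBACK_FUNCTION);

        Sleep(1000);                         // let it run for about one second
        timeKillEvent(id);
        timeEndPeriod(1);

        std::cout << "callbacks in ~1 s: " << g_ticks.load() << "\n";
        return 0;
    }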

+10

Since you asked for hard facts, here they are:

A typical frequency device ruling HPETs is the CB3LV-3I-14M31818, which specifies a frequency stability of +/- 50 ppm between -40°C and +85°C. A cheaper chip is the CB3LV-3I-66M6660. This device has a frequency stability of +/- 100 ppm between -20°C and +70°C.

As you can see, 50 to 100 ppm translates into a drift of 50 to 100 µs/s, 180 to 360 ms/hour, or 4.32 to 8.64 s/day!
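
If you want to recompute those numbers, the conversion is simply the tolerance (as a fraction) multiplied by the elapsed time; a throwaway sketch:

    // Sketch: convert an oscillator tolerance in ppm into absolute drift.
    #include <iostream>

    int main()
    {
        const double tolerances_ppm[] = {50.0, 100.0};   // values from the data sheets
        for (double ppm : tolerances_ppm)
        {
            double frac = ppm * 1e-6;                    // 1 ppm = 1e-6
            std::cout << ppm << " ppm: "
                      << frac * 1e6    << " us/s, "
                      << frac * 3.6e6  << " ms/hour, "
                      << frac * 86400  << " s/day\n";
        }
        return 0;
    }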

Devices ruling the RTC are typically somewhat better: the RV-8564-C2 RTC module provides tolerances of +/- 10 to 20 ppm. Tighter tolerances are typically available in military versions or on request. The deviation of this source is a factor of 5 less than that of the HPET, but it is still 0.86 s/day.

All of the above values are maximum values as specified in the data sheets. Typical values may be considerably less; as mentioned in my comment, they are typically in the range of a few ppm.

The frequency values are also subject to thermal drift. The result of QueryPerformanceCounter() may be heavily influenced by thermal drift on systems operating with the ACPI power management timer chip (example).

More information about timers: Clock and Timer Circuits.

+1

For QPC, you can call QueryPerformanceFrequency() to get the rate at which it updates. Unless you are using time(), you will get better than 0.5 s timing accuracy in any case, but clock() is not all that accurate: quite often 10 ms steps [although CLOCKS_PER_SEC is apparently standardized at 1 million, making the numbers APPEAR more accurate].

If you do something along the lines of the following, you can figure out how small a gap you can measure [although at a REALLY high frequency you may not be able to tell how small, e.g. a timestamp counter that updates every clock cycle, where reading it takes 20-40 cycles]:

    #include <ctime>
    #include <iostream>
    using namespace std;

    int main()
    {
        time_t t, t1;
        t = time(nullptr);
        // wait for the next "second" to tick over
        while (t == (t1 = time(nullptr)))
            /* do nothing */ ;

        clock_t old = 0;
        clock_t min_diff = 1000000000;
        clock_t start, end;
        start = clock();
        int count = 0;
        while (t1 == time(nullptr))
        {
            clock_t c = clock();
            if (old != 0 && c != old)
            {
                count++;
                clock_t diff = c - old;
                if (min_diff > diff)
                    min_diff = diff;
            }
            old = c;
        }
        end = clock();
        cout << "Clock changed " << count << " times" << endl;
        cout << "Smallest difference " << min_diff << " ticks" << endl;
        cout << "One second ~= " << end - start << " ticks" << endl;
        return 0;
    }

Obviously, you can apply the same principle to other sources of time.

(Not compiled, but hopefully not too many typos and errors)

Edit: So, if you are measuring times in the range of 10 seconds, a timer that runs at 100 Hz would give you 1000 "ticks". But it could be 999 or 1001, depending on your luck and whether you catch it just right or wrong, so that is 2000 ppm right there - then the clock input may vary too, but that is a much smaller variation, ~100 ppm at most. On Linux, clock() is updated at 100 Hz (the actual timer that runs the OS may run at a higher frequency, but clock() on Linux updates at 100 Hz, i.e. 10 ms intervals) [and it only updates while the CPU is being used, so sitting for 5 seconds waiting for user input counts as 0 time].
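
To put that into numbers for the 10 second case: the +/- 1 tick quantization of a 100 Hz clock() dwarfs the drift of the clock input (a back-of-the-envelope sketch; the 100 ppm is the worst-case variation mentioned above):

    // Sketch: compare the +/- 1 tick quantization of a 100 Hz clock() with
    // oscillator drift over a 10 second measurement.
    #include <iostream>

    int main()
    {
        const double measurement_s = 10.0;
        const double tick_s        = 0.01;    // 100 Hz -> 10 ms per tick
        const double drift_ppm     = 100.0;   // worst-case clock input variation

        double quantization_ms = tick_s * 1e3;                        // +/- one tick
        double drift_ms        = measurement_s * drift_ppm * 1e-6 * 1e3;

        std::cout << "quantization: +/- " << quantization_ms << " ms\n";  // 10 ms
        std::cout << "drift:        +/- " << drift_ms        << " ms\n";  // 1 ms
        return 0;
    }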

On Windows, clock() measures actual elapsed time, like your wristwatch would, not just CPU time, so 5 seconds spent waiting for user input counts as 5 seconds. I'm not sure how accurate it is.
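
A quick way to see that difference is to read clock() around a stretch of pure idle time: on Windows the result is roughly the elapsed wall-clock time, while on Linux/glibc it stays close to zero (a minimal sketch):

    // Sketch: clock() across idle time. Windows counts wall time here,
    // Linux/glibc counts CPU time only, so it reports almost nothing.
    #include <chrono>
    #include <ctime>
    #include <iostream>
    #include <thread>

    int main()
    {
        std::clock_t before = std::clock();
        std::this_thread::sleep_for(std::chrono::seconds(2));   // idle, no CPU work
        std::clock_t after = std::clock();

        std::cout << "clock() while idle: "
                  << double(after - before) / CLOCKS_PER_SEC << " s\n";
        return 0;
    }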

The other issue you will find is that modern systems are not very good at repeatable timing in general: no matter what you do, the OS, CPU, and memory all conspire to make it a misery to get the same amount of time for two runs. CPUs these days often run with an intentionally varying clock (it is allowed to drift by about 0.1-0.5%) to reduce the electromagnetic radiation for EMC (electromagnetic compatibility) testing - spikes that can "sneak out" of that nicely sealed computer box.

In other words, even if you can get a very well-standardized clock, your test results will vary up and down a bit, depending on OTHER factors that you can't do anything about...

In summary, unless you are looking for a number to fill into a form that requires a ppm figure for your clock accuracy, and it is a government form that you cannot leave blank, I'm not entirely convinced it is very useful to know the accuracy of the timer used to measure the time itself, because other factors will play AT LEAST as big a role.

0

Source: https://habr.com/ru/post/1498097/

