For QPC, you can call QueryPerformanceFrequency to get the rate at which it updates. Unless you are using time itself, you will get better than 0.5 s accuracy anyway, but clock is not that accurate - it often advances in 10 ms steps [although CLOCKS_PER_SEC is apparently standardized at 1 million, which makes the numbers APPEAR more precise than they really are].
If you do something along these lines, you can figure out how small a gap you can measure [although at a REALLY high frequency you may not be able to tell how small, e.g. a timestamp counter that updates every clock cycle, where the read itself takes 20-40 cycles]:
```cpp
time_t t, t1;
t = time(0);
// Wait for the next "second" to tick over.
while (t == (t1 = time(0)))
    /* do nothing */ ;

clock_t old = 0;
clock_t min_diff = 1000000000;
clock_t start, end;
start = clock();
int count = 0;
while (t1 == time(0))
{
    clock_t c = clock();
    if (old != 0 && c != old)
    {
        count++;
        clock_t diff = c - old;
        if (min_diff > diff)
            min_diff = diff;
    }
    old = c;
}
end = clock();
cout << "Clock changed " << count << " times" << endl;
cout << "Smallest difference " << min_diff << " ticks" << endl;
cout << "One second ~= " << end - start << " ticks" << endl;
```
Obviously, you can apply the same principle to other sources of time.
(Not compiled, but hopefully not too many typos and errors)
Edit: So, if you measure a time in the range of 10 seconds, a timer that runs at 100 Hz will give you 1000 ticks. But it could read 999 or 1001, depending on your luck and whether you catch it just right or just wrong, so that is 2000 ppm right there - and then the clock's input signal can vary too, but that is a much smaller variation, ~100 ppm at most. On Linux, clock() is updated at 100 Hz (the actual timer that drives the OS may run at a higher frequency, but clock() on Linux updates at 100 Hz, i.e. in 10 ms steps) [and it only advances while the CPU is actually being used, so sitting for 5 seconds waiting for user input counts as 0 time].
On Windows, clock() measures actual elapsed time, like your wristwatch, not just CPU time, so 5 seconds spent waiting for user input counts as 5 seconds. I'm not sure how accurate it is.
The other problem you will find is that modern systems are not very good at repeatable timing in general - no matter what you do, the OS, the CPU, and the memory all conspire to make it a misery to get the same run time twice. CPUs these days often run with a deliberately varying clock (it is allowed to drift by about 0.1-0.5%) to reduce electromagnetic radiation for EMC (electromagnetic compatibility) testing - spikes that might otherwise "sneak out" of that nicely sealed computer box.
In other words, even if you could get a very standardized clock, your test results would still vary up and down a bit, depending on OTHER factors that you can't do anything about...
In summary, unless you are looking for a number to fill in on a form that demands a ppm figure for your clock accuracy (and it is a government form you cannot leave blank), I am not entirely convinced it is very useful to know the accuracy of the timer used to measure the time itself, because other factors will play a MUCH bigger role.