I was running a simple test to time some C++ code, and I ran across an artifact that I am not 100% sure about.
Setup
My code uses the C++11 high_resolution_clock to measure elapsed time. I also time my program's execution using the Linux time command (/usr/bin/time). For my program, high_resolution_clock reports ~2s while time reports ~7s (~6.5s user and ~0.5s system). In addition, the verbose option of time shows that my program used 100% of the CPU with 1 voluntary context switch and 10 involuntary context switches (/usr/bin/time -v).
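For reference, the shell-side measurement presumably looks like the following (the binary name ./bench is a placeholder; only the -v flag is taken from the run described above):

/usr/bin/time -v ./bench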
Question
My question is, what causes such a dramatic difference between the OS time measurement and the measurement performed by my program?
My initial thoughts
Based on my knowledge of operating systems, I assume these differences are caused solely by context switches with other programs (as noted by time -v).
Is this the only reason for the difference? And should I trust the time reported by my program or by the system when looking at code performance?
Again, my assumption is to trust the time computed by my program over the Linux time command, because the latter measures more than just my program's processor usage.
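One way to probe that assumption from inside the program is to record CPU time alongside wall time in the same run. Below is a minimal sketch, assuming that std::clock (which reports processor time consumed by the process) roughly corresponds to the user + system figures of time(1) for a single-threaded program; the workload loop and the use of steady_clock are illustrative stand-ins, not the original test:

#include <chrono>
#include <ctime>
#include <iostream>

int main() {
    // Wall-clock start (steady_clock avoids clock adjustments; any wall clock works).
    auto wall_start = std::chrono::steady_clock::now();
    // CPU-time start: processor time consumed by this process so far.
    std::clock_t cpu_start = std::clock();

    // Stand-in workload; replace with the loop under test.
    volatile double d = 1;
    for (int i = 0; i < 100000000; ++i)
        d = d + 0.0001;

    std::clock_t cpu_end = std::clock();
    auto wall_end = std::chrono::steady_clock::now();

    std::chrono::duration<double> wall = wall_end - wall_start;
    double cpu = double(cpu_end - cpu_start) / CLOCKS_PER_SEC;

    // If cpu is close to wall, the process really consumed that much CPU;
    // if wall is much larger than cpu, the process spent time off the CPU.
    std::cout << "wall: " << wall.count() << " s, cpu: " << cpu << " s\n";
}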
Code
#include <chrono>
#include <cstdlib>
#include <iostream>

using namespace std;
using namespace std::chrono;

int main() {
    size_t n = 100000000;
    double d = 1;

    auto start_hrc = high_resolution_clock::now();

    // Busy loop: apply a randomly chosen arithmetic operation n times.
    for(size_t i = 0; i < n; ++i) {
        switch(rand() % 4) {
            case 0: d += 0.0001; break;
            case 1: d -= 0.0001; break;
            case 2: d *= 0.0001; break;
            case 3: d /= 0.0001; break;
        }
    }

    auto end_hrc = high_resolution_clock::now();
    duration<double> diff_hrc = end_hrc - start_hrc;

    cout << d << endl << endl;
    cout << "Time-HRC: " << diff_hrc.count() << " s" << endl;
}