How to get system uptime in milliseconds in C++?

How can I get the uptime since the system started? All I found was the time since the Unix epoch and nothing else.

For example, time() in the ctime library only gives me the number of seconds since the epoch. I want something like time(), but counted from the moment the system started.

3 answers

This depends on the OS, and has already been answered for several systems elsewhere on Stack Overflow.

    #include <chrono> // for all examples :)

Windows ...

using GetTickCount64() (resolution usually 10-16 milliseconds)

    #include <windows.h>
    // ...
    auto uptime = std::chrono::milliseconds(GetTickCount64());

Linux ...

... using /proc/uptime

    #include <fstream>
    // ...
    std::chrono::milliseconds uptime(0u);
    double uptime_seconds;
    if (std::ifstream("/proc/uptime", std::ios::in) >> uptime_seconds)
    {
        uptime = std::chrono::milliseconds(
            static_cast<unsigned long long>(uptime_seconds * 1000.0));
    }

... using sysinfo (resolution 1 second)

    #include <sys/sysinfo.h>
    // ...
    std::chrono::milliseconds uptime(0u);
    struct sysinfo x;
    if (sysinfo(&x) == 0)
    {
        uptime = std::chrono::milliseconds(
            static_cast<unsigned long long>(x.uptime) * 1000ULL);
    }

OS X ...

... using sysctl

    #include <time.h>
    #include <errno.h>
    #include <sys/sysctl.h>
    // ...
    std::chrono::milliseconds uptime(0u);
    struct timeval ts;
    std::size_t len = sizeof(ts);
    int mib[2] = { CTL_KERN, KERN_BOOTTIME };
    if (sysctl(mib, 2, &ts, &len, NULL, 0) == 0)
    {
        uptime = std::chrono::milliseconds(
            static_cast<unsigned long long>(ts.tv_sec) * 1000ULL +
            static_cast<unsigned long long>(ts.tv_usec) / 1000ULL);
    }

BSD-like systems (or systems supporting CLOCK_UPTIME or CLOCK_UPTIME_PRECISE respectively) ...

... using clock_gettime (for the resolution, see clock_getres)

    #include <time.h>
    // ...
    std::chrono::milliseconds uptime(0u);
    struct timespec ts;
    if (clock_gettime(CLOCK_UPTIME_PRECISE, &ts) == 0)
    {
        uptime = std::chrono::milliseconds(
            static_cast<unsigned long long>(ts.tv_sec) * 1000ULL +
            static_cast<unsigned long long>(ts.tv_nsec) / 1000000ULL);
    }

+1 to the accepted answer. Good overview. But the OS X answer is incorrect, and I wanted to show a fix here.

On OS X, the sysctl function with the input { CTL_KERN, KERN_BOOTTIME } returns the Unix Time at which the system booted, not the time elapsed since boot. And on this system (and every other system) std::chrono::system_clock also measures Unix Time. Therefore you just need to subtract these two time points to get the time since boot. Here is how you would modify the OS X solution above to fix this problem:

    #include <chrono>
    #include <sys/sysctl.h>
    #include <sys/time.h>

    std::chrono::milliseconds uptime()
    {
        using namespace std::chrono;
        timeval ts;
        auto ts_len = sizeof(ts);
        int mib[2] = { CTL_KERN, KERN_BOOTTIME };
        auto constexpr mib_len = sizeof(mib) / sizeof(mib[0]);
        if (sysctl(mib, mib_len, &ts, &ts_len, nullptr, 0) == 0)
        {
            system_clock::time_point boot{seconds{ts.tv_sec} +
                                          microseconds{ts.tv_usec}};
            return duration_cast<milliseconds>(system_clock::now() - boot);
        }
        return 0ms;
    }

Notes:

  • It is better to let chrono do your unit conversions. If your code contains a magic 1000 (for example, to convert seconds to milliseconds), rewrite it so that chrono performs the conversion.
  • You can rely on implicit conversions between chrono duration units to be exact if they compile. If they do not compile, that means you are asking for a truncation, and you can explicitly ask for the truncation with duration_cast.
  • It is fine to use a local using directive inside a function if it makes the code more readable.

There is a boost example showing how to customize logger messages.

In it, the author implements a simple unsigned int get_uptime() function to obtain the system uptime on different platforms, including Windows, OS X, Linux, and BSD.


Source: https://habr.com/ru/post/986713/
