That was a surprisingly difficult question to answer! After digging through the kernel code I figured out what is going on here, and it is actually rather nice.
Normally, for a process on Linux, total CPU usage is just the sum of the time spent in user space and the time spent in kernel space, so the naive expectation is that user_time + system_time equals cpu_time. What I found is that Linux tracks the time vCPU threads spend executing guest code separately from both user time and system time.

So cpu_time == user_time + system_time + guest_time.
You can therefore think of user_time + system_time as the QEMU/KVM overhead on the host side, and cpu_time - (user_time + system_time), i.e. guest_time, as the amount of time the guest OS actually had its processors running.
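To make the split concrete, here is a minimal sketch that reads the relevant counters for a QEMU process from /proc/<pid>/stat. The field positions follow proc(5) (utime is field 14, stime field 15, guest_time field 43) and the values are reported in clock ticks; the PID used in the example is hypothetical.

```python
import os

def vcpu_times(pid):
    """Return (user, system, guest) time in seconds for the given process."""
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # comm (field 2) may contain spaces, so split after the closing ')'
    fields = data[data.rfind(")") + 2:].split()
    # After dropping pid and comm, field N of proc(5) is fields[N - 3]
    utime = int(fields[11])       # field 14: time spent in user space
    stime = int(fields[12])       # field 15: time spent in kernel space
    guest_time = int(fields[40])  # field 43: time spent running guest code
    ticks = os.sysconf("SC_CLK_TCK")
    return utime / ticks, stime / ticks, guest_time / ticks

user, system, guest = vcpu_times(1234)  # 1234: example QEMU process PID
print(f"host-side overhead: {user + system:.2f}s, guest run time: {guest:.2f}s")
```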
To calculate CPU usage, you probably just want to sample cpu_time every N seconds and compute the delta between two consecutive samples, e.g. usage % = 100 * (cpu_time_2 - cpu_time_1) / N.
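A small sketch of that delta calculation follows. The get_cpu_time callable is a hypothetical helper that returns the domain's cumulative cpu_time in seconds; if your source reports nanoseconds (as libvirt's cpuTime field does), divide by 1e9 before passing it in.

```python
import time

def cpu_usage_percent(get_cpu_time, interval=5.0):
    """Sample cumulative cpu_time twice, N seconds apart, and return usage %."""
    t1 = get_cpu_time()
    time.sleep(interval)      # wait N seconds between the two samples
    t2 = get_cpu_time()
    return 100.0 * (t2 - t1) / interval

# Example (read_domain_cpu_time is a placeholder for however you fetch cpu_time):
# usage = cpu_usage_percent(lambda: read_domain_cpu_time("guest1"))
```

Note that because cpu_time accumulates across all vCPUs, the result can exceed 100% for multi-vCPU guests; divide by the number of vCPUs if you want a 0-100 scale.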