Accurate way to measure overhead in kernel space

I recently implemented a Linux security mechanism that hooks into system calls. Now I have to measure the overhead it causes. For the project, I need to compare the runtime of typical Linux applications with and without the mechanism. By typical Linux applications I mean things like gzipping a 1 GB file, running "find /", or grepping files. The main goal is to show the overhead on different types of tasks: CPU-bound, I/O-bound, etc.

Question: how should I organize the tests so that they are reliable? The first point is that my mechanism works only in kernel space, so it makes sense to compare sys time. I can use the time command for this, but is that the most accurate way to measure sys time? Another idea is to run these applications in long loops to minimize error. Should the loop be inside or outside of time? If it is outside, I will get many results — should I pick the min, max, median, or average?
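Not part of the original question, but here is a minimal sketch of one way to do this without time(1): measure the per-run delta of `getrusage(RUSAGE_CHILDREN).ru_stime`, which is the same counter the shell's `time` reports as "sys". The command and run count are illustrative; keeping the loop outside the measurement gives you a distribution of samples to summarize.

```python
import resource
import statistics
import subprocess

def sys_time_samples(cmd, runs=5):
    """Run cmd several times; return the kernel (sys) CPU time of each run.

    Each sample is the delta of getrusage(RUSAGE_CHILDREN).ru_stime around
    one child invocation -- the same value time(1) prints as 'sys'.
    """
    samples = []
    for _ in range(runs):
        before = resource.getrusage(resource.RUSAGE_CHILDREN).ru_stime
        subprocess.run(cmd, stdout=subprocess.DEVNULL,
                       stderr=subprocess.DEVNULL, check=False)
        after = resource.getrusage(resource.RUSAGE_CHILDREN).ru_stime
        samples.append(after - before)
    return samples

# Illustrative workload; substitute your gzip / find / grep commands.
samples = sys_time_samples(["ls", "-l", "/usr/bin"], runs=5)
print(f"min={min(samples):.4f}s  median={statistics.median(samples):.4f}s")
```

On the summary-statistic question: for benchmarks, min (or median) is usually preferred over mean, because measurement noise on an otherwise idle machine is almost always additive — the fastest run is the one least disturbed.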

Thanks for any suggestions.

1 answer

I think you want to measure the overhead on a typical application workload (as Ninjale suggests, compiling the kernel can be a good workload). You probably do not want to measure the overhead of each syscall by itself, or even of the kernel as a whole.

The reason is that most applications spend far more time and resources in user space than in kernel land (i.e., in syscalls), so overhead inside system calls is a "second-order" effect and probably doesn't matter much. There are, of course, likely exceptions.
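You can check this claim for your own workloads before investing in a full benchmark. A small sketch (mine, not from the answer) that splits a child command's CPU time into user and sys portions — if the sys share is small, syscall-level overhead can only move total runtime by that fraction at most:

```python
import resource
import subprocess

def user_sys_split(cmd):
    """Run cmd and return (user_seconds, sys_seconds) spent by the child.

    Deltas of getrusage(RUSAGE_CHILDREN) isolate this one invocation.
    """
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, stdout=subprocess.DEVNULL,
                   stderr=subprocess.DEVNULL, check=False)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return (after.ru_utime - before.ru_utime,
            after.ru_stime - before.ru_stime)

# Illustrative command; an I/O-heavy task will show a larger sys share.
u, s = user_sys_split(["ls", "-lR", "/usr/share/doc"])
print(f"user={u:.3f}s  sys={s:.3f}s")
```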

The Phoronix Test Suite may also be relevant.

You may be interested in oprofile.

See also this answer and this question.


Source: https://habr.com/ru/post/1387448/
