I recently implemented a Linux security mechanism that hooks into system calls. Now I have to measure the overhead it causes. For the project, I need to compare the runtime of typical Linux applications with and without the mechanism. By typical Linux applications I mean things like gzipping a 1 GB file, running "find /", or grepping through files. The main goal is to show the overhead for different kinds of tasks: CPU-bound, I/O-bound, etc.
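For example, I could run something like the following (file names and paths are just placeholders, not an existing setup):

    gzip -c /tmp/test.1G > /dev/null          # mostly CPU-bound
    find / -xdev > /dev/null 2>&1             # syscall/metadata heavy
    grep -r "pattern" /usr/src > /dev/null    # mixed I/O and CPU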
Question: how do I organize the tests so that the results are reliable? The first point is that my mechanism only runs in kernel space, so it makes sense to compare system (sys) time. I can use the time command for this, but is that the most accurate way to measure sys time? Another idea is to run the applications in long loops to reduce measurement error. Should the loop be inside or outside of the timing? If it is outside, I get many results per workload - should I take the min, max, median, or mean?
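To make the question concrete, one option I am considering is a small script along these lines (a rough sketch only; it assumes GNU time is installed at /usr/bin/time, and %S is its format specifier for kernel-mode seconds):

    RUNS=30
    rm -f sys_times.txt
    for i in $(seq "$RUNS"); do
        # Optionally drop the page cache between runs so the I/O-bound cases
        # are not served entirely from memory (needs root):
        # sync; echo 3 > /proc/sys/vm/drop_caches

        # %S = seconds the process spent in kernel mode; -a -o appends one value per run
        /usr/bin/time -f "%S" -a -o sys_times.txt gzip -c big.file > /dev/null
    done

    # Look at the spread of the samples instead of trusting a single run
    sort -n sys_times.txt | awk '{a[NR]=$1} END {print "min="a[1], "median="a[int((NR+1)/2)], "max="a[NR]}'

Is this a reasonable approach, or is there a more accurate way to capture per-run sys time?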
Thanks for any suggestions.