In my opinion, it makes little sense to use time for benchmarking. Use perf stat instead: it gives you much more useful information, can repeat the benchmark a given number of times, and computes statistics on the results, i.e. the mean and variance. This is much more reliable, and just as easy to use as time:
perf stat -r 10 -d <your app and arguments>
-r 10 runs your application 10 times and computes statistics over the runs. -d prints additional data, such as cache misses.
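To make the repeated-run idea concrete, here is a minimal Python sketch of what perf stat -r does conceptually: run a workload N times and report the mean and standard deviation of the wall time. The workload function here is a placeholder of my own, not anything perf provides; substitute your real program.

```python
# Sketch: run a workload N times and compute timing statistics,
# conceptually mirroring `perf stat -r N` (assumption: the workload
# below is a stand-in; replace it with your real code or command).
import statistics
import time

def workload():
    # placeholder workload
    sum(i * i for i in range(100_000))

def benchmark(fn, runs=10):
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    # mean and standard deviation over all runs
    return statistics.mean(samples), statistics.stdev(samples)

mean, stdev = benchmark(workload)
print(f"mean {mean:.6f}s, stdev {stdev:.6f}s")
```

Unlike this sketch, perf stat also reports hardware counters (cycles, instructions, cache misses), which is why the real tool is preferable.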
Thus, while time can be reliable enough for long-running applications, it is definitely not as reliable as perf stat. Use the latter instead.
Addendum: if you really want to use time, at least do not use the bash builtin, but the real binary in verbose mode:
/usr/bin/time -v <some command with arguments>
The output then looks like this, for example:
Command being timed: "ls"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 0%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1968
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 93
Voluntary context switches: 1
Involuntary context switches: 2
Swaps: 0
File system inputs: 8
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
In particular, note that it can measure peak RSS, which is often all you need when judging a patch's effect on peak memory consumption: compare this value before and after, and if peak RSS drops significantly, you did something right.
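If you want the same peak-RSS figure from inside a program rather than from /usr/bin/time, the standard getrusage interface exposes it. A minimal Python sketch (Unix only; note that on Linux ru_maxrss is reported in kilobytes, while on macOS it is in bytes):

```python
# Sketch: read this process's peak resident set size via getrusage,
# the same figure /usr/bin/time -v prints as
# "Maximum resident set size (kbytes)". Linux reports kilobytes.
import resource

def peak_rss_kb():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# touch some memory so the peak is clearly nonzero
data = bytearray(10 * 1024 * 1024)  # ~10 MiB
print("peak RSS (kB):", peak_rss_kb())
```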
milianw Nov 12 '14 at 11:24