What is the best way to "measure" application performance?

In the old (single-threaded) days, we told our test group to always report CPU time, not wall-clock time. So if they reported that an action took 5 CPU seconds in version 1 and 10 CPU seconds in version 2, we knew we had a problem.

Now, with more and more multithreading, this no longer seems to make sense. Version 1 of the application might take 5 seconds of CPU time and version 2 might take 10 seconds of CPU time, yet version 2 could still be faster: for example, if version 1 is single-threaded while version 2 uses 4 threads, each consuming 2.5 seconds of CPU time.
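The distinction between the two clocks can be demonstrated with a small sketch (all names here are illustrative, not from the question). It records both wall-clock time and total process CPU time around a threaded workload; note that in CPython the GIL prevents CPU-bound threads from actually running in parallel, so this only shows how the two measurements are taken, not a real speedup:

```python
import time
import threading

def busy_work(n):
    # Illustrative CPU-bound workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

def measure(fn, *args):
    """Report wall-clock time and process-wide CPU time for one call."""
    wall_start = time.perf_counter()  # wall-clock ("real") time
    cpu_start = time.process_time()   # CPU time summed over all threads
    fn(*args)
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return wall, cpu

def run_threads(k, n):
    # Run k copies of the workload on k threads.
    threads = [threading.Thread(target=busy_work, args=(n,)) for _ in range(k)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

wall, cpu = measure(run_threads, 4, 200_000)
print(f"wall: {wall:.3f}s, cpu: {cpu:.3f}s")
```

In a language with true parallel threads, the 4-thread version would show roughly 4x more CPU time than wall time, which is exactly the situation described above.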

On the other hand, using wall-clock time to compare performance is unreliable, since many other factors can affect it (other running applications, network congestion, a busy database server, a fragmented disk, ...).

What, in your opinion, is the best way to "measure" performance? I hope the answer is not "intuition", since that is not an objective value and would likely lead to conflicts between the development team and the testing group.

+4
source share
1 answer

Performance must be defined before it can be measured.

Is it:

  • memory consumption?
  • task completion time?
  • disk space usage?

Once it is defined, you can select the appropriate metrics.
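As a concrete sketch of "define first, then measure" (the helper names are hypothetical, not from the answer), the same harness idea can collect more than one of the metrics listed above for a single task run, here task completion time via `time.perf_counter` and peak Python memory allocation via the standard-library `tracemalloc` module:

```python
import time
import tracemalloc

def measure_task(fn, *args):
    """Collect several candidate performance metrics for one task run."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start          # task completion time
    _, peak_mem = tracemalloc.get_traced_memory()  # peak traced memory, bytes
    tracemalloc.stop()
    return result, {"seconds": elapsed, "peak_bytes": peak_mem}

def build_list(n):
    # Illustrative task that costs both time and memory.
    return list(range(n))

result, metrics = measure_task(build_list, 100_000)
print(metrics)
```

Whichever definition the teams agree on, reporting the same named metric in every test run is what keeps the comparison objective.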

+1
source

Source: https://habr.com/ru/post/1302474/
