How to track performance testing

I am currently doing performance and load testing of a complex multi-tiered system, investigating the effect of various changes, but I am having trouble keeping track of everything:

  • There are many copies of the various assemblies:
    • Officially released builds
    • Officially released hotfixes
    • Assemblies I have built that contain extra fixes
    • Assemblies I have built that contain additional diagnostic logging or tracing
  • There are many database patches , and some of the builds above depend on specific database patches being applied.
  • There are several kinds of logging in different tiers (application logging, application performance statistics, SQL Server profiling).
  • There are many different scenarios ; sometimes it is useful to test a single scenario, while at other times I need to test combinations of scenarios.
  • The load can be spread across several machines or run on a single machine.
  • The data available in the database can vary , for example some tests may be run against generated data and later against data taken from a live system.
  • There is a huge amount of potential performance data to collect after each test , for example:
    • Many different types of application logging
    • SQL Profiler
    • DMVs
    • Perfmon
  • A single test run can produce gigabytes of this data.
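With this many moving parts, one low-tech way to pin down all the variables of a run is to write them into a manifest file before the test starts. A minimal sketch in Python, assuming one JSON manifest per run directory (all field names and values here are illustrative, not taken from the question's actual system):

```python
import json
import time
from pathlib import Path

def write_run_manifest(run_dir, **config):
    """Record every variable of a test run (build, DB patch level,
    scenarios, machines, data set, logging levels) in one JSON file."""
    run_dir = Path(run_dir)
    run_dir.mkdir(parents=True, exist_ok=True)
    manifest = {"started_at": time.strftime("%Y-%m-%dT%H:%M:%S"), **config}
    (run_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Example: one run with a privately patched build and generated data
m = write_run_manifest(
    "runs/example_run",
    build="2.3.1+diagfix",        # assembly version, incl. private fixes
    db_patch_level=17,            # database patch the build depends on
    scenarios=["checkout"],       # one scenario or a combination
    machines=["loadgen1"],        # machines the load is split across
    data_set="generated",         # generated vs. copied from a live system
    logging={"app": "debug", "sql_profiler": True, "perfmon": True},
)
```

Any result file collected afterwards (profiler traces, perfmon logs) can then be dropped into the same directory, so data and configuration never get separated.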






Capturing the results of each run in a database makes it possible to answer comparative questions across runs, such as which build or configuration was fastest. sqlite3 (which ships with Python and is also used by Firefox) makes this easy to do without setting up a database server.
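As a sketch of the sqlite3 idea (the schema and sample values are my own invention, not from the answer), results could be pushed into a small SQLite database and queried with ordinary SQL:

```python
import sqlite3

# In-memory database for the sketch; use a file path instead to keep
# results across test sessions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE results (build TEXT, scenario TEXT, "
    "machines INTEGER, duration_s REAL)"
)
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?)",
    [
        ("2.3.0", "checkout", 1, 41.2),
        ("2.3.1", "checkout", 1, 38.7),
        ("2.3.1", "checkout", 4, 12.9),
    ],
)

# "Which build was fastest for this scenario on a single machine?"
fastest = conn.execute(
    "SELECT build, MIN(duration_s) FROM results "
    "WHERE scenario = ? AND machines = 1",
    ("checkout",),
).fetchone()
print(fastest)  # -> ('2.3.1', 38.7)
```

Extra configuration dimensions (database patch level, data set, logging settings) would simply become additional columns to filter and group by.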

Scripting your tests will speed them up and let you collect the results in an orderly way, but it sounds like your system may be too complex for that to be easy.
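A driver along these lines (purely illustrative; the function and file names are assumptions) can at least make runs repeatable and keep each run's output in its own directory:

```python
import subprocess
import sys
import time
from pathlib import Path

def run_scenario(name, command, out_root="runs"):
    """Run one scenario command, saving its output and wall-clock
    duration into a dedicated per-run directory."""
    run_dir = Path(out_root) / f"{name}-{time.strftime('%Y%m%d-%H%M%S')}"
    run_dir.mkdir(parents=True)
    start = time.perf_counter()
    proc = subprocess.run(command, capture_output=True, text=True)
    (run_dir / "stdout.log").write_text(proc.stdout)
    (run_dir / "stderr.log").write_text(proc.stderr)
    (run_dir / "duration.txt").write_text(f"{time.perf_counter() - start:.3f}")
    return run_dir

# Example: a trivial "scenario" that just prints one line
d = run_scenario("smoke", [sys.executable, "-c", "print('ok')"])
```

Each real scenario would replace the trivial command with whatever launches the load against the system, and the captured duration and logs feed the results database described above.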


Source: https://habr.com/ru/post/1717272/

