How do you measure performance in a C++ (MFC) application?

What good profilers do you know?

What is a good way to measure and tune the performance of a C++ MFC application?

Is algorithm analysis really necessary? http://en.wikipedia.org/wiki/Algorithm_analysis
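For reference, the crudest thing you can do by hand is wall-clock timing around a suspect section, something like this rough sketch built on QueryPerformanceCounter (the ScopedTimer helper is purely illustrative, not a library class):

```cpp
#include <windows.h>
#include <cstdio>

// Crude wall-clock timer around a code section, as a first sanity check
// before reaching for a real profiler.
class ScopedTimer
{
public:
    explicit ScopedTimer(const char* label) : m_label(label)
    {
        QueryPerformanceFrequency(&m_freq);
        QueryPerformanceCounter(&m_start);
    }
    ~ScopedTimer()
    {
        LARGE_INTEGER end;
        QueryPerformanceCounter(&end);
        const double ms = 1000.0 * (end.QuadPart - m_start.QuadPart) / m_freq.QuadPart;
        std::printf("%s: %.3f ms\n", m_label, ms);   // TRACE also works in an MFC build
    }
private:
    const char*   m_label;
    LARGE_INTEGER m_freq;
    LARGE_INTEGER m_start;
};

void LoadDocument()
{
    ScopedTimer t("LoadDocument");
    // ... the code being measured ...
}
```

That tells me how long something took, but not where the time went, which is why I am asking about profilers.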

+4
10 answers

I can highly recommend AQTime if you are staying on the Windows platform. It comes with a load of profilers, including static code analysis, and works with the most important Windows compilers and toolchains, including Visual C++, .NET, Delphi, Borland C++, Intel C++, and even gcc. It integrates into Visual Studio, but can also be used standalone. I like it a lot.

+3

If you are still using Visual C++ 6.0, I would suggest using the built-in profiler. For more recent versions you can try Compuware DevPartner Performance Analysis Community Edition.

+2

For Windows, check out Xperf, which comes free with the Windows SDK. It uses sampled profiling, has a useful UI, and requires no instrumentation. Very useful for tracking down performance problems. You can answer questions such as:

Who is using the most CPU? Drill down to the function name using call stacks. Who is allocating the most memory? Who is doing the most registry queries? Who is writing to disk? And so on. You will be quite surprised when you find the bottlenecks, as they are probably not where you expected!

+2

It has been a while since I profiled unmanaged code, but when I did I had good results with Intel's VTune. I am sure someone else will tell us whether it has since been surpassed.

Algorithmic analysis can improve your performance more dramatically than anything you will find with a profiler, but only for certain classes of application. If you operate on reasonably large sets of data, algorithmic analysis may find ways to be more efficient in CPU, memory, or both; but if your application is mostly form-filling with a relational database for storage, it may not offer you much.
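As a rough illustration of the kind of win algorithmic analysis can buy on large data sets (function and container names here are invented for the example):

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// O(n * m): for every record, scan the whole blacklist again.
bool AnyBlacklistedNaive(const std::vector<std::string>& records,
                         const std::vector<std::string>& blacklist)
{
    for (std::size_t i = 0; i < records.size(); ++i)
        for (std::size_t j = 0; j < blacklist.size(); ++j)
            if (records[i] == blacklist[j])
                return true;
    return false;
}

// Roughly O((n + m) log m): build a sorted set once, then look up each record.
// This is the kind of change a profiler alone will not hand you; it only
// tells you where the time goes, not how to restructure the work.
bool AnyBlacklistedIndexed(const std::vector<std::string>& records,
                           const std::vector<std::string>& blacklist)
{
    const std::set<std::string> lookup(blacklist.begin(), blacklist.end());
    for (std::size_t i = 0; i < records.size(); ++i)
        if (lookup.count(records[i]) != 0)
            return true;
    return false;
}
```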

+1

Intel Thread Checker via the VTune Performance Analyzer - check the image below for the view I use most, which tells me which function is eating up most of my time.

[VTune screenshot omitted]

I can then drill down further and break out which functions inside those eat up the most time, and so on. There are different views depending on what you are looking at: total time (time spent inside the function plus its children), self time (time spent only running the code inside the function itself), and so on.
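To make the total/self distinction concrete, here is a toy call tree with invented numbers:

```cpp
// Invented numbers, just to show how a profiler splits the columns:
//
//   Function   Total     Self
//   Parent()   100 ms     10 ms   <- the other 90 ms is spent in its children
//   ChildA()    60 ms     60 ms
//   ChildB()    30 ms     30 ms

void ChildA() { /* pretend this does ~60 ms of its own work */ }
void ChildB() { /* pretend this does ~30 ms of its own work */ }

void Parent()
{
    // ~10 ms of bookkeeping here counts as Parent()'s self time...
    ChildA();
    ChildB();
    // ...so Parent()'s total time is 10 + 60 + 30 = 100 ms.
}
```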

The tool does a lot more than profiling, but I have not explored all of it. I would definitely recommend it. It is also available for download as a fully functional trial that runs for 30 days. If cost is a constraint, I would say that 30-day window is all you need to pin down your problem.

Trial download here - https://registrationcenter.intel.com/RegCenter/AutoGen.aspx?ProductID=907&AccountID=&ProgramID=&RequestDt=&rm=EVAL&lang=

PS: I also played around with Rational's profiling tools, but for some reason I never got as far with them. I suspect Rational may be more expensive than Intel's offering.

+1

Tools (for example, TrueTime from DevPartner) that let you see hit counts for individual source lines allow you to quickly spot algorithms with poor "big O" complexity. You still have to analyse the algorithm to work out how to reduce that complexity.
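As a hypothetical illustration, a line-hit view makes quadratic loops hard to miss, because the count on the inner line dwarfs everything else:

```cpp
#include <cstddef>
#include <vector>

// With 10,000 input values, a line-hit profiler would report the inner
// comparison executing roughly 50 million times (n * (n - 1) / 2) - the
// clue that this "simple" function is actually O(n^2).
int CountDuplicatePairs(const std::vector<int>& values)
{
    int duplicates = 0;
    for (std::size_t i = 0; i < values.size(); ++i)
        for (std::size_t j = i + 1; j < values.size(); ++j)
            if (values[i] == values[j])      // <- the line with the huge hit count
                ++duplicates;
    return duplicates;
}
```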

+1

I second AQTime. Having used both AQTime and Compuware's DevPartner, I prefer AQTime for most cases. The reason is that AQTime will profile any executable that has a valid PDB file, whereas TrueTime requires you to do an instrumented build. That makes ad hoc profiling much quicker and simpler. DevPartner is also rather expensive, if that is an issue. Where DevPartner comes into its own is BoundsChecker, which I still rate as a better tool for catching leaks and overwrites than AQTime's runtime profiler. TrueTime may be more accurate than AQTime, but I have never found that to be a problem.

Is profiling worth it? IMO, yes, if you need to improve the performance of a local application. I think you will also learn a lot about how your program and algorithms really behave, as well as the cost implications of the particular classes you use to store and iterate over your data.
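As a small, hypothetical example of the kind of cost a profiler tends to surface around data-holding classes (names invented):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Builds one big string out of many pieces; each '+' copies everything
// built so far, so the whole loop is roughly O(n^2) in the total length.
std::string JoinNaive(const std::vector<std::string>& pieces)
{
    std::string result;
    for (std::size_t i = 0; i < pieces.size(); ++i)
        result = result + pieces[i];
    return result;
}

// Same output, but reserves the final size once and appends in place -
// the kind of fix a profiler pushes you towards after showing the first
// version dominated by allocation and copying.
std::string JoinReserved(const std::vector<std::string>& pieces)
{
    std::size_t total = 0;
    for (std::size_t i = 0; i < pieces.size(); ++i)
        total += pieces[i].size();

    std::string result;
    result.reserve(total);
    for (std::size_t i = 0; i < pieces.size(); ++i)
        result += pieces[i];
    return result;
}
```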

+1

Glowcode is a very good profiler (when it works). It can attach to a running program and only needs symbol files - you do not need to rebuild.

0

Some editions of Visual Studio 2005 (and possibly 2008) come with a pretty good performance profiler. If you have one of those, it should be available under the Tools menu, or you can open the Performance Explorer window to start a new performance session.
MSDN Link

0

FYI: some editions of Visual Studio come with a non-optimizing compiler. For one of my MFC applications, compiling it with MinGW/MSYS (the gcc compiler) at -O3 makes it run roughly 5-10 times faster than my Visual Studio build.

For example, I have an OpenStreetMap XML compiler, and the gcc-compiled build takes about 3 minutes to process a 2.7 GB XML file. The same code compiled with Visual Studio takes about 18 minutes.

0

Source: https://habr.com/ru/post/1277308/

