Does class filtering work for CPU profiling in Java VisualVM?

I want to filter which classes are CPU-profiled in Java VisualVM (version 1.7.0 b110325). To do this, I tried setting "Profile only classes" to my test package under Profiler → Settings → CPU settings, which had no effect. Then I tried to get rid of all java.* and sun.* classes by listing them under "Do not profile classes", which also had no effect.

Is this simply a bug? Or am I missing something? Is there a workaround? I mean other than:

I want to do this mainly to get halfway correct percentages of CPU time consumed per method. For that I need to get rid of the distorting measurements, e.g. for sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run() (around 70%). Many people seem to have this problem; see, for example,

+6
2 answers

The reason you see sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run() in your profiling results is that you have the Profile new Runnables option selected.

Also, if you take a snapshot of your profiling session, you can see the whole call tree for any hot spot method - this way you can get from the run() method down to your own application logic methods, filtering out the noise generated by the Profile new Runnables option.

+10

OK, since your goal is to make the code run as fast as possible, let me suggest how to do that. I'm not a VisualVM specialist, but I can tell you what works. (Only a few profilers actually tell you what you need to know, namely: which lines of your code are on the stack for a healthy fraction of wall-clock time.)

The only measurement I've ever put much stock in is total time with a stopwatch, or alternatively, if the code has something like a frame rate, the number of frames per second. I don't need any finer-grained breakdown, because at best it is a remote clue to what is wasting time (and more often than not completely irrelevant), when there is a very direct way to find it.
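For the stopwatch part, nothing fancy is needed. Here is a minimal Java sketch; runWorkload is just a placeholder name for whatever end-to-end operation you are tuning, nothing VisualVM-specific:

```java
public class StopwatchTiming {
    public static void main(String[] args) {
        long start = System.nanoTime();                        // wall-clock start
        runWorkload();                                         // the end-to-end operation being tuned
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Total wall-clock time: " + elapsedMs + " ms");
    }

    // Placeholder for the real work; replace with your own entry point.
    private static void runWorkload() {
    }
}
```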

If you don't want to do random pausing, that's up to you, but it has proven effective, and here is an example of a 43-times speedup.

Basically, the idea is that you take a small number (like 10) of stack samples at random wall-clock times. Each sample consists (obviously) of a list of call sites, and possibly a non-call site at the end. (If the sample lands during I/O or sleep, it will end in a system call, which is just fine; that is exactly what you want to know.)
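If you can't simply pause the program under a debugger, you can get equivalent samples by running jstack against the process a handful of times, or with an in-process sampler along these lines. This is only a rough sketch under my own assumptions; the class name, sample count, and delays are arbitrary choices, not part of the technique itself:

```java
import java.util.Map;
import java.util.Random;

public class StackSampler {

    /** Starts a daemon thread that prints a few stack samples at random wall-clock times. */
    public static void start(final int samples) {
        Thread sampler = new Thread(new Runnable() {
            public void run() {
                Random random = new Random();
                try {
                    for (int i = 0; i < samples; i++) {
                        // Random delay so the samples are spread over the run.
                        Thread.sleep(500 + random.nextInt(2000));
                        System.out.println("=== stack sample " + (i + 1) + " ===");
                        for (Map.Entry<Thread, StackTraceElement[]> entry
                                : Thread.getAllStackTraces().entrySet()) {
                            System.out.println(entry.getKey().getName());
                            for (StackTraceElement frame : entry.getValue()) {
                                System.out.println("    at " + frame);
                            }
                        }
                    }
                } catch (InterruptedException ignored) {
                    // Stop sampling if interrupted.
                }
            }
        }, "stack-sampler");
        sampler.setDaemon(true);
        sampler.start();
    }
}
```

Call StackSampler.start(10) near the start of your program, let it run, then look for call sites that show up on more than one sample.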

If there is a way to speed up your code (and there almost certainly is), you will see it as a line of code that appears on at least one of the stack samples. The probability that it appears on any one sample is exactly the fraction of time it uses. So if there is a call site or other line of code using a healthy fraction of the time, and you can avoid executing it, the overall time will decrease by that fraction.
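To make that concrete (my numbers, not the answerer's): if a line is on the stack a fraction p of the time, the chance that all n samples miss it is (1 - p)^n. With p = 0.3 and n = 10 that is 0.7^10 ≈ 0.028, so even a modest time-waster is very unlikely to hide from ten samples.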

I don't know every profiler, but one that I know can do this is Zoom. Others may be able to as well. They may be more polished, but they don't work any quicker or better than the manual method when your goal is to maximize speed.

0

Source: https://habr.com/ru/post/895974/