Debug strange memory leak - Java / Tomcat

I'm having a very strange problem with a Java application running under Tomcat.

We recently updated the production code to a new release from a one-week sprint. The application had been running for several months without failures, but this new code makes our Linux servers start swapping after a while.

The very strange part is that, watching memory usage in VisualVM, the heap never exceeds its maximum size and the JVM never throws an OutOfMemoryError; the machine just starts swapping, and the JVM keeps running even after that.

So it looks like a memory leak from somewhere, presumably in the new code, but strangely it does not show up inside the JVM heap. Any ideas on how to debug this?

Thanks!

+6
3 answers

Swapping is not a conclusive indicator of a leak; it happens whenever physical memory runs low. Use vmstat on Linux to watch swap activity. Try another machine and experiment with the configuration: heap size, physical memory size, address space.
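
To confirm the box is actually swapping (as opposed to merely having swap space allocated), something like the following works; the 5-second interval is arbitrary:

    # Print memory/swap statistics every 5 seconds; sustained nonzero
    # values in the si/so (swap-in/swap-out) columns mean active swapping.
    vmstat 5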

If you are sure that the problem is in your program, try this:

  1. Estimate the median and peak memory your program should use. You should be able to account for every deviation from those figures. If you cannot, skip to step 3.

  2. Assuming you did step 1 correctly and could account for all the deviations, you can rule out a leak (sorry for such vague wording, but debugging is detective work). Your focus should now be on GC tuning. First, enable GC logging (see the flags sketched after this list). Check whether your heap really fills up and where the GC spends most of its time; that is a good starting point for optimization. Try whether adjusting the GC parameters helps: experiment with collector algorithms, max/min heap sizes, generation ratios, and so on. Only experiment once you have ruled out a leak (step 1).

  3. Assuming you did step 1 correctly and could not account for all the deviations, you may well have a leak somewhere. Use a memory profiler to see which object types contribute most to the heap (a cheap histogram command is sketched after this list). Leave the profiler attached for a long period: let your program handle its typical request load, then leave it relatively idle afterwards. If memory usage keeps climbing, you probably have a leak; if not, you probably do not. Can you point to the part of your program that might be creating those objects? If so, send a few requests that exercise exactly that part. Does the problem reproduce deterministically? If not, repeat step 3. If yes, divide and conquer, repeating step 3 until you find the class or method that is the culprit. It may be a particular combination of several parts (meaning each may look innocent individually, but together they form a brilliant criminal syndicate).
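
For step 2, a minimal sketch of enabling GC logging on a HotSpot JVM; these flags apply to Java 8 and earlier, the log path is a placeholder, and under Tomcat they would typically go into CATALINA_OPTS:

    # Log every collection with heap details and timestamps:
    export CATALINA_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log"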
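
For step 3, if attaching a full profiler to production is not an option, a class histogram from jmap is a cheap first look at which types dominate the heap (the pid is a placeholder; :live triggers a full GC before counting):

    # Top 20 classes by instance count and shallow size:
    jmap -histo:live <pid> | head -n 20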

Hope this helps; if not, leave a comment on my post.

All the best in your endeavour!

+2

I would advise you to learn how to create heap dumps without using jvisualvm. On a Unix-based Oracle JVM this is usually done with jmap; note that sending signal 3 to the JVM with kill only produces a thread dump unless the JVM was started with -XX:+HeapDumpOnCtrlBreak.

For more details, see http://www.startux.de/index.php/java/45-java-heap-dump
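
For instance, on a HotSpot JVM (the pid and file name are placeholders):

    # Write a binary heap dump that VisualVM, jhat or MAT can open:
    jmap -dump:format=b,file=/tmp/heap.hprof <pid>

    # Plain SIGQUIT only prints a thread dump to stdout; it writes a
    # heap dump only if the JVM runs with -XX:+HeapDumpOnCtrlBreak.
    kill -3 <pid>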

Then you can compare dumps and see whether the allocation patterns have changed.

If that doesn't give you a clue, the cause may be that you are keeping a substring of a very large source string (which, on older JVMs, keeps a reference to the original backing char array), or that you are holding on to operating system resources such as open database connections, etc.
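
A minimal sketch of the substring trap; this applies to HotSpot JVMs before Java 7u6, where String.substring shared the parent string's backing char[]:

    import java.util.ArrayList;
    import java.util.List;

    public class SubstringPitfall {
        private static final List<String> keys = new ArrayList<String>();

        static void remember(String hugeDocument) {
            // LEAK (pre-7u6): the 8-char substring still references the
            // parent String's entire char[], pinning the whole document.
            keys.add(hugeDocument.substring(0, 8));
            // Fix: new String(hugeDocument.substring(0, 8)) forces a copy.
        }
    }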

Have you verified that your connection pool looks good?
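
A sketch of the classic way pools get drained, assuming a generic javax.sql.DataSource; the fix is simply closing in a finally block:

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class PoolUsage {
        // LEAK: if execute() throws, the connection never returns to the pool.
        static void leaky(DataSource ds) throws SQLException {
            Connection c = ds.getConnection();
            c.createStatement().execute("SELECT 1");
            c.close();
        }

        // Safe: close() runs even on exceptions, returning c to the pool.
        static void safe(DataSource ds) throws SQLException {
            Connection c = ds.getConnection();
            try {
                c.createStatement().execute("SELECT 1");
            } finally {
                c.close();
            }
        }
    }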

+1

If you are not using it already, I recommend VisualVM version 1.3.2 with all the plugins. It is a big leap forward from earlier versions.

What is happening to your perm gen space?

What memory settings are you using? Min and max heap, obviously, but what about the perm gen size?
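
For reference, a sketch of how those sizes are typically set on a Java 6-era HotSpot JVM under Tomcat; all values here are illustrative placeholders to tune for your application:

    # Min/max heap plus explicit perm gen sizing:
    export CATALINA_OPTS="-Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m"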

0

Source: https://habr.com/ru/post/887970/

