Trying to raise java.lang.OutOfMemoryError

I am trying to reproduce a java.lang.OutOfMemoryError that one of our clients got in JBoss 4, presumably after running J2EE applications for days / weeks.

I am trying to find a way to make the webapp throw java.lang.OutOfMemoryError within minutes (instead of days / weeks).

One idea that occurred to us is to write a Selenium script that bombards the webapps. Another thing we could do is reduce the heap size of the JVM, but we would prefer not to, because we want to see the limit of our system.

Any suggestions?

PS: I do not have access to the source code, since we just provide a hosting service (of course, I could decompile the class files ...)

+4
7 answers

If you do not have access to the source code of the J2EE application in question, these are your options:

  • Reduce the amount of RAM available to the JVM. You have already identified this option and said that you do not want to do it.

  • Build a J2EE application (it could be just a JSP), deploy it alongside the target application in the same JVM, and have it allocate a ridiculous amount of memory. This will reduce the memory available to the target application, hopefully forcing the failure you are trying to provoke.
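As an illustration of the second option, here is a minimal sketch of such a "memory hog" companion (the class name, sizes, and API are my own invention; wrapped in a servlet or JSP it would do the same thing inside the JBoss JVM):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical "memory hog" deployed next to the target webapp in the same
// JVM: it retains memory so the target has less headroom and should hit
// OutOfMemoryError sooner. Names and sizes are illustrative only.
public class MemoryHog {
    // A static list keeps every block reachable, so the GC cannot reclaim it.
    private static final List<byte[]> RETAINED = new ArrayList<byte[]>();

    /** Allocates and retains blocks * blockSize bytes; returns the total held. */
    public static long consume(int blocks, int blockSize) {
        long total = 0;
        for (int i = 0; i < blocks; i++) {
            RETAINED.add(new byte[blockSize]);
            total += blockSize;
        }
        return total;
    }

    public static void main(String[] args) {
        // Retain 100 MB; in the real experiment you would keep growing this
        // until the target application starts failing.
        long held = consume(100, 1024 * 1024);
        System.out.println("Retaining " + (held / (1024 * 1024)) + " MB");
    }
}
```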

+1

Try some profiling tools to investigate the memory leak. It is also useful to examine a heap dump taken after the OOM occurs, together with the logs. IMHO: shrinking the memory is not the right way to study the problem; you may run into issues that have nothing to do with real production.

+1

Do both, but in a controlled way:

  • Reduce the available memory to an absolute minimum (for example, with -Xms1M -Xmx2M , though I'm afraid your application will not even start under such restrictions)
  • Controlled bombardment: script Selenium over each of the known-good URLs before attacking the suspected culprit.
  • Finally, monitor everything while it runs: start VisualVM and any other monitoring software you can think of (the database layer is the usual suspect).
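For reference, in JBoss 4 the heap flags are typically set via JAVA_OPTS in bin/run.conf; the fragment below is a sketch only (the tiny values are the experiment from the first bullet, and the heap-dump flag is an optional extra so a crash leaves evidence behind):

```shell
# bin/run.conf -- sketch; -Xms1M/-Xmx2M are almost certainly too small for
# JBoss itself to boot, so expect to raise them until the server barely starts.
JAVA_OPTS="-Xms1M -Xmx2M -XX:+HeapDumpOnOutOfMemoryError"
```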
0

The root of the problem is most likely a memory leak in the webapp the client is running. To track it down, you need to run the application under a representative workload with memory profiling enabled. Take a few snapshots and then use the profiler to compare them and see which objects are leaking. While having the source code would be ideal, you should at least be able to figure out where the leaked objects are being allocated. Then you need to track down the cause.
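A profiler is the right tool for the snapshots, but as a crude, programmatic stand-in for the "take two snapshots and diff them" idea (class and method names here are mine, not from any profiler API):

```java
// Crude illustration of snapshot comparison: measure live heap before and
// after a workload and look at the growth. Real investigations should use a
// profiler or heap-dump analyzer, which also tells you WHICH objects grew.
public class HeapSnapshot {
    /** Best-effort measurement of currently used heap bytes. */
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // only a hint to the JVM, but it stabilizes the reading
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedBytes();
        // Simulated leak: 50 MB that stays reachable between the snapshots.
        byte[][] leak = new byte[50][];
        for (int i = 0; i < leak.length; i++) {
            leak[i] = new byte[1024 * 1024];
        }
        long after = usedBytes();
        System.out.println("Growth between snapshots: "
                + ((after - before) / (1024 * 1024)) + " MB"
                + " (still referencing " + leak.length + " blocks)");
    }
}
```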

However, if your client will not release the binaries so that you can run a system identical to the one they are running, you are kind of stuck, and you will need to get the client to perform the profiling and leak detection themselves.

BTW - there isn't much point in forcing the webapp to throw OutOfMemoryError. It will not tell you why this is happening, and without understanding why, you cannot do much.

EDIT

There is no point in "measuring the limit" if the root cause is a memory leak in the client's code. Assuming you are providing a servlet hosting service, the best you can do is give the client instructions for debugging memory leaks ... and hand the problem back to them. And if they have a support contract that (in effect) requires you to debug their code, then they must provide you with the source code to do your job.

0

If you are using Sun Java 6, you may want to attach to the application with jvisualvm from the JDK. This will let you do in-place profiling without changing anything in your setup, and possibly identify the culprit right away.

0

If you do not have the source, decompile it - at least if you believe the terms of use allow it and you live in a free country. You can use Java Decompiler or JAD.

0

In addition to everything else, I have to say that even if you can reproduce the OutOfMemory error and find out where it occurred, you probably still haven't learned anything worth knowing.

The problem is that OOM is thrown when an allocation cannot be satisfied. But the real problem is not that allocation; it is that other allocations in other parts of the code were never released (references dropped and memory garbage-collected). The allocation that fails may have nothing to do with the source of the problem (no pun intended).
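A small sketch of that point (names and sizes are my own): the allocation that finally fails is often an innocent one, while the leak sits elsewhere.

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates that the allocation triggering OutOfMemoryError is often not
// the leak. Run with a small heap (e.g. -Xmx32m) and a large iteration cap:
// the OOM is typically thrown while allocating the innocent scratch buffer,
// even though LEAK is what actually fills the heap.
public class InnocentAllocation {
    static final List<byte[]> LEAK = new ArrayList<byte[]>(); // the real culprit

    /** Leaks 1 MB per iteration; stops after maxIters or on OutOfMemoryError. */
    public static int leakUntil(int maxIters) {
        int i = 0;
        try {
            for (; i < maxIters; i++) {
                LEAK.add(new byte[1024 * 1024]);            // retained forever: the leak
                byte[] scratch = new byte[8 * 1024 * 1024]; // short-lived, innocent
                scratch[0] = 1; // touch it so it is observably used
            }
        } catch (OutOfMemoryError e) {
            LEAK.clear(); // release the hoard so we can still report something
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println("Completed iterations: " + leakUntil(20));
    }
}
```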

This is all the more of a problem in your case, since it takes weeks before the crash occurs, suggesting either a rarely used feature, an abnormal code path, or a configured heap that is HUGE relative to what would be needed if the code were OK.

It might be worth asking why that particular amount of memory is configured for JBoss and not something else. If it was recommended by the vendor, they may already be aware of the leak, and the setting may be there to mitigate the consequences of the bug.

For this kind of error, it is really worth having an idea of which code path causes the problem, so you can run targeted tests. And watch with a profiler, so you can see at runtime which objects (lists, maps, etc.) grow without ever shrinking.

That will point you at the right classes to decompile and inspect for what is wrong with them. (Perhaps something is being closed or cleared in the try block when it should be in a finally block.)
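For example, here is the classic shape of that bug in pre-Java-7 code (method names and the single-byte read are my own illustration):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// The classic resource-leak shape: a close() in the try body is skipped
// whenever an earlier statement throws, so streams (and the native file
// handles behind them) pile up until something runs out.
public class ResourceClose {
    /** Buggy: if read() throws, close() never runs and the stream leaks. */
    static int firstByteLeaky(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        int b = in.read();
        in.close();
        return b;
    }

    /** Fixed: close() runs whether read() succeeds or throws. */
    static int firstByteSafe(String path) throws IOException {
        InputStream in = new FileInputStream(path);
        try {
            return in.read();
        } finally {
            in.close();
        }
    }
}
```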

In any case, good luck. I think I would rather search for a needle in a haystack: when you find the needle, you at least know that you have found it :)

0

Source: https://habr.com/ru/post/1304124/

