Java GC periodically falls into long runs of Full GC cycles

Environment:

Sun JDK 1.6.0_16, VM settings: -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024 -Xmx1024M -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also tried) -XX:MaxPermSize=128M
OS: Windows Server 2003
Processor: Intel Xeon 5130, 4 cores, 2.0 GHz
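For reference, those flags written out as one full launch command, with GC logging added so Full GC events can be correlated with timestamps. This is a sketch, not the poster's actual command: `app.jar` is a placeholder, and note that `-Xms1024` as quoted is missing its `M` suffix (it would request a 1024-byte initial heap), so the suffix is added here.

```shell
# Hypothetical command line; app.jar is a placeholder for the real launcher.
java -Xms1024M -Xmx1024M \
     -XX:NewSize=448m -XX:MaxNewSize=448m \
     -XX:SurvivorRatio=4 \
     -XX:MaxPermSize=128M \
     -XX:+UseConcMarkSweepGC -XX:+DisableExplicitGC \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -jar app.jar
```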

My application: it performs high-intensity concurrent operations (Java 5 concurrency), each one committing to an Oracle database. It runs roughly 20-30 worker threads performing tasks.

It runs inside a JBoss web container.

The GC starts out working fine: I see many minor GCs, and the CPU shows a healthy load, all 4 cores at 40-50%, with a stable CPU graph.

Then, after about a minute of good behavior, the CPU load on 2 of the 4 cores drops toward 0% and becomes unstable, going up and down ("teeth"). I can see that my threads slow down (I have monitoring), and that the GC starts producing a lot of Full GCs. This goes on for the next 4-5 minutes, then for a short period of about 1 minute it returns to normal, but soon afterwards the whole bad pattern repeats.

Question: why do I get such frequent Full GCs? How can I prevent this?

I have played with SurvivorRatio; it does not help.

I noticed that the application works fine until the first Full GC appears, i.e. as long as there is enough free memory. After that it performs poorly.
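One way to cross-check what external monitoring shows is to read the collector counters directly from inside the JVM. A minimal sketch using the standard `java.lang.management` API; the bean names vary by collector (with CMS they are typically "ParNew" for the young generation and "ConcurrentMarkSweep" for the old one):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcWatch {
    public static void main(String[] args) {
        // Print cumulative collection counts and times per collector;
        // polling this periodically reveals a Full GC storm as a rapidly
        // growing count on the old-generation bean.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                    + " count=" + gc.getCollectionCount()
                    + " time=" + gc.getCollectionTime() + "ms");
        }
    }
}
```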

My GC log:

  • starts well
  • then a long run of Full GCs (many of them)

 1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
 1029.333: [GC 803279K(991232K), 0.0927470 secs]
 1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
 1030.634: [GC 625957K(991232K), 0.0763656 secs]
 1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
 1033.281: [GC 649899K(991232K), 0.0378358 secs]
 1035.910: [GC 813948K(991232K), 0.3540375 secs]
 1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
 1038.435: [GC 710309K(991232K), 0.1370703 secs]
 1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
 1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
 1044.093: [GC 620103K(991232K), 0.0695221 secs]
 1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
 1053.739: [GC 942140K(991232K), 0.5410483 secs]
 1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
 1061.257: [GC 786274K(991232K), 0.3106603 secs]
 1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
 1071.192: [GC 945999K(991232K), 0.5401515 secs]
 1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
 1079.754: [GC 936641K(991232K), 0.5321197 secs]
+4
2 answers

This does not look like a memory leak; it is entirely possible that all this memory is genuinely in use by the JVM, which would explain the frequent Full GCs. Could you split the work across multiple processes? That is, instead of a single process with 20-30 threads, run 5 processes with 4-5 threads each?

Also, is there a reason you have those NewSize, MaxNewSize and SurvivorRatio JVM options? Have you measured a significant performance improvement from them? My first approach when tuning any application is to run it with minimal JVM settings and change things only when a newly added parameter has a measurable impact.

+2

From the last two lines of your log (before you edited your question):

 397.245: [Full GC 660160K->443379K(660160K), 2.7433121 secs]
 401.793: [Full GC 660160K->446464K(660160K), 2.7697340 secs]

You have definitely hit the memory limit, which is 660160K. That limit is the total usable space, not counting the permanent generation: the whole heap minus one of the survivor spaces. (Link)

Every four seconds you create roughly 220 MB of new gc'able objects, and total memory usage appears to grow at about 1 MB/s. So after a while the JVM does nothing but Full GCs, until it runs out of free space entirely.
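Those rates can be re-derived from any two consecutive Full GC entries in the question's log. A small sketch of the arithmetic (log values copied from the question; the exact MB/s depends on which pair of entries you pick, so it will not match the answer's estimate exactly):

```java
public class GcRates {
    public static void main(String[] args) {
        // Two consecutive Full GC entries from the log (sizes in KB):
        // 1047.870: [Full GC 991231K->626514K(991232K), 3.87 secs]
        // 1056.343: [Full GC 991232K->634157K(991232K), 3.91 secs]
        double t1 = 1047.870, t2 = 1056.343;
        long before2 = 991232, after1 = 626514, after2 = 634157;

        // Garbage allocated between the two collections: the heap refilled
        // from after1 back up to before2.
        double allocRateMbPerSec = (before2 - after1) / 1024.0 / (t2 - t1);

        // Live-set growth (the leak suspect): difference of the "after" sizes.
        double growthMbPerSec = (after2 - after1) / 1024.0 / (t2 - t1);

        System.out.printf("alloc ~%.0f MB/s, live-set growth ~%.1f MB/s%n",
                allocRateMbPerSec, growthMbPerSec);
    }
}
```

For this pair of entries the sketch yields an allocation rate of roughly 42 MB/s and live-set growth of just under 1 MB/s, consistent with the "~1 MB/s" figure above.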

In this scenario I strongly doubt that adjusting the GC settings will help for long, simply because you hit the limit after only 6 minutes.

It looks like you need to hunt for a memory leak, or for retained references to large unused objects (result sets, DOM trees, ...).
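As an illustration of the kind of retained reference meant here, a minimal hypothetical sketch (not the poster's code): a static, ever-growing cache keeps every fetched result reachable, so Full GCs can reclaim nothing and the live set only grows.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Anti-pattern: a static collection that is only ever appended to.
    // Everything added here stays strongly reachable for the life of the JVM.
    static final List<byte[]> RESULT_CACHE = new ArrayList<>();

    static void processTaskLeaky() {
        byte[] rows = new byte[1024];   // stand-in for a fetched result set
        RESULT_CACHE.add(rows);         // leak: reference retained forever
    }

    static void processTaskFixed() {
        byte[] rows = new byte[1024];
        // ... use rows, then let it fall out of scope: it becomes gc'able,
        // so young-generation GCs can reclaim it cheaply.
    }
}
```

A heap dump comparison between two points in time (e.g. with jmap and a heap analyzer) is the usual way to find which collection is growing like this.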

With your new values the situation is the same. You still hit the same upper memory limit, just a few minutes later, because you assigned more memory. It really smells like a memory leak. And you are still producing gc'able garbage at roughly 60 MB/s.

+3

Source: https://habr.com/ru/post/1334172/
