OutOfMemoryError: GC overhead limit exceeded while threads wait for a lock on Log4j objects

Can anyone help me pinpoint the exact problem? Is it the JVM, Log4j, or something else in our application?

We run a multithreaded application on JDK 1.6.0_24 on a Solaris 10 server (SUNW,Sun-Fire-V240). It uses RMI calls to communicate with the client.

Our application hung. I found the OutOfMemoryError shown below in the thread dump. As I understand it, this error is thrown because the GC was able to reclaim only about 2% of the heap.

  # java.lang.OutOfMemoryError: GC overhead limit exceeded
     Heap
      PSYoungGen total 304704K, used 154560K [0xe0400000, 0xfbc00000, 0xfbc00000)
       eden space 154560K, 100% used [0xe0400000,0xe9af0000,0xe9af0000)
       from space 150144K, 0% used [0xf2960000,0xf2960000,0xfbc00000)
       to space 145856K, 0% used [0xe9af0000,0xe9af0000,0xf2960000)
      PSOldGen total 897024K, used 897023K [0xa9800000, 0xe0400000, 0xe0400000)
       object space 897024K, 99% used [0xa9800000,0xe03ffff0,0xe0400000)
      PSPermGen total 28672K, used 27225K [0xa3c00000, 0xa5800000, 0xa9800000)
       object space 28672K, 94% used [0xa3c00000,0xa5696580,0xa5800000)

In my case this seems to be because the GC cannot reclaim memory while so many threads are backed up. Looking at the thread dump, most of the threads are waiting for a lock on an org.apache.log4j.Logger. We are using log4j 1.2.15.

If you look at the first stack trace below, that thread holds locks on two objects, and other threads (~50) are waiting to acquire a lock. Almost the same picture could be observed for 20 minutes.

Here is the thread dump:

      pool-3-thread-51 "prio = 3 tid = 0x00a38000 nid = 0xa4 runnable [0xa0d5f000] java.lang.Thread.State: RUNNABLE at java.text.DateFormat.format (DateFormat.javahaps16) at org.apache. log4j.helpers.PatternParser $ DatePatternConverter.convert (PatternParser.java:443) at org.apache.log4j.helpers.PatternConverter.format (PatternConverter.java:65) at org.apache.log4j.PatternLayout.format (PatternLayout.java: 506) at org.apache.log4j.WriterAppender.subAppend (WriterAppender.javahaps10) at org.apache.log4j.RollingFileAppender.subAppend (RollingFileAppender.java:276) at org.apache.log4j.WriterAppender.appendjavaApp : 162) at org.apache.log4j.AppenderSkeleton.doAppend (AppenderSkeleton.java:251) - locked (a org.apache.log4j.RollingFileAppender) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppach66 (App ) at org.apache.log4j.Category.callAppenders (Category.java:206) - locked (a org.apache.log4j.Logger) at org.apache.log4j.Category.forcedLog (Category.javahaps91) at o  rg.apache.log4j.Category.info (Category.java:666) at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData (NioNotificationHandler.java:296) at com.airvana.faultServer.niohandlersHandler. java: 145) at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream (SimpleChannelHandler.java:105) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream (DefaultChannelPipeline.javaβˆ—67) "Timer-3" prio = 3 tid = 0x0099a800 nid = 0x53 waiting for monitor entry [0xa1caf000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.log4j.AppenderSkeleton.doAppend (AppenderSkeleton.java:231) - waiting to lock (a org.apache.log4j.RollingFileAppender) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders (AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders (Category.java:206. - locked apache.log4j.spi.RootLogger) at org.apache.log4j.Category.forcedLog (Category.javahaps91) at  org.apache.log4j.Category.info (Category.java:666) at com.airvana.controlapp.export.AbstractOMDataCollector.run (AbstractOMDataCollector.java:100) at java.util.TimerThread.mainLoop (Timer.javahaps12) at java.util.TimerThread.run (Timer.javarige62) "TrapHandlerThreadPool: Thread-10" prio = 3 tid = 0x014dac00 nid = 0x4f waiting for monitor entry [0xa1d6f000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.log4j.AppenderSkeleton.doAppend (AppenderSkeleton.java:231) - waiting to lock (a org.apache.log4j.RollingFileAppender) at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppachders ( 66) at org.apache.log4j.Category.callAppenders (Category.java:206) - locked (a org.apache.log4j.Logger) at org.apache.log4j.Category.forcedLog (Category.javahaps91) at org .apache.log4j.Category.info (Category.java:666) at com.airvana.faultServer.db.ConnectionPool.printDataSourceStats (ConnectionPool.java:146) at com.airvana.faultServer.db.SQLUtil.freeConnection (SQLUtil.java :  267) at com.airvana.faultServer.db.DbAPI.addEventOrAlarmOptimized (DbAPI.java:904) at com.airvana.faultServer.eventProcessing.EventProcessor.processEvent (EventProcessor.java:24) at com.airvana.faultServerilererilererilererilererilererilererilererilererererilererer.fer .processTrap (BasicTrapFilter.java:80) at com.airvana.faultServer.eventEngine.EventEngine.notifyTrapProcessors (EventEngine.javahaps14) at com.airvana.faultServer.eventEngine.NodewiseTrapQueue.run (NodewiseTrapQue. atj. 
airvana.common.utils.ThreadPool $ PoolThread.run (ThreadPool.javahaps56) "RMI TCP Connection (27927) -10.193.3.41" daemon prio = 3 tid = 0x0186c800 nid = 0x1d53 waiting for monitor entry [0x9f84e000] java.lang .Thread.State: BLOCKED (on object monitor) at org.apache.log4j.AppenderSkeleton.doAppend (AppenderSkeleton.java:231) - waiting to lock (a org.apache.log4j.RollingFileAppender) at org.apache.log4j.helpers .AppenderAttachableImpl.appendLoopOnAppenders (AppenderAttachableImpl.java:66) at org.apache.log4j.Category.callAppenders (Category.java:206) - locked  (a org.apache.log4j.Logger) at org.apache.log4j.Category.forcedLog (Category.javahaps91) at org.apache.log4j.Category.info (Category.java:666) at com.airvana.faultServer .processCommunications.ConfigAppCommReceiver.sendEvent (ConfigAppCommReceiver.java:178) at sun.reflect.GeneratedMethodAccessor14.invoke (Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.. (Method.javaPoint97) at sun.rmi.server.UnicastServerRef.dispatch (UnicastServerRef.java:305) at sun.rmi.transport.Transport $ 1.run (Transport.java:159) at java.security.AccessController.doPrivileged (Native Method) at sun.rmi.transport.Transport.serviceCall (Transport.java:155) at sun.rmi.transport.tcp.TCPTransport.handleMessages (TCPTransport.java UP35) at sun.rmi.transport.tcp.TCPTransport $ ConnectionHandler.run0 (TCPTransport.java:790) at sun.rmi.transport.tcp.TCPTransport $ ConnectionHandler.run (TCPTransport.java:649) at java.util.concurrent.ThreadPoolExecutor $ Wor  ker.runTask (ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor $ Worker.run (ThreadPoolExecutor.java:908) at java.lang.Thread.run (Thread.java:619) "pool-3-thread -49 "prio = 3 tid = 0x01257800 nid = 0xa1 waiting for monitor entry [0xa0def000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.log4j.Category.callAppenders (Category.java:204) - waiting to lock (a org.apache.log4j.Logger) at org.apache.log4j.Category.forcedLog (Category.javahaps91) at org.apache.log4j.Category.info (Category.java:666) at com .airvana.faultServer.niohandlers.NioNotificationHandler.processSeqNumber (NioNotificationHandler.javaβ–Ί 4848) at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData.otioNerner.nernerner.nernerner.nerndernerndlernerndlernerndlernerndlernerndlernlernerndlernerndlernerndlernerndlernerndlernerndlernerndlernerndler.f .java: 145) at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream (SimpleChannelHandler.java:105) at org.jboss.netty.channel.DefaultChannelPipe  line.sendUpstream (DefaultChannelPipeline.java//67) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChannelHandlerContext.sendUpstream (DefaultChannelPipeline.java:803) at org.jboss.netty.channel.Channels.fireMessageReceived Chann at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived (FrameDecoder.java:324) at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode (FrameDecoder.java:306) at org.jboss .netty.handler.codec.frame.FrameDecoder.messageReceived (FrameDecoder.java:223) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream (SimpleChannelUpstreamHandler.java:87) at org.jboss.netty.channp.Defeline (DefaultChannelPipeline.javaPoint67) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChannelHandlerContext.sendUpstream (DefaultChannelPipeline.java:803) at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageHand9 Read org.jboss.netty.channel.SimpleChanne  lUpstreamHandler.handleUpstream 
(SimpleChannelUpstreamHandler.java:87) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream (DefaultChannelPipeline.java UP67) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChannelPanelphelpelendpannelhandpel thread-44 "prio = 3 tid = 0x00927800 nid = 0x9b waiting for monitor entry [0xa0f0f000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.log4j.Category.callAppenders (Category.java:204 ) - waiting to lock (a org.apache.log4j.Logger) at org.apache.log4j.Category.forcedLog (Category.javahaps91) at org.apache.log4j.Category.info (Category.java:666) at com.airvana.faultServer.niohandlers.NioNotificationHandler.parseAndQueueData (NioNotificationHandler.java:296) at com.airvana.faultServer.niohandlers.NioNotificationHandler.messageReceived (NioNotificationHandler.javajannChannel.handle.handle.handlann.helle.ndl SimpleChannelHandler.java:105) at org.jboss.netty.channel.DefaultChannelPipe  line.sendUpstream (DefaultChannelPipeline.java//67) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChannelHandlerContext.sendUpstream (DefaultChannelPipeline.java:803) at org.jboss.netty.channel.Channels.fireMessageReceived Chann at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived (FrameDecoder.java:324) at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode (FrameDecoder.java:306) at org.jboss .netty.handler.codec.frame.FrameDecoder.messageReceived (FrameDecoder.java:223) at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream (SimpleChannelUpstreamHandler.java:87) at org.jboss.netty.channp.Defeline (DefaultChannelPipeline.javaPoint67) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChannelHandlerContext.sendUpstream (DefaultChannelPipeline.java:803) at org.jboss.netty.handler.timeout.ReadTimeoutHandler.messageHand9 Read org.jboss.netty.channel.SimpleChanne  lUpstreamHandler.handleUpstream (SimpleChannelUpstreamHandler.java:87) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream (DefaultChannelPipeline.javahaps6767) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChanneleHendendPannelhelndend at org.jboss.netty.channel.Channels.fireMessageReceived (Channels.javahaps8585) at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived (FrameDecoder.java:324) at org.jboss.netty.handler .codec.frame.FrameDecoder.callDecode (FrameDecoder.java:306) at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived (FrameDecoder.java:221) at org.jboss.netty.channel.SimpleChannelUpstreamHstreamler.handle (SimpleChannelUpstreamHandler.java:87) at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream (DefaultChannelPipeline.javahaps67) at org.jboss.netty.channel.DefaultChannelPipeline $ DefaultChannelHandlerContext.sendUpstream organnel DefaultChannel jboss.netty.hand  ler.execution.ChannelEventRunnable.run (ChannelEventRunnable.java:76) at org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor $ ChildExecutor.run (OrderedMemoryAwareThreadPoolExecutor.java data14) at javarutut.erecutut.ututututerututor.ilutututututor.ilutututorutilututorutilutututor ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor $ Worker.run (ThreadPoolExecutor.java:908) at java.lang.Thread.run (Thread.java:619) 
+6
3 answers

The description of this problem is similar to bug 41214. I'm not sure whether it is the cause of your problem, but some of the stack frames around the contended monitor look similar to those in that bug report.

In any case, you can follow Stephen's advice and check whether you simply have too many logging calls coming from multiple threads, which leads to heavy lock contention. Raising the logger level to a less verbose one (e.g., from INFO to WARN) can help, although that is not advisable if you actually need those log entries; I would only do it after careful consideration.
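For illustration, a minimal sketch of what raising the level might look like in log4j.properties (the appender name, log file name, and pattern below are placeholders, not taken from the actual configuration):

    # Raise the root level so that INFO calls are filtered out before they
    # ever reach the appender, and therefore no longer contend for its lock.
    log4j.rootLogger=WARN, FILE
    log4j.appender.FILE=org.apache.log4j.RollingFileAppender
    log4j.appender.FILE.File=faultServer.log
    log4j.appender.FILE.MaxFileSize=10MB
    log4j.appender.FILE.MaxBackupIndex=5
    log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
    log4j.appender.FILE.layout.ConversionPattern=%d %-5p [%t] %c - %m%n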

+2

The "GC overhead limit exceeded" OutOfMemoryError occurs when the JVM decides that too large a percentage of time is being spent running the garbage collector. It is a classic sign of a heap that is nearly full.

When the heap is almost full, the JVM spends more and more time collecting garbage to recover an ever smaller amount of memory, and correspondingly less time doing useful work.
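For reference, HotSpot exposes the thresholds behind this decision as JVM options. The defaults shown below are the documented values for the JDK 6 parallel collector and should be verified against your exact build:

    # The error is raised when more than GCTimeLimit percent of total time is
    # spent in GC while less than GCHeapFreeLimit percent of the heap is recovered.
    -XX:GCTimeLimit=98
    -XX:GCHeapFreeLimit=2
    # The check itself can be switched off (rarely a good idea):
    -XX:-UseGCOverheadLimit

The 2% figure mentioned in the question corresponds to the default GCHeapFreeLimit.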

My theory is that your loggers are backed up because there is not enough time between GC cycles to keep up with the logging rate. So the large number of blocked threads is a secondary symptom, not the root cause of your problem.


Assuming the above is correct, the short-term fix is to restart the application with JVM options that give it a larger heap. You could also lower the GC overhead threshold so that the application dies sooner. (That may sound strange, but it is probably better for the application to die quickly than to limp along for minutes or hours.)
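A rough sketch of such a restart command line; the heap sizes, the lowered time limit, and the jar name are placeholder values only, not a tested recommendation for this server:

    # Larger heap, plus a lower GC time limit so the JVM gives up sooner
    # instead of thrashing for hours.
    java -Xms1024m -Xmx1536m \
         -XX:GCTimeLimit=90 \
         -jar faultServer.jar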

The real solution is to track down why you are running out of heap space. You need to turn on GC logging and watch the memory-usage trends while the application runs ... for hours, days, weeks. If memory usage keeps growing over the long term, you most likely have some kind of memory leak, and you will need a memory profiler to track it down.
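A minimal set of HotSpot options for that kind of observation (the log file path is only an example):

    # Write timestamped GC activity to a file so heap-usage trends can be
    # graphed over days or weeks.
    -verbose:gc
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps
    -Xloggc:/var/tmp/faultServer-gc.log

If the trend does point to a leak, a heap dump taken with something like "jmap -dump:live,format=b,file=heap.hprof <pid>" can then be loaded into a memory profiler for analysis.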

+7

We had not seen this problem before.

I'm not sure why, but the problem seems to have been narrowed down to the JDK upgrade from 1.6.0_20 to 1.6.0_24.

0

Source: https://habr.com/ru/post/889521/

