Confusing Tomcat session persistence memory profile

As with any memory management issue, this is a long story, so bear with me.

We have an application that suffers from some memory management problems, so I am trying to profile it to understand where the problem lies. Today I came across this question:

Tomcat session clipping for OutOfMemoryError exception

... which seems to match what I am seeing in the profiler. Basically, if I hit the application with a bunch of users via JMeter, heap usage stays high for a long time, until the sessions finally begin to expire. However, unlike the poster in that thread, I have the source and the ability to try to implement persistent sessions with Tomcat, which is what I tried to do today, with limited success. I think there is some configuration setting I am missing. Here is what I have in context.xml:

    <Manager className='org.apache.catalina.session.PersistentManager'
             saveOnRestart='false'
             maxActiveSessions='5'
             minIdelSwap='0'
             maxIdleSwap='1'
             maxInactiveInterval='600'
             maxIdleBackup='0'>
        <Store className='org.apache.catalina.session.FileStore'/>
    </Manager>

And in web.xml I have this:

    <session-config>
        <session-timeout>10</session-timeout>
    </session-config>

Ideally, I would like sessions to time out after an hour, but for testing purposes this is fine. Now, after playing around with a few things and finally arriving at these settings, I watch the application in the profiler, and this is what I see:

[Screenshot: annotated JConsole heap usage graph from the test run]

As you can see in the image, I added some annotations. The part circled with a question mark is what I understand least. You can see where I used JConsole's Run GC button at several different points, and you can see the part of the graph where I hit the application with many JMeter clients.
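For what it's worth, my understanding is that JConsole's Run GC button just invokes the gc operation on the platform Memory MXBean, which behaves like System.gc(). Here is a minimal sketch of triggering the same thing programmatically, in case it helps anyone reproduce the button press from code (the class name and log lines are mine):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    public class ForceGc {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            System.out.println("Heap before: " + memory.getHeapMemoryUsage());
            // The same operation JConsole exposes as a button; like System.gc(),
            // it is only a suggestion to the JVM, not a guaranteed collection.
            memory.gc();
            System.out.println("Heap after:  " + memory.getHeapMemoryUsage());
        }
    }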

If I remember correctly (I would have to go back and retest to document it properly), before I enabled session persistence, clicking the GC button did next to nothing to clear the heap. The strange bit here is that it seems I need to trigger GC manually for the persistence to actually help at all.

Alternatively, is this just a plain memory leak scenario? Earlier today I took a bunch of heap dumps, loaded them into the Eclipse Memory Analyzer tool, and used its leak detection feature, and everything it reported reinforced the theory that this is a session size problem; the only suspected leak was in java.util.concurrent.ConcurrentHashMap$Segment, which led me to this thread: Memory fully used by Java ConcurrentHashMap (under Tomcat)

which makes me think the application itself is not really leaking.

Other relevant details:

- Running/testing this on my local machine at the moment; that is where all these results come from.
- Using Tomcat 6.
- This is a JSF 2.0 application.
- I added the system property -Dorg.apache.catalina.STRICT_SERVLET_COMPLIANCE=true per the Tomcat documentation (set as shown below).
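In case it matters, the property is set the usual way via CATALINA_OPTS; a sketch assuming the standard bin/setenv.sh hook that catalina.sh picks up on startup (my other JVM options are omitted):

    # $CATALINA_BASE/bin/setenv.sh, sourced by catalina.sh on startup
    CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.catalina.STRICT_SERVLET_COMPLIANCE=true"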

So, I think there are a few questions here:

1. Have I configured this correctly?
2. Is there a memory leak?
3. What is happening in this memory profile?
4. Is it (relatively) normal?

Thanks in advance.

UPDATE

So, I tried Sean's advice and found some new and interesting things.

The session listener works great and has played an important role in analyzing this scenario. Something else I forgot to mention is that the load on the application is actually dominated by a single page, which is almost comically complex. So when testing, I sometimes hit this page and sometimes avoid it. In the next round of tests, this time using the session listener, I found the following:

1) Hitting the application with dozens of clients that only visit a simple page, I saw that sessions expired as expected after the timeout, and the memory was released. The same works fine with a trivial number of clients in the difficult case, i.e. hitting the "big" page.

2) Then I tried the complex use case with several dozen clients. This time a few dozen more sessions were created than expected; each client seems to have initiated one to three sessions. When the sessions timed out, a small amount of memory was released, but according to the session listener only about a third of the sessions were destroyed. Paradoxically, the folder that actually holds the persisted session data is empty. Most of the used memory is also retained. But exactly one hour after the stress test, the garbage collector runs and everything returns to normal.

So, the follow-up questions are:

1) Why are sessions handled properly in the simple case, but stop being managed correctly when things get more intense? Is the session listener reporting invalid information, or is JMeter skewing the results in some way?

2) Why does the garbage collector wait one hour to run? Is there a system parameter that says the garbage collector MUST run after a given period of time, or some configuration parameter I am missing?

Thanks again for the continued support.

UPDATE 2

Just a short note: playing with this a little more, I found out why I was getting conflicting reports on the number of live sessions: it is because I am using persistent sessions. If I turn persistence off, everything works as expected. The Tomcat documentation does say that persistent sessions are an experimental feature. It apparently fires the session events the listener listens for at times you would not otherwise expect.

+6
2 answers

Let me try to answer your questions as best I can.

  • For your configuration: it looks good to me on paper. If you want to make sure your sessions are being created and destroyed properly, you should set up a session listener to log or display the list of open sessions (a minimal sketch follows this list). Here is a link to an example

  • I do not think there is a memory leak. The incremental GC is running, and it looks like most of the memory is being promoted to the old generation. But as soon as your application triggers a full GC, memory returns to normal. If your users' sessions need a lot of data stored in them, memory is your trade-off; as long as that data is destroyed when they log out or time out, memory should be fine in the long run. Your question-mark area (???) is the gray zone where your sessions are still active, so the data stored in them is held, and by that point has probably been promoted to the old generation. Once memory is promoted to the old generation, it takes a full GC to clear it, which is what clicking the button did. The application would eventually have scheduled a full GC on its own.

  • I think #2 helps answer this question.

  • I am pretty sure this is normal for a stress test. Try running it again without clicking the "Run GC" button and see if the memory is freed on its own. Note that the garbage collector only performs a full GC when it decides it needs to: clearing the old generation requires pausing the application, so the GC avoids doing it unless it feels it no longer has the resources to properly run the application.
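Since the example link above may not survive, here is a minimal sketch of such a listener, assuming the Servlet 2.5 API that ships with Tomcat 6 (the package, class name, and counter are mine):

    package com.example;

    import java.util.concurrent.atomic.AtomicInteger;
    import javax.servlet.http.HttpSessionEvent;
    import javax.servlet.http.HttpSessionListener;

    public class SessionCounterListener implements HttpSessionListener {

        // Live session count, shared across the webapp.
        private static final AtomicInteger LIVE = new AtomicInteger();

        public void sessionCreated(HttpSessionEvent event) {
            System.out.println("Session created: " + event.getSession().getId()
                    + " (live: " + LIVE.incrementAndGet() + ")");
        }

        public void sessionDestroyed(HttpSessionEvent event) {
            System.out.println("Session destroyed: " + event.getSession().getId()
                    + " (live: " + LIVE.decrementAndGet() + ")");
        }
    }

Register it in web.xml alongside your session-config:

    <listener>
        <listener-class>com.example.SessionCounterListener</listener-class>
    </listener>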

A few other things you might want to do for future tests. Pay attention to how heap usage splits between the young generation and the old generation. In the screenshot we can see an 872 MB heap, so breaking down where the memory sits during stress tests will help you determine exactly where in the heap it is being held.
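One lightweight way to watch that split, assuming the JDK's jstat tool is on your path (the pid and the 5s sampling interval are placeholders):

    jstat -gcutil <tomcat-pid> 5s

Its columns show percentage occupancy per space: S0/S1 and E are the young generation (survivor spaces and eden), O is the old generation, P is the permanent generation, and YGC/FGC count young and full collections.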

Similarly, consider enabling verbose GC logging so you can get a report of the heap's behavior over time. There are plenty of tools that will chart a GC log for you, so you do not need to read it by hand. The log can also be written to a file (-Xloggc:file) if you do not want it sent to standard output.
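A minimal sketch of the relevant flags, assuming a HotSpot JVM of that vintage (add them to your startup options; the log file name is arbitrary):

    -verbose:gc                # print basic GC events
    -XX:+PrintGCDetails        # per-generation sizes before/after each collection
    -XX:+PrintGCTimeStamps     # seconds since JVM start, handy for lining up with tests
    -Xloggc:gc.log             # send the log to a file instead of standard output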

If the session size becomes too large to manage, you may want to offload that memory elsewhere. A few options include using BigMemory, persisting sessions to a database, or otherwise reducing the session size.

+5

I do not see where anyone has pointed out your typo:

 minIdelSwap='0' 

Not sure if this helps anyone, but it will certainly lead to the PersistentManager behaving differently than you might expect.
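For reference, the attribute name the PersistentManager actually expects is minIdleSwap, so the corrected setting would be:

    minIdleSwap='0'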

0

Source: https://habr.com/ru/post/901769/

