As with any memory management issue, this is a long story, so bear with me.
We have an application that suffers from some memory management problems, so I have been profiling it to understand where the problem lies. Today I came across this question:
Tomcat session eviction to avoid OutOfMemoryError exceptions
... which seems to match what I am seeing in the profiler. Basically, if I hit the application with a bunch of users via JMeter, heap usage stays high for a long time and only comes back down once the sessions start to expire. However, unlike the poster in that thread, I have the source and the ability to try implementing persistent sessions in Tomcat, which is what I attempted today, with limited success. I suspect there is some configuration setting I am missing. Here is what I have in context.xml:
    <Manager className='org.apache.catalina.session.PersistentManager'
             saveOnRestart='false'
             maxActiveSessions='5'
             minIdleSwap='0'
             maxIdleSwap='1'
             maxInactiveInterval='600'
             maxIdleBackup='0'>
        <Store className='org.apache.catalina.session.FileStore'/>
    </Manager>
And in web.xml I have this:
    <session-config>
        <session-timeout>10</session-timeout>
    </session-config>
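A note on units: session-timeout in web.xml is expressed in minutes, while the Manager's maxInactiveInterval is in seconds, so both of the settings above correspond to ten minutes. A minimal sketch of the one-hour equivalents I would eventually switch to (same PersistentManager setup assumed):

    <!-- web.xml: one hour, expressed in minutes -->
    <session-config>
        <session-timeout>60</session-timeout>
    </session-config>

    <!-- context.xml: one hour, expressed in seconds -->
    <Manager ... maxInactiveInterval='3600'>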
Ideally, I would like the sessions to time out after an hour (as sketched above), but for testing purposes ten minutes is fine. Now, after playing with a few things and finally arriving at these settings, I looked at the application in the profiler, and this is what I see:

As you can see in the image, I have added some annotations. The part circled with a question mark is what I understand the least. You can see the several different points where I clicked JConsole's Perform GC button, and you can see the part of the graph where I hit the application with many clients via JMeter.
If I remember correctly (I would have to go back and re-test to document it properly), before I enabled persistent sessions, clicking the GC button did almost nothing to clear the heap. The strange bit here is that it seems I have to trigger GC manually for the persistent sessions to actually help at all.
Alternatively, is this just a plain memory leak? Earlier today I took a few heap dumps and loaded them into the Eclipse Memory Analyzer tool and ran its leak detection, and everything it found reinforced the theory that this is a session-size problem; the only leak suspect it reported was java.util.concurrent.ConcurrentHashMap$Segment, which led me to this thread: Memory fully used by Java ConcurrentHashMap (under Tomcat)
... which makes me think that the application itself is not actually leaking.
Other relevant details:
- Running/testing this on my local machine at the moment, which is where all of these results come from.
- Using Tomcat 6.
- This is a JSF 2.0 application.
- I added the system property -Dorg.apache.catalina.STRICT_SERVLET_COMPLIANCE=true as per the Tomcat documentation.
So, I think there are a few questions here:
1. Have I configured this correctly?
2. Is there a memory leak?
3. What is happening in this memory profile?
4. Is it (relatively) normal?
Thanks in advance.
UPDATE
So, I tried Sean's advice and found some interesting new things.
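The key piece was a session listener: essentially a plain HttpSessionListener registered in web.xml. The sketch below is illustrative rather than the exact class (the package, class name, counter, and log lines are placeholders):

    package com.example;

    import java.util.concurrent.atomic.AtomicInteger;
    import javax.servlet.http.HttpSessionEvent;
    import javax.servlet.http.HttpSessionListener;

    // Counts live sessions so their lifecycle can be compared against the heap graph.
    public class SessionCountListener implements HttpSessionListener {

        private static final AtomicInteger liveSessions = new AtomicInteger();

        public void sessionCreated(HttpSessionEvent se) {
            System.out.println("Session created: " + se.getSession().getId()
                    + " (live: " + liveSessions.incrementAndGet() + ")");
        }

        public void sessionDestroyed(HttpSessionEvent se) {
            System.out.println("Session destroyed: " + se.getSession().getId()
                    + " (live: " + liveSessions.decrementAndGet() + ")");
        }
    }

... registered in web.xml with:

    <listener>
        <listener-class>com.example.SessionCountListener</listener-class>
    </listener>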
The session listener has worked great and played an important role in analyzing this scenario. Something else I forgot to mention is that the load on the application is actually concentrated on a single page, which is almost comically complex. So when testing, I sometimes hit that page and sometimes avoid it. In the next round of tests, this time using the session listener, I found the following:
1) Hit the application with dozens of clients, going only to a simple page. I noted that the sessions were released as expected after the timeout, and the memory was released with them. The same works fine with a trivial number of clients in the difficult case, i.e. hitting the "big" page.
2) Then I tried the complex use case with several dozen clients. This time quite a few more sessions were created than expected; each client seems to have initiated one to three sessions. When the sessions timed out, a small amount of memory was released but, according to the session listener, only about a third of the sessions were actually destroyed. Confusingly, the folder that is supposed to hold the persisted session data is empty. Most of the used memory is also retained. However, exactly one hour after the stress test, the garbage collector runs and everything returns to normal.
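As a cross-check on the listener's numbers, one thing I may try is comparing them against what the Manager itself reports over JMX. A rough sketch, assuming Tomcat 6's standard Catalina MBean naming; the /myapp context path and host name are placeholders:

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    // Reads the session counters that Tomcat's Manager exposes over JMX,
    // to compare against the numbers reported by the session listener.
    public class ManagerStats {

        public static void dump() throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName manager = new ObjectName("Catalina:type=Manager,path=/myapp,host=localhost");
            System.out.println("activeSessions = " + server.getAttribute(manager, "activeSessions"));
            System.out.println("expiredSessions = " + server.getAttribute(manager, "expiredSessions"));
        }
    }

The same attributes are visible under Catalina > Manager in JConsole's MBeans tab, which avoids writing any code at all.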
So, the follow-up questions are:
1) Why are sessions handled properly in the simple case, but no longer managed correctly once things get more intense? Is the session listener reporting bad information, or is it something about the way JMeter drives the sessions?
2) Why does the garbage collector wait an hour to run? Is there a system parameter that forces the garbage collector to run at a fixed interval, or some configuration parameter I have set that does?
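One possibility I have not verified yet, so treat it as an assumption: on Sun JVMs the RMI distributed garbage collector forces a full collection at a fixed interval, and on Java 6 that interval defaults to one hour. Since JConsole connects over RMI, that would line up suspiciously well with what I am seeing. The interval is controlled by these system properties (values in milliseconds):

    -Dsun.rmi.dgc.client.gcInterval=3600000
    -Dsun.rmi.dgc.server.gcInterval=3600000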
Thanks again for the continued help.
UPDATE 2
Just a quick note: playing with this a little more, I found out why I was getting conflicting reports on the number of live sessions; it is because I am using persistent sessions. If I turn them off, everything works as expected. The Tomcat documentation does say that persistent sessions are an experimental feature. They seem to trigger session events that the listener picks up at times when you would not expect them.
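A guess about where those unexpected events come from, and a sketch of how one might confirm it (the class below is hypothetical, not part of the application): when the PersistentManager swaps a session out to the FileStore and back, session attributes that implement HttpSessionActivationListener receive passivation and activation callbacks, so logging those alongside the create/destroy events should show whether the "extra" activity is really just swapping.

    import java.io.Serializable;
    import javax.servlet.http.HttpSessionActivationListener;
    import javax.servlet.http.HttpSessionEvent;

    // A session attribute that logs when its session is passivated to the Store
    // or activated back into memory by the PersistentManager. It must be
    // Serializable so the FileStore can persist it with the session.
    public class SwapTracker implements HttpSessionActivationListener, Serializable {

        private static final long serialVersionUID = 1L;

        public void sessionWillPassivate(HttpSessionEvent se) {
            System.out.println("Session passivating: " + se.getSession().getId());
        }

        public void sessionDidActivate(HttpSessionEvent se) {
            System.out.println("Session activated: " + se.getSession().getId());
        }
    }

Dropping an instance into each new session (for example, session.setAttribute("swapTracker", new SwapTracker()) from the listener above) would make the passivation and activation traffic visible in the logs.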