We recently started using New Relic to monitor our web application, which is hosted on Tomcat 7.0.6, and we noticed that the Tomcat process's memory usage grows constantly: within about a week it consumes the entire server (an AWS High-Memory Double Extra Large instance) and the server becomes unresponsive; the only way to recover is to restart Tomcat. We pass Xms and Xmx arguments when starting Tomcat, yet within a few hours the Tomcat process's memory usage exceeds the Xmx value, and it keeps growing until all of the server's memory is exhausted. Here is the process command:
/usr/java/jdk1.6.0_24//bin/java -Djava.util.logging.config.file=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/conf/logging.properties -Xms8192m -Xmx8192m -javaagent:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/newrelic/newrelic.jar -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Duser.timezone=Asia/Calcutta -Djava.endorsed.dirs=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/endorsed -classpath /xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/bootstrap.jar:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/tomcat-juli.jar -Dcatalina.base=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6 -Dcatalina.home=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6 -Djava.io.tmpdir=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/temp org.apache.catalina.startup.Bootstrap start
Ideally, I would expect this process to use no more than 8 GB of memory, but within a few hours it exceeds 10 GB, and within a few days it exceeds 20 GB, and everything else on the server suffers as a result (I use "top" to view memory usage). How is this possible?
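For reference, here is a sketch of the commands one could use to compare the OS-level view (what "top" reports) against the JVM heap, assuming a Linux host and the Tomcat process id in $PID. Note that -Xmx caps only the Java heap, not the process's total footprint, so the two numbers are not expected to match exactly.

```shell
# Assumption: $PID should be the Tomcat process id; here we use the shell's
# own PID ($$) so the commands are runnable as-is for illustration.
PID=$$

# Resident set size (VmRSS): physical memory the OS has given the process.
# This is what "top" reports, and it covers the Java heap PLUS native
# allocations (thread stacks, JIT code cache, direct buffers, agent
# libraries), so it can sit above -Xmx.
grep -E 'VmRSS|VmSize' /proc/$PID/status

# The heap itself can be inspected with JDK tools, e.g. (not run here):
#   jmap -heap $PID      # heap configuration and current usage
#   jstat -gc $PID 5s    # GC and heap statistics every 5 seconds
```

Comparing VmRSS over time against the jmap/jstat heap figures would show whether the growth is inside the heap or in native memory.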