Java process memory usage (jcmd vs pmap)

I have a Java application running on Java 8 inside a Docker container. The process starts a Jetty 9 server with the web application deployed. The following JVM parameters are passed: -Xms768m -Xmx768m.

I recently noticed that the process takes up far more memory than expected:

$ ps aux 1
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
app          1  0.1 48.9 5268992 2989492 ?     Ssl  Sep23   4:47 java -server ...

$ pmap -x 1
Address           Kbytes     RSS   Dirty Mode  Mapping
...
total kB         5280504 2994384 2980776

$ jcmd 1 VM.native_memory summary
1:

Native Memory Tracking:

Total: reserved=1378791KB, committed=1049931KB
-                 Java Heap (reserved=786432KB, committed=786432KB)
                            (mmap: reserved=786432KB, committed=786432KB) 

-                     Class (reserved=220113KB, committed=101073KB)
                            (classes #17246)
                            (malloc=7121KB #25927) 
                            (mmap: reserved=212992KB, committed=93952KB) 

-                    Thread (reserved=47684KB, committed=47684KB)
                            (thread #47)
                            (stack: reserved=47288KB, committed=47288KB)
                            (malloc=150KB #236) 
                            (arena=246KB #92)

-                      Code (reserved=257980KB, committed=48160KB)
                            (malloc=8380KB #11150) 
                            (mmap: reserved=249600KB, committed=39780KB) 

-                        GC (reserved=34513KB, committed=34513KB)
                            (malloc=5777KB #280) 
                            (mmap: reserved=28736KB, committed=28736KB) 

-                  Compiler (reserved=276KB, committed=276KB)
                            (malloc=146KB #398) 
                            (arena=131KB #3)

-                  Internal (reserved=8247KB, committed=8247KB)
                            (malloc=8215KB #20172) 
                            (mmap: reserved=32KB, committed=32KB) 

-                    Symbol (reserved=19338KB, committed=19338KB)
                            (malloc=16805KB #184025) 
                            (arena=2533KB #1)

-    Native Memory Tracking (reserved=4019KB, committed=4019KB)
                            (malloc=186KB #2933) 
                            (tracking overhead=3833KB)

-               Arena Chunk (reserved=187KB, committed=187KB)
                            (malloc=187KB) 

As you can see, there is a huge difference between the RSS (2.8 GB) and what the JVM's native memory statistics actually report (1.0 GB committed, 1.3 GB reserved).
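To make the gap concrete, here is a quick sanity check that sums the committed sizes from the NMT output above and subtracts them from the pmap RSS (the per-category sum comes out 2 KB below NMT's own reported total of 1049931 KB, due to rounding in the report):

```java
public class NmtGap {
    // Committed sizes (KB) taken from the jcmd VM.native_memory output above:
    // Heap, Class, Thread, Code, GC, Compiler, Internal, Symbol, NMT, Arena Chunk.
    static final long[] COMMITTED_KB = {
        786432, 101073, 47684, 48160, 34513, 276, 8247, 19338, 4019, 187
    };
    static final long RSS_KB = 2994384; // total RSS reported by pmap -x

    static long committedTotalKb() {
        long total = 0;
        for (long kb : COMMITTED_KB) total += kb;
        return total;
    }

    public static void main(String[] args) {
        long total = committedTotalKb();
        System.out.println("NMT committed: " + total + " KB (~1.0 GB)");
        System.out.println("Unaccounted:   " + (RSS_KB - total) + " KB (~1.9 GB)");
    }
}
```

So roughly 1.9 GB of resident memory is not explained by any NMT category.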

Where does this difference come from, and what is the rest of the RSS allocated for? pmap in verbose mode shows that the bulk of it consists of anonymous ([anon]) mappings. How can I find out what the JVM is using this memory for?

Update: is this memory held by the JVM itself, or by Linux? In other words, how can I determine which part of the RSS is actually in use by the JVM? Any pointers would be appreciated.


We ran into a similar problem with Apache Spark. In our case the source turned out to be Hibernate: native memory allocated by java.util.zip.Inflater.inflateBytes from within Hibernate grew to roughly 1.5 GB and was never released; the issue is tracked at https://hibernate.atlassian.net/browse/HHH-10938?attachmentOrder=desc.
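A common culprit for leaks of this kind is an Inflater (or Deflater) that is never end()'d: its zlib buffers live in native memory, outside the Java heap, and the GC only reclaims them lazily via finalization. A minimal illustrative sketch of the safe pattern (this is not code from the Hibernate issue):

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class InflaterDemo {
    // Decompress with an explicit Inflater. The native zlib state is
    // allocated outside the Java heap, so it must be released with
    // end() -- relying on the finalizer lets RSS grow between GCs.
    static byte[] inflate(byte[] compressed, int originalLength) throws Exception {
        Inflater inflater = new Inflater();
        try {
            inflater.setInput(compressed);
            byte[] out = new byte[originalLength];
            int n = inflater.inflate(out);
            if (n != originalLength) throw new IllegalStateException("short inflate");
            return out;
        } finally {
            inflater.end(); // frees the native buffers immediately
        }
    }

    static byte[] deflate(byte[] data) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(data);
            deflater.finish();
            byte[] buf = new byte[data.length + 64];
            int n = deflater.deflate(buf);
            byte[] out = new byte[n];
            System.arraycopy(buf, 0, out, 0, n);
            return out;
        } finally {
            deflater.end();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "hello native memory".getBytes("UTF-8");
        byte[] roundTrip = inflate(deflate(original), original.length);
        System.out.println(new String(roundTrip, "UTF-8"));
    }
}
```

The same applies to GZIPInputStream and friends, whose close() calls end() on the wrapped Inflater.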


Update: see https://gdstechnology.blog.gov.uk/2015/12/11/using-jemalloc-to-get-to-the-bottom-of-a-memory-leak/, where a native memory leak is tracked down with jemalloc's allocation profiling and also leads back to java.util.zip.Inflater.

In our case, too, the leak turned out to be in java.util.zip.Inflater.inflateBytes.


Keep in mind that NMT only reports memory that the JVM itself allocates and knows about; memory allocated by native libraries or by memory-mapped files is not tracked, so the NMT total will normally be lower than the RSS of the process.
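For example, a file mapped with FileChannel.map contributes to the process RSS once its pages are dirtied, yet it appears in none of the NMT categories. A hypothetical, self-contained sketch:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MappedFileDemo {
    // Map a temporary file and touch every page. The touched pages show
    // up in the process RSS, but jcmd <pid> VM.native_memory will not
    // list them: NMT only sees the JVM's own allocations.
    static int mapAndTouch(int size) throws Exception {
        Path tmp = Files.createTempFile("mapped", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(tmp.toFile(), "rw");
             FileChannel ch = raf.getChannel()) {
            // READ_WRITE mapping extends the empty file to the requested size.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            int pages = 0;
            for (int i = 0; i < size; i += 4096) {
                buf.put(i, (byte) 1); // dirty the page so the kernel counts it
                pages++;
            }
            return pages;
        } finally {
            Files.deleteIfExists(tmp);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("touched " + mapAndTouch(1024 * 1024) + " pages of a mapped file");
    }
}
```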


Source: https://habr.com/ru/post/1655728/

