I'm trying to debug some performance issues with our MongoDB setup, and I noticed that resident memory usage is sitting very low (around 25% of the system memory) even though the server occasionally hits large numbers of page faults. Given how memory-dependent MongoDB is, I'm surprised to see the usage so low.
Here is a snapshot of top sorted by memory; you can see that no other process uses any significant memory:
top - 21:00:47 up 136 days,  2:45,  1 user,  load average: 1.35, 1.51, 0.83
Tasks:  62 total,   1 running,  61 sleeping,   0 stopped,   0 zombie
Cpu(s): 13.7%us,  5.2%sy,  0.0%ni, 77.3%id,  0.3%wa,  0.0%hi,  1.0%si,  2.4%st
Mem:   1692600k total,  1676900k used,    15700k free,    12092k buffers
Swap:   917500k total,    54088k used,   863412k free,  1473148k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2461 mongodb   20   0 29.5g 564m 492m S 22.6 34.2 40947:09 mongod
20306 ubuntu    20   0 24864 7412 1712 S  0.0  0.4  0:00.76 bash
20157 root      20   0 73352 3576 2772 S  0.0  0.2  0:00.01 sshd
  609 syslog    20   0  248m 3240  520 S  0.0  0.2 38:31.35 rsyslogd
20304 ubuntu    20   0 73352 1668  872 S  0.0  0.1  0:00.00 sshd
    1 root      20   0 24312 1448  708 S  0.0  0.1  0:08.71 init
20442 ubuntu    20   0 17308 1232  944 R  0.0  0.1  0:00.54 top
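To rule out top misreporting, mongod's own view of its memory can be pulled from the mongo shell. Something like this should work (assuming the default port, and a 2.x MMAP-era mongod where these serverStatus fields exist):

# Ask mongod what it thinks it is using (values in MB):
#   "resident" should roughly match RES in top, "virtual" ~ VIRT,
#   "mapped" is the size of the memory-mapped data files
$ mongo --quiet --eval 'printjson(db.serverStatus().mem)'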
I would at least like to understand why the server isn't making better use of the memory, and ideally to learn how to tune the server configuration or the queries for better performance.
UPDATE: It's true that overall memory usage looks high, which might lead one to conclude that some other process is responsible. But there are no other processes using significant memory on the server; the memory appears to be consumed by the filesystem cache, and I don't understand why that would be the case:
$ free -m
             total       used       free     shared    buffers     cached
Mem:          1652       1602         50          0         14       1415
-/+ buffers/cache:        172       1480
Swap:          895         53        842
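To check whether that "cached" figure is actually MongoDB's data files rather than something else, something like vmtouch should tell me (vmtouch is a third-party tool, and /var/lib/mongodb is just the default dbpath; adjust for your setup):

# Walk the files under the dbpath and report how many of their
# pages are currently resident in the page cache
$ sudo vmtouch /var/lib/mongodb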
UPDATE: You can see that the database is still constantly hitting page faults:
insert  query update delete getmore command flushes mapped  vsize    res faults        locked db idx miss %   qr|qw   ar|aw  netIn netOut  conn   set repl       time
     0    402    377      0    1167     446       0  24.2g  51.4g     3g      0 <redacted>:9.7%           0     0|0     1|0   217k   420k   457 mover  PRI   03:58:43
    10    295    323      0     961     592       0  24.2g  51.4g  3.01g      0 <redacted>:10.9%          0    14|0     1|1   228k   500k   485 mover  PRI   03:58:44
    10    240    220      0     698     342       0  24.2g  51.4g  3.02g      5 <redacted>:10.4%          0     0|0     0|0   164k   429k   478 mover  PRI   03:58:45
    25    449    359      0     981     479       0  24.2g  51.4g  3.02g     32 <redacted>:20.2%          0     0|0     0|0   237k   503k   479 mover  PRI   03:58:46
    18    469    337      0     958     466       0  24.2g  51.4g     3g     29 <redacted>:20.1%          0     0|0     0|0   223k   500k   490 mover  PRI   03:58:47
     9    306    238      1     759     325       0  24.2g  51.4g  2.99g     18 <redacted>:10.8%          0     6|0     1|0   154k   321k   495 mover  PRI   03:58:48
     6    301    236      1     765     325       0  24.2g  51.4g  2.99g     20 <redacted>:11.0%          0     0|0     0|0   156k   344k   501 mover  PRI   03:58:49
    11    397    318      0     995     395       0  24.2g  51.4g  2.98g     21 <redacted>:13.4%          0     0|0     0|0   198k   424k   507 mover  PRI   03:58:50
    10    544    428      0    1237     532       0  24.2g  51.4g  2.99g     13 <redacted>:15.4%          0     0|0     0|0   262k   571k   513 mover  PRI   03:58:51
     5    291    264      0     878     335       0  24.2g  51.4g  2.98g     11 <redacted>:9.8%           0     0|0     0|0   163k   330k   513 mover  PRI   03:58:52
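As a cross-check on the mongostat faults column, the cumulative fault counter can be sampled directly from the shell; on Linux this should give a faults-per-second figure (extra_info fields vary by platform, and the default port is assumed):

$ mongo --quiet --eval '
    var a = db.serverStatus().extra_info.page_faults;  // cumulative since startup
    sleep(1000);                                       // wait one second
    var b = db.serverStatus().extra_info.page_faults;
    print("page faults in the last second: " + (b - a));
'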