I have a physical Linux server with 16 GB of RAM running several applications. The server has been up for about 365 days, and "free -m" shows that free memory is running low.
                 total       used       free     shared    buffers     cached
    Mem:         14966      13451       1515          0        234        237
    -/+ buffers/cache:      12979       1987
    Swap:         4094        367       3727
I understand that 1987 MB is the actual free memory in the system, which is less than 14% of the total. However, if I sum the %MEM column in the output of "ps -A v" or "top", it does not add up to 100%.
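For reference, a rough way to total the %MEM column (a sketch only, assuming GNU procps ps and awk are available; options may differ on other systems):

    ps -eo pmem --no-headers | awk '{ sum += $1 } END { print sum "%" }'

Even summing every process this way, the total stays well short of 100%.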
I need to understand why free memory has dropped so low.
Update (29/Feb/2012):
Let me divide this problem into two parts:
1) The system having low free memory.
2) Identifying where the used memory has gone.
For 1), my understanding is that if the system runs low on free memory, we can observe gradual performance degradation. At some point, paging will reclaim additional free memory for the system, which should restore performance. Correct me if I am wrong.
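To check whether paging is actually happening, I could sample swap activity (a sketch, assuming procps vmstat; column layout may vary slightly between versions):

    vmstat 1 10    # the "si" and "so" columns show pages swapped in/out per second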
For 2), this is what I really want to understand: where has the used memory gone? If I sum %MEM in the output of "ps -A v" or "top -n 1 -b", it reaches no more than 50%. So where is the remaining ~40% of memory that I cannot trace accounted for? We run our own kernel modules on this server. If these modules leak memory, would that memory be counted anywhere? Is it possible to find out how much the kernel modules are leaking?
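As a starting point for 2), here is what I plan to try (a sketch; paths assume debugfs and a kernel with CONFIG_DEBUG_KMEMLEAK enabled, which may not be the case here). Kernel-side allocations such as slab caches are not attributed to any process's %MEM, so checking /proc/meminfo and the slab allocator seems like the first step:

    # Kernel memory not attributed to any process
    grep -E 'Slab|SReclaimable|SUnreclaim|PageTables|VmallocUsed' /proc/meminfo

    # Largest slab caches (look for caches created by our own modules)
    slabtop -o | head -20

    # If the kernel has kmemleak support, trigger a scan and read the report
    mount -t debugfs none /sys/kernel/debug    # if not already mounted
    echo scan > /sys/kernel/debug/kmemleak
    cat /sys/kernel/debug/kmemleak

If the growth shows up in SUnreclaim or in a slab cache owned by our modules, that would explain memory that neither ps nor top attributes to a process.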