C++ std::vector behaves like a memory leak in certain situations

I found a situation where vectors behave like a memory leak and have reduced it to a minimal working example. In this example, I create (inside a function) a vector that holds three char vectors. First, a large number of elements is pushed into each char vector, and their capacities are shrunk to fit their sizes. Then each large vector is assigned a vector with a single element. The problem is that the memory usage is far too large, and even when the function returns and the vectors are destroyed, the memory is not freed. How can I recover the memory? Why does it show this behaviour? What can I do to avoid this leak-like behaviour?

Here is a sample code (sorry for the length):

#include <iostream>
#include <vector>
#include <string>
#include <fstream>

using namespace std;

// see http://man7.org/linux/man-pages/man5/proc.5.html at /proc/[pid]/status
string meminfo()
{
    // memory information is in lines 11 - 20 of /proc/self/status
    ifstream stat_stream("/proc/self/status", ios_base::in);
    // get VmSize from line 12
    string s;
    for ( int linenum = 0; linenum < 12; ++linenum )
        getline(stat_stream, s);
    stat_stream.close();
    return s;
}

void f()
{
    vector<vector<char>> mem(3); // with 1,2 memory is fine
    size_t size = 16777215;      // with 16777216 or greater memory is fine
    for ( vector<char>& v : mem ) {
        for ( unsigned int i = 0; i < size; ++i )
            v.push_back(i);
        v.shrink_to_fit();       // without this call memory is fine
    }
    cout << "Allocated vectors with capacities ";
    for ( vector<char>& v : mem )
        cout << v.capacity() << ", ";
    cout << endl << "used memory is now: " << meminfo() << endl;

    for ( vector<char>& v : mem ) {
        v = vector<char>{1};
        if ( v.size() != v.capacity() )
            cout << "Capacity larger than size." << endl;
    }
    cout << "Shrinked vectors down to capacity 1." << endl
         << "Used memory is now: " << meminfo() << endl;
}

int main()
{
    cout << "At beginning of main: " << meminfo() << endl;
    f();
    cout << "At end of main: " << meminfo() << endl;
    return 0;
}

And the output on my machine is:

 At beginning of main: VmSize:    12516 kB
 Allocated vectors with capacities 16777215, 16777215, 16777215,
 used memory is now: VmSize:    78060 kB
 Shrinked vectors down to capacity 1.
 Used memory is now: VmSize:    61672 kB
 At end of main: VmSize:    61672 kB

However, valgrind does not see a memory leak.

I think the parameters in the example are system dependent for showing this strange behaviour. I am using Linux Mint Debian Edition with g++ 4.8.2 and an x86_64 kernel. I am compiling with:

 g++ -std=c++11 -O0 -Wall memory.cpp -o memory 

and also tried -O3, without any further explicit optimization tuning.

Some interesting points:

  • When I replace v = vector<char>{1}; with v.clear(); v.shrink_to_fit(); v.push_back(1); the problem remains the same. Replacing the push_back loop and shrink_to_fit for the large vectors with v = vector<char>(16777215); "solves" the memory problem.
  • 16777215 = 2^24 - 1, so maybe it has something to do with memory boundaries.
  • In addition, from the memory the program uses at the beginning of main (12516 kiB) plus the memory of the large vectors, one would expect the program to use roughly 3 * 16777216 B + 12516 kiB = 61668 kiB in total, which is approximately the memory it uses at the end (see the small check after this list).
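A small back-of-the-envelope check of that estimate (this snippet is mine, not part of the original example; it only redoes the arithmetic above, assuming 1 kiB = 1024 B):

#include <iostream>

int main()
{
    const long baseline_kib = 12516;                 // VmSize at the beginning of main, in kiB
    const long vectors_kib  = 3L * 16777216 / 1024;  // three ~16 MiB char buffers = 49152 kiB
    std::cout << "expected total: " << (baseline_kib + vectors_kib) << " kiB\n"; // 61668 kiB
    return 0;
}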

In my real-world application, I use the vectors to collect operations that are to be applied to the stiffness matrix of an FEM simulation. Since I want to push the limits of what is possible with the available memory (also in terms of speed), I need to keep memory free to avoid swapping. Since swapping does occur, I believe the VmSize value is the relevant one.

2 answers

The problem is that you misunderstand what "freeing" memory means in a C++ context. When your application frees memory (with shrink_to_fit, by destroying objects, or by any other means), it really only hands the memory back to the C++ runtime, and the runtime does NOT have to release it back to the operating system for other processes to use. The C++ runtime may keep the memory around for reuse later within the same process.

As a rule, this happens when the memory is fragmented: the freed memory is surrounded (in the program's virtual address space) by memory that is still in use. Only when the freed memory lies at the end of the program's memory space can the C++ runtime decide to (or is able to) return it to the system.

Typically, this memory retention is not a problem, since the memory can usually be reused when the application requests more. Where you can run into trouble is that, because the C++ runtime cannot move blocks of memory that are still in use, free chunks that are too small cannot be reused. There are all sorts of tricks and heuristics a runtime can apply to mitigate this, but they do not always work.
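As one concrete illustration (a minimal sketch, assuming glibc's allocator; malloc_trim is a glibc extension declared in <malloc.h>, not standard C++, and the function name below is just illustrative), you can explicitly ask the allocator to hand free heap memory back to the OS after dropping a large buffer:

#include <malloc.h>   // malloc_trim (glibc extension)
#include <vector>

void release_large_buffer(std::vector<char>& v)
{
    std::vector<char>{1}.swap(v); // deallocate the large buffer, keep a single element
    malloc_trim(0);               // hint to glibc: return free memory at the top of the heap to the OS
}

Whether this actually lowers VmSize still depends on fragmentation, as described above; it is a hint to the allocator, not a guarantee.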


Just keep in mind that you are not dealing directly with the OS memory allocation routines. Usually another memory manager sits in between and does this work for you.

The OS has to serve the requests of all processes running on the machine, so it is not a good idea to ask for small pieces of memory many times; it is better to ask for larger pieces fewer times. The memory manager running inside your application will therefore keep some chunks of memory around, in the hope that your application will need them again later.
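The application-level version of this advice is to request one large piece up front instead of growing in many steps, e.g. with reserve. A minimal sketch (the names collect_ops and expected_ops are illustrative, not taken from the question):

#include <cstddef>
#include <vector>

std::vector<char> collect_ops(std::size_t expected_ops)
{
    std::vector<char> ops;
    ops.reserve(expected_ops);               // one large allocation request up front
    for (std::size_t i = 0; i < expected_ops; ++i)
        ops.push_back(static_cast<char>(i)); // no reallocations while filling
    return ops;
}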


Source: https://habr.com/ru/post/978108/
