The program terminates with std::bad_alloc

I am running a C++ program that dies with std::bad_alloc at points that seem arbitrary and depend on the input. Here are some notes about the program:

  • For shorter runs (running time depends on the input), the program terminates normally. The problem arises only for large runs.
  • The program has no detectable memory leaks. This was checked with Valgrind/Memcheck for small runs. Moreover, my own code contains no raw pointers at all (all dynamic allocation is performed by library classes such as std::vector and std::string, so the actual allocations happen inside those classes), which makes memory leaks extremely unlikely.
  • Several objects are constructed in a loop and then moved into containers. Some of these objects are meant to live until the very end of the program.
  • I suspected heap fragmentation might be the problem (see "C++ program dies with std::bad_alloc, but valgrind reports no memory leak"), but I am on a 64-bit system with a 64-bit compiler (specifically Linux with g++), and what I have read about heap fragmentation on 64-bit platforms makes me think it cannot be the problem on a 64-bit system.

Is there anything else I should try? Any specific tools that could help? Any other suggestions?

UPDATE: In the end it turned out that virtual memory had been limited earlier with ulimit -v. I later forgot about this and consequently ran out of memory. Setting it back to unlimited fixed the problem.
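
In case it helps anyone else: a process can check its own limit at startup, which makes a forgotten ulimit easy to spot. A minimal POSIX sketch (RLIMIT_AS is the resource that ulimit -v controls; note it is reported in bytes, while ulimit -v uses kilobytes):

    #include <sys/resource.h>
    #include <iostream>

    int main() {
        rlimit rl{};
        // RLIMIT_AS is the address-space limit set by `ulimit -v`.
        if (getrlimit(RLIMIT_AS, &rl) == 0) {
            if (rl.rlim_cur == RLIM_INFINITY)
                std::cout << "virtual memory: unlimited\n";
            else
                std::cout << "virtual memory limit: " << rl.rlim_cur << " bytes\n";
        }
        return 0;
    }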

1 answer

std::bad_alloc means that you requested more memory than is available.
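
As a first step, a top-level handler at least confirms that it really is bad_alloc escaping and lets you log some context before the process dies. A minimal sketch (run_program is a hypothetical stand-in for the real entry point):

    #include <iostream>
    #include <new>

    void run_program() { /* hypothetical: the real work goes here */ }

    int main() {
        try {
            run_program();
        } catch (const std::bad_alloc& e) {
            std::cerr << "out of memory: " << e.what() << '\n';
            return 1;
        }
        return 0;
    }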

You can have situations where the program does not leak, but still runs out of memory:

    vector<long> v;
    long n = 0;
    for (;;) { v.push_back(n++); }   // never frees anything, yet keeps growing

will eventually exhaust all available memory on any machine you have, yet it does not leak: all of the memory is still accounted for by the vector. Obviously the same can be done with any container; vector, list, map, it does not really matter.

Valgrind will only find cases where you "abandon" allocated memory, not cases where you fill the system with memory that is still reachable.
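
That distinction matters when reading Memcheck's output. A small sketch of both cases (assuming a default Memcheck run): the container below is never freed but remains reachable at exit, so it is not reported as a leak, while the abandoned array is:

    #include <vector>

    std::vector<int>* kept = nullptr;   // still reachable through a global at exit

    int main() {
        kept = new std::vector<int>(1000000);   // Memcheck: "still reachable", not a leak
        int* lost = new int[100];
        lost = nullptr;                         // only pointer discarded: "definitely lost"
        return 0;                               // nothing is ever deleted
    }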

What is LIKELY happening is a slower version of the above: you keep storing more and more in some kind of container. It could be something you cache, or something you fail to delete when you believe it has been deleted.
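
A typical shape for this bug is a cache with no eviction policy. A hypothetical sketch (expensive_compute is a made-up stand-in for whatever the real program calculates): memory use grows with the number of distinct keys, even though nothing is ever leaked:

    #include <map>
    #include <string>

    std::string expensive_compute(const std::string& key) {
        return key + key;   // stand-in for real work
    }

    std::map<std::string, std::string> cache;   // no entry is ever evicted

    const std::string& lookup(const std::string& key) {
        auto it = cache.find(key);
        if (it == cache.end())
            it = cache.emplace(key, expensive_compute(key)).first;   // grows forever
        return it->second;
    }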

Monitor how much memory the application actually uses with a system tool ("top" on Linux/Unix, "Task Manager" on Windows) and check whether it really keeps growing. If it does, you need to figure out what is growing, which for a large program can be complicated (and some things SHOULD grow while others should not...).
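
You can also have the program report its own footprint at interesting points, which narrows down which phase grows. A Linux-only sketch reading the resident set size from /proc (the path and field name are Linux-specific):

    #include <fstream>
    #include <string>

    // Returns this process's resident set size in kB, or -1 on failure (Linux only).
    long rss_kb() {
        std::ifstream status("/proc/self/status");
        std::string line;
        while (std::getline(status, line)) {
            if (line.rfind("VmRSS:", 0) == 0)    // line looks like "VmRSS:  12345 kB"
                return std::stol(line.substr(6));
        }
        return -1;
    }

Calling this before and after each major phase of the program quickly shows where the growth happens.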

Of course, it is also possible that a calculation suddenly goes wrong; for example, requesting a negative number of elements in T* p = new T[elements]; will cause a bad alloc, since elements is converted to an unsigned type, and negative numbers converted to unsigned become HUGE.
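
A contrived sketch of that failure mode (in C++11 and later the implementation may instead throw std::bad_array_new_length, which derives from std::bad_alloc, so the catch below fires either way):

    #include <iostream>
    #include <new>

    int main() {
        long elements = -1;                 // a bad upstream calculation
        try {
            int* p = new int[elements];     // -1 becomes a gigantic unsigned count
            delete[] p;
        } catch (const std::bad_alloc& e) { // also catches std::bad_array_new_length
            std::cerr << "allocation failed: " << e.what() << '\n';
        }
        return 0;
    }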

If you can catch bad_alloc in the debugger, this kind of thing is usually pretty easy to spot because the large amount requested by new will be quite obvious.
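
One way to get a breakpoint exactly at the failing allocation is a new-handler, which operator new calls when it cannot satisfy a request, before the exception propagates. A minimal sketch using std::set_new_handler:

    #include <cstdlib>
    #include <new>

    // Called by operator new on failure; set a breakpoint here (or let the
    // abort produce a core dump) to see the call stack of the failed request.
    void on_out_of_memory() {
        std::abort();
    }

    int main() {
        std::set_new_handler(on_out_of_memory);
        // ... rest of the program ...
        return 0;
    }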

Catching the exception in a debugger should generally help, although it is of course possible that the allocation that finally fails is just some small string, while something else is eating all the memory; when something is growing out of control, it is not uncommon for an innocent small allocation to be the one that tips it over.

If you are on a Unix flavor, you can also speed up the hunt by restricting the amount of memory the application may use to something smaller, using ulimit -m size (in kilobytes) or ulimit -v size, so that the failure happens sooner.
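
If you would rather build the cap into a test harness, setrlimit is the programmatic equivalent; a POSIX sketch (note that RLIMIT_AS, the resource behind ulimit -v, is measured in bytes, not kilobytes):

    #include <sys/resource.h>

    int main() {
        rlimit rl{};
        rl.rlim_cur = 256UL * 1024 * 1024;   // soft limit: 256 MB of address space
        rl.rlim_max = 256UL * 1024 * 1024;   // hard limit
        setrlimit(RLIMIT_AS, &rl);           // allocations beyond this fail fast
        // ... run the code under test; expect bad_alloc much earlier ...
        return 0;
    }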
