How and why can memory allocation fail?

This is a question I asked myself back when I was a student; having never received a satisfactory answer, I could not get it out of my head... until today.

I know that I can handle a memory allocation error either by checking whether the returned pointer is NULL or by handling the std::bad_alloc exception.

Fine, but I wonder: how and why can a call to new fail? As far as I know, memory allocation can fail if there is not enough free space on the free store. But does this really happen nowadays, with several GB of RAM (at least on a regular computer; I'm not talking about embedded systems)? Are there other situations in which a memory allocation failure may occur?

+6
4 answers

Although you have received several answers about why and how memory allocation can fail, most of them seem to ignore reality.

On real systems, most of those arguments do not describe how things actually work. While they are correct in the sense that these are the reasons an allocation can fail, they are mostly wrong about how things usually play out.

For example, on Linux, if you try to allocate more memory than the system has available, your allocation will not fail (i.e., you will not get a null pointer or a std::bad_alloc exception). Instead, the system will "overcommit": you get what appears to be a valid pointer, but when/if you actually try to use all that memory, you will get an exception, and/or the OOM killer will kick in and try to free memory by killing processes that use a lot of memory. Unfortunately, it can just as easily kill the program making the request as other programs (in fact, many of the examples people give that try to provoke an allocation failure by repeatedly allocating large blocks of memory would probably be among the first to be killed).
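To illustrate, here is a minimal sketch (my own example, assuming a 64-bit Linux box with the default overcommit setting; the 64 GiB figure is just a hypothetical size meant to exceed the machine's RAM plus swap). The allocation itself appears to succeed; the trouble only begins once the pages are actually written to.

// Sketch: on Linux with default overcommit, the allocation below typically
// "succeeds" even if it exceeds physical RAM + swap; failure only surfaces
// when the memory is actually touched.
#include <cstddef>
#include <iostream>
#include <new>

int main()
{
    // Hypothetical size chosen for demonstration; adjust for your machine.
    const std::size_t huge = 64ULL * 1024 * 1024 * 1024; // 64 GiB

    char *p = new (std::nothrow) char[huge];
    if (!p)
    {
        std::cout << "new returned nullptr (no overcommit here)\n";
        return 0;
    }
    std::cout << "Got an apparently valid pointer: " << static_cast<void*>(p) << "\n";

    // Touch one byte per page; this forces the kernel to actually back the
    // memory, and at some point the OOM killer may step in and terminate
    // this (or some other) process.
    for (std::size_t off = 0; off < huge; off += 4096)
        p[off] = 1;

    std::cout << "Survived writing to all of it\n";
    delete[] p;
    return 0;
}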

Windows comes a little closer to how the C and C++ standards envision things (but only a little). Windows is typically configured to expand the page file as needed to satisfy a memory allocation request. This means that as you allocate more and more memory, the system goes half-crazy swapping memory around, building larger and larger page files to satisfy your requests.

This will eventually fail, but on a system with plenty of disk space it can run for hours (most of them spent madly shuffling data to and from the disk) before it happens. At least on a typical client machine, where a user is actually... well, using the computer, they will notice that everything has ground to a halt and do something to stop it long before the allocation fails.

So, to get a memory allocation that genuinely fails, you are usually looking at something other than a regular desktop computer. A few examples: a server that runs unattended for weeks at a time and is so lightly loaded that nobody notices it thrashing the disk for, say, 12 hours straight, or a machine running MS-DOS or some RTOS that does not have virtual memory.

Bottom line: you are basically right, and they are mostly wrong. While it is certainly true that if you allocate more memory than the machine supports, something has to give, it is generally not true that the failure will happen the way the C++ standard prescribes; in fact, on ordinary desktop computers that is more the exception (pardon the pun) than the rule.

+14

Besides the obvious "out of memory" case, memory fragmentation can also cause this. Imagine a program that does the following:

  • until the main memory is full:
    • allocate 1020 bytes
    • allocate 4 bytes
  • free all the 1020-byte blocks

If the memory manager places all of these sequentially in memory, in the order in which they were allocated, we now have a lot of free memory, but any allocation larger than 1020 bytes will be unable to find a contiguous chunk of space to accommodate it.
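A rough sketch of that pattern (my own illustration, with a fixed loop count instead of literally filling main memory; whether the later large request actually fails depends on the allocator, so treat it as a demonstration of the layout rather than a guaranteed failure):

// Sketch of the fragmentation pattern described above: interleave "large"
// (1020-byte) and small (4-byte) allocations, then free only the large ones.
// The small blocks left behind can prevent the freed space from coalescing
// into chunks big enough for larger requests (allocator-dependent).
#include <cstdlib>
#include <vector>

int main()
{
    std::vector<void*> big, small;

    // Scaled-down count for illustration; the original idea is
    // "until main memory is full".
    for (int i = 0; i < 100000; ++i)
    {
        big.push_back(std::malloc(1020));
        small.push_back(std::malloc(4));
    }

    // Free all the 1020-byte blocks; the 4-byte blocks stay put.
    for (void *p : big)
        std::free(p);

    // Plenty of memory is now "free", but (depending on the allocator)
    // a request larger than ~1020 bytes may not find a contiguous gap.
    void *q = std::malloc(64 * 1024);
    std::free(q);

    for (void *p : small)
        std::free(p);
    return 0;
}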

+4

Usually on modern machines it fails due to exhaustion of the virtual address space; if a 32-bit process tries to allocate more than 2-3 GB of memory [1], then even if the physical RAM (or the swap file) could satisfy the allocation, there simply will not be room left in the virtual address space to map the newly allocated memory.
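For example, a sketch along these lines, built as a 32-bit executable (e.g. with g++ -m32, assuming a 32-bit toolchain is installed), typically stops after roughly 2-3 GB no matter how much RAM the machine has:

// Sketch: keep allocating 100 MB chunks until new fails. In a 32-bit
// process this usually stops after roughly 20-30 chunks (~2-3 GB),
// regardless of how much physical RAM or swap the machine has. The loop
// is capped so an accidental 64-bit build does not run away.
#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

int main()
{
    const std::size_t chunk = 100 * 1024 * 1024; // 100 MB per allocation
    std::vector<char*> chunks;

    for (int i = 0; i < 100; ++i)
    {
        char *p = new (std::nothrow) char[chunk];
        if (!p)
            break;                    // address space exhausted
        chunks.push_back(p);
    }

    std::cout << "Allocated " << chunks.size() * 100 << " MB before failing\n";

    for (char *p : chunks)
        delete[] p;
    return 0;
}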

Another (related) situation arises when the virtual address space is heavily fragmented, so the allocation fails because there are not enough contiguous addresses for it.

Actually running out of memory can also happen, and in fact I ran into that situation last week; but several operating systems (Linux in particular) do not return NULL in this case: Linux will happily give you a pointer to a memory area that has not yet been committed, and only actually allocates it when the program tries to write to it; if at that moment there is not enough memory, the kernel will try to kill some memory-hogging processes to free up memory (the exception to this behavior is when you try to allocate more than the total capacity of RAM plus the swap partition; in that case you get a NULL up front).

Another reason for getting NULL from malloc can be limits imposed by the OS on the process; for example, try running this code

 #include <cstdlib>
 #include <iostream>
 #include <limits>

 // Binary-search the largest size that malloc will still satisfy.
 void mallocbsearch(std::size_t lower, std::size_t upper)
 {
     std::cout << "[" << lower << ", " << upper << "]\n";
     if (upper - lower <= 1)
     {
         std::cout << "Found! " << lower << "\n";
         return;
     }
     std::size_t mid = lower + (upper - lower) / 2;
     void *ptr = std::malloc(mid);
     if (ptr)
     {
         std::free(ptr);
         mallocbsearch(mid, upper);   // mid bytes still fit; search upward
     }
     else
         mallocbsearch(lower, mid);   // mid bytes failed; search downward
 }

 int main()
 {
     mallocbsearch(0, std::numeric_limits<std::size_t>::max());
     return 0;
 }

on Ideone, and you will find that the maximum allocation size is about 530 MB, which is probably a limit imposed with setrlimit (similar mechanisms exist on Windows).
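If you want to reproduce that kind of limit on your own machine, here is a sketch (POSIX-specific; the 256 MB cap is an arbitrary value chosen for the demonstration) that lowers RLIMIT_AS with setrlimit and then watches malloc return NULL for a request that would normally succeed:

// Sketch (POSIX): cap this process's address space at ~256 MB with
// setrlimit(RLIMIT_AS, ...), then a 512 MB malloc that would otherwise
// succeed returns NULL.
#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <sys/resource.h>

int main()
{
    rlimit rl;
    rl.rlim_cur = 256UL * 1024 * 1024; // soft limit: 256 MB
    rl.rlim_max = 256UL * 1024 * 1024; // hard limit: 256 MB
    if (setrlimit(RLIMIT_AS, &rl) != 0)
    {
        std::perror("setrlimit");
        return 1;
    }

    void *p = std::malloc(512UL * 1024 * 1024); // 512 MB request
    std::cout << (p ? "allocation succeeded" : "malloc returned NULL") << "\n";
    std::free(p);
    return 0;
}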


  • [1] This varies between OSes and can often be tweaked; the total virtual address space of a 32-bit process is 4 GB, but on all the current mainstream OSes a large chunk of it (the upper 2 GB on 32-bit Windows with default settings) is reserved for kernel data.
+2

The amount of memory available to a given process is limited. If the process exhausts its memory and tries to allocate more, the allocation will fail.

There are other reasons why an allocation can fail. For example, the heap may be fragmented and not have a single free block large enough to satisfy the allocation request.
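Tying this back to the two detection mechanisms mentioned in the question, here is a minimal sketch (my own example) showing both forms of failure reporting: plain new throwing std::bad_alloc (or std::bad_array_new_length, which derives from it), and new (std::nothrow) returning a null pointer.

// Sketch: the two standard ways allocation failure is reported in C++,
// using a deliberately absurd request size to provoke the failure.
#include <cstddef>
#include <iostream>
#include <new>

int main()
{
    // Roughly half the addressable range; no real system can satisfy this.
    const std::size_t absurd = static_cast<std::size_t>(-1) / 2;

    try
    {
        char *p = new char[absurd];   // throws on failure
        delete[] p;                   // not reached in practice
    }
    catch (const std::bad_alloc &e)
    {
        std::cout << "throwing new failed: " << e.what() << "\n";
    }

    char *q = new (std::nothrow) char[absurd]; // returns nullptr on failure
    if (!q)
        std::cout << "nothrow new returned nullptr\n";
    delete[] q;                                // deleting nullptr is a no-op

    return 0;
}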

0


