Unable to allocate 2-4 GB of RAM with new[] / C++ / Linux / x86_64

With this simple test, on a Linux box with 4 GB of RAM, 0 bytes of swap, and a processor running in x86_64 mode, I cannot allocate an array larger than 1 GB.

The source:

 #include <cstdio>

 int main() {
     for (int i = 0; i < 33; i++) {
         char *a = new char[1 << i];
         *a = 1;
         delete[] a;
         printf("%d\n", i);
         fflush(stdout);
     }
 }

Run:

 $ file test
 test: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV)
 $ ./test
 ...
 24
 25
 26
 27
 28
 29
 30
 terminate called after throwing an instance of 'std::bad_alloc'
   what(): St9bad_alloc
 Aborted

There is no ulimit for memory:

 virtual memory (kbytes, -v) unlimited
 data seg size (kbytes, -d) unlimited

Why the error?

glibc 2.3.4, kernel 2.6.9

UPDATE: the compiler is gcc 4.1.

Thanks! The test definitely has a bug: with 1ull << i it gets up to 31 (2 GB). That bug was unintentional. But the real failing code is

 for (j = 0; j < 2; j++)
     for (i = 0; i < 25; i++)
         some_array[j][i] = new int[1 << 24];

so there is no signed overflow in the real code.

The size of int is 4 bytes:

 $ echo 'main(){return sizeof(int);}' | gcc -xc - && ./a.out; echo $?
 4

so each request is for (1 << 24) * 4 = 1 << 26 bytes; the whole thing needs 2 * 25 * (1 << 26) = 3355443200 bytes, plus 50 * sizeof(pointer) for some_array, plus 50 * (the size of new[]'s bookkeeping data).
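A quick sanity check of that arithmetic, assuming sizeof(int) == 4 as shown above:

 #include <cstdio>

 int main() {
     // One new int[1 << 24] request: (1 << 24) elements * sizeof(int) bytes.
     unsigned long long per_request = (1ULL << 24) * sizeof(int); // 1 << 26 = 67108864
     // The nested loops make 2 * 25 = 50 such requests.
     unsigned long long total = 2ULL * 25 * per_request;          // 3355443200
     printf("per request: %llu bytes\n", per_request);
     printf("total:       %llu bytes\n", total);
 }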

+6
source
4 answers

EDIT: I see in the other answers that the problem is most likely the number passed to new[] becoming negative. I agree that this is most likely the case; I am leaving this answer only because I think it contains information that may be relevant in similar cases where the problem is not a negative argument to new[].


The first question that comes to mind is whether you have enough memory available. With 4 GB of RAM and no swap, the total amount of memory that can be allocated across all processes and the kernel is 4 GB.

Note that even if you had more than 1 GB of memory available to the process, malloc and free (which new[] and delete[] call under the hood) may not return freed memory to the system: they may keep each allocated/released block cached for reuse, so your program's memory footprint could approach 2 GB (you would have to check this against your libc's malloc implementation, although many implementations do return large blocks to the OS).
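For example, with glibc you can ask the allocator to hand cached heap pages back to the kernel via malloc_trim() (a glibc extension); a minimal sketch:

 #include <cstdlib>
 #include <malloc.h>   // malloc_trim() is a glibc extension

 int main() {
     // Allocate blocks below glibc's mmap threshold so they come from the
     // brk-managed heap, then free them all.
     void *blocks[1024];
     for (int i = 0; i < 1024; i++)
         blocks[i] = malloc(64 * 1024);
     for (int i = 0; i < 1024; i++)
         free(blocks[i]);
     // The freed pages may still be held by the allocator at this point;
     // malloc_trim(0) asks glibc to return unused heap memory to the kernel.
     malloc_trim(0);
     return 0;
 }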

Finally, when you request a 1 GB array you are requesting 1 GB of contiguous memory; it may simply be that, although plenty of memory is free in total, no single free block is large enough for this particular request.

+5
source

A bare constant in C is an int. A signed int. So 1 << 31 is -2147483648, because 1 << 31 = 0x80000000, which as a signed 32-bit int is -2147483648.

Try (size_t)1 << i instead.
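A minimal sketch of the difference, assuming a typical two's-complement platform:

 #include <cstdio>
 #include <cstddef>

 int main() {
     int i = 31;
     // Formally undefined behavior, but on common platforms 1 << 31 wraps
     // to INT_MIN, so new char[1 << i] receives a bogus (negative) size.
     printf("1 << 31 as int:  %d\n", 1 << i);
     // Widen to an unsigned 64-bit type before shifting:
     printf("(size_t)1 << 31: %zu\n", (size_t)1 << i);
 }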

+14
source

What are the values of /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio on your system? If you have turned overcommit off, you cannot allocate all the memory in your system. With overcommit enabled (set /proc/sys/vm/overcommit_memory to 0), you should be able to allocate essentially unlimited array sizes (certainly 10 GB) on a 64-bit system.
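A small sketch to check those settings from code (equivalent to cat'ing the two files):

 #include <cstdio>

 int main() {
     const char *files[] = { "/proc/sys/vm/overcommit_memory",
                             "/proc/sys/vm/overcommit_ratio" };
     for (int i = 0; i < 2; i++) {
         FILE *f = fopen(files[i], "r");
         if (!f) { perror(files[i]); continue; }
         char buf[32];
         if (fgets(buf, sizeof buf, f))
             printf("%s: %s", files[i], buf);
         fclose(f);
     }
     return 0;
 }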

0
source

Although it is usually true that on a 64-bit machine you have plenty of address space for several GB of contiguous virtual memory, you are trying to allocate it with new/malloc. new/malloc traditionally do not request memory at arbitrary addresses, but from a specific region that is grown with the {s,}brk system call, which basically moves the end of the process's data segment. I think you should allocate such a large amount of memory with mmap, which leaves the OS free to choose any block of addresses.
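A minimal sketch of that approach, assuming Linux's MAP_ANONYMOUS:

 #include <sys/mman.h>
 #include <cstdio>

 int main() {
     // 2 GB of anonymous, private memory; the kernel chooses the address.
     size_t size = (size_t)2 << 30;
     void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
     if (p == MAP_FAILED) { perror("mmap"); return 1; }
     static_cast<char *>(p)[0] = 1;   // touch the first page
     munmap(p, size);
     printf("mapped and released %zu bytes\n", size);
     return 0;
 }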

0
source

Source: https://habr.com/ru/post/888268/

