Experimenting with C - Why can't I allocate and use 2 GB of memory?

I'm continuing to experiment with C. I have this program that lets you decide how much RAM you want to eat.

 #include <stdio.h>
 #include <stdlib.h>

 char *eatRAM(void)
 {
     unsigned long long toEat;
     unsigned long long i;
     float input;
     char *pMemory = NULL;
     const int megaByte = 1048576;

     puts("How much RAM do you want to eat? (in Mega Bytes)");
     puts("NOTE: If you want to eat more RAM than you have available\nin your system, the program will crash");
     printf("\n>> MB: ");
     scanf("%f", &input);

     toEat = (unsigned long long)(input * megaByte);
     pMemory = malloc(toEat);

     if (pMemory != NULL) {
         printf("\n\nEating in total: %llu Bytes\n", toEat);
         puts("Check your task manager!\n");
         for (i = 0; i < toEat; i++) {
             pMemory[i] = 'x';
         }
     } else {
         puts("\nSeems like that amount of memory couldn't be allocated :( \n");
     }
     return pMemory;
 }

UPDATED QUESTION:

The thing is ... if I enter, for example, 1024 MB, it works; I can see in the task manager that it uses 1 GB of RAM. Even 1500 MB works.

But if I enter 2048 MB, it says

Seems like that amount of memory couldn't be allocated :(

and it even fails if I enter 1756 MB.

Remember that I'm new to C. Am I missing something important about how the OS lets me access memory? What could it be?

3 answers

A 32-bit Windows process gets 2 gigabytes of address space by default. That is the lower half of the full 2^32-byte address space; the upper 2 GB is used by the operating system. Since almost nobody runs a 32-bit OS anymore, you can get 4 GB when you link your program with /LARGEADDRESSAWARE.

That 2 GB of virtual memory must be shared by code and data. Your program typically loads at 0x00400000, and any operating system DLLs it uses (such as kernel32.dll and ntdll.dll) have high load addresses (beyond 0x7F000000). At least the default thread stack and the process heap are also created before your program starts running; their addresses are usually unpredictable.

Your program is also subject to shrink-wrapped add-ons: on most machines, the OS install leaves you with injected DLLs that provide "services" such as anti-malware and cloud storage. The load addresses of those DLLs are unpredictable. The same goes for any DLLs you linked against that are implicitly loaded when your program starts. Few programmers pay attention to their preferred base address and leave it at the default of 0x10000000. You can see these DLLs in the debugger's Modules window. Such DLLs often have their own CRT and tend to create their own heaps.

The allocations you make, particularly very large ones that cannot come from the low-fragmentation heap, have to find address space in the holes left between the existing code and data allocations. If you can get 1500 MB, your virtual memory is pretty clean. In general you start running into trouble beyond 650 MB, a number that quickly gets smaller once the program has been running for a while and has fragmented the virtual memory space. Allocation failures are almost always caused by the OS being unable to find a large enough hole, not by running out of virtual memory: the sum of all the holes can be considerably larger than your failed allocation request.

These details are rapidly becoming folklore; there are very few remaining reasons to keep targeting x86. Target x64 instead, and address-space fragmentation will not be a problem for the next 20 years or so: it is very hard to fragment 8 terabytes of virtual memory. With lots of headroom to grow beyond that.

So it should be clear why you cannot get 2048 MB: you can never get all of it. Get more insight with the VMMap utility from SysInternals; it shows you how the virtual memory is carved up. Mark Russinovich's blog posts and book give plenty of background.


This is a limitation of the OS, not a limitation of C.

For the system to access more than 4 GB of memory, it needs to run a 64-bit OS, and for a single process to address more than 4 GB it must be built as a 64-bit application. Win32 has a 2 GB per-process limit. Your 5 GB of physical RAM is largely irrelevant, since the memory is virtualized.

Quite apart from the theoretical limits of 32- and 64-bit systems and applications, the OS can impose limits of its own. Different versions and editions (Home, Pro, Server, etc.) of Windows, for example, impose specific limits for commercial reasons.

The specific answer in your case would require information about your system, tools and build options. If you are using Windows and VC++, you need to consider the /LARGEADDRESSAWARE linker option; it is not enabled by default for 32-bit builds, and without it Win32 imposes the default 2 GB per-process limit in any case.

I believe a 32-bit process running on Win64 can address the full 4 GB of 32-bit address space, but in that case you definitely need to build with /LARGEADDRESSAWARE. Even then, not all of that space is available to the heap, and any single allocation must be contiguous, so it can be limited by earlier allocations and heap fragmentation.


An allocation will fail if the amount of remaining free (contiguous) memory is less than the amount you are trying to allocate.

Also, immediately after this line:

 pMemory = (char *) malloc(toEat); 

Add the following:

 if (!pMemory) {
     printf("Can't allocate memory\n");
     return NULL;
 }

That way, instead of getting segmentation-fault messages, you will see the message "Can't allocate memory" and your function will return NULL.

Make sure you perform a similar check on the returned value in the functions that call eatRAM, or you will get segmentation faults there instead. Also use a debugger such as gdb.


Source: https://habr.com/ru/post/1237037/

