RAM allocation shows doubled RAM usage in Task Manager

Some profiling (memory and speed) gave me an unpleasant surprise: Win7 appears to allocate exactly twice as much RAM as I ask for... Please note that this is the first time I have done this kind of active profiling on Win7, so I really do not know what to expect.

I allocate an exact amount of RAM in a loop, using the Express edition of MSVC under Win7 (64-bit). The application is compiled and runs as 32-bit.

I allocate 24 MB of RAM, and Task Manager shows my application using 48 MB (under all of the memory columns, including the committed ones, since I really do memset the new regions). When I allocate another 24 MB (total should now be 48 MB), my application jumps to 96 MB, and so on.

The allocations are 1,000,000 24-byte structures.

I searched the net but did not find anything that exactly matches my observations.

Does anyone have a clue?

If this is just OS trickery (or Task Manager inaccuracy), is there a tool that can show me the real memory consumption of a process? (It's hard to hunt for leaks when the numbers are wrong from the start ;-)

[----------- edited, additional information -----------]

Note (from the path in the console title bar) that I build in Release mode (with all the default "empty" settings of an MSVC 2010 project), so there is no extra debug-heap memory (which can be quite substantial for some projects).

Here is a short, complete C application that illustrates the behavior:

#include <stdio.h>
#include <string.h>   /* for memset (was missing) */
#include <stdlib.h>
#include <assert.h>
#include <conio.h>

typedef unsigned int u32;
typedef struct myStruct MYS;
struct myStruct {
    u32 type;
    union {
        u32 value;
        char *str;
        void *data;
        MYS **block;
        MYS *plug;
    };
    u32 state, msg, count, index;
};

int main(int argc, char *argv[]) {
    int i, j;
    MYS *ref;
    printf("size of myStruct: %u\n\n", (unsigned)sizeof(MYS));
    for (i = 0; i < 10; i++) {
        printf("allocating started...\n");
        for (j = 0; j < 1000000; j++) {
            ref = (MYS *)malloc(sizeof(MYS));
            assert(ref);
            memset(ref, 0, sizeof(MYS));  /* touch the memory so it is committed */
        }
        printf(" Done... Press 'enter' for Next Batch\n");
        _getch();
    }
    _getch();
    return 0;
}

and an image that shows the memory use on my machine after one cycle. Each iteration adds ~48 MB instead of 24 MB!

process info after 1 loop (should be ~24 MB)

2 answers

This is most likely a combination of size-class rounding, internal bookkeeping structures, and memory alignment restrictions.

When you call malloc(size) you do not actually get a buffer of size bytes; you get a buffer of at least size bytes. For efficiency, your allocator prefers to hand out buffers in just a handful of different sizes and will not tailor each buffer to save space. For example, if you request 24 bytes on Mac OS, you will get a 32-byte buffer (25% waste).

Add to this the bookkeeping structures your allocator uses to manage the malloc'ed buffers (probably a few extra bytes per allocation), and the fact that padding can increase the size of your struct (the compiler prefers members aligned to a multiple of their size), and you will see that allocating millions of small objects as separate buffers is very expensive.

In short: allocate just one large buffer of sizeof(YourType) * 1000000 bytes and you will not see any noticeable overhead. Allocate a million sizeof(YourType) objects individually and you will end up wasting a lot of space.


malloc is not an OS service on Windows; it is implemented by your compiler's C runtime. It may have its own allocation strategy, as described in the other answer, and it is usually built on top of HeapAlloc, which has some overhead of its own.

Call VirtualAlloc if you want to allocate a specific number of pages.


Source: https://habr.com/ru/post/892523/
