Does memory fragmentation slow down new / malloc?

Short background:

I am developing a system that should run for several months and uses dynamic allocation.

Question:

I heard that memory fragmentation slows down new and malloc calls because they have to "find" a place in one of the "holes" I left in memory, rather than simply advancing to the top of the heap.

I read the following question: What is memory fragmentation?

But none of the answers mentioned anything about performance, only the failure to allocate large blocks of memory.

So does memory fragmentation make new take more time to allocate memory? If so, how much? And how can I tell whether new is having a "hard time" finding a place on the heap?

I tried to find out which data structures / algorithms GCC's allocator uses to find a "hole" in memory to place an allocation into, but could not find any decent explanation.

1 answer

Memory allocation behavior varies by platform and by the allocator implementation.

I would say: "Yes, new takes time to allocate memory. How much time depends on many factors, such as the allocation algorithm, the level of fragmentation, processor speed, optimizations, and so on."

The best way to answer how long it takes is to profile and measure: write a simple program that fragments memory, then measure how long allocations take.

There is no direct way for a program to find out how hard the allocator had to work to find an available memory location. What you can do is read the clock, allocate memory, and read the clock again. Another idea is to set a timer.
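A minimal sketch of that measurement in C++ (the allocation sizes, the pattern of freed blocks, and the iteration counts are arbitrary choices for illustration, not something prescribed by the answer): it fragments the heap by allocating many blocks and freeing every other one, then times a batch of fresh allocations with std::chrono.

```cpp
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <vector>

int main() {
    // Fragment the heap: allocate many small blocks of varying sizes,
    // then free every other one, leaving "holes" behind.
    std::vector<void*> blocks;
    for (int i = 0; i < 100000; ++i)
        blocks.push_back(std::malloc(32 + (i % 7) * 16));
    for (std::size_t i = 0; i < blocks.size(); i += 2) {
        std::free(blocks[i]);
        blocks[i] = nullptr;
    }

    // Time a batch of fresh allocations on the fragmented heap.
    auto start = std::chrono::steady_clock::now();
    std::vector<void*> probes;
    for (int i = 0; i < 10000; ++i)
        probes.push_back(std::malloc(64));
    auto stop = std::chrono::steady_clock::now();

    std::cout << "10000 allocations took "
              << std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count()
              << " us\n";

    // Clean up.
    for (void* p : probes) std::free(p);
    for (void* p : blocks) std::free(p);   // free(nullptr) is a no-op
}
```

Running it once with the fragmentation step and once without gives a rough before/after comparison on your own platform.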

Note: in many embedded systems, dynamic memory allocation is frowned upon. In critical systems, fragmentation can be the enemy, which is why fixed-size arrays are used instead. Fixed memory allocations (made at compile time) remove fragmentation as a source of defects.
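As a sketch of that style (a hypothetical fixed-capacity buffer pool, not anything from the question), all storage can be reserved statically, so acquiring a buffer never calls new or malloc and cannot fragment the heap:

```cpp
#include <array>
#include <cstddef>

// All storage is reserved at compile/link time; the sizes are illustrative.
constexpr std::size_t kBufferSize  = 256;
constexpr std::size_t kBufferCount = 16;

struct BufferPool {
    std::array<std::array<unsigned char, kBufferSize>, kBufferCount> storage{};
    std::array<bool, kBufferCount> in_use{};

    unsigned char* acquire() {
        for (std::size_t i = 0; i < kBufferCount; ++i)
            if (!in_use[i]) { in_use[i] = true; return storage[i].data(); }
        return nullptr;   // pool exhausted; the caller must handle this
    }

    void release(unsigned char* p) {
        for (std::size_t i = 0; i < kBufferCount; ++i)
            if (storage[i].data() == p) in_use[i] = false;
    }
};

static BufferPool pool;   // statically allocated, lives for the whole run
```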

Edit 1: The search
Typically, a function call is required to allocate memory. One effect of this is that the processor may have to reload its instruction cache or pipeline, consuming additional processing time. There may also be additional instructions for passing parameters, such as the minimum size of the request. Local variables and allocations made at compile time usually do not require a function call.

If the allocation algorithm is not a simple direct lookup (think of an array access), it will need extra steps to find an available slot. Some memory management algorithms use different strategies based on the requested size. For example, some memory managers may keep separate pools for requests of 64 bytes or less.
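A toy illustration of that size-based strategy (the pool boundaries here are invented for the example, not taken from any real allocator):

```cpp
#include <cstddef>

// Map a requested size to one of a few pools; small requests go to
// dedicated small-object pools, larger ones fall through to the general heap.
enum class Pool { Tiny, Small, General };

Pool pool_for(std::size_t bytes) {
    if (bytes <= 64)  return Pool::Tiny;    // e.g. a dedicated pool of 64-byte slots
    if (bytes <= 512) return Pool::Small;
    return Pool::General;
}
```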

If you imagine a memory manager that keeps a linked list of free blocks, it will need to find the first block whose size is greater than or equal to the request. If the block is larger than the requested size, it can be split, and the remaining memory becomes a new block that is added back to the list.
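A rough first-fit sketch of that linked-list scheme, assuming a simplified block header and ignoring alignment, coalescing, and thread safety:

```cpp
#include <cstddef>

// Simplified free-list node: each free block records its size and the next free block.
struct Block {
    std::size_t size;
    Block*      next;
};

Block* free_list = nullptr;   // head of the list of free blocks

// First-fit search: walk the list until a block of at least `request` bytes is found.
// If the block is larger than needed, split it and return the tail to the list.
void* first_fit_alloc(std::size_t request) {
    Block** link = &free_list;
    for (Block* b = free_list; b != nullptr; link = &b->next, b = b->next) {
        if (b->size < request)
            continue;                          // too small, keep walking the list

        if (b->size >= request + sizeof(Block)) {
            // Split: carve the tail of this block into a new free block.
            Block* rest = reinterpret_cast<Block*>(
                reinterpret_cast<char*>(b + 1) + request);
            rest->size = b->size - request - sizeof(Block);
            rest->next = b->next;
            *link = rest;
            b->size = request;
        } else {
            *link = b->next;                   // use the whole block as-is
        }
        return b + 1;                          // memory just past the header
    }
    return nullptr;                            // no suitable block; a real allocator
                                               // would ask the OS for more memory
}
```

The `continue` branch is where fragmentation costs time: the more small, unusable holes the list contains, the more blocks the search has to walk past.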

There is no standard memory management algorithm; implementations vary with the needs of the system. Memory managers for platforms with a limited (small) amount of memory will differ from those for platforms with large amounts of memory. Memory allocation in critical systems may differ from allocation in non-critical systems. The C++ standard does not dictate how the memory manager behaves; it only imposes a few requirements. For example, a memory manager could allocate from a hard drive or a network device.

The size of the impact depends on the memory allocation algorithm. The best approach is to measure the performance on your target platform.


Source: https://habr.com/ru/post/1269236/

