Memory Pool Questions

I need some clarification on the concept and implementation of memory pools.

The Wikipedia article on memory pools says:

"also called fixed-size blocks allocation, ... because those implementations suffer from fragmentation due to variable block sizes, they cannot be used in a real-time system for performance reasons."

How does fragmentation due to variable block sizes happen? How does fixed-size allocation avoid it? This Wikipedia description seems wrong to me. I don't think fragmentation is eliminated by fixed block sizes or caused by variable block sizes. In the context of a memory pool, fragmentation is eliminated by dedicating the allocator to a particular application, or reduced by restricting allocation to a preallocated memory block.

Also, looking at several implementation examples, for example Code Example 1 and Code Example 2, it seems to me that to use a memory pool the developer has to know the data types well, and then cut, split, or organize the data into linked memory chunks (if the data is list-like) or hierarchically linked chunks (if the data is more hierarchically organized, such as files). In addition, the developer must predict in advance how much memory is needed.

Well, I can imagine that this works well for an array of primitive data. What about non-primitive C++ classes, whose memory layout is not so obvious? And even for primitive data, does the developer have to consider data type alignment?

Is there a good memory pool library for C and C++?

Thanks for any comments!

+6
5 answers

In a scenario where you always allocate blocks of a fixed size, you either have enough space for one more block or you don't. If you do, the new block fits into any available slot, because all free and used slots are the same size. Fragmentation is not a problem.

In a scenario with variable-size blocks, you can end up with several separate free blocks of different sizes. A request may then fail even though it is smaller than the total free memory, because no single contiguous free block is big enough. For example, imagine you end up with two separate 2 KB free blocks and must satisfy a 3 KB request. Neither block is sufficient on its own, even though enough memory is free in total.

+3

A variable block size does cause fragmentation. Picture the situation in which three clients A, B, and C each allocate chunks of memory of variable sizes.

At some point, B frees all of its chunks, and suddenly you have fragmentation. If C then needs to allocate a large chunk that would still fit into the total available memory, it may be unable to do so, because the available memory is split into two separate blocks.

Now, if you consider the case where every chunk of memory is the same size, this situation clearly cannot arise.

Memory pools, of course, have their drawbacks, as you point out yourself. So you should not think of a memory pool as a magic wand. It has a cost, and it makes sense to pay that cost in certain circumstances (e.g., an embedded system with limited memory, real-time constraints, etc.).

As for which memory pool is good in C++, I would say it depends. I have used one under VxWorks that was provided by the OS; in a sense, a good memory pool is most efficient when it is tightly integrated with the OS. In fact, I think every RTOS offers some memory pool implementation.

If you are looking for a generic memory pool implementation, check out this.

EDIT:

From your last comment, it seems that perhaps you think of memory pools as a "solution to the fragmentation problem". Unfortunately, that is not the case. Fragmentation is, if you like, a manifestation of entropy at the memory level, i.e., it is inevitable. What memory pools offer is a way of managing memory so as to effectively reduce the impact of fragmentation (as I said, and as Wikipedia mentions, mainly on specific systems such as real-time systems). This comes at a cost: a memory pool can be less space-efficient than "normal" memory allocation techniques because you have a minimum block size. In other words, entropy reappears in another guise.

In addition, many parameters affect the efficiency of a memory pool system, such as the block size, the block allocation policy, and whether you have a single memory pool or several pools with different block sizes, different lifetimes, or different policies.

Memory management is genuinely hard, and memory pools are just one technique that, like any other, improves certain situations at a well-defined cost.

+12

Both fixed-size and variable-size memory pools exhibit fragmentation, i.e., there will be free chunks of memory scattered between the used ones.

For variable-size pools, this can cause problems, because there may be no free chunk large enough for a given requested size.

For fixed-size pools, on the other hand, this is not a problem, since only chunks of the given size can be requested. As long as there is free space, it is guaranteed to be large enough for one (or more) such chunks.

+2

If you are implementing a hard real-time system, you may need to guarantee in advance the maximum time a memory allocation can take. That can be "solved" with fixed-size memory pools.

I once worked on a military system where we had to calculate the maximum possible number of memory blocks of each size that the system could ever use at once. Those numbers were then added up, and the system was configured with that amount of memory.

Insanely expensive, but it worked.


When you have several fixed-size pools, you can get secondary fragmentation: one pool runs out of blocks while another pool still has plenty of free space. How do you share the memory between them?

+1

Operations with a memory pool can work as follows:

  • Keep a global variable that is a list of available objects (initially empty).
  • To get a new object, try taking one from the global list of available objects. If the list is empty, invoke operator new to allocate a new object on the heap. Allocation from the list is very fast, which matters for applications that otherwise spend a lot of CPU time on memory allocation.
  • To free an object, simply add it to the global list of available objects. You can put a cap on the number of items allowed in the global list; once the cap is reached, an object is actually freed rather than returned to the list. The cap prevents what would otherwise look like a massive memory leak.

Note that this only works for a single data type of one fixed size; it does not generalize to objects of varying size, for which you probably have to use the heap as usual.

It is very easy to implement; we use this strategy in our application. It results in a burst of memory allocations at the start of the program, after which memory is no longer freed and reallocated, operations that would otherwise incur significant overhead.

+1

Source: https://habr.com/ru/post/893069/
