How to solve the problem of memory fragmentation

We occasionally have problems where our long-running server processes (running on Windows Server 2003) throw an exception due to a memory allocation failure. We suspect these allocations fail because of memory fragmentation.

Therefore, we are evaluating some alternative memory allocation mechanisms that may help us, and I'm hoping someone can tell me which is best:

1) Use the Windows Low-fragmentation Heap (LFH)

2) jemalloc - as used in Firefox 3

3) Doug Lea's malloc (dlmalloc)

Our server process is built from cross-platform C++ code, so any solution would ideally be cross-platform too (do *nix operating systems suffer from this kind of memory fragmentation?).

Also, am I right in thinking that the LFH is now the default heap allocation mechanism on Windows Server 2008 / Vista? Will my current problems go away if our customers simply upgrade their server OS?
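For what it's worth, on XP / Server 2003 the LFH can be opted into per heap at runtime via HeapSetInformation. A minimal sketch of what we'd try (error handling omitted; the call can fail when the process runs under a debugger, since the debug heap takes over):

```cpp
#include <windows.h>

// Sketch: opt the default process heap into the Low-fragmentation Heap
// on XP / Server 2003. On Vista / Server 2008 the LFH is on by default.
void EnableLowFragmentationHeap()
{
    ULONG mode = 2; // 2 = enable LFH (HeapCompatibilityInformation)
    HeapSetInformation(GetProcessHeap(),
                       HeapCompatibilityInformation,
                       &mode, sizeof(mode));
}
```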

+45
c++ windows memory
Sep 13 '08 at 21:04
9 answers

First, I agree with the other posters who suggested a resource leak. You really want to rule that out first.

Hopefully the heap manager you are currently using has a way to dump out the actual total free space available in the heap (across all free blocks), as well as the total number of blocks it is divided over. If the average free-block size is relatively small compared to the total free space in the heap, then you have a fragmentation problem. Alternatively, if you can dump the size of the largest free block and compare it to the total free space, that accomplishes the same thing: the largest free block will be small relative to the total free space available across all blocks if you are running into fragmentation.
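If you are on the stock Windows heap, one way to gather those numbers is to walk the heap yourself. A rough sketch (assumes the default process heap; error handling omitted):

```cpp
#include <windows.h>
#include <cstdio>

// Sketch: compute total free space, free-block count, and the largest
// free block for the default process heap via HeapWalk.
void DumpFragmentationStats()
{
    HANDLE heap = GetProcessHeap();
    SIZE_T totalFree = 0, largestFree = 0;
    unsigned long freeBlocks = 0;

    PROCESS_HEAP_ENTRY entry = {};
    HeapLock(heap);
    while (HeapWalk(heap, &entry)) {
        // Entries that are not busy blocks, region headers, or
        // uncommitted ranges are free blocks.
        if (!(entry.wFlags & (PROCESS_HEAP_ENTRY_BUSY |
                              PROCESS_HEAP_REGION |
                              PROCESS_HEAP_UNCOMMITTED_RANGE))) {
            totalFree += entry.cbData;
            ++freeBlocks;
            if (entry.cbData > largestFree)
                largestFree = entry.cbData;
        }
    }
    HeapUnlock(heap);

    // Fragmented if the largest free block is small relative to the
    // total free space.
    std::printf("free bytes: %lu in %lu blocks, largest free block: %lu\n",
                (unsigned long)totalFree, freeBlocks,
                (unsigned long)largestFree);
}
```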

To be very clear about the above: in all cases we are talking about free blocks in the heap, not allocated blocks. In any case, if the above conditions are not met, then you do have some kind of leak.

So, once you have ruled out a leak, you could consider using a better allocator. Doug Lea's malloc, suggested in the question, is a very good general-purpose allocator and very robust most of the time; put another way, it is time-tested and works very well for most applications. However, no algorithm is ideal for all applications, and any algorithmic approach can be broken by the right pathological conditions.

Why do you have a fragmentation problem? Fragmentation problems are caused by application behavior: they arise from keeping objects with greatly different lifetimes in the same memory arena. That is, some objects are allocated and freed regularly, while other kinds of objects persist for long periods of time, all in the same heap. Think of the longer-lived objects as punching holes into larger areas of the arena, thereby preventing the coalescing of adjacent freed blocks.

To address this kind of problem, the best thing you can do is logically divide the heap into sub-arenas where the lifetimes are more similar. In effect, you want a transient heap and a persistent heap, or heaps that group objects of similar lifetimes.
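On Win32 this is easy to sketch with separate HeapCreate heaps (the type and names below are purely illustrative):

```cpp
#include <windows.h>
#include <new>

// Sketch: give short-lived objects their own heap so their churn
// cannot punch holes among long-lived allocations.
static HANDLE g_transientHeap  = HeapCreate(0, 0, 0); // grows on demand
static HANDLE g_persistentHeap = HeapCreate(0, 0, 0);

struct Request {            // hypothetical short-lived type
    void* operator new(std::size_t n) {
        if (void* p = HeapAlloc(g_transientHeap, 0, n)) return p;
        throw std::bad_alloc();
    }
    void operator delete(void* p) noexcept {
        HeapFree(g_transientHeap, 0, p);
    }
    char payload[256];
};
```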

Some others have suggested another approach: trying to make the allocation sizes more similar or identical. This is less ideal, because it creates a different kind of fragmentation, called internal fragmentation - space wasted by allocating more memory in a block than you actually need.

Also, with a good heap allocator like Doug Lea's, making the block sizes more similar is unnecessary, because the allocator already uses a size-bucketing scheme internally; artificially adjusting the sizes passed to malloc() gains you nothing - the heap manager does this for you automatically, and far more reliably than the application can.

+33
Sep 14 '08 at 2:12

I think you mistakenly ruled out a memory leak too soon. Even a tiny memory leak can lead to serious memory fragmentation.

Assuming your application behaves as follows:
Allocate 10MB
Allocate 1 byte
Free 10MB
(oops, we didn't free the 1 byte, but who cares about one tiny byte?)

This seems like a very small leak; you are unlikely to notice it when monitoring total allocated memory. But this leak will eventually make your application's memory look like this:
Free - 10 MB
[Allocated - 1 byte]
Free - 10 MB
[Allocated - 1 byte]
Free - 10 MB
...

This leak will go unnoticed... until you want to allocate 11 MB in one piece.
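In code, the pattern is as simple as this (a deliberately simplified sketch; a real heap may place the 1-byte blocks differently, but the failure mode is the same):

```cpp
// Sketch: each iteration strands one byte in the middle of an
// otherwise free 10 MB region.
void LeakPattern()
{
    for (int i = 0; i < 100; ++i) {
        char* big  = new char[10 * 1024 * 1024]; // 10 MB working buffer
        char* tiny = new char[1];                // 1 byte of bookkeeping
        delete[] big;
        (void)tiny; // oops: never freed - one stranded byte per iteration
    }
    // A later `new char[11 * 1024 * 1024]` can now fail even though
    // almost all of the memory is free again.
}
```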
Assuming your minidump contains full memory information, I recommend using DebugDiag to detect possible leaks. In the generated memory report, examine the allocation count (not the size) carefully.

+14
Oct 12 '08 at

I think you are right that Doug Lea's malloc may work well. It is cross-platform, and it has been used in shipping code. At the very least, it is easy to integrate into your code for testing.

Having worked in fixed-memory environments for several years, I can say this situation is certainly a problem, even in non-fixed environments. We have found that the stock CRT allocators tend to degrade badly in terms of performance (speed, wasted-space efficiency, etc.). I firmly believe that if you have an extensive need for a good memory allocator over a long period of time, you should write your own (or see if something like dlmalloc will work). The trick is getting something written that works with your allocation patterns - and that has more to do with memory-management effectiveness than almost anything else.

Give dlmalloc a try. I definitely give it a thumbs up. It is also quite tunable, so you can gain more efficiency by changing some of its compile-time options.

Honestly, you shouldn't depend on problems "going away" with new OS implementations. A service pack, patch, or other new OS N years later could make the problem worse. Again, for applications that demand a reliable memory manager, don't use the stock versions that ship with your compiler. Find one that works for your situation. Start with dlmalloc and tune it to see whether you can get the behavior that works best for your situation.
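One compile-time option worth knowing about: building dlmalloc with MSPACES=1 gives you independent arenas, which pairs well with the per-lifetime-heap advice above. A sketch:

```cpp
// Sketch: dlmalloc built with -DMSPACES=1 exposes independent arenas.
#include "malloc.h" // dlmalloc's own header

void Example()
{
    // A private arena for transient allocations
    // (0 = default capacity, 0 = no locking; pass 1 for thread safety).
    mspace transient = create_mspace(0, 0);

    void* p = mspace_malloc(transient, 1024);
    /* ... use p ... */
    mspace_free(transient, p);

    // Destroying the arena releases everything in it at once, so
    // transient churn never fragments the main heap.
    destroy_mspace(transient);
}
```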

+5
Sep 14 '08 at 2:21

You can help reduce fragmentation by reducing the amount you allocate and free.

E.g., say for a web server running a server-side script, a string is created to output the page. Instead of allocating and freeing these strings on every page request, just maintain a pool of them, so you only allocate when you need more, but you don't deallocate (meaning that after a while you reach a steady state where you don't allocate any more, because you have enough).
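A bare-bones sketch of such a pool (names illustrative; no locking, so use per thread or synchronize externally):

```cpp
#include <string>
#include <vector>

// Sketch: recycle string buffers instead of freeing them, so
// steady-state page rendering causes no heap traffic.
class StringPool {
public:
    std::string acquire() {
        if (free_.empty())
            return std::string();      // grow only when the pool is dry
        std::string s = std::move(free_.back());
        free_.pop_back();
        s.clear();                     // empties it but keeps capacity
        return s;
    }
    void release(std::string s) {
        free_.push_back(std::move(s)); // return the buffer, don't free it
    }
private:
    std::vector<std::string> free_;
};
```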

You can use _CrtDumpMemoryLeaks() to dump memory leaks to the debug window when running a debug build; however, I believe this is specific to the Visual C++ compiler (it lives in crtdbg.h).
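Minimal usage, for reference (MSVC debug builds only):

```cpp
#include <crtdbg.h>

int main()
{
    char* leaked = new char[16]; // deliberately leaked for the demo
    (void)leaked;
    _CrtDumpMemoryLeaks(); // reports outstanding CRT-debug-heap blocks
    return 0;              // to the debugger output window
}
```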

+2
Sep 13 '08 at 21:21

I suspect a leak before I suspect fragmentation.

For your memory-intensive data structures, you could switch to a reusable storage-pool mechanism. You might also be able to allocate more on the stack instead of the heap, but in practical terms I don't think that will make a big difference.

I would fire up a tool like valgrind, or do some intensive logging, to look for resources that are not being released.

+1
Sep 13 '08 at 21:10

@nsaners - I'm pretty sure the problem comes down to memory fragmentation. We have analyzed minidumps that point to a problem whenever a large (5-10 MB) chunk of memory is being allocated. We have also monitored the process (on-site and in development) to check for memory leaks - none were detected (the memory footprint is generally quite low).

+1
Sep 13 '08 at 21:28

The problem does occur on Unix, although it is usually not as bad there.

The Low-fragmentation Heap helped us, but my colleagues swear by Smart Heap (it has been used in several of our products over the years). Unfortunately, due to other circumstances, we couldn't use Smart Heap this time.

We are also looking at block/chunk allocation and trying to use pooling approaches/strategies: long-term stuff here, whole-request stuff there, short-term stuff over there, etc.

+1
Sep 13 '08 at 21:59

As usual with these things, you can often trade memory for speed.

This technique is not useful for a general-purpose allocator, but it does have its uses.

Basically, the idea is to write an allocator that returns memory from a pool where all allocations are the same size. Such a pool can never become fragmented, because any block is as good as any other. You can reduce the memory waste by creating several pools with different chunk sizes and picking the pool with the smallest chunk size that is still larger than the requested amount. I have used this idea to create allocators that work in O(1).
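A minimal sketch of one such pool (illustrative names; single-threaded; assumes BlockSize is a multiple of the needed alignment):

```cpp
#include <cstddef>
#include <vector>

// Sketch: a fixed-size pool. Every block is the same size, so the pool
// cannot fragment (any free block satisfies any request), and both
// operations are O(1) pushes/pops on an intrusive free list.
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedPool {
    static_assert(BlockSize >= sizeof(void*), "block must hold a pointer");
public:
    FixedPool() : storage_(BlockSize * BlockCount) {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < BlockCount; ++i)
            push(storage_.data() + i * BlockSize);
    }
    void* allocate() {              // O(1): pop the free-list head
        if (!head_) return nullptr; // pool exhausted
        Node* n = head_;
        head_ = n->next;
        return n;
    }
    void deallocate(void* p) {      // O(1): push back onto the list
        push(static_cast<char*>(p));
    }
private:
    struct Node { Node* next; };
    void push(char* p) {
        Node* n = reinterpret_cast<Node*>(p);
        n->next = head_;
        head_ = n;
    }
    std::vector<char> storage_;
    Node* head_ = nullptr;
};
```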

+1
Sep 13 '08 at 22:55

If you are talking about Win32, you can try to squeeze something out by using the /LARGEADDRESSAWARE linker flag. You will get ~1 GB of additional, un-fragmented address space, so your application will take longer to fragment it.

-1
Nov 29 '16 at 11:09


