C++ STL allocator vs operator new

According to C++ Primer, 4th edition, p. 755, there is a note:

Modern C++ programs usually should use an allocator class to allocate memory. It is safer and more flexible.

I do not quite understand this statement. Until now, all the materials I have read use new to allocate memory in C++. The book shows an example of how the vector class uses an allocator, but I cannot think of other scenarios.

Can someone help clarify this statement and give me more examples? When should I use an allocator, and when should I use new? Thanks!

+18
c++ new-operator memory stl allocator
Apr 11 '11 at 10:01
2 answers

For general programming, yes, you should use new and delete.

However, if you are writing a library, you should not! I don't have your book, but I imagine it discusses allocators in the context of writing library code.

Users of a library may want to control exactly where its memory comes from. If all of the library's allocations went through new and delete, the user would have no way to get that fine-grained level of control.

All STL containers accept an optional allocator template argument. The container will then use that allocator for its internal memory needs. By default, if you omit the allocator, it will use std::allocator, which uses new and delete (specifically, ::operator new(size_t) and ::operator delete(void*)).

Thus, the user of this container can control where the memory is allocated if they want.

An example implementation of a custom allocator for use with the STL, with an explanation: Improving Performance with Custom Pool Allocators for STL

Side note: the STL approach to allocators is suboptimal in several ways. I recommend reading Towards a Better Allocator Model for a discussion of some of these issues.

+35
Apr 11 '11

The two do not contradict each other. Allocators are a policy/strategy pattern used by the STL containers to allocate chunks of memory for use with their objects.

These allocators often optimize memory allocation by allowing

* ranges of elements to be allocated at once and then initialized using placement new
* elements to be allocated from secondary, specialized heaps depending on block size

One way or another, the end result will (almost always) be that the objects are constructed with new (placement or default).

Another striking example: the Boost library implements smart pointers. Because smart pointers are very small objects (with little payload), the per-allocation overhead can become a burden. It would make sense for the implementation to define a specialized pool allocator for these allocations, so one could have an efficient std::set<> of smart pointers, a std::map<..., smartpointer>, etc.

(Now, I'm almost sure that Boost actually optimizes storage for most smart pointers by avoiding any virtual members (so no vtable), making the class essentially a POD structure with only the raw pointer as storage, so some of these examples won't apply. But again, extrapolate to other kinds of smart pointers: refcounting smart pointers, pointers to member functions, pointers to member functions with an instance reference, etc.)

+1
Apr 11 '11
