The unordered_map container has a reserve method because it is implemented as a hash table of buckets, not as a tree like map .
Bucket:
a slot in the container's internal hash table to which elements are assigned based on the hash value of their key. Buckets are numbered 0 through bucket_count() - 1. ( source )
A bucket holds a variable number of elements. The ratio of elements to buckets is the load_factor ; when it would exceed the max_load_factor threshold, the container increases the number of buckets and rehashes the table.
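To see this growth in action, here is a minimal sketch (standard library only; the exact bucket counts it prints are implementation-defined) that reports a line every time the bucket array grows during insertion:

#include <cstddef>
#include <cstdio>
#include <unordered_map>

int main()
{
    std::unordered_map<int, int> m;
    std::printf("max_load_factor = %.2f\n", m.max_load_factor()); // typically 1.0

    std::size_t buckets = m.bucket_count();
    for (int i = 0; i < 100; ++i)
    {
        m.emplace(i, i);
        if (m.bucket_count() != buckets) // the bucket array grew, i.e. a rehash happened
        {
            std::printf("insert #%d: buckets %zu -> %zu, load_factor %.2f\n",
                        i + 1, buckets, m.bucket_count(), m.load_factor());
            buckets = m.bucket_count();
        }
    }
    return 0;
}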
When you call reserve(n) , the container creates enough buckets to hold at least n elements without exceeding the maximum load factor.
This contrasts with rehash(n) , which sets the number of buckets to at least n and rehashes the entire table.
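Here is a small sketch of the difference (the exact bucket counts printed are implementation-defined; with the default max_load_factor of 1.0 the two calls end up close, since reserve(n) effectively calls rehash(ceil(n / max_load_factor()))):

#include <cstdio>
#include <unordered_map>

int main()
{
    std::unordered_map<int, int> a;
    std::unordered_map<int, int> b;

    a.reserve(1000); // enough buckets for at least 1000 elements at the current max_load_factor
    b.rehash(1000);  // at least 1000 buckets, independent of max_load_factor

    std::printf("after reserve(1000): %zu buckets\n", a.bucket_count());
    std::printf("after rehash(1000):  %zu buckets\n", b.bucket_count());
    return 0;
}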
See also: Preset buckets in C++ unordered_map
Edit in response to comments
Since I do not know the exact answer to the question posed in the comments, and my initial research did not turn up anything conclusive, I decided to test it experimentally.
For reference, the question boils down to the following:
Could you explain whether reserving buckets for n elements is the same as allocating memory for n elements?
According to this answer, accurately measuring how much memory an unordered_map has allocated is complex and unreliable, so I decided to use the diagnostic tools of Visual Studio 2015.
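As an aside, one way to approximate the allocated size without profiler snapshots is to plug a counting allocator into the map. This is only a sketch with a hypothetical CountingAllocator, not what the Visual Studio tools do:

#include <cstddef>
#include <cstdio>
#include <functional>
#include <new>
#include <unordered_map>
#include <utility>

// Hypothetical minimal allocator that tallies every byte the container requests.
static std::size_t g_allocated = 0;

template <class T>
struct CountingAllocator
{
    using value_type = T;

    CountingAllocator() = default;
    template <class U> CountingAllocator(const CountingAllocator<U>&) { }

    T* allocate(std::size_t n)
    {
        g_allocated += n * sizeof(T);
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t n)
    {
        g_allocated -= n * sizeof(T);
        ::operator delete(p);
    }
};

template <class T, class U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

int main()
{
    using Map = std::unordered_map<int, int, std::hash<int>, std::equal_to<int>,
                                   CountingAllocator<std::pair<const int, int>>>;
    Map m;
    m.reserve(1000);
    std::printf("bytes after reserve(1000): %zu\n", g_allocated); // bucket array only
    for (int i = 0; i < 1000; ++i) m.emplace(i, 0);
    std::printf("bytes after 1000 inserts:  %zu\n", g_allocated); // buckets + nodes
    return 0;
}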
Firstly, my test case is as follows:
#include <unordered_map>
#include <cstdint>

struct Foo
{
    Foo() : x(0.0f), y(0.0f), z(0.0f) { }

    float x;
    float y;
    float z;
};

int32_t main(int32_t argc, char** argv)
{
    std::unordered_map<uint32_t, Foo> mapNoReserve;
    std::unordered_map<uint32_t, Foo> mapReserve;

    // --> Snapshot A

    mapReserve.reserve(1000);

    // --> Snapshot B

    for(uint32_t i = 0; i < 1000; ++i)
    {
        mapNoReserve.insert(std::make_pair(i, Foo()));
        mapReserve.insert(std::make_pair(i, Foo()));
    }

    // -> Snapshot C

    return 0;
}
At each point indicated by the comments, I took a memory snapshot.
The results were as follows:
Snapshot A:
| Map          | Size (Bytes) | Bucket Count |
|--------------|--------------|--------------|
| mapNoReserve | 64           | 8            |
| mapReserve   | 64           | 8            |
Snapshot B:
| Map          | Size (Bytes) | Bucket Count |
|--------------|--------------|--------------|
| mapNoReserve | 64           | 8            |
| mapReserve   | 8231         | 1024         |
Snapshot C:
| Map          | Size (Bytes) | Bucket Count |
|--------------|--------------|--------------|
| mapNoReserve | 24024        | 1024         |
| mapReserve   | 24024        | 1024         |
Interpretation:
As the snapshots show, both maps grow in size as soon as we start adding elements to them, even the one that called reserve .
So does reserve still give an advantage, even though memory is allocated during insertion anyway? I would say yes, for two reasons: (1) it pre-allocates the memory for the buckets, and (2) it can prevent the need for a rehash, which, as discussed earlier, completely rebuilds the map.
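Point (2) is easy to check without a profiler. The following sketch counts how many times the bucket array grows while inserting (the count without reserve is implementation-defined; with reserve it should be zero):

#include <cstddef>
#include <cstdio>
#include <unordered_map>

// Count bucket-array growths (i.e. rehashes) while inserting n elements.
static int count_rehashes(std::size_t n, bool use_reserve)
{
    std::unordered_map<std::size_t, int> m;
    if (use_reserve)
        m.reserve(n);

    int rehashes = 0;
    std::size_t buckets = m.bucket_count();
    for (std::size_t i = 0; i < n; ++i)
    {
        m.emplace(i, 0);
        if (m.bucket_count() != buckets)
        {
            ++rehashes;
            buckets = m.bucket_count();
        }
    }
    return rehashes;
}

int main()
{
    std::printf("without reserve: %d rehashes\n", count_rehashes(1000, false));
    std::printf("with reserve:    %d rehashes\n", count_rehashes(1000, true));
    return 0;
}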