Which one is faster for insert, delete, and look-up? Which one takes less memory and less time to clear from memory? Any explanation is warmly welcome!
For a specific use case, you should benchmark with your actual data and usage patterns and see which is actually faster... there are enough factors in play that it's dangerous to assume either will always "win".
implementation and characteristics of unordered maps / hash tables
Academically - as the number of elements grows towards infinity, operations on a std::unordered_map (the C++ library's provision of what's commonly called a "hash map" or "hash table") will on average keep taking the same amount of time, O(1) (ignoring memory limits/caching etc.), whereas with a std::map (a balanced binary tree) each doubling of the number of elements typically costs one extra comparison, so it gets gradually slower: O(log2 n).
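If you do want to benchmark, a minimal sketch along these lines can be a starting point (assuming int keys and values; time_insert_find is just an illustrative helper name) - real results depend heavily on key type, hashing cost, allocator and cache behaviour:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <map>
#include <unordered_map>

// Time n inserts followed by n look-ups for any map-like container.
template <typename MapT>
static double time_insert_find(std::size_t n)
{
    MapT m;
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        m.emplace(static_cast<int>(i), static_cast<int>(i));   // insert
    long long sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum += m.find(static_cast<int>(i))->second;            // look-up
    auto t1 = std::chrono::steady_clock::now();
    static volatile long long sink;
    sink = sum;                      // keep the look-ups from being optimised away
    (void)sink;
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main()
{
    const std::size_t n = 1'000'000;
    std::printf("map          : %.1f ms\n", time_insert_find<std::map<int, int>>(n));
    std::printf("unordered_map: %.1f ms\n", time_insert_find<std::unordered_map<int, int>>(n));
}
```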
std::unordered_map implementations are effectively required to use open hashing (separate chaining): the fundamental expectation is that there is a contiguous array of buckets, each of which logically holds the values whose keys hash to that bucket.
It's usually fine to picture the hash table as a vector<list<pair<key,value>>>, where getting from a vector element to a value involves at least one pointer dereference as you follow the list-head pointer stored in the bucket to the first list node; the performance of insert/find/delete depends on the length of that list, which on average equals the unordered_map's load_factor().
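To make that mental model concrete, here is a toy sketch of the "vector of lists" picture (ToyHashMap is purely illustrative - real library implementations differ in node layout, cached hashes, rehashing and iterator support):

```cpp
#include <functional>
#include <list>
#include <string>
#include <utility>
#include <vector>

struct ToyHashMap {
    using Bucket = std::list<std::pair<std::string, int>>;
    std::vector<Bucket> buckets = std::vector<Bucket>(8);   // contiguous bucket array (no rehashing in this toy)

    int* find(const std::string& key) {
        Bucket& b = buckets[std::hash<std::string>{}(key) % buckets.size()];
        for (auto& kv : b)                                   // walk the chain: cost ~ load factor
            if (kv.first == key) return &kv.second;
        return nullptr;
    }

    void insert(const std::string& key, int value) {
        if (int* p = find(key)) { *p = value; return; }      // update existing key
        buckets[std::hash<std::string>{}(key) % buckets.size()].emplace_back(key, value);
    }
};
```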
If max_load_factor() is reduced (the default is 1.0), there will be fewer collisions, but more rehashing/reallocation during insertion and more wasted memory (which can itself hurt performance by increasing cache misses).
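For example, a short sketch showing how max_load_factor(), load_factor(), bucket_count() and rehash() interact (exact bucket counts are implementation-defined, so the printed numbers vary between libraries):

```cpp
#include <cstdio>
#include <unordered_map>

int main()
{
    std::unordered_map<int, int> m;
    m.max_load_factor(0.5f);          // fewer collisions, at the cost of more buckets/rehashing
    for (int i = 0; i < 1000; ++i)
        m.emplace(i, i);
    std::printf("size=%zu buckets=%zu load_factor=%.2f\n",
                m.size(), m.bucket_count(), m.load_factor());

    m.rehash(4096);                   // ask for at least 4096 buckets up front
    std::printf("after rehash: buckets=%zu load_factor=%.2f\n",
                m.bucket_count(), m.load_factor());
}
```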
The memory usage for this most obvious unordered_map implementation involves both the contiguous array of bucket_count() list-head pointers/iterators and one doubly-linked list node per key/value pair: roughly bucket_count() + 2 * size() pointers of overhead, adjusted for any rounding-up of dynamic memory allocation requests the implementation may do (for example, if you ask for 100 bytes you might get 128, 256 or 512), and the dynamic memory routines may use some additional memory to track allocated/free regions.
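A rough, assumption-laden way to put a number on that overhead (it ignores allocator rounding and bookkeeping, and some implementations store extra per-node data such as a cached hash value):

```cpp
#include <cstdio>
#include <unordered_map>

int main()
{
    std::unordered_map<long, long> m;
    for (long i = 0; i < 100000; ++i)
        m.emplace(i, i);
    // bucket_count() list heads plus ~2 pointers per node, beyond the key/value data itself
    std::size_t est = (m.bucket_count() + 2 * m.size()) * sizeof(void*);
    std::printf("estimated overhead beyond key/value data: ~%zu KiB\n", est / 1024);
}
```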
Still, the C++ Standard leaves room for real-world implementations to make some of their own performance/memory-usage decisions. For example, they could keep the old contiguous array of buckets around for a while after allocating a new, larger array, so rehashing values into the latter can be done gradually, reducing worst-case latency at the cost of average performance, since both arrays must be consulted during operations.
implementation and characteristics of maps / balanced binary trees
A map is a binary tree, and you can expect it to use pointers linking distinct heap memory regions returned by different calls to new. As well as the key/value data, each node in the tree needs parent, left-child and right-child pointers (see Wikipedia's binary tree article if lost).
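A sketch of the per-node layout such a tree typically carries (the names here are illustrative, not any particular library's internals; most standard-library maps are red-black trees, so there is also a colour bit per node):

```cpp
#include <utility>

// One separately heap-allocated node per element: three pointers plus
// balancing metadata alongside the key/value pair.
template <typename Key, typename Value>
struct TreeNode {
    TreeNode* parent;
    TreeNode* left;
    TreeNode* right;
    bool      is_red;                    // red-black colour bit
    std::pair<const Key, Value> kv;      // the element itself
};
```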
Comparison
So, both unordered_map and map need to allocate nodes for the key/value pairs; the former typically has two pointers'/iterators' worth of per-node overhead for prev/next-node linkage, while the latter has three, for parent/left/right. In addition, unordered_map has one contiguous allocation for bucket_count() buckets (== size() / load_factor()).
For most purposes that's not a dramatic difference in memory usage, and the deallocation-time difference for one extra region is unlikely to be noticeable.
another alternative
In cases where the container is populated up front and then repeatedly searched without further inserts/erases, it can sometimes be fastest to use a sorted vector, searched with the standard binary_search, equal_range, lower_bound and upper_bound algorithms. The advantage is a single contiguous memory allocation, which is much more cache-friendly. It always beats map, but unordered_map may still be faster - measure if you care.
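A minimal usage sketch of that approach (fill, sort once, then search; for production code there are also ready-made "flat map" containers such as Boost's flat_map or C++23's std::flat_map):

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // Fill in any order, then sort once; pairs sort by key first.
    std::vector<std::pair<std::string, int>> v = {
        {"carol", 3}, {"alice", 1}, {"bob", 2}};
    std::sort(v.begin(), v.end());

    // Binary-search by key only.
    auto by_key = [](const auto& kv, const std::string& k) { return kv.first < k; };
    auto it = std::lower_bound(v.begin(), v.end(), std::string("bob"), by_key);
    bool found = it != v.end() && it->first == "bob";
    return found ? 0 : 1;
}
```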