Benefits of Vectors over Linked Lists
The main advantage of a vector over a linked list is memory locality.
Typically, each element of a linked list is allocated separately. As a result, the elements are probably not adjacent to each other in memory. (There are gaps between the elements in memory.)
A vector is guaranteed to store all of its elements contiguously. (The elements sit next to each other in memory, without gaps; note: exceptions to this simplified picture may apply ...;)
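A minimal sketch to observe this yourself, printing the addresses of elements in a std::vector and a std::list (the exact addresses will differ per run and platform; the scattered list addresses are the typical, not guaranteed, outcome):

#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <list>
#include <vector>

int main() {
    std::vector<std::uint32_t> vec{1, 2, 3, 4};
    std::list<std::uint32_t> lst{1, 2, 3, 4};

    // Vector: consecutive addresses, exactly sizeof(std::uint32_t) apart.
    for (std::size_t i = 0; i < vec.size(); ++i)
        std::printf("vec[%zu] at %p\n", i, static_cast<const void*>(&vec[i]));

    // List: each node is allocated separately, so the element addresses
    // are typically scattered across the heap.
    for (const std::uint32_t& x : lst)
        std::printf("list node at %p\n", static_cast<const void*>(&x));
}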
IMO, these are the simplified key points regarding the superior performance of contiguous storage over non-contiguous storage:
1. Cache misses
Modern processors do not fetch single bytes from memory, but slightly larger chunks (cache lines). Thus, if your data elements are smaller than these chunks, and if the storage is contiguous, you get more than one element per fetch, since several elements can sit in the same chunk.
Example:
A 64-byte block (a common cache line size) holds sixteen 32-bit integers at a time. Thus, a cache miss (data not yet in the cache → must be loaded from main memory) can occur at the earliest after 16 elements have been processed since the first fetch. If a linked list is used, the first element may be the only one within that 64-byte block; in theory, a cache miss could occur for every single list element.
Specific example:
std::vector<std::uint32_t> v;
Imagine that the contents of v are not yet cached.
What happens when the data is processed in a for loop?
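As a sketch, assume a loop like this (the accumulator s appears in the steps below; the exact loop shape is an assumption):

#include <cstddef>
#include <cstdint>
#include <vector>

// Sum all elements of v into s, touching them strictly in order.
std::uint64_t sum(const std::vector<std::uint32_t>& v) {
    std::uint64_t s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v[i];  // sequential reads: one 64-byte cache line covers 16 elements
    return s;
}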
1) Check whether v[0] is in the cache. → No
2) Fetch 64 bytes starting at the address of v[0] from main memory into a cache line
3) Load v[0] from the cache and process it by adding its value to s
4) Is v[1] in the cache? → Yes, it was loaded by the previous fetch, because it is adjacent to v[0]
5) Load v[1] from the cache and process it by adding its value to s
6) Is v[2] in the cache? → Yes ...
7) Load v[2] from the cache and process it by adding its value to s
... and so on ...
34) Is v[16] in the cache? → No
35) Fetch 64 bytes starting at the address of v[16] from main memory into a cache line
36) Load v[16] from the cache and process it by adding its value to s
37) Is v[17] in the cache? → Yes, it was loaded by the previous fetch, because it is adjacent to v[16]
... and so on ...
Fetching data from main memory into the cache takes much longer than loading data from the cache into processor registers and performing simple operations on it. Therefore, the fact that multiple values can sit in the same cache line can improve performance significantly.
Linked lists give no guarantee of contiguous storage, so you cannot hope for this performance improvement there. This is also the reason why random iteration (accessing elements in random order) is worse than forward iteration (accessing elements in order) for contiguous containers.
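A rough microbenchmark sketch of this difference (not a rigorous benchmark; the numbers depend on hardware, compiler flags, and allocator behavior, but on typical machines the vector traversal is several times faster):

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <list>
#include <numeric>
#include <vector>

// Sum all elements of a container and report how long it took.
template <typename Container>
std::uint64_t timed_sum(const Container& c, const char* label) {
    auto start = std::chrono::steady_clock::now();
    std::uint64_t s = std::accumulate(c.begin(), c.end(), std::uint64_t{0});
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::printf("%s: sum=%llu (%lld ms)\n", label,
                static_cast<unsigned long long>(s), static_cast<long long>(ms));
    return s;
}

int main() {
    const std::size_t n = 10'000'000;
    std::vector<std::uint32_t> v(n, 1);      // contiguous storage
    std::list<std::uint32_t> l(v.begin(), v.end());  // one allocation per node
    timed_sum(v, "vector");
    timed_sum(l, "list");
}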
2. Prefetching
The above effect is amplified by a CPU feature called the "prefetcher".
If a chunk has been loaded from main memory, the prefetcher speculatively loads the following chunk into the cache as well, which significantly reduces the cost of accessing data from that part of main memory.
This, of course, is effective if and only if you actually need data from the prefetched chunk.
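To illustrate why vectors benefit and lists do not, a sketch of the two access patterns from the prefetcher's point of view (Node here is a hypothetical hand-rolled list node, not std::list's internal type):

#include <cstddef>
#include <cstdint>
#include <vector>

struct Node {
    std::uint32_t value;
    Node* next;
};

// Vector traversal: addresses follow the fixed stride base + i * 4,
// a pattern the hardware prefetcher easily recognizes and fetches ahead of.
std::uint64_t sum_vector(const std::vector<std::uint32_t>& v) {
    std::uint64_t s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v[i];
    return s;
}

// List traversal: the address of the next node is only known once the
// current node has been loaded (pointer chasing), so the prefetcher
// has almost nothing to work with.
std::uint64_t sum_list(const Node* head) {
    std::uint64_t s = 0;
    for (const Node* p = head; p != nullptr; p = p->next)
        s += p->value;
    return s;
}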
How do vectors typically work internally?
See: C++ vector, what happens whenever it expands/reallocates on the stack?
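As a very rough illustration only (real std::vector uses an allocator, placement construction, and exception handling; this toy version handles only int and ignores copying of the container itself), the core idea is a contiguous heap buffer with a size and a capacity that grows geometrically:

#include <cstddef>
#include <cstring>

// Toy sketch of a vector-like container, NOT the actual std::vector.
class ToyIntVector {
    int* data_ = nullptr;
    std::size_t size_ = 0;
    std::size_t capacity_ = 0;
public:
    void push_back(int x) {
        if (size_ == capacity_) {
            // Grow geometrically (implementations commonly use a factor of
            // 1.5 or 2) so that push_back is amortized O(1); all elements
            // are copied into a new, larger contiguous heap block.
            std::size_t new_cap = capacity_ == 0 ? 1 : capacity_ * 2;
            int* new_data = new int[new_cap];
            if (size_ > 0)
                std::memcpy(new_data, data_, size_ * sizeof(int));
            delete[] data_;
            data_ = new_data;
            capacity_ = new_cap;
        }
        data_[size_++] = x;
    }
    int operator[](std::size_t i) const { return data_[i]; }
    std::size_t size() const { return size_; }
    ~ToyIntVector() { delete[] data_; }
};

Note that reallocation moves every existing element to a new buffer, which is also why growing a vector invalidates pointers and iterators into it.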