To take a step back, there is a key requirement on std::vector: its elements must be stored contiguously. This means you can safely treat a pointer to an element of the vector as a raw pointer to that element type, and therefore use pointer semantics and pointer arithmetic exactly as you would with a fixed-size array.
This also means that accessing any element by its index is very fast and takes the same time regardless of the size of the vector and of which index you access (this is called O(1), or "constant time").
The price you pay for this guarantee is reallocation. When a vector is created, you usually don't know how many elements it will eventually hold, so some amount of contiguous memory is allocated up front for a certain number of elements. If you keep adding elements, you will eventually exceed this initial allocation. But the vector cannot simply grow its memory in place, because your program may have placed some other object in the memory immediately after the vector's current buffer. The only way to keep the elements contiguous is to allocate a new block large enough to hold all the elements, including the extra one, and then copy (or move) every element to this new location.
As others have noted, the fix is to grow the capacity exponentially, allocating 1.5 or 2 times the current capacity each time the vector runs out of room. This makes reallocations increasingly rare as the vector grows, so the cost of appending an element is constant when amortized over many appends.
If instead you want a collection where appending an element is always fast (or at least always takes the same time), you can use a linked list (std::list), but then you give up fast access to an arbitrary element by index ("random access").