Complexity of push in a dynamic-array stack implementation

In a typical implementation of a dynamic array, we double the capacity when there is no room for a new element. With doubling, a push that triggers a resize costs O(n), but the amortized time per push is O(1).

What is the complexity of push if, instead of doubling, we grew the stack's capacity from n to (n + k)?

My approach is as follows

Assume the stack is empty and k = 10. The first resize grows the stack to 10 elements; after 10 elements we grow it to 20 elements, and so on.

The total time spent copying elements is then 10 + 20 + 30 + ...

So the amortized time per push should be on the order of O(n)?

Is my approach right?
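A quick way to check this reasoning is to simulate the growth policy and count how many elements get copied at each resize. This is a sketch; the function name and the push counts are illustrative, not from the question:

```python
def grow_by_k_copies(n_pushes, k=10):
    """Count elements copied when capacity grows by k on each overflow."""
    capacity = size = copies = 0
    for _ in range(n_pushes):
        if size == capacity:   # no room: allocate capacity + k and copy over
            copies += size     # every existing element is copied once
            capacity += k
        size += 1
    return copies

# For 50 pushes with k = 10, the resizes copy 0 + 10 + 20 + 30 + 40 elements.
print(grow_by_k_copies(50))
```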

2 answers

Your calculations are incorrect. There is a reason why a typical implementation of a dynamic array doubles its size (or, more generally, increases its size by x percent).

If you grow the size from 1 up to n = 2^i, the total number of elements you copy is 1 + 2 + 4 + 8 + 16 + ... + 2^i. This sum is less than 2^(i+1), so the total time complexity is O(n) and the amortized time of a single insert is O(1). If n is not a power of two, the calculation is a little more complicated, but the result is the same.

On the other hand, if you grow the size by k, from 0 up to n = i * k, the total number of elements you copy is k + 2k + 3k + ... + ik. This sum equals (ik + k) * (ik / k) / 2 = O(n^2), and the amortized complexity of a single insert is O(n).

Because of this, your solution is much worse than doubling the size.
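The difference is easy to observe empirically. Here is a small sketch (the helper name and the growth lambdas are my own) that counts the total number of copies for the two growth policies:

```python
def total_copies(n_pushes, grow):
    """Total elements copied over n_pushes pushes, given a growth rule."""
    capacity = size = copies = 0
    for _ in range(n_pushes):
        if size == capacity:
            copies += size               # resize: copy all existing elements
            capacity = grow(capacity)
        size += 1
    return copies

n = 100_000
doubling = total_copies(n, lambda c: max(1, 2 * c))  # geometric growth
add_k = total_copies(n, lambda c: c + 10)            # grow by constant k = 10

print(doubling)  # stays below 2n: O(n) total, O(1) amortized per push
print(add_k)     # roughly n^2 / (2k): O(n^2) total, O(n) amortized per push
```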


Depending on how you grow the storage, and provided k is small enough, push can be O(1) in all cases, or something else.

By "how" I mean the following. In managed languages, you create a new array of size n + k, copy the old array (of size n) into it, and finally replace the reference to the old array with a reference to the new one. That is O(1) (array creation, by assumption) + O(n) (copying the data) + O(1) (replacing the reference). We ignore garbage-collector work because it is implementation dependent.

In unmanaged languages, you can use something like realloc, so the old elements may be kept in place without being copied, because the new storage occupies the same memory region, just extended by the required number of elements. In that case push is O(1) in all cases. Note, however, that extending the storage in place is sometimes impossible due to memory fragmentation, and then an approach similar to the managed-language one is used instead (performed implicitly inside the realloc implementation). In that case the complexity is the same as in managed languages: O(n).
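As a concrete illustration of the managed-language approach described above, here is a minimal sketch. The class name, the constant K, and the use of a plain list as the fixed-capacity backing array are all my own illustrative choices, not from the answer:

```python
class ArrayStack:
    """Stack over a fixed-capacity backing array that grows by K slots."""
    K = 10  # illustrative growth constant

    def __init__(self):
        self._items = []  # backing array; len(self._items) is the capacity
        self._size = 0

    def push(self, value):
        if self._size == len(self._items):
            bigger = [None] * (len(self._items) + self.K)  # O(1) allocation (assumed)
            for i in range(self._size):                    # O(n) copy
                bigger[i] = self._items[i]
            self._items = bigger                           # O(1) reference swap
        self._items[self._size] = value
        self._size += 1

    def pop(self):
        self._size -= 1
        value = self._items[self._size]
        self._items[self._size] = None  # release the slot
        return value
```

After 25 pushes with K = 10, the backing array has been reallocated three times and its capacity is 30.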

That is the answer from a practical point of view; for the analytical point of view, I hope the answer above helps. Good luck.


Source: https://habr.com/ru/post/1388397/
