Amortized complexity of std::next_permutation?

I just read this other question about the complexity of next_permutation, and while I'm happy with the answer (O(n)), it seems like the algorithm might admit a nice amortized analysis that shows a lower complexity. Does anyone know of such an analysis?

+18
c++ algorithm big-o stl permutation
Feb 11 '11 at 19:16
3 answers

It looks like I'm going to answer my own question in the affirmative: yes, next_permutation runs in O(1) amortized time.

Before proceeding to a formal proof of this, let's quickly review how the algorithm works. First, it scans backward from the end of the range toward the beginning, identifying the longest contiguous decreasing subsequence in the range that ends at the last element. For example, in 0 3 4 2 1, the algorithm identifies 4 2 1 as this subsequence. Next, it looks at the element immediately before this subsequence (3 in the example above), then finds the smallest element in the subsequence larger than it (4 in the example above). It then swaps those two elements and reverses the subsequence. So, if we started with 0 3 4 2 1, we would swap 3 and 4 to get 0 4 3 2 1, and then reverse the last three elements to get 0 4 1 2 3.

To show that this algorithm runs in amortized O(1), we'll use the potential method. Define Φ to be three times the length of the longest contiguous decreasing subsequence at the end of the sequence. For this analysis, we'll assume that all elements are distinct. Given this, let's think about the runtime of the algorithm. Suppose we scan backward from the end of the sequence and find that the last m elements are part of a decreasing subsequence. This requires m + 1 comparisons. Next, we find, among the elements of that subsequence, the smallest one larger than the element preceding it. In the worst case this takes time proportional to the length of the decreasing subsequence, using a linear scan, for another m comparisons. Swapping the two elements takes, say, 1 credit's worth of time, and reversing the subsequence then requires at most m more operations. Thus the real time required for this step is roughly 3m + 1. However, we have to factor in the change in potential. After we reverse this subsequence of length m, the longest decreasing subsequence at the end of the range has length 1, since reversing the formerly decreasing subsequence leaves the last elements of the range sorted in ascending order. This means our potential changed from Φ = 3m to Φ' = 3 · 1 = 3. Consequently, the net drop in potential is 3 - 3m, so our net amortized time is 3m + 1 + (3 - 3m) = 4 = O(1).

In the preceding analysis, I made the simplifying assumption that all the values are unique. As far as I know, this assumption is necessary for the proof to work. I'm going to think it over and see whether the proof can be modified to work in the case where the elements may contain duplicates, and I'll post an edit to this answer once I've worked through the details.

+16
Feb 11 '11 at 20:09

I'm not sure about the exact implementation of std::next_permutation, but if it is the same as Narayana Pandita's algorithm, as described in the wiki here: http://en.wikipedia.org/wiki/Permutation#Systematic_generation_of_all_permutations ,

then, assuming the elements are distinct, it looks like it is O(1) amortized! (Of course, there may be errors in what follows.)

Let's count the total number of swaps performed.

We get the recurrence relation

T(n+1) = (n+1)T(n) + Θ(n^2)

The (n+1)T(n) term comes from fixing the first element and performing the swaps on the remaining n elements.

The Θ(n^2) term comes from changing the first element. At the point where we change the first element, we do Θ(n) swaps. Doing that n times gives Θ(n^2).

Now let X(n) = T(n)/n!

Then we get

X(n+1) = X(n) + Θ(n^2)/(n+1)!

i.e., there exists some constant C such that

X(n+1) <= X(n) + Cn^2/(n+1)!

Writing down n such inequalities, we get

X(n+1) - X(n) <= Cn^2/(n+1)!

X(n) - X(n-1) <= C(n-1)^2/n!

X(n-1) - X(n-2) <= C(n-2)^2/(n-1)!

...

X(2) - X(1) <= C·1^2/2!

Adding these up gives us X(n+1) - X(1) <= C(\sum_{j=1}^{n} j^2/(j+1)!).

Since the infinite series \sum_{j=1}^{\infty} j^2/(j+1)! converges to some constant, say C', we get X(n+1) - X(1) <= C·C'.

Remember that X(n) is the average number of swaps needed (T(n)/n!).

Thus the average number of swaps is O(1).

Since searching for the element to swap is linear in the number of swaps, it's O(1) amortized even when you take the other operations into account.

+13
Feb 11 '11 at 20:10

Here n denotes the number of elements in the container, not the total number of possible permutations. The algorithm has to iterate over an ordering of all the elements on each call; it takes a pair of bidirectional iterators, which implies that to get to one element the algorithm must first visit the one before it (unless it is the first or last element). A bidirectional iterator allows iterating backwards, so the algorithm can (must, in fact) perform at most half as many swaps as there are elements. I believe the standard could offer an overload for forward iterators, which would support dumber iterators at the cost of n swaps rather than n/2 swaps. But alas, it doesn't.

Of course, measured against the number of possible permutations, the algorithm works in O(1).

0
Feb 11 '11 at 19:22
