Is my analysis of spatial complexity correct?

This is problem 9.5 from Cracking the Coding Interview, 5th edition.

Problem: Write a method to compute all permutations of a string

Here is my solution, coded in Java (I tested it, it works :))

    import java.util.LinkedList;
    import java.util.Queue;

    public static void generatePerm(String s) {
        Queue<Character> poss = new LinkedList<Character>();
        int len = s.length();
        for (int count = 0; count < len; count++)
            poss.add(s.charAt(count));               // one queue entry per input character
        generateRecurse(poss, len, "");
    }

    private static void generateRecurse(Queue<Character> possibles, int n, String word) {
        if (n == 0)
            System.out.println(word);                // word is now a complete permutation
        else {
            for (int count = 0; count < n; count++) {
                char first = possibles.remove();     // pick a head character
                generateRecurse(possibles, n - 1, word + first);
                possibles.add(first);                // put it back for the other branches
            }
        }
    }

I agree with the author that my solution runs in O(n!) time, because the problem is inherently factorial: for a word like "top", there are 3 possibilities for the first letter, 2 for the second, and so on.
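For a concrete check, here is a minimal usage sketch (the wrapper class name is mine, and it assumes the two methods above live in the same class); running it should print all 3! = 6 permutations of "top", one per line:

    public class PermDemo {   // hypothetical wrapper class, just for this sketch
        public static void main(String[] args) {
            generatePerm("top");
            // Expected output (6 = 3! lines), in the order this algorithm visits them:
            // top, tpo, opt, otp, pto, pot
        }
    }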

However, she did not mention space complexity. I know interviewers love to ask for both the time and space complexity of your solution. What would the space complexity of this solution be?

My initial guess was O(n²), because at level n there are n recursive calls, so you add n + (n - 1) + (n - 2) + ... + 1 to get n(n + 1)/2, which is in O(n²). I reasoned that there are n recursive calls at each level because you have to backtrack n times, and that the space complexity is the number of recursive calls your algorithm makes. For example, when considering all permutations of "TOP", the level-3 recursive calls are gR([O, P], 2, "T"), gR([P, T], 2, "O"), and gR([T, O], 2, "P"). Is my analysis of the space complexity correct?

1 answer

I think you got the right answer, but for the wrong reason. The number of recursive calls has nothing to do with it. When you make a recursive call, it adds a certain amount of space to the stack; but when that call returns, the stack space is freed. Suppose you have something like this:

    void method(int n) {
        if (n == 1) {
            for (int i = 0; i < 10000; i++) {
                method(0);
            }
        }
    }

    method(1);

Although method calls itself 10,000 times, there are never more than two calls to method on the stack at once. So the space complexity would be O(1) [constant].
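To make that concrete, here is a rough sketch of the same toy example with counters added (the counter names are mine, not from the answer); it separates "how many calls were made" from "how many frames were alive at once":

    static int totalCalls = 0;   // every call ever made
    static int depth = 0;        // frames of method currently on the stack
    static int maxDepth = 0;     // the most frames ever alive at the same time

    static void method(int n) {
        totalCalls++;
        depth++;
        maxDepth = Math.max(maxDepth, depth);
        if (n == 1) {
            for (int i = 0; i < 10000; i++) {
                method(0);   // each inner call returns before the next one begins
            }
        }
        depth--;
    }

    // After method(1): totalCalls is 10001, but maxDepth is only 2.
    // Stack usage tracks maxDepth, not totalCalls.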

The reason your algorithm has O(n²) space complexity is the word strings. When n gets down to 0, there will be len stack entries for the nested generateRecurse calls. At most there will be len entries on the stack, so the stack usage alone is only O(n); but each of those stack entries has its own word, and all of those strings exist on the heap at the same time; and the lengths of those word parameters are 1, 2, ..., len, which of course add up to (len * (len+1)) / 2, which means the space usage will be O(n²).
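If you want to see that sum fall out of the code, here is a sketch of the same generateRecurse with my own instrumentation added (the liveWordChars counters are not part of the original solution); it adds up the lengths of all the word parameters alive on the stack at the deepest point:

    private static int liveWordChars = 0;      // chars of word params currently on the stack
    private static int maxLiveWordChars = 0;   // the largest that total ever gets

    private static void generateRecurse(Queue<Character> possibles, int n, String word) {
        liveWordChars += word.length();
        maxLiveWordChars = Math.max(maxLiveWordChars, liveWordChars);
        if (n == 0)
            System.out.println(word);
        else {
            for (int count = 0; count < n; count++) {
                char first = possibles.remove();
                generateRecurse(possibles, n - 1, word + first);
                possibles.add(first);
            }
        }
        liveWordChars -= word.length();
    }

    // For an input of length len, maxLiveWordChars ends up as 0 + 1 + ... + len
    // = len * (len + 1) / 2, i.e. O(n²) characters alive on the heap at once.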

MORE ABOUT STACK FRAMES: It seems an explanation of the basics of stack frames would be helpful...

A "stack frame" is simply a region of memory that is part of a "stack". Typically, the stack is a predefined memory area; however, the location and size of the stack frames are not predetermined. When the program is first executed, there will be nothing on the stack (in fact, there will probably be some kind of source data, but let there be nothing there, just to make everything simple). Thus, the memory area on the stack is as follows:

     bottom of stack                                     top of stack
     ----------------------------------------------------------------
     |                           nothing                            |
     ----------------------------------------------------------------
      ^
      +--- stack pointer

(I'm assuming the stack grows upward, from lower addresses to higher ones. Many machines have stacks that grow downward. To keep things simple, I'll continue to assume this is a machine whose stack grows upward.)

When a method (function, procedure, subroutine, whatever) is called, a region of the stack is allocated. The region is big enough to hold the method's local variables (or references to them), its parameters (or references to them), some data so the program knows where to resume when the method returns, and possibly other information; that other information depends heavily on the machine, the programming language, and the compiler. In Java, the first method will be main:

     bottom of stack                                     top of stack
     ----------------------------------------------------------------
     | main frame |                    nothing                      |
     ----------------------------------------------------------------
                   ^
                   +--- stack pointer

Notice that the stack pointer has moved up. Now main calls method1. Since method1 will eventually return to main, main's local variables and parameters have to be preserved for when main gets to resume execution. A new frame of some size is allocated on the stack:

     bottom of stack                                     top of stack
     ----------------------------------------------------------------
     | main frame | method1 frame |             nothing             |
     ----------------------------------------------------------------
                                   ^
                                   +--- stack pointer

and then method1 calls method2:

     bottom of stack                                     top of stack
     ----------------------------------------------------------------
     | main frame | method1 frame | method2 frame |     nothing     |
     ----------------------------------------------------------------
                                                   ^
                                                   +--- stack pointer

Now method2 returns. Once method2 has returned, its parameters and local variables are no longer needed, so the entire frame can be discarded. This is done by moving the stack pointer back to where it was before. (The "previous stack pointer" is one of the things stored in a frame.) The stack is back to this:

     bottom of stack                                     top of stack
     ----------------------------------------------------------------
     | main frame | method1 frame |             nothing             |
     ----------------------------------------------------------------
                                   ^
                                   +--- stack pointer

This means that at this point the machine treats the part of the stack starting at the stack pointer as "unused". It would be wrong to say the method2 frame gets reused. You can't reuse something that has ceased to exist, and the method2 frame no longer exists. Conceptually, all there is now is a big empty area of the stack. If method1 calls another method, whether it's method2 again, method1 recursively, System.out.println, or anything else, a new frame is created at the spot the stack pointer currently points to. That frame may be smaller than, the same size as, or larger than the old method2 frame. It will occupy part or all of the memory where the method2 frame used to be. If it's another call to method2, it doesn't matter whether it's called with the same or different parameters. It doesn't matter because the program doesn't remember which parameters were used last time. All it knows is that the memory area starting at the stack pointer is free and available for use. The program has no idea which frame used to live there. That frame is gone, gone, gone.
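For reference, the call chain the diagrams above walk through looks like this in plain Java (a minimal sketch; the class name is mine, the method names just mirror the diagrams):

    public class StackDemo {
        public static void main(String[] args) {
            method1();   // main's frame stays on the stack while method1 runs
        }

        static void method1() {
            method2();   // method1's frame stays on the stack while method2 runs
        }

        static void method2() {
            // Right here, three frames are on the stack: main, method1, method2.
            // When method2 returns, its frame is popped and that space is free again.
        }
    }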

If you can follow all this, you can see that when computing space complexity, and looking only at the amount of space used by the stack, the only thing that matters is: how many frames can exist on the stack at any one time? Frames that may have existed in the past but are no longer live are irrelevant to the calculation, regardless of which parameters the methods were called with.
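One rough way to check this empirically (my own sketch, not from the answer) is to print the live stack depth when a recursion bottoms out, using the JDK's Thread.getStackTrace(); the exact numbers include JVM and main frames, but the depth grows with n, not with the total number of calls:

    public class DepthCheck {
        public static void main(String[] args) {
            recurse(3);
        }

        static void recurse(int n) {
            if (n == 0) {
                // Frames currently on this thread's stack (approximate count).
                System.out.println("frames: " + Thread.currentThread().getStackTrace().length);
                return;
            }
            for (int i = 0; i < n; i++) {
                recurse(n - 1);   // many calls in total, but at most n + 1 on the stack at once
            }
        }
    }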

(P.S. In case anyone was planning to point out how I'm technically wrong about some detail or other: I already know this is a gross oversimplification.)
