Is recursion worse than iteration?

A few days ago, someone told me that recursion is better than iteration and should be used whenever possible.

So I read up on recursion and wrote a simple program to compute the factorial of a number. This is the recursive version:

```python
def fact(n):
    if n == 1:
        return 1
    return n * fact(n - 1)
```

and although this works fine, it raises RuntimeError: maximum recursion depth exceeded as soon as n exceeds 997 (CPython's default recursion limit is 1000).

So I wrote a simple function that does the same thing, but with a for loop:

```python
def fact(n):
    a = 1
    for x in range(0, n, 1):
        a = a * (n - x)
    return a
```

For n up to 10000 it returns an answer within about 150 ms.

So I thought recursion might at least be faster for small numbers, but no: it takes longer there too. (The original post included a timing chart, "recursion and iteration time", comparing the two.)

So my question is:
Is there any reason to use recursion when writing Python programs?
and:
Are there any problems that can only be solved with recursion?

3 answers

There are no problems that must be solved with explicit recursion, and no problems that must be solved with explicit iteration: Church and Turing proved that the two are interchangeable.

However, there are problems whose code is cleaner and easier to reason about if you use recursion. In fact, the Python standard library, and even the C code of CPython itself, uses recursion internally in many places, and can therefore hit the recursion limit, for example when printing the repr of deeply nested tuples:

```
>>> from functools import reduce
>>> x = reduce(lambda x, y: (x,), range(10000))
>>> x  # recursion limit hit in C code
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: maximum recursion depth exceeded while getting the repr of a tuple
>>> import pprint
>>> pprint.pprint(x)  # recursion limit hit in Python standard library code
Traceback (most recent call last):
  ...
  File "/usr/lib/python3.4/pprint.py", line 390, in _safe_repr
    orepr, oreadable, orecur = _safe_repr(o, context, maxlevels, level)
  File "/usr/lib/python3.4/pprint.py", line 340, in _safe_repr
    if issubclass(typ, dict) and r is dict.__repr__:
RuntimeError: maximum recursion depth exceeded while calling a Python object
```

You can always convert a recursive algorithm into an iterative one with a finite state machine and an explicit stack of intermediate arguments, because that is essentially what the CPU does to implement recursion in the first place.
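As a minimal sketch of that conversion (the name `fact_stack` is mine, not from the original answer), here is the recursive factorial from the question rewritten with an explicit stack instead of call frames:

```python
def fact_stack(n):
    """Factorial via an explicit stack, mimicking what recursive calls do."""
    stack = []
    while n > 1:            # "call" phase: push each pending multiplier
        stack.append(n)
        n -= 1
    result = 1
    while stack:            # "return" phase: unwind the stack
        result *= stack.pop()
    return result
```

Because the stack is an ordinary Python list on the heap, this version handles values of n far beyond the interpreter's recursion limit.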


I can't say whether it is faster, but recursion definitely has its place in programming. For example, consider the following function, which "flattens" nested lists into one big flat list:

```python
def expand_list(lst):
    result = []
    for element in lst:
        if isinstance(element, list):
            result += expand_list(element)
        else:
            result.append(element)
    return result
```

(For example, expand_list([1, 2, 3, [4, 5, [6, 7]]]) returns [1, 2, 3, 4, 5, 6, 7].)

Plain iteration does not handle this nearly as well, because the nesting depth is arbitrary.

(Side note: library helpers exist for flattening like this; the above was just an example.)


Iteration in general (and in this case) is more efficient in terms of performance when the operations are equivalent. Recursion has more overhead, since every recursive call costs a new stack frame and function-call machinery, and CPython does not optimize tail calls.
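You can measure that call overhead yourself. A hypothetical micro-benchmark (the function names and the chosen n are mine):

```python
import timeit

def fact_rec(n):
    # recursive factorial: one new stack frame per multiplication
    return 1 if n <= 1 else n * fact_rec(n - 1)

def fact_iter(n):
    # iterative factorial: a single frame, one loop
    a = 1
    for i in range(2, n + 1):
        a *= i
    return a

t_rec = timeit.timeit(lambda: fact_rec(500), number=1000)
t_iter = timeit.timeit(lambda: fact_iter(500), number=1000)
print(t_rec, t_iter)  # on most machines t_rec comes out noticeably larger
```

Exact numbers depend on the machine and interpreter, but the gap is due almost entirely to the per-call overhead the answer describes.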

https://en.wikipedia.org/wiki/Recursion_(computer_science) says

"The power of recursion obviously lies in the ability to define an infinite set of objects by a finite statement."

but the use case must be naturally recursive for recursion to be the better choice.

Relatedly, Python has a built-in function that returns the maximum recursion depth, if you find it useful: https://docs.python.org/2/library/sys.html#sys.getrecursionlimit
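A quick sketch of inspecting and adjusting that limit:

```python
import sys

# Inspect and adjust the interpreter's recursion limit. Raising it lets
# deeper recursion succeed, but a value that is far too high can crash
# the process with a C-level stack overflow, so use it with care.
print(sys.getrecursionlimit())  # typically 1000 by default
sys.setrecursionlimit(5000)     # the recursive fact() now works past n = 997
```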


Source: https://habr.com/ru/post/1246222/

