Disassembling it tells you that no magic optimizations were applied here; it really is just a reduce over a genexpr. Python, it turns out, is simply up to the challenge, even if that surprises you.
>>> import dis
>>> dis.dis(f3)
  5           0 LOAD_GLOBAL              0 (reduce)
              3 LOAD_GLOBAL              1 (operator)
              6 LOAD_DEREF               1 (d)
              9 LOAD_CONST               1 (1)
             12 BINARY_SUBTRACT
             13 CALL_FUNCTION            1
             16 LOAD_CLOSURE             0 (x)
             19 BUILD_TUPLE              1
             22 LOAD_CONST               2 (<code object <genexpr> at 0x7f32d325f830, file "<stdin>", line 5>)
             25 MAKE_CLOSURE             0
             28 LOAD_GLOBAL              2 (xrange)
             31 LOAD_FAST                1 (y)
             34 CALL_FUNCTION            1
             37 GET_ITER
             38 CALL_FUNCTION            1
             41 CALL_FUNCTION            2
             44 RETURN_VALUE
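For reference, that bytecode decompiles back to exactly the lambda you would expect. This is a sketch reconstructed from the disassembly, not the question's source; operator here is the question's recursive function and d is a free variable from the enclosing scope:

# reconstructed from the bytecode above, not copied from the question:
# call operator(d - 1), build a genexpr yielding x once per step of
# xrange(y), and reduce one with the other
lambda x, y: reduce(operator(d - 1), (x for i in xrange(y)))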
As for your specific f5(2, 4) call, it actually does not perform that many operations:
>>> counter = 0
>>> def adder(x, y):
...     global counter
...     counter += 1
...     return x + y
...
>>> def op(d):
...     if d <= 1: return adder
...     return lambda x, y: reduce(op(d-1), (x for i in xrange(y)))
...
>>> op(5)(2, 4)
>>> counter
65035
>>> counter = 0
>>> op(3)(4, 100)
>>> counter
297
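Where those two counts come from (my own accounting, not part of the original answer): op(2)(a, b) costs b - 1 additions, and every level above that is just a reduce over y items, i.e. y - 1 calls into the level below:

# op(5)(2, 4) unrolls to op(4)(op(4)(op(4)(2, 2), 2), 2):
#   op(4)(2, 2)   = op(3)(2, 2)     ->   1 addition              (= 4)
#   op(4)(4, 2)   = op(3)(4, 4)     ->   3 * 3 = 9 additions     (= 256)
#   op(4)(256, 2) = op(3)(256, 256) -> 255 * 255 = 65025 additions
>>> 1 + 9 + 255 * 255
65035
# op(3)(4, 100) is 99 multiplications-by-4, each done as 3 additions:
>>> 99 * 3
297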
65k additions (never mind the 297 for the exponentiation) are nothing for today's well-optimized processors, so it is no surprise that the call finishes instantly. Try increasing one of the arguments to see how quickly it crosses the boundary of what can be computed quickly.
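To put a number on that boundary (my estimate, not from the original answer): raising the last argument by just one makes the final step op(4)(256 ** 256, 2) = op(3)(N, N) with N = 256 ** 256, which costs roughly (N - 1) ** 2 pairwise additions:

>>> N = 256 ** 256        # this is op(5)(2, 4) itself
>>> len(str(N))           # a 617-digit number
617
# op(5)(2, 5) would therefore need on the order of 10 ** 1233
# additions -- hopeless on any hardware.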
By the way, operator is a built-in module; you should not name your own functions after it.
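For instance (a trivial sketch of mine), once the shadowing name is gone, the module already provides the two-argument add, so the base case could simply be operator.add instead of a hand-rolled adder:

>>> import operator
>>> reduce(operator.add, (2 for i in xrange(4)))   # 2 * 4, via 3 additions
8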