I was playing with Python and decided to code the factorial function in two different ways:
import operator

def fact1(n):
    return reduce(operator.__mul__, xrange(1, n + 1))
and
def fact2(n):
    return reduce(int.__mul__, xrange(1, n + 1))
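For small inputs the two versions agree (a quick check, assuming Python 2, where reduce and xrange are built-ins):

assert fact1(10) == fact2(10) == 3628800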
I was very surprised when I measured them:
timeit fact1(10)
1000000 loops, best of 3: 933 ns per loop
timeit fact2(10)
1000000 loops, best of 3: 1.46 us per loop
operator.__mul__ seems to be noticeably faster than int.__mul__ (roughly 1.5x here), not to mention that fact2 breaks once the factorial values get big enough that Python converts them from int to long.
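To illustrate that side note, here is a minimal sketch of what I mean, assuming CPython 2, where int and long are separate types:

import operator
import sys

big = sys.maxint + 1              # already a long, too large for a plain int
print operator.__mul__(big, 2)    # works: falls back to ordinary multiplication, returns a long
print int.__mul__(2, big)         # NotImplemented: int.__mul__ only handles plain int operands
int.__mul__(big, 2)               # TypeError: the descriptor requires an 'int', not a 'long'

So once the running product inside reduce gets promoted to long, fact2 stops working, while fact1 is unaffected.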
I'm new to Python, so I don't know how these functions differ. What is the main reason for the performance difference?