I believe this is faster than other pure python methods:
    from math import factorial
    from operator import mul

    def n_choose_k(n, k):
        # Build the shorter of the two rising products, then divide by the
        # matching factorial; // is exact because the result is an integer.
        # (Python 2: reduce is a builtin and xrange is available.)
        if k < n - k:
            return reduce(mul, xrange(n - k + 1, n + 1)) // factorial(k)
        else:
            return reduce(mul, xrange(k + 1, n + 1)) // factorial(n - k)
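On Python 3, where xrange is gone and reduce moved to functools, the same algorithm can be sketched like this (the added initial value 1 for reduce is my own small robustness tweak so that k == 0 and k == n do not raise on an empty range):

```python
from functools import reduce  # reduce is no longer a builtin in Python 3
from math import factorial
from operator import mul

def n_choose_k(n, k):
    # Same algorithm as above: multiply the shorter rising product and
    # divide by the matching factorial; // is exact here because every
    # binomial coefficient is an integer.
    if k < n - k:
        return reduce(mul, range(n - k + 1, n + 1), 1) // factorial(k)
    return reduce(mul, range(k + 1, n + 1), 1) // factorial(n - k)

print(n_choose_k(4, 2))   # 6
print(n_choose_k(10, 5))  # 252
```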
It is slow compared to NumPy solutions, but note that NumPy does not work with Python's unlimited-precision integers. That means the slow pure-Python version returns correct results, whereas NumPy's default behaviour is not ideal for combinatorics, at least once very large integers are involved.

For example:
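To see those unlimited-precision integers at work without SciPy (this sketch assumes Python 3.8+, where the standard library gained math.comb):

```python
import math

# Exact binomial coefficient as an arbitrary-precision Python int.
exact = math.comb(1000, 250)
print(exact.bit_length())          # 807 bits needed for the exact value

# A C double (what NumPy's float64 is) keeps only 53 significand bits,
# so a round trip through float loses the low-order digits.
print(int(float(exact)) == exact)  # False
```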
    In [1]: from scipy.misc import comb

    In [2]: from scipy.special import binom

    In [3]: %paste
    from math import factorial
    from operator import mul
    def n_choose_k(n, k):
        if k < n - k:
            return reduce(mul, xrange(n - k + 1, n + 1)) // factorial(k)
        else:
            return reduce(mul, xrange(k + 1, n + 1)) // factorial(n - k)
Not surprisingly, the error is large: it is simply a truncation effect. Why truncation? binom returns a NumPy float64, whose significand holds only 53 bits (and even NumPy's integer dtypes are capped at 64 bits), while representing this particular value exactly requires rather more than that:
    In [8]: from math import log

    In [9]: log(n_choose_k(1000, 250), 2)
    Out[9]: 806.1764820287578

    In [10]: type(binom(1000, 250))
    Out[10]: numpy.float64
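The 53-bit ceiling can be demonstrated without SciPy at all: 2**53 is the last point at which every integer is exactly representable in an IEEE-754 double (the same storage float64 uses), and one past it already collides with its neighbour:

```python
# 2**53 + 1 is not representable as a double; it rounds back to 2**53,
# so the two distinct integers become the same float.
print(float(2**53) == float(2**53 + 1))  # True

# Python compares int to float exactly, so the loss is visible directly.
print(float(2**53 + 1) == 2**53 + 1)     # False
```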