It would be wrong to say "Matlab is always faster than NumPy", or vice versa. Often their performance is comparable. To get good performance out of NumPy, keep in mind that NumPy's speed comes from calling underlying functions written in C/C++/Fortran. It performs well when you apply those functions to whole arrays. In general, you get poorer performance when you call NumPy functions on small arrays or scalars inside a Python loop.
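A small illustration of that point (the array here is my own stand-in): calling np.sin once per scalar in a Python loop versus once on the whole array:

```python
import numpy as np

x = np.linspace(0, 1, 10 ** 5)

# One NumPy call per scalar: the per-call overhead dominates.
slow = np.array([np.sin(xi) for xi in x])

# One NumPy call on the whole array: the loop runs in compiled code.
fast = np.sin(x)

assert np.allclose(slow, fast)
```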
What is wrong with a Python loop, you ask? Every iteration through a Python loop calls the iterator's next method. Every use of indexing with [] is a call to __getitem__. Every += is a call to __iadd__. Every dotted attribute lookup (such as np.dot) involves function calls. These function calls add up to a significant amount of overhead. These hooks are what give Python its expressive power: indexing a string means something different than indexing a dict, for example. Same syntax, different meanings. The magic is achieved by giving objects different __getitem__ methods.
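To make "same syntax, different meanings" concrete, here is a small sketch (the Squares class is my own invention) showing three different __getitem__ methods behind the same [] syntax:

```python
class Squares(object):
    def __getitem__(self, i):
        return i * i          # here [] computes rather than looks up

s = Squares()
print(s[4])           # 16 -- Squares.__getitem__
print([10, 20][1])    # 20 -- list.__getitem__ (by position)
print({'a': 1}['a'])  # 1  -- dict.__getitem__ (by key)
```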
But this expressive power comes at a cost. So when you do not need all that dynamic expressiveness and you do want performance, try to limit yourself to NumPy function calls on whole arrays.
So remove the for-loop; use vectorized equations whenever possible. For example, instead of
```python
for i in range(m):
    delta3 = -(x[i, :] - a3[i, :]) * a3[i, :] * (1 - a3[i, :])
```
you can compute delta3 for every i all at once:
```python
delta3 = -(x - a3) * a3 * (1 - a3)
```
Whereas in the for-loop delta3 is a vector, with the vectorized equation delta3 is a matrix.
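As a sanity check (with sizes I've made up), the rows produced by the loop and the vectorized matrix agree:

```python
import numpy as np

m, n = 1000, 64                  # made-up sizes for the check
x = np.random.random((m, n))
a3 = np.random.random((m, n))

# Loop version: each delta3 is one row (a vector).
rows = np.empty((m, n))
for i in range(m):
    rows[i, :] = -(x[i, :] - a3[i, :]) * a3[i, :] * (1 - a3[i, :])

# Vectorized version: delta3 is the whole (m, n) matrix at once.
delta3 = -(x - a3) * a3 * (1 - a3)

assert np.allclose(rows, delta3)
```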
Some of the computations in the for-loop do not depend on i and can therefore be lifted outside the loop. For example, sum2 looks like a constant:
```python
sum2 = sparse.beta * (-float(sparse.rho) / rhoest
                      + float(1.0 - sparse.rho) / (1.0 - rhoest))
```
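A sketch of that hoisting, with stand-in values for beta, rho and rhoest:

```python
import numpy as np

beta, rho = 3.0, 0.05                             # stand-in hyperparameters
rhoest = 0.05 + 0.9 * np.random.random((25, 1))   # stand-in, kept inside (0, 1)
m = 10 ** 4

# Compute the loop-invariant term once, before the loop ...
sum2 = beta * (-rho / rhoest + (1.0 - rho) / (1.0 - rhoest))

for i in range(m):
    pass  # ... and simply reuse sum2 here; it never changes with i
```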
Here is a runnable benchmark comparing an alternative implementation (alt) against your original code (orig). My timeit test shows a 6.8x speed improvement:
```
In [52]: %timeit orig()
1 loops, best of 3: 495 ms per loop

In [53]: %timeit alt()
10 loops, best of 3: 72.6 ms per loop
```
```python
import numpy as np


class Bunch(object):
    """ http://code.activestate.com/recipes/52308 """
    def __init__(self, **kwds):
        self.__dict__.update(kwds)


m, n, p = 10 ** 4, 64, 25

sparse = Bunch(
    theta1=np.random.random((p, n)),
    theta2=np.random.random((n, p)),
    b1=np.random.random((p, 1)),
    b2=np.random.random((n, 1)),
)

x = np.random.random((m, n))
a3 = np.random.random((m, n))
a2 = np.random.random((m, p))
a1 = np.random.random((m, n))
sum2 = np.random.random((p, ))
sum2 = sum2[:, np.newaxis]                       # (p, 1)


def orig():
    partial_j1 = np.zeros(sparse.theta1.shape)   # (p, n)
    partial_j2 = np.zeros(sparse.theta2.shape)   # (n, p)
    partial_b1 = np.zeros(sparse.b1.shape)       # (p, 1)
    partial_b2 = np.zeros(sparse.b2.shape)       # (n, 1)
    delta3t = (-(x - a3) * a3 * (1 - a3)).T      # (n, m)
    for i in range(m):
        delta3 = delta3t[:, i:(i + 1)]           # (n, 1)
        sum1 = np.dot(sparse.theta2.T, delta3)   # (p, 1)
        delta2 = (sum1 + sum2) * a2[i:(i + 1), :].T * (1 - a2[i:(i + 1), :].T)  # (p, 1)
        partial_j1 += np.dot(delta2, a1[i:(i + 1), :])   # (p, n)
        partial_j2 += np.dot(delta3, a2[i:(i + 1), :])   # (n, p)
        partial_b1 += delta2
        partial_b2 += delta3
    return partial_j1, partial_j2, partial_b1, partial_b2
```
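The alt function that the timings refer to did not survive in the snippet above. Here is a sketch of what the vectorized version looks like, reconstructed from the delta3 and sum2 transformations described earlier (shape comments follow the same convention):

```python
def alt():
    # A reconstruction; the original alt() was lost in formatting.
    delta3 = -(x - a3) * a3 * (1 - a3)               # (m, n)
    # Row i equals (theta2.T . delta3_i).T, so one matrix product
    # replaces the m separate np.dot calls of the loop:
    sum1 = np.dot(delta3, sparse.theta2)             # (m, p)
    delta2 = (sum1 + sum2.T) * a2 * (1 - a2)         # (m, p); sum2.T broadcasts
    partial_j1 = np.dot(delta2.T, a1)                # (p, n)
    partial_j2 = np.dot(delta3.T, a2)                # (n, p)
    partial_b1 = delta2.sum(axis=0)[:, np.newaxis]   # (p, 1)
    partial_b2 = delta3.sum(axis=0)[:, np.newaxis]   # (n, 1)
    return partial_j1, partial_j2, partial_b1, partial_b2
```

Before trusting the speedup, it is worth asserting that alt() and orig() return close results on the same random data.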
Tip: note that I left the shapes of all intermediate arrays in the comments. Knowing the shapes of the arrays helped me understand what your code does, and shapes can guide you toward the right NumPy functions. At the very least, paying attention to shapes can help you check whether an operation is sensible. For example, when you compute np.dot(A, B) with A.shape = (n, m) and B.shape = (m, p), the result is an array of shape (n, p).
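A quick check of that shape rule (sizes are arbitrary):

```python
import numpy as np

n, m, p = 3, 4, 5
A = np.random.random((n, m))
B = np.random.random((m, p))

# The inner dimension m must match; the result drops it.
assert np.dot(A, B).shape == (n, p)
```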
It can also help to build your arrays in C_CONTIGUOUS order (at least when using np.dot). Doing so can give something like a 3x speedup:

Below, x holds the same values as xf, except that x is C_CONTIGUOUS and xf is F_CONTIGUOUS; the same relationship holds for y and yf.
```python
import numpy as np

m, n, p = 10 ** 4, 64, 25

x = np.random.random((n, m))
xf = np.asarray(x, order='F')

y = np.random.random((m, n))
yf = np.asarray(y, order='F')

assert np.allclose(x, xf)
assert np.allclose(y, yf)
assert np.allclose(np.dot(x, y), np.dot(xf, y))
assert np.allclose(np.dot(x, y), np.dot(xf, yf))
```
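If you want to confirm the layouts, each array's flags attribute reports them; continuing from the snippet above:

```python
# x was built in C order; xf is its Fortran-ordered copy.
print(x.flags['C_CONTIGUOUS'], x.flags['F_CONTIGUOUS'])    # True  False
print(xf.flags['C_CONTIGUOUS'], xf.flags['F_CONTIGUOUS'])  # False True
```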
%timeit benchmarks show the difference in speed:
```
In [50]: %timeit np.dot(x, y)
100 loops, best of 3: 12.9 ms per loop

In [51]: %timeit np.dot(xf, y)
10 loops, best of 3: 27.7 ms per loop

In [56]: %timeit np.dot(x, yf)
10 loops, best of 3: 21.8 ms per loop

In [53]: %timeit np.dot(xf, yf)
10 loops, best of 3: 33.3 ms per loop
```
Regarding benchmarking in Python:
It can be misleading to compare code speed in Python using the difference between pairs of time.time() calls. You need to repeat the measurement many times. It is better to disable the automatic garbage collector. It is also important to measure large spans of time (for example, at least 10 seconds' worth of repetitions) to avoid errors due to poor clock-timer resolution and to reduce the significance of the time.time call overhead. Instead of writing all that code yourself, Python provides the timeit module. I am essentially using it to time the code snippets here, except that I am calling it through IPython's %timeit magic. In the question I linked to, according to time.time the two code snippets differed by a factor of 1.7x, while tests using timeit showed that they ran in essentially identical amounts of time.
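For completeness, a minimal sketch of using the timeit module directly (the statement timed here is my own stand-in; substitute your own code):

```python
import timeit

# Stand-in setup and statement to measure.
setup = "import numpy as np; x = np.random.random((1000, 64))"
stmt = "np.dot(x.T, x)"

# repeat() runs the statement `number` times per trial and returns one
# total time per trial; timeit disables the garbage collector while timing.
trials = timeit.repeat(stmt, setup=setup, repeat=3, number=100)
print(min(trials) / 100)  # best per-call time, in seconds
```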