The "relu prime", or gradient of the ReLU function, is better known as the "step function".
NumPy 1.13 introduced a ufunc for exactly this, np.heaviside:

import numpy as np

def reluprime(x):
    # the second argument is the value returned at x == 0
    return np.heaviside(x, 0)
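For instance, with the second argument set to 0, the derivative at exactly x == 0 is taken to be 0 (a quick illustration; the sample values are mine):

>>> np.heaviside(np.array([-1.5, 0.0, 2.0]), 0)
array([0., 0., 1.])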
Timing it on my machine, however, shows that it performs rather poorly; the simpler alternatives do less work:
In [1]: x = np.random.randn(100000)

In [2]: %timeit np.heaviside(x, 0)  # mine
1.31 ms ± 58.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [3]: %timeit np.where(x > 0, 1.0, 0.0)  # Jonas Adler's
658 µs ± 74.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [4]: %timeit (x > 0).astype(x.dtype)  # Miriam Farber's
172 µs ± 34.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
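If you want the speed of the fastest variant behind a readable name, a minimal sketch (the helper name reluprime_fast is mine, not from the answers benchmarked above):

import numpy as np

def reluprime_fast(x):
    # fastest variant above: cast the boolean mask to x's dtype
    return (x > 0).astype(x.dtype)

# sanity check: all three variants agree elementwise
x = np.random.randn(100000)
assert np.array_equal(np.heaviside(x, 0), reluprime_fast(x))
assert np.array_equal(np.where(x > 0, 1.0, 0.0), reluprime_fast(x))

All three treat x == 0 the same way here, since np.heaviside(x, 0) was called with 0 as its value at the origin.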