Why is using the tanh definition of the logistic sigmoid faster than scipy's expit?

I am using a logistic sigmoid in an application. I compared the running time of scipy.special.expit against the hyperbolic tangent definition of the sigmoid.

I found that the hyperbolic tangent version was about 3 times faster. What's going on here? I also timed a sorted array to see if the result was any different.
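For reference, the two expressions compute the same function. The identity follows by multiplying the numerator and denominator of the logistic sigmoid by $e^{x/2}$:

```latex
\sigma(x) = \frac{1}{1 + e^{-x}}
          = \frac{e^{x/2}}{e^{x/2} + e^{-x/2}}
          = \frac{1}{2}\,\frac{e^{x/2} - e^{-x/2}}{e^{x/2} + e^{-x/2}} + \frac{1}{2}
          = \frac{1}{2}\tanh\!\left(\frac{x}{2}\right) + \frac{1}{2}
```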

Here is an example that was run in IPython:

```
In [1]: from scipy.special import expit

In [2]: myexpit = lambda x: 0.5*tanh(0.5*x) + 0.5

In [3]: x = randn(100000)

In [4]: allclose(expit(x), myexpit(x))
Out[4]: True

In [5]: timeit expit(x)
100 loops, best of 3: 15.2 ms per loop

In [6]: timeit myexpit(x)
100 loops, best of 3: 4.94 ms per loop

In [7]: y = sort(x)

In [8]: timeit expit(y)
100 loops, best of 3: 15.3 ms per loop

In [9]: timeit myexpit(y)
100 loops, best of 3: 4.37 ms per loop
```
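The IPython session above can be reproduced as a standalone script (a sketch; the absolute timings are machine-dependent and will differ from the ones shown):

```python
import timeit

import numpy as np
from scipy.special import expit


def myexpit(x):
    # Logistic sigmoid via the identity sigma(x) = 0.5*tanh(0.5*x) + 0.5
    return 0.5 * np.tanh(0.5 * x) + 0.5


x = np.random.randn(100000)

# The two implementations agree to floating-point tolerance.
assert np.allclose(expit(x), myexpit(x))

# Mirror IPython's "%timeit": best of 3 runs of 100 loops each.
t_expit = min(timeit.repeat(lambda: expit(x), number=100, repeat=3))
t_tanh = min(timeit.repeat(lambda: myexpit(x), number=100, repeat=3))
print("expit :", t_expit)
print("tanh  :", t_tanh)
```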

Edit:

Machine information:

  • Ubuntu 16.04
  • RAM: 7.4 GB
  • Intel Core i7-3517U CPU @ 1.90GHz × 4

Numpy / Scipy Information:

```
In [1]: np.__version__
Out[1]: '1.12.0'

In [2]: np.__config__.show()
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
blis_info:
    NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
lapack_mkl_info:
    NOT AVAILABLE
blas_mkl_info:
    NOT AVAILABLE

In [3]: import scipy

In [4]: scipy.__version__
Out[4]: '0.18.1'
```
1 answer

Edit:

I will point future readers with the same problem to this question.


To summarize useful comments:

"Why is using the tanh definition of a logistic sigmoid faster than scipy expit?"

Answer: it isn't, in general; the difference comes down to how the C tanh and exp functions behave on my particular machine.

It turns out that on my machine, the C function for tanh is faster than the one for exp. Why that is the case is apparently a matter for a separate question. When I run the C++ code below, I see

```
tanh: 5.22203
exp: 14.9393
```

which is consistent with the ~3x speedup of the tanh-based function when called from Python. The strange thing is that when I run identical code on a separate machine with the same OS, I get similar timings for tanh and exp.

```cpp
#include <iostream>
#include <cmath>
#include <ctime>
using namespace std;

int main()
{
    double a = -5;
    double b = 5;
    int N = 10001;
    double x[10001];
    double y[10001];
    double h = (b - a) / (N - 1);  // grid spacing over [a, b]
    clock_t begin, end;

    for (int i = 0; i < N; i++)
        x[i] = a + i * h;

    begin = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            y[i] = tanh(x[i]);
    end = clock();
    cout << "tanh: " << double(end - begin) / CLOCKS_PER_SEC << "\n";

    begin = clock();
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            y[i] = exp(x[i]);
    end = clock();
    cout << "exp: " << double(end - begin) / CLOCKS_PER_SEC << "\n";

    return 0;
}
```
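To check that the gap really comes from the underlying math functions rather than from SciPy's dispatch overhead, one can also time NumPy's tanh and exp ufuncs directly over the same interval (a sketch; the relative timings depend on the libm implementation your NumPy build ends up calling):

```python
import timeit

import numpy as np

# Same grid as the C++ benchmark: 10001 points over [-5, 5].
x = np.linspace(-5.0, 5.0, 10001)

t_tanh = min(timeit.repeat(lambda: np.tanh(x), number=100, repeat=3))
t_exp = min(timeit.repeat(lambda: np.exp(x), number=100, repeat=3))
print("np.tanh:", t_tanh)
print("np.exp :", t_exp)
```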

Source: https://habr.com/ru/post/1265930/

