SSE2 floating point multiplication

I tried to port code from the FANN library (a neural network library written in C) to SSE2, but performance got worse compared to the regular code: one run takes 5.50 minutes with my SSE2 implementation versus 5.20 minutes without it.

How can SSE2 be slower than the regular code? Maybe because of _mm_set_ps ? I use the Apple LLVM compiler (Xcode 4) to compile the code (all SSE extension flags are enabled, the optimization level is -Os ).

Code without SSE2

  neuron_sum += fann_mult(weights[i],     neurons[i].value)
              + fann_mult(weights[i + 1], neurons[i + 1].value)
              + fann_mult(weights[i + 2], neurons[i + 2].value)
              + fann_mult(weights[i + 3], neurons[i + 3].value);
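(For context: in FANN's float build, fann_mult reduces to plain multiplication, so the unrolled line above computes a 4-element dot product. A simplified standalone sketch, with the macro written out as an assumption rather than copied from FANN, and plain float arrays in place of the neurons[].value struct accesses:)

```c
/* Assumption: fann_mult(x, y) is plain multiplication in float mode. */
#define fann_mult(x, y) ((x) * (y))

/* Scalar 4-element dot product, mirroring the unrolled FANN line.
   Uses flat arrays instead of neurons[i].value for simplicity. */
static float dot4_scalar(const float *w, const float *v)
{
    return fann_mult(w[0], v[0]) + fann_mult(w[1], v[1])
         + fann_mult(w[2], v[2]) + fann_mult(w[3], v[3]);
}
```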

SSE2 Code

  __m128 a_line = _mm_loadu_ps(&weights[i]);
  __m128 b_line = _mm_set_ps(neurons[i + 3].value, neurons[i + 2].value,
                             neurons[i + 1].value, neurons[i].value);
  __m128 c_line = _mm_mul_ps(a_line, b_line);
  neuron_sum += c_line[0] + c_line[1] + c_line[2] + c_line[3];
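(Side note: subscripting a __m128 like c_line[0] is a GCC/Clang vector extension, not standard C. A portable sketch of the same horizontal sum, storing the vector to a temporary array first:)

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Portable horizontal sum of a __m128.  Indexing c_line[0] as in the
   question is a GCC/Clang extension; _mm_storeu_ps to a local array
   works with any SSE2-capable compiler. */
static float hsum_ps(__m128 v)
{
    float tmp[4];
    _mm_storeu_ps(tmp, v);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}
```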
1 answer

To be able to see the acceleration here, you need to do the following:

  • make sure that weights[i] is 16-byte aligned, then use _mm_load_ps instead of _mm_loadu_ps
  • reorganize neurons[] so that it is SoA (structure of arrays) instead of AoS (and also 16-byte aligned), then use _mm_load_ps to load 4 values at a time
  • move the horizontal sum out of the loop (there is a loop, right?) - just keep 4 partial sums in a vneuron_sum vector, then do one final horizontal sum on this vector after the loop
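The three points above can be sketched together as follows. This is a minimal illustration, not FANN's actual code: it assumes the neuron values have already been reorganized into a flat float array, that both arrays are 16-byte aligned, and that n is a multiple of 4; the function name is hypothetical.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* SoA layout, aligned loads, and a single horizontal sum after the
   loop.  Assumes weights and values are 16-byte aligned and n is a
   multiple of 4. */
static float dot_sse2(const float *weights, const float *values, int n)
{
    __m128 vsum = _mm_setzero_ps();          /* 4 partial sums */
    for (int i = 0; i < n; i += 4) {
        __m128 w = _mm_load_ps(&weights[i]); /* aligned loads */
        __m128 v = _mm_load_ps(&values[i]);
        vsum = _mm_add_ps(vsum, _mm_mul_ps(w, v));
    }
    /* one horizontal sum, outside the loop */
    float tmp[4];
    _mm_storeu_ps(tmp, vsum);
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
}
```

Note that _mm_load_ps will fault (or silently misbehave) if the pointer is not actually 16-byte aligned, which is why the alignment requirement in the first bullet matters.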

Even then, you will not see a huge speedup, since you perform only one arithmetic operation per 2 loads and 1 store. Since most modern x86 CPUs have two scalar FPUs anyway, you probably will not come close to the theoretical 4x speedup of 128-bit float SIMD; I would expect no more than, say, a 50% improvement over the scalar code.


Source: https://habr.com/ru/post/1403532/
