I have a really interesting problem that I have been stuck on for 3 hours now, and I just can't understand what is happening or why it does not work. I tried Google, but without any results.
I am writing a CUDA program, and I have this very simple piece of code:
__global__ void calcErrorOutputLayer_kernel(*arguments...*)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    float gradient;
    float derivation;

    derivation = pow((2 / (pow(euler, neuron_device[startIndex + idx].outputValue)
                         + pow(euler, -neuron_device[startIndex + idx].outputValue))), 2);
    gradient = (backVector_device[idx] - neuron_device[startIndex + idx].outputValue);
    gradient = gradient * derivation;
OK, so the gradient is calculated correctly, and so is derivation. But when execution reaches the line where these two variables should be multiplied, nothing happens (the value of gradient does not change), and on the next line the CUDA debugger tells me: "'derivation' has no value at the target location".
gradient * 2.0 works correctly and doubles the value of gradient.
Can anybody help me?