I have read about the backpropagation algorithm and understand that, to compute the gradients of the network parameters, the error must be propagated backward from the output layer to the first hidden layer. However, when I use a local contrast normalization layer in a convolutional neural network, it is unclear to me how to backpropagate the error through that layer.
A description of the local contrast normalization layer can be found here: http://code.google.com/p/cuda-convnet/wiki/LayerParams#Data_layer
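For concreteness, here is a minimal sketch of the forward pass of such a layer: subtract the local mean and divide by the local standard deviation within a small spatial window. This is a simplified illustration of local contrast normalization, not cuda-convnet's exact `cnorm` implementation; the window size and epsilon are assumptions.

```python
import numpy as np

def local_contrast_normalize(x, size=3, eps=1e-6):
    """Simplified local contrast normalization on a single 2D feature map.

    For each pixel, subtract the mean and divide by the std of the
    values in a (size x size) neighborhood (clipped at the borders).
    A sketch only -- not the exact cuda-convnet 'cnorm' layer.
    """
    h, w = x.shape
    r = size // 2
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            win = x[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            out[i, j] = (x[i, j] - win.mean()) / (win.std() + eps)
    return out
```

Because both the mean and the standard deviation depend on every input pixel in the window, backpropagating through this layer means applying the chain rule to all of these dependencies, which is the part I find unclear.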