ANN: Recursive backpropagation

I am trying to implement backpropagation with recursion for academic purposes, but it seems I made a mistake somewhere. The network trains for a while, but then it either stops learning altogether or learns only the second training pattern.

Please let me know where I went wrong. (This is JavaScript syntax.) Note: errors are reset to null before each training cycle.

```javascript
this.backpropagate = function (oAnn, aTargetOutput, nLearningRate) {
    nLearningRate = nLearningRate || 1;
    var oNode, sNodeId, n = 0;

    // Seed the output layer with the error (target - actual).
    for (sNodeId in oAnn.getOutputGroup().getNodes()) {
        oNode = oAnn.getOutputGroup().getNodes()[sNodeId];
        oNode.setError(aTargetOutput[n] - oNode.getOutputValue());
        n++;
    }

    // Start the recursion from the input nodes; errors are pulled
    // back from the output layer as the recursion unwinds.
    for (sNodeId in oAnn.getInputGroup().getNodes()) {
        this.backpropagateNode(oAnn, oAnn.getInputGroup().getNodes()[sNodeId], nLearningRate);
    }
};

this.backpropagateNode = function (oAnn, oNode, nLearningRate) {
    var nError = oNode.getError(),
        oOutputNodes, oConn, nWeight, nOutputError,
        nDerivative = oNode.getOutputValue() * (1 - oNode.getOutputValue()), // derivative of the sigmoid activation function
        nInputValue = oNode.getInputValue(),
        n;

    if (nError === null /* don't process the same node twice */ && oNode.hasOutputs()) {
        // Avoid multiplying the whole update away when derivative or input is 0.
        nDerivative = nDerivative || 0.000000000000001;
        nInputValue = nInputValue || 0.000000000000001;

        nError = 0;
        oOutputNodes = oNode.getOutputNodes();

        for (n = 0; n < oOutputNodes.length; n++) {
            nOutputError = this.backpropagateNode(oAnn, oOutputNodes[n], nLearningRate);

            oConn = oAnn.getConnection(oNode, oOutputNodes[n]);
            nWeight = oConn.getWeight();
            oConn.setWeight(nWeight + nLearningRate * nOutputError * nDerivative * nInputValue);

            // Accumulate this node's error using the pre-update weight.
            nError += nOutputError * nWeight;
        }

        oNode.setError(nError);
    }

    return oNode.getError();
};
```
1 answer

Solved it. Apparently, low-dimensional networks often get stuck in local minima. This is easy to understand once you realize that larger networks are less likely to settle into any particular minimum, even a global one.

Implementing momentum that increases with each iteration carries the network past most of the minima. On top of that, reinitializing the weights to random values (-0.5 to 0.5) and running several training sessions eventually gets past all of them.
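A minimal sketch of what a growing-momentum update could look like (the growth schedule, the 0.95 cap, and all names here are my own assumptions, not taken from the original code):

```javascript
// Momentum-based weight update: each connection keeps its own velocity,
// so gradients that repeatedly point the same way build up speed and
// can carry the search past shallow local minima.
function MomentumTrainer(nLearningRate, nInitialMomentum, nMomentumGrowth) {
    this.nLearningRate = nLearningRate;
    this.nMomentum = nInitialMomentum;      // grows every iteration (assumed schedule)
    this.nMomentumGrowth = nMomentumGrowth; // additive growth per iteration
    this.oVelocities = {};                  // one velocity per connection id
}

MomentumTrainer.prototype.updateWeight = function (sConnId, nWeight, nGradient) {
    var nVelocity = this.oVelocities[sConnId] || 0;
    // Keep a fraction of the previous step, add the fresh gradient step.
    nVelocity = this.nMomentum * nVelocity + this.nLearningRate * nGradient;
    this.oVelocities[sConnId] = nVelocity;
    return nWeight + nVelocity;
};

MomentumTrainer.prototype.endIteration = function () {
    // Increase momentum each iteration, capped below 1 to stay stable.
    this.nMomentum = Math.min(this.nMomentum + this.nMomentumGrowth, 0.95);
};

// Random reinitialization in [-0.5, 0.5) for restarting a stuck network.
function randomWeight() {
    return Math.random() - 0.5;
}
```

Calling `updateWeight` for every connection during backpropagation and `endIteration` once per training pass reproduces the "momentum that increases with each iteration" idea described above.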

I am pleased to report that my network now trains successfully in 100% of cases, as long as the data is classifiable.


Source: https://habr.com/ru/post/948976/
