After reading some articles about neural networks (backpropagation), I tried to write a simple neural network of my own.
I'm trying to solve XOR with it. My problem is the training: if I train the network on only one example, say 1,1,0 (as in input1, input2, targetOutput), then after roughly 500 training passes the network's response is about 0.05. But if I train on more than one example (say 2 different ones, or all 4 possibilities), the output tends toward 0.5 :( I searched Google for this symptom and found nothing, so I'll give as many details as I can to help find what is wrong (a condensed code sketch of these steps follows the list):
- I used networks with 2-2-1 and 2-4-1 layouts (input layer, hidden layer, output layer).
- the output of each neuron is determined by:

    double input = 0.0;
    for (int n = 0; n < layers[i].Count; n++)
        input += layers[i][n].Output * weights[n];

where "i" is the current layer and weights holds the weights coming from the previous layer.
- the error of the last layer (the output layer) is determined by:

    value * (1 - value) * (targetvalue - value);

where "value" is the neuron's output and "targetvalue" is the target result for the current neuron.
- the error for the remaining neurons is determined by:

    foreach neuron in nextLayer
        sum += neuron.value * currentNeuron.weights[neuron];
- all the weights in the network are adjusted by this formula (weight from neuron → neuron2):

    weight += LearnRate * neuron.myvalue * neuron2.error;
where LearnRate is the learning rate (0.25 in my network).
- the bias of each neuron is adjusted as follows:

    bias += LearnRate * neuron.myerror * neuron.Bias;

where Bias is a constant value of 1.
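
To make these steps concrete, here is a minimal, self-contained C# sketch of the textbook backpropagation pass I am trying to follow, for a 2-4-1 sigmoid network trained on all four XOR patterns. The class and variable names, the random initialization and the epoch count are purely illustrative, not my real code, and the sigmoid activation is assumed (it is what the value * (1 - value) term in the error formulas corresponds to; my output step above only shows the weighted sum). Note that the textbook hidden-layer error sums the next layer's errors (not values) and multiplies by value * (1 - value):

    using System;

    // Minimal 2-4-1 sigmoid network trained with textbook backpropagation.
    // All names and constants here are illustrative; this is not my real code.
    class XorSketch
    {
        const int Inputs = 2, Hidden = 4;
        const double LearnRate = 0.25;                     // same learning rate as my network

        static double[,] w1 = new double[Hidden, Inputs];  // input  -> hidden weights
        static double[]  b1 = new double[Hidden];          // hidden biases
        static double[]  w2 = new double[Hidden];          // hidden -> output weights
        static double    b2;                               // output bias

        static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

        static void Main()
        {
            var rnd = new Random(0);
            for (int h = 0; h < Hidden; h++)
            {
                b1[h] = rnd.NextDouble() - 0.5;
                w2[h] = rnd.NextDouble() - 0.5;
                for (int i = 0; i < Inputs; i++) w1[h, i] = rnd.NextDouble() - 0.5;
            }
            b2 = rnd.NextDouble() - 0.5;

            double[][] x = { new[] { 0.0, 0.0 }, new[] { 0.0, 1.0 }, new[] { 1.0, 0.0 }, new[] { 1.0, 1.0 } };
            double[]   t = { 0, 1, 1, 0 };

            for (int epoch = 0; epoch < 10000; epoch++)    // many passes over all four patterns
                for (int s = 0; s < x.Length; s++)
                    Train(x[s], t[s]);

            for (int s = 0; s < x.Length; s++)
                Console.WriteLine($"{x[s][0]} {x[s][1]} -> {Forward(x[s], out _):F3}");
        }

        // Forward pass: weighted sum of the previous layer's outputs plus bias, then sigmoid.
        static double Forward(double[] input, out double[] hidden)
        {
            hidden = new double[Hidden];
            for (int h = 0; h < Hidden; h++)
            {
                double sum = b1[h];
                for (int i = 0; i < Inputs; i++) sum += input[i] * w1[h, i];
                hidden[h] = Sigmoid(sum);
            }
            double outSum = b2;
            for (int h = 0; h < Hidden; h++) outSum += hidden[h] * w2[h];
            return Sigmoid(outSum);
        }

        // One online training step for a single (input, target) pair.
        static void Train(double[] input, double target)
        {
            double output = Forward(input, out double[] hidden);

            // Output-layer error: value * (1 - value) * (target - value).
            double outErr = output * (1 - output) * (target - output);

            // Hidden-layer error: value * (1 - value) * sum of (downstream error * connecting weight).
            double[] hidErr = new double[Hidden];
            for (int h = 0; h < Hidden; h++)
                hidErr[h] = hidden[h] * (1 - hidden[h]) * (outErr * w2[h]);

            // Weight update: weight += LearnRate * (sending neuron's output) * (receiving neuron's error).
            for (int h = 0; h < Hidden; h++)
            {
                w2[h] += LearnRate * hidden[h] * outErr;
                for (int i = 0; i < Inputs; i++)
                    w1[h, i] += LearnRate * input[i] * hidErr[h];
            }

            // Bias update: bias += LearnRate * error * 1 (the bias input is the constant 1).
            b2 += LearnRate * outErr;
            for (int h = 0; h < Hidden; h++) b1[h] += LearnRate * hidErr[h];
        }
    }

Compiling and running this prints the network's output for each of the four XOR patterns after training.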
That's about everything I can detail. As I said, the output tends toward 0.5 whenever I train with more than one example :(
Thank you so much for your help ^_^