Encog: BasicNetwork: Online Learning Without Pre-Dataset

I am trying to use the Encog library as a function approximator for a reinforcement learning problem. More precisely, I am trying to train a multilayer perceptron (BasicNetwork). Since my agent explores the world according to whatever RL algorithm it uses, I cannot pre-create a BasicNeuralDataSet as shown in the XOR example. I should probably use the pause() and resume() functions, but since I cannot find any documentation or examples for them, I am somewhat lost as to how to use them (and I am not even sure they work in my version, judging by the answer to the question in the second link).

I am using Java and encog-core-2.5.3.jar. My current approach is as follows:

    BasicNetwork network = new BasicNetwork();
    network.addLayer(new BasicLayer(null, true, 2));
    network.addLayer(new BasicLayer(new ActivationTANH(), true, 4));
    network.addLayer(new BasicLayer(new ActivationTANH(), true, 1));
    network.getStructure().finalizeStructure();
    network.reset();

    TrainingContinuation cont = null;
    double error = 0;
    do {
        // pick one random sample and wrap it in a one-element training set
        int rnd = random.nextInt(trainInputs.length);
        NeuralDataSet trainingSet = new BasicNeuralDataSet(
                new double[][] { trainInputs[rnd] },
                new double[][] { trainOutputs[rnd] });
        // train the neural network for a single iteration
        Backpropagation train = new Backpropagation(network, trainingSet);
        // restore the trainer state saved from the previous iteration, if any
        if (cont != null) {
            train.resume(cont);
        }
        train.iteration();
        cont = train.pause();
        error = train.getError();
    } while (error > 0.01);

This is obviously a minimal example in which I just draw random data points from a toy problem (XOR). The MLP does not converge: logging shows completely random errors, so I assume the trainer is being reset in some way and my pause/resume approach is implemented incorrectly.
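For completeness, the toy XOR data behind the trainInputs/trainOutputs arrays used in the snippet might look as follows (the array names come from the question; the 0/1 target encoding is an assumption, matching the standard Encog XOR example):

```java
// Hypothetical XOR toy data for the trainInputs/trainOutputs arrays
// referenced in the snippet above.
public class XorData {
    public static final double[][] trainInputs = {
        { 0.0, 0.0 }, { 0.0, 1.0 }, { 1.0, 0.0 }, { 1.0, 1.0 }
    };
    public static final double[][] trainOutputs = {
        { 0.0 }, { 1.0 }, { 1.0 }, { 0.0 }
    };

    public static void main(String[] args) {
        // each input row pairs with exactly one target row
        System.out.println(trainInputs.length == trainOutputs.length); // prints "true"
    }
}
```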


PS: This is an exact copy of this question. Since Jeff Heaton seems to be the only person answering questions there, you either wait weeks for an answer or get no answer at all. I hope cross-posting here is not frowned upon.


PPS: Since I am not tied to Encog and can use any framework, I would also appreciate sample code that fits my requirements. So far I have tried Weka and Neuroph, but neither seems to support true online training, where you can initiate training whenever a new sample becomes available (and also classify samples at any time).

1 answer

Sorry for the slow reply. Basically, it sounds like you are asking for online training: you present a single case and the neural network's weights are updated immediately, so there is no need to build an entire training set in advance; you simply train as samples arrive. Unfortunately, Encog does not support this well. It has become a frequently asked question, and I plan to add it in the next release.

For now, the only way to do this is to create a training set with a single element and then train for one iteration.
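A minimal sketch of that workaround against the Encog 2.x API (the class names and constructors are those already used in the question; the OnlineTrainer wrapper and trainOnSample method are my own illustrative names, and the sketch assumes encog-core-2.5.3.jar on the classpath):

```java
import org.encog.neural.data.NeuralDataSet;
import org.encog.neural.data.basic.BasicNeuralDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.training.propagation.back.Backpropagation;

// Illustrative wrapper: whenever the agent observes a new sample,
// wrap it in a one-element training set and run one backprop iteration.
public class OnlineTrainer {
    private final BasicNetwork network;

    public OnlineTrainer(BasicNetwork network) {
        this.network = network;
    }

    // Returns the error reported by the trainer for this single sample.
    public double trainOnSample(double[] input, double[] ideal) {
        NeuralDataSet oneSample = new BasicNeuralDataSet(
                new double[][] { input },
                new double[][] { ideal });
        Backpropagation train = new Backpropagation(network, oneSample);
        train.iteration();
        return train.getError();
    }
}
```

Note that creating a fresh Backpropagation trainer per sample discards any accumulated trainer state (e.g. momentum terms) between calls, which is exactly the limitation described above; the weights themselves live in the BasicNetwork and do persist across calls.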

EDIT: Online training has been added as of Encog 3.2. See this FAQ for more information:

http://www.heatonresearch.com/faq/5/3


Source: https://habr.com/ru/post/1442815/
