Multilayer perceptron for OCR

I intend to use a multilayer perceptron (MLP) trained with backpropagation (one hidden layer, inputs fed as 8x8 bit matrices of black-and-white image pixels). The following questions arise:

  • What type of training should I use: batch or online?
  • How can I estimate the correct number of nodes in the hidden layer? I intend to handle the 26 letters of the English alphabet.
  • How can I stop the learning process to avoid overtraining?
  • (not quite related) Is there another NN that can perform better than an MLP? I know that MLPs get stuck in local minima, overfit, and so on, so what is the best (soft-computing-based) approach?

thanks

+3
2 answers

Most of these questions are things you simply have to experiment with to see what works best. That is the problem with ANNs: there is rarely a single "better" way to do anything. You need to find out what works for your specific problem. That said, here is some advice on your questions.

1) I prefer incremental (online) learning. I think it is important that the network weights are updated after each training pattern.
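To make the distinction concrete, here is a minimal sketch of online (per-pattern) backpropagation for a one-hidden-layer MLP on the 8x8 inputs described in the question. The function name `train_online`, the learning rate, the sigmoid activations, and the omission of bias terms are all illustrative assumptions, not part of the original answer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_online(X, Y, n_hidden=100, lr=0.1, epochs=50, seed=0):
    """Online backprop for a 1-hidden-layer MLP (biases omitted for brevity).

    X: (n_samples, 64) flattened 8x8 bit matrices; Y: (n_samples, 26) one-hot letters.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (X.shape[1], n_hidden))
    W2 = rng.normal(0.0, 0.1, (n_hidden, Y.shape[1]))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):      # present patterns in random order
            h = sigmoid(X[i] @ W1)             # forward pass: hidden layer
            o = sigmoid(h @ W2)                # forward pass: output layer
            d_o = (o - Y[i]) * o * (1 - o)     # output delta (squared error, sigmoid)
            d_h = (W2 @ d_o) * h * (1 - h)     # backpropagated hidden delta
            W2 -= lr * np.outer(h, d_o)        # weights updated after EACH pattern,
            W1 -= lr * np.outer(X[i], d_h)     # not accumulated over a batch
    return W1, W2
```

Batch training would instead sum the gradients over all patterns and apply one update per epoch; online updates like the above tend to escape shallow local minima more easily at the cost of noisier convergence.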

2) This is a difficult question. It depends on the complexity of your problem: how many input nodes, output nodes, and training patterns there are. For your problem, I would start at 100 hidden nodes and try ranges above and below 100 to see whether there is any improvement.
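A simple way to act on this advice is to sweep hidden-layer sizes around 100 and compare performance on held-out data. This sketch reuses `train_online` and `sigmoid` from above; the `accuracy` helper and the `X_train`/`Y_train`/`X_val`/`Y_val` arrays are assumed for illustration:

```python
def accuracy(W1, W2, X, Y):
    """Fraction of patterns whose strongest output unit is the true letter."""
    preds = sigmoid(sigmoid(X @ W1) @ W2).argmax(axis=1)
    return float((preds == Y.argmax(axis=1)).mean())

# Sweep hidden-layer sizes around 100 and compare on held-out data.
for n_hidden in (25, 50, 100, 150, 200):
    W1, W2 = train_online(X_train, Y_train, n_hidden=n_hidden)
    print(n_hidden, accuracy(W1, W2, X_val, Y_val))
```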

3) Hold out part of your data as a validation set that is never used for weight updates. Every few epochs (say, every 5), test the network on it: training error will keep falling, but at some point validation error will stop improving and begin to rise. Stop training there and keep the weights from the epoch with the best validation result; this "early stopping" is the standard way to avoid overtraining.
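A hedged sketch of that procedure, reusing `sigmoid`, `train_online`-style updates, and the `accuracy` helper from the previous sketches; the `check_every=5` and `patience` values are illustrative choices, not the answerer's:

```python
def train_early_stopping(X_tr, Y_tr, X_val, Y_val, n_hidden=100, lr=0.1,
                         max_epochs=500, check_every=5, patience=3, seed=0):
    """Online backprop that stops once validation accuracy stops improving."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (X_tr.shape[1], n_hidden))
    W2 = rng.normal(0.0, 0.1, (n_hidden, Y_tr.shape[1]))
    best_acc, best_W, stale = 0.0, (W1.copy(), W2.copy()), 0
    for epoch in range(1, max_epochs + 1):
        for i in rng.permutation(len(X_tr)):       # one online epoch
            h = sigmoid(X_tr[i] @ W1)
            o = sigmoid(h @ W2)
            d_o = (o - Y_tr[i]) * o * (1 - o)
            d_h = (W2 @ d_o) * h * (1 - h)
            W2 -= lr * np.outer(h, d_o)
            W1 -= lr * np.outer(X_tr[i], d_h)
        if epoch % check_every == 0:               # periodic validation check
            acc = accuracy(W1, W2, X_val, Y_val)
            if acc > best_acc:
                best_acc, best_W, stale = acc, (W1.copy(), W2.copy()), 0
            else:
                stale += 1                         # validation did not improve
                if stale >= patience:              # improvement has stalled: stop
                    break
    return best_W                                  # weights at best validation point
```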

4) For a 26-letter classification task, an MLP is a perfectly reasonable choice. Local minima are a real but usually manageable problem: adding a momentum term, or training several networks from different random initializations and keeping the best one, goes a long way. I would only look at other architectures if that fails.

+5

Regarding 3: the danger is overfitting. If you keep training until the ANN classifies the training set almost perfectly (approaching 100%), it ends up memorizing the training examples, noise included, instead of learning the structure that generalizes, and the ANN will then perform poorly on letters it has never seen.

This is very important: training error alone tells you almost nothing, so you must judge the ANN by its error on data it was not trained on.

If you do not have enough data to spare a separate test set, the standard technique for estimating generalization is k-fold cross validation.
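As a rough illustration of k-fold cross validation with the same toy MLP (the fold count, `kfold_accuracy` name, and reuse of the `train_online`/`accuracy` helpers above are assumptions for the sketch):

```python
def kfold_accuracy(X, Y, k=5, n_hidden=100, seed=0):
    """k-fold cross validation: train on k-1 folds, test on the held-out
    fold, and average the k test accuracies as a generalization estimate."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(X)), k)
    scores = []
    for f in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != f])
        W1, W2 = train_online(X[train], Y[train], n_hidden=n_hidden)
        scores.append(accuracy(W1, W2, X[folds[f]], Y[folds[f]]))
    return float(np.mean(scores))
```

Because every pattern serves as test data exactly once, the averaged score is a far less optimistic estimate than accuracy on the training set itself.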

+2

Source: https://habr.com/ru/post/1705509/

