Backpropagation problems

I have a couple of questions on how to code the backpropagation algorithm of neural networks:

The topology of my networks is the input layer, the hidden layer, and the output layer. Both the hidden layer and the output layer have sigmoid functions.

  1. First of all, should I use bias? Where should I connect the offset to my network? Should I put one offset unit per layer in both the hidden and the output layer? What about the input layer?
  2. In this link, they define the last delta as I / O and propagate the deltas back as shown. They hold a table to place all deltas until the errors actually propagate in the forward direction. Is this a departure from the standard backpropagation algorithm?alt text
  3. ?
  4. - , Resilient Propagation ?

: . d f1 (e)/de, , f1 (e) * [1- f1 (e)], ? alt text

+3
2

... ? ? - ? $0,02:

  • , . NN , . , .

  • backprop, .

  • , . . , . , , , .

  • backprop ( RProp MATLAB) , .

, , , . ...

+2
  • . , NN , . .

  • . Backpropagation , .

  • . . , BP, , 500- reset .

  • ..... RP.

+3

Source: https://habr.com/ru/post/1725466/


All Articles