Intuition for the perceptron weight update rule

I am having trouble understanding the weight update rule for perceptrons:

w(t + 1) = w(t) + y(t) x(t).

Suppose we have a linearly separable dataset.

  • w is the vector of weights [w0, w1, w2, ...], where w0 is the bias term.
  • x is the vector of input features [x0, x1, x2, ...], where x0 is fixed at 1 to accommodate the bias.

At iteration t, where t = 0, 1, 2, ...:

  • w(t) is the weight vector at iteration t.
  • x(t) is a misclassified training example.
  • y(t) is the target output for x(t) (either -1 or 1).

Why does this update rule move the decision boundary in the right direction?
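
To make sure I am reading the notation correctly, here is a minimal sketch of one training step in Python; the function name, the misclassification check, and the toy numbers are my own, not part of the rule itself:

    import numpy as np

    def perceptron_step(w, x, y):
        """One step of w(t + 1) = w(t) + y(t) x(t), applied only when x is misclassified."""
        if np.sign(np.dot(w, x)) != y:   # current hypothesis disagrees with the label
            w = w + y * x                # the update rule in question
        return w

    # x0 is fixed at 1 so that w0 plays the role of the bias.
    w = np.array([0.0, 0.0, 0.0])        # [w0, w1, w2]
    x = np.array([1.0, 2.0, -1.0])       # [x0 = 1, x1, x2]
    y = 1                                # target label, either -1 or 1

    w = perceptron_step(w, x, y)
    print(w)                             # [ 1.  2. -1.]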


The boundary moves in the right direction because the update changes the score x(t) · w on the misclassified example toward its label. Start from the update rule:

w(t + 1) = w(t) + y(t) x(t),

and take the dot product of both sides with x(t):

x(t) · w(t + 1) = x(t) · w(t) + x(t) · (y(t) x(t)) = x(t) · w(t) + y(t) [x(t) · x(t)].


Note two things about this expression:

  • The classification of x(t) is determined by the sign of x(t) · w.
  • x(t) · x(t) ≥ 0.

So what happens to the score on x(t) after the update?

  • If x(t) is classified correctly, no update is made, so nothing changes.
  • If x(t) is misclassified as negative, then y(t) = 1. After the update, x(t) · w grows by x(t) · x(t) (a non-negative amount), so the score moves toward the positive side. The boundary has moved in the right direction for x(t).
  • Similarly, if x(t) is misclassified as positive, then y(t) = -1. After the update, x(t) · w shrinks by x(t) · x(t) (a non-negative amount), so the score moves toward the negative side. Again, the boundary has moved in the right direction for x(t).
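
To see this with concrete numbers, here is a quick check in Python; the particular weights and feature values are made up for illustration:

    import numpy as np

    x = np.array([1.0, 2.0, -1.0])   # x0 = 1, the rest arbitrary
    w = np.array([0.5, -1.0, 0.3])   # current weights (made-up values)
    y = 1                            # the example is labeled +1

    score_before = np.dot(x, w)      # -1.8, so x is misclassified as negative
    w_new = w + y * x                # w(t + 1) = w(t) + y(t) x(t)
    score_after = np.dot(x, w_new)   #  4.2

    # The score changed by exactly y * (x . x) = +6.0, i.e. toward the correct label.
    assert np.isclose(score_after, score_before + y * np.dot(x, x))
    print(score_before, score_after)

The same arithmetic with y(t) = -1 subtracts x(t) · x(t) from the score, pushing it toward the negative side instead.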

Source: https://habr.com/ru/post/1621839/

