I am having trouble understanding the weight update rule for perceptrons:
w(t+1) = w(t) + y(t) x(t).
Suppose we have a linearly separable dataset.
- w is the weight vector [w0, w1, w2, ...], where w0 is the bias.
- x is the input vector [x0, x1, x2, ...], where x0 is fixed at 1 to accommodate the bias.
At iteration t, where t = 0, 1, 2, ...:
- w(t) is the weight vector at iteration t.
- x(t) is a misclassified training example.
- y(t) is the target output for x(t) (either -1 or 1).
Why does this update rule move the decision boundary in the right direction?
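To show how I understand the rule, here is a minimal sketch of one update step in Python (the toy numbers and the misclassification check y * (w . x) <= 0 are my own illustration, not from a specific reference):

```python
import numpy as np

def perceptron_update(w, x, y):
    """One perceptron step: w(t+1) = w(t) + y(t) x(t)."""
    return w + y * x

# Toy example (my own numbers): x0 is fixed at 1 to absorb the bias w0.
w = np.array([0.0, 0.0, 0.0])   # w(t) = [w0, w1, w2]
x = np.array([1.0, 2.0, -1.0])  # x(t) = [x0=1, x1, x2]
y = 1.0                         # target output for x(t)

# x(t) is misclassified when sign(w . x) != y, i.e. y * (w . x) <= 0.
if y * np.dot(w, x) <= 0:
    w = perceptron_update(w, x, y)

print(w)  # -> [ 1.  2. -1.]
```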