What is the threshold point in the perceptron?

I'm having trouble understanding what the threshold actually does in a single-layer perceptron. The data usually gets separated regardless of the threshold value, and a lower threshold seems to divide the data more evenly; is that what the threshold is for?

3 answers

In fact, you only need to set the threshold explicitly when you are not using a bias. Otherwise, the threshold is 0.

Remember that a single neuron divides your input space with a hyperplane. OK?

Now imagine a neuron with two inputs X=[x1, x2], two weights W=[w1, w2], and a threshold TH. The equation below shows how this neuron works:

 x1.w1 + x2.w2 = TH 

which is equivalent to:

 x1.w1 + x2.w2 - 1.TH = 0 

That is, this is the equation of the hyperplane that divides your input space.

Note that this neuron only works if you set the threshold by hand. The solution is to turn TH into just another weight, so:

 x1.w1 + x2.w2 - 1.w0 = 0 

Here the term 1.w0 is your BIAS. You can still draw a plane in your input space without manually setting a threshold (i.e., the threshold is always 0). And if you do set the threshold to some other value, the weights simply adapt to satisfy the equation; in other words, the weights (INCLUDING THE BIAS) absorb the effect of the threshold.
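
To make this concrete, here is a minimal Python sketch showing that an explicit threshold and a bias weight w0 on a constant input of 1 give the same decision. The weight, input, and threshold values here are made up purely for illustration:

    # Hypothetical weights, input, and threshold, chosen only for illustration.
    w1, w2 = 0.4, -0.7
    x1, x2 = 1.0, 2.0
    TH = 0.3

    # Neuron with an explicit threshold: fire if x1.w1 + x2.w2 >= TH
    fires_with_threshold = (x1 * w1 + x2 * w2) >= TH

    # Same neuron with the threshold folded into a bias weight w0 on a
    # constant input of 1: fire if x1.w1 + x2.w2 - 1.w0 >= 0
    w0 = TH
    fires_with_bias = (x1 * w1 + x2 * w2 - 1.0 * w0) >= 0.0

    print(fires_with_threshold, fires_with_bias)  # always the same decision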


Each node computes the sum of the products of its weights and inputs; if this value is above a certain threshold (usually 0), the neuron fires and takes the activated value (usually 1), otherwise it takes the deactivated value (usually -1). Neurons with this activation function are also called artificial neurons or linear threshold units.
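
As a sketch of that rule, a linear threshold unit could be written like this in Python; the weight, input, and bias values are arbitrary and only meant to illustrate the 1 / -1 activation:

    def linear_threshold_unit(weights, inputs, bias, threshold=0.0):
        """Return 1 (activated) if the weighted sum plus bias exceeds the
        threshold, otherwise -1 (deactivated)."""
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if s > threshold else -1

    # Arbitrary example values:
    print(linear_threshold_unit([0.5, -0.2], [1.0, 1.0], bias=0.1))  # prints 1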


I think I understand it now, with the help of Daoka's answer. I just wanted to add some information for other people who find this question.

The single-layer perceptron's separator equation:

 Σ w_j x_j + bias = threshold

This means that if

 Σ w_j x_j + bias > threshold, the input is classified into one category, and if

 Σ w_j x_j + bias < threshold, it is classified into the other.

Bias and threshold serve the same purpose: translating the separating line (see "The role of bias in neural networks"). However, because they sit on opposite sides of the equation, they are "negatively proportional" to each other.

For example, a bias of 0 with a threshold of 0.5 is equivalent to a bias of -0.5 with a threshold of 0.
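
A quick Python check of that equivalence; the weights and inputs below are arbitrary, and only the bias/threshold pairing comes from the example above:

    # Arbitrary weights and inputs; only the bias/threshold values mirror the example.
    weights = [0.8, -0.3]
    inputs = [1.5, 2.0]
    s = sum(w * x for w, x in zip(weights, inputs))

    case_a = (s + 0.0) > 0.5   # bias 0,    threshold 0.5
    case_b = (s - 0.5) > 0.0   # bias -0.5, threshold 0

    print(case_a == case_b)    # True, whatever the weights and inputs are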


Source: https://habr.com/ru/post/891857/

