Perceptron Training - The Most Important Feature

For one of the assignments in my AI class, we were tasked with writing a perceptron implementation of the Widrow-Hoff delta rule. I wrote the implementation in Java:

The following github link contains the project: https://github.com/dmcquillan314/CS440-Homework/tree/master/CS440-HW2-1
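For reference, here is a minimal sketch of what a Widrow-Hoff (delta rule) training loop can look like in Java. This is not the code from the repository above, just an illustration, and the class and method names are made up:

    // Minimal sketch of a Widrow-Hoff (delta rule) perceptron -- illustrative only,
    // not the code from the linked repository.
    public class DeltaRulePerceptron {

        private final double[] weights;   // one weight per feature
        private double bias;
        private final double learningRate;

        public DeltaRulePerceptron(int numFeatures, double learningRate) {
            this.weights = new double[numFeatures];
            this.learningRate = learningRate;
        }

        // Linear output w . x + b; the delta rule trains against this value.
        public double activation(double[] x) {
            double sum = bias;
            for (int i = 0; i < weights.length; i++) {
                sum += weights[i] * x[i];
            }
            return sum;
        }

        // Thresholded prediction: 1 or 0.
        public int classify(double[] x) {
            return activation(x) >= 0 ? 1 : 0;
        }

        // One epoch of delta-rule updates: w_i += eta * (target - output) * x_i.
        public void trainEpoch(double[][] inputs, int[] targets) {
            for (int n = 0; n < inputs.length; n++) {
                double error = targets[n] - activation(inputs[n]);
                for (int i = 0; i < weights.length; i++) {
                    weights[i] += learningRate * error * inputs[n][i];
                }
                bias += learningRate * error;
            }
        }

        public double[] getWeights() {
            return weights;
        }
    }

The delta rule updates against the linear output w . x + b rather than the thresholded prediction, which is the main difference from the classic perceptron update.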

The problem I am facing is not creating the perceptron itself; that part works fine.

In the project, after training the perceptron, I apply an unclassified dataset to it and then examine the classification produced for each input vector. This also works fine.

My problem is in determining which input feature is the most important.

For example, if the feature set in each input vector consisted of a color, a car model, and a car make, and we wanted to find out which of those features matters most for the classification, how would we do it?

My initial understanding led me to believe that I could compute the correlation coefficient between the values of each feature across the input vectors and the classifications that are produced. However, this turned out to be a false assumption.

Is there another way to find the most important feature?

EDIT

Sample weight vector:

(-752, 4771, 17714, 762, 6, 676, 3060, -2004, 5459, 9591, 299, 3832, 14963, 20912)

Example input vectors:

(55, 1, 2, 130, 262, 0, 0, 155, 0, 0, 1, 0, 3, 0)

(59, 1, 3, 126, 218, 1, 0, 134, 0, 2.2, 2, 1, 6, 1)

(45, 1, 2, 128, 308, 0, 2, 170, 0, 0, 1, 0, 3, 0)

(59, 1, 4, 110, 239, 0, 2, 142, 1, 1.2, 2, 1, 7, 1)

Whether you can read feature importance off the weights depends on the scale / range of each feature.

For example, suppose two of the features look like this across the examples:

[1,0,1,0], [0,1,0,1]

i.e. they only take the values 0 and 1. And suppose two other features look like this:

[0,145,0,132], [0,176,0,140]

Their ranges are completely different, so the weights the perceptron learns for them are not directly comparable: a small weight multiplied by large feature values can contribute as much to the weighted sum as a large weight multiplied by 0/1 values.

Take, for example, the feature [0,176,0,140]: its weight will tend to be small simply because its values are large, not because the feature is unimportant.

So rescale (normalize) all of the features to a common range before comparing the weights; only then do the weight magnitudes say something about feature importance.
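A minimal sketch of that rescaling step, assuming the examples are held in a double[][] with one row per example (FeatureScaling and minMaxNormalize are made-up names, not anything from the project):

    // Rescale every feature (column) to the range [0, 1] so that the learned
    // weight magnitudes become comparable across features.
    public final class FeatureScaling {

        private FeatureScaling() {}

        public static double[][] minMaxNormalize(double[][] data) {
            int rows = data.length;
            int cols = data[0].length;
            double[][] scaled = new double[rows][cols];

            for (int j = 0; j < cols; j++) {
                double min = Double.POSITIVE_INFINITY;
                double max = Double.NEGATIVE_INFINITY;
                for (double[] row : data) {
                    min = Math.min(min, row[j]);
                    max = Math.max(max, row[j]);
                }
                double range = max - min;
                for (int i = 0; i < rows; i++) {
                    // Constant columns map to 0 to avoid division by zero.
                    scaled[i][j] = (range == 0) ? 0.0 : (data[i][j] - min) / range;
                }
            }
            return scaled;
        }
    }

After normalizing, retrain the perceptron on the scaled data and only then compare the weights.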


It also depends on what exactly you mean by the "most important" feature f.

One direct way to measure it is to retrain the perceptron with feature f removed (or zeroed out) and see how much worse it classifies without it.

Do that once for every feature and compare each result with the accuracy of the full model: the feature whose removal causes the biggest drop is the most important one under this definition.

In your car example, for instance, you would train three extra perceptrons, each missing the color, the model, or the make, and see which omission hurts the most.
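A rough sketch of that ablation loop, reusing the hypothetical DeltaRulePerceptron sketch from earlier in this post (trainAndScore is likewise a made-up helper, and training-set accuracy is used only to keep the example short):

    // Knock out one feature at a time, retrain, and record the drop in accuracy.
    static double[] ablationImportance(double[][] inputs, int[] targets, int epochs) {
        int numFeatures = inputs[0].length;
        double baseline = trainAndScore(inputs, targets, epochs);
        double[] importance = new double[numFeatures];

        for (int f = 0; f < numFeatures; f++) {
            double[][] ablated = new double[inputs.length][];
            for (int n = 0; n < inputs.length; n++) {
                ablated[n] = inputs[n].clone();
                ablated[n][f] = 0.0;             // remove feature f
            }
            // The bigger the drop relative to the baseline, the more important f is.
            importance[f] = baseline - trainAndScore(ablated, targets, epochs);
        }
        return importance;
    }

    // Train a fresh perceptron and return its training-set accuracy.
    static double trainAndScore(double[][] inputs, int[] targets, int epochs) {
        DeltaRulePerceptron p = new DeltaRulePerceptron(inputs[0].length, 0.01);
        for (int e = 0; e < epochs; e++) {
            p.trainEpoch(inputs, targets);
        }
        int correct = 0;
        for (int n = 0; n < inputs.length; n++) {
            if (p.classify(inputs[n]) == targets[n]) {
                correct++;
            }
        }
        return (double) correct / inputs.length;
    }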

Another way is to look at the weights themselves. After training you have a weight vector

w = (w1, w2, ..., wn)

and, roughly speaking, the larger |wi| is, the more important the i-th feature. Why is that the case? Remember how the perceptron classifies: it takes the dot product of the n-dimensional input vector x with w and thresholds the result. The i-th component of x gets multiplied by wi, so the larger |wi| is, the more that component can move the result across the threshold.

Keep in mind, though, that this reasoning only holds if the features are on comparable scales (normalized); otherwise the raw weight magnitudes can be misleading.
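As a sketch, once the inputs are normalized and the perceptron is trained, picking the most important feature under this interpretation is just a matter of finding the largest absolute weight (again using the hypothetical getWeights() accessor from the sketch above):

    // Index of the feature whose weight has the largest absolute value.
    static int mostImportantFeature(double[] weights) {
        int best = 0;
        for (int i = 1; i < weights.length; i++) {
            if (Math.abs(weights[i]) > Math.abs(weights[best])) {
                best = i;
            }
        }
        return best;
    }

For example: int f = mostImportantFeature(perceptron.getWeights()); keep in mind this is only meaningful if the inputs were rescaled before training.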


Source: https://habr.com/ru/post/1526610/

