Does fuzzy logic really improve simple machine learning algorithms?

I've been reading up on fuzzy logic, and I just don't see how it would improve machine learning algorithms in most cases (it seems to get applied to them relatively often).

Take, for example, k-nearest neighbors. If you have a bunch of attributes like color: [red, blue, green, orange], temperature: [real number], shape: [round, square, triangle], you can't really fuzzify any of them except the real-valued attribute (please correct me if I'm wrong), and I don't see how that can improve anything over simply bucketing things together.
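
To make the setup concrete, here's a minimal sketch of the crisp distance such a kNN might use over these attributes; the attribute values and the `temp_scale` normalizer are made up, and the point is that the categorical attributes can only match or mismatch:

```python
# Plain (crisp) kNN distance over the mixed attributes from the question.
# "temp_scale" is a made-up normalizer for the real-valued attribute.
def distance(a, b, temp_scale=50.0):
    d = 0.0
    d += a["color"] != b["color"]    # categorical: 0 or 1, nothing in between
    d += a["shape"] != b["shape"]    # categorical: 0 or 1, nothing in between
    d += abs(a["temperature"] - b["temperature"]) / temp_scale
    return d

x = {"color": "red", "shape": "round", "temperature": 20.0}
y = {"color": "orange", "shape": "round", "temperature": 25.0}
print(distance(x, y))  # 1.1 -- the near-miss orange/red costs as much as any mismatch
```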

How can fuzzy logic actually be used to improve machine learning? The toy examples you find on most websites don't seem to apply most of the time.

+3
6 answers

Fuzzy logic is appropriate when the variables have a natural shape-based interpretation. For example, [very few, few, many, very many] has a nice interpretation as overlapping trapezoids over the underlying values.

Variables such as color may not. Fuzzy variables denote degrees of membership, and that is when they become useful.
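
A minimal sketch of what such overlapping trapezoidal terms could look like; all breakpoints below are invented for illustration:

```python
def trapezoid(x, a, b, c, d):
    """Membership 0 outside [a, d], 1 on [b, c], linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Invented, overlapping terms for a count-like variable.
terms = {
    "very few":  (-1, 0, 5, 15),
    "few":       (5, 15, 25, 40),
    "many":      (25, 40, 60, 80),
    "very many": (60, 80, 10**6, 10**6 + 1),
}

count = 30
print({name: round(trapezoid(count, *p), 2) for name, p in terms.items()})
# {'very few': 0.0, 'few': 0.67, 'many': 0.33, 'very many': 0.0}
```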

+6

For a broader treatment, see the article on fuzzy logic in a 2002 issue of "PC AI" magazine.

0

[round, square, triangle] does not have to be a crisp partition (i.e., the categories need not be mutually exclusive). A real object's shape can belong to several of them at once, to different degrees (think of a square with rounded corners). Fuzzy membership grades capture exactly that: instead of forcing each object into one bucket, you record how well it fits every category, and the learner can use those grades.

0

Fuzzy logic made its name in control: a fuzzified controller maps sensor inputs to control outputs through linguistic rules, and every input value belongs to each linguistic term to a degree between 0 and 1. A rule such as "if the temperature is hot, increase the fan speed" then fires only partially, in proportion to those membership degrees, and the partial conclusions of all rules are blended into a single crisp output. The same trick applies to an attribute like temperature with terms [cold, warm, hot]: rather than assigning each reading to one bucket, you keep its graded membership in all three, and that extra information is what downstream algorithms can exploit.
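
A toy sketch of that idea, with invented breakpoints and rule outputs: each reading gets a degree in [0, 1] for every term in [cold, warm, hot], and the controller blends rule conclusions by a membership-weighted average:

```python
def tri(x, a, b, c):
    """Triangular membership peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Invented fuzzy partition of temperature into [cold, warm, hot].
def temperature_terms(t):
    return {
        "cold": tri(t, -10, 0, 18),
        "warm": tri(t, 10, 20, 30),
        "hot":  tri(t, 24, 35, 50),
    }

m = temperature_terms(26)          # {'cold': 0.0, 'warm': 0.4, 'hot': ~0.18}
heater = {"cold": 1.0, "warm": 0.4, "hot": 0.0}   # made-up rule outputs
# Membership-weighted average (a simple Sugeno-style defuzzification):
output = sum(m[k] * heater[k] for k in m) / sum(m.values())
print(round(output, 2))            # ~0.28 -- a partial, not bang-bang, response
```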

0

, . "shape" " ", "", "", "" "". "" "", . "color", "" , RGB . , , , .

0

Couldn't you simply map the discrete sets onto continuous ones and get the same effect as fuzziness, while remaining free to use all the machinery of probability theory?

For example, size: ['small', 'medium', 'big'] ==> [0, 1]
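
A sketch of that mapping with invented breakpoints; note that the resulting grades need not sum to 1, which is one formal difference from a probability distribution over the three labels:

```python
def tri(x, a, b, c):
    """Triangular membership peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Overlapping terms over the embedded [0, 1] scale (breakpoints invented).
def size_memberships(x):
    return {
        "small":  tri(x, -0.5, 0.0, 0.6),
        "medium": tri(x, 0.2, 0.5, 0.8),
        "big":    tri(x, 0.4, 1.0, 1.5),
    }

m = size_memberships(0.45)
print(m, "sum =", round(sum(m.values()), 2))
# Membership grades need not sum to 1 -- a probability distribution must.
```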

0