Does SVM classification always provide a unique solution?

Linear classifiers applied to linearly separable data can have more than one boundary that separates the classes. That is why we want an SVM to select the boundary with the maximum margin (which minimizes the generalization error on unseen data).

Does SVM classification always give a unique solution (i.e., is it impossible to get two maximum-margin boundaries on any possible dataset)?

Does the answer differ between hard-margin SVM and soft-margin SVM?
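
For reference, the two formulations being compared, in standard textbook notation:

```latex
% Hard-margin SVM: feasible only when the data are linearly separable.
\min_{w,\,b}\ \tfrac{1}{2}\|w\|^2
\quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1, \qquad i = 1,\dots,n.

% Soft-margin SVM: slack variables \xi_i allow margin violations,
% penalized through the hyperparameter C > 0.
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0.
```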

+4
2 answers

Yes, both the soft- and hard-margin formulations of the standard SVM are convex optimization problems, and therefore have unique global optima. I suppose that if the problem is extremely large, approximate methods would be economical enough that you would use them instead of exact solvers, and then your numerical solution technique might not find the global optimum, simply because that is the trade-off it makes to reduce search time.

The typical approach to these is sequential minimal optimization: hold most variables fixed and optimize over a small subset of variables, then repeat with different variables over and over until you can no longer improve the objective function. Given that, I find it unlikely that anyone would solve these problems in a way that does not yield the global optimum. A sketch of the idea follows.
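
Here is a minimal sketch of that two-variable coordinate step (the "simplified SMO" variant, with a random choice of the second index instead of the heuristics that real solvers use; the function name and parameters are illustrative):

```python
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-4, max_passes=10, seed=0):
    """Simplified SMO for a soft-margin SVM with a linear kernel.

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    Returns the primal weight vector w and the bias b.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                    # Gram matrix of the linear kernel
    alpha = np.zeros(n)
    b, passes = 0.0, 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            # E_i = f(x_i) - y_i, the prediction error on example i
            E_i = (alpha * y) @ K[:, i] + b - y[i]
            # Only update pairs where example i violates the KKT conditions
            if (y[i] * E_i < -tol and alpha[i] < C) or (y[i] * E_i > tol and alpha[i] > 0):
                j = (i + 1 + rng.integers(n - 1)) % n   # random j != i
                E_j = (alpha * y) @ K[:, j] + b - y[j]
                a_i, a_j = alpha[i], alpha[j]
                # Box bounds that preserve the constraint sum(alpha * y) = 0
                if y[i] != y[j]:
                    L, H = max(0.0, a_j - a_i), min(C, C + a_j - a_i)
                else:
                    L, H = max(0.0, a_i + a_j - C), min(C, a_i + a_j)
                eta = 2 * K[i, j] - K[i, i] - K[j, j]   # curvature along the pair
                if L == H or eta >= 0:
                    continue
                alpha[j] = np.clip(a_j - y[j] * (E_i - E_j) / eta, L, H)
                if abs(alpha[j] - a_j) < 1e-7:
                    continue
                alpha[i] = a_i + y[i] * y[j] * (a_j - alpha[j])
                # Recompute the bias from a multiplier strictly inside the box
                b1 = b - E_i - y[i] * (alpha[i] - a_i) * K[i, i] - y[j] * (alpha[j] - a_j) * K[i, j]
                b2 = b - E_j - y[i] * (alpha[i] - a_i) * K[i, j] - y[j] * (alpha[j] - a_j) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X            # w = sum_i alpha_i * y_i * x_i (linear kernel only)
    return w, b
```

On small, well-behaved datasets this should reach the same (w, b) as an exact QP solver, up to the stopping tolerance, regardless of the random pair choices.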

Of course, the global optimum you find may not actually fit your data well; that depends on how well your model, the noisiness of the class labels, etc. reflect the data-generating process. So solving the optimization problem does not guarantee that you have found the absolutely right classifier or anything like that.

Here are some lecture notes on this that I found with a quick search: ( link )

Here is a more direct link regarding the convexity claims: ( link )

+5

For hard-margin classifiers without regularization, the SVM problem can be converted into a coercive quadratic programming problem with linear constraints (provided a solution exists, i.e., the margin is positive). Coercive quadratic programming problems with linear constraints have unique global minima, and simple optimization methods (for example, gradient descent or the perceptron algorithm) are guaranteed to converge to the global minimum. See for example

http://optimization-online.org/DB_FILE/2007/05/1662.pdf
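
As a quick empirical sanity check of that uniqueness claim, here is a sketch assuming scikit-learn is available (a very large C approximates the hard-margin case):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs -> linearly separable data
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

perm = rng.permutation(len(y))
clf_a = SVC(kernel="linear", C=1e6).fit(X, y)              # near hard-margin
clf_b = SVC(kernel="linear", C=1e6).fit(X[perm], y[perm])  # same data, reordered

# Both fits should recover the same hyperplane up to solver tolerance
print(clf_a.coef_, clf_a.intercept_)
print(clf_b.coef_, clf_b.intercept_)
```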

For soft-margin SVM and for SVM with regularization terms, I believe there are unique global minima and that the conventional methods converge to the global minimum, but I am not aware of proofs that cover all the possibilities.
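
For context, the standard soft-margin dual (textbook notation, not from the original answer) hints at why a one-line strict-convexity argument is not automatic here:

```latex
% Soft-margin dual: maximize over the Lagrange multipliers \alpha.
\max_{\alpha} \ \sum_{i=1}^{n} \alpha_i
  - \tfrac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
    \alpha_i \alpha_j \, y_i y_j \, x_i^\top x_j
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \quad \sum_{i=1}^{n} \alpha_i y_i = 0.
```

The matrix with entries y_i y_j x_i^T x_j is positive semidefinite but not necessarily positive definite, so the dual objective is concave but not always strictly concave: the weight vector w = sum_i alpha_i y_i x_i is still unique (the primal objective is strictly convex in w), but the optimal alpha need not be.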

+3
