First of all: your dataset seems very small for any practical purpose. Having said that, let's see what we can do.
SVMs are popular in part because they cope well with small or sparse datasets; whether that helps in your project is hard to say. They build the separating hyperplane from a handful (or even a single one) of support vectors, and often outperform neural nets when the training set is small. A priori, they may not be your worst choice.
Rebalancing your data will not help much when using an SVM. An SVM is built around the concept of support vectors, which are essentially the edge cases of each class that determine what is in the class and what is not. Oversampling will not create new support vectors (I assume you are already using the training set as the test set).
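To make the support-vector idea concrete, here is a minimal sketch (assuming scikit-learn and a toy imbalanced dataset I made up for illustration) that fits an SVM and inspects which training points actually became support vectors:

```python
# Minimal sketch: fit an SVM on a small imbalanced toy dataset and
# inspect which training points ended up as support vectors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (40, 2)),   # majority class
               rng.normal(2, 1, (8, 2))])   # small minority class
y = np.array([0] * 40 + [1] * 8)

clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print("indices of support vectors:", clf.support_)
print("support vectors per class:", clf.n_support_)
# Duplicating an existing point only re-weights it in the soft-margin
# objective; it does not add genuinely new boundary information.
```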
Plain oversampling in this scenario will also not give you any new information about certainty, beyond artifacts created by the oversampling itself, since the new instances are exact copies and the underlying distribution does not change. You may get somewhere with SMOTE (Synthetic Minority Over-sampling Technique): it creates synthetic instances by interpolating between the ones you already have. In theory this gives you new instances that are not exact copies of existing ones, and may thus lead to a slightly different classifier. Note, however, that by definition all of these synthetic examples lie in between the original examples in your sample space; that does not mean they also lie in between them in the space the SVM actually separates (the kernel-induced feature space), so the effect on training may not be what you hope for.
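A sketch of how this could look in practice, assuming the imbalanced-learn package (not mentioned in the original answer) and the same toy data as above:

```python
# Sketch: SMOTE synthesises minority-class points by interpolating
# between existing minority neighbours, then the SVM is trained on the
# augmented set.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (40, 2)),
               rng.normal(2, 1, (8, 2))])
y = np.array([0] * 40 + [1] * 8)

# k_neighbors must be smaller than the number of minority samples
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("class counts after SMOTE:", np.bincount(y_res))

clf = SVC(kernel="rbf", C=1.0).fit(X_res, y_res)
```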
Finally, you can estimate confidence from the distance to the hyperplane. Please see: https://stats.stackexchange.com/questions/55072/svm-confidence-according-to-distance-from-hyperline
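A minimal sketch of that idea, again assuming scikit-learn and the same toy data: `decision_function` returns the signed decision value, which for a linear SVM is proportional to the distance from the hyperplane (it is not a calibrated probability).

```python
# Sketch: use the signed decision value as a rough confidence score;
# larger magnitude means further from the separating hyperplane.
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2, 1, (8, 2))])
y = np.array([0] * 40 + [1] * 8)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

X_new = np.array([[0.0, 0.0], [1.0, 1.0], [2.5, 2.0]])
for x, score, pred in zip(X_new, clf.decision_function(X_new), clf.predict(X_new)):
    print(f"x={x}, class={pred}, signed decision value={score:.3f}")

# For calibrated probabilities, SVC(probability=True) applies Platt scaling.
```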