I know this was asked a while ago, but I would still like to answer, as you may find it helpful.
As already mentioned, you might consider using different weights for the minority class or different penalties for misclassification. However, there is a smarter way to deal with imbalanced data sets.
You can use SMOTE (Synthetic Minority Over-sampling Technique) to generate synthetic data for the minority class. It is a simple algorithm that handles many imbalanced data sets quite well.
In each iteration, SMOTE takes a minority-class instance and one of its nearest minority-class neighbours, and adds an artificial example of the same class somewhere on the line segment between them. The algorithm keeps augmenting the data set with synthetic samples until the two classes are balanced or some other stopping criterion is met (for example, a fixed number of added examples).
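To make the interpolation step concrete for a simple data set (of any dimension, e.g. a 2D feature space), here is a minimal NumPy sketch; the function name `smote_sketch` and its exact signature are my own, not from any standard library:

```python
import numpy as np

def smote_sketch(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic samples from the minority-class rows X_min
    by interpolating between a random instance and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)                      # pick a random minority instance
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # its k nearest neighbours (skip itself)
        j = rng.choice(neighbours)               # pick one neighbour at random
        lam = rng.random()                       # random position on the segment
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)
```

Calling `smote_sketch(X_min, 100)` returns 100 new points, each lying between a real minority instance and one of its neighbours, so the synthetic data stays inside the region the minority class already occupies.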
Weighting the minority class is a special case of this idea: when you assign weight $w_i$ to instance $i$, you are effectively adding $w_i - 1$ extra copies of instance $i$!
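As a quick sanity check of that equivalence, here is a sketch using scikit-learn's `SVC`, which accepts per-sample weights through the `sample_weight` argument of `fit` (the toy points below are purely illustrative):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
y = np.array([0, 0, 0, 1])

# Weight the single minority instance by w = 3 ...
weighted = SVC(kernel="linear").fit(X, y, sample_weight=[1, 1, 1, 3])

# ... which matches adding w - 1 = 2 extra copies of it.
X_dup = np.vstack([X, X[3], X[3]])
y_dup = np.append(y, [1, 1])
duplicated = SVC(kernel="linear").fit(X_dup, y_dup)

print(weighted.decision_function(X))
print(duplicated.decision_function(X))  # should agree up to solver tolerance
```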

What you need to do is augment the original data set with the samples created by this algorithm and train an SVM on the new data set. You can find many implementations online in different languages, such as Python and Matlab.
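For example, in Python the third-party imbalanced-learn package (`pip install imbalanced-learn`, not part of the original answer) provides a SMOTE implementation that plugs directly into scikit-learn; a sketch of the augment-then-train workflow on toy data:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy imbalanced data: roughly 95% majority / 5% minority
X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)

# Oversample the minority class until both classes are balanced
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

clf = SVC(kernel="rbf").fit(X_res, y_res)  # train the SVM on the augmented set
```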
There are several extensions of this algorithm; I can point you to more material if you are interested.
To evaluate the classifier, you need to split the data set into training and test sets, add the synthetic instances to the training set only (do NOT add any to the test set), train the model on the training set, and finally evaluate it on the test set. If you evaluate on generated instances, you will get biased (and absurdly high) estimates of accuracy and recall.
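Putting the whole protocol together, here is a sketch of the correct order of operations (again assuming scikit-learn and imbalanced-learn; the toy data is purely illustrative):

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)

# 1. Split first, so the test set stays purely real
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# 2. Generate synthetic instances from the training set ONLY
X_tr_res, y_tr_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# 3. Train on the augmented training set
clf = SVC(kernel="rbf").fit(X_tr_res, y_tr_res)

# 4. Evaluate on the untouched test set
print(classification_report(y_te, clf.predict(X_te)))
```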