Firstly, I'm new to support vector machines, so apologies if I'm going about this problem the wrong way. I'm trying to implement a very basic SVM from scratch that uses the identity (linear) kernel function to classify linearly separable data into one of two classes. As an example of the kind of data I'll be using, consider the plot below, taken from this document:

[image: scatter plot of the two linearly separable classes, from the linked document]

Using the points (1, 0), (3, 1) and (3, -1) as support vectors, and writing s1', s2', s3' for these vectors with a 1 appended (so the bias is folded into the dot product), we know the following system must hold to find the separating hyperplane (this is the equation from the screenshot in the same document):

    α1 (s1' · s1') + α2 (s2' · s1') + α3 (s3' · s1') = -1
    α1 (s1' · s2') + α2 (s2' · s2') + α3 (s3' · s2') = +1
    α1 (s1' · s3') + α2 (s2' · s3') + α3 (s3' · s3') = +1

Plugging in the points, this becomes

    2α1 + 4α2 + 4α3 = -1
    4α1 + 11α2 + 9α3 = +1
    4α1 + 9α2 + 11α3 = +1

which, after some rearranging, gives Lagrange multipliers of -3.5, 0.75 and 0.75, respectively.
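To make the question concrete, here is a rough sketch of the direct approach I have in mind: build the Gram matrix of the augmented support vectors under the identity kernel, solve the resulting linear system for the multipliers, and read off the hyperplane. The class name and the little Gaussian-elimination solver below are just placeholders of mine, not from the document:

```java
import java.util.Arrays;

// Sketch: solve K * alpha = y for the three support vectors above,
// using the identity (linear) kernel on bias-augmented vectors s' = (s, 1).
public class LagrangeDemo {

    public static void main(String[] args) {
        // Support vectors, each augmented with a trailing 1 to fold in the bias.
        double[][] s = { {1, 0, 1}, {3, 1, 1}, {3, -1, 1} };
        double[] y = { -1, 1, 1 };   // class labels of the support vectors

        int n = s.length;
        double[][] K = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                K[i][j] = dot(s[i], s[j]);   // identity kernel: plain dot product

        double[] alpha = solve(K, y);        // expected: -3.5, 0.75, 0.75

        // Recover the augmented weight vector w' = sum_i alpha_i * s'_i.
        // Its last component is the bias b; here w = (1, 0), b = -2.
        double[] w = new double[3];
        for (int i = 0; i < n; i++)
            for (int d = 0; d < 3; d++)
                w[d] += alpha[i] * s[i][d];

        System.out.println("alpha = " + Arrays.toString(alpha));
        System.out.printf("w = (%.2f, %.2f), b = %.2f%n", w[0], w[1], w[2]);
    }

    static double dot(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i] * b[i];
        return sum;
    }

    // Plain Gaussian elimination with partial pivoting; fine for a handful
    // of support vectors, presumably not for large-scale problems.
    static double[] solve(double[][] a, double[] b) {
        int n = b.length;
        double[][] m = new double[n][n + 1];
        for (int i = 0; i < n; i++) {
            System.arraycopy(a[i], 0, m[i], 0, n);
            m[i][n] = b[i];
        }
        for (int col = 0; col < n; col++) {
            int pivot = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(m[r][col]) > Math.abs(m[pivot][col])) pivot = r;
            double[] tmp = m[col]; m[col] = m[pivot]; m[pivot] = tmp;
            for (int r = col + 1; r < n; r++) {
                double f = m[r][col] / m[col][col];
                for (int c = col; c <= n; c++) m[r][c] -= f * m[col][c];
            }
        }
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double sum = m[r][n];
            for (int c = r + 1; c < n; c++) sum -= m[r][c] * x[c];
            x[r] = sum / m[r][r];
        }
        return x;
    }
}
```

This reproduces the multipliers above and recovers w = (1, 0) with bias b = -2, i.e. the separating line x = 2.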
I understand how this algebra works on paper, but I'm not sure about the best approach to implementing it. So my question is: how are SVM Lagrange multipliers calculated in practice? Is there some algorithm I'm missing that can determine these values for arbitrary linearly separable support vectors? Should I just use a standard math library to solve the system of linear equations (I'm implementing the SVM in Java)? Would such a math library be too slow for large-scale learning? Note that this is a learning exercise, so I'm not just looking for a ready-made SVM library.
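For what it's worth, by "a standard math library" I mean something like Apache Commons Math (named here purely as an example; any library with an LU solver would presumably do), where the same system would be solved roughly like this:

```java
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.ArrayRealVector;
import org.apache.commons.math3.linear.DecompositionSolver;
import org.apache.commons.math3.linear.LUDecomposition;
import org.apache.commons.math3.linear.RealVector;

public class LibrarySolve {
    public static void main(String[] args) {
        // Gram matrix of the augmented support vectors (same numbers as above)
        // and the class labels on the right-hand side.
        double[][] gram   = { { 2,  4,  4 },
                              { 4, 11,  9 },
                              { 4,  9, 11 } };
        double[]   labels = { -1, 1, 1 };

        DecompositionSolver solver =
                new LUDecomposition(new Array2DRowRealMatrix(gram)).getSolver();
        RealVector alpha = solver.solve(new ArrayRealVector(labels));

        System.out.println(alpha);  // expected: {-3.5; 0.75; 0.75}
    }
}
```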
Any other tips would be greatly appreciated!
EDIT 1: LutzL makes a good point that half of the problem is actually determining which points should be used as support vectors, so to simplify things for this question, assume the support vectors have already been found.