Yes, both the soft-margin and hard-margin formulations of the standard SVM are convex optimization problems, so every local optimum is a global optimum (and the optimal weight vector is in fact unique, since the objective is strictly convex in $w$). If the problem were extremely large, you might use approximate methods rather than an exact solver, and then your numerical technique might not find the global optimum; after all, the whole point of approximating is to reduce the search time.
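For reference, the standard soft-margin primal is a quadratic program with a convex objective and affine constraints, which is what makes the convexity claim immediate:

$$
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,\ \ i = 1,\dots,n.
$$

The hard-margin case is the same problem with the slack variables $\xi_i$ removed.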
A typical approach is sequential minimal optimization (SMO): keep most of the dual variables fixed, optimize analytically over a small subset (in practice, a pair), and then repeat with different subsets until the objective can no longer be improved; a sketch follows below. Given this, I find it unlikely that anyone would solve these problems with a method that does not reach the global optimum.
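To make the idea concrete, here is a minimal sketch of the simplified SMO variant often used for teaching: a linear kernel and a random choice of the second multiplier instead of Platt's selection heuristics. The function name and defaults are my own; treat it as an illustration, not a production solver. It assumes labels `y` in {-1, +1}.

```python
import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=5):
    """Simplified SMO for the soft-margin SVM dual (linear kernel)."""
    n = X.shape[0]
    K = X @ X.T                                   # Gram matrix (linear kernel)
    alpha, b, passes = np.zeros(n), 0.0, 0
    rng = np.random.default_rng(0)
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]  # prediction error on x_i
            # only touch alpha_i if it violates the KKT conditions
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.integers(n - 1)
                j = j + 1 if j >= i else j         # pick j != i at random
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                # box constraints for alpha_j along the equality-constraint line
                if y[i] != y[j]:
                    L, H = max(0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]  # curvature along the line
                if eta >= 0:
                    continue
                # exact 1-D minimizer for alpha_j, clipped to the box
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # update the bias so KKT holds for the changed multipliers
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] \
                     - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] \
                     - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X                            # recover primal weights
    return w, b
```

Each inner step solves a one-dimensional quadratic exactly over the pair $(\alpha_i, \alpha_j)$, clipped to the box $[0, C]$, so every step can only improve (or leave unchanged) the dual objective; that monotonicity on a convex problem is why the method converges to the global optimum.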
Of course, the global optimum you find may not actually fit your data well; that depends on how good your model is, how noisy the class labels are, what the data-generating process looks like, and so on. So solving the problem to optimality does not guarantee that you have found the best possible classifier.
Here are some lecture notes I found about this in a quick search: (link)
Here is a more direct link regarding the convexity claims: (link)