kNN basically says: "If you are close to point x, then your classification will be similar to the classifications observed at x." In SVM, a close analogue would be a high-dimensional kernel with a "small" bandwidth parameter, since this makes the SVM fit much more locally (and overfit more). That is, the SVM comes closer to saying "if you are close to point x, then your classification will be similar to the classifications observed at x."
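To make the analogy concrete, here is a minimal sketch (assuming scikit-learn; the dataset and parameter values are purely illustrative) comparing kNN with an RBF-kernel SVM whose bandwidth is small, i.e. whose gamma is large, so only nearby training points influence a prediction:

```python
# Sketch (assumes scikit-learn): an RBF-kernel SVM with a small bandwidth
# (large gamma) predicts mostly from nearby training points, much like kNN.
# The dataset and parameter values are illustrative, not a recommendation.
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# gamma is the inverse of the squared bandwidth: a large gamma means a
# narrow kernel, so the decision is driven by close-by points only.
svm_local = SVC(kernel="rbf", gamma=50.0, C=1.0).fit(X, y)

print("kNN training accuracy:        ", knn.score(X, y))
print("narrow-RBF SVM training acc.: ", svm_local.score(X, y))
```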
I recommend that you start with a Gaussian kernel and check the results for different parameters. From my own experience (which, of course, is focused on certain types of data sets, so your mileage may vary), a well-tuned SVM outperforms a well-tuned kNN.
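A sketch of what "checking the results for different parameters" can look like, again assuming scikit-learn (the grid values are only a starting point, not tuned for any particular dataset):

```python
# Sketch (assumes scikit-learn): tune a Gaussian (RBF) kernel SVM by
# cross-validated grid search over C and gamma. Grid values are illustrative.
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)

param_grid = {
    "C": [0.1, 1, 10, 100],
    "gamma": [0.01, 0.1, 1, 10],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best parameters: ", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```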
Questions for you:
1) How do you choose k in kNN?
2) What parameter settings have you tried for the SVM?
3) Do you measure accuracy in-sample or out-of-sample? (See the sketch after this list.)
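On the last question: in-sample accuracy (scoring on the training data) is usually optimistic, so a held-out test set or cross-validation is what you want to compare models on. A minimal sketch, again assuming scikit-learn with illustrative data and parameters:

```python
# Sketch (assumes scikit-learn): compare in-sample accuracy with an
# out-of-sample estimate from a held-out split and from cross-validation.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X_train, y_train)

print("in-sample accuracy:     ", clf.score(X_train, y_train))
print("out-of-sample accuracy: ", clf.score(X_test, y_test))
print("5-fold CV accuracy:     ", cross_val_score(clf, X, y, cv=5).mean())
```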