[Refer: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods by Nello Cristianini and John Shawe-Taylor]
The training algorithm depends on the data only through dot products in H, i.e. through functions of the form Φ(x_i)·Φ(x_j). Now if there were a “kernel function” K such that
K(x_i,x_j) = Φ(x_i)·Φ(x_j),
we would only need to use K in the training algorithm, and would never need to know Φ explicitly. One example is the radial basis function (RBF), or Gaussian, kernel, for which H is infinite-dimensional, so it would not be easy to work with Φ explicitly.
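The identity K(x_i,x_j) = Φ(x_i)·Φ(x_j) can be checked directly for a kernel with a small, explicit feature map. The sketch below (illustrative Python, not the File Exchange code) shows that the homogeneous degree-2 polynomial kernel K(x,y) = (x·y)² equals the dot product of explicit feature vectors Φ(x) = (x₁², √2·x₁x₂, x₂²), and also evaluates a Gaussian kernel, whose feature space H is infinite-dimensional and so cannot be written out this way.

```python
import math

def poly2_kernel(x, y):
    """Homogeneous degree-2 polynomial kernel: K(x, y) = (x . y)^2."""
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    """Explicit feature map for the 2-D degree-2 polynomial kernel."""
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel; its feature space H is infinite-dimensional."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

x, y = (1.0, 2.0), (3.0, 0.5)
lhs = poly2_kernel(x, y)                          # kernel in input space
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))  # dot product in feature space
# lhs and rhs agree up to floating point: K(x, y) = Phi(x) . Phi(y)
```

For the Gaussian kernel no such finite `phi` exists, which is exactly why using K instead of Φ is essential there.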
Training the model requires the choice of:
• the kernel function, which determines the shape of the decision surface
• the parameters of the kernel function (e.g. for the Gaussian kernel, the variance of the Gaussian; for the polynomial kernel, the degree of the polynomial)
• the regularization parameter λ.
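The three choices above form a small hyperparameter grid. A minimal sketch (illustrative Python, not the File Exchange code; the scoring step is omitted) of enumerating kernel family, kernel parameter, and regularization parameter λ:

```python
import itertools
import math

def make_kernel(name, param):
    """Return a kernel function for the chosen family and parameter."""
    if name == "gaussian":
        sigma = param  # width (variance-related) of the Gaussian
        return lambda x, y: math.exp(
            -sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2)
        )
    if name == "polynomial":
        degree = param  # degree of the polynomial
        return lambda x, y: (1 + sum(a * b for a, b in zip(x, y))) ** degree
    raise ValueError(name)

# Candidate choices: (kernel family, kernel parameter) crossed with lambda.
grid = list(itertools.product(
    [("gaussian", 0.5), ("gaussian", 2.0), ("polynomial", 2), ("polynomial", 3)],
    [0.01, 0.1, 1.0],  # regularization parameter lambda
))

for (name, param), lam in grid:
    k = make_kernel(name, param)
    _ = k((1.0, 0.0), (0.0, 1.0))  # kernel is now usable by the trainer
    # In practice, each (kernel, lambda) pair would be scored by
    # cross-validation and the best-scoring combination kept.
```

In practice one would plug each candidate kernel and λ into the SVM trainer and select by validation error; the loop above only shows how the choices are enumerated.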
Bhartendu (2020). SVM using various Kernels (https://www.mathworks.com/matlabcentral/fileexchange/63033-svm-using-various-kernels), MATLAB Central File Exchange. Retrieved .
Thank you for sharing this code. Can you provide a way to create an Excel sheet for the predicted results for each kernel type within the source code, and to compare the predicted values and their correlation with the actual output?
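One way to do what the comment asks, sketched in Python rather than the original MATLAB (the data below is illustrative, not output of the submission): write the actual outputs and each kernel's predictions side by side to a CSV file (which Excel opens directly), then compute the Pearson correlation per kernel.

```python
import csv
import math

def pearson(a, b):
    """Pearson correlation between actual and predicted values."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Illustrative labels and per-kernel predictions (hypothetical values).
actual = [1, -1, 1, 1, -1]
predictions = {
    "gaussian":   [1, -1, 1, -1, -1],
    "polynomial": [1, -1, 1, 1, 1],
}

# One column for the actual output, one per kernel; Excel opens the CSV.
with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["actual"] + list(predictions))
    for i, y in enumerate(actual):
        writer.writerow([y] + [predictions[k][i] for k in predictions])

for name, pred in predictions.items():
    print(name, pearson(actual, pred))
```

In MATLAB the same idea would use `writematrix` (or `xlswrite` in older releases) to export the table and `corr` for the correlation.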
Abbas M Abd
Thanks @yulai zhao