Fast weight calculation for kernel-based perceptron in two-class classification problems

We propose a method, called Direct Kernel Perceptron (DKP), to directly calculate the weights of a single perceptron using a closed-form expression which does not require any training stage. The weights minimize a performance measure which simultaneously takes into account the training error and the classification margin of the perceptron. The ability to learn non-linearly separable problems is provided by a kernel mapping between the input and the hidden space. Using Gaussian kernels, DKP achieves better results than the standard Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) on a wide variety of benchmark two-class data sets. The computational cost of DKP increases linearly with the dimension of the input space and is much lower than that of SVM.
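To make the setting concrete, the following sketch shows the decision function of a generic kernel perceptron with a Gaussian kernel, which is the kind of classifier whose weights DKP computes. The per-sample weights `alphas` and the `bias` are placeholders here: the abstract does not reproduce the paper's closed-form expression, so this snippet only illustrates how such weights would be used at prediction time, not how DKP obtains them.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_perceptron_predict(X_train, alphas, bias, x, gamma=1.0):
    """Two-class decision of a kernel perceptron: sign of the weighted
    kernel sum over the training points.

    `alphas` are per-training-sample weights; in DKP they would come from
    the paper's closed-form expression (not shown here) instead of an
    iterative training stage.
    """
    s = sum(a * gaussian_kernel(xi, x, gamma)
            for a, xi in zip(alphas, X_train))
    return 1 if s + bias >= 0 else -1

# Hypothetical toy example: two training points with opposite weights.
X_train = np.array([[0.0, 0.0], [1.0, 1.0]])
alphas = [1.0, -1.0]
print(kernel_perceptron_predict(X_train, alphas, 0.0, np.array([0.0, 0.0])))   # class +1
print(kernel_perceptron_predict(X_train, alphas, 0.0, np.array([1.0, 1.0])))   # class -1
```

Because prediction sums one kernel evaluation per training point in each input dimension, the cost scales linearly with the input dimension, consistent with the complexity claim above.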

Keywords: neural computing