Support vector machines (SVMs) and regularized least squares (RLS) are two promising recent techniques for classification. SVMs implement the structural risk minimization principle and use the kernel trick to extend to the non-linear case. RLS, by contrast, minimizes a regularized functional directly in a reproducing kernel Hilbert space defined by a kernel. While both have a sound mathematical foundation, RLS is strikingly simple, whereas SVMs in general yield a sparse representation of the solution. In addition, the performance of SVMs has been well documented, but little has been reported for RLS. This paper applies the two techniques to a collection of data sets and presents results demonstrating virtually identical performance by the two methods.
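A minimal sketch of the RLS formulation described above (the symbols K, lambda, and the training pairs are generic illustration, not taken from the paper): given training data $\{(x_i, y_i)\}_{i=1}^{n}$, a kernel $K$, and a regularization parameter $\lambda > 0$, RLS solves

$$\min_{f \in \mathcal{H}_K} \; \frac{1}{n}\sum_{i=1}^{n} \big(y_i - f(x_i)\big)^2 + \lambda \|f\|_{K}^{2},$$

and by the representer theorem the minimizer takes the form $f(x) = \sum_{i=1}^{n} c_i K(x, x_i)$, with coefficients obtained from the linear system $(\mathbf{K} + \lambda n I)\,\mathbf{c} = \mathbf{y}$, where $\mathbf{K}_{ij} = K(x_i, x_j)$. The SVM replaces the squared loss with the hinge loss, which typically yields a sparse set of nonzero coefficients (the support vectors) but requires solving a quadratic program rather than a linear system.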
Proceedings - International Conference on Pattern Recognition
Published - 17 Dec 2004
Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004 - Cambridge, United Kingdom
Duration: 23 Aug 2004 → 26 Aug 2004