SVM vs regularized least squares classification

Peng Zhang, Jing Peng

Research output: Contribution to journal › Conference article › peer-review

51 Citations (Scopus)

Abstract

Support vector machines (SVMs) and regularized least squares (RLS) are two recent, promising techniques for classification. SVMs implement the structural risk minimization principle and use the kernel trick to extend it to the non-linear case. RLS, on the other hand, directly minimizes a regularized functional in a reproducing kernel Hilbert space defined by a kernel. While both have a sound mathematical foundation, RLS is strikingly simple. SVMs, in contrast, generally yield a sparse representation of the solution. In addition, the performance of SVMs has been well documented, whereas comparatively little has been reported for RLS. This paper applies the two techniques to a collection of data sets and presents results demonstrating virtually identical performance by the two methods.
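
The contrast drawn in the abstract can be made concrete. RLS with a squared loss reduces to a single dense linear solve, (K + lam * n * I) c = y, over the full training set, whereas an SVM solves a quadratic program whose dual solution is supported on only a subset of the training points. The following is a minimal sketch of such a comparison, not code from the paper: the toy dataset, the RBF kernel, and the settings of gamma, C, and lam are illustrative assumptions, implemented with scikit-learn and NumPy.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy binary problem (illustrative; the paper uses benchmark data sets).
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gamma = 0.1  # assumed RBF kernel width

# SVM: hinge loss; the dual solution is sparse (support vectors only).
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_tr, y_tr)

# RLS: squared loss on +/-1 labels; minimizing the regularized functional
# in the RKHS reduces to one linear solve over ALL training points:
#     (K + lam * n * I) c = y
lam = 1e-2
n = len(y_tr)
y_pm = np.where(y_tr == 1, 1.0, -1.0)
K = rbf_kernel(X_tr, X_tr, gamma=gamma)
c = np.linalg.solve(K + lam * n * np.eye(n), y_pm)
rls_pred = (rbf_kernel(X_te, X_tr, gamma=gamma) @ c > 0).astype(int)

print("SVM accuracy:", svm.score(X_te, y_te))
print("RLS accuracy:", float((rls_pred == y_te).mean()))
print("SVM support vectors:", len(svm.support_), "of", n)

Under reasonable hyperparameter choices the two classifiers tend to score very close to each other, consistent with the paper's finding; the practical trade-off is the sparsity of the SVM solution against the one-line linear solve of RLS.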

Original language: English
Pages (from-to): 176-179
Number of pages: 4
Journal: Proceedings - International Conference on Pattern Recognition
Volume: 1
ISSN: 1051-4651
Publisher: Institute of Electrical and Electronics Engineers Inc.
State: Published - 17 Dec 2004
Event: 17th International Conference on Pattern Recognition (ICPR 2004), Cambridge, United Kingdom
Duration: 23 Aug 2004 - 26 Aug 2004
Link: http://www.scopus.com/inward/record.url?scp=10044277812&partnerID=8YFLogxK
