### Abstract

In machine learning, a good predictive model is one that generalizes well to future unseen data. In general, this problem is ill-posed. To mitigate it, a predictive model can be constructed by simultaneously minimizing an empirical error over the training samples and controlling the complexity of the model; regularized least squares (RLS) follows this approach. RLS requires a matrix inversion, which is expensive, and its "big data" applications can therefore be adversely affected. To address this issue, we have developed an efficient machine learning algorithm for pattern recognition that approximates RLS. The algorithm does not require matrix inversion and achieves competitive performance against RLS. It has been shown mathematically that RLS is a sound learning algorithm; therefore, a definitive statement about the relationship between the new algorithm and RLS lays a solid theoretical foundation for the new algorithm. A recent study shows that the spectral norm of the kernel matrix in RLS is tightly bounded above by the size of the matrix, and that this spectral norm becomes a constant when the training samples have independent centered sub-Gaussian coordinates. Typical sub-Gaussian random vectors, such as standard normal and Bernoulli vectors, satisfy this assumption; essentially, each sample is drawn from a product distribution formed from centered univariate sub-Gaussian distributions. These results allow us to establish a bound between the new algorithm and RLS for finite samples and to show that the new algorithm converges to RLS in the limit. Experimental results validate the theoretical analysis and demonstrate that the new algorithm is very promising for solving "big data" classification problems.
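The paper's approximation algorithm is not spelled out in this record; as an illustration of the trade-off the abstract describes, the sketch below contrasts the closed-form RLS solution, which requires solving (inverting) a d×d system, with an inversion-free gradient-descent approximation of the same regularized objective. The function names and step-size choice are illustrative assumptions, not the authors' method.

```python
import numpy as np

def rls_closed_form(X, y, lam=1.0):
    """RLS/ridge closed form: w = (X^T X + lam*I)^{-1} X^T y (needs a matrix solve)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def rls_gradient(X, y, lam=1.0, lr=None, n_iter=5000):
    """Inversion-free approximation: gradient descent on 0.5*||Xw - y||^2 + 0.5*lam*||w||^2."""
    d = X.shape[1]
    w = np.zeros(d)
    if lr is None:
        # Safe step size 1/L, where L = ||X||_2^2 + lam bounds the gradient's Lipschitz constant;
        # note the spectral norm ||X||_2 also plays the key role in the abstract's analysis.
        lr = 1.0 / (np.linalg.norm(X, 2) ** 2 + lam)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) + lam * w  # gradient of the regularized least-squares loss
        w -= lr * grad
    return w
```

With enough iterations the gradient iterate converges to the closed-form solution, since the regularized objective is strongly convex; the point of an approximate scheme is to stop far earlier while staying provably close.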

Original language | English |
---|---|
Title of host publication | Pattern Recognition and Tracking XXIX |
Editors | Mohammad S. Alam |
Publisher | SPIE |
Volume | 10649 |
ISBN (Electronic) | 9781510618091 |
DOIs | https://doi.org/10.1117/12.2305075 |
State | Published - 1 Jan 2018 |
Event | Pattern Recognition and Tracking XXIX 2018 - Orlando, United States |
Duration | 18 Apr 2018 → 19 Apr 2018 |

### Other

Other | Pattern Recognition and Tracking XXIX 2018 |
---|---|
Country | United States |
City | Orlando |
Period | 18/04/18 → 19/04/18 |

### Keywords

- Classification
- regularized least squares
- ridge regression

### Cite this

Peng, J., & Aved, A. J. (2018). Approximate regularized least squares algorithm for classification. In M. S. Alam (Ed.), *Pattern Recognition and Tracking XXIX* (Vol. 10649, 106490S). SPIE. https://doi.org/10.1117/12.2305075

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review
