A scalable projective scaling algorithm for lp loss with convex penalizations

Hongbo Zhou, Qiang Cheng

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an lp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but explicit linear scalability with respect to the problem size has rarely been reported. The proposed approach starts by developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions that satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it involves only matrix-vector multiplications and avoids matrix inversion. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems.
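
The abstract outlines an MM (majorization-minimization) style scheme whose iterations use only matrix-vector products. The sketch below is purely illustrative and is not the authors' MPA: it assumes, for concreteness, an lp residual loss (1 <= p <= 2) plus an l1 penalty, majorizes the lp loss by an IRLS-style weighted quadratic, and handles each quadratic subproblem with a few proximal-gradient (soft-thresholding) steps, so every operation reduces to a matrix-vector product and no matrix is ever inverted.

# Illustrative sketch only -- not the authors' MPA. Assumed problem:
#   min_x  ||A x - b||_p^p + lam * ||x||_1,  with 1 <= p <= 2.
# The lp loss is majorized at the current residual by an IRLS-style weighted
# quadratic; the quadratic-plus-l1 subproblem is handled with proximal-gradient
# (soft-thresholding) steps, so only products with A and A^T are used.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mm_lp_l1(A, b, p=1.5, lam=0.1, outer_iters=50, inner_iters=10, eps=1e-8):
    n = A.shape[1]
    x = np.zeros(n)

    # Estimate ||A||_2^2 by power iteration (matrix-vector products only);
    # it bounds the curvature of every weighted quadratic majorizer below.
    v = np.ones(n) / np.sqrt(n)
    for _ in range(50):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    L0 = float(v @ (A.T @ (A @ v)))

    for _ in range(outer_iters):
        r = A @ x - b
        # IRLS weights: |r_i|^p <= (p/2) w_i r_i^2 + const for p <= 2,
        # with w_i = |r_i|^(p-2); eps avoids division by zero.
        w = (np.abs(r) + eps) ** (p - 2.0)
        lip = p * w.max() * L0          # Lipschitz bound for the majorizer's gradient
        step = 1.0 / lip
        for _ in range(inner_iters):
            grad = p * (A.T @ (w * (A @ x - b)))   # matrix-vector products only
            x = soft_threshold(x - step * grad, step * lam)
    return x

# Tiny usage example on synthetic sparse-recovery data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    x_true = np.zeros(50)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = mm_lp_l1(A, b, p=1.5, lam=0.05)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-2))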

Original language: English
Article number: 6808493
Pages (from-to): 265-276
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 26
Issue number: 2
DOI: 10.1109/TNNLS.2014.2314129
State: Published - 1 Feb 2015

Keywords

  • Convex function
  • Karmarkar's projective scaling condition
  • lp loss function
  • message passing algorithm (MPA)
  • minimization-majorization (MM)
  • nonconvex
  • scalability

Cite this

@article{fbcf60ce00d24793aaf70da19ad9c8b3,
title = "A scalable projective scaling algorithm for lp loss with convex penalizations",
abstract = "This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions, which have a lp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but there exists rarely any report on explicit linear scalability with respect to the problem size. The proposed approach starts with developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions, which satisfy the Karmarkar's projective scaling condition. Under this condition, a light weight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it only involves matrix-vector multiplication and avoids matrix inversion operations. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems.",
keywords = "Convex function, Karmarkar's projective scaling condition, l loss function, message passing algorithm (MPA), minimization-majorization (MM), nonconvex, scalability",
author = "Hongbo Zhou and Qiang Cheng",
year = "2015",
month = "2",
day = "1",
doi = "10.1109/TNNLS.2014.2314129",
language = "English",
volume = "26",
pages = "265--276",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "IEEE Computational Intelligence Society",
number = "2",

}

TY - JOUR

T1 - A scalable projective scaling algorithm for lp loss with convex penalizations

AU - Zhou, Hongbo

AU - Cheng, Qiang

PY - 2015/2/1

Y1 - 2015/2/1

N2 - This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an lp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but explicit linear scalability with respect to the problem size has rarely been reported. The proposed approach starts by developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions that satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it involves only matrix-vector multiplications and avoids matrix inversion. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems.

AB - This paper presents an accurate, efficient, and scalable algorithm for minimizing a special family of convex functions that have an lp loss function as an additive component. For this problem, well-known learning algorithms often have well-established results on accuracy and efficiency, but explicit linear scalability with respect to the problem size has rarely been reported. The proposed approach starts by developing a second-order learning procedure with iterative descent for general convex penalization functions, and then builds efficient algorithms for a restricted family of functions that satisfy Karmarkar's projective scaling condition. Under this condition, a lightweight, scalable message passing algorithm (MPA) is further developed by constructing a series of simpler equivalent problems. The proposed MPA is intrinsically scalable because it involves only matrix-vector multiplications and avoids matrix inversion. The MPA is proven to be globally convergent for convex formulations; for nonconvex situations, it converges to a stationary point. The accuracy, efficiency, scalability, and applicability of the proposed method are verified through extensive experiments on sparse signal recovery, face image classification, and over-complete dictionary learning problems.

KW - Convex function

KW - Karmarkar's projective scaling condition

KW - lp loss function

KW - message passing algorithm (MPA)

KW - minimization-majorization (MM)

KW - nonconvex

KW - scalability

UR - http://www.scopus.com/inward/record.url?scp=84921454442&partnerID=8YFLogxK

U2 - 10.1109/TNNLS.2014.2314129

DO - 10.1109/TNNLS.2014.2314129

M3 - Article

C2 - 25608289

AN - SCOPUS:84921454442

VL - 26

SP - 265

EP - 276

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 2

M1 - 6808493

ER -