A minimax framework for classification with applications to images and high dimensional data

Qiang Cheng, H. Zhou, Jie Cheng, Huiqing Li

Research output: Contribution to journal › Article

16 Scopus citations

Abstract

This paper introduces a minimax framework for multiclass classification, which is applicable to general data including, in particular, imagery and other types of high-dimensional data. The framework consists of estimating a representation model that minimizes the fitting errors under a class of distortions of interest to an application, and subsequently deriving categorical information based on the estimated model. A variety of commonly used regression models, including lasso, elastic net and ridge regression, can be regarded as special cases that correspond to specific classes of distortions. Optimal decision rules are derived for this classification framework. By using kernel techniques, the framework can account for nonlinearity in the input space. To demonstrate the power of the framework, we consider a class of signal-dependent distortions and build a new family of classifiers as new special cases. This family of new methods, minimax classification with generalized multiplicative distortions, often outperforms state-of-the-art classification methods such as the support vector machine in accuracy. Extensive experimental results on images, gene expressions and other types of data verify the effectiveness of the proposed framework.
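The representation-based idea behind the framework can be illustrated with ridge regression, one of the special cases the abstract mentions: fit each test sample against the training data of each class and assign it to the class whose model yields the smallest fitting error. The sketch below is a minimal reconstruction-residual classifier for illustration only, not the paper's full minimax procedure; all function names and parameter values are hypothetical.

```python
import numpy as np

def ridge_class_residual(y, A, lam=0.1):
    # Closed-form ridge fit of sample y against class dictionary A
    # (features x samples): minimize ||y - A c||^2 + lam ||c||^2,
    # then report the resulting fitting error ||y - A c||.
    c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    return np.linalg.norm(y - A @ c)

def classify(y, class_dicts, lam=0.1):
    # Assign y to the class whose representation model fits it best.
    residuals = [ridge_class_residual(y, A, lam) for A in class_dicts]
    return int(np.argmin(residuals))

# Toy data: two classes, each a dictionary of 5 training samples in 20 dims.
rng = np.random.default_rng(0)
A0 = rng.normal(size=(20, 5))
A1 = rng.normal(size=(20, 5))
# Test sample generated from the class-0 dictionary plus small noise.
w = rng.normal(size=5)
y = A0 @ w + 0.01 * rng.normal(size=20)
print(classify(y, [A0, A1]))  # → 0
```

Swapping the ridge penalty for an l1 penalty would give the lasso special case; the paper's contribution is choosing the fitting criterion via a minimax principle over a class of distortions rather than fixing one penalty in advance.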

Original language: English
Article number: 6824834
Pages (from-to): 2117-2130
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 36
Issue number: 11
State: Published - 1 Nov 2014

Keywords

  • Bayesian optimal decision
  • generalized multiplicative distortion
  • high dimensional data
  • kernel
  • minimax optimization
  • multiclass classification
