The Fisher-Markov selector: Fast selecting maximally separable feature subset for multiclass classification with applications to high-dimensional data

Qiang Cheng, Hongbo Zhou, Jie Cheng

Research output: Contribution to journal › Article › peer-review

128 Scopus citations

Abstract

Selecting features for multiclass classification is a critically important task for pattern recognition and machine learning applications. Especially challenging is selecting an optimal subset of features from high-dimensional data, which typically have many more variables than observations and contain significant noise, missing components, or outliers. Existing methods either cannot handle high-dimensional data efficiently or scalably, or can only obtain a local optimum instead of the global optimum. Toward selecting the globally optimal subset of features efficiently, we introduce a new selector, which we call the Fisher-Markov selector, to identify those features that are the most useful in describing essential differences among the possible groups. In particular, in this paper we present a way to represent essential discriminating characteristics together with sparsity as an optimization objective. With properly identified measures for sparseness and discriminativeness in possibly high-dimensional settings, we take a systematic approach to optimizing these measures so as to choose the best feature subset. We use Markov random field optimization techniques to solve the formulated objective functions for simultaneous feature selection. Our results are noncombinatorial, and they can achieve the exact global optimum of the objective function for some special kernels. The method is fast; in particular, it can be linear in the number of features and quadratic in the number of observations. We apply our procedure to a variety of real-world data, including a mid-dimensional optical handwritten digit data set and high-dimensional microarray gene expression data sets. The effectiveness of our method is confirmed by experimental results. From a pattern recognition and model selection viewpoint, our procedure shows that it is possible to select the most discriminating subset of variables by solving a very simple unconstrained objective function, which in fact can be obtained in an explicit expression.
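As an informal illustration of how a per-feature class-separability criterion can be evaluated with cost linear in the number of features, the sketch below computes a classical Fisher-score-style ratio of between-class to within-class variation and keeps the top-ranked features. This is only a simplified stand-in suggested by the abstract's linear-kernel case, not the paper's actual Fisher-Markov objective; the function names and the ranking-based selection rule are assumptions made for this example.

    import numpy as np

    def fisher_score_per_feature(X, y):
        """Per-feature Fisher-type separability score.

        Illustrative only: ratio of between-class to within-class variation
        for each feature, computed independently per feature. Not the exact
        Fisher-Markov selector objective from the paper.
        X: (n_samples, n_features) array, y: (n_samples,) class labels.
        """
        classes = np.unique(y)
        overall_mean = X.mean(axis=0)
        between = np.zeros(X.shape[1])
        within = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            n_c = Xc.shape[0]
            mean_c = Xc.mean(axis=0)
            between += n_c * (mean_c - overall_mean) ** 2
            within += ((Xc - mean_c) ** 2).sum(axis=0)
        return between / (within + 1e-12)  # higher score = more separable

    def select_features(X, y, k=10):
        """Return indices of the k highest-scoring features (illustrative)."""
        scores = fisher_score_per_feature(X, y)
        return np.argsort(scores)[::-1][:k]

For example, calling select_features(X, y, k=50) on a gene expression matrix with thousands of columns would return the 50 columns whose class means are most separated relative to their within-class spread, at a cost that grows linearly in the number of features.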

Original language: English
Article number: 5611544
Pages (from-to): 1217-1233
Number of pages: 17
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 33
Issue number: 6
DOIs
State: Published - 2011

Keywords

  • Classification
  • Fisher's linear discriminant analysis
  • Markov random field
  • feature subset selection
  • high-dimensional data
  • kernel
