Empirical comparison of robustness of classifiers on IR imagery

Peng Zhang, Jing Peng, Kun Zhang, S. Richard F. Sims

Research output: Contribution to journal › Conference article › peer-review

3 Citations (Scopus)

Abstract

Many classifiers have been proposed for ATR applications. Given a set of labeled training data, a classifier is built from it and then applied to predict the label of a new test point. If there is enough training data, and the test points are drawn from the same distribution (i.i.d.) as the training data, then many classifiers perform quite well. In reality, however, there is never enough training data, or limited computational resources allow only part of the training data to be used. Likewise, the distribution of new test points may differ from that of the training data, so that the training data is not representative of the test data. In this paper, we empirically compare several classifiers, namely support vector machines, regularized least squares classifiers, C4.4, C4.5, random decision trees, bagged C4.4, and bagged C4.5, on IR imagery. We repeatedly reduce the training data by half (making it less representative of the test data) and evaluate the resulting classifiers on the test data. This allows us to assess the robustness of classifiers against a varying knowledge base. A robust classifier is one whose accuracy is least sensitive to changes in the training data. Our results show that ensemble methods (random decision trees, bagged C4.4, and bagged C4.5) outlast single classifiers as the training data size decreases.
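The halving protocol described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's experiment: the synthetic Gaussian data stands in for IR-imagery features, and a simple nearest-centroid classifier stands in for the classifiers actually compared (SVMs, C4.5, bagged ensembles, etc.). Only the evaluation loop, which repeatedly halves the training set and measures test accuracy, mirrors the procedure the paper describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for extracted IR-imagery features: two Gaussian classes.
def make_data(n_per_class, dim=8, shift=1.0):
    X0 = rng.normal(0.0, 1.0, (n_per_class, dim))
    X1 = rng.normal(shift, 1.0, (n_per_class, dim))
    X = np.vstack([X0, X1])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

# Minimal nearest-centroid classifier (illustrative only; the paper compares
# SVMs, regularized least squares, C4.4/C4.5, and bagged/randomized trees).
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[dists.argmin(axis=1)]

X_train, y_train = make_data(512)
X_test, y_test = make_data(512)

# Repeatedly halve the training set and record test accuracy each time:
# a robust classifier is one whose accuracy degrades least as data shrinks.
accs = []
idx = rng.permutation(len(y_train))
n = len(idx)
while n >= 8:
    sub = idx[:n]
    model = fit_centroids(X_train[sub], y_train[sub])
    acc = float((predict(model, X_test) == y_test).mean())
    accs.append((n, acc))
    n //= 2

for size, acc in accs:
    print(f"train size {size:4d}: accuracy {acc:.3f}")
```

Plotting accuracy against training-set size for each classifier under this loop yields the kind of robustness comparison the paper reports, where flatter curves indicate classifiers that are less sensitive to a shrinking knowledge base.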

Original language: English
Article number: 41
Pages (from-to): 370-379
Number of pages: 10
Journal: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 5807
DOI: 10.1117/12.604163
State: Published - 10 Nov 2005
Event: Automatic Target Recognition XV - Orlando, FL, United States
Duration: 29 Mar 2005 - 31 Mar 2005

Keywords

  • ATR
  • Decision tree
  • Ensemble method
  • Robustness
  • SVMs

Cite this

@article{c5036aeeb61a4988aa239756420854a0,
title = "Empirical comparison of robustness of classifiers on IR imagery",
abstract = "Many classifiers have been proposed for ATR applications. Given a set of labeled training data, a classifier is built from it and then applied to predict the label of a new test point. If there is enough training data, and the test points are drawn from the same distribution (i.i.d.) as the training data, then many classifiers perform quite well. In reality, however, there is never enough training data, or limited computational resources allow only part of the training data to be used. Likewise, the distribution of new test points may differ from that of the training data, so that the training data is not representative of the test data. In this paper, we empirically compare several classifiers, namely support vector machines, regularized least squares classifiers, C4.4, C4.5, random decision trees, bagged C4.4, and bagged C4.5, on IR imagery. We repeatedly reduce the training data by half (making it less representative of the test data) and evaluate the resulting classifiers on the test data. This allows us to assess the robustness of classifiers against a varying knowledge base. A robust classifier is one whose accuracy is least sensitive to changes in the training data. Our results show that ensemble methods (random decision trees, bagged C4.4, and bagged C4.5) outlast single classifiers as the training data size decreases.",
keywords = "ATR, Decision tree, Ensemble method, Robustness, SVMs",
author = "Peng Zhang and Jing Peng and Kun Zhang and Sims, {S. Richard F.}",
year = "2005",
month = "11",
day = "10",
doi = "10.1117/12.604163",
language = "English",
volume = "5807",
number = "41",
pages = "370--379",
journal = "Proceedings of SPIE - The International Society for Optical Engineering",
issn = "0277-786X",
publisher = "SPIE",
}

Empirical comparison of robustness of classifiers on IR imagery. / Zhang, Peng; Peng, Jing; Zhang, Kun; Sims, S. Richard F.

In: Proceedings of SPIE - The International Society for Optical Engineering, Vol. 5807, 41, 10.11.2005, p. 370-379.

Research output: Contribution to journal › Conference article › peer-review

TY - JOUR

T1 - Empirical comparison of robustness of classifiers on IR imagery

AU - Zhang, Peng

AU - Peng, Jing

AU - Zhang, Kun

AU - Sims, S. Richard F.

PY - 2005/11/10

Y1 - 2005/11/10

N2 - Many classifiers have been proposed for ATR applications. Given a set of labeled training data, a classifier is built from it and then applied to predict the label of a new test point. If there is enough training data, and the test points are drawn from the same distribution (i.i.d.) as the training data, then many classifiers perform quite well. In reality, however, there is never enough training data, or limited computational resources allow only part of the training data to be used. Likewise, the distribution of new test points may differ from that of the training data, so that the training data is not representative of the test data. In this paper, we empirically compare several classifiers, namely support vector machines, regularized least squares classifiers, C4.4, C4.5, random decision trees, bagged C4.4, and bagged C4.5, on IR imagery. We repeatedly reduce the training data by half (making it less representative of the test data) and evaluate the resulting classifiers on the test data. This allows us to assess the robustness of classifiers against a varying knowledge base. A robust classifier is one whose accuracy is least sensitive to changes in the training data. Our results show that ensemble methods (random decision trees, bagged C4.4, and bagged C4.5) outlast single classifiers as the training data size decreases.

AB - Many classifiers have been proposed for ATR applications. Given a set of labeled training data, a classifier is built from it and then applied to predict the label of a new test point. If there is enough training data, and the test points are drawn from the same distribution (i.i.d.) as the training data, then many classifiers perform quite well. In reality, however, there is never enough training data, or limited computational resources allow only part of the training data to be used. Likewise, the distribution of new test points may differ from that of the training data, so that the training data is not representative of the test data. In this paper, we empirically compare several classifiers, namely support vector machines, regularized least squares classifiers, C4.4, C4.5, random decision trees, bagged C4.4, and bagged C4.5, on IR imagery. We repeatedly reduce the training data by half (making it less representative of the test data) and evaluate the resulting classifiers on the test data. This allows us to assess the robustness of classifiers against a varying knowledge base. A robust classifier is one whose accuracy is least sensitive to changes in the training data. Our results show that ensemble methods (random decision trees, bagged C4.4, and bagged C4.5) outlast single classifiers as the training data size decreases.

KW - ATR

KW - Decision tree

KW - Ensemble method

KW - Robustness

KW - SVMs

UR - http://www.scopus.com/inward/record.url?scp=27544515950&partnerID=8YFLogxK

U2 - 10.1117/12.604163

DO - 10.1117/12.604163

M3 - Conference article

VL - 5807

SP - 370

EP - 379

JO - Proceedings of SPIE - The International Society for Optical Engineering

JF - Proceedings of SPIE - The International Society for Optical Engineering

SN - 0277-786X

M1 - 41

ER -