Linear unmixing based feature extraction for hyperspectral data in a high performance computing environment

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

Most hyperspectral processing techniques suffer from high execution times: they are iterative, and each step's complexity depends on the size of the data. As data sizes continue to grow, the computational cost of processing will only increase. In this paper a new group of distributed algorithms for linear unmixing is introduced. The approach parallelizes recent techniques such as Nonnegative Matrix Factorization (NMF). A theoretical introduction to NMF is presented and its computational costs are discussed. Next, the design of parallel algorithms that minimize data-distribution and communication overhead is shown. The experimental results support the claim that the distributed algorithms provide a significant computational advantage over their sequential counterparts.
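The NMF-based linear unmixing the abstract refers to can be illustrated with the standard multiplicative-update rules of Lee and Seung; the sketch below is a minimal sequential version for context, not the paper's distributed formulation, and all function and variable names are illustrative assumptions.

```python
import numpy as np

def nmf_unmix(X, k, n_iter=200, eps=1e-9, seed=0):
    """Approximately factor X (bands x pixels) as W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates. In linear unmixing terms,
    the columns of W play the role of endmember spectra and the columns
    of H are per-pixel abundance vectors. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        # Multiplicative updates keep W and H nonnegative by construction;
        # eps guards against division by zero.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Each iteration is dominated by dense matrix products whose cost scales with the number of pixels, which is why the paper distributes the data and these updates across a cluster.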

Original language: English
Title of host publication: Imaging Spectrometry XVII
DOIs
State: Published - 2012
Event: Imaging Spectrometry XVII - San Diego, CA, United States
Duration: 13 Aug 2012 → 14 Aug 2012

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 8515
ISSN (Print): 0277-786X

Other

Other: Imaging Spectrometry XVII
Country/Territory: United States
City: San Diego, CA
Period: 13/08/12 → 14/08/12

Keywords

  • Computer clusters
  • Distributed computing
  • Hyperspectral data
  • Linear unmixing
  • Nonnegative matrix factorization
