Linear unmixing based feature extraction for hyperspectral data in a high performance computing environment

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

1 Citation (Scopus)

Abstract

Most hyperspectral processing techniques suffer from high execution times. They are iterative, with each step's complexity dependent on the size of the data. As the size of the data continues to increase, the computational cost of processing it will only go up. In this paper, a new group of distributed algorithms for linear unmixing is introduced. The approach employs parallelization of recent techniques such as Nonnegative Matrix Factorization (NMF). A theoretical introduction to NMF is presented and its computational costs are discussed. Next, the design of parallel algorithms that minimize the data distribution and communication overhead is shown. The experimental results support the claim that the distributed algorithms provide a significant computational advantage compared to their sequential counterparts.
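The abstract's core idea, NMF-based linear unmixing with pixel-parallel updates and small reduction messages, can be illustrated with a short sketch. This is not the paper's implementation: the function name unmix_nmf, the block count, and the synthetic data are assumptions, and the "workers" are simulated by a plain loop over row blocks. It uses the standard Lee-Seung multiplicative updates and shows why only small K x K and K x B matrices would need to be communicated in a distributed run.

```python
# Minimal sketch (assumed names, not the paper's code): NMF-based linear
# unmixing with a pixel-parallel update scheme simulated by row blocks.
import numpy as np

def unmix_nmf(X, n_endmembers, n_iter=200, n_blocks=4, eps=1e-9, seed=0):
    """Factor X (pixels x bands) ~= A @ S with A, S >= 0.

    A : (pixels x n_endmembers) abundance estimates
    S : (n_endmembers x bands)  endmember spectra estimates
    Each row-block update of A touches only that block of X and A, so in a
    distributed setting each worker could hold one block; only the small
    K x K and K x B partial sums for the S update would be communicated.
    """
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    A = rng.random((n_pixels, n_endmembers))
    S = rng.random((n_endmembers, n_bands))
    blocks = np.array_split(np.arange(n_pixels), n_blocks)  # stand-in for workers

    for _ in range(n_iter):
        # Abundance update: embarrassingly parallel over pixel blocks.
        SSt = S @ S.T                                   # small K x K, shared by all blocks
        for idx in blocks:                              # each "worker" updates its rows
            A[idx] *= (X[idx] @ S.T) / (A[idx] @ SSt + eps)

        # Endmember update: small per-block partial sums, then a reduction.
        AtX = np.zeros((n_endmembers, n_bands))
        AtA = np.zeros((n_endmembers, n_endmembers))
        for idx in blocks:
            AtX += A[idx].T @ X[idx]                    # K x B partial sum
            AtA += A[idx].T @ A[idx]                    # K x K partial sum
        S *= AtX / (AtA @ S + eps)                      # Lee-Seung multiplicative update
    return A, S

if __name__ == "__main__":
    # Tiny synthetic cube: 3 endmembers mixed into 500 pixels over 50 bands.
    rng = np.random.default_rng(1)
    S_true = rng.random((3, 50))
    A_true = rng.dirichlet(np.ones(3), size=500)
    X = A_true @ S_true
    A, S = unmix_nmf(X, n_endmembers=3)
    print("relative reconstruction error:",
          np.linalg.norm(X - A @ S) / np.linalg.norm(X))
```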

Original language: English
Title of host publication: Imaging Spectrometry XVII
Volume: 8515
DOIs: 10.1117/12.928666
State: Published - 1 Dec 2012
Event: Imaging Spectrometry XVII - San Diego, CA, United States
Duration: 13 Aug 2012 → 14 Aug 2012

Other

Other: Imaging Spectrometry XVII
Country: United States
City: San Diego, CA
Period: 13/08/12 → 14/08/12

Fingerprint

Hyperspectral Data
Distributed Algorithms
Parallel Algorithms
Pattern Recognition
Feature Extraction
Computational Cost
High Performance Computing
Non-negative Matrix Factorization
Data Communication
Data Distribution
Parallelization
Execution Time

Keywords

  • Computer clusters
  • Distributed computing
  • Hyperspectral data
  • Linear unmixing
  • Nonnegative matrix factorization

Cite this

@inproceedings{e522b1993c9a4a5ea3d9593b9bde631e,
title = "Linear unmixing based feature extraction for hyperspectral data in a high performance computing environment",
abstract = "Most hyperspectral processing techniques suffer from high execution times. They are iterative, with each step's complexity dependent on the size of the data. As the size of the data continues to increase, the computational cost of processing it will only go up. In this paper, a new group of distributed algorithms for linear unmixing is introduced. The approach employs parallelization of recent techniques such as Nonnegative Matrix Factorization (NMF). A theoretical introduction to NMF is presented and its computational costs are discussed. Next, the design of parallel algorithms that minimize the data distribution and communication overhead is shown. The experimental results support the claim that the distributed algorithms provide a significant computational advantage compared to their sequential counterparts.",
keywords = "Computer clusters, Distributed computing, Hyperspectral data, Linear unmixing, Nonnegative matrix factorization",
author = "Stefan Robila",
year = "2012",
month = "12",
day = "1",
doi = "10.1117/12.928666",
language = "English",
isbn = "9780819492326",
volume = "8515",
booktitle = "Imaging Spectrometry XVII",

}

Robila, S 2012, Linear unmixing based feature extraction for hyperspectral data in a high performance computing environment. in Imaging Spectrometry XVII. vol. 8515, 85150M, Imaging Spectrometry XVII, San Diego, CA, United States, 13/08/12. https://doi.org/10.1117/12.928666

Linear unmixing based feature extraction for hyperspectral data in a high performance computing environment. / Robila, Stefan.

Imaging Spectrometry XVII. Vol. 8515, 2012. 85150M.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review

TY - GEN

T1 - Linear unmixing based feature extraction for hyperspectral data in a high performance computing environment

AU - Robila, Stefan

PY - 2012/12/1

Y1 - 2012/12/1

N2 - Most hyperspectral processing techniques suffer from high execution times. They are iterative, with each step's complexity dependent on the size of the data. As the size of the data continues to increase, the computational cost of processing it will only go up. In this paper, a new group of distributed algorithms for linear unmixing is introduced. The approach employs parallelization of recent techniques such as Nonnegative Matrix Factorization (NMF). A theoretical introduction to NMF is presented and its computational costs are discussed. Next, the design of parallel algorithms that minimize the data distribution and communication overhead is shown. The experimental results support the claim that the distributed algorithms provide a significant computational advantage compared to their sequential counterparts.

AB - Most hyperspectral processing techniques suffer from high execution times. They are iterative, with each step's complexity dependent on the size of the data. As the size of the data continues to increase, the computational cost of processing it will only go up. In this paper, a new group of distributed algorithms for linear unmixing is introduced. The approach employs parallelization of recent techniques such as Nonnegative Matrix Factorization (NMF). A theoretical introduction to NMF is presented and its computational costs are discussed. Next, the design of parallel algorithms that minimize the data distribution and communication overhead is shown. The experimental results support the claim that the distributed algorithms provide a significant computational advantage compared to their sequential counterparts.

KW - Computer clusters

KW - Distributed computing

KW - Hyperspectral data

KW - Linear unmixing

KW - Nonnegative matrix factorization

UR - http://www.scopus.com/inward/record.url?scp=84872538035&partnerID=8YFLogxK

U2 - 10.1117/12.928666

DO - 10.1117/12.928666

M3 - Conference contribution

SN - 9780819492326

VL - 8515

BT - Imaging Spectrometry XVII

ER -