Research Project
Strategic Project - LA 8 - 2011-2012
Funder
Authors
Publications
Parallel hyperspectral unmixing method on GPU
Publication . Nascimento, Jose; Silva, Vítor; Bioucas-Dias, José M.; Alves, Rodríguez
Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPUs). The method is based on vertex component analysis (VCA), a geometry-based, highly parallelizable method. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral datasets without any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors, up to 24 times.
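The geometric idea behind VCA can be sketched as follows: because mixed pixels are convex combinations of the endmember signatures, the pixel with the largest projection onto a direction orthogonal to the endmembers found so far is an extreme (pure) pixel. The NumPy sketch below (function name and simplifications are ours) illustrates only this core projection step; the published VCA additionally performs SNR-dependent dimensionality reduction.

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Illustrative VCA-style endmember extraction (not the full algorithm).

    Y: (bands, pixels) data matrix; p: number of endmembers to extract.
    Iteratively picks the pixel with the largest absolute projection onto
    a direction orthogonal to the endmembers selected so far.
    """
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    E = np.empty((bands, p))
    idx = []
    for i in range(p):
        v = rng.standard_normal(bands)          # random probe direction
        if i > 0:
            # project the probe onto the orthogonal complement
            # of span(E[:, :i]) so already-found endmembers score 0
            Q, _ = np.linalg.qr(E[:, :i])
            v = v - Q @ (Q.T @ v)
        k = int(np.argmax(np.abs(v @ Y)))       # most extreme pixel
        idx.append(k)
        E[:, i] = Y[:, k]
    return E, idx
```

On noise-free simulated mixtures that contain pure pixels, this selects exactly the simplex vertices, which is why the approach needs no prior knowledge of the constituent spectra.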
Hyperspectral imagery framework for unmixing and dimensionality estimation
Publication . Nascimento, Jose; Bioucas-Dias, José M.
In hyperspectral imagery, a pixel typically consists of a mixture of the spectral signatures of reference substances, also called endmembers. Linear spectral mixture analysis, or linear unmixing, aims at estimating the number of endmembers, their spectral signatures, and their abundance fractions.
This paper proposes a framework for hyperspectral unmixing. A blind method (SISAL) is used to estimate the unknown endmember signatures and their abundance fractions. This method solves a non-convex problem through a sequence of augmented Lagrangian optimizations, in which the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The proposed framework simultaneously estimates the number of endmembers present in the hyperspectral image with an algorithm based on the minimum description length (MDL) principle. Experimental results on both synthetic and real hyperspectral data demonstrate the effectiveness of the proposed algorithm.
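The "hard constraint replaced by a soft constraint" idea can be illustrated in isolation. The sketch below (our own construction, not the published SISAL algorithm) estimates abundances for known endmembers by gradient descent, replacing the hard positivity constraint with a hinge penalty `lam * sum(max(0, -s))` while keeping sum-to-one via projection:

```python
import numpy as np

def soft_constrained_abundances(Y, E, lam=1.0, lr=0.1, iters=800):
    """Least-squares abundance estimation with soft positivity.

    Y: (bands, pixels) observations; E: (bands, p) known endmembers.
    The hard constraint s >= 0 is relaxed into a hinge penalty,
    the same relaxation idea SISAL applies in its augmented
    Lagrangian iterations. Hypothetical sketch only.
    """
    p, n = E.shape[1], Y.shape[1]
    S = np.full((p, n), 1.0 / p)                  # uniform start
    for _ in range(iters):
        grad = E.T @ (E @ S - Y)                  # data-fit gradient
        grad += lam * np.where(S < 0, -1.0, 0.0)  # hinge subgradient
        S -= lr * grad
        S -= (S.sum(axis=0, keepdims=True) - 1.0) / p  # sum-to-one
    return S
```

Because the penalty is only active where an abundance goes negative, the iterates behave like unconstrained least squares in the interior of the simplex, which is what makes the relaxed problem tractable.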
An unsupervised approach to feature discretization and selection
Publication . J. Ferreira, Artur; Figueiredo, Mário A. T.
Many learning problems require handling high-dimensional data sets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve both the classification accuracy and the memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional data sets. The experimental results on several standard data sets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
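To make the discretize-then-select pipeline concrete, here is a generic unsupervised sketch (our own stand-in, not the authors' exact quantizer or relevance criterion): each feature is discretized by equal-frequency binning, and features are then ranked by the entropy of their discretized values, a simple label-free relevance proxy that discards near-constant features.

```python
import numpy as np

def discretize_and_select(X, bins=4, keep=10):
    """Unsupervised feature discretization + selection sketch.

    X: (instances, features). Each feature is quantized into equal-
    frequency bins; features are ranked by the entropy of the bin
    assignments and the top `keep` are retained.
    """
    n, d = X.shape
    Xq = np.empty((n, d), dtype=int)
    scores = np.empty(d)
    for j in range(d):
        # interior quantile cut points -> equal-frequency bins
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        Xq[:, j] = np.searchsorted(edges, X[:, j])
        counts = np.bincount(Xq[:, j], minlength=bins)
        probs = counts[counts > 0] / n
        scores[j] = -(probs * np.log2(probs)).sum()   # bin entropy
    selected = np.argsort(scores)[::-1][:keep]
    return Xq[:, selected], selected
```

A constant or near-constant feature lands in a single bin, gets entropy close to zero, and is dropped first, which is the kind of unsupervised pruning the abstract describes.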
Organizational Units
Description
Keywords
Contributors
Funders
Funding agency
Fundação para a Ciência e a Tecnologia
Funding programme
6817 - DCRRNI ID
Funding Award Number
PEst-OE/EEI/LA0008/2011