Search Results

Now showing 1 - 2 of 2
  • Parallel GPU architecture for hyperspectral unmixing based on augmented Lagrangian method
    Publication . Sevilla, Jorge; Nascimento, Jose
    Hyperspectral imaging, which comprises hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, has become one of the main topics in remote sensing applications, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory.
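    The abstract above rests on the linear mixing model: each observed pixel is a convex combination of endmember spectra. A minimal pure-Python sketch of that model (illustrative spectra and function names are my own; this is the model SISAL inverts, not the SISAL algorithm itself):

    ```python
    def mix_pixel(endmembers, abundances):
        """Linear mixing model: observed spectrum is sum_k a_k * m_k,
        with abundances a_k non-negative and summing to one."""
        n_bands = len(endmembers[0])
        return [sum(a * m[b] for a, m in zip(abundances, endmembers))
                for b in range(n_bands)]

    def unmix_two(pixel, m1, m2, band=0):
        """For two endmembers with sum-to-one abundances, the pixel lies on
        the segment between m1 and m2, so a single band suffices to recover
        the abundance of m1 from p_b = a*m1_b + (1-a)*m2_b."""
        return (pixel[band] - m2[band]) / (m1[band] - m2[band])

    # Two hypothetical endmember spectra over three bands (made-up values).
    soil  = [0.80, 0.60, 0.40]
    water = [0.10, 0.20, 0.30]

    pixel = mix_pixel([soil, water], [0.75, 0.25])  # 75% soil, 25% water
    a_soil = unmix_two(pixel, soil, water)          # recovers ~0.75
    ```

    SISAL addresses the much harder case where the endmember spectra themselves are unknown and no pure pixel is present, but the forward model it inverts is the one sketched here.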
  • Hyperspectral image reconstruction from random projections on GPU
    Publication . Sevilla, Jorge; Martin, Gabriel; Nascimento, Jose; Bioucas-Dias, José
    Hyperspectral data compression and dimensionality reduction have received considerable interest in recent years due to the high spectral resolution of these images. In contrast to conventional dimensionality reduction schemes, the spectral compressive acquisition method (SpeCA) performs dimensionality reduction based on random projections. The SpeCA methodology has applications in hyperspectral compressive sensing and also in dimensionality reduction. Due to the extremely large volumes of data collected by imaging spectrometers, high performance computing architectures are needed for the compression of high dimensional hyperspectral data in real-time constrained applications. In this paper, a parallel implementation of SpeCA on graphics processing units (GPUs) using the compute unified device architecture (CUDA) is proposed. The proposed implementation processes the data in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 21 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining its accuracy.
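    The core step this abstract parallelizes is a per-pixel random projection: each pixel spectrum is compressed to a few measurements with a random matrix shared by all pixels. A minimal pure-Python sketch under that reading (matrix sizes and names are assumptions for illustration, not SpeCA's API):

    ```python
    import random

    def make_projection(m, n_bands, seed=0):
        """Random m x n_bands Gaussian matrix, shared by all pixels."""
        rng = random.Random(seed)
        return [[rng.gauss(0.0, 1.0) for _ in range(n_bands)]
                for _ in range(m)]

    def project_pixel(A, x):
        """Compress one pixel spectrum x to y = A x (m measurements).
        Each pixel is independent of the others, which is what makes the
        step embarrassingly parallel on a GPU: one thread per pixel."""
        return [sum(a * v for a, v in zip(row, x)) for row in A]

    A = make_projection(5, 50)             # 50 bands -> 5 measurements
    pixel = [0.01 * j for j in range(50)]  # a synthetic pixel spectrum
    y = project_pixel(A, pixel)
    ```

    Because the projection is linear, doubling a pixel's spectrum doubles its measurements, and the same matrix A can be staged once in GPU shared memory and reused by every thread, matching the access pattern the abstract describes.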