15 results for Spatial Query Processing And Optimization in Reposit


Relevance: 100.00%

Abstract:

A previously developed model is used to numerically simulate real clinical cases of the surgical correction of scoliosis. This model consists of one-dimensional finite elements with spatial deformation in which (i) the column is represented by its axis; (ii) the vertebrae are assumed to be rigid; and (iii) the deformability of the column is concentrated in springs that connect the successive rigid elements. The metallic rods used for the surgical correction are modeled by beam elements with linear elastic behavior. To obtain the forces at the connections between the metallic rods and the vertebrae, geometrically non-linear finite element analyses are performed. The tightening sequence determines the magnitude of the forces applied to the patient's column, and it is desirable to keep those forces as small as possible. In this study, a Genetic Algorithm optimization is applied to this model in order to determine the sequence that minimizes the corrective forces applied during the surgery. This amounts to finding the optimal permutation of the integers 1, ..., n, n being the number of vertebrae involved. As such, we are faced with a combinatorial optimization problem isomorphic to the Traveling Salesman Problem. The fitness evaluation requires one compute-intensive Finite Element Analysis per candidate solution and, thus, a parallel implementation of the Genetic Algorithm is developed.
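The optimization loop described above can be sketched in miniature. The sketch below is a generic permutation GA with order crossover and swap mutation; the real fitness function (one finite element analysis per candidate) is replaced by a hypothetical toy cost, and all parameter values are illustrative assumptions, not the paper's settings.

```python
import random

def order_crossover(rng, p1, p2):
    """OX crossover: keep a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    middle = p1[a:b]
    fill = [g for g in p2 if g not in middle]
    return fill[:a] + middle + fill[a:]

def ga_min_sequence(fitness, n, pop_size=30, generations=200, seed=0):
    """Minimise `fitness` over permutations of 0..n-1 (tightening order)."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = order_crossover(rng, p1, p2)
            if rng.random() < 0.2:            # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Hypothetical stand-in for the per-candidate finite element analysis.
toy_cost = lambda seq: sum(abs(v - i) for i, v in enumerate(seq))
best = ga_min_sequence(toy_cost, n=8)
```

In the parallel version described in the abstract, the `fitness` calls inside each generation are the natural unit to distribute across workers, since each evaluation is an independent FEA run.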

Relevance: 100.00%

Abstract:

The main objectives of this work were the development of one-component polyurethane foams with improved flame-resistance properties (B1 and B2), applied by gun or by adaptor/tube, and the optimization of a one-component winter polyurethane foam applied by gun. The work is divided into two distinct projects: i. The first project consisted of developing one-component foams with flame-resistance properties (classified as B1 and B2 according to the German standard DIN 4102), applied by gun (GWB1 and GWB2) or by adaptor/tube (AWB), using modified aromatic polyester polyols and halogenated flame-retardant additives. These foams should also exhibit acceptable properties at low temperatures. After testing several formulations, it was possible to develop an AWB2 foam with only 3.3% polyester polyol in the prepolymer and with properties equivalent to those of the best commercial foam, even at 5/-10 (can temperature/foam curing temperature, in °C), as well as a flame height of only 11 cm. From two formulations (AWB2) that passed the B2 test, a GWB2 and a GWB1 foam were also obtained, with properties equivalent to those of the best competitor foam at -10/-10 and at 23/5, respectively, although they were not submitted to the B2 and B1 tests after the modifications made. ii. The second project consisted of optimizing a one-component winter polyurethane foam applied by gun (GWB3). The initial foam had glass-bubble problems when dispensed from a full can, and this problem had to be overcome. It was solved by decreasing the LPG/DME ratio through an increase of the DME volume percentage in the prepolymer to 14%; however, the dimensional stability worsened somewhat.
The reagent FCA 400 was removed from the previous formulation (6925) in an attempt to reduce the cost of the foam, yielding an acceptable foam at 23/23 and at 5/5, with a 4% reduction in production cost and a 5.5% reduction in cost per litre of dispensed foam compared with its predecessor. Finally, the influence of the concentration of different surfactants in formulation 6925 was evaluated: the cell structure of the foam improved at higher surfactant concentrations, an effect more noticeable at lower temperatures (5/5). Of the surfactants studied, B 8871 showed the best performance at 5/5 at the lowest concentration, making it the best surfactant, whereas Struksilon 8003 proved the least suitable for this specific formulation, presenting the worst overall results. The surfactants L-5351, L-5352 and B 8526 are also unsuitable for this formulation, since the resulting foams exhibit cell collapse, especially at 5/5; for L-5351 and L-5352 this behaviour worsens at higher concentrations. In each project, benchmark tests were also carried out on selected commercial foams, with the main objective of comparing all the results of the foams developed in both projects with competitor foams.

Relevance: 100.00%

Abstract:

We present the modeling efforts on antenna design and frequency selection to monitor brain temperature during prolonged surgery using noninvasive microwave radiometry. A tapered log-spiral antenna design is chosen for its wideband characteristics that allow higher power collection from deep brain. Parametric analysis with the software HFSS is used to optimize antenna performance for deep brain temperature sensing. Radiometric antenna efficiency (eta) is evaluated in terms of the ratio of power collected from the brain to the total power received by the antenna. Anatomical information extracted from several adult computed tomography scans is used to establish design parameters for constructing an accurate layered 3-D tissue phantom. This head phantom includes separate brain and scalp regions, with tissue-equivalent liquids circulating at independent temperatures on either side of an intact skull. The optimized frequency band is 1.1-1.6 GHz, producing an average antenna efficiency of 50.3% from a two-turn log-spiral antenna. The entire sensor package is contained in a lightweight and low-profile 2.8 cm diameter by 1.5 cm high assembly that can be held in place over the skin with an electromagnetic interference shielding adhesive patch. The calculated radiometric equivalent brain temperature tracks within 0.4 °C of the measured brain phantom temperature when the brain phantom is lowered 10 °C and then returned to the original temperature (37 °C) over a 4.6-h experiment. The numerical and experimental results demonstrate that the optimized 2.5-cm log-spiral antenna is well suited for the noninvasive radiometric sensing of deep brain temperature.
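As a rough illustration of how the reported efficiency figure enters the temperature estimate, a common first-order radiometry model treats the antenna reading as an efficiency-weighted average of brain and non-brain (scalp) contributions. This two-compartment model and the scalp temperature below are simplifying assumptions for illustration, not the paper's full sensor model.

```python
def radiometric_temperature(eta, t_brain, t_other):
    """First-order model: the radiometer reading mixes brain and scalp
    contributions in proportion to the antenna efficiency eta."""
    return eta * t_brain + (1.0 - eta) * t_other

def estimate_brain_temp(t_measured, eta, t_other):
    """Invert the two-compartment model to recover brain temperature."""
    return (t_measured - (1.0 - eta) * t_other) / eta

eta = 0.503                       # average efficiency reported in the text
reading = radiometric_temperature(eta, t_brain=37.0, t_other=33.0)
recovered = estimate_brain_temp(reading, eta, t_other=33.0)   # ≈ 37.0
```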

Relevance: 100.00%

Abstract:

The increasing integration of wind energy in power systems can be responsible for the occurrence of over-generation, especially during off-peak periods. This paper presents a dedicated methodology to identify and quantify the occurrence of this over-generation and to evaluate some of the solutions that can be adopted to mitigate this problem. The methodology is applied to the Portuguese power system, in which wind energy is expected to represent more than 25% of the installed capacity in the near future. The results show that the pumped-hydro units will not provide enough energy storage capacity and, therefore, wind curtailments are expected to occur in the Portuguese system. Additional energy storage devices can be implemented to offset the wind energy curtailments. However, the investment analysis performed shows that they are not economically viable, due to the high capital costs presently involved.
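The hour-by-hour accounting such a methodology rests on can be sketched as follows. The load and wind series, the must-run level, the storage capacity and the round-trip efficiency are all invented illustrative numbers, not data from the Portuguese system.

```python
def curtailed_energy(load, wind, must_run, storage_cap, eff=0.75):
    """Hour-by-hour sketch: surplus generation is pumped into storage
    (round-trip losses folded into `eff`); what does not fit is curtailed.
    Deficit hours simply drain the store (a deliberate simplification)."""
    storage, curtailed = 0.0, []
    for l, w in zip(load, wind):
        surplus = w + must_run - l
        if surplus > 0:
            stored = min(surplus * eff, storage_cap - storage)
            storage += stored
            curtailed.append(surplus - stored / eff)
        else:
            storage = max(0.0, storage + surplus)
            curtailed.append(0.0)
    return curtailed, storage

# Invented off-peak scenario (MW): wind stays high while load dips in hour 2.
load = [40, 35, 30, 45]
wind = [20, 25, 28, 15]
curt, soc = curtailed_energy(load, wind, must_run=15, storage_cap=5)
```

In this toy run the small store absorbs the first surplus hour entirely, but the deeper surplus in hour 2 exceeds the remaining capacity and shows up as curtailed energy.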

Relevance: 100.00%

Abstract:

In this work, plasticizer agents were incorporated in a chitosan-based formulation as a strategy to improve the fragile structure of chitosan-based materials. Three different plasticizers, ethylene glycol, glycerol and sorbitol, were blended with chitosan to prepare 3D dense chitosan specimens. The mechanical, microstructural, physical and biocompatibility behavior of the obtained structures was assessed. The results revealed that, of the different specimens prepared, the blend of chitosan with glycerol has superior mechanical properties and good biological behavior, making this chitosan-based formulation a good candidate for building robust chitosan structures for the construction of bioabsorbable orthopedic implants.

Relevance: 100.00%

Abstract:

Objectives - Review available guidance for quality assurance (QA) in mammography and discuss its contribution to harmonising practices worldwide. Methods - A literature search was performed on different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared for type of guidance (clinical/technical), technology and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Results - Fourteen protocols (targeting conventional and digital mammography) were reviewed. All included recommendations for testing the acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and of testing the Automatic Exposure Control (AEC) system. Recommended tests for the assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on the assessment of low-contrast detection, spatial resolution and noise. QC of the image display is recommended following the American Association of Physicists in Medicine guidelines. Conclusions - The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests, despite variations in detail and methodologies. Studies reporting QA data should provide detail on the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise the tests depending on available technology and resources.

Relevance: 100.00%

Abstract:

Coronary artery disease (CAD) is currently one of the most prevalent diseases in the world population, and calcium deposits in coronary arteries are one direct risk factor. These can be assessed by the calcium score (CS) application, available via a computed tomography (CT) scan, which gives an accurate indication of the development of the disease. However, the ionising radiation applied to patients is high. This study aimed to optimise the acquisition protocol in order to reduce the radiation dose and explain the flow of procedures to quantify CAD. The main differences in the clinical results, when automated or semiautomated post-processing is used, will be shown, and the epidemiology, imaging, risk factors and prognosis of the disease described. The software steps and the values that allow the risk of developing CAD to be predicted will be presented. A 64-row multidetector CT scanner with dual source and two phantoms (pig hearts) were used to demonstrate the advantages and disadvantages of the Agatston method. The tube energy was balanced. Two measurements were obtained in each of the three experimental protocols (64, 128, 256 mAs). Considerable changes appeared between the CS values relating to the protocol variation. The predefined standard protocol provided the lowest radiation dose (0.43 mGy). This study found that the variation in the radiation dose between protocols, taking into consideration the dose control systems attached to the CT equipment and image quality, was not sufficient to justify changing the default protocol provided by the manufacturer.
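The Agatston method mentioned above scores each calcified lesion by its area times a density factor derived from its peak attenuation (1 for 130-199 HU, 2 for 200-299, 3 for 300-399, 4 for 400 HU and above; regions below 130 HU do not count). A minimal sketch, with invented lesion measurements:

```python
def density_weight(peak_hu):
    """Agatston density factor from the lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: (area_mm2, peak_hu) pairs, one per lesion per slice.
    The score sums area x density weight over lesions at or above 130 HU."""
    return sum(area * density_weight(hu) for area, hu in lesions if hu >= 130)

# Hypothetical lesions: the last one is below the 130 HU threshold.
lesions = [(4.0, 150), (2.5, 320), (1.0, 90)]
score = agatston_score(lesions)   # 4.0*1 + 2.5*3 = 11.5
```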

Relevance: 100.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures.
As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward. In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance.
IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR).
Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications—namely, signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55]. We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm.
This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
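As a minimal concrete instance of the linear mixing model and the constrained least-squares idea discussed above, the two-endmember case has a closed form: with full additivity, pixel ≈ a·m1 + (1−a)·m2, so the abundance a follows from a one-dimensional projection clipped to [0, 1]. The signatures below are synthetic illustrations; real unmixing handles many endmembers, noise, and unknown signatures.

```python
def unmix_two_endmembers(pixel, m1, m2):
    """Closed-form fully constrained unmixing for two endmembers:
    pixel ≈ a*m1 + (1-a)*m2 with 0 <= a <= 1 (nonnegativity + full
    additivity). Projects (pixel - m2) onto (m1 - m2), then clips."""
    num = sum((p - b) * (s - b) for p, s, b in zip(pixel, m1, m2))
    den = sum((s - b) ** 2 for s, b in zip(m1, m2))
    return min(1.0, max(0.0, num / den))

# Two synthetic 4-band signatures and a noise-free 30/70 mixture of them.
m1 = [0.9, 0.7, 0.2, 0.1]
m2 = [0.1, 0.2, 0.6, 0.8]
pixel = [0.3 * s + 0.7 * b for s, b in zip(m1, m2)]
a = unmix_two_endmembers(pixel, m1, m2)   # ≈ 0.3
```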

Relevance: 100.00%

Abstract:

This dissertation aims to design and implement a fault-tolerant control system for the experimental irrigation canal of the University of Évora, using a model implemented in MATLAB/SIMULINK®. To meet this challenge, several fault-diagnosis techniques were analysed, and neural-network-based techniques were chosen for the development of a fault detection and isolation system for the irrigation canal, independently of the type of control system used. Neural networks were thus the nonlinear processors employed; they are the most recommended in situations where process data are abundant, because they learn from examples and are supported by statistical and optimization theories, focusing not only on signal processing but also on expanding the horizons of that processing. The emphasis of neural-network models is on their dynamics, stability and behaviour. The main objectives of the research work behind this dissertation were therefore the development of neural-network models that best represent the dynamics of the irrigation canal, in order to obtain a fault detection system that compares the values produced by the models and by the process. From this difference, which yields a residual, it is possible to develop both the detection and the isolation systems based on neural networks, thus enabling a fault-tolerant control system comprising the detection, isolation/diagnosis and reconfiguration modules of the irrigation canal. In summary, this dissertation developed a system that allows the process to be reconfigured when faults occur, significantly improving the performance of the irrigation canal.
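The detection step described above (compare the model output with the process measurement and threshold the residual) can be sketched generically. The neural canal model itself is omitted here; the water-level series and the threshold are invented illustrative numbers.

```python
def detect_fault(measured, predicted, threshold):
    """Residual-based detection: flag the time steps where the gap between
    the process measurement and the model prediction exceeds a threshold."""
    residuals = [m - p for m, p in zip(measured, predicted)]
    alarms = [abs(r) > threshold for r in residuals]
    return residuals, alarms

# Toy water-level series (m): the 'model' tracks well until a fault at t=3.
predicted = [1.00, 1.02, 1.05, 1.06, 1.08]
measured  = [1.01, 1.03, 1.04, 1.30, 1.35]
res, alarms = detect_fault(measured, predicted, threshold=0.1)
```

Isolation then works on the pattern of alarms across several such residuals (one per modelled sensor or gate), and reconfiguration acts on the isolated fault.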

Relevance: 100.00%

Abstract:

In this review paper, different designs based on stacked p-i'-n-p-i-n heterojunctions are presented and compared with single p-i-n sensing structures. The imagers utilise self-field-induced depletion layers for light detection and a modulated laser beam for sequential readout. The effect of the sensing element structure, cell configuration (single or tandem), and light source properties (intensity and wavelength) is correlated with the sensor output characteristics (light-to-dark sensitivity, spatial resolution, linearity and S/N ratio). The readout frequency is optimized, showing that scan speeds of up to 10^4 lines per second can be achieved without degradation in resolution. Multilayered p-i'-n-p-i-n heterostructures can also be used as wavelength-division multiplexing/demultiplexing devices in the visible range. Here the sensor element faces the modulated light from different input colour channels, each one with a specific wavelength and bit rate. By reading out the photocurrent at the appropriate applied bias, the information is multiplexed or demultiplexed and can be transmitted or recovered again. Electrical models are presented to support the sensing methodologies.

Relevance: 100.00%

Abstract:

Wastewater from the cork processing industry presents high levels of organic and phenolic compounds, such as tannins, with a low biodegradability and a significant toxicity. These compounds are not readily removed by conventional municipal wastewater treatment, which is largely based on primary sedimentation followed by biological treatment. The purpose of this work is to study the biodegradability of different cork wastewater fractions, obtained through membrane separation, in order to assess their potential for biological treatment, with a view to their valorisation through tannin recovery, which could be applied in other industries. Various ultrafiltration and nanofiltration membranes were used, with molecular weight cut-offs (MWCO) ranging from 0.125 to 91 kDa. The wastewater and the different permeated fractions were analyzed in terms of Total Organic Carbon (TOC), Chemical Oxygen Demand (COD), Biochemical Oxygen Demand (BOD), Total Phenols (TP), tannins, color, pH and conductivity. Results for the wastewater showed that it is characterized by a high organic content (670.5-1056.8 mg TOC/L, 2285-2604 mg COD/L, 1000-1225 mg BOD/L), a relatively low biodegradability (0.35-0.38 for BOD5/COD and 0.44-0.47 for BOD20/COD) and a high content of phenols (360-410 mg tannic acid/L) and tannins (250-270 mg tannic acid/L). The results for the wastewater fractions showed a general decrease in the pollutant content of the permeates, and an increase in their biodegradability, with decreasing membrane MWCO. In particular, the permeated fraction from the 3.8 kDa MWCO membrane presented a favourable biodegradability index (0.8) and a minimized phenol toxicity, enabling it to undergo biological treatment and thus to be treated in a municipal wastewater treatment plant. Also, from the perspective of valorisation, the rejected fraction obtained through this membrane MWCO may have a significant potential for tannin recovery.
Permeated fractions from membranes with MWCO lower than 3.8 kDa presented a particularly significant decline in organic matter and phenols, enabling these permeates to be reused in cork processing and thus representing an interesting zero-discharge perspective for the cork industry, with evident environmental and economic advantages. (C) 2010 Elsevier Ltd. All rights reserved.
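The biodegradability index used throughout is simply the BOD/COD ratio. A quick check with the wastewater figures reported above (the ≥0.5 treatability cut-off below is a common rule of thumb, not a value from this study):

```python
def biodegradability_index(bod, cod):
    """BOD/COD ratio: higher values indicate easier biological treatment."""
    return bod / cod

raw = biodegradability_index(bod=1000, cod=2604)   # raw wastewater, ~0.38
permeate_3_8_kda = 0.8                             # index reported for the 3.8 kDa fraction
treatable = permeate_3_8_kda >= 0.5                # rule-of-thumb cut-off
```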

Relevance: 100.00%

Abstract:

Organic waste is a rich substrate for microbial growth, and because of that, workers from the waste industry are at higher risk of exposure to bioaerosols. This study aimed to assess fungal contamination in two solid waste management plants. Air samples from the two plants were collected through an impaction method. Surface samples were also collected by swabbing surfaces of the same indoor sites. All collected samples were incubated at 27 °C for 5 to 7 d. After lab processing and incubation of the collected samples, quantitative and qualitative results were obtained, with identification of the isolated fungal species. Air samples were also subjected to molecular analysis by real-time polymerase chain reaction (RT-PCR), using an impinger method, to measure DNA of the Aspergillus flavus complex and Stachybotrys chartarum. Assessment of particulate matter (PM) was also conducted with portable direct-reading equipment. Particle concentration was measured at five different sizes (PM0.5, PM1, PM2.5, PM5, PM10). With respect to the waste sorting plant, the three species most frequently isolated in air and on surfaces were A. niger (73.9%; 66.1%), A. fumigatus (16%; 13.8%) and A. flavus (8.7%; 14.2%). In the incineration plant, the most prevalent species detected in air samples were Penicillium sp. (62.9%), A. fumigatus (18%) and A. flavus (6%), while the most frequently isolated in surface samples were Penicillium sp. (57.5%), A. fumigatus (22.3%) and A. niger (12.8%). Stachybotrys chartarum and other toxinogenic strains from the A. flavus complex were not detected. The most common PM sizes obtained were PM10 and PM5 (inhalable fraction). Since waste is the main internal fungal source in the analyzed settings, preventive and protective measures need to be maintained to avoid worker exposure to fungi and their metabolites.

Relevance: 100.00%

Abstract:

With the increasing complexity of current networks, the need for Self-Organizing Networks (SON), which aim to automate most of the associated radio planning and optimization tasks, has become evident. Within SON, this paper aims to optimize the Neighbour Cell List (NCL) for Long Term Evolution (LTE) evolved NodeBs (eNBs). An algorithm composed of three decision methods was developed: distance-based, Radio Frequency (RF) measurement-based and Handover (HO) statistics-based. The distance-based decision proposes a new NCL taking into account the eNB location and interference tiers, based on the quadrants method. The last two algorithms consider signal strength measurements and HO statistics, respectively; they also define a ranking for each eNB and for neighbour relation addition/removal based on user-defined constraints. The algorithms were developed and implemented on top of an existing professional radio network optimization tool. Several case studies were produced using real data from a Portuguese LTE mobile operator. © 2014 IEEE.
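The distance-based decision can be illustrated in isolation: rank candidate eNBs by distance to the target site and keep the closest ones as the proposed NCL. This sketch ignores the interference tiers and the quadrant handling the paper describes, and the site names and coordinates are hypothetical.

```python
import math

def distance_based_ncl(target, enbs, max_neighbours=4):
    """Rank candidate eNBs by distance to the target site and keep the
    closest as the proposed Neighbour Cell List (distance decision only)."""
    tx, ty = enbs[target]
    ranked = sorted(
        (name for name in enbs if name != target),
        key=lambda n: math.hypot(enbs[n][0] - tx, enbs[n][1] - ty),
    )
    return ranked[:max_neighbours]

# Hypothetical site coordinates on a km grid.
sites = {"eNB-A": (0, 0), "eNB-B": (1, 0), "eNB-C": (0, 3),
         "eNB-D": (5, 5), "eNB-E": (1, 1)}
ncl = distance_based_ncl("eNB-A", sites, max_neighbours=3)
```

The RF measurement-based and HO statistics-based decisions would then promote or demote entries in this candidate list according to measured signal strength and handover success statistics.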

Relevance: 100.00%

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method, SISAL, on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.

Relevance: 100.00%

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.