857 results for Doppler Return Signal, SNR, Signal Estimation, Multi-Component Quadratic
Resumo:
Algae are a potential new biomass for energy production, but there is limited information on their pyrolysis and kinetics. The main aim of this thesis is to investigate the pyrolytic behaviour and kinetics of Chlorella vulgaris, a green microalga. Under pyrolysis conditions, these microalgae show capabilities comparable to terrestrial biomass for energy and chemicals production, and evidence from a preliminary pyrolysis run in an intermediate pilot-scale reactor supports their applicability in existing pyrolysis reactors. Thermal decomposition of Chlorella vulgaris occurs over a wide temperature range (200-550°C) through multi-step reactions. To evaluate the kinetic parameters of the pyrolysis process, two approaches, isothermal and non-isothermal experiments, are applied in this work. A newly developed Pyrolysis-Mass Spectrometry (Py-MS) technique enables isothermal measurements with a short run time and a small sample size. The equipment and procedure are assessed through the kinetic evaluation of the thermal decomposition of polyethylene and lignocellulose-derived materials (cellulose, hemicellulose, and lignin). For the non-isothermal experiments, a Thermogravimetry-Mass Spectrometry (TG-MS) technique is used. Evolved gas analysis provides information on the evolution of volatiles, and these data lead to a multi-component model. The kinetic triplets (apparent activation energy, pre-exponential factor, and apparent reaction order) from the isothermal experiments are 57 kJ/mol, 5.32 (logA, min-1), 1.21-1.45; 9 kJ/mol, 1.75 (logA, min-1), 1.45; and 40 kJ/mol, 3.88 (logA, min-1), 1.45-1.15 for the low, middle, and high temperature regions, respectively.
The kinetic parameters from the non-isothermal experiments vary depending on the different fractions in the algal biomass: apparent activation energies range over 73-207 kJ/mol, pre-exponential factors over 5-16 (logA, min-1), and apparent reaction orders over 1.32-2.00. The kinetic procedures reported in this thesis can be applied to other kinds of biomass and algae in future work.
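The reported kinetic triplets can be turned into rate constants through the Arrhenius equation, k(T) = A·exp(-Ea/RT). The sketch below evaluates the isothermal triplets quoted above at an illustrative temperature; the chosen 350 °C and the direct use of logA and Ea in a single-step rate constant are assumptions for demonstration only, not a claim about the thesis' model.

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def arrhenius_rate(log_a: float, ea_kj_per_mol: float, temp_c: float) -> float:
    """Rate constant k (min^-1) from logA (min^-1) and apparent Ea (kJ/mol)."""
    temp_k = temp_c + 273.15
    return 10 ** log_a * math.exp(-ea_kj_per_mol * 1000 / (R * temp_k))

# Reported triplets for the three temperature regions (Ea in kJ/mol, logA in min^-1)
regions = {"low": (57, 5.32), "middle": (9, 1.75), "high": (40, 3.88)}

for name, (ea, log_a) in regions.items():
    k = arrhenius_rate(log_a, ea, 350.0)  # 350 C is an assumed mid-range temperature
    print(f"{name}: k(350 C) = {k:.3e} min^-1")
```

As expected from the Arrhenius form, the rate constant grows with temperature for any positive activation energy.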
Resumo:
Supported living and retirement villages are becoming a significant option for older adults with impairments, with independence concerns or for forward planning in older age, but evidence as to psychological benefits for residents is sparse. This study examined the hypothesis that the multi-component advantages of moving into a supported and physically and socially accessible “extra care” independent living environment will impact on psychological and functioning measures. Using an observational longitudinal design, 161 new residents were assessed initially and three months later, in comparison to 33 older adults staying in their original homes. Initial group differences were apparent but some reduced after three months. Residents showed improvement in depression, perceived health, aspects of cognitive function, and reduced functional limitations, while controls showed increased functional limitations (worsening). Ability to recall specific autobiographical memories, known to be related to social-problem solving, depression and functioning in social relationships, predicted change in communication limitations, and cognitive change predicted changes in recreational limitations. Change in anxiety and memory predicted change in depression. Findings suggest that older adults with independent living concerns who move to an independent but supported environment can show significant benefits in psychological outcomes and reduction in perceived impact of health on functional limitations in a short period. Targets for focussed rehabilitation are indicated, but findings also validate development of untargeted general supportive environments.
Resumo:
Mineral carbonation in mine tailings is a safe and permanent means of sequestering atmospheric CO2. It is a natural and passive process that requires no particular treatment and is therefore advantageous from an economic point of view. Although the quantity of CO2 that can be sequestered by this process is small at the global scale, under a carbon market mining companies could obtain credits and thereby add value to their tailings. At present, there is little information available to quantify the natural, passive CO2 sequestration potential of mine tailings piles. It is therefore necessary to study the phenomenon in order to understand how the reaction evolves over time and to estimate the quantity of CO2 that can be sequestered naturally in tailings piles. Several research projects have examined the mine tailings of Thetford Mines (Québec, Canada), mainly through laboratory experiments. These studies improved understanding of the carbonation process, but they require validation at a larger scale under real atmospheric conditions. The general objective of this study is to quantify the mineral carbonation process in mine tailings under natural conditions, in order to estimate the quantity of CO2 that can be trapped by this process. The methodology relies on the construction of two experimental tailings cells located on the site of the Black Lake mine (Thetford Mines). The tailings consist mainly of poorly sorted chrysotile and lizardite grains and fibres, with small quantities of antigorite, brucite, and magnetite.
Spatial and temporal observations were made in the cells, covering gas composition and pressure, tailings temperature, volumetric water content, the mineral composition of the tailings, and the chemistry of precipitation and of the leachate from the cells. These measurements revealed a marked depletion of CO2 in the cell gases (< 50 ppm) as well as the precipitation of hydromagnesite in the tailings, suggesting that natural, passive mineral carbonation is a potentially important process in mine tailings. After 4 years of observation, the CO2 sequestration rate in the experimental cells was estimated at 3.5 to 4 kg/m3/yr. These observations allowed the development of a conceptual model of natural, passive mineral carbonation in the experimental cells. In this conceptual model, atmospheric CO2 (~ 400 ppm) dissolves in the hygroscopic water contained in the cells, where weathering of magnesium silicates forms magnesium carbonates. Water saturation in the cells is relatively stable over time and varies between 0.4 and 0.65, higher than the optimal saturation values proposed in the literature, which reduces CO2 transport in the unsaturated zone. Gas-phase CO2 concentrations, together with measurements of gas flow velocity in the cells, suggest that the reaction is most active near the surface and that CO2 diffusion is the dominant transport mechanism in the tailings. A numerical model was used to simulate these coupled processes and to validate the conceptual model against the field observations.
The multiphase, multicomponent reactive transport model MIN3P was used to carry out 1D simulations that include water infiltration through the partially saturated medium, gas diffusion, and reactive mass transport by advection and dispersion. Although the simulated flows and leachate content are fairly close to the field observations, the simulated sequestration rate is 22 times lower than the measured one. In the simulations, carbonates precipitate mainly in the upper part of the cell, near the surface, whereas they were observed throughout the cell. This large discrepancy could be explained by an insufficient supply of CO2 to the cell, which would then be the factor limiting carbonation. Indeed, gas advection was not considered in the simulations; only molecular diffusion was simulated. Gas mobility driven by barometric pressure fluctuations and water infiltration, together with the effect of wind, must play a significant role in supplying the cells with CO2.
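As a rough illustration of what the reported rate implies, the sketch below scales the estimated sequestration rate (3.5-4 kg CO2/m3/yr) to a hypothetical tailings volume over the 4-year observation period; the 1000 m3 volume is an assumed example, not a figure from the study.

```python
def co2_sequestered(rate_kg_per_m3_yr: float, volume_m3: float, years: float) -> float:
    """Total CO2 mass (kg) sequestered, assuming a constant volumetric rate."""
    return rate_kg_per_m3_yr * volume_m3 * years

# Reported rate range: 3.5-4 kg/m3/yr; the 1000 m3 pile volume is hypothetical.
low = co2_sequestered(3.5, 1000, 4)
high = co2_sequestered(4.0, 1000, 4)
print(f"4-year sequestration for a 1000 m3 pile: {low:.0f}-{high:.0f} kg CO2")
```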
Resumo:
In Brazil and around the world, oil companies are looking for new technologies and processes that can increase the oil recovery factor in mature reservoirs in a simple and inexpensive way. Recent research has developed a new process called Gas Assisted Gravity Drainage (GAGD), classified as a gas-injection IOR method. The process, which is undergoing pilot testing in the field, is being extensively studied through physical scale models and laboratory core-floods, owing to its high oil recoveries relative to other gas-injection IOR methods. It consists of injecting gas at the top of a reservoir through horizontal or vertical injector wells and displacing the oil, taking advantage of the natural gravity segregation of fluids, towards a horizontal producer well placed at the bottom of the reservoir. To study this process, a homogeneous reservoir and a multi-component fluid model with characteristics similar to Brazilian light-oil fields were built in a compositional simulator in order to optimize the operational parameters. The process was simulated in GEM (CMG, 2009.10). The operational parameters studied were the gas injection rate, the type of injected gas, and the locations of the injector and producer wells. The presence of water drive in the process was also studied. The results showed that the maximum vertical spacing between the two wells yielded the maximum oil recovery in GAGD, and that the largest injection rate produced the largest recovery factors. This parameter controls the speed of the injected-gas front and determines whether or not the gravitational force dominates the oil recovery process. Natural gas performed better than CO2, and the presence of an aquifer in the reservoir had little influence on the process. The economic analysis found that injecting natural gas is more economically beneficial than injecting CO2.
Resumo:
This thesis proposes an investigation of the memory of the retorno from a critical perspective that takes the "Portuguese-speaking global south" as its historical and conceptual frame of reference. It reflects on the idea of specificity attributed to the colonization carried out by Portugal in Africa, taking into account the contradictions associated with the migratory movement triggered by the violent process of decolonization of Portuguese Africa. Traumatic memories of the retorno expose violence as a constitutive component of colonial reality, but they also reproduce dynamics that allow racism to be concealed. The exploration of the "attic", taken as a metaphor for the family memory kept in the domestic space, accompanies that of the public archive. The analysis of the official archive and of family memory reflects the attempt to establish a dialogue between history and memory, overcoming the logic of antithesis that traditionally opposes them. Using the critically problematic concept of "postmemory", the thesis reflects on the reconfiguration of the relationship with the past in terms of an idea of "inheritance as a task" taken up in the present. The possibility of "saving" the past from the progressive disappearance of witnesses carries a danger of ideological abuse inherent in the process of transmission. The translation of colonial memories of the retorno from the intimate space to the space of public debate shows the relationship between the construction of family mythology and the adoption of Lusotropical discourse. The attempt to define the indecipherable nature of the retornado entails the possibility of sanctioning colonial violence by denying a collective responsibility for colonialism. The thesis presents an attempt to outline the terms of a Portuguese post-colonial question with opaque contours.
This thesis arrives at an open conclusion, articulated around the ever-present risk of appropriation of post-colonial critical categories by the hegemonic ideology. Through (post)colonial (post)memories, the denunciation of racism as a permanent legacy and the reconfiguration of the colonial archive are possible and necessary operations, but not for that reason foregone or free of risk.
Resumo:
Thesis digitized by the Division de la gestion de documents et des archives de l'Université de Montréal
Resumo:
Expressions relating spectral efficiency, power, and Doppler spectrum, are derived for Rayleigh-faded wireless channels with Gaussian signal transmission. No side information on the state of the channel is assumed at the receiver. Rather, periodic reference signals are postulated in accordance with the functioning of most wireless systems. The analysis relies on a well-established lower bound, generally tight and asymptotically exact at low SNR. In contrast with most previous studies, which relied on block-fading channel models, a continuous-fading model is adopted. This embeds the Doppler spectrum directly in the derived expressions, imbuing them with practical significance. Closed-form relationships are obtained for the popular Clarke-Jakes spectrum and informative expansions, valid for arbitrary spectra, are found for the low- and high-power regimes. While the paper focuses on scalar channels, the extension to multiantenna settings is also discussed.
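For reference, the Clarke-Jakes Doppler spectrum mentioned above has the closed form S(f) = 1/(π f_D √(1 − (f/f_D)²)) for |f| < f_D and zero elsewhere. A minimal sketch, with an assumed maximum Doppler shift of 100 Hz, evaluates it and checks numerically that it integrates to unit power:

```python
import math

def clarke_jakes_psd(f: float, f_d: float) -> float:
    """Clarke-Jakes Doppler power spectral density, normalized to unit power."""
    if abs(f) >= f_d:
        return 0.0
    return 1.0 / (math.pi * f_d * math.sqrt(1.0 - (f / f_d) ** 2))

f_d = 100.0  # maximum Doppler shift in Hz (illustrative value)

# Midpoint-rule integration over (-f_d, f_d); midpoints avoid the band-edge poles.
n = 100_000
fs = [-f_d + 2 * f_d * (k + 0.5) / n for k in range(n)]
power = sum(clarke_jakes_psd(f, f_d) for f in fs) * (2 * f_d / n)
print(f"integrated power = {power:.3f}")
```

The characteristic "bathtub" shape (poles at ±f_D) is what distinguishes this spectrum from the flat spectrum implicitly assumed by block-fading models.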
Resumo:
The Electrohysterogram (EHG) is a new instrument for pregnancy monitoring. It measures the uterine muscle electrical signal, which is closely related to uterine contractions. The EHG is described as a viable alternative to, and a more precise instrument than, the method currently most widely used for describing uterine contractions: the external tocogram. The EHG has also been indicated as a promising tool in the assessment of preterm delivery risk. This work intends to contribute towards the characterization of the EHG through an inventory of its components, which are:
• Contractions;
• Labor contractions;
• Alvarez waves;
• Fetal movements;
• Long Duration Low Frequency Waves.
The instruments used for cataloguing were: parametric and non-parametric spectral analysis, energy estimators, time-frequency methods, and the tocogram annotated by expert physicians. The EHG and respective tocograms were obtained from the Icelandic 16-electrode Electrohysterogram Database. 288 components were classified. No component database of this type was previously available for consultation. A spectral analysis and power estimation module was added to Uterine Explorer, an EHG analysis software package developed at FCT-UNL. The importance of this component database lies in the need to improve the understanding of the EHG, which is a relatively complex signal, as well as in contributing towards the detection of preterm birth. Preterm birth accounts for 10% of all births and is one of the most relevant obstetric conditions. Despite the technological and scientific advances in perinatal medicine, prematurity is the major cause of neonatal death in developed countries. Although various risk factors, such as previous preterm births, infection, uterine malformations, multiple gestation and a short uterine cervix in the second trimester, have been associated with this condition, its etiology remains unknown [1][2][3].
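As a minimal illustration of the non-parametric spectral analysis used for cataloguing, the sketch below computes a windowed periodogram of a synthetic, contraction-like signal. The 20 Hz sampling rate and the 0.4 Hz tone are assumptions chosen for demonstration, not parameters of the Icelandic database or of Uterine Explorer.

```python
import numpy as np

def periodogram(x: np.ndarray, fs: float):
    """Non-parametric PSD estimate of a real signal via a Hann-windowed FFT."""
    n = len(x)
    window = np.hanning(n)
    xw = (x - x.mean()) * window
    spec = np.abs(np.fft.rfft(xw)) ** 2 / (fs * np.sum(window ** 2))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs, spec

# Synthetic contraction-like burst: EHG energy is concentrated below ~1 Hz.
fs = 20.0  # Hz, an assumed sampling rate for illustration
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 0.4 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = periodogram(x, fs)
peak = freqs[np.argmax(psd)]
print(f"dominant frequency = {peak:.2f} Hz")
```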
Resumo:
Autonomic control of heart rate variability and the central location of vagal preganglionic neurones (VPN) were examined in the rattlesnake (Crotalus durissus terrificus), in order to determine whether respiratory sinus arrhythmia (RSA) occurred in a similar manner to that described for mammals. Resting ECG signals were recorded in undisturbed snakes using miniature datalogging devices, and the presence of oscillations in heart rate (f(H)) was assessed by power spectral analysis (PSA). This mathematical technique provides a graphical output that enables the estimation of cardiac autonomic control by measuring periodic changes in the heart beat interval. At f(H) above 19 min(-1), spectra were mainly characterised by low-frequency components, reflecting mainly adrenergic tonus on the heart. By contrast, at f(H) below 19 min(-1), spectra typically contained high-frequency components, demonstrated to be cholinergic in origin. Snakes with f(H) > 19 min(-1) may therefore have insufficient cholinergic tonus and/or too high an adrenergic tonus acting upon the heart for respiratory sinus arrhythmia (RSA) to develop. A parallel study monitored f(H) simultaneously with the intraperitoneal pressures associated with lung inflation. Snakes with f(H) < 19 min(-1) exhibited a high-frequency (HF) peak in the power spectrum, which correlated with ventilation rate (f(V)). Adrenergic blockade by propranolol infusion increased the variability of the ventilation cycle, and the oscillatory component of the f(H) spectrum broadened accordingly. Infusion of atropine to effect cholinergic blockade abolished this HF component, confirming a role for vagal control of the heart in matching f(H) and f(V) in the rattlesnake. A neuroanatomical study of the brainstem revealed two locations for vagal preganglionic neurones (VPN).
This is consistent with the suggestion that the generation of ventilatory components in the heart rate variability (HRV) signal is dependent on spatially distinct loci for cardiac VPN. Therefore, this study has demonstrated the presence of RSA in the HRV signal and a dual location for VPN in the rattlesnake. We suggest there is a causal relationship between these two observations.
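The low-frequency versus high-frequency band-power comparison underlying this kind of PSA can be sketched as follows. The band edges, resampling rate, and synthetic beat-interval series are illustrative assumptions, not the rattlesnake recordings.

```python
import numpy as np

def band_power(freqs: np.ndarray, psd: np.ndarray, f_lo: float, f_hi: float) -> float:
    """Sum the PSD over a frequency band (rectangle rule)."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    df = freqs[1] - freqs[0]
    return float(psd[mask].sum() * df)

# Synthetic, evenly resampled heart-beat-interval series with a ventilatory
# (high-frequency) oscillation at 0.05 Hz; values are illustrative only.
fs = 4.0  # resampling rate in Hz (assumed)
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(0)
rr = 3.0 + 0.2 * np.sin(2 * np.pi * 0.05 * t) + 0.01 * rng.normal(size=t.size)

x = rr - rr.mean()
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

hf = band_power(freqs, psd, 0.03, 0.1)    # band containing the ventilation rate
lf = band_power(freqs, psd, 0.003, 0.03)  # slower, mainly adrenergic band
print(f"HF power = {hf:.4f}, LF power = {lf:.2e}")
```

A dominant HF band, as here, is the spectral signature of an RSA-like oscillation.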
Resumo:
This thesis deals with the design of advanced OFDM systems. Both waveform and receiver design have been treated. The main scope of the thesis is to study, create, and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem represented by low resilience to non-linear distortion has been assessed. A novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping, where the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process in order to minimize the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severe non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in OFDM systems an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm has been presented. The proposed algorithm is characterized by a very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels.
Regarding the POST-FFT phase, a novel approach for both pilot structure and receiver design has been presented. In particular, a novel pilot pattern has been introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This makes it possible to replace conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh multipath fading and is characterized by low complexity. The performance of this approach has been analytically and numerically evaluated. Comparing the proposed approach with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver-design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the Broadband Satellite Return Channel has been assessed. Notably, SC-FDMA is able to improve the physical-layer spectral efficiency with respect to the single-carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame, and a novel Joint Multi-User (JMU) estimation method for SC-FDMA, have been proposed.
As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee a proper demodulation of the received signal.
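To make the PAPR figure concrete, the sketch below computes the PAPR of a plain (unoptimized) OFDM symbol with 256 QPSK subcarriers; the subcarrier count and modulation are assumptions for illustration. RISM itself is not reproduced here, this only shows the roughly 10 dB baseline that such techniques reduce.

```python
import numpy as np

def papr_db(x: np.ndarray) -> float:
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Plain OFDM symbol: N QPSK subcarriers, time-domain signal via the IDFT.
rng = np.random.default_rng(1)
n_sc = 256
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(n_sc)

print(f"PAPR = {papr_db(ofdm_symbol):.1f} dB")
```

By contrast, a constant-envelope signal has a PAPR of exactly 0 dB, which is the regime RISM approaches with its reported PAPR close to 1 dB.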
Resumo:
Range estimation is the core of many positioning systems such as radar and Wireless Local Positioning Systems (WLPS). Range is estimated by estimating the Time-of-Arrival (TOA), which represents the signal propagation delay between a transmitter and a receiver. Thus, error in TOA estimation degrades range estimation performance. In wireless environments, noise, multipath, and limited bandwidth reduce TOA estimation performance. TOA estimation algorithms designed for wireless environments aim to improve performance by mitigating the effect of closely spaced paths in practical (positive) signal-to-noise ratio (SNR) regions. Limited bandwidth prevents the discrimination of closely spaced paths, which reduces TOA estimation performance. TOA estimation methods are evaluated as a function of SNR, bandwidth, and the number of reflections in multipath wireless environments, as well as their complexity. In this research, a TOA estimation technique based on Blind signal Separation (BSS) is proposed. This frequency-domain method estimates TOA in wireless multipath environments for a given signal bandwidth. The structure of the proposed technique is presented and its complexity and performance are theoretically evaluated. It is shown that the proposed method is not sensitive to SNR, number of reflections, or bandwidth. In general, as bandwidth increases, TOA estimation performance improves. However, spectrum is the most valuable resource in wireless systems, and a large portion of spectrum to support high-performance TOA estimation is usually not available. In addition, the radio frequency (RF) components of wideband systems suffer from high cost and complexity. Thus, a novel multiband positioning structure is proposed. The proposed technique uses the available (non-contiguous) bands to support high-performance TOA estimation.
This system incorporates the capabilities of cognitive radio (CR) systems to sense the available spectrum (also called white spaces) and to exploit white spaces for high-performance localization. First, contiguous bands that are divided into several non-equal, narrow sub-bands possessing the same SNR are concatenated to attain an accuracy corresponding to the equivalent full band. Two radio architectures are proposed and investigated: the signal is transmitted over the available spectrum either simultaneously (parallel concatenation) or sequentially (serial concatenation). Low-complexity radio designs that handle the concatenation process sequentially and in parallel are introduced. Different TOA estimation algorithms applicable to multiband scenarios are studied, and their performance is theoretically evaluated and compared to simulations. Next, the results are extended to non-contiguous, non-equal sub-bands with the same SNR, which are more realistic assumptions in practical systems. The performance and complexity of the proposed technique are investigated as well. This study's results show that positioning accuracy can be adapted by selecting the bandwidth, center frequency, and SNR level of each sub-band.
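A minimal matched-filter (cross-correlation) TOA estimate, the classical baseline against which techniques such as the proposed BSS method are compared, can be sketched as follows. The probe waveform, sample rate, and delay are illustrative assumptions, not parameters from this research.

```python
import numpy as np

def estimate_toa(tx: np.ndarray, rx: np.ndarray, fs: float) -> float:
    """TOA estimate (seconds) as the lag maximizing the cross-correlation."""
    corr = np.correlate(rx, tx, mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(tx) - 1)
    return lag / fs

fs = 1e6  # sample rate in Hz (assumed)
rng = np.random.default_rng(2)
tx = rng.normal(size=200)  # known probe waveform
delay = 57                 # true propagation delay in samples
rx = np.concatenate([np.zeros(delay), tx]) + 0.05 * rng.normal(size=200 + delay)

toa = estimate_toa(tx, rx, fs)
print(f"estimated TOA = {toa * 1e6:.0f} us")
```

With a single dominant path and moderate noise this recovers the true delay; closely spaced multipath replicas are exactly what makes this simple estimator fail and motivates the more elaborate methods above.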
Resumo:
Objectives To evaluate the presence of false flow three-dimensional (3D) power Doppler signals in "flow-free" models. Methods 3D power Doppler datasets were acquired from three different flow-free phantoms (muscle, air and water) with two different transducers, and Virtual Organ Computer-aided AnaLysis was used to generate a sphere that was serially applied through the 3D dataset. The vascularization flow index was used to compare artifactual signals at different depths (from 0 to 6 cm) within the different phantoms and at different gain and pulse repetition frequency (PRF) settings. Results Artifactual Doppler signals were seen in all phantoms despite these being flow-free. The pattern was very similar, and the degree of artifact appeared to be dependent on the gain and the distance from the transducer. False signals were more evident in the far field and increased as the gain was increased, with false signals first appearing at a gain of 1 dB in the air and muscle phantoms. False signals were seen at a lower gain with the water phantom (-15 dB); these were associated with vertical lines of Doppler artifact that were related to the PRF and disappeared when reflections were attenuated. Conclusions Artifactual Doppler signals are seen in flow-free phantoms and are related to the gain settings and the distance from the transducer. In the in-vivo situation, the lowest gain settings that allow the detection of blood flow and adequate definition of vessel architecture should be used, which invariably means using a setting near or below the middle of the range available. Additionally, observers should be aware of vertical lines when evaluating cystic or liquid-containing structures. Copyright (C) 2010 ISUOG. Published by John Wiley & Sons, Ltd.
Resumo:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean squared error based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices, then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
Resumo:
Given a hyperspectral image, determining the number of endmembers and the subspace where they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum mean squared error based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
Resumo:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers a mixed pixel to be a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
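The sum-to-one constrained least-squares estimate mentioned above has a closed form via a Lagrange-multiplier correction of the unconstrained solution. A minimal sketch with an assumed toy endmember matrix follows; note that non-negativity of the abundances is not enforced here, only the sum-to-one constraint.

```python
import numpy as np

def sto_least_squares(E: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Sum-to-one constrained least-squares abundance estimate for one pixel."""
    # Unconstrained LS solution, then project onto the sum-to-one hyperplane
    # using the classical Lagrange-multiplier correction.
    G = np.linalg.inv(E.T @ E)
    a_ls = G @ E.T @ y
    ones = np.ones(E.shape[1])
    lam = (a_ls.sum() - 1.0) / (ones @ G @ ones)
    return a_ls - lam * (G @ ones)

# Toy example: 4-band scene, 3 known endmember signatures (columns of E).
E = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.3],
              [0.1, 0.9, 0.4],
              [0.1, 0.8, 0.5]])
a_true = np.array([0.5, 0.3, 0.2])
y = E @ a_true  # noise-free mixed pixel

a_hat = sto_least_squares(E, y)
print(np.round(a_hat, 3))
```

In the noise-free case the unconstrained solution already sums to one, so the correction term vanishes and the true abundances are recovered exactly.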
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
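The skewer procedure just described can be sketched in a few lines. This is an illustrative PPI-style toy (the function name `ppi_scores` and all dimensions are ours, and the MNF preprocessing step is omitted):

```python
import numpy as np

def ppi_scores(X, n_skewers=1000, seed=0):
    """PPI-style sketch. X is (bands, pixels).

    Projects every spectral vector onto random 'skewers' and counts how
    often each pixel is an extreme (min or max) of a projection.
    """
    rng = np.random.default_rng(seed)
    bands, pixels = X.shape
    scores = np.zeros(pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(bands)
        proj = skewer @ X                 # one projected value per pixel
        scores[np.argmax(proj)] += 1      # record the two extremes
        scores[np.argmin(proj)] += 1
    return scores

# Toy data: mixtures of 3 "endmembers"; the pure pixels collect the scores.
rng = np.random.default_rng(0)
M = rng.uniform(size=(50, 3))
A = rng.dirichlet(np.ones(3), size=500).T
A[:, :3] = np.eye(3)                      # force pure pixels at indices 0..2
X = M @ A
scores = ppi_scores(X)
print(sorted(np.argsort(scores)[-3:].tolist()))  # [0, 1, 2]
```

Since the extremes of a linear functional over a convex set are attained at vertices, only the pure pixels ever accumulate counts in this noise-free toy.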
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The remaining pixels are rejected when their spectral angle distance (SAD) to an exemplar is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data.
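The multiple-regression idea behind the noise correlation estimate can be caricatured as follows. This is a rough sketch, not the method of Refs. [47, 48]: each band is regressed on the remaining bands and the residuals are taken as the noise estimate (function name `estimate_noise_regression` and all dimensions are ours):

```python
import numpy as np

def estimate_noise_regression(X):
    """Per-band noise estimation sketch via multiple regression.

    X is (bands, pixels). Each band is regressed (least squares) on the
    remaining bands; the regression residuals approximate the noise,
    because the signal part of any band is nearly a linear combination
    of the other bands when the signal lives in a low-dimensional subspace.
    """
    bands, pixels = X.shape
    noise = np.zeros_like(X)
    for i in range(bands):
        rest = np.delete(X, i, axis=0)               # all other bands
        beta, *_ = np.linalg.lstsq(rest.T, X[i], rcond=None)
        noise[i] = X[i] - beta @ rest                # residual = noise estimate
    return noise

# Toy check: rank-3 signal plus small noise yields small residuals.
rng = np.random.default_rng(2)
M = rng.uniform(size=(30, 3))
A = rng.dirichlet(np.ones(3), size=400).T
X = M @ A + 0.001 * rng.standard_normal((30, 400))
N = estimate_noise_regression(X)
print(N.shape)  # (30, 400)
```

From the estimated noise, the signal correlation matrix can be formed and its eigenvalues inspected to pick the subspace dimension.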
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.