966 results for Maximum entropy methods
Abstract:
The elastic behavior of demand consumption, jointly used with other available resources such as distributed generation (DG), can play a crucial role in the success of smart grids. The intensive use of Distributed Energy Resources (DER) and the technical and contractual constraints result in large-scale nonlinear optimization problems that require computational intelligence methods to be solved. This paper proposes a Particle Swarm Optimization (PSO) based methodology to support the minimization of the operation costs of a virtual power player that manages the resources in a distribution network and the network itself. Resources include the DER available in the considered time period and the energy that can be bought from external energy suppliers. Network constraints are considered. The proposed approach uses Gaussian mutation of the strategic parameters and contextual self-parameterization of the maximum and minimum particle velocities. The case study considers a real 937-bus distribution network, with 20,310 consumers and 548 distributed generators. The obtained solutions are compared with those of a deterministic approach, of PSO without mutation, and of Evolutionary PSO, the latter two using self-parameterization.
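To make the optimization loop concrete, the following is a minimal PSO sketch with Gaussian mutation of the strategic parameters and clamped particle velocities; the cost function, parameter values, and mutation scale are illustrative assumptions, not the paper's actual resource-scheduling model.

```python
# Minimal PSO sketch with Gaussian mutation of strategic parameters.
# The cost function and all parameter values are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):
    # Placeholder operation-cost surrogate (sphere function).
    return np.sum(x ** 2, axis=-1)

n_particles, dim, iters = 30, 10, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
w, c1, c2 = 0.7, 1.5, 1.5            # strategic parameters
v_max = 1.0                          # would be self-parameterized in the paper

pbest = pos.copy()
pbest_val = cost(pbest)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    # Gaussian mutation of the strategic parameters (hedged interpretation).
    w_t  = abs(w  + rng.normal(0, 0.1))
    c1_t = abs(c1 + rng.normal(0, 0.1))
    c2_t = abs(c2 + rng.normal(0, 0.1))

    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_t * vel + c1_t * r1 * (pbest - pos) + c2_t * r2 * (gbest - pos)
    vel = np.clip(vel, -v_max, v_max)   # velocity limits
    pos = pos + vel

    val = cost(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best cost:", pbest_val.min())
```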
Abstract:
This study analysed 22 strawberry and soil samples collected over the course of 2 years to compare the residue profiles from organic farming with integrated pest management practices in Portugal. For sample preparation, we used the citrate-buffered version of the quick, easy, cheap, effective, rugged, and safe (QuEChERS) method. We applied three different methods for analysis: (1) 27 pesticides were targeted using LC-MS/MS; (2) 143 were targeted using low-pressure GC tandem mass spectrometry (LP-GC-MS/MS); and (3) more than 600 pesticides were screened in a targeted and untargeted approach using comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GC × GC-TOF-MS). The analyses of the shared samples by the different methods were compared; the results were similar, providing satisfactory confirmation of both positive and negative findings. No pesticides were found in the organically farmed samples. In samples from integrated pest management practices, nine pesticides were determined and confirmed to be present, ranging from 2 μg kg−1 for fluazifop-P-butyl to 50 μg kg−1 for fenpropathrin. Concentrations of residues in strawberries were below the European maximum residue limits.
Abstract:
Background: The Intergovernmental Panel on Climate Change projects that the rise in mean global temperature by 2100 will range between 1.4 and 5.8 °C, and it is not known how populations will adapt to this increase. In Portugal, more people die in winter than in summer, but there is evidence of mortality attributable to extreme heat. This study seeks to identify the age and/or population groups that appear to be most vulnerable to exposure to extreme temperatures and to identify health indicators suitable for revealing these effects. Methods: Data on hospital admissions and mortality from cardiovascular, respiratory and renal diseases and from direct effects of cold and heat were analysed for the population aged 75 years and over, in the districts of Beja, Bragança and Faro, for the months of January and June. The analysis period was 2002 to 2005 for morbidity data and 2002 to 2004 for mortality data. The meteorological data analysed were the daily maximum temperature and its percentiles in January (P10) and June (P90). Excess hospital admissions, defined as days on which admissions exceeded the mean plus two standard deviations, were related to the distribution of extreme temperatures (cold below P10, hot above P90). Days with deaths above the mean were likewise related to the distribution of extreme temperatures. The proposed indicators were based on odds ratios and on the confidence intervals giving the most precise estimates. Results: The group showing the greatest vulnerability to extreme temperatures was that aged 75 and over with cardiovascular disease, in all three districts. The number of days of excess deaths from cardiovascular disease related to extreme temperatures was the highest among the causes of death considered. The group aged 75 and over with respiratory disease was also vulnerable, particularly to extreme cold, in the three districts; days of excess hospital admissions and of excess deaths from this cause were related to exposure to extreme cold temperatures. In June, no excess mortality from this cause associated with exposure to extreme temperatures was found in any of the districts analysed. An association between days of hospital admissions for renal disease and extreme heat was found only in Bragança. Conclusions: Statistically significant associations were found between days of excess hospital admissions or deaths by cause and exposure to extreme cold and heat, allowing the identification of a set of environmental health indicators suitable for monitoring the evolution of morbidity, mortality and population susceptibility patterns over time.
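As a rough illustration of the criteria described above, the sketch below flags excess-admission days (above the mean plus two standard deviations) and extreme-cold days (below the P10 of the maximum temperature) and derives an odds ratio with a 95% confidence interval; the daily counts and temperatures are synthetic, and the continuity correction is an added assumption, not part of the study.

```python
# Sketch of the excess-days criterion and an odds ratio with 95% CI,
# using synthetic daily counts; thresholds follow the abstract's definition.
import numpy as np

rng = np.random.default_rng(1)
admissions = rng.poisson(8, 31)                 # daily admissions, January
tmax = rng.normal(12, 4, 31)                    # daily maximum temperature

excess_day = admissions > admissions.mean() + 2 * admissions.std()
cold_day = tmax < np.percentile(tmax, 10)       # P10 threshold

# 2x2 table: exposure (cold day) vs outcome (excess admissions)
a = np.sum(cold_day & excess_day)
b = np.sum(cold_day & ~excess_day)
c = np.sum(~cold_day & excess_day)
d = np.sum(~cold_day & ~excess_day)

a, b, c, d = (x + 0.5 for x in (a, b, c, d))    # continuity correction (assumption)
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```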
Abstract:
The development of an intelligent wheelchair (IW) platform that may be easily adapted to any commercial electric-powered wheelchair and aid any person with special mobility needs is the main objective of this project. To achieve this main objective, three distinct control methods were implemented in the IW: manual, shared and automatic. Several algorithms were developed for each of these control methods. This paper presents three of the most significant of those algorithms, with emphasis on the shared control method. Experiments were performed by users suffering from cerebral palsy, using a realistic simulator, in order to validate the approach. The experiments revealed the importance of using shared (aided) controls for users with severe disabilities. The patients still felt that they had complete control over the wheelchair movement when using shared control at a 50% level, and thus this control type was very well accepted. It may therefore be used in intelligent wheelchairs, since it is able to correct the direction in case of involuntary movements by the user while still giving the user a sense of complete control over the IW movement.
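A minimal sketch, assuming a simple linear blend of commands, of what a 50% shared-control level could look like; the function, the (v, ω) command format, and the numbers are illustrative and not taken from the project's implementation.

```python
# Illustrative shared-control blend: the final command mixes the user's
# joystick command with an autonomous correction, here at the 50% level.
import numpy as np

def shared_control(user_cmd, auto_cmd, alpha=0.5):
    """Blend user and autonomous (v, omega) commands; alpha=1 is fully manual."""
    user_cmd = np.asarray(user_cmd, dtype=float)
    auto_cmd = np.asarray(auto_cmd, dtype=float)
    return alpha * user_cmd + (1.0 - alpha) * auto_cmd

# Involuntary jerk to the right while the planner keeps heading to the door.
user = (0.4, -0.8)    # (linear velocity m/s, angular velocity rad/s)
auto = (0.4,  0.1)
print(shared_control(user, auto))   # -> [0.4, -0.35], softened turn
```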
Abstract:
Final Master's project submitted to obtain the degree of Master in Mechanical Engineering.
Abstract:
Introduction – The exponential growth of information, particularly scientific information, does not necessarily correspond to an improvement in the quality of its retrieval and use. The concept of information literacy gains relevance and prominence, as it encompasses the competencies that allow one to recognise when information is needed and to act efficiently and effectively in obtaining and using it. In this context, the academic library assumes the role of a privileged partner, preparing the moment when the student feels able to produce and record new knowledge through writing. Objective – The ESTeSL Library restructured the sessions it had been running since the 2002/2003 academic year and launched a more formal project entitled «Saber usar a informação de forma eficiente e eficaz» ("Knowing how to use information efficiently and effectively"). Its objectives were to: (a) promote the improvement of the quality of academic and scientific work; (b) contribute to reducing the risk of plagiarism; (c) increase students' confidence in their ability to use information resources; (d) encourage more active participation in the classroom; and (e) contribute to the integration of pedagogical content and the various information sources. Method – Several short training sessions were organised, covering different topics related to information literacy, namely: (1) information searching, with sessions dedicated to MEDLINE, RCAAP, SciELO, B-ON and Scopus; (2) the impact factor of scientific journals: Journal Citation Reports and SCImago; (3) how to write a scientific abstract; (4) how to structure a scientific paper; (5) how to give an oral presentation; (6) how to avoid plagiarism; (7) bibliographic referencing using the Vancouver style; (8) use of reference managers: ZOTERO (a first approach for first-year undergraduate students) and reference management and academic information networking with MENDELEY (aimed at final-year students, master's students, lecturers and researchers). The project was presented to the academic community on the ESTeSL website; each session was announced individually on the website and by email. In 2015, promotion focused on the Library's new webpage (https://estesl.biblio.ipl.pt/), which hosted the information and resources covered in the training sessions. Registration was made by email, with no associated cost and no minimum or maximum number of sessions to attend. Results – In 2014 there were 87 registrations, with at least one participant present in every training session. In 2015, the total number of registrations was 190. New sessions were rescheduled at the request of students whose timetables were not compatible with those initially scheduled. Two consecutive training days (about 4 hours each) were then organised, with content selected by the students; a constant presence of about 30 students in the room was recorded in these sessions. In total, the information literacy sessions were attended by undergraduate students from all years, master's students, lecturers and researchers (from within and outside ESTeSL). Conclusions – There is a need to introduce new content into the information literacy project. The time invested, the content and the interest shown by those who took part show that this project is gaining its place in the ESTeSL community and that information literacy contributes effectively to the construction and production of knowledge in the academic environment.
Abstract:
The paper reports viscosity measurements of compressed liquid dipropyl adipate (DPA) and dibutyl adipate (DBA) obtained with two vibrating-wire sensors developed in our group. The vibrating-wire instruments were operated in the forced oscillation, or steady-state, mode. The viscosity measurements of DPA were carried out at pressures up to 18 MPa and temperatures from (303 to 333) K, and of DBA up to 65 MPa and temperatures from (303 to 373) K, covering a total range of viscosities from (1.3 to 8.3) mPa·s. The required density data of the liquid samples were obtained in our laboratory using an Anton Paar vibrating-tube densimeter and were reported in a previous paper. The viscosity results were correlated with density using a modified hard-spheres scheme. The root mean square deviation of the data from the correlation is less than (0.21 and 0.32)% and the maximum absolute relative deviations are within (0.43 and 0.81)%, for DPA and DBA, respectively. No literature data could be found for the viscosity of either adipate. Independent viscosity measurements were also performed at atmospheric pressure, using an Ubbelohde capillary, in order to compare with the vibrating-wire results. The expanded uncertainty of these results is estimated as ±1.5% at a 95% confidence level. The two data sets agree within the uncertainty of both methods. © 2015 Published by Elsevier B.V.
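For illustration, the deviation statistics quoted above (root mean square deviation and maximum absolute relative deviation from the correlation) can be computed as sketched below; the viscosity values are invented placeholders, and the hard-spheres correlation itself is not reproduced here.

```python
# Sketch of the deviation statistics between measured viscosities and a
# correlation; eta_corr would come from the modified hard-spheres scheme.
import numpy as np

eta_meas = np.array([1.32, 2.10, 3.45, 5.60, 8.25])   # mPa·s (illustrative)
eta_corr = np.array([1.31, 2.11, 3.47, 5.58, 8.22])   # correlation values

rel_dev = (eta_meas - eta_corr) / eta_corr * 100       # percent deviations
rms_dev = np.sqrt(np.mean(rel_dev ** 2))
max_abs_dev = np.max(np.abs(rel_dev))
print(f"RMS = {rms_dev:.2f}%, max |dev| = {max_abs_dev:.2f}%")
```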
Abstract:
Journal of Hydraulic Engineering, Vol. 135, No. 11, November 1, 2009
Abstract:
Forest fire dynamics is often characterized by the absence of a characteristic length-scale, long-range correlations in space and time, and long memory, which are features also associated with fractional-order systems. In this paper a public-domain forest fire catalogue, containing information on events in Portugal covering the period from 1980 up to 2012, is tackled. The events are modelled as time series of Dirac impulses with amplitude proportional to the burnt area. The time series are viewed as the system output and are interpreted as a manifestation of the system dynamics. In the first phase we use the pseudo phase plane (PPP) technique to describe forest fire dynamics. In the second phase we use multidimensional scaling (MDS) visualization tools. The PPP allows the representation of forest fire dynamics in a two-dimensional space, by taking time series representative of the phenomena. The MDS approach generates maps where objects perceived to be similar to each other are placed close together, forming clusters. The results are analysed in order to extract relationships among the data and to better understand forest fire behaviour.
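A minimal sketch of the pseudo phase plane construction described above, assuming the event catalogue has been binned onto a uniform daily grid; the synthetic series, the delay value, and the heavy-tailed amplitudes are illustrative assumptions.

```python
# Minimal pseudo phase plane (PPP) sketch: the series of burnt-area impulses
# is embedded in two dimensions as pairs (x[k], x[k + tau]).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
x = np.zeros(1000)                           # synthetic daily impulse series
events = rng.choice(1000, size=80, replace=False)
x[events] = rng.pareto(1.5, size=80)         # heavy-tailed burnt areas

tau = 7                                      # delay, chosen e.g. at the first
plt.scatter(x[:-tau], x[tau:], s=5)          # minimum of the autocorrelation
plt.xlabel("x(k)")
plt.ylabel(f"x(k + {tau})")
plt.title("Pseudo phase plane of the impulse series")
plt.show()
```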
Abstract:
In Brazil, more than 500,000 new cases of malaria were notified in 1992. Plasmodium falciparum and P. vivax are the species responsible for 99.3% of the cases. Early diagnosis is necessary for adequate treatment. In this work, we present the results of the traditional Plasmodium detection method, the thick blood film (TBF), and the results of alternative methods: an immunofluorescence assay (IFA) with polyclonal antibody and the Quantitative Buffy Coat (QBC®) method, in well-defined population groups. The analyses were done in relation to the presence or absence of clinical symptoms of malaria. Different classes of anti-P. falciparum immunoglobulins were also quantified for the overall analysis of the results, mainly for the discrepant ones. We concluded that the alternative methods are more sensitive than TBF and that the combination of epidemiological, clinical and laboratory findings is necessary to define the presence of malaria.
Abstract:
Coupling five rigid or flexible bis(pyrazolato)-based tectons with late transition metal ions allowed us to isolate 18 coordination polymers (CPs). As assessed by thermal analysis, all of them possess remarkable thermal stability, their decomposition temperatures lying in the range of 340-500 °C. As demonstrated by N2 adsorption measurements at 77 K, their Langmuir specific surface areas span the rather vast range of 135-1758 m²/g, in agreement with the porous or dense polymeric architectures retrieved by powder X-ray diffraction structure solution methods. Two representative families of CPs, built up with either rigid or flexible spacers, were tested as catalysts in (i) the microwave-assisted solvent-free peroxidative oxidation of alcohols by t-BuOOH, and (ii) the peroxidative oxidation of cyclohexane to cyclohexanol and cyclohexanone by H2O2 in acetonitrile. Those CPs bearing the rigid spacer, concurrently possessing higher specific surface areas, are more active than the corresponding ones with the flexible spacer. Moreover, the two copper(I)-containing CPs investigated exhibit the highest efficiency in both reactions, leading selectively to a maximum product yield of 92% (and TON up to 1.5 × 10³) in the oxidation of 1-phenylethanol and of 11% in the oxidation of cyclohexane, the latter value being higher than that granted by the current industrial process.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of the most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
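As a concrete baseline for the linear mixing model discussed above, the sketch below unmixes a single simulated pixel with known endmember signatures using fully constrained least squares (nonnegativity and sum-to-one) via the common augmented-row NNLS trick; it illustrates only the supervised case, not the Dirichlet/EM methodology sketched in this chapter, and all data are simulated.

```python
# Sketch of the linear mixing model y = M a + n with known endmember
# signatures M, solved as fully constrained least squares (a >= 0,
# sum(a) = 1) via an augmented-row nonnegative least-squares problem.
import numpy as np
from scipy.optimize import nnls

L, p = 50, 3                                  # bands, endmembers
rng = np.random.default_rng(3)
M = rng.uniform(0.1, 0.9, (L, p))             # endmember signatures (columns)
a_true = np.array([0.6, 0.3, 0.1])            # abundances on the simplex
y = M @ a_true + 0.001 * rng.standard_normal(L)

delta = 1e3                                   # weight of the sum-to-one row
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)                 # nonnegative LS with soft sum-to-one
print(a_hat.round(3))                         # approximately [0.6, 0.3, 0.1]
```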
Abstract:
Arguably, the most difficult task in text classification is to choose an appropriate set of features that allows machine learning algorithms to provide accurate classification. Most state-of-the-art techniques for this task involve careful feature engineering and a pre-processing stage, which may be too expensive in the emerging context of massive collections of electronic texts. In this paper, we propose efficient methods for text classification based on information-theoretic dissimilarity measures, which are used to define dissimilarity-based representations. These methods dispense with any feature design or engineering, by mapping texts into a feature space using universal dissimilarity measures; in this space, classical classifiers (e.g. nearest neighbor or support vector machines) can then be used. The reported experimental evaluation of the proposed methods, on sentiment polarity analysis and authorship attribution problems, reveals that they approximate, and sometimes even outperform, previous state-of-the-art techniques, despite being much simpler in the sense that they do not require any text pre-processing or feature engineering.
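A minimal sketch of a dissimilarity-based classifier of this kind, using the normalized compression distance (one canonical information-theoretic dissimilarity) with zlib and a 1-nearest-neighbor rule; the toy training texts are invented, and the paper's exact measures and classifiers may differ.

```python
# Dissimilarity-based text classification sketch: normalized compression
# distance (NCD) computed with zlib, followed by a 1-nearest-neighbor rule.
import zlib

def ncd(x: str, y: str) -> float:
    cx = len(zlib.compress(x.encode()))
    cy = len(zlib.compress(y.encode()))
    cxy = len(zlib.compress((x + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

train = [("i loved this film, wonderful acting", "pos"),
         ("a delightful, moving story", "pos"),
         ("boring plot and terrible dialogue", "neg"),
         ("i hated every minute of it", "neg")]

def classify(text: str) -> str:
    # 1-NN in the dissimilarity space: no feature engineering needed.
    return min(train, key=lambda pair: ncd(text, pair[0]))[1]

# The most similar training text (by NCD) decides the label.
print(classify("wonderful story, i loved it"))
```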
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].
Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and the N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of the most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.
The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
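A simplified sketch of the projection idea behind VCA described above: each new endmember estimate is the pixel with the largest projection onto a direction orthogonal to the subspace spanned by the endmembers already found. The SNR-dependent projections, signal-subspace estimation, and other details of the actual algorithm are omitted, and the data are simulated with pure pixels present.

```python
# Simplified VCA-like endmember extraction by iterative orthogonal projections.
import numpy as np

def vca_like(Y, p, seed=0):
    """Y: (bands, pixels) data matrix; p: number of endmembers to extract."""
    rng = np.random.default_rng(seed)
    L, N = Y.shape
    E = np.zeros((L, p))                      # endmember estimates (columns)
    indices = []
    for k in range(p):
        w = rng.standard_normal(L)            # random direction
        if k > 0:                             # make w orthogonal to span(E[:, :k])
            A = E[:, :k]
            w = w - A @ np.linalg.lstsq(A, w, rcond=None)[0]
        proj = np.abs(w @ Y)                  # |projection| of every pixel
        idx = int(proj.argmax())              # extreme pixel = new endmember
        E[:, k] = Y[:, idx]
        indices.append(idx)
    return E, indices

# Toy example: mixtures of 3 signatures with pure pixels present (columns 0-2).
rng = np.random.default_rng(4)
M = rng.uniform(0.1, 1.0, (40, 3))
A = np.hstack([np.eye(3), rng.dirichlet(np.ones(3), 200).T])  # pure + mixed
Y = M @ A
_, idx = vca_like(Y, 3)
print("indices of extracted pure pixels:", idx)   # expected within {0, 1, 2}
```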