851 results for Correlation based analysis
Abstract:
The idea of spacecraft formations, flying in tight configurations with maximum baselines of a few hundred meters in low-Earth orbits, has generated widespread interest over the last several years. Nevertheless, controlling the motion of spacecraft in formation poses difficulties, such as high in-orbit computing demands and collision avoidance capabilities, which escalate as the number of units in the formation increases, complicated nonlinear effects are imposed on the dynamics, and uncertainty arises from imperfect knowledge of the system parameters. These requirements have led to the need for reliable linear and nonlinear controllers in terms of relative and absolute dynamics. The objective of this thesis is, therefore, to introduce new control methods that allow spacecraft in formation, with circular or elliptical reference orbits, to execute safe autonomous manoeuvres efficiently. These controllers are distinguished from the bulk of the literature in that they merge guidance laws never before applied to spacecraft formation flying with collision avoidance capabilities in a single control strategy. For this purpose, three control schemes are presented: linear optimal regulation, linear optimal estimation and adaptive nonlinear control. In general terms, the proposed control approaches command the dynamical behaviour of one or several followers with respect to a leader so as to asymptotically track a time-varying nominal trajectory (TVNT), while the threat of collision between the followers is reduced by repelling accelerations obtained from a collision avoidance scheme (CAS) during the periods of closest proximity. Linear optimal regulation is achieved through a Riccati-based tracking controller. Within this control strategy, the controller provides guidance and tracking toward a desired TVNT, optimizing fuel consumption via a Riccati procedure with a finite-horizon cost function defined in terms of the desired TVNT, while repelling accelerations generated by the CAS ensure evasive actions between the elements of the formation. The relative dynamics model, suitable for circular and eccentric low-Earth reference orbits, is based on the Tschauner and Hempel equations and includes a control input and a nonlinear term corresponding to the CAS repelling accelerations. Linear optimal estimation is built on the forward-in-time separation principle. This controller encompasses two stages: regulation and estimation. The first stage requires the design of a full-state feedback controller using the state vector reconstructed by means of the estimator. The second stage requires the design of an additional dynamical system, the estimator, to obtain the states which cannot be measured, in order to approximately reconstruct the full state vector. The separation principle then states that an observer built for a known input can also be used to estimate the state of the system and to generate the control input, which allows the observer and the feedback to be designed independently, exploiting the advantages of linear quadratic regulator theory to estimate the states of a dynamical system with model and sensor uncertainty. The relative dynamics are described by the same linear system used in the previous controller, with a control input and nonlinearities entering via the repelling accelerations from the CAS during collision avoidance events. Moreover, sensor uncertainty is added to the control process by considering carrier-phase differential GPS (CDGPS) velocity measurement error.
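For reference, a minimal sketch of the relative-dynamics model named above, in the standard normalized textbook form rather than as formulated in the thesis (which additionally includes the control input and the CAS term): with true anomaly θ as the independent variable, ρ = 1 + e cos θ, and (x̃, ỹ, z̃) the normalized radial, along-track and cross-track coordinates of a follower relative to the leader, the Tschauner–Hempel equations read

```latex
\[
\tilde{x}'' = \frac{3}{\rho}\,\tilde{x} + 2\,\tilde{y}', \qquad
\tilde{y}'' = -2\,\tilde{x}', \qquad
\tilde{z}'' = -\tilde{z}, \qquad
\rho = 1 + e\cos\theta ,
\]
```

where primes denote derivatives with respect to θ. For a circular reference orbit (e = 0) they reduce to the Hill–Clohessy–Wiltshire equations, ẍ − 2nẏ − 3n²x = uₓ, ÿ + 2nẋ = u_y, z̈ + n²z = u_z, with n the mean motion and u the commanded (plus CAS) acceleration.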
An adaptive control law capable of delivering superior closed-loop performance compared to certainty-equivalence (CE) adaptive controllers is finally presented. A novel non-certainty-equivalence controller, based on the Immersion and Invariance paradigm, for close-manoeuvring spacecraft formation flying in both circular and elliptical low-Earth reference orbits is introduced. The proposed control scheme achieves stabilization by immersing the plant dynamics into a target dynamical system (or manifold) that captures the desired dynamical behaviour. The key feature of this methodology is the addition of a new term to the classical certainty-equivalence control approach that, in conjunction with the parameter update law, is designed to achieve adaptive stabilization. This term has the ultimate task of shaping the manifold into which the adaptive system is immersed. The stability of the controller is proven via a Lyapunov-based analysis and Barbalat's lemma. In order to evaluate the design of the controllers, test cases based on the physical and orbital features of the Prototype Research Instruments and Space Mission Technology Advancement (PRISMA) mission are implemented, extending the number of elements in the formation to scenarios with reconfigurations and on-orbit position switching in elliptical low-Earth reference orbits. An extensive analysis and comparison of the performance of the controllers in terms of total Δv and fuel consumption, with and without the effects of the CAS, is presented. These results show that the three proposed controllers allow the followers to asymptotically track the desired nominal trajectory and, additionally, the simulations that include the CAS show an effective decrease in collision risk during the manoeuvre.
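As a rough illustration of the Riccati-based regulation idea behind the first of the three controllers described above, the following is a minimal sketch only: it uses an infinite-horizon LQR on the circular-orbit Hill–Clohessy–Wiltshire model rather than the finite-horizon tracking formulation developed in the thesis, and the mean motion, weights and CAS hook are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

n = 0.0011  # mean motion [rad/s] of a ~700 km LEO reference orbit (illustrative)

# Hill-Clohessy-Wiltshire relative dynamics, state = [x, y, z, vx, vy, vz]
A = np.array([
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 1],
    [3 * n**2, 0, 0, 0, 2 * n, 0],
    [0, 0, 0, -2 * n, 0, 0],
    [0, 0, -n**2, 0, 0, 0],
])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])  # thrust acceleration drives the velocity states

Q = np.diag([1, 1, 1, 10, 10, 10])   # state weights (assumed)
R = 1e4 * np.eye(3)                  # control weights (assumed, penalizes propellant use)

P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)       # LQR gain

def control(x, x_ref, a_cas=np.zeros(3)):
    """Track a nominal relative state x_ref; a_cas is an optional CAS repelling acceleration."""
    return -K @ (x - x_ref) + a_cas
```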
Abstract:
Introduction: Low back pain and musculoskeletal disorders compromise workers' health and quality of life and can put their future employment at risk. Objective: To estimate the prevalence of low back pain and the possible associated biomechanical factors among the operational and administrative personnel of a soap manufacturing company in Bogotá in 2016. Methodology: Cross-sectional study in which biomechanical risk and the prevalence of low back pain were evaluated in administrative (138) and operational (165) personnel; the ERGOPAR questionnaire, validated in Spain, was used as the instrument. Associations were examined using Pearson's chi-square test, with a significance level of α = 0.05. Results: 303 workers of a soap manufacturing company in Bogotá, predominantly male (51.82%) and middle-aged adults between 30 and 39 years (57.42%). The prevalence of low back pain in the population was 61.39% (186). Age was not statistically associated with low back pain. A statistical association was found between the symptom of low back pain and neck extension (p = 0.05, OR 1.95, CI 1.33-2.88), as well as grasping or holding objects (p = 0.036, OR 2.3, CI 1.59-3.51) and physical work demands (p = 0.001, OR 1.99, CI 1.31-3.02). Conclusions: The studied population presented a high prevalence of low back pain, predominantly among personnel performing operational tasks and among women. The adoption of neck-extension postures and the holding or grasping of objects are factors directly associated with the onset of low back pain.
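A minimal sketch of the kind of association test reported above (Pearson's chi-square on a 2x2 exposure-outcome table plus an odds ratio with a 95% CI); the counts are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = exposed / not exposed, columns = low back pain yes / no
table = np.array([[80, 40],
                  [106, 77]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)          # standard error of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"chi2={chi2:.2f}, p={p:.3f}, OR={odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```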
Abstract:
Introduction: Laparoscopic cholecystectomy is the technique of choice in patients with an indication for surgical removal of the gallbladder; however, on average 20% of these patients require conversion to the open technique. This study evaluated the preoperative risk factors for conversion in emergency laparoscopic cholecystectomy. Methodology: An unmatched case-control study was carried out. Sociodemographic information and variables of interest were obtained from the clinical records of patients operated on between 2013 and 2016. The reasons for conversion of the surgical technique were identified. The study population was characterized and associations were estimated according to the nature of the variables. Possible confounding variables were adjusted for by means of a logistic regression analysis. Results: Data from 444 patients (111 cases and 333 controls) were analyzed. The most frequent cause of conversion was technical difficulty (50.5%). Older age, male sex, a history of open surgery in the upper abdomen, a positive clinical Murphy's sign, bile duct dilation, leukocytosis, and greater surgeon experience were found to be risk factors for conversion. The area under the ROC curve was 0.743 (95% CI 0.692-0.794, p < 0.001). Discussion: Several factors are associated with a higher risk of conversion in laparoscopic cholecystectomy. Most are related to a more severe inflammatory process, so prolonging the waiting time between symptom onset and surgical removal of the gallbladder should be avoided.
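A minimal sketch of the modelling step described above (a multivariable logistic regression for conversion risk, evaluated with the area under the ROC curve); the feature set and the synthetic data are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 444

# Hypothetical preoperative predictors: age plus five binary risk factors
# (male sex, prior upper-abdominal surgery, Murphy's sign, dilated bile duct, leukocytosis)
X = np.column_stack([
    rng.normal(55, 15, n),           # age
    rng.integers(0, 2, (n, 5)),      # binary risk factors
])
y = rng.integers(0, 2, n)            # 1 = converted to open technique (synthetic outcome)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

print("adjusted odds ratios:", np.exp(model.coef_).round(2))
print("AUC:", round(auc, 3))
```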
Abstract:
Introduction: Cancer is preventable in some cases if exposure to carcinogenic substances in the environment is avoided. In Colombia, Cundinamarca is one of the departments with the greatest increases in the mortality rate, and in the municipality of Sibaté inhabitants have expressed concern about the increase of the disease. In the field of global environmental health, georeferencing applied to the study of health phenomena has been successful, with valid results. The study proposed using geographic information tools to generate time and space analyses that would make the behaviour of cancer in Sibaté visible and would support hypotheses of environmental influences on concentrations of cases. Objective: To obtain the incidence and prevalence of cancer cases among inhabitants of Sibaté and to georeference the cases over a 5-year period, based on a review of records. Methodology: Descriptive, exploratory, cross-sectional study of all cancer diagnoses between 2010 and 2014 found in the archives of the municipal Health Secretariat. Only people who resided permanently in the municipality and were diagnosed with cancer between 2010 and 2014 were included. For each case, gender, age, socioeconomic stratum, educational level, occupation, and marital status were obtained. The date of diagnosis was used for the time analysis, and the residential address, type of cancer, and geographic coordinates were used for the spatial analysis. Geographic coordinates were generated with a Garmin GPS device, and maps were created with the points locating the patients' homes. The information was processed with Epi Info 7. Results: 107 cancer cases registered in the Sibaté Health Secretariat were found: 66 women and 41 men. Without dividing by gender, 30.93% of the population presented cancer of the reproductive system, 18.56% of the digestive system, and 17.53% of the integumentary system. Two large spatial groupings of cases occurred in the studied territory, one in the Pablo Neruda neighbourhood with 12 cases (21.05%) and one in the urban centre of Sibaté with 38 cases (66.67%). Conclusion: It was corroborated that geographic analysis with spatiotemporal and exposure variables can be a tool for generating hypotheses about associations between cancer cases and environmental factors.
Abstract:
To change unadapted water-governing systems and water users' traditional conduct in line with climate change, an understanding of the systems' structures and the users' behaviors is necessary. To this aim, comprehensive and pragmatic research was designed and implemented in the Urmia Lake Basin, where, owing to severe droughts and human-made influences, especially agricultural development, the lake has shrunk drastically. To analyze the water governance and conservation issues in the basin, an innovative framework was developed based on mathematical physics concepts and pro-environmental behavior theories. Accordingly, at the system level (macro/meso), the problem of fit of the early-shaped water-governing system, associated with the function of "political-security" and "political-economic" factors in the basin, was identified through mean-field models. Furthermore, the effect of a "political-environmental" factor, the Urmia Lake Restoration Program (ULRP), on reforming the system structure, and hence its fit, was assessed. The analysis results revealed that the system fit can be increased by revising the provincial boundaries (horizontal alteration) so that the entity of Kurdistan province is permitted to interact with the headquarters of West Azerbaijan province for its water demand-supply initiatives. Also, the constitution of the ULRP (vertical arrangement) not only could increase the structural fit of the water-governing system to the basin, but could also significantly enhance the system fit through its water-saving policy. Besides, at the individual level (micro), the factors governing the water-conservation behavior of the major users/farmers were identified through rational and moral socio-psychological models. In the rational approach, incorporating protection motivation theory (PMT) and the theory of planned behavior (TPB), the structural equation modeling (SEM) results demonstrated that "Perceived Vulnerability", "Self-Efficacy", "Response Efficacy", "Response Cost", "Subjective Norms" and "Institutional Trust" significantly affect water-saving intention/behavior. Likewise, the norm activation model (NAM)-based analysis, as a moral approach, uncovered the significant effects of "Awareness of Consequences", "Appraisal of Responsibility" and "Personal Norms", as well as "Place Attachment" and "Emotions", on water-saving intention.
Abstract:
This research investigates the use of Artificial Intelligence (AI) systems for profiling and decision-making, and the consequences this poses for the rights and freedoms of individuals. In particular, the research considers that automated decision-making systems (ADMs) are opaque, can be biased, and that their logic is correlation-based. For these reasons, ADMs do not take decisions as human beings do. Against this background, the risks for the rights of individuals, combined with the demand for transparency of algorithms, have created a debate on the need for a new 'right to explanation'. Assuming that, except in cases provided for by law, a decision made by a human does not give rise to a right to explanation, the question has been raised as to whether, if the decision is made by an algorithm, it is necessary to provide the decision subject with a right to explanation. Therefore, the research addresses a right to explanation of automated decision-making, examining the relation between today's technology and the legal concepts of explanation, reasoning, and transparency. In particular, it focuses on the existence and scope of the right to explanation, considering the legal and technical issues surrounding the use of ADMs. The research analyses the use of AI and the problems arising from it from a legal perspective, studying the EU legal framework, especially in the data protection field. In this context, part of the research focuses on the transparency requirements under the GDPR (namely, Articles 13–15 and 22, as well as Recital 71). The research aims to outline an interpretative framework for such a right and to make recommendations about its development, with the goal of providing guidelines for an adequate explanation of automated decisions. Hence, the thesis analyses what an explanation might consist of, and the benefits of explainable AI, examined from both legal and technical perspectives.
Abstract:
This thesis presents a search for a sterile right-handed neutrino $N$ produced in $D_s$ meson decays, using proton-proton collisions collected by the CMS experiment at the LHC. The data set used for the analysis, the B-Parking data set, corresponds to an integrated luminosity of $41.7\,\textrm{fb}^{-1}$ and was collected during the 2018 data-taking period. The analysis targets the $D_s^+\rightarrow N(\rightarrow\mu^{\pm}\pi^{\mp})\mu^{+}$ decays, where the final-state muons can have the same electric charge, allowing for a lepton-flavor-violating decay. To separate signal from background, a cut-based analysis is optimized using requirements on the sterile neutrino vertex displacement, the muon and pion impact parameters, and the impact parameter significance. The expected limit on the active-sterile neutrino mixing matrix parameter $|V_{\mu}|^2$ is extracted by performing a fit of the $\mu\pi$ invariant mass spectrum for two sterile neutrino mass hypotheses, 1.0 and 1.5 GeV. The analysis is currently blinded, following the internal CMS review process. The expected limit ranges from approximately $10^{-4}$ for a 1.0 GeV neutrino to $7\times10^{-5}$ for a 1.5 GeV neutrino. This is competitive with the best existing results from collider experiments over the same mass range.
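A minimal sketch of the quantity being fitted, the μπ invariant mass, computed from reconstructed track three-momenta under the muon and pion mass hypotheses; the numerical momenta below are arbitrary placeholders, not analysis data.

```python
import numpy as np

M_MU, M_PI = 0.1057, 0.1396  # muon and charged-pion masses [GeV]

def invariant_mass(p_mu, p_pi):
    """Invariant mass of a mu-pi pair from their 3-momenta [GeV]."""
    e_mu = np.sqrt(np.dot(p_mu, p_mu) + M_MU**2)
    e_pi = np.sqrt(np.dot(p_pi, p_pi) + M_PI**2)
    e_tot = e_mu + e_pi
    p_tot = np.add(p_mu, p_pi)
    return np.sqrt(e_tot**2 - np.dot(p_tot, p_tot))

# Placeholder momenta (px, py, pz) in GeV
print(invariant_mass(np.array([1.2, 0.3, 4.0]), np.array([-0.4, 0.1, 2.5])))
```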
Abstract:
Reviewing the definition and determination of speculative bubbles in the context of contagion, this study analyses the DotCom bubble in American and European stock markets using the dynamic conditional correlation (DCC) model proposed by Engle and Sheppard (2001) as an econometric explanation on the one hand and behavioural finance as a psychological explanation on the other. Contagion is defined in this context as the statistical break in the estimated DCCs, measured through the shifts in their means and medians. Surprisingly, contagion is lower during price bubbles; the main result indicates the presence of contagion between the different indices of the two continents and demonstrates the presence of structural changes during the financial crisis.
Abstract:
Reviewing the definition and measurement of speculative bubbles in the context of contagion, this paper analyses the DotCom bubble in American and European equity markets using the dynamic conditional correlation (DCC) model proposed by Engle and Sheppard (2001) as, on the one hand, an econometric explanation and, on the other hand, behavioral finance as a psychological explanation. Contagion is defined in this context as the statistical break in the computed DCCs as measured by the shifts in their means and medians. Although it is astonishing that contagion is lower during price bubbles, the main finding indicates the presence of contagion in the different indices across those two continents and proves the presence of structural changes during the financial crisis.
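For context, the standard DCC(1,1) recursion of Engle and Sheppard, given here in its textbook form rather than reproduced from the paper: with $\epsilon_t$ the standardized residuals from univariate GARCH models and $\bar{Q}$ their unconditional correlation matrix,

```latex
\[
Q_t = (1 - a - b)\,\bar{Q} + a\,\epsilon_{t-1}\epsilon_{t-1}^{\top} + b\,Q_{t-1},
\qquad
R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t \,\operatorname{diag}(Q_t)^{-1/2},
\]
```

where $a, b \ge 0$ with $a + b < 1$, and $R_t$ is the dynamic conditional correlation matrix whose shifts in mean and median serve here as the contagion measure.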
Abstract:
Twenty areas from eight Brazilian states were compared according to a list of 224 species of Poaceae. In order to determine affinity patterns between the areas, a binary matrix was submitted to cluster and ordination analyses. The patterns found were then compared with climate and geographic position. The scores corresponding to the areas obtained from the cluster analysis showed a strong correlation with temperature. The scores corresponding to the species suggest a gradient that associates distribution patterns with the photosynthetic pathway (C3 or C4). The current results suggest that the traditional classification of the Southern American grasslands might require some modification in order to be broadly applicable in the Brazilian context.
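A minimal sketch of one way such a cluster analysis of a binary (presence/absence) site-by-species matrix can be run; the Jaccard distance and average-linkage clustering are assumptions for illustration, since the abstract does not state which coefficients were used.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Hypothetical binary matrix: 20 areas x 224 Poaceae species (1 = species recorded)
sites = rng.integers(0, 2, (20, 224)).astype(bool)

d = pdist(sites, metric="jaccard")       # pairwise floristic dissimilarity between areas
tree = linkage(d, method="average")      # UPGMA clustering
groups = fcluster(tree, t=4, criterion="maxclust")

print("distance matrix shape:", squareform(d).shape)
print("area group assignments:", groups)
```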
Wavelet correlation between subjects: A time-scale data driven analysis for brain mapping using fMRI
Abstract:
Functional magnetic resonance imaging (fMRI) based on the BOLD signal has been used to indirectly measure the local neural activity induced by cognitive tasks or stimulation. Most fMRI data analysis is carried out using the general linear model (GLM), a statistical approach which predicts the changes in the observed BOLD response based on an expected hemodynamic response function (HRF). In cases where the task is cognitively complex, or in cases of disease, variations in shape and/or delay may reduce the reliability of the results. A novel exploratory method for fMRI data, which attempts to discriminate neurophysiological signals induced by the stimulation protocol from artifacts or other confounding factors, is introduced in this paper. This new method is based on the fusion of correlation analysis and the discrete wavelet transform to identify similarities in the time course of the BOLD signal in a group of volunteers. We illustrate the usefulness of this approach by analyzing fMRI data from normal subjects presented with standardized human face pictures expressing different degrees of sadness. The results show that the proposed wavelet correlation analysis has greater statistical power than conventional GLM or time-domain intersubject correlation analysis.
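A minimal sketch of the core idea, correlating two subjects' BOLD time courses scale by scale after a discrete wavelet decomposition; it uses the PyWavelets package, and the Daubechies-4 wavelet, the decomposition level and the random placeholder signals are illustrative assumptions rather than the paper's exact pipeline.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
bold_subj1 = rng.standard_normal(256)   # placeholder BOLD time series from one voxel/region
bold_subj2 = rng.standard_normal(256)   # same region, second subject

# Discrete wavelet decomposition into approximation + detail coefficients per scale
coeffs1 = pywt.wavedec(bold_subj1, "db4", level=4)
coeffs2 = pywt.wavedec(bold_subj2, "db4", level=4)

# Intersubject correlation at each time scale
for level, (c1, c2) in enumerate(zip(coeffs1, coeffs2)):
    r = np.corrcoef(c1, c2)[0, 1]
    print(f"scale {level}: r = {r:+.3f}")
```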
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of interest of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid the temporal information loss that is inherent in averaging (e.g., to yield a single "representative" time series per ROI) during the identification of Granger causality. Such information loss may, in turn, lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first-principal-component estimation from ROIs). The usefulness of the CGA approach on real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI.
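For orientation, a minimal sketch of the conventional pairwise test that CGA is compared against: standard Granger causality between two representative ROI time series, via statsmodels. The synthetic data and lag order are illustrative assumptions; this is not the paper's CGA method.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 300
roi_a = rng.standard_normal(n)
# roi_b is partly driven by lagged roi_a, so A should Granger-cause B
roi_b = 0.6 * np.roll(roi_a, 1) + 0.5 * rng.standard_normal(n)

# Column order: [effect, cause]; the test asks whether the 2nd column Granger-causes the 1st
data = np.column_stack([roi_b, roi_a])
results = grangercausalitytests(data, maxlag=2, verbose=False)

for lag, (tests, _) in results.items():
    print(f"lag {lag}: F-test p-value = {tests['ssr_ftest'][1]:.4f}")
```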
Abstract:
We describe a novel approach to explore DNA nucleotide sequence data, aiming to produce high-level categorical and structural information about the underlying chromosomes, genomes and species. The article starts by analyzing chromosomal data through histograms using fixed length DNA sequences. After creating the DNA-related histograms, a correlation between pairs of histograms is computed, producing a global correlation matrix. These data are then used as input to several data processing methods for information extraction and tabular/graphical output generation. A set of 18 species is processed and the extensive results reveal that the proposed method is able to generate significant and diversified outputs, in good accordance with current scientific knowledge in domains such as genomics and phylogenetics.
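A minimal sketch of the kind of fixed-length word histogram and histogram-correlation computation described above; the placeholder sequences, the word length of three, and the use of Pearson correlation are illustrative assumptions rather than the article's exact settings.

```python
import numpy as np
from collections import Counter
from itertools import product

K = 3  # fixed DNA word length
WORDS = ["".join(w) for w in product("ACGT", repeat=K)]

def word_histogram(seq, k=K):
    """Relative frequency histogram of overlapping k-length words in a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return np.array([counts[w] / total for w in WORDS])

# Placeholder sequences standing in for chromosomal DNA
sequences = {
    "chr_a": "ATGCGTACGTTAGCATGCGT" * 50,
    "chr_b": "TTGCAATGCCGTAGGCTTAA" * 50,
    "chr_c": "ATGCGTACGTTAGCATGCAT" * 50,
}

H = np.vstack([word_histogram(s) for s in sequences.values()])
corr = np.corrcoef(H)   # global histogram-correlation matrix between chromosomes
print(np.round(corr, 3))
```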
Abstract:
This paper aims to study the relationships between the chromosomal DNA sequences of twenty species. We propose a methodology combining DNA-based word frequency histograms, correlation methods, and an MDS technique to visualize the structural information underlying chromosomes (CRs) and species. Four statistical measures are tested (Minkowski, cosine, Pearson product-moment, and Kendall τ rank correlation) to analyze the information content of 421 nuclear CRs from twenty species. The proposed methodology is built on mathematical tools and allows the analysis and visualization of very large amounts of stream data, like DNA sequences, with almost no assumptions other than the predefined DNA "word length." This methodology is able to produce comprehensible three-dimensional visualizations of CR clustering and related spatial and structural patterns. The results of the four test correlation scenarios show that the high-level information clusterings produced by the MDS tool are qualitatively similar, with small variations due to the characteristics of each correlation method, and that the clusterings are a consequence of the input data and not artifacts of the method.
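A minimal sketch of the final visualization step, embedding a precomputed dissimilarity matrix in three dimensions with MDS (here via scikit-learn, on a small random matrix that stands in for the chromosome-by-chromosome dissimilarities derived from the correlation measures).

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Placeholder symmetric dissimilarity matrix (e.g., 1 - correlation between CR histograms)
n_crs = 40
corr = np.corrcoef(rng.standard_normal((n_crs, 64)))
dissimilarity = 1.0 - corr

mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)   # 3-D coordinates for plotting CR clusters

print(coords.shape, "stress:", round(mds.stress_, 3))
```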
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^{⌊d/2⌋+1}), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
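A minimal sketch of the projection-and-extreme idea described above: a bare-bones pure-pixel endmember extraction loop. It omits the dimensionality reduction, the SNR-dependent projections, and other details of the full VCA algorithm, so it illustrates the principle rather than reproducing the chapter's method; all data in the toy usage are synthetic.

```python
import numpy as np

def extract_endmembers(Y, p, seed=0):
    """Pick p candidate endmember pixels from Y (bands x pixels) by iterative
    orthogonal projection: each new endmember is the extreme of the data
    projected onto a direction orthogonal to the endmembers found so far."""
    rng = np.random.default_rng(seed)
    n_bands, _ = Y.shape
    E = np.zeros((n_bands, p))   # endmember signatures found so far
    indices = []
    for k in range(p):
        w = rng.standard_normal(n_bands)
        if k > 0:
            A = E[:, :k]
            # remove the component of w lying in the span of the current endmembers
            w -= A @ np.linalg.lstsq(A, w, rcond=None)[0]
        w /= np.linalg.norm(w)
        projection = w @ Y                      # project every pixel onto the direction
        j = int(np.argmax(np.abs(projection)))  # extreme of the projection
        indices.append(j)
        E[:, k] = Y[:, j]
    return E, indices

# Toy usage: 50-band data, 1000 pixels, linearly mixed from 3 hidden endmembers
rng = np.random.default_rng(1)
M = rng.random((50, 3))                         # true endmember signatures
A = rng.dirichlet(np.ones(3), size=1000).T      # abundance fractions (sum to one)
Y = M @ A + 0.001 * rng.standard_normal((50, 1000))
E, idx = extract_endmembers(Y, p=3)
print("selected pixel indices:", idx)
```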