885 results for SLASHED HALF-NORMAL DISTRIBUTION
Abstract:
Purpose. This retrospective cohort study evaluated factors associated with peri-implant bone level changes (ΔIBL) for an implant type with an inner-cone implant-abutment connection, rough neck surface, and platform switching (AT). Materials and Methods. All AT implants placed at the Department of Prosthodontics of the University of Bern between January 2004 and December 2005 were included in this study. All implants were examined on single radiographs taken with the parallel technique at surgery (T0) and at least 6 months after surgery (T1). Possible influencing factors were analysed first using the t-test (normally distributed data) or the nonparametric Wilcoxon test (non-normally distributed data), and then a mixed-model analysis of variance was performed. Results. 43 patients were treated with 109 implants. Five implants in 2 patients failed (survival rate: 95.4%). Mean ΔIBL in group 1 (T1: 6–12 months after surgery) was −0.65 ± 0.82 mm and −0.69 ± 0.82 mm in group 2 (T1: >12 months after surgery).
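The analysis strategy described above (a normality check deciding between a parametric and a nonparametric two-group comparison) can be sketched in a few lines. This is a minimal illustration with made-up ΔIBL values, not the study's data or code; the Wilcoxon comparison of two independent groups is shown in its rank-sum (Mann-Whitney) form.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical ΔIBL values (mm) for two follow-up groups; not the study data.
group1 = rng.normal(-0.65, 0.82, 40)
group2 = rng.normal(-0.69, 0.82, 60)

def compare_groups(a, b, alpha=0.05):
    """Pick a t-test or a Wilcoxon rank-sum test depending on a normality check."""
    normal = (stats.shapiro(a)[1] > alpha) and (stats.shapiro(b)[1] > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b)[1]
    return "Wilcoxon rank-sum", stats.mannwhitneyu(a, b)[1]

test, p = compare_groups(group1, group2)
print(test, round(p, 3))
```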
Abstract:
BACKGROUND Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. METHODS The patients were seated in front of a computer screen with the head fixed on a chin rest. The eye movements were recorded by an eye-tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic) in straight-ahead gaze and in 20° sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparison. RESULTS Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD ± 3.1 years). The mean slow-phase velocity was moderate during straight-ahead gaze (1.44°/s, SD ± 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In straight-ahead gaze, we found no difference between the static and the feedback-driven condition. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). CONCLUSIONS This study provides proof of concept that non-invasive real-time computer-based visual feedback compensates for the slow-phase velocity (SPV) in downbeat nystagmus (DBN). Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
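The dynamic condition can be thought of as a closed loop in which the measured slow-phase velocity drives the displacement of the optotype. The sketch below is only a conceptual illustration with a hypothetical tracker interface, gain and refresh rate; it is not the EyeSeeCam API or the authors' software.

```python
import time

class EyeTracker:
    """Hypothetical stand-in for an eye tracker reporting slow-phase velocity."""
    def slow_phase_velocity(self):
        return 1.4  # deg/s, placeholder value

GAIN = 1.0      # stimulus displacement per degree of slow-phase drift
DT = 1 / 60.0   # update at an assumed 60 Hz display refresh rate

def feedback_loop(tracker, duration_s=1.0):
    """Move the optotype with the slow phase so its retinal position stays stable."""
    x_deg, t = 0.0, 0.0
    while t < duration_s:
        v = tracker.slow_phase_velocity()   # measured drift of the eye
        x_deg += GAIN * v * DT              # shift the Landolt C by the same amount
        # redraw_landolt_c(x_deg)           # rendering call omitted in this sketch
        time.sleep(DT)
        t += DT
    return x_deg

print(feedback_loop(EyeTracker()))
```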
Abstract:
INTRODUCTION Despite important advances in psychological and pharmacological treatments of persistent depressive disorders in the past decades, response typically remains slow and poor, and differential responses among different treatment modalities or their combinations are not well understood. The Cognitive-Behavioural Analysis System of Psychotherapy (CBASP) is the only psychotherapy that has been specifically designed for chronic depression and has been examined in an increasing number of trials against medications, alone or in combination. When several treatment alternatives are available for a condition, network meta-analysis (NMA) provides a powerful tool to examine their relative efficacy by combining all direct and indirect comparisons. Individual participant data (IPD) meta-analysis enables exploration of the impact of individual characteristics, leading to a differentiated approach that matches treatments to specific subgroups of patients. METHODS AND ANALYSIS We will search for all randomised controlled trials that compared CBASP, pharmacotherapy or their combination in the treatment of patients with persistent depressive disorder, in Cochrane CENTRAL, PUBMED, SCOPUS and PsycINFO, supplemented by personal contacts. Individual participant data will be sought from the principal investigators of all the identified trials. Our primary outcomes are depression severity, as measured on a continuous observer-rated scale for depression, and dropout for any reason as a proxy measure of overall treatment acceptability. We will conduct a one-step IPD-NMA to compare CBASP, medications and their combinations, and also carry out a meta-regression to identify their prognostic factors and effect moderators. The model will be fitted in OpenBUGS, using vague priors for all location parameters. For the heterogeneity, we will use a half-normal prior on the standard deviation. ETHICS AND DISSEMINATION This study requires no ethical approval. We will publish the findings in a peer-reviewed journal. The study results will contribute to more finely differentiated therapeutics for patients suffering from this chronically disabling disorder. TRIAL REGISTRATION NUMBER CRD42016035886.
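A random-effects model with a half-normal prior on the between-study standard deviation, as planned for the heterogeneity parameter, can be written compactly in a probabilistic programming language. The protocol specifies OpenBUGS; the sketch below uses PyMC instead, with hypothetical aggregate effect sizes rather than IPD, purely to illustrate the prior structure.

```python
import numpy as np
import pymc as pm

# Hypothetical study-level effect sizes and standard errors, not data from the review.
y = np.array([-0.30, -0.10, -0.45, -0.20])
se = np.array([0.12, 0.15, 0.10, 0.18])

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)      # vague prior on the pooled effect (location)
    tau = pm.HalfNormal("tau", sigma=0.5)         # half-normal prior on the heterogeneity SD
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=len(y))  # study-specific effects
    pm.Normal("obs", mu=theta, sigma=se, observed=y)
    trace = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(float(trace.posterior["tau"].mean()))
```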
Abstract:
The rates of childhood and adolescent obesity in the United States have been increasing steadily. American youth continue to eat more (increased energy intake) and to reduce physical activity (decreased energy expenditure), resulting in increased body weight and body fatness. One way to help reduce body weight in children is to increase physical activity. The purpose of this study was to determine whether an age-appropriate before-school physical activity intervention would be successful in increasing energy expenditure, intensity of activity, and behavioral approaches in overweight girls. The subjects were recruited from Parker Memorial School in Tolland, Connecticut, and two testing periods occurred over an eight-week period. Video recordings of each physical activity session were analyzed to determine energy expenditure, exercise intensity, and behaviors during exercise. Data were evaluated for normal distribution, and paired t-tests were used to determine statistical significance. This study showed that the age-appropriate before-school physical activity intervention was able to increase energy expenditure and exercise intensity and had a positive effect on behavioral approaches in overweight girls.
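A paired pre/post comparison of the kind described (a normality check on the differences, then a paired t-test) might look like the following; the values are invented for illustration and are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical pre/post energy-expenditure values (kcal per session) for 12 girls.
pre = rng.normal(120, 15, 12)
post = pre + rng.normal(18, 10, 12)   # intervention effect plus noise

# Check the paired differences for approximate normality, then run the paired t-test.
diff = post - pre
print("Shapiro p =", round(stats.shapiro(diff)[1], 3))
print("paired t-test p =", round(stats.ttest_rel(post, pre)[1], 4))
```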
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlation and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Secondly, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period 1 January 2004 to 8 August 2006. The empirical examination of the regime-switching tendencies provided quantitative support to the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
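One way to see how a copula separates dependence from non-normal (gamma) margins is to sample from a Gaussian copula and transform the uniform margins to gammas. The snippet below is an illustrative construction with arbitrary parameters; the paper does not state which copula family was used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def gaussian_copula_gamma(n, rho, shapes, scales):
    """Sample two gamma-distributed variables whose dependence comes from a Gaussian copula."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)   # correlated standard normals
    u = stats.norm.cdf(z)                                  # uniform margins (the copula)
    x0 = stats.gamma.ppf(u[:, 0], a=shapes[0], scale=scales[0])
    x1 = stats.gamma.ppf(u[:, 1], a=shapes[1], scale=scales[1])
    return x0, x1

spread, liquidity = gaussian_copula_gamma(5000, rho=0.6, shapes=(2.0, 3.0), scales=(50.0, 1.0))
print(np.corrcoef(spread, liquidity)[0, 1])   # dependence survives the non-normal margins
```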
Abstract:
The interaction effect is an important scientific interest in many areas of research. A common approach for investigating the interaction effect of two continuous covariates on a response variable is a cross-product term in multiple linear regression. In epidemiological studies, the two-way analysis of variance (ANOVA) type of method has also been used to examine the interaction effect by replacing the continuous covariates with their discretized levels. However, the implications of the model assumptions of either approach have not been examined, and the statistical validation has only focused on the general method, not specifically on the interaction effect. In this dissertation, we investigated the validity of both approaches based on the mathematical assumptions for non-skewed data. We showed that linear regression may not be an appropriate model when the interaction effect exists because it implies a highly skewed distribution for the response variable. We also showed that the normality and constant-variance assumptions required by ANOVA are not satisfied in the model where the continuous covariates are replaced with their discretized levels. Therefore, naïve application of the ANOVA method may lead to an incorrect conclusion. Given the problems identified above, we proposed a novel method, modified from the traditional ANOVA approach, to rigorously evaluate the interaction effect. The analytical expression of the interaction effect was derived based on the conditional distribution of the response variable given the discretized continuous covariates. A testing procedure that combines the p-values from each level of the discretized covariates was developed to test the overall significance of the interaction effect. According to the simulation study, the proposed method is more powerful than least squares regression and the ANOVA method in detecting the interaction effect when data come from a trivariate normal distribution. The proposed method was applied to a dataset from the National Institute of Neurological Disorders and Stroke (NINDS) tissue plasminogen activator (t-PA) stroke trial, and a baseline age-by-weight interaction effect was found to be significant in predicting the change from baseline in NIHSS at Month 3 among patients who received t-PA therapy.
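The cross-product formulation mentioned above corresponds to a regression of the form y = b0 + b1*x1 + b2*x2 + b3*x1*x2, where b3 carries the interaction. A minimal sketch with simulated data (not the NINDS trial data) using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
# Hypothetical covariates loosely named after the application; values are simulated.
age = rng.normal(65, 10, n)
weight = rng.normal(80, 12, n)
y = 1.0 + 0.02 * age + 0.03 * weight + 0.004 * age * weight + rng.normal(0, 1, n)

df = pd.DataFrame({"y": y, "age": age, "weight": weight})
# "age * weight" expands to both main effects plus the cross-product interaction term.
fit = smf.ols("y ~ age * weight", data=df).fit()
print("interaction p-value:", fit.pvalues["age:weight"])
```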
Abstract:
Health departments, research institutions, policy-makers, and healthcare providers are often interested in knowing the health status of their clients/constituents. Without the resources, financial or administrative, to go out into the community and conduct health assessments directly, these entities frequently rely on data from population-based surveys to supply the information they need. Unfortunately, these surveys are ill-equipped for the job due to sample size and privacy concerns. Small area estimation (SAE) techniques have excellent potential in such circumstances, but have been underutilized in public health due to lack of awareness and confidence in applying their methods. The goal of this research is to make model-based SAE accessible to a broad readership using clear, example-based learning. Specifically, we applied the principles of multilevel, unit-level SAE to describe the geographic distribution of HPV vaccine coverage among females aged 11-26 in Texas. Multilevel (3-level: individual, county, public health region) random-intercept logit models of HPV vaccination (receipt of ≥ 1 dose of Gardasil®) were fit to data from the 2008 Behavioral Risk Factor Surveillance System (outcome and level 1 covariates) and a number of secondary sources (group-level covariates). Sampling weights were scaled (level 1) or constructed (levels 2 and 3), and incorporated at every level. Using the regression coefficients (and standard errors) from the final models, I simulated 10,000 draws for each regression coefficient from the normal distribution and applied them to the logit model to estimate HPV vaccine coverage in each county and respective demographic subgroup. For simplicity, I only provide coverage estimates (and 95% confidence intervals) for counties. County-level coverage among females aged 11-17 varied from 6.8% to 29.0%. For females aged 18-26, coverage varied from 1.9% to 23.8%. Aggregated to the state level, these values translate to indirect state estimates of 15.5% and 11.4%, respectively, both of which fall within the confidence intervals for the direct estimates of HPV vaccine coverage in Texas (females 11-17: 17.7%, 95% CI: 13.6, 21.9; females 18-26: 12.0%, 95% CI: 6.2, 17.7). Small area estimation has great potential for informing policy, program development and evaluation, and the provision of health services. Harnessing the flexibility of multilevel, unit-level SAE to estimate HPV vaccine coverage among females aged 11-26 in Texas counties, I have provided (1) practical guidance on how to conceptualize and conduct model-based SAE, (2) a robust framework that can be applied to other health outcomes or geographic levels of aggregation, and (3) HPV vaccine coverage data that may inform the development of health education programs, the provision of health services, the planning of additional research studies, and the creation of local health policies.
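The simulation step can be illustrated as drawing coefficient vectors from normal distributions centred on the estimates and pushing each draw through the inverse-logit link. The sketch below uses hypothetical coefficients and, for brevity, ignores the covariance between coefficients, which a full analysis would account for.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical fitted log-odds model for one county/subgroup: intercept plus one covariate.
beta_hat = np.array([-1.8, 0.6])    # point estimates
se_hat = np.array([0.25, 0.10])     # standard errors
x = np.array([1.0, 0.4])            # design vector (intercept, covariate value)

# Draw 10,000 coefficient vectors from independent normals and apply the logit model.
draws = rng.normal(beta_hat, se_hat, size=(10_000, 2))
coverage = 1.0 / (1.0 + np.exp(-(draws @ x)))

print(coverage.mean(), np.percentile(coverage, [2.5, 97.5]))
```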
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review, and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) at ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed-effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best- and worst-case scenarios. Most variables (17/33 = 52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p ≤ 0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete-case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, estimating correct classification under upper (best-case scenario) and lower (worst-case scenario) bounds, may be more informative than multiple imputation, which provided results similar to complete case analysis.
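As a rough illustration of the multiple-imputation idea, the sketch below generates several completed datasets with scikit-learn's chained-equations imputer; the study itself imputed under a multivariate normal model, and the variables and missingness pattern here are invented.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(5)
# Hypothetical predictors (e.g. heart rate, respiratory rate, base deficit) with missing values.
X = rng.normal([90, 18, -4], [15, 4, 3], size=(200, 3))
X[rng.random(X.shape) < 0.2] = np.nan   # roughly 20% missingness

# One stochastic imputation per loop; analysis and pooling across the sets happen afterwards.
imputed_sets = [
    IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    for m in range(5)
]
print(len(imputed_sets), imputed_sets[0].shape)
```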
Abstract:
This paper assesses the along-strike variation of active bedrock fault scarps using long-range terrestrial laser scanning (t-LiDAR) data in order to determine the distribution behaviour of scarp height and subsequently calculate long-term throw rates. Five faults on Crete which display spectacular limestone fault scarps have been studied using high-resolution digital elevation model (HRDEM) data. We scanned several hundred square metres of the fault system, including the footwall, fault scarp and hanging wall of the investigated fault segment. The vertical displacement and the dip of the scarp were extracted every metre along the strike of the detected fault segment based on the processed HRDEM. The scarp variability was analysed using statistical and morphological methods. The analysis was done in a geographical information system (GIS) environment. Results show a normal distribution for the scanned fault scarp's vertical displacement. Based on this, the mean value of height was chosen to define the authentic vertical displacement. Consequently, the scarp can be divided into areas above, below and within the range of the mean (within one standard deviation), and the modifications of vertical displacement can be quantified. The fault segment can therefore be subdivided into areas which are influenced by external modification such as erosion and sedimentation processes. Moreover, to describe and measure the variability of vertical displacement along the strike of the fault, the semi-variance was calculated with the variogram method. This method is used to determine how much influence the external processes have had on the vertical displacement. By combining morphological and statistical results, the fault can be subdivided into areas with high external influences and areas with authentic fault scarps, which have little or no external influence. This subdivision is necessary for long-term throw-rate calculations, because without this differentiation the calculated rates would be misleading and the activity of a fault would be incorrectly assessed, with significant implications for seismic hazard assessment since fault slip rate data govern the earthquake recurrence. Furthermore, by using this workflow, areas with minimal external influences can be determined, not only for throw-rate calculations but also for determining sample sites for absolute dating techniques such as cosmogenic nuclide dating. The main outcomes of this study include: i) there is no direct correlation between the fault's mean vertical displacement and dip (R² less than 0.31); ii) without subdividing the scanned scarp into areas with differing amounts of external influence, the along-strike variability of vertical displacement is ±35%; iii) when the scanned scarp is subdivided, the variation of the vertical displacement of the authentic scarp (exposed by earthquakes only) is in a range of ±6% (this varies from 7 to 12% depending on the fault); iv) the calculation of the long-term throw rate (since 13 ka) for four scarps in Crete using the authentic vertical displacement gives 0.35 ± 0.04 mm/yr at Kastelli 1, 0.31 ± 0.01 mm/yr at Kastelli 2, 0.85 ± 0.06 mm/yr at the Asomatos fault (Sellia) and 0.55 ± 0.05 mm/yr at the Lastros fault.
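The semi-variance referred to above is the empirical variogram, gamma(h) = (1/2N) * sum of squared height differences over the N pairs of points separated by lag h. A small sketch on a synthetic 1-D profile (not the Crete t-LiDAR data):

```python
import numpy as np

def semivariogram(z, max_lag):
    """Empirical semivariance gamma(h) of a 1-D profile sampled every metre along strike."""
    gammas = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]
        gammas.append(0.5 * np.mean(d ** 2))
    return np.array(gammas)

rng = np.random.default_rng(6)
# Hypothetical scarp heights (m) sampled every metre along strike.
height = 8.0 + np.cumsum(rng.normal(0, 0.05, 500))
print(semivariogram(height, max_lag=10).round(3))
```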
Abstract:
Diluted nitride self-assembled In(Ga)AsN quantum dots (QDs) grown on GaAs substrates are potential candidates to emit in the windows of maximum transmittance for optical fibres (1.3-1.55 μm). In this paper, we analyse the effect of nitrogen addition on the indium desorption occurring during the capping process of InxGa1−xAs QDs (x = 1 and 0.7). The samples were grown by molecular beam epitaxy and studied by transmission electron microscopy (TEM) and photoluminescence techniques. The composition distribution inside the dots was determined by statistical moiré analysis and measured by energy-dispersive X-ray spectroscopy. First, the addition of nitrogen to In(Ga)As QDs gave rise to a strong redshift in the emission peak, together with a large loss of intensity and monochromaticity. Moreover, these samples showed changes in QD morphology as well as an increase in the density of defects. The statistical compositional analysis displayed a normal distribution in InAs QDs with an average In content of 0.7. Nevertheless, the addition of Ga and/or N leads to a bimodal distribution of the indium content with two separate QD populations. We suggest that nitrogen incorporation enhances indium fixation inside the QDs, and that the indium/gallium ratio plays an important role in this process. The strong redshift observed in the PL should be explained not only by the N incorporation but also by the higher In content inside the QDs.
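A common way to check whether a composition histogram is better described by one population or two, as in the bimodal indium distributions reported here, is to compare one- and two-component Gaussian mixtures. The example below uses simulated indium fractions, not the moiré or EDX measurements.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Hypothetical per-dot indium fractions: a bimodal case mimicking two QD populations.
x = np.concatenate([rng.normal(0.55, 0.04, 150), rng.normal(0.75, 0.04, 150)]).reshape(-1, 1)

# Compare one- and two-component fits; a large BIC drop favours the bimodal description.
bic = [GaussianMixture(n_components=k, random_state=0).fit(x).bic(x) for k in (1, 2)]
print({"1 component": round(bic[0], 1), "2 components": round(bic[1], 1)})
```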
Abstract:
Fractal and multifractal are concepts that have grown increasingly popular in recent years in soil analysis, along with the development of fractal models. One of the common steps is to calculate the slope of a linear fit, commonly using the least squares method. This shouldn't be a special problem; however, in many situations using experimental data the researcher has to select the range of scales at which to work, neglecting the rest of the points, in order to achieve the best linearity, which in this type of analysis is necessary. Robust regression is a form of regression analysis designed to circumvent some limitations of traditional parametric and non-parametric methods. With this method we don't have to assume that an outlier point is simply an extreme observation drawn from the tail of a normal distribution, and such points do not compromise the validity of the regression results. In this work we have evaluated the capacity of robust regression to select the points of the experimental data to be used, trying to avoid subjective choices. Based on this analysis we have developed a new working methodology that involves two basic steps: (i) evaluation of the improvement of the linear fit when consecutive points are eliminated, based on the R² value, so that the implications of reducing the number of points are considered; and (ii) evaluation of the significance of the difference between the slope fitted with the two extreme points and the slope fitted with the available points. We compare the results of applying this methodology and the commonly used least squares one. The data selected for these comparisons come from experimental soil roughness transects and from simulations based on the midpoint displacement method with added trends and noise. The results are discussed, indicating the advantages and disadvantages of each methodology.
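To make the contrast concrete, the snippet below fits the slope of a synthetic log-log scaling relation with ordinary least squares and with a robust estimator (Theil-Sen is used here purely as an example of a robust slope; the paper's robust-regression variant may differ).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Hypothetical log-log scaling data with one outlying scale, as in a fractal fit.
log_scale = np.linspace(0, 3, 20)
log_measure = 1.2 * log_scale + rng.normal(0, 0.05, 20)
log_measure[-1] += 1.5   # outlier at the largest scale

ols_slope = stats.linregress(log_scale, log_measure)[0]
robust_slope = stats.theilslopes(log_measure, log_scale)[0]   # median of pairwise slopes

print("OLS slope   :", round(ols_slope, 3))
print("Robust slope:", round(robust_slope, 3))
```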
Abstract:
This final degree project describes and analyses the comprehensive study of the effect of the vibrations produced by surface blasting carried out in the "Third Set of Locks" project executed for the expansion of the Panama Canal. A total of 53 records were compiled, data generated by monitoring with 7 seismographs during 10 production blasts carried out in 2010. The vibratory phenomenon has two fundamental parameters, the peak particle velocity (PPV) and the dominant frequency, which characterize how damaging it can be to civil structures; the aim is therefore to characterize and, above all, to predict them, which allows their proper control. Accordingly, the study consists of two parts: the first describes the behaviour of the ground by estimating the attenuation law for peak particle velocity using ordinary least squares regression; the second details a verifiable procedure for predicting the dominant frequency and the pseudo-velocity response spectrum (PVRS) based on the theory of Newmark & Hall. The following were obtained: (i) the ground attenuation law for different degrees of reliability, (ii) blast-design tools based on the charge-distance relationship, (iii) the demonstration that PPV values follow a log-normal distribution, (iv) the map of PPV isolines for the study area, (v) a detailed and valid technique for predicting the dominant frequency and the response spectrum, (vi) mathematical formulations of the amplification factors for displacement, velocity and acceleration, and (vii) a map of amplification isolines for the study area. The results obtained provide useful information for use in the design and control of subsequent blasts in the project. ABSTRACT This project work describes and analyzes the comprehensive study of the effect of the vibrations produced by surface blasting carried out in the "Third Set of Locks" project executed for the expansion of the Panama Canal. A total of 53 records were collected, with the data generated by the monitoring of 7 seismographs in 10 production blasts carried out in 2010. The vibratory phenomenon has two fundamental parameters, the peak particle velocity (PPV) and the dominant frequency, which characterize how damaging it can be to civil structures; the aim is therefore to characterize and, above all, to predict them, which allows proper control. Based on the above, the study consists of two parts; the first describes the behavior of the terrain by estimating the attenuation law for peak particle velocity using ordinary least squares regression analysis, and the second details a verifiable procedure for the prediction of the dominant frequency and pseudo-velocity response spectrum (PVRS) based on the theory of Newmark & Hall. 
The following have been obtained: (i) the attenuation law of the terrain for different degrees of reliability, (ii) blast design tools based on the charge-distance relationship, (iii) the demonstration that the values of PPV conform to a log-normal distribution, (iv) the map of PPV isolines for the study area, (v) a detailed and valid technique for predicting the dominant frequency and the response spectrum, (vi) mathematical formulations of the amplification factors for displacement, velocity and acceleration, and (vii) a map of amplification isolines for the study area. From the results obtained, the study provides useful information for use in the design and control of subsequent blasts in the project.
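The two core calculations of the first part, fitting an attenuation law of the form PPV = K * SD^(-beta) by least squares on log-transformed values and checking the log-normal scatter of PPV, can be sketched as follows with synthetic blast records (the real 53-record dataset is not reproduced here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Hypothetical blast records: charge weight W (kg), distance D (m), measured PPV (mm/s).
W = rng.uniform(50, 400, 53)
D = rng.uniform(100, 900, 53)
sd = D / np.sqrt(W)                                     # square-root scaled distance
ppv = 1000.0 * sd ** -1.6 * rng.lognormal(0, 0.3, 53)   # synthetic site law with scatter

# Fit log(PPV) = log(K) - beta * log(SD) by least squares, as in a standard attenuation law.
slope, logK = np.polyfit(np.log(sd), np.log(ppv), 1)
print("K =", round(np.exp(logK), 1), " beta =", round(-slope, 2))

# Check the log-normal scatter about the fitted law (Shapiro-Wilk on the log residuals).
resid = np.log(ppv) - (slope * np.log(sd) + logK)
print("Shapiro p on log residuals:", round(stats.shapiro(resid)[1], 3))
```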
Abstract:
Sugarcane is one of the most important crops in many countries of the world. The soils dedicated to this crop are usually compacted by machinery traffic during the harvest process. The combined use of geostatistics and fractal analysis has proved useful for studying them. The objective of this work was to determine the spatial changes in soil penetration resistance caused by machinery traffic during the sugarcane harvest in a Vertisol, applying a geostatistical-fractal methodology. The research was carried out during the 2008-2009 harvest season. Penetration resistance was evaluated at two moments, before and after the harvest. Sampling was performed systematically on a grid and along a transect, selecting 144 and 100 observations before and after the harvest, respectively, and 221 for the diagonal transect. Soil moisture content was also determined by the gravimetric method, for which 288 random samples were taken across the whole field. The results show that penetration resistance (RP) values followed a normal distribution below 5 cm depth; machinery traffic during the sugarcane harvest concentrated the spatial variability at scales smaller than the sampling scale (the nugget effect increased), increased the spatial correlation range, and redistributed the compaction zones (variations in the kriging maps). It also induced anti-persistence and anisotropy in some horizontal directions. An irregular vertical behaviour of RP was observed along the transect, where not only the machinery but also other factors such as the crop row, the row edge and cracks had an influence. ABSTRACT Sugarcane cultivation is one of the most important in many countries of the world. The soils dedicated to this crop are usually compacted by machinery traffic during the harvest process. The combined use of geostatistics with fractal analysis has proved useful for their study. The objective of the work was to determine the spatial changes in soil penetration resistance due to the influence of machinery traffic during the sugarcane harvest in a Vertisol, applying the geostatistical-fractal methodology. The investigation was carried out in the 2008-2009 harvest period. Penetration resistance was evaluated at two moments, before and after the harvest. Sampling was carried out systematically on a grid and along a transect, selecting 144 and 100 observations before and after the harvest, respectively, and 221 for the diagonal transect. Soil moisture content was also determined by the gravimetric method, for which 288 random samples were taken across the entire field. The results show that penetration resistance values followed a normal distribution deeper than 5 cm both before and after harvest. Traffic of the agricultural machinery for the sugarcane harvest concentrated the spatial variability at distances shorter than the sampling distance, reflected in an increase in the nugget effect. At the same time, an increase in the spatial correlation range and a redistribution of compaction areas were observed by studying the variations in the kriging maps. 
Another effect of the agricultural machinery traffic was to induce anti-persistence and anisotropy in some horizontal directions. However, in the vertical direction of the longest transect an irregular behaviour was induced not only by the machinery but also by other factors such as soil cracks, the crop rows and the row edges.
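The anti-persistence mentioned above is often summarized by a Hurst exponent H < 0.5. Under a fractional-Brownian-motion assumption the variogram scales as gamma(h) ~ h^(2H), so H can be estimated from the log-log slope; the sketch below applies this to a synthetic penetration-resistance transect, not the measured Vertisol data.

```python
import numpy as np

def hurst_from_variogram(z, lags):
    """Estimate the Hurst exponent H from the scaling gamma(h) ~ h**(2H) of a 1-D transect."""
    g = [0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(g), 1)
    return slope / 2.0

rng = np.random.default_rng(10)
# Hypothetical penetration-resistance transect (MPa) sampled at regular spacing.
rp = 1.5 + np.cumsum(rng.normal(0, 0.02, 221))
print(round(hurst_from_variogram(rp, lags=np.arange(1, 11)), 2))  # H < 0.5 suggests anti-persistence
```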
Abstract:
Since 2009, the Universidad Politécnica de Madrid (UPM) and the Università degli Studi di Firenze (UniFi), under the technical coordination of AMPHOS21, have participated in the research project "Strategies for Monitoring CO2 and other Gases in the Study of Natural Analogues", funded by the Fundación Ciudad de la Energía (CIUDEN) within the framework of the Compostilla OXYCFB300 Project (http://www.compostillaproject.eu) of the European Energy Program for Recovery (EEPR). The main objective of the project was the development and fine-tuning of surface monitoring methodologies to be applied in the surveillance and control of sites where geological storage of CO2 is carried out, analysing techniques that allow possible CO2 leaks to the atmosphere to be detected and quantified. The work was carried out both in natural analogues (Spanish and Italian) and at the Hontomín Technology Development Plant (TDP) for CO2 storage. The techniques analysed focus on the measurement of gases and of surface waters (runoff and springs). Regarding gas measurement, the CO2 flux emanating from the soil to the atmosphere and the applicability of natural tracers (such as radon) for the detection and identification of CO2 leaks were analysed. Regarding the chemical analysis of the waters, the geochemical and isotopic data and the gases dissolved in the waters around the Hontomín TDP were analysed in order to determine which parameters are the most appropriate for detecting a possible migration of the injected CO2, or of the brine, to surface environments. CO2 flux measurements were made with the accumulation chamber technique. Although this technique has been developed and applied in different scientific fields, it was considered necessary to adapt a measurement and data-analysis protocol to the specific characteristics of CO2 capture and storage (CCS) projects, where the expected CO2 fluxes are low and, in the event of a leak, small variations in flux values must be detected against a high "noise" in the signal caused by biological activity in the soil. CO2 flux measurement with the accumulation chamber technique can be performed without cleaning the surface where the chamber is placed, or by cleaning it and waiting for the flux to re-equilibrate after the disturbance to the system. However, the results obtained after cleaning and waiting show less dispersion, which indicates that this procedure is the best for monitoring geological CO2 storage complexes. The resulting measurement protocol, used to obtain the CO2 flux baseline at Hontomín, follows these steps: a) the measurement point is prepared with a spatula, cleaning and removing the vegetation cover or the first compact soil layer; b) a waiting period is allowed before the flux measurement, facilitating re-equilibration of the gas flux after the disturbance caused to the soil; and c) the CO2 flux measurement is made. Once the CO2 flux has been measured and any anomaly zones have been detected, the amount of CO2 escaping to the atmosphere (total output) must be estimated in order to quantify the possible leak. There is a wide range of methodologies for making this estimate, and it is necessary to understand which are the most appropriate for obtaining the most representative value for the system. 
In this thesis six statistical techniques are compared: the arithmetic mean, the unbiased estimator of the mean (applying Sichel's function), resampling with replacement (bootstrap), separation into different populations by graphical methods and by methods based on maximum-likelihood criteria, and sequential Gaussian simulation. For this analysis eight sampling campaigns were carried out, both at the Hontomín Technology Development Plant and in natural analogues (Italian and Spanish). The results show that sequential Gaussian simulation is usually the most accurate method for the calculation; however, there are occasions where other methods are more appropriate. Consequently, a working procedure was developed for selecting the method that provides the best estimator. This procedure consists, first, of a variographic analysis. If there is autocorrelation between the data, modelled by the variogram, the best technique for calculating the total output and its confidence interval is sequential Gaussian simulation (sGs). If the data are independent, the sample distribution must be checked, applying the arithmetic mean or the unbiased estimator of the mean (Sichel) for normal or log-normal data, respectively. When the data are not normal or correspond to a mixture of populations, the best estimation technique is resampling with replacement (bootstrap). Following this procedure, the maximum value of the confidence interval was of the order of ±20/25%, with most values between ±3.5% and ±8%. Identifying the different sample populations in the CO2 flux data can help to interpret the results, since this distribution is affected by the presence of several geochemical processes such as, for example, a geological or biological source of CO2. This analysis can therefore be a useful tool in the monitoring programme, whose main objective is to demonstrate that there are no leaks from the reservoir to the atmosphere and, if they occur, to detect and quantify them. The results obtained show that the best process for separating populations is based on maximum-likelihood criteria. Graphical procedures, although guidelines exist for performing them, involve a certain degree of subjectivity in the interpretation, so the results are less reproducible. During the development of the thesis, the relationship between CO2 and the radon isotopes (222Rn and 220Rn) was analysed in natural analogues; in all CO2 emission zones a positive relationship was detected between 222Rn concentration values in soil air and CO2 flux. Comparing the 220Rn concentration with the CO2 flux, the relationship is not as clear: in some cases it increases, while in others a decrease is detected, a fact that seems to be related to the depth of origin of the radon. These results would confirm the possible application of radon isotopes as tracers of the origin of the gases and their application in leak detection. 
With respect to the determination of the CO2 flux baseline at the Hontomín TDP, accumulation chamber measurements were made in the vicinity of the oil exploration wells drilled in the eighties and named H-1, H-2, H-3 and H-4, in the zone where the injection well (H-I) and the monitoring well (H-A) will be installed, and in the vicinity of the southern fault. From November 2009 to April 2011 seven sampling campaigns were carried out, acquiring more than 4,000 CO2 flux records with which the baseline and its seasonal variation were determined. The values obtained were low (mean values between 5 and 13 g•m-2•d-1), and few anomalous values were detected, mainly in the vicinity of well H-2. However, these values could not be associated with a deep source of CO2 and were probably more related to biological processes, such as soil respiration. No anomalous values were detected near the fracture system (Ubierna fault), since in this zone the flux values are as low as at the other sampling points. In this sense, the CO2 flux values appear to be controlled by biological activity, corroborated by the fact that the lowest values were obtained during the autumn-winter months and increase in warm periods. Two groups of reference values were calculated: the first group (UCL50) is 5 g•m-2•d-1 in non-ploughed areas in the autumn-winter months, and 3.5 and 12 g•m-2•d-1 in spring-summer for ploughed and non-ploughed areas, respectively. The second group (UCL99) corresponds to 26 g•m-2•d-1 during the autumn-winter months in non-ploughed areas, and 34 and 42 g•m-2•d-1 for the spring-summer months in ploughed and non-ploughed areas, respectively. Fluxes higher than these reference values could be indicative of a possible leak during and after injection. The first geochemical and isotopic data for the surface waters (runoff and springs) in the Hontomín-Huermeces area were analysed. The data suggest that the waters studied are related to meteoric waters with a shallow hydrogeological circuit, characterized by relatively low TDS values (below 800 mg/L) and a Ca2+(Mg2+)-HCO3− hydrogeochemical facies. Some spring waters are characterized by elevated NO3− concentrations (up to 123 mg/L), suggesting anthropogenic contamination. Anomalous concentrations of Cl−, SO42−, As, B and Ba were obtained in two springs close to the oil wells and in the Ubierna river; these components are probably indicators of possible mixing between the deep and shallow aquifers. The study of the gases dissolved in the waters also evidences the shallow circuit of the waters, which is generally dominated by the atmospheric component (N2, O2 and Ar). However, in some cases the predominant gas was CO2 (with concentrations reaching 63% v/v), although the carbon isotope values (<-17.7 ‰) show that it is most likely related to a biological origin. The geochemical and isotopic data for the surface waters obtained in the Hontomín area can be considered as the background values against which to compare during the operational phase, closure and post-closure. 
In this sense, the composition of the major and trace elements, the carbon isotopic composition of the dissolved CO2 and of the TDIC (total dissolved inorganic carbon) and some trace elements can be considered suitable parameters for detecting the migration of CO2 to surface environments. ABSTRACT Since 2009, a group made up of Universidad Politécnica de Madrid (UPM; Spain) and Università degli Studi Firenze (UniFi; Italy) has been taking part in a joint project called "Strategies for Monitoring CO2 and other Gases in Natural analogues". The group was coordinated by AMPHOS XXI, a private company established in Barcelona. The project was financially supported by Fundación Ciudad de la Energía (CIUDEN; Spain) as a part of the EC-funded OXYCFB300 project (European Energy Program for Recovery -EEPR-; www.compostillaproject.eu). The main objectives of the project were to develop and optimize analytical methodologies to be applied at the surface to monitor and verify the feasibility of geologically stored carbon dioxide. These techniques were oriented to detect and quantify possible CO2 leakages to the atmosphere. Several investigations were made in natural analogues from Spain and Italy and in the Technological Development Plant for CO2 injection at Hontomín (Burgos, Spain). The techniques studied were mainly focused on the measurement of diffuse soil gases and of surface and shallow waters. The soil-gas measurements included the determination of CO2 flux and the application of natural trace gases (e.g. radon) that may help to detect any CO2 leakage. As far as the water chemistry is concerned, geochemical and isotopic data related to surface and spring waters and dissolved gases in the area of the TDP of Hontomín were analyzed to determine the most suitable parameters to trace the migration of the injected CO2 into the near-surface environments. The accumulation chamber method was used to measure the diffuse emission of CO2 at the soil-atmosphere interface. Although this technique has widely been applied in different scientific areas, it was considered of the utmost importance to adapt the optimum methodology for measuring the CO2 soil flux and estimating the total CO2 output to the specific features of the site where CO2 is to be stored shortly. During the pre-injection phase CO2 fluxes are expected to be relatively low, whereas in the intra- and post-injection phases, if leakages occur, small variations in CO2 flux have to be detected against the "noise" produced by the biological activity of the soil (soil respiration). CO2 flux measurements by the accumulation chamber method could be performed without vegetation clearance or after vegetation clearance. However, the results obtained after clearance show less dispersion, and this suggests that this procedure is more suitable for monitoring CO2 storage sites. The measurement protocol, applied for the determination of the CO2 flux baseline at Hontomín, included the following steps: a) cleaning and removal of both the vegetal cover and the top 2 cm of soil, b) waiting to reduce the flux perturbation due to the soil removal, and c) measuring the CO2 flux. Once the CO2 flux measurements were completed and any anomalous zones detected, the total CO2 output was estimated to quantify the amount of CO2 released to the atmosphere in each of the studied areas. 
There is a wide range of methodologies for the estimation of the CO2 output, which were applied to understand which one is the most representative. In this study six statistical methods are presented: the arithmetic mean, the minimum-variance unbiased estimator, bootstrap resampling, partitioning of data into different populations with graphical and maximum-likelihood procedures, and sequential Gaussian simulation. Eight campaigns were carried out at the Hontomín CO2 Storage Technology Development Plant and in natural CO2 analogues. The results show that sequential Gaussian simulation is the most accurate method to estimate the total CO2 output and its confidence interval. Nevertheless, a variety of statistical methods were also used. As a consequence, an application procedure for selecting the most realistic method was developed. The first step in estimating the total emanation rate was the variogram analysis. If the relationship among the data can be explained with the variogram, the best technique to calculate the total CO2 output and its confidence interval is the sequential Gaussian simulation method (sGs). If the data are independent, their distribution is to be analyzed. For normal and log-normal distributions the proper methods are the arithmetic mean and the minimum-variance unbiased estimator, respectively. If the data are not normal (log-normal) or are a mixture of different populations, the best approach is bootstrap resampling. According to these steps, the maximum confidence interval was about ±20/25%, with most values between ±3.5% and ±8%. Partitioning of CO2 flux data into different populations may help to interpret the data, as their distribution can be affected by different geochemical processes, e.g. geological or biological sources of CO2. Consequently, it may be an important tool in a CCS monitoring program, where the main goal is to demonstrate that there are no leakages from the reservoir to the atmosphere and, if they occur, to be able to detect and quantify them. Results show that the partitioning of populations is better performed by maximum-likelihood criteria, since graphical procedures have a degree of subjectivity in the interpretation and results may not be reproducible. The relationship between CO2 flux and radon isotopes (222Rn and 220Rn) was studied in natural analogues. In all emission zones, a positive relation between 222Rn and CO2 was observed. However, the relationship between the activity of 220Rn and the CO2 flux is not clear. In some cases the 220Rn activity indeed increased with the CO2 flux; in other measurements a decrease was recognized. We can speculate that this effect was possibly related to the route (deep or shallow) of the radon source. These results may confirm the possible use of the radon isotopes as tracers for the gas origin and their application in the detection of leakages. With respect to the CO2 flux baseline at the TDP of Hontomín, soil flux measurements in the vicinity of oil boreholes, drilled in the eighties and named H-1 to H-4, and of the injection and monitoring wells were performed using an accumulation chamber. Seven surveys were carried out from November 2009 to summer 2011. More than 4,000 measurements were used to determine the baseline flux of CO2 and its seasonal variations. The measured values were relatively low (from 5 to 13 g•m-2•day-1) and few outliers were identified, mainly located close to the H-2 oil well. 
Nevertheless, these values cannot be associated with a deep source of CO2, being more likely related to biological processes, i.e. soil respiration. No anomalies were recognized close to the deep fault system (Ubierna Fault) detected by geophysical investigations; there, the CO2 flux is indeed as low as at the other measurement stations. CO2 fluxes appear to be controlled by the biological activity, since the lowest values were recorded during the autumn-winter seasons and they tend to increase in warm periods. Two sets of reference CO2 flux values were calculated: UCL50 values of 5 g•m-2•d-1 for non-ploughed areas in the autumn-winter seasons and of 3.5 and 12 g•m-2•d-1 for ploughed and non-ploughed areas, respectively, in spring-summer; and UCL99 values of 26 g•m-2•d-1 for autumn-winter in non-ploughed areas and of 34 and 42 g•m-2•d-1 for spring-summer in ploughed and non-ploughed areas, respectively. Fluxes higher than these reference values could be indicative of possible leakage during the operational and post-closure stages of the storage project. The first geochemical and isotopic data related to surface and spring waters and dissolved gases in the area of Hontomín-Huermeces (Burgos, Spain) are presented and discussed. The chemical features of the spring waters suggest that they are related to a shallow hydrogeological system, as the concentration of Total Dissolved Solids approaches 800 mg/L with a Ca2+(Mg2+)-HCO3− composition, similar to that of the surface waters. Some spring waters are characterized by relatively high concentrations of NO3− (up to 123 mg/L), unequivocally suggesting an anthropogenic source. Anomalous concentrations of Cl−, SO42−, As, B and Ba were measured in two springs, discharging a few hundred meters from the oil wells, and in the Rio Ubierna. These contents are possibly indicative of mixing processes between deep and shallow aquifers. The chemistry of the dissolved gases also evidences the shallow circuits of the Hontomín-Huermeces waters, mainly characterized by an atmospheric source as highlighted by the contents of N2, O2, Ar and their relative ratios. Nevertheless, significant concentrations (up to 63% by vol.) of isotopically negative CO2 (<−17.7‰ V-PDB) were found in some water samples, likely related to a biogenic source. The geochemical and isotopic data of the surface and spring waters in the surroundings of Hontomín can be considered as background values when intra- and post-injection monitoring programs are carried out. In this respect, major and minor solutes, the isotopic carbon of dissolved CO2 and TDIC (Total Dissolved Inorganic Carbon) and selected trace elements can be considered as useful parameters to trace the migration of the injected CO2 into near-surface environments.
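Of the estimation techniques compared (arithmetic mean, Sichel's estimator, bootstrap, population partitioning, sequential Gaussian simulation), the bootstrap is the easiest to sketch: resample the measured fluxes with replacement and read the confidence interval off the resampled totals. The example below uses synthetic flux values and a hypothetical plot area, not the Hontomín surveys.

```python
import numpy as np

rng = np.random.default_rng(11)
# Hypothetical CO2 flux survey (g m-2 d-1) over a hypothetical 10,000 m2 plot.
flux = rng.lognormal(mean=1.8, sigma=0.6, size=120)
area_m2 = 10_000

# Bootstrap resampling with replacement to put a confidence interval on the total output.
totals = np.array([
    area_m2 * np.mean(rng.choice(flux, size=flux.size, replace=True))
    for _ in range(5000)
])
print("total (g/d):", round(area_m2 * flux.mean()),
      " 95% CI:", np.percentile(totals, [2.5, 97.5]).round())
```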
Abstract:
The aim of this doctoral thesis is the development of a methodology for the automatic detection of anomalies from hyperspectral data, or imaging spectrometry, and their mapping under different surface and terrain conditions. Hyperspectral technology, or imaging spectrometry, offers the potential to characterize precisely the state of the materials that make up the various surfaces on the basis of their spectral response. This state is usually variable, whereas the observations are produced in a limited number and under particular illumination conditions. As the number of spectral bands increases, the number of samples needed to define the classes spectrally also increases, in what is known as the Curse of Dimensionality or the Hughes effect (Bellman, 1957); such samples are usually unavailable and costly to obtain, and one only has to think of what this implies in planetary exploration. Under the definition of an anomaly in its spectral sense as the response of an image pixel that is significantly different from its surroundings, the central problem addressed in the thesis lies, first, in how to reduce the dimensionality of the information in the hyperspectral data, discriminating the information most significant for the detection of anomalous responses, and second, in establishing the relationship between detected spectral anomalies and what we have called informational anomalies, that is, anomalies that provide some kind of real information about the surfaces or materials that produce them. In the detection of anomalous responses, no prior knowledge of the targets is assumed, so that pixels are separated automatically according to their spectral information, significantly differentiated with respect to a background that is estimated either globally for the whole scene or locally by segmenting the image. The methodology developed has focused on the implications of the statistical definition of the spectral background, proposing a new approach that makes it possible to discriminate anomalies with respect to backgrounds segmented into different groups of wavelengths of the spectrum, exploiting the potential for separation between the reflective and the emissive electromagnetic spectrum. The efficiency of the main anomaly detection algorithms has been studied, contrasting the results of the RX algorithm (Reed and Xiaoli, 1990), adopted as a standard by the scientific community, with the UTD method (Uniform Targets Detector), its variant RXD-UTD, subspace-based methods such as SSRX (Subspace RX), and methods based on image subspace projections, such as OSPRX (Orthogonal Subspace Projection RX) and PP (Projection Pursuit). A new method has been developed, evaluated and contrasted against the previous ones; it is a variation of PP and describes the spectral background through discriminant analysis of bands of the electromagnetic spectrum, separating the anomalies with the algorithm called Thermal Background Anomaly Detector (DAFT), applicable to sensors that record data in the emissive spectrum. The different anomaly detection methods have been evaluated in ranges of the electromagnetic spectrum covering the visible and near infrared (VNIR), short-wave infrared (SWIR), mid infrared (MIR) and thermal infrared (TIR). 
The response of the surfaces at the different wavelengths of the electromagnetic spectrum, together with their surroundings, influences the type and frequency of the spectral anomalies they may produce. For this reason, hyperspectral data cubes from airborne sensors whose strategies and designs for the spectrometric construction of the image differ have been used in the research. Test data sets from the sensors AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been evaluated. Experiments have been designed over natural, urban and semi-urban areas of differing complexity. The behaviour of the different anomaly detectors has been evaluated through 23 tests corresponding to 15 study areas grouped into 6 spaces or scenarios: Urban - E1, Semi-urban/Industrial/Urban periphery - E2, Forest - E3, Agricultural - E4, Geological/Volcanic - E5, and Other Spaces (Water, Clouds and Shadows) - E6. The type of sensors evaluated is characterized by recording images in a wide range of narrow, contiguous bands of the electromagnetic spectrum. The thesis has focused on the development of techniques that make it possible to automatically separate and extract pixels, or groups of pixels, whose spectral signature differs in a discriminant way from those around them, adopting as the sample space part or all of the spectral bands in which the hyperspectral sensor has recorded radiance. A factor to take into account in the research has been the measuring instrument itself, that is, the characterization of the different subsystems, imaging and auxiliary sensors, involved in the process. In order to use the measured data quantitatively, it was necessary to define the spatial and spectral relationships of the sensor with the observed surface and with the potential anomalies and target detection patterns. The influence of the type of sensor on anomaly detection has been analysed, both in its spectral configuration and in the design strategies adopted when recording the radiation coming from the surfaces, the two main types of sensors studied being whiskbroom (rotating-mirror) scanners and pushbroom scanners. Different scenarios were defined in the research, which made it possible to cover a wide variability of geomorphological environments and cover types, in Mediterranean, mid-latitude and tropical settings. In summary, this thesis presents an anomaly detection technique for hyperspectral data called DAFT, a variant of PP, based on a dimensionality reduction that projects the background onto a range of thermal-spectrum wavelengths distinct from the projection of the anomalies or targets with no known spectral signature. The proposed methodology has been tested with real hyperspectral images from different sensors and in different scenarios or spaces, and therefore with different spectral backgrounds, and the results show the benefits of the approach in the detection of a wide variety of objects whose spectral signatures deviate sufficiently from the background. 
The technique turns out to be automatic in the sense that no parameter tuning is needed, giving significant results in all cases. Even subpixel-sized objects, which cannot be distinguished by the human eye in the original image, can be detected as anomalies. In addition, a comparison is made between the proposed approach, the popular RX technique and other detectors, in both their global and local modes. The proposed method outperforms the others in certain scenarios, demonstrating its ability to reduce the proportion of false alarms. The results of the automatic DAFT algorithm developed have demonstrated an improvement in the qualitative definition of the spectral anomalies that identify distinct entities on or below the surface, replacing the classical normal-distribution model with a robust method that considers different alternatives from the very moment of acquisition of the hyperspectral data. To achieve this, it was necessary to analyse the relationship between biophysical parameters, such as the reflectance and emissivity of the materials, and the spatial distribution of the detected entities with respect to their surroundings. Finally, the DAFT algorithm has been chosen as the most suitable for sensors that acquire data in the TIR, since it shows the best agreement with the reference data and demonstrates great computational efficiency, which facilitates its implementation in a mapping system that automatically projects the detected anomalies onto a geographic reference frame, confirming a significant step towards what is called a real-time mapping system. The aim of this thesis is to develop a specific methodology to be applied in automatic anomaly detection processes using hyperspectral data, also called hyperspectral scenes, and to improve classification processes. Several scenarios, areas and their relationship with surfaces and objects have been tested. The spectral characteristics of the reflectance and emissivity parameters in the pattern recognition of urban materials in several hyperspectral scenes have also been tested. Spectral ranges of the visible-near infrared (VNIR), shortwave infrared (SWIR) and thermal infrared (TIR) from hyperspectral data cubes of AHS (Airborne Hyperspectral System), HyMAP Imaging Spectrometer, CASI (Compact Airborne Spectrographic Imager), AVIRIS (Airborne Visible Infrared Imaging Spectrometer), HYDICE (Hyperspectral Digital Imagery Collection Experiment) and MASTER (MODIS/ASTER Simulator) have been used in this research. It is assumed that there is no prior knowledge of the targets in anomaly detection. Thus, the pixels are automatically separated according to their spectral information, significantly differentiated with respect to a background, either globally for the full scene, or locally by image segmentation. Several experiments on different scenarios have been designed, analyzing the behavior of the standard RX anomaly detector and of different subspace-, projection- and segmentation-based anomaly detection methods. Results and their consequences for unsupervised classification processes are discussed. Detection of spectral anomalies aims at automatically extracting pixels that show significant responses in relation to their surroundings. This thesis deals with the unsupervised technique of target detection, also called anomaly detection. 
Since this technique assumes no prior knowledge about the target or the statistical characteristics of the data, the only available option is to look for objects that are differentiated from the background. Several methods have been developed in recent decades, allowing a better understanding of the relationships between image dimensionality and the optimization of search procedures, as well as of the subpixel differentiation of the spectral mixture and its implications for anomalous responses. At the same time, imaging spectrometry has proven to be efficient in the characterization of materials, based on statistical methods that use specific reflection and absorption bands. Spectral configurations in the VNIR, SWIR and TIR have been successfully used for mapping materials in different urban scenarios. There has been increasing interest in the use of high-resolution data (both spatial and spectral) to detect small objects and to discriminate surfaces in areas of urban complexity. This has come to be known as target detection, which can be either supervised or unsupervised. In supervised target detection, algorithms rely on prior knowledge, such as the target's spectral signature. The detection process for matching signatures is not straightforward, due to the complications of relating data recorded by the airborne sensor to material spectra measured on the ground. This can be further complicated by the large number of possible objects of interest, as well as by uncertainty as to the reflectance or emissivity of these objects and surfaces. An important objective in this research is to establish relationships that allow linking spectral anomalies with what can be called informational anomalies and, therefore, to identify information related to anomalous responses in certain places rather than simply spotting differences from the background. The development in recent years of new hyperspectral sensors and techniques widens the possibilities for applications in remote sensing of the Earth. Remote sensing systems measure and record the electromagnetic disturbances that the surveyed objects induce in their surroundings, by means of different sensors mounted on airborne or space platforms. Map updating is important for managers and decision makers because of the fast changes that usually occur in natural, urban and semi-urban areas. It is necessary to optimize the methodology in order to obtain the best from remote sensing techniques applied to hyperspectral data. The first problem with hyperspectral data is reducing dimensionality while keeping as much information as possible. Hyperspectral sensors considerably increase the amount of information; this allows better precision in separating materials, but at the same time a larger number of parameters must be estimated, and precision drops as the number of bands increases. This is known as the Hughes effect, related to the curse of dimensionality (Bellman, 1957). Hyperspectral imagery allows us to discriminate between a huge number of different materials; however, some land and urban covers are made of similar materials and respond similarly, which produces confusion in the classification. The training and the algorithm used for mapping are also important for the final result, and some properties of the thermal spectrum for detecting land cover will be studied.
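The abstract describes DAFT's projection step only at a high level, so as a generic illustration of the dimensionality-reduction stage that mitigates the Hughes effect, the sketch below projects a cube onto its leading principal components before any detector is applied. This is a standard PCA projection used here only as an example, not the Thesis's own projection onto the thermal range; the function name is hypothetical.

```python
import numpy as np

def reduce_bands_pca(cube, n_components=10):
    """Project a (rows, cols, bands) cube onto its leading principal components."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(np.float64)

    mu = pixels.mean(axis=0)
    centered = pixels - mu

    # Eigen-decomposition of the band covariance matrix.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]   # keep the highest-variance directions
    basis = eigvecs[:, order]

    reduced = centered @ basis
    return reduced.reshape(rows, cols, n_components), basis
```

Background statistics estimated in this lower-dimensional space involve far fewer parameters, which is the usual motivation for reducing the spectral dimension before anomaly detection.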
In summary, this Thesis presents a new technique for anomaly detection in hyperspectral data, called DAFT, as a PP variant, based on dimensionality reduction by projecting anomalies or targets with unknown spectral signature with respect to the background over a range of thermal-spectrum wavelengths. The proposed methodology has been tested with hyperspectral images from different imaging spectrometers corresponding to several places or scenarios, and therefore to different spectral backgrounds. The results show the benefits of the approach for the detection of a variety of targets whose spectral signatures deviate sufficiently from the background. DAFT is an automated technique in the sense that no parameter adjustment is necessary, providing significant results in all cases. Subpixel anomalies that cannot be distinguished by the human eye in the original image can nevertheless be detected as outliers, owing to the projection of VNIR endmembers with very strong thermal contrast. Furthermore, a comparison between the proposed approach and the well-known RX detector is performed in both global and local modes. The proposed method outperforms the existing ones in particular scenarios, demonstrating its ability to reduce the probability of false alarms. The results of the automatic DAFT algorithm have demonstrated an improvement in the qualitative definition of the spectral anomalies by replacing the classical normal-distribution model with a robust method. To achieve this, it has been necessary to analyze the relationship between biophysical parameters, such as reflectance and emissivity, and the spatial distribution of detected entities with respect to their environment, for example buried or semi-buried materials, or building covers of asbestos, cellular polycarbonate-PVC or metal composites. Finally, the DAFT method has been chosen as the most suitable for anomaly detection with imaging spectrometers that acquire data in the thermal infrared spectrum, since it presents the best results in comparison with the reference data and demonstrates great computational efficiency that facilitates its implementation in a mapping system, moving towards what is called Real-Time Mapping.
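The comparison above distinguishes the RX detector's global and local modes. As a complement to the global sketch given earlier, the following is a minimal illustrative local RX variant in which background statistics come from a sliding outer window with an inner guard region; the window sizes and function name are assumptions for illustration only, not the configuration used in the Thesis.

```python
import numpy as np

def rx_local(cube, outer=15, guard=5):
    """Local RX scores: background mean/covariance from an outer window minus a guard window."""
    rows, cols, bands = cube.shape
    scores = np.zeros((rows, cols))
    ro, rg = outer // 2, guard // 2

    for i in range(rows):
        for j in range(cols):
            r0, r1 = max(0, i - ro), min(rows, i + ro + 1)
            c0, c1 = max(0, j - ro), min(cols, j + ro + 1)
            window = cube[r0:r1, c0:c1, :].reshape(-1, bands).astype(np.float64)

            # Exclude the guard region around the pixel under test so the
            # target does not contaminate the background estimate.
            mask = np.ones((r1 - r0, c1 - c0), dtype=bool)
            mask[max(0, i - rg - r0):i + rg + 1 - r0,
                 max(0, j - rg - c0):j + rg + 1 - c0] = False
            background = window[mask.ravel()]

            mu = background.mean(axis=0)
            cov = np.cov(background, rowvar=False) + 1e-6 * np.eye(bands)
            d = cube[i, j, :].astype(np.float64) - mu
            scores[i, j] = d @ np.linalg.solve(cov, d)
    return scores
```

Local background estimation adapts to spatially varying scenes (urban versus natural covers, for instance), which is the usual reason such global/local comparisons are reported.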