16 results for principal component analysis (PCA)

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

FBGs are excellent strain sensors because of their small size and multiplexing capability; tens to hundreds of sensors can be embedded into a structure, as has already been demonstrated. Nevertheless, they only provide strain measurements at local points, so unless the damage affects the strain readings in a distinguishable manner, it will go undetected. Principal Component Analysis (PCA) is a multivariate analysis technique that reduces a complex data set to a lower dimension and reveals hidden patterns that underlie it. This paper shows the experimental results obtained on the wing of a UAV instrumented with 32 FBGs, before and after small damage was introduced. The PCA algorithm was able to distinguish the damage cases, even for small cracks.
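As an illustration of this kind of PCA-based anomaly detection on FBG strain data, the sketch below fits a baseline PCA model on healthy-state readings and flags test cases with an abnormal reconstruction (Q) error. The array shapes, component count and 95th-percentile threshold are assumptions, not values from the paper.

```python
# Minimal sketch of PCA-based novelty detection on FBG strain data
# (illustrative only; array names and the 95% threshold are assumptions).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 32))   # 200 load cases x 32 FBG strain readings (healthy)
test = rng.normal(size=(50, 32))        # readings from an unknown structural state

pca = PCA(n_components=5).fit(baseline)           # retain the dominant strain patterns
residual = test - pca.inverse_transform(pca.transform(test))
q_stat = np.sum(residual**2, axis=1)              # Q (squared prediction error) per sample

baseline_q = np.sum((baseline - pca.inverse_transform(pca.transform(baseline)))**2, axis=1)
threshold = np.percentile(baseline_q, 95)         # assumed alarm threshold
print("suspected damage cases:", np.where(q_stat > threshold)[0])
```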

Relevance:

100.00%

Publisher:

Abstract:

The use of a common environment for processing different powder foods in industry has increased the risk of finding peanut traces in powder foods. The analytical methods commonly used for peanut detection, such as enzyme-linked immunosorbent assay (ELISA) and real-time polymerase chain reaction (RT-PCR), offer high specificity and sensitivity, but they are destructive, time-consuming and require highly skilled experimenters. Here, the feasibility of NIR hyperspectral imaging (HSI) is studied for the detection of peanut traces down to 0.01% by weight. A principal component analysis (PCA) was carried out on a dataset of peanut and flour spectra. The obtained loadings were applied to HSI images of wheat flour samples adulterated with peanut traces. As a result, the HSI images were reduced to score images with enhanced contrast between peanut and flour particles. Finally, a threshold was fixed on the score images to obtain a binary classification image, and the percentage of peanut adulteration was compared with the percentage of pixels identified as peanut particles. This study allowed the detection of peanut traces down to 0.01% and the quantification of peanut adulteration from 10% to 0.1% with a coefficient of determination (r²) of 0.946. These results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures such as RT-PCR and ELISA, enabling enhanced quality-control surveillance on food-product processing lines.
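The score-image workflow described above can be sketched roughly as follows; the synthetic spectra, image size and 99th-percentile threshold are assumptions for illustration only.

```python
# Rough sketch of the score-image idea: fit PCA on pure-ingredient spectra, project
# each pixel of a hyperspectral cube, then threshold. Shapes and threshold are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
flour_spectra = rng.normal(0.0, 1.0, size=(300, 200))    # 300 spectra x 200 NIR bands
peanut_spectra = rng.normal(0.5, 1.0, size=(300, 200))
pca = PCA(n_components=2).fit(np.vstack([flour_spectra, peanut_spectra]))

cube = rng.normal(0.0, 1.0, size=(64, 64, 200))           # hyperspectral image (rows x cols x bands)
scores = pca.transform(cube.reshape(-1, 200))[:, 0].reshape(64, 64)  # PC1 score image

binary = scores > np.percentile(scores, 99)                # assumed threshold
print("pixels flagged as peanut: %.3f%%" % (100 * binary.mean()))
```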

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a solution to the problem of recognizing the gender of a human face from an image. We adopt a holistic approach, using the cropped and normalized texture of the face as input to a Naïve Bayes classifier. We introduce the Class-Conditional Probabilistic Principal Component Analysis (CC-PPCA) technique to reduce the dimensionality of the feature vectors used for classification and to enforce the independence assumption of the classifier. This new approach has the desirable property of a simple parametric model for the marginals, and the model can be estimated from very little data. In the experiments conducted, CC-PPCA achieves 90% classification accuracy, a result very similar to the best reported in the literature. The proposed method is very simple to train and implement.
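The pipeline below is only an illustrative stand-in for this idea, pairing an ordinary PCA reduction with Gaussian Naïve Bayes in scikit-learn; it is not the authors' CC-PPCA, and the face textures and labels are synthetic assumptions.

```python
# Illustrative stand-in for the class-conditional idea (not the authors' CC-PPCA):
# reduce face textures with PCA and classify with Gaussian Naive Bayes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 32 * 32))      # 400 cropped/normalized face textures, flattened
y = rng.integers(0, 2, size=400)         # 0 = female, 1 = male (synthetic labels)

clf = make_pipeline(PCA(n_components=40, whiten=True), GaussianNB())
print("cross-validated accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())
```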

Relevance:

100.00%

Publisher:

Abstract:

Video-based vehicle detection is the focus of increasing interest because of its potential for collision avoidance. In particular, vehicle verification is especially challenging due to the enormous variability of vehicles in size, color, pose, etc. In this paper, a new approach based on supervised learning using Principal Component Analysis (PCA) is proposed that addresses the main limitations of existing methods. In contrast to classical approaches, which train a single classifier regardless of the relative position of the candidate (thus ignoring valuable pose information), a region-dependent analysis is performed by considering four different areas. In addition, a study of how classification performance evolves with the dimensionality of the principal subspace is carried out using PCA features within an SVM-based classification scheme. The experiments performed on a publicly available database show that PCA dimensionality requirements are region-dependent. Hence, in this work the optimal configuration is adapted to each region, yielding very good vehicle verification results.
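A minimal sketch of the region-dependent scheme, with one PCA+SVM pipeline per candidate region and a region-specific subspace dimensionality; the region names, patch sizes and component counts are assumptions.

```python
# Sketch of region-dependent PCA+SVM verification: one pipeline per image region,
# each with its own subspace dimensionality (region names and sizes are assumptions).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
regions = {"far-left": 20, "left": 35, "right": 35, "far-right": 20}  # assumed PCA dims

classifiers = {}
for name, n_comp in regions.items():
    X = rng.normal(size=(500, 64 * 64))          # candidate patches for this region, flattened
    y = rng.integers(0, 2, size=500)             # 1 = vehicle, 0 = non-vehicle
    classifiers[name] = make_pipeline(PCA(n_components=n_comp), SVC(kernel="rbf")).fit(X, y)

patch = rng.normal(size=(1, 64 * 64))
print("far-left verdict:", classifiers["far-left"].predict(patch)[0])
```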

Relevance:

100.00%

Publisher:

Abstract:

One of the main drawbacks of wind energy is its intermittent generation, which depends heavily on environmental conditions. Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective: system operators and energy traders benefit from forecasting techniques because reducing the inherent uncertainty of wind power allows them to make better decisions. Wind power integration poses new challenges as higher penetration levels are attained, and wind power ramp forecasting is one such recent topic of interest. The term ramp refers to a large and rapid variation (of the order of a few hours, typically 1-4) observed in the power output of a wind farm or portfolio. Ramp events can be caused by a broad range of meteorological processes occurring at very different temporal and spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. They may also be conditioned by the wind-to-power conversion process itself, through factors such as the non-linear turbine power curve, yaw misalignment, turbine shut-downs and the aerodynamic interaction between the turbines of a wind farm (wake effect). This work addresses the application of statistical models to very short-term wind power ramp forecasting and investigates the relation of these events to large-scale atmospheric processes. The models are used to generate point forecasts from the stochastic modelling of a wind power time series, with prediction horizons from one to six hours ahead. As a first step, a methodology to characterise ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides, at each time step, a continuous index of ramp intensity derived from the power gradients observed over a range of time scales. Three types of state-of-the-art time series models were implemented in order to assess the role of model complexity: linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs). The models were trained by minimising the mean squared error, and the configuration of each one was determined through cross-validation. To investigate the contribution of the large-scale atmospheric state to ramp forecasting, a methodology was proposed to extract, from the output of meteorological models, features relevant for explaining ramp occurrence. It is based on principal component analysis (PCA) for compressing the atmospheric data and on mutual information (MI) for estimating the non-linear dependence between two signals. The methodology was applied to reanalysis data generated with a general circulation model (GCM) to build exogenous variables that were then fed into the forecasting models. The case studies correspond to two wind farms located in Spain. The results show that modelling the power series yielded a notable improvement over the reference model (persistence) and that adding large-scale atmospheric information provided additional improvements of the same order, which were larger for ramp-down events. The results also indicate different degrees of connection between the large scale and ramp occurrence at the two wind farms considered.
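The PCA plus mutual-information step could look roughly like the sketch below, which compresses gridded atmospheric fields into a few principal components and ranks them by their MI with the ramp index; all names and sizes are illustrative assumptions.

```python
# Sketch of the PCA + mutual-information step: compress gridded atmospheric fields
# into a few principal components and rank them by MI with the ramp index.
# All array names and sizes are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(4)
atmos = rng.normal(size=(2000, 500))     # 2000 time steps x 500 grid-point variables (reanalysis)
ramp_index = rng.normal(size=2000)       # ramp-function value of the wind farm at each step

pcs = PCA(n_components=10).fit_transform(atmos)           # compressed atmospheric state
mi = mutual_info_regression(pcs, ramp_index)              # non-linear dependence per PC
best = np.argsort(mi)[::-1][:3]
print("candidate exogenous inputs: PCs", best, "with MI", mi[best])
```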

Relevance:

100.00%

Publisher:

Abstract:

The main goal of this project is to develop a facial recognition system. Its specific objectives are: to review the face recognition techniques currently in use, to choose an application where face recognition may be useful, to design and develop a MATLAB program that performs face recognition, and to evaluate the performance of the implemented system. This document is divided into four parts: INTRODUCTION, THEORETICAL FRAMEWORK, IMPLEMENTATION, and RESULTS, CONCLUSIONS AND FUTURE RESEARCH. The first part introduces the current state of facial recognition, briefly reviews the techniques available for building this kind of biometric system, and justifies those that ended up forming part of the implementation. The second part, the theoretical framework, explains the general structure of a biometric recognition system, its operating modes, and the error rates used to evaluate and compare its performance; it also gives a more detailed description of the concepts and methods used for face detection and recognition in the third part. The third part provides a detailed description of the proposed solution, explaining the design, characteristics and application of the implementation: a MATLAB program with a graphical user interface that uses four face recognition systems, each based on a different technique, namely Principal Component Analysis (PCA), Fisher's Linear Discriminant (FLD), Gabor wavelets, and Elastic Graph Matching (EGM). The program also offers the ability to create and edit one's own tagged database, giving it a direct application to the topic at hand. A set of additional features is proposed to extend and improve the program's functionality, notably a hybrid verification mode applicable to any branch of biometrics, an evaluation program capable of measuring, plotting and comparing the configurations of each implemented recognition system, and a tool for creating personalized graphs and models (a tagged graph associated with an image of a person), applicable to object recognition in general. The fourth and final part presents the results, comparing the different configurations of the implemented systems on three databases (one of them built from images acquired under uncontrolled conditions) and measuring the error rates of the proposed hybrid verification mode. Finally, conclusions are drawn and future lines of research are proposed.
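The eigenface (PCA) component of such a system is sketched here in Python rather than the project's MATLAB, with an assumed gallery layout and a simple nearest-neighbour matcher that is not necessarily the one implemented in the program.

```python
# PCA ("eigenfaces") identification sketch: project gallery and probe faces onto the
# principal subspace and match by nearest neighbour. Shapes and matcher are assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
gallery = rng.normal(size=(100, 64 * 64))     # 100 enrolled face images, flattened
labels = np.repeat(np.arange(20), 5)          # 20 identities, 5 images each

pca = PCA(n_components=50).fit(gallery)       # eigenfaces learned from the enrolled set
gallery_proj = pca.transform(gallery)

probe = rng.normal(size=(1, 64 * 64))         # face to identify
dists = np.linalg.norm(pca.transform(probe) - gallery_proj, axis=1)
print("identified as subject", labels[np.argmin(dists)])
```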

Relevance:

100.00%

Publisher:

Abstract:

The application of the Electro-Mechanical Impedance (EMI) method for damage detection in Structural Health Monitoring has increased noticeably in recent years. The EMI method uses piezoelectric transducers to measure the mechanical properties of the host structure directly, obtaining the so-called impedance measurement, which is highly influenced by variations in the dynamic parameters of the structure. These measurements usually contain a large number of frequency points and a high number of dimensions, since each swept frequency range can be considered an independent variable, which makes the data hard to handle and substantially increases computational cost and processing time. Principal Component Analysis (PCA)-based data compression is therefore employed in this work to enhance the analysis of the raw data. Furthermore, a Support Vector Machine (SVM), widely used in machine learning and pattern recognition, is applied to model any pattern present in the PCA-compressed data, using only the first two principal components. Known undamaged and damaged measurements of an experimentally tested beam were used as training data for the SVM algorithm, and the same number of cases measured on beams with unknown structural health conditions was used as test data. The purpose of this work is thus to demonstrate how, from a few impedance measurements of a beam, its health status can be determined using pattern recognition procedures.
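A minimal sketch of the PCA+SVM procedure on impedance spectra, compressing each sweep to its first two principal components and training an SVM on known healthy and damaged cases; the data shapes and labels are assumptions.

```python
# Sketch of the PCA+SVM idea on impedance spectra: compress each sweep to two
# principal components and train an SVM on known healthy/damaged cases.
# Data shapes and labels are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(6)
spectra = rng.normal(size=(60, 4000))      # 60 impedance sweeps x 4000 frequency points
labels = np.array([0] * 30 + [1] * 30)     # 0 = healthy, 1 = damaged (known states)

pca = PCA(n_components=2).fit(spectra)
clf = SVC(kernel="rbf").fit(pca.transform(spectra), labels)

unknown = rng.normal(size=(5, 4000))       # sweeps from beams of unknown condition
print("predicted states:", clf.predict(pca.transform(unknown)))
```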

Relevance:

100.00%

Publisher:

Abstract:

Video analytics play a critical role in recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification uses supervised classification and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, the methods have not been compared on a common body of work; second, the potential of combining popular features for vehicle classification has not been studied. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored, and a methodology is presented for the fusion of classifiers built upon them, also taking vehicle pose into account. The study reveals the limitations of single-feature classification and makes clear that classifier fusion is highly beneficial for vehicle verification.
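Score-level fusion of per-feature classifiers can be sketched as below; the two feature representations used here (PCA scores and simple pixel statistics) are placeholders rather than the paper's HOG or Gabor features, and the equal averaging weights are assumptions.

```python
# Sketch of score-level classifier fusion: train one classifier per feature
# representation and average their predicted probabilities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(7)
patches = rng.normal(size=(600, 32 * 32))      # candidate image patches, flattened
y = rng.integers(0, 2, size=600)               # 1 = vehicle, 0 = non-vehicle

feat_a = PCA(n_components=30).fit_transform(patches)                    # feature set A
feat_b = np.stack([patches.mean(axis=1), patches.std(axis=1)], axis=1)  # feature set B

clf_a = SVC(probability=True).fit(feat_a, y)
clf_b = SVC(probability=True).fit(feat_b, y)

# Fused score = average of per-classifier probabilities (evaluated on the training
# data purely for illustration).
fused = 0.5 * clf_a.predict_proba(feat_a)[:, 1] + 0.5 * clf_b.predict_proba(feat_b)[:, 1]
print("fused decisions (first 10):", (fused[:10] > 0.5).astype(int))
```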

Relevance:

100.00%

Publisher:

Abstract:

Being able to accurately classify the application that generates each traffic flow in a network gives companies and organizations a useful tool for managing network resources and for enforcing policies that prohibit or prioritize specific traffic, optimizing the available bandwidth. The proliferation of new applications and new techniques has made it harder to rely on the well-known port numbers assigned by the IANA (Internet Assigned Numbers Authority) to identify applications: P2P networks, the use of unknown or random ports, and the encapsulation of application traffic in HTTP and HTTPS to traverse firewalls and NATs (Network Address Translation), among others, create the need for new traffic-detection methods. The aim of this study is to develop practices that go beyond the observation of ports and other well-known values. Several families of methods address this need: Deep Packet Inspection (DPI) searches for signatures, i.e. patterns built from packet contents including the payload, that characterize each application; Machine Learning approaches apply statistical analysis to flow parameters in order to estimate which application a flow belongs to; and heuristic techniques rely on the analyst's own intuition or knowledge about network traffic. Specifically, this work proposes combining some of these techniques with data-mining methods, namely Principal Component Analysis (PCA) and clustering of statistics extracted from flows obtained from network traffic captures. This requires configuring several parameters through an iterative trial-and-error process in search of a reliable traffic classification. The ideal result would be one in which each application present in the traffic is identified in a distinct cluster, or in clusters grouping applications of a similar nature. To this end, traffic captures are created in a controlled environment, with each capture labelled with its corresponding application, and the flows are then extracted from these captures. Selected parameters of the packets belonging to each flow, such as the arrival timestamp or the length in octets of the IP packet, are loaded into a MySQL database and used to compute statistics that describe each flow as well as possible. These statistics are then mined, specifically with PCA and clustering, using the RapidMiner software. Finally, the results are compared with the real classification of the flows stored in the database by means of a confusion matrix, which allows the veracity of the developed classification process to be assessed.
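The PCA and clustering step could be sketched as follows; the study uses RapidMiner, so scikit-learn is only a stand-in here, and the flow statistics, cluster count and labels are assumptions.

```python
# Sketch of the PCA + clustering step on per-flow statistics, with a confusion matrix
# against the known application labels from the controlled captures.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(8)
flow_stats = rng.normal(size=(1000, 12))    # per-flow statistics (mean packet size, duration, ...)
true_app = rng.integers(0, 4, size=1000)    # known application label of each flow

pcs = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(flow_stats))
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)

print(confusion_matrix(true_app, clusters))  # compare clusters against the known labels
```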

Relevance:

100.00%

Publisher:

Abstract:

Developing countries are experiencing unprecedented levels of economic growth. As a result, they will be responsible for most of the future growth in energy demand and greenhouse gas (GHG) emissions. Curbing GHG emissions in developing countries has become one of the cornerstones of a future international agreement under the United Nations Framework Convention on Climate Change (UNFCCC). However, setting caps on developing countries' GHG emissions has encountered strong resistance in the current round of negotiations: continued economic growth that allows poverty eradication is still the main priority for most developing countries, and caps are perceived as a constraint on future growth prospects. The development, transfer and use of low-carbon technologies have more positive connotations and are seen as a potential path towards low-carbon development. So far, the success of the UNFCCC process in improving the levels of technology transfer (TT) to developing countries has been limited. This thesis analyses the causes of this limited success and seeks to improve the understanding of what constitutes TT in the field of climate change, to establish the factors that enable it in developing countries, and to determine which policies could be implemented to reinforce these factors. Despite the wide recognition of the importance of technology and knowledge transfer to developing countries in the climate change mitigation policy agenda, the issue has not received sufficient attention in academic research. Current definitions of climate change TT barely take into account the perspective of the actors involved in actual TT activities, while the corresponding measurements do not consider the diversity of channels through which transfers happen or the outputs and effects they convey. Furthermore, the enabling factors for TT in non-BRIC (Brazil, Russia, India, China) developing countries have seldom been investigated, and policy recommendations to improve the level and quality of TT have not been adapted to the specific needs of the highly heterogeneous countries commonly grouped under the label of "developing countries". This thesis contributes to the climate change TT debate from the perspective of a smaller emerging economy (Chile) and through a quantitative analysis of enabling factors for TT in a large sample of developing countries. Two methodological approaches are used: comparative case study analysis and quantitative analysis. The comparative case studies analyse TT processes in ten cases based in Chile, all of which share the same economic, technological and policy frameworks, enabling conclusions to be drawn on the enabling factors and obstacles operating in TT processes. The quantitative analysis uses three methodologies, namely principal component analysis, multiple regression analysis and cluster analysis, to assess the performance of developing countries on a number of enabling factors and the relationship between these factors and indicators of TT, and to create groups of developing countries with similar performance. The findings are structured as responses to four main research questions: What constitutes technology transfer and how does it happen? Is it possible to measure technology transfer, and what are the main challenges in doing so? Which factors enable climate change technology transfer to developing countries? And how do different developing countries perform on these enabling factors, and how can differentiated policy priorities be defined accordingly?

Relevance:

100.00%

Publisher:

Abstract:

A Near Infrared Spectroscopy (NIRS) industrial application was developed by the LPF-Tagralia team and transferred to a Spanish dehydrator company (Agrotécnica Extremeña S.L.) for the classification of dehydrator onion bulbs for breeding purposes. The automated operation of the system has allowed the classification of more than one million onion bulbs during the 2004 to 2008 seasons (Table 1). The performance achieved by the original model (R² = 0.65; SEC = 2.28 °Brix) was sufficient for qualitative classification thanks to the broad range of variation of the initial population (18 °Brix). Nevertheless, a reduction in the classification performance of the model has been observed over the seasons. One of the reasons put forward is the narrowing of the range of variation that naturally occurs during a breeding process; another is variation in parameters other than the variable of interest, whose effects are probably influencing the measurements [1]. This study applies Independent Component Analysis (ICA) to this highly variable dataset from a NIRS industrial application in order to identify the different sources of variation present across seasons.
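A possible sketch of the ICA step, decomposing a multi-season NIRS dataset into statistically independent components and checking whether any of them tracks the season rather than the variable of interest; the shapes and the number of components are assumptions.

```python
# Sketch of the ICA idea: decompose a multi-season NIRS dataset into statistically
# independent components that may correspond to distinct sources of variation.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
spectra = rng.normal(size=(5000, 256))        # onion-bulb NIR spectra pooled over seasons
season = np.repeat(np.arange(5), 1000)        # season label of each spectrum

ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(spectra)          # independent-component scores per spectrum

# Inspect whether a component tracks the season rather than the variable of interest.
for k in range(5):
    per_season = [sources[season == s, k].mean() for s in range(5)]
    print("IC%d season means:" % k, np.round(per_season, 3))
```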

Relevance:

100.00%

Publisher:

Abstract:

Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasi-periodic motion pattern that needs to be compensated for if further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this movement by combining independent component analysis (ICA) and image registration. First, we use ICA and a time-frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components except the one related to the motion. The resulting image series therefore does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated the method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of the 13 patient data sets (39 distinct slices). We compared linear, non-linear and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, registering the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves Pearson's correlation coefficient between manually and automatically obtained time-intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
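The ICA stage of this approach might be sketched as follows: decompose the frame series, pick the component with the most periodic temporal course as the motion-related one, and rebuild motion-free reference frames without it. The peaked-spectrum selection rule is a simplification of the time-frequency analysis described above, and the data are synthetic.

```python
# Sketch of the ICA step: decompose the frame series, identify a quasi-periodic
# (breathing-like) component via its spectrum, and rebuild reference frames without it.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(10)
frames = rng.normal(size=(58, 128 * 128))          # 58 perfusion frames, flattened pixels

ica = FastICA(n_components=6, random_state=0)
sources = ica.fit_transform(frames)                 # temporal courses of the components

# Crude criterion: the motion component has the most peaked temporal spectrum.
spectra = np.abs(np.fft.rfft(sources, axis=0))[1:]
motion_idx = int(np.argmax(spectra.max(axis=0) / spectra.mean(axis=0)))

sources[:, motion_idx] = 0.0                        # drop the motion-related component
reference = ica.inverse_transform(sources)          # synthetic, motion-free reference series
print("reference series shape:", reference.shape)
```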

Relevance:

100.00%

Publisher:

Abstract:

In recent years, significant efforts have been devoted to the development of advanced data analysis tools both to predict the occurrence of disruptions and to investigate the operational spaces of devices, with the long-term goal of advancing the understanding of the physics of these events and preparing for ITER. On JET, the latest generation of the disruption predictor APODIS has been deployed on the real-time network during the latest campaigns with the new metallic wall. Even though it was trained only on discharges with the carbon wall, it has achieved very good performance, with both missed alarms and false alarms of the order of a few percent (and strategies to improve the performance have already been identified). Since predicting the type of disruption is also considered very important for optimising the mitigation measures, a new clustering method based on the geodesic distance on a probabilistic manifold has been developed. This technique allows automatic classification of an incoming disruption with a success rate of better than 85%. Various other manifold learning tools, particularly Principal Component Analysis and Self-Organising Maps, are also producing very interesting results in the comparative analysis of the JET and ASDEX Upgrade (AUG) operational spaces, on the route to developing predictors capable of extrapolating from one device to another.

Relevance:

100.00%

Publisher:

Abstract:

A novel methodology for damage detection and location in structures is proposed. The methodology is based on strain measurements and consists of the development of strain field pattern recognition techniques based on PCA (principal component analysis) and the damage indices T² and Q. We propose the use of fiber Bragg gratings (FBGs) as strain sensors.
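A minimal sketch of how the T² and Q indices are computed from a PCA model of baseline strain fields (the same indices appear in the following record); the sensor counts, data and any thresholds are assumptions.

```python
# Sketch of the T^2 and Q damage indices computed from a PCA model of baseline
# strain fields (FBG readings). Shapes are assumptions for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(11)
baseline = rng.normal(size=(150, 24))    # healthy-state strain fields (samples x FBG sensors)
current = rng.normal(size=(10, 24))      # newly acquired strain fields

pca = PCA(n_components=4).fit(baseline)
scores = pca.transform(current)

t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Hotelling's T^2: distance within the model
residual = current - pca.inverse_transform(scores)
q = np.sum(residual**2, axis=1)                             # Q: distance off the model plane
print("T2:", np.round(t2, 2), "\nQ:", np.round(q, 2))
```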

Relevance:

100.00%

Publisher:

Abstract:

This article explores the application of PCA (Principal Component Analysis) and the T² and Q statistics to detect damage in structures made of composite materials using FBGs (Fiber Bragg Gratings). A PCA model is built using data from the undamaged structure as a reference state. Defects in the structure are simulated by causing small delaminations between the panel and the stiffener. Data from different experimental scenarios for the undamaged and damaged structure are projected onto the PCA model, and the projections and the T² and Q indices are analysed. Results for each case are presented and discussed, demonstrating the feasibility and potential of using this formulation in SHM (Structural Health Monitoring).