23 results for Unsupervised endmember extraction
at Universidad Politécnica de Madrid
Abstract:
Hyperspectral image analysis provides information with very high spectral resolution: hundreds of bands spanning from the infrared to the ultraviolet spectrum. The use of these images is having a great impact in the field of medicine and, in particular, in the detection of different types of cancer. Within this field, one of the main open problems is the real-time analysis of these images, since the large volume of data they contain demands a very high computing capacity. One of the main research lines aimed at reducing this processing time is based on distributing the analysis among several cores working in parallel. In line with this research, this work develops a library for the RVC-CAL language (a language specifically designed for multimedia applications that allows parallelization to be expressed in an intuitive way) gathering the functions needed to implement two of the four stages of the spectral processing chain: dimensionality reduction and endmember extraction. This work is complemented by the one carried out by Raquel Lazcano in her Diploma Project, which develops the functions needed to complete the other two stages of the unmixing chain. The document is divided into several parts. The first presents the motivation for this Diploma Project and the objectives it aims to achieve. Next, an extensive study of the current state of the art explains both hyperspectral images and the tools and platforms used to divide the processing among cores, together with the problems that may arise when making that division. Once the theoretical background has been laid out, we focus on the methodology followed to compose the unmixing chain and generate the library; an important point in this section is the use of specialized libraries for complex matrix operations, implemented in C++. After explaining the methodology, the results are presented first stage by stage and then for the complete processing chain, implemented on one or several cores. Finally, a set of conclusions drawn from analyzing the different algorithms in terms of quality of results, processing times and resource consumption is provided, and several possible future lines of work related to these results are proposed.
ABSTRACT. Hyperspectral imaging allows us to collect information with high spectral resolution: hundreds of bands covering the spectrum from the infrared to the ultraviolet. These images have had a strong impact on the medical field; in particular, their use in cancer detection stands out. In this field, the main problem to deal with is real-time analysis, because these images involve a large volume of data and require high computational power. One of the main research lines that addresses this problem analyzes these images using several cores working at the same time.
In line with this research, this document describes the development of a library for the RVC-CAL language (a language that has been widely used for multimedia applications and allows an optimized parallelization of the system), which gathers all the functions needed to implement two of the four stages of the hyperspectral imaging processing chain: dimensionality reduction and endmember extraction. This research is complemented by the work conducted by Raquel Lazcano in her Diploma Project, where she studies the other two stages of the processing chain. The document is divided into several chapters. The first of them introduces the motivation for the Diploma Project and the main objectives to achieve. After that, we study the state of the art of the technologies related to this work, such as hyperspectral images and the software and hardware used to parallelize the system and analyze its performance. Once the theoretical basis has been presented, we explain the methodology followed to compose the processing chain and to generate the library; one of the most important issues in this chapter is the use of C++ libraries specialized in complex matrix operations. We then present the results obtained in the analysis of the individual stages and, afterwards, the results of the full processing chain implemented on one or several cores. Finally, we draw some conclusions regarding algorithm behavior, processing times and system performance. Likewise, we propose some future research lines based on the results obtained in this document.
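As an illustration of the two stages covered by the library (which is written in RVC-CAL and not reproduced here), the following Python sketch shows one conventional way to perform dimensionality reduction and a simplified endmember selection on a hyperspectral cube; the PCA component count and the extreme-pixel selection rule are illustrative assumptions, not the algorithms used in the Project.

```python
# Minimal sketch (not the RVC-CAL library): dimensionality reduction plus a
# simplified endmember-extraction step for a hyperspectral cube in NumPy.
# Cube shape, component count and selection rule are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def reduce_and_extract(cube, n_components=5):
    """cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)                  # one spectrum per row

    # Stage 1: dimensionality reduction (PCA keeps most spectral variance).
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(pixels)                # (rows*cols, n_components)

    # Stage 2: simplified endmember extraction: take the pixels lying at the
    # extremes of each principal axis as candidate pure spectra (endmembers).
    idx = np.unique(np.concatenate([scores.argmax(axis=0), scores.argmin(axis=0)]))
    endmembers = pixels[idx]                          # (n_endmembers, bands)
    return scores.reshape(rows, cols, n_components), endmembers

# Example with synthetic data:
# cube = np.random.rand(100, 100, 224)
# reduced, E = reduce_and_extract(cube)
```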
Abstract:
Hyperspectral images allow information to be extracted with very high spectral resolution, usually spanning from the ultraviolet to the infrared spectrum. Although this technology was initially applied to the observation of the Earth's surface, in recent years this feature has led these images to be applied in other fields, such as medicine and, in particular, cancer detection. This new field of application, however, has created new needs, such as processing the images in real time. Precisely because of their high spectral resolution, these images require a large computational capacity to be processed, which makes this goal unattainable with traditional processing techniques. In this regard, one of the main research lines pursues real-time processing by parallelizing the computation, dividing the computational load among several cores that work simultaneously. To this end, this document describes the development of a hyperspectral processing library for the RVC-CAL language, which is specifically designed for developing multimedia applications and provides the tools needed to parallelize them. In particular, this Diploma Project develops the functions needed to implement two of the four stages of the hyperspectral image analysis chain, namely the estimation of the number of endmembers and the estimation of their abundance distribution over the image; it is worth noting that this work is complemented by the one carried out by Daniel Madroñal in his Diploma Project, which develops the functions needed to complete the other two stages of the chain. This document follows the classical structure of a research work, first presenting the motivations behind this Diploma Project and the objectives it is expected to achieve. Next, an extensive analysis of the state of the art of the technologies required for its development is presented, explaining hyperspectral images on the one hand and, on the other, all the hardware and software resources needed to implement the library, thus providing the technical concepts required to follow this document. After that, the methodology followed to generate the library is detailed, together with the implementation of a complete hyperspectral image processing chain that allows both the quality of the library and the time needed to analyze a complete hyperspectral image to be evaluated. Once the methodology has been presented, the results of the tests are analyzed in detail: first, the individual results obtained from the analysis of the two implemented stages are explained and, afterwards, those produced by executing the complete chain on one or several cores are discussed. Finally, a set of conclusions is drawn from this study, covering aspects such as quality of results, execution times and resource consumption; likewise, several future lines of work that could continue and complement the research presented in this document are proposed. ABSTRACT.
Hyperspectral imaging collects information from across the electromagnetic spectrum, covering a wide range of wavelengths. Although this technology was initially developed for remote sensing and Earth observation, its multiple advantages, such as high spectral resolution, led to its application in other fields, such as cancer detection. However, this new field has specific requirements; for example, it must meet strict timing specifications, since all the potential applications, like surgical guidance or in vivo tumor detection, imply real-time requirements. Achieving these time requirements is a great challenge, as hyperspectral images generate extremely high volumes of data to process. For that reason, some new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization: in order to reduce the computational load, this solution executes the image analysis on several processors simultaneously; in that way, the computational load is divided among the different cores and real-time specifications can be met. This document describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreaded compilation and system parallelization. This Diploma Project develops the library functions required to implement two of the four stages of the hyperspectral imaging processing chain: endmember and abundance estimation. The two other stages, dimensionality reduction and endmember extraction, are studied in the Diploma Project of Daniel Madroñal, which complements the research work described in this document. The document follows the classical structure of a research work. Firstly, it introduces the motivations that have inspired this Diploma Project and the main objectives to achieve. After that, it thoroughly studies the state of the art of the technologies related to the development of the library; this part contains all the concepts needed to understand this research work, such as the definition and applications of hyperspectral imaging and the typical processing chain. Thirdly, it explains the methodology of the library implementation, as well as the construction of a complete processing chain in RVC-CAL applying the mentioned library. This chain tests both the correct behavior of the library and the time required for the complete analysis of one hyperspectral image, executing the chain either on one processor or on several ones. Afterwards, the collected results are carefully analyzed: first, the individual results from the endmember and abundance estimation stages are discussed and, after that, the results of the complete processing chain are studied; these results show the effects of multithreading and system parallelization on the mentioned processing chain. Finally, as a result of this discussion, some conclusions are gathered regarding relevant aspects such as algorithm behavior, execution times and processing performance. Likewise, the document concludes with the proposal of some future research lines that could continue the work described here.
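The abundance-estimation stage mentioned above can be illustrated with a short, hedged Python sketch based on non-negative least-squares unmixing; the sum-to-one normalisation and the data shapes are assumptions for illustration and do not reproduce the RVC-CAL library itself.

```python
# Illustrative sketch of the abundance-estimation stage only: each pixel
# spectrum is unmixed as a non-negative combination of the endmember spectra.
import numpy as np
from scipy.optimize import nnls

def estimate_abundances(pixels, endmembers):
    """pixels: (n_pixels, bands); endmembers: (p, bands). Returns (n_pixels, p)."""
    E = endmembers.T                                  # (bands, p) mixing matrix
    abundances = np.empty((pixels.shape[0], endmembers.shape[0]))
    for i, spectrum in enumerate(pixels):
        coeffs, _ = nnls(E, spectrum)                 # enforce non-negativity
        total = coeffs.sum()
        abundances[i] = coeffs / total if total > 0 else coeffs  # approx. sum-to-one
    return abundances
```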
Abstract:
Twelve commercially available edible marine algae from France, Japan and Spain and the certified reference material (CRM) NIES No. 9 Sargassum fulvellum were analyzed for total arsenic and arsenic species. Total arsenic concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES) after microwave digestion and ranged from 23 to 126 μg g⁻¹. Arsenic species in the alga samples were extracted with deionized water by microwave-assisted extraction and showed extraction efficiencies from 49 to 98% in terms of total arsenic. The presence of eleven arsenic species was studied by methods developed for high performance liquid chromatography–ultraviolet photo-oxidation–hydride generation atomic fluorescence spectrometry (HPLC–(UV)–HG–AFS), using both anion and cation exchange chromatography. Glycerol and phosphate sugars were found in all alga samples analyzed, at concentrations between 0.11 and 22 μg g⁻¹, whereas sulfonate and sulfate sugars were only detected in three of them (0.6-7.2 μg g⁻¹). Regarding toxic arsenic species, low concentrations of dimethylarsinic acid (DMA) (<0.9 μg g⁻¹) and generally high arsenate (As(V)) concentrations (up to 77 μg g⁻¹) were found in most of the algae studied. The results highlight the need to perform speciation analysis and to introduce appropriate legislation limiting the content of toxic arsenic species in these food products.
Abstract:
A particle accelerator is any device that, using electromagnetic fields, is able to impart energy to charged particles (typically electrons or ionized atoms), accelerating and/or energizing them up to the level required for its purpose. The applications of particle accelerators are countless, beginning with a common TV CRT, passing through medical X-ray devices, and ending with the large ion colliders used to probe the smallest details of matter. Other engineering applications include ion implantation devices used to obtain better semiconductors and materials with remarkable properties. Materials that must withstand irradiation in future nuclear fusion plants also benefit from particle accelerators. Many devices are required for the correct operation of a particle accelerator. The most important are the particle sources; the guiding, focusing and correcting magnets; the radiofrequency accelerating cavities; the fast deflection devices; the beam diagnostic mechanisms; and the particle detectors. Most fast particle deflection devices have historically been built using copper coils and ferrite cores, which could produce a relatively fast magnetic deflection but needed large voltages and currents to counteract the high coil inductance, giving a response in the microseconds range. Beam stability considerations and the new range of energies and sizes of present-day accelerators and their rings require new devices featuring improved wakefield behaviour and faster response (in the nanoseconds range). This can only be achieved by an electromagnetic deflection device based on a transmission line. The electromagnetic deflection device (strip-line kicker) produces a transverse displacement of the particle beam travelling close to the speed of light, in order to extract the particles to another experiment or to inject them into a different accelerator. The deflection is carried out by means of two short, opposite-phase pulses. The diversion of the particles is exerted by the integrated Lorentz force of the electromagnetic field travelling along the kicker. This Thesis presents a detailed calculation, manufacturing and test methodology for strip-line kicker devices. The methodology is then applied to two real cases which are fully designed, built, tested and finally installed in the CTF3 accelerator facility at CERN (Geneva). Analytical and numerical calculations, both in 2D and 3D, are detailed, starting from the basic specifications, in order to obtain a conceptual design. Time-domain and frequency-domain calculations are developed in the process using different FDM and FEM codes. The following concepts, among others, are analyzed: scattering parameters, resonating higher-order modes, wakefields, etc. Several contributions are presented in the calculation process dealing specifically with strip-line kicker devices fed by electromagnetic pulses. Materials and components typically used for the fabrication of these devices are analyzed in the manufacturing section. Mechanical supports and connections of electrodes are also detailed, presenting some interesting contributions on these concepts. The electromagnetic and vacuum tests are then analyzed; these tests are required to ensure that the manufactured devices fulfil the specifications. Finally, and only from the analytical point of view, the strip-line kickers are studied together with a pulsed power supply based on solid-state power switches (MOSFETs).
Solid-state technology applied to pulsed power supplies is introduced, and several circuit topologies are modelled and simulated to obtain fast pulses with good flat-tops.
Abstract:
The electroencephalogram (EEG) signal is one of the most widely used signals in the biomedical field because of the rich information it carries about human tasks. This research study describes a new approach based on: i) building reference models from a set of time series through the analysis of the events they contain, which is suitable for domains where the relevant information is concentrated in specific regions of the time series, known as events; to deal with them, each event is characterized by a set of attributes; and ii) applying the discrete wavelet transform to the EEG data in order to extract temporal information in the form of changes in the frequency domain over time, that is, to extract non-stationary signals embedded in the noisy background of the human brain. The performance of the model was evaluated in terms of training performance and classification accuracy, and the results confirmed that the proposed scheme has potential for classifying EEG signals.
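A minimal Python sketch of the wavelet step described above, assuming the PyWavelets package and a single 1-D EEG channel; the wavelet family ('db4'), the decomposition level and the energy features are illustrative choices rather than the study's exact settings.

```python
# Discrete wavelet decomposition of one EEG channel and a simple
# time-frequency feature vector (relative energy per sub-band).
import numpy as np
import pywt

def dwt_features(eeg_channel, wavelet="db4", level=4):
    """Return the relative energy of each DWT sub-band of a 1-D signal."""
    coeffs = pywt.wavedec(eeg_channel, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

# Example: features = dwt_features(np.random.randn(1024))
```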
Abstract:
The focus of this chapter is to study feature extraction and pattern classification methods from two medical areas, Stabilometry and Electroencephalography (EEG). Stabilometry is the branch of medicine responsible for examining balance in human beings. Balance and dizziness disorders are probably two of the most common illnesses that physicians have to deal with. In Stabilometry, the key nuggets of information in a time-series signal are concentrated within definite time periods, known as events. In this chapter, two feature extraction schemes have been developed to identify and characterise the events in Stabilometry and EEG signals. Based on these extracted features, an Adaptive Fuzzy Inference Neural Network has been applied for the classification of Stabilometry and EEG signals.
Abstract:
A number of thrombectomy devices using a variety of methods have now been developed to facilitate clot removal. We present research involving one such experimental device recently developed in the UK, called a ‘GP’ Thrombus Aspiration Device (GPTAD). This device has the potential to bring about the extraction of a thrombus. Although the device is at a relatively early stage of development, the results look encouraging. In this work, we present an analysis and modeling of the GPTAD by means of the bond graph technique; it seems to be a highly effective method of simulating the device under a variety of conditions. Such modeling is useful in optimizing the GPTAD and predicting the result of clot extraction. The aim of this simulation model is to obtain the minimum pressure necessary to extract the clot and to verify that both the pressure and the time required to complete the clot extraction are realistic for use in clinical situations, and are consistent with any experimentally obtained data. We therefore consider aspects of rheology and mechanics in our modeling.
Abstract:
This article describes the work performed on the database of questions belonging to the different opinion polls carried out during the last 50 years in Spain. Approximately half of the questions have a title, while the other half remain untitled. The work and the techniques implemented to automatically generate titles for the untitled questions are described. This process is performed over very short texts, and the generated titles are subject to strong stylistic conventions and must be fully grammatical pieces of Spanish.
Abstract:
Recently, we have presented some studies concerning the analysis, design and optimization of one experimental device developed in the UK, the GPTAD, which has been designed to remove blood clots without the need to make contact with the clot itself, thereby potentially reducing the risk of problems such as downstream embolisation. Based on the idea of a modification of the previous device, in this work we present a model based on the use of stents like the Solitaire™ FR, which is in contact with the clot itself. In the case of such devices, the stent is self-expandable and the extraction of the blood clot is facilitated by the stent, which must be inside the clot. Such stents are generally inserted into position using a guidewire inserted through the catheter. This type of modelling could potentially be useful in showing how the blood clot is moved by the various forces involved. The modelling has been undertaken by analyzing resistance, compliance and inertance effects. We model an artery and blood clot for a range of forces applied to the guidewire. In each case we determine the interaction between the blood clot, the stent and the artery.
Abstract:
Salamanca is cataloged as one of the most polluted cities in Mexico. In order to observe the behavior and clarify the influence of wind parameters on sulphur dioxide (SO2) concentrations, a Self-Organizing Map (SOM) neural network has been implemented at three monitoring locations for the period from January 1 to December 31, 2006. The maximum and minimum daily values of the SO2 concentrations measured during 2006 were correlated with the wind parameters of the same period. The main advantage of the SOM neural network is that it allows data from different sensors to be integrated and provides readily interpretable results. In particular, it is a powerful mapping and classification tool that presents the information in an accessible way and facilitates the task of establishing an order of priority among the distinguished groups of concentrations depending on their need for further research or remediation actions in subsequent management steps. For each monitoring location, the SOM classifications were evaluated with respect to the pollution levels established by the health authorities. The classification system can help to establish a better air quality monitoring methodology, which is essential for assessing the effectiveness of imposed pollution controls and strategies, and facilitates pollutant reduction.
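A hedged sketch of this kind of SOM analysis, using the third-party MiniSom package rather than the authors' implementation; the feature layout, grid size and training length are assumptions for illustration.

```python
# Train a small SOM on daily SO2 and wind features, then map each day to its
# best-matching unit so cells can be labelled against health-authority thresholds.
import numpy as np
from minisom import MiniSom

# Each row (assumed layout): [daily max SO2, daily min SO2, wind speed, wind direction]
data = np.random.rand(365, 4)                            # placeholder for one year of measurements
data = (data - data.mean(axis=0)) / data.std(axis=0)     # normalise features

som = MiniSom(6, 6, input_len=4, sigma=1.0, learning_rate=0.5)
som.train_random(data, num_iteration=5000)

clusters = [som.winner(row) for row in data]             # grid coordinates per day
```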
Abstract:
Folksonomies emerge as the result of the free tagging activity of a large number of users over a variety of resources. They can be considered valuable sources from which it is possible to obtain emerging vocabularies that can be leveraged in knowledge extraction tasks. However, when it comes to understanding the meaning of tags in folksonomies, several problems arise, mainly related to the appearance of synonymous and ambiguous tags, especially in the context of multilinguality. The authors aim to turn folksonomies into knowledge structures in which tag meanings are identified and relations between them are asserted. For this purpose, they use DBpedia as a general knowledge base, taking advantage of its multilingual capabilities.
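One possible way to ground a folksonomy tag in DBpedia, sketched below with the public SPARQL endpoint; the query, properties and language filter are illustrative assumptions and not the authors' actual pipeline.

```python
# Look up candidate DBpedia resources whose label matches a given tag,
# returning resource URIs and abstracts (several hits reveal ambiguity).
from SPARQLWrapper import SPARQLWrapper, JSON

def lookup_tag(tag, lang="en"):
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX dbo:  <http://dbpedia.org/ontology/>
        SELECT DISTINCT ?resource ?abstract WHERE {{
            ?resource rdfs:label "{tag}"@{lang} ;
                      dbo:abstract ?abstract .
            FILTER (lang(?abstract) = "{lang}")
        }} LIMIT 5
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [(r["resource"]["value"], r["abstract"]["value"])
            for r in results["results"]["bindings"]]

# candidates = lookup_tag("Jaguar")   # an ambiguous tag yields several candidate meanings
```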
Abstract:
In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbosacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specifically associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove the noise from the CDPs recorded in each given spinal segment by convolution. Then, we assign a coefficient to each main local maximum of the signal using its amplitude and its distance to the most important maximum of the signal. These coefficients are the input for the subsequent classification algorithm; in particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.
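The feature-extraction idea described above can be sketched in Python as follows; the smoothing window, the number of retained maxima and the coefficient formula are illustrative assumptions, not the exact procedure of the study.

```python
# Smooth each recorded segment by convolution, keep the main local maxima, and
# build coefficients from their amplitudes and distances to the global maximum;
# the resulting feature vectors feed a gradient boosting classifier.
import numpy as np
from scipy.signal import find_peaks
from sklearn.ensemble import GradientBoostingClassifier

def cdp_features(signal, n_peaks=5, smooth_len=11):
    smoothed = np.convolve(signal, np.ones(smooth_len) / smooth_len, mode="same")
    peaks, props = find_peaks(smoothed, height=0)
    if len(peaks) == 0:
        return np.zeros(n_peaks)
    heights = props["peak_heights"]
    main = peaks[np.argmax(heights)]                     # most important maximum
    order = np.argsort(heights)[::-1][:n_peaks]          # keep the main local maxima
    coeffs = heights[order] / (1.0 + np.abs(peaks[order] - main))  # amplitude vs. distance
    return np.pad(coeffs, (0, n_peaks - len(coeffs)))

# X = np.array([cdp_features(s) for s in recorded_segments])
# clf = GradientBoostingClassifier().fit(X, labels)
```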
Abstract:
Pot experiments were performed to evaluate the phytoremediation capacity of Atriplex halimus plants grown in contaminated mine soils and to investigate the effects of organic amendments on metal bioavailability and on the uptake of these metals by the plants. Soil samples collected from abandoned mine sites north of Madrid (Spain) were mixed with 0, 30 and 60 Mg ha⁻¹ of two organic amendments with different pH and nutrient content: pine-bark compost and horse- and sheep-manure compost. The increase in soil organic matter content and pH produced by the manure amendment reduced metal bioavailability in the soil by stabilising the metals. The proportion of Cu in the most bioavailable fractions (sum of the water-soluble, exchangeable, acid-soluble and Fe-Mn oxide fractions) decreased with the addition of 60 Mg ha⁻¹ of manure from 62% to 52% in one of the soils studied and from 50% to 30% in the other. This amendment also reduced the proportion of Zn in the water-soluble and exchangeable fractions from 17% to 13% in one of the soils. Manure decreased metal concentrations in the shoots of A. halimus from 97 to 35 mg kg⁻¹ of Cu, from 211 to 98 mg kg⁻¹ of Zn and from 1.4 to 0.6 mg kg⁻¹ of Cd. In these treatments there was higher plant growth due to the lower metal toxicity and the improvement of the nutrient content of the soil. This higher growth resulted in a higher total metal accumulation in plant biomass and therefore in a greater amount of metals removed from the soil, so manure could be useful for phytoextraction purposes. This amendment increased metal accumulation in shoots from 37 to 138 mg pot⁻¹ of Cu, from 299 to 445 mg pot⁻¹ of Zn and from 1.8 to 3.7 mg pot⁻¹ of Cd. The pine-bark amendment did not significantly alter metal availability or its uptake by the plants. Plants of A. halimus managed to reduce the total Zn concentration in one of the soils from 146 to 130 mg kg⁻¹, but their phytoextraction capacity was insufficient to remediate contaminated soils in the short to medium term. However, A. halimus could be appropriate, in combination with the manure amendment, for the phytostabilization of metals in mine soils.
Abstract:
Smooth light extraction in lighting optical fibre
Abstract:
In this paper, a method based mainly on data fusion and artificial neural networks is proposed to classify the concentrations of one of the most important pollutants, particulate matter less than 10 micrometers in diameter (PM10). The main objective is to classify the pollutant concentration into two pollution levels (Non-Contingency and Contingency). Pollutant concentrations and meteorological variables are considered in order to build a Representative Vector (RV) of pollution. The RV is used to train an artificial neural network to classify pollution events determined by the meteorological variables. In the experiments, real time series gathered from the Automatic Environmental Monitoring Network (AEMN) in Salamanca, Guanajuato, Mexico have been used. The method can help to establish a better air quality monitoring methodology, which is essential for assessing the effectiveness of imposed pollution controls and strategies, and facilitates pollutant reduction.
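A hedged Python sketch of the data-fusion and classification idea described above; the Representative Vector layout, the contingency threshold and the network size are assumptions for illustration, and the random data merely stands in for the AEMN time series.

```python
# Fuse pollutant and meteorological readings into a Representative Vector (RV)
# and train a small neural network to separate Contingency from Non-Contingency.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

def build_rv(pm10, wind_speed, wind_dir, temperature):
    """Fuse the sensor readings of one time step into a single feature vector."""
    return np.array([pm10, wind_speed, wind_dir, temperature])

# Placeholder series standing in for the AEMN measurements used in the paper.
raw = np.random.rand(500, 4) * [300, 10, 360, 40]
X = np.array([build_rv(*row) for row in raw])
y = (X[:, 0] > 150).astype(int)        # 1 = Contingency, 0 = Non-Contingency (assumed threshold)

X_std = StandardScaler().fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X_std, y)
```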