967 results for Processing methods


Relevance: 60.00%

Abstract:

PURPOSE: To study the apparent diffusivity and its directionality for metabolites of human skeletal muscle in vivo by ¹H magnetic resonance spectroscopy. METHODS: The diffusion tensors were determined on a 3 Tesla MR system using optimized acquisition and processing methods, including an adapted STEAM sequence with orientation-dependent diffusion weighting, pulse triggering with individually adapted delays, eddy-current correction schemes, median filtering, and simultaneous prior-knowledge fitting of all related spectra. RESULTS: The average apparent diffusivities and the fractional anisotropies of taurine (ADCav = 0.74 × 10⁻³ mm²/s, FA = 0.46), creatine (ADCav = 0.41 × 10⁻³ mm²/s, FA = 0.33), trimethylammonium compounds (ADCav = 0.48 × 10⁻³ mm²/s, FA = 0.34), carnosine (ADCav = 0.46 × 10⁻³ mm²/s, FA = 0.47), and water (ADCav = 1.5 × 10⁻³ mm²/s, FA = 0.36) were estimated. The diffusivities of most metabolites and water were significantly different from each other. Diffusion was found to be anisotropic, and the diffusion tensors showed tensor correlation coefficients close to 1 and were hence essentially coaligned. The magnitudes of the apparent metabolite diffusivities were largely ordered according to molecular weight, with taurine, the smallest molecule, diffusing fastest both along and across the fiber direction. CONCLUSION: Diffusivities, the directional dependence of diffusion, and fractional anisotropies of ¹H MRS-visible muscle metabolites are presented. Metabolites were shown to share diffusion directionality with water and to have similar fractional anisotropies, hinting at similar diffusion barriers. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
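For orientation, the sketch below shows the standard way the reported quantities are obtained from a fitted diffusion tensor: the mean apparent diffusivity (ADCav) is the mean of the tensor eigenvalues, and the fractional anisotropy (FA) is their normalized dispersion. The example tensor values are illustrative assumptions, not data from the study.

```python
import numpy as np

def adc_and_fa(D):
    """Mean diffusivity and fractional anisotropy from a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(D)            # eigenvalues (principal diffusivities)
    adc_av = lam.mean()                    # mean apparent diffusivity
    # FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||
    fa = np.sqrt(1.5) * np.linalg.norm(lam - adc_av) / np.linalg.norm(lam)
    return adc_av, fa

# Illustrative tensor (units: 10^-3 mm^2/s), not data from the paper
D = np.diag([0.55, 0.35, 0.33])
print(adc_and_fa(D))
```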

Relevance: 60.00%

Abstract:

Stray light contamination considerably reduces the precision of photometry of faint stars for low-altitude spaceborne observatories. When measuring faint objects, stray light contamination must be dealt with in order to avoid systematic effects on low signal-to-noise images. Stray light contamination can be represented by a flat offset in CCD data. Mitigation begins with a comprehensive study during the design phase, followed by target pointing optimisation and post-processing methods. We present a code that simulates the stray-light contamination in low-Earth orbit coming from the reflection of solar light by the Earth. The StrAy Light SimulAtor (SALSA) is a tool intended to be used at an early stage to evaluate the effectively visible region of the sky and, therefore, to optimise the observation sequence. SALSA can compute Earth stray light contamination for significant periods of time, allowing mission-wide parameters to be optimised (e.g. imposing constraints on the point source transmission function (PST) and/or on the altitude of the satellite). It can also be used to study the behaviour of the stray light at different seasons or latitudes. Given the position of the satellite with respect to the Earth and the Sun, SALSA computes the stray light at the entrance of the telescope following a geometrical technique. After characterising the illuminated region of the Earth, the portion of the illuminated Earth that affects the satellite is calculated. Then, the flux of reflected solar photons is evaluated at the entrance of the telescope. Using the PST of the instrument, the final stray light contamination at the detector is calculated. The analysis tools include time series analysis of the contamination, evaluation of the sky coverage and an object visibility predictor. Effects of the South Atlantic Anomaly and of any shutdown periods of the instrument can be added. Several designs or mission concepts can easily be tested and compared. The code is not intended as a stand-alone mission designer; its mandatory inputs are a time series describing the trajectory of the satellite and the characteristics of the instrument. This software suite has been applied to the design and analysis of CHEOPS (CHaracterizing ExOPlanet Satellite). This mission requires very high precision photometry to detect very shallow transits of exoplanets. Different altitudes and detector characteristics have been studied in order to find the parameters that best reduce the effect of contamination. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
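The core geometrical test underlying such a computation can be illustrated with a toy check of whether an Earth-surface point is simultaneously sunlit and visible from the satellite. This is only a simplified sketch of the kind of geometry evaluated by SALSA; the vectors and altitude used here are assumptions, not SALSA code.

```python
import numpy as np

R_EARTH = 6371.0  # km

def sunlit_and_visible(p_surf, sat, sun_dir):
    """Toy check: is an Earth-surface point illuminated by the Sun and
    above the horizon as seen from the satellite? (Earth-centred frame)"""
    n = p_surf / np.linalg.norm(p_surf)      # outward surface normal
    sunlit = np.dot(n, sun_dir) > 0.0        # Sun above the local horizon
    to_sat = sat - p_surf
    visible = np.dot(n, to_sat) > 0.0        # satellite above the local horizon
    return sunlit and visible

# Illustrative geometry: satellite at 700 km altitude over the equator
sat = np.array([R_EARTH + 700.0, 0.0, 0.0])
sun_dir = np.array([1.0, 0.0, 0.0])          # unit vector towards the Sun
p = R_EARTH * np.array([np.cos(0.3), np.sin(0.3), 0.0])
print(sunlit_and_visible(p, sat, sun_dir))
```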

Relevance: 60.00%

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis. We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70–5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis. Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data. Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature, and differed from the literature by 5 factors in the top 25%. The Delphi results refuted 7 factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate the content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms. Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.
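As a minimal illustration of the normalization behind the figures quoted above (not code from any of the reviewed studies), a raw error count can be converted to the errors-per-10,000-fields scale as follows.

```python
def errors_per_10000_fields(n_errors, n_fields_inspected):
    """Normalize a raw error count to the 'errors per 10,000 fields' scale
    used when pooling accuracy figures across studies."""
    return 10000.0 * n_errors / n_fields_inspected

# Illustrative numbers only: 37 errors found while inspecting 5,200 fields
print(errors_per_10000_fields(37, 5200))   # ~71 errors per 10,000 fields
```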

Relevance: 60.00%

Abstract:

The Håkon Mosby Mud Volcano is a natural laboratory for studying geological, geochemical, and ecological processes related to deep-water mud volcanism. High-resolution bathymetry of the Håkon Mosby Mud Volcano was recorded during RV Polarstern expedition ARK-XIX/3 using the multibeam system Hydrosweep DS-2. Dense spacing of the survey lines and a slow ship speed (5 knots) provided the point density necessary to generate a regular 10 m grid. Generalization was applied to preserve and represent morphological structures appropriately. Contour lines were derived showing detailed topography at the centre of the Håkon Mosby Mud Volcano and generalized contours in its vicinity. We provide a brief introduction to the Håkon Mosby Mud Volcano area and describe in detail the data recording and processing methods, as well as the morphology of the area. An accuracy assessment was made to evaluate the reliability of the 10 m resolution terrain model. Multibeam sidescan data were recorded along with the depth measurements and show reflectivity variations from light grey values at the centre of the Håkon Mosby Mud Volcano to dark grey values (less reflective) in the surrounding moat.
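As a rough illustration of how dense soundings can be reduced to a regular 10 m grid, the sketch below bins and averages soundings into square cells. This is a generic gridding approach on synthetic data, not necessarily the procedure applied to the Hydrosweep DS-2 data.

```python
import numpy as np

def grid_soundings(x, y, z, cell=10.0):
    """Average soundings (x, y in metres, z depth) into square cells of
    'cell' metres; returns the grid and the cell origin."""
    x0, y0 = x.min(), y.min()
    ix = ((x - x0) // cell).astype(int)
    iy = ((y - y0) // cell).astype(int)
    grid = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    sums = np.zeros_like(grid)
    counts = np.zeros_like(grid)
    np.add.at(sums, (iy, ix), z)       # accumulate depths per cell
    np.add.at(counts, (iy, ix), 1)     # count soundings per cell
    mask = counts > 0
    grid[mask] = sums[mask] / counts[mask]
    return grid, (x0, y0)

# Illustrative synthetic soundings over a 500 m x 500 m patch
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 500, 10000), rng.uniform(0, 500, 10000)
z = -1250 - 10 * np.sin(x / 100) + rng.normal(0, 0.5, x.size)
grid, origin = grid_soundings(x, y, z, cell=10.0)
print(grid.shape)
```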

Relevance: 60.00%

Abstract:

Laser welding (LW) is increasingly used in manufacturing due to its advantages, such as accurate control, good repeatability, low heat input, the possibility of joining special materials, high speed, and the capability to join parts of small dimensions. LW is well suited to robotized manufacturing, and fabrication cells offer various levels of flexibility, from specialized robots to very flexible setups. This paper presents several LW applications using two industrially scaled manufacturing cells at the UPM Laser Centre (CLUPM) of the Universidad Politécnica de Madrid. The first, dedicated to Remote Laser Welding (RLW) of thin sheets for the automotive and other sectors, uses a 3500 W CO2 laser. The second, highly flexible, is based on a 6-axis ABB robot and a 3300 W Nd:YAG laser and is intended for various laser processing methods, including welding. After a short description of each cell, several LW applications tested at CLUPM and recently implemented in industry are briefly presented: RLW of coated automotive sheets, LW of high-strength automotive sheets, LW vs. laser hybrid welding (LHW) of dual-phase steel thin sheets, and LHW of thin sheets of stainless steel and carbon steel (dissimilar joints). The main technological issues overcome and the critical process parameters are pointed out. Conclusions about achievements and trends are provided.

Relevance: 60.00%

Abstract:

This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and enhanced detection of non-ideal situations, increasing the integrity of surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and a rule-based expert system aimed at adapting the data processing methods.
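A minimal sketch of the rule-based idea is given below; the observables, thresholds, and rules are hypothetical placeholders, not the ones defined in the paper.

```python
# Minimal sketch of a rule-based selector that switches data-processing modes
# from surveillance-chain observables. All names and thresholds are hypothetical.

def select_processing_mode(observables):
    """Return a processing mode given a dict of observables."""
    rules = [
        # (condition on observables, resulting mode)
        (lambda o: o["plot_residual_m"] > 50.0, "recalibrate_sensor_model"),
        (lambda o: o["track_coasting_ratio"] > 0.3, "relax_association_gates"),
        (lambda o: o["duplicate_track_rate"] > 0.05, "tighten_fusion_thresholds"),
    ]
    for condition, mode in rules:
        if condition(observables):
            return mode
    return "nominal_processing"

print(select_processing_mode({"plot_residual_m": 12.0,
                              "track_coasting_ratio": 0.4,
                              "duplicate_track_rate": 0.01}))
```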

Relevance: 60.00%

Abstract:

The advent of new signal processing methods, such as non-linear analysis techniques, represents a new perspective that adds further value to the analysis of brain signals. In particular, Lempel–Ziv complexity (LZC) has proven useful in exploring the complexity of brain electromagnetic activity. However, an important problem is the lack of knowledge about the physiological determinants of these measures. Although a correlation between complexity and connectivity has been proposed, this hypothesis had never been tested in vivo. Thus, the correlation between the microstructure of anatomical connectivity and the functional complexity of the brain needs to be inspected. In this study we analyzed the correlation between LZC and fractional anisotropy (FA), a scalar quantity derived from diffusion tensors that is particularly useful as an estimate of the integrity of myelinated axonal fibers, in a group of sixteen healthy adults (all female, mean age 65.56 ± 6.06 years, range 58–82). Our results showed a positive correlation between FA and LZC scores in regions including clusters in the splenium of the corpus callosum, the cingulum, parahippocampal regions and the sagittal stratum. This study supports the notion of a positive correlation between the functional complexity of the brain and the microstructure of its anatomical connectivity. Our investigation showed that a combination of neuroanatomical and neurophysiological techniques may shed some light on the underlying physiological determinants of brain oscillations.
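For reference, a compact implementation of the Lempel–Ziv (LZ76) complexity as commonly applied to binarized brain signals is sketched below. The median binarization and the normalization by n/log2(n) are common choices in the EEG/MEG literature and are assumptions here; the exact preprocessing used in the study may differ.

```python
import numpy as np

def lz_complexity(seq):
    """Number of phrases in the LZ76 parsing of a sequence of symbols."""
    s = "".join(map(str, seq))
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while it can be copied from the already-seen text
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

def normalized_lzc(signal):
    """Binarize around the median and normalize the phrase count by n/log2(n)."""
    x = np.asarray(signal, dtype=float)
    binary = (x > np.median(x)).astype(int)
    n = binary.size
    return lz_complexity(binary) * np.log2(n) / n

rng = np.random.default_rng(1)
print(normalized_lzc(rng.normal(size=2048)))   # illustrative random signal
```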

Relevance: 60.00%

Abstract:

Fresh-cut or minimally processed fruit and vegetables have been physically modified from their original form (by peeling, trimming, washing and cutting) to obtain a 100% edible product that is subsequently packaged (usually under modified atmosphere packaging, MAP) and kept in refrigerated storage. In fresh-cut products, physiological activity and microbiological spoilage determine their deterioration and shelf life. The major preservation techniques applied to delay spoilage are chilled storage and MAP, combined with chemical treatments (antimicrobial solutions, antibrowning agents, acidulants, antioxidants, etc.). The industry is looking for safer alternatives. Consequently, the sector is asking for innovative, fast, cheap and objective techniques to evaluate the overall quality and safety of fresh-cut products, in order to obtain decision tools for implementing new packaging materials and procedures. In recent years, hyperspectral imaging has been regarded as a tool for quality evaluation of food products in research, control and industry. A hyperspectral imaging system integrates spectroscopic and imaging techniques to enable direct identification of different components or quality characteristics and their spatial distribution in the tested sample. The objective of this work is to develop hyperspectral image processing methods for supervising, through plastic films, changes related to quality deterioration in packed ready-to-use leafy vegetables during shelf life. The evolution of ready-to-use spinach and watercress samples covered with three different common transparent plastic films was studied. Samples were stored at 4 ºC during the monitoring period (up to 21 days). More than 60 hyperspectral images (from 400 to 1000 nm) per species were analyzed using ad hoc routines and commercial toolboxes of MATLAB®. Besides common spectral treatments for removing additive and multiplicative effects, an additional correction was applied first to the leaf images in order to avoid the modification of their spectra caused by the transparent plastic film. Findings from this study suggest that the developed image analysis system is able to deal with the effects caused by the plastic films when supervising the shelf life of leafy vegetables, in which different quality stages were identified.
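One common way to remove additive and multiplicative effects from reflectance spectra, as mentioned above, is the standard normal variate (SNV) transform. The sketch below applies SNV pixel-wise to a synthetic hyperspectral cube; it does not reproduce the film-specific correction described in the paper.

```python
import numpy as np

def snv(cube):
    """Standard normal variate: centre and scale each pixel spectrum.
    cube has shape (rows, cols, bands)."""
    mean = cube.mean(axis=2, keepdims=True)
    std = cube.std(axis=2, keepdims=True)
    return (cube - mean) / np.where(std > 0, std, 1.0)

# Illustrative cube: 50x50 pixels, 121 bands (e.g. 400-1000 nm in 5 nm steps)
rng = np.random.default_rng(2)
cube = rng.uniform(0.1, 0.9, size=(50, 50, 121))
corrected = snv(cube)
print(corrected.shape, corrected[0, 0].mean().round(6))
```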

Relevance: 60.00%

Abstract:

Prostate cancer is the most prevalent cancer among men in the Western world and, despite having a relatively high survival rate, is the second leading cause of cancer death in this sector of the population. The treatment of choice against prostate cancer is, in most cases, external beam radiation therapy. The most modern external radiotherapy techniques, such as intensity-modulated radiotherapy, allow the dose to the tumor to be increased while reducing the dose to healthy tissue. However, the location of the target volume varies with the day of treatment, and only very small organ movements are needed to pull parts of the target volume out of the therapeutic region or to bring critical healthy tissues into it. More advanced techniques, such as image-guided radiotherapy (IGRT), have been developed to avoid this. IGRT is defined by a more precise handling of internal movements, adapting treatment planning based on anatomical information obtained from computed tomography (CT) images acquired prior to the therapy session. Adaptive radiotherapy further adds the dosimetric information of previous fractions to the anatomical information. One of the foundations of adaptive radiotherapy is deformable image registration, which is very useful for modeling the displacements and deformations of internal organs. However, its use brings new scientific and technological challenges in image processing, mainly associated with the variability of the organs, both in location and in appearance.

The aim of this thesis is to improve the clinical processes of automatic contour delineation and cumulative dose calculation for the planning and monitoring of adaptive radiotherapy treatments, based on new CT image processing methods (1) in the presence of varying contrasts and (2) of changes in the appearance of the rectum. It also aims (3) to provide tools for assessing the quality of the contours obtained for the gross tumor volume (GTV). The main contributions of this PhD thesis are as follows:

1. The adaptation, implementation and evaluation of a registration algorithm based on the optical flow of the image phase as a tool for the calculation of non-rigid transformations in the presence of intensity changes, and its applicability to adaptive radiotherapy treatments of prostate cancer with the use of radiological contrast agents. The results demonstrate that the selected algorithm gives better qualitative results in the presence of radiological contrast in the bladder and does not distort the image by forcing unrealistic deformations.

2. The definition, development and validation of a new method for masking the contents of the rectum (MER, Spanish acronym), and the assessment of its impact on the adaptive radiotherapy procedure for prostate cancer. The segmentations obtained by the MER for the creation of homogeneous masks in the session CT images significantly improve the results of the registration algorithms in the rectal region. The use of the proposed methodology increases the volume overlap index between manual and automatic rectum contours to 89%, close to the results obtained using manual masks for the registration of the two images. In this way, both the calculation of the new contours and the calculation of the accumulated dose can be corrected.

3. The definition of a methodology for assessing the quality of the GTV contours, which allows the spatial distribution of the error to be represented and adapts to non-convex volumes such as that formed by the prostate and the seminal vesicles. This evaluation methodology, based on a new three-dimensional reconstruction algorithm and a new quantification metric, gives accurate results with high spatial resolution in a time that is negligible compared with the registration time. This new methodology may be a useful tool for comparing different deformable registration algorithms oriented to adaptive radiotherapy for prostate cancer.

In conclusion, the work carried out in this PhD thesis corroborates the postulated research hypotheses and is intended to serve as a foundation for future advances in medical image processing for adaptive radiotherapy treatments of prostate cancer. It also opens new lines for the future application of medical image processing methods aimed at improving adaptive radiotherapy processes in the presence of changes in organ appearance and at increasing patient safety.
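The volume overlap index reported for the rectum contours can be illustrated with a Dice-style overlap between binary masks. This is a generic sketch on synthetic volumes, not the thesis code, and the exact definition used in the thesis may differ.

```python
import numpy as np

def volume_overlap(mask_a, mask_b):
    """Dice overlap between two binary volumes (1 = inside the contour)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Illustrative 3D masks (spheres of slightly different radius)
z, y, x = np.ogrid[-20:21, -20:21, -20:21]
r = np.sqrt(x**2 + y**2 + z**2)
manual, automatic = r < 15, r < 14
print(round(volume_overlap(manual, automatic), 3))
```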

Relevance: 60.00%

Abstract:

This project addresses optical-to-sound transduction using digital image processing methods. Only low-budget methods are considered, so the whole optical-to-sound conversion process is carried out with a computer and a domestic scanner. Since the main objective of the project is to test whether digital image processing is viable as a converter, the use of professional equipment was not contemplated. The usefulness of this project lies in restoring the sound of film material with degradation so severe that playback on a projector is not possible. With the proposed prototype, implemented in Matlab, the analogue audio of films in poor condition can be digitized, since the audio capture is performed optically on the soundtracks. The results of this project are especially important considering the amount of cinematographic material stored on cellulose film. Preserving such material requires very specific storage conditions so that the medium is not affected, but over time it is common for film reels to present deformations or even breakage. By applying digital image processing methods it is possible to restore the audio of film fragments that cannot be exposed to the tension produced by projector rollers; it is even possible to recover the audio of specific frames, since the audio is digitized by capturing the image of the waveform. For this reason, the procedure used to digitize the film must be minimally intrusive to guarantee the preservation of the film medium. It should be noted that in this project the optical-to-sound conversion was performed on the analogue variable-area soundtracks present in the film, but the procedure is also applicable to variable-density tracks with modifications to the prototype. The latter is beyond the scope of this project, but could be future work.
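The core of such a conversion, turning the scanned image of a variable-area track into audio samples, can be sketched as follows. This is a simplified illustration in Python rather than the Matlab prototype described above; the scan orientation, brightness threshold, file names and sample rate are assumptions.

```python
import numpy as np
from PIL import Image

def soundtrack_to_audio(image_path, threshold=128):
    """Convert a scanned variable-area optical soundtrack to audio samples by
    measuring the width of the transparent (bright) area on each scan line."""
    img = np.asarray(Image.open(image_path).convert("L"))
    # Film travel is assumed to run along the image rows: one row = one sample
    width = (img > threshold).sum(axis=1).astype(float)
    audio = width - width.mean()                # remove the DC offset
    audio /= max(np.abs(audio).max(), 1e-12)    # normalize to [-1, 1]
    return audio

# Hypothetical usage (file name and sample rate are assumptions):
# from scipy.io import wavfile
# audio = soundtrack_to_audio("scanned_track.png")
# wavfile.write("restored.wav", 48000, (audio * 32767).astype("int16"))
```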

Relevance: 60.00%

Abstract:

In order to improve the body of knowledge about brain injury, it is essential to develop image databases with different types of injuries. This paper proposes a new methodology to model three types of brain injury (stroke, tumor and traumatic brain injury) and implements a system to navigate among simulated MRI studies. These studies can be used in research, to validate new processing methods, and as an educational tool to show different types of brain injury and how they affect neuroanatomical structures.

Relevance: 60.00%

Abstract:

Many image processing methods, such as techniques for people re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations by using changes in background areas to correct foreground areas. The authors assume a multiple-light-source model in which all light sources can have different colours and can change over time. In training mode, the authors learn per-location relations between foreground and background colour intensities. In correction mode, the authors apply a double linear correction model based on the learned relations. This double linear correction includes a dynamic local illumination correction mapping as well as an inter-camera mapping. The authors evaluate their illumination correction by computing the similarity between two images based on the earth mover's distance. The authors compare the results to a representative auto-exposure algorithm from the recent literature and to a colour correction algorithm based on inverse-intensity chromaticity. Especially in complex scenarios, the authors' method outperforms these state-of-the-art algorithms.
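A minimal sketch of learning a linear relation from background observations and applying it to foreground pixels is shown below. It implements a single linear fit on synthetic data, not the authors' full double linear correction model.

```python
import numpy as np

def fit_linear_correction(bg_ref, bg_cur):
    """Least-squares fit of cur = gain * ref + offset from background pixels."""
    A = np.column_stack([bg_ref, np.ones_like(bg_ref)])
    gain, offset = np.linalg.lstsq(A, bg_cur, rcond=None)[0]
    return gain, offset

def correct_foreground(fg_cur, gain, offset):
    """Map current-frame foreground intensities back to reference conditions."""
    return (fg_cur - offset) / gain

# Synthetic example: illumination in the current frame is 0.8x + 10
rng = np.random.default_rng(3)
bg_ref = rng.uniform(50, 200, 500)
bg_cur = 0.8 * bg_ref + 10 + rng.normal(0, 1, 500)
gain, offset = fit_linear_correction(bg_ref, bg_cur)
print(round(gain, 3), round(offset, 2))
```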

Relevance: 60.00%

Abstract:

Hyperspectral imaging allows high-resolution spectral information to be collected: hundreds of bands covering the spectrum from the infrared to the ultraviolet. These images have had a strong impact in the medical field; in particular, their use in the detection of different types of cancer stands out. In this field, one of the main current problems is real-time analysis, because these images involve large data volumes and require high computational power. One of the main research lines addressing this processing time is based on splitting the analysis across several cores working in parallel. In line with this research, this work develops a library for the RVC-CAL language (a language especially intended for multimedia applications that allows parallelization to be expressed in an intuitive way) that gathers the functions needed to implement the Support Vector Machine (SVM) classifier. This work complements the research conducted in [1] and [2], where the functions needed to implement a processing chain that uses the unmixing method to process hyperspectral images were developed. The document is divided into several parts. The first presents the motivation for this research work and the objectives to be achieved. This is followed by a broad study of the current state of the art, explaining hyperspectral images, their processing methods and, in particular, the method used by the SVM classifier. Once the theoretical basis has been presented, we explain the methodology followed to translate a Matlab version of the SVM classifier, optimized to analyze hyperspectral images, into the RVC-CAL language; an important point of this part is that the sequential version of the algorithm is developed and the basis for a future parallelization of the classifier is laid. After explaining the method used, the results are presented, first comparing both versions and then analyzing the RVC-CAL version stage by stage. Finally, conclusions obtained from analyzing the two versions of the SVM classifier in terms of quality of results and processing times are given, and possible future lines of work related to these results are proposed.
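As an illustration of the pixel-wise SVM classification workflow that such a library implements, the sketch below uses scikit-learn in Python on synthetic data rather than Matlab or RVC-CAL; the band count, labels and parameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic hyperspectral cube: 40x40 pixels, 120 bands, two spectral classes
rng = np.random.default_rng(4)
labels_img = (rng.random((40, 40)) > 0.5).astype(int)
signatures = np.stack([np.linspace(0.2, 0.6, 120), np.linspace(0.6, 0.2, 120)])
cube = signatures[labels_img] + rng.normal(0, 0.02, (40, 40, 120))

# Flatten to (n_pixels, n_bands) and train on a labelled subset
X = cube.reshape(-1, 120)
y = labels_img.ravel()
train_idx = rng.choice(X.shape[0], size=200, replace=False)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[train_idx], y[train_idx])

# Classify every pixel and reshape back to the image grid
pred_map = clf.predict(X).reshape(40, 40)
print((pred_map == labels_img).mean())   # pixel-wise accuracy on synthetic data
```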

Relevance: 60.00%

Abstract:

Multibeam bathymetric data collected in the Puerto Rico Trench and northeastern Caribbean region are compiled into a seamless bathymetric terrain model for broad-scale geological investigations of the trench system. These data, collected during eight separate surveys between 2002 and 2013 and covering almost 180,000 square kilometers, are published here as a large-format map sheet and as digital spatial data. This report describes the common multibeam data collection and processing methods used to produce the bathymetric terrain model and the corresponding data-source polygon. Details documenting the complete provenance of the data are provided in the metadata in the Data Catalog section.

Relevance: 60.00%

Abstract:

Underwater Passive Acoustic Monitoring (PAM) refers to the use of underwater listening and recording systems to detect, monitor and identify sound sources through the pressure waves they produce. It is said to be passive because such systems only listen, without disturbing the existing acoustic environment, unlike active systems such as sonars. Underwater PAM has several areas of application, such as military surveillance systems, port security, environmental monitoring, the development of species population density indices, species identification, etc. Despite its importance, national technology in this area is practically nonexistent. In this context, the present work aims to contribute to the development of national technology in this field through the design, construction and operation of autonomous PAM equipment and of signal processing methods for the automated detection of underwater acoustic events. A device named OceanPod was developed, featuring low manufacturing cost, flexibility, and ease of configuration and use, aimed at scientific and industrial research and at environmental control. Several prototypes of this device were built and used in missions at sea. These monitoring campaigns made it possible to start building an acoustic database, which provided the raw material for testing automated, real-time acoustic event detectors. In addition, a new method for detecting and identifying acoustic events is proposed, based on statistical analysis of the time-frequency representation of the acoustic signals. This new method was tested on the detection of cetaceans present in the database generated by the monitoring missions.
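A minimal sketch of a detector based on statistics of a time-frequency representation is shown below: a robust energy threshold on spectrogram frames, applied to synthetic data. It is far simpler than the method proposed in the dissertation, and the frequency band and threshold are assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def detect_events(signal, fs, band=(2000, 8000), n_std=3.0):
    """Flag spectrogram frames whose in-band energy exceeds the median
    by n_std robust standard deviations (median absolute deviation based)."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=512)
    band_mask = (f >= band[0]) & (f <= band[1])
    energy = Sxx[band_mask].sum(axis=0)
    med = np.median(energy)
    mad = np.median(np.abs(energy - med)) + 1e-12
    return t[energy > med + n_std * 1.4826 * mad]

# Synthetic test: background noise with a short 5 kHz whistle-like event
fs = 48000
x = np.random.default_rng(5).normal(0, 1, fs * 4)
tt = np.arange(fs) / fs
x[fs:2 * fs] += 5 * np.sin(2 * np.pi * 5000 * tt)
print(detect_events(x, fs))   # times (s) of frames flagged as events
```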