867 results for image processing and analysis
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Since the end of the last century, digital image processing and analysis has become a powerful tool for investigating soil properties at multiple resolutions; however, no definitive procedure for this kind of work exists yet. The main problem in studying vertical drainage from the moisture distribution of a vertisol profile is finding suitable methods based on these techniques. The general objective was to implement a digital image processing and analysis methodology to characterize the distribution of moisture content in a vertisol profile. For the study, twelve soil pits were excavated in a bare Mazic Pellic Vertisol, six of them on May 13, 2011 and the rest on May 19, 2011, after moderate rainfall events. RGB images of the profiles were taken with a Kodak™ camera at a selected size of 1600 x 945 pixels each. Every image was processed to homogenize brightness, and smoothing filters of different window sizes were applied until the optimum was found. Each image was then split into its component color matrices, and thresholds were selected for each one to obtain a binary digital pattern. This pattern was analyzed by estimating two fractal scaling exponents, the box-counting dimension (DBC) and the wet-dry interface fractal dimension (Di), and by determining three prefractal coefficients at the maximum resolution: the total number of boxes intercepting the plane of the pattern (A), the fractal lacunarity (λ1) and the Shannon entropy (S1). For all the images obtained, and based on entropy, cluster and histogram analyses, the 9x9 spatial filter proved to have the optimal window size. Thresholds were selected from the bimodal character of the histograms. The resulting binary patterns showed moist (white) and dry (black) areas that allowed their analysis. All the parameters obtained showed significant differences between the two sets of spatial patterns. While the fractal exponents provide information on the space-filling characteristics of the moisture pattern, the prefractal coefficients represent properties of the soil investigated. Fractal lacunarity was the best discriminator between the apparent soil moisture patterns.
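As an illustration of the box-counting step described above, the following is a minimal sketch (not the authors' implementation) of estimating the box-counting dimension DBC of a binary moisture pattern stored as a NumPy array; the function name and the set of box sizes are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(binary_pattern, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension (DBC) of a 2-D binary pattern.

    For each box size s, the image is tiled into s x s boxes and the number
    of boxes containing at least one foreground (moist) pixel is counted.
    DBC is the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in box_sizes:
        # Trim the pattern so it divides evenly into s x s boxes.
        h = (binary_pattern.shape[0] // s) * s
        w = (binary_pattern.shape[1] // s) * s
        trimmed = binary_pattern[:h, :w]
        # Reduce each s x s box to True if it intercepts the pattern.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Linear fit in log-log space: log N(s) = -DBC * log s + const.
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Example on a synthetic moisture pattern (1 = moist, 0 = dry).
pattern = np.random.rand(945, 1600) > 0.5
print(box_counting_dimension(pattern))
```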
Abstract:
Background: Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, large numbers of frustules usually need to be identified and/or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results: The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation through object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high-throughput analyses with minimal manual intervention. Conclusions: Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large numbers of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; and minimizing the need for manual quality control and corrections while still enabling them. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
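The outline-descriptor step that SHERPA automates can be sketched roughly as follows; this is not SHERPA's code, just a minimal illustration using scikit-image of segmenting one object and computing a few common outline shape descriptors. The function and dictionary keys are hypothetical.

```python
import numpy as np
from skimage import filters, measure

def outline_descriptors(gray_image):
    """Segment the largest object in a micrograph and compute a few
    outline-based shape descriptors of the kind used for diatom valves."""
    # One simple segmentation candidate: a global Otsu threshold.
    # (SHERPA applies several segmentation methods and keeps the best per object.)
    mask = gray_image > filters.threshold_otsu(gray_image)
    regions = measure.regionprops(measure.label(mask))
    if not regions:
        return None
    largest = max(regions, key=lambda r: r.area)
    perimeter = largest.perimeter or np.nan   # guard against degenerate regions
    return {
        "area": largest.area,
        "perimeter": perimeter,
        "circularity": 4.0 * np.pi * largest.area / perimeter ** 2,
        "eccentricity": largest.eccentricity,
    }
```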
Abstract:
The compound eyes of mantis shrimps, a group of tropical marine crustaceans, incorporate principles of serial and parallel processing of visual information that may be applicable to artificial imaging systems. Their eyes include numerous specializations for analysis of the spectral and polarizational properties of light, and include more photoreceptor classes for analysis of ultraviolet light, color, and polarization than occur in any other known visual system. This is possible because receptors in different regions of the eye are anatomically diverse and incorporate unusual structural features, such as spectral filters, not seen in other compound eyes. Unlike eyes of most other animals, eyes of mantis shrimps must move to acquire some types of visual information and to integrate color and polarization with spatial vision. Information leaving the retina appears to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels. Many of these unusual features of mantis shrimp vision may inspire new sensor designs for machine vision.
Abstract:
International School of Photonics, Cochin University of Science and Technology
Abstract:
Today, several unsupervised classification algorithms are commonly used to cluster similar patterns in a data set based only on its statistical properties. Especially in image data applications, self-organizing methods for unsupervised classification have been successfully applied to cluster pixels or groups of pixels in order to perform segmentation tasks. The first important contribution of this paper is the development of a self-organizing method for data classification, named the Enhanced Independent Component Analysis Mixture Model (EICAMM), built by introducing modifications to the Independent Component Analysis Mixture Model (ICAMM). These modifications address some of the model's limitations and make it more efficient. Moreover, a pre-processing methodology is also proposed, based on combining Sparse Code Shrinkage (SCS) for image denoising with the Sobel edge detector. In the experiments of this work, EICAMM and other self-organizing models were applied to segment images in their original and pre-processed versions. A comparative analysis showed satisfactory and competitive image segmentation results for the proposed methods.
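A rough sketch of the kind of pre-processing described (denoising followed by Sobel edge detection) is given below. Note that a Gaussian filter is used here only as a simple stand-in for the Sparse Code Shrinkage step, which is considerably more involved; the function name is an assumption.

```python
import numpy as np
from scipy import ndimage

def preprocess_for_segmentation(image):
    """Denoise an image and compute a Sobel edge-magnitude map.

    The paper combines Sparse Code Shrinkage (SCS) denoising with the
    Sobel detector; a Gaussian filter is used here only as a simple
    placeholder for the SCS step."""
    denoised = ndimage.gaussian_filter(image.astype(float), sigma=1.0)
    gx = ndimage.sobel(denoised, axis=1)  # horizontal gradient
    gy = ndimage.sobel(denoised, axis=0)  # vertical gradient
    edges = np.hypot(gx, gy)              # gradient magnitude
    return denoised, edges
```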
Abstract:
Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, from the Universidade Nova de Lisboa, Faculty of Sciences and Technology.
Abstract:
Validation is the main bottleneck preventing the adoption of many medical image processing algorithms in clinical practice. In the classical approach, a posteriori analysis is performed based on some objective metrics. In this work, a different approach based on Petri Nets (PN) is proposed. The basic idea consists in predicting the accuracy that will result from a given processing based on the characterization of the sources of inaccuracy of the system. Here we propose a proof of concept in the scenario of a diffusion imaging analysis pipeline. A PN is built after the detection of the possible sources of inaccuracy. By integrating the first qualitative insights based on the PN with quantitative measures, it is possible to optimize the PN itself, to predict the inaccuracy of the system in a different setting. Results show that the proposed model provides a good prediction performance and suggests the optimal processing approach.
Abstract:
This paper presents a method for automatic identification of dust devil tracks in MOC NA and HiRISE images of Mars. The method is based on Mathematical Morphology and is able to successfully process those images despite their differences in spatial resolution and scene size. A dataset of 200 images of the Martian surface, representative of the diversity of those track features, was used for developing, testing and evaluating the method, comparing the outputs against manually produced reference images. The analysis showed a mean accuracy of about 92%. We also give some examples of how to use the results to obtain information about dust devils, namely mean width, main direction of movement and coverage per scene.
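A generic illustration of the kind of morphological enhancement such a method might start from is sketched below: a black top-hat to highlight dark curvilinear tracks, followed by a global threshold. This is only an assumption-laden sketch with hypothetical names, not the published pipeline.

```python
import numpy as np
from scipy import ndimage

def dark_track_mask(image, structure_size=15, k=1.5):
    """Enhance dark curvilinear features (e.g. dust devil tracks) with a
    black top-hat (morphological closing minus the original image), then
    apply a simple global threshold."""
    img = image.astype(float)
    closed = ndimage.grey_closing(img, size=structure_size)
    black_tophat = closed - img        # bright where the image is locally dark
    thr = black_tophat.mean() + k * black_tophat.std()
    return black_tophat > thr
```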
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, there are still difficulties in accurately quantifying perfusion parameters due, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the sources of variability of this technique. First, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual input one compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable, but the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparison of the results with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI-based and map-based approaches provide consistent BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; in the pixel-by-pixel approach, too, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
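For reference, the maximum slope method mentioned above is commonly formulated as BFa = max d(tissue TAC)/dt divided by the peak aortic enhancement. A minimal sketch follows, with hypothetical function and variable names and assuming baseline-subtracted attenuation curves; in practice the result is usually rescaled to mL/min per 100 mL of tissue.

```python
import numpy as np

def bfa_maximum_slope(time_s, tissue_tac, aorta_tac):
    """Arterial blood flow (BFa) by the maximum slope method:
    BFa = max d(tissue TAC)/dt / peak aortic enhancement.
    TACs are assumed to be baseline-subtracted curves in HU."""
    max_slope = np.max(np.gradient(tissue_tac, time_s))  # HU / s
    peak_aortic = np.max(aorta_tac)                       # HU
    return max_slope / peak_aortic                        # 1 / s, before unit rescaling
```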
Abstract:
Extensive experience with the analysis of human prophase chromosomes and studies into the complexity of prophase GTG-banding patterns have suggested that at least some prophase chromosomal segments can be accurately identified and characterized independently of the morphology of the chromosome as a whole. In this dissertation, the feasibility of identifying and analyzing specified prophase chromosome segments was thus investigated as an alternative approach to prophase chromosome analysis based on whole chromosome recognition. Through the use of prophase idiograms at the 850-band stage (FRANCKE, 1981) and a comparison system based on the calculation of cross-correlation coefficients between idiogram profiles, we have demonstrated that it is possible to divide the 24 human prophase idiograms into a set of 94 unique band sequences. Each unique band sequence has a banding pattern that is recognizable and distinct from any other non-homologous chromosome portion. Using chromosomes 11p and 16 through 22 to demonstrate unique band sequence integrity at the chromosome level, we found that prophase chromosome banding pattern variation can be compensated for and that a set of unique band sequences very similar to those at the idiogram level can be identified on actual chromosomes. The use of a unique band sequence approach in prophase chromosome analysis is expected to increase efficiency and sensitivity through more effective use of available banding information. The use of a unique band sequence approach to prophase chromosome analysis is discussed both at the routine level by cytogeneticists and at an image processing level with a semi-automated approach to prophase chromosome analysis.
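The profile comparison described above relies on a cross-correlation coefficient between band-intensity profiles. A minimal sketch of such a coefficient for two equal-length, resampled profiles is given below; the function name and normalization choice are illustrative assumptions, not the dissertation's exact procedure.

```python
import numpy as np

def profile_correlation(profile_a, profile_b):
    """Pearson-style correlation coefficient between two 1-D banding-pattern
    profiles of equal length (e.g. resampled idiogram band-intensity
    sequences). Values near 1 indicate closely matching banding patterns."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    a = (a - a.mean()) / a.std()   # zero-mean, unit-variance normalization
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```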
Abstract:
In this PhD thesis proposal, the principles of diffusion MRI (dMRI) as applied to mapping human brain connectivity are reviewed. The background section covers the fundamentals of dMRI, with special focus on those related to the distortions caused by susceptibility inhomogeneity across tissues. A thorough survey of the available correction methodologies for this common dMRI artifact is also presented. Two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue, allowing the pathways of fiber bundles connecting the cortical regions to be tracked across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably well, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
Abstract:
"COO-2118-0029."
Abstract:
Three-dimensional spectroscopy techniques are becoming more and more popular, producing an increasing number of large data cubes. The challenge of extracting information from these cubes requires the development of new techniques for data processing and analysis. We apply the recently developed technique of principal component analysis (PCA) tomography to a data cube from the center of the elliptical galaxy NGC 7097 and show that this technique is effective in decomposing the data into physically interpretable information. We find that the first five principal components of our data are associated with distinct physical characteristics. In particular, we detect a low-ionization nuclear-emitting region (LINER) with a weak broad component in the Balmer lines. Two images of the LINER are present in our data, one seen through a disk of gas and dust, and the other after scattering by free electrons and/or dust particles in the ionization cone. Furthermore, we extract the spectrum of the LINER, decontaminated from stellar and extended nebular emission, using only the technique of PCA tomography. We anticipate that the scattered image has polarized light due to its scattered nature.
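PCA tomography as used here can be summarized as an ordinary PCA of the data cube rearranged so that each spaxel spectrum is a row of the data matrix; the principal components then give "eigenspectra" and their per-spaxel projections give "tomograms". The following minimal sketch (illustrative, not the authors' code; the function name is an assumption) produces both with NumPy.

```python
import numpy as np

def pca_tomography(cube):
    """PCA tomography of a spectral data cube with shape (ny, nx, n_wave).

    Returns the eigenspectra (principal components in wavelength space),
    the tomograms (per-spaxel weights of each component, as images), and
    the fraction of variance carried by each component."""
    ny, nx, nw = cube.shape
    data = cube.reshape(ny * nx, nw).astype(float)
    data -= data.mean(axis=0)                     # remove the mean spectrum
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    eigenspectra = vt                             # shape (n_components, n_wave)
    tomograms = (u * s).reshape(ny, nx, -1)       # shape (ny, nx, n_components)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    return eigenspectra, tomograms, variance_fraction
```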