35 results for image processing and analysis

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

Since the end of the last century, digital image processing and analysis has become a powerful tool for investigating soil properties at multiple resolutions; however, no definitive procedure has yet emerged from this work. The main problem in studying vertical drainage from the moisture distribution of a vertisol profile is finding feasible methods that use this approach. The general objective was to implement a digital image processing and analysis methodology to characterize the moisture content distribution of a vertisol profile. For the study, twelve soil pits were excavated in a bare Mazic Pellic Vertisol, six of them on May 13, 2011 and the rest on May 19, 2011, after moderate rainfall events. RGB images of the profiles were taken with a Kodak™ camera at a selected size of 1600 x 945 pixels; each image was processed to homogenize brightness, and smoothing filters of different window sizes were applied until the optimum was found. Each image was split into its component matrices, and thresholds were selected for each one to obtain the binary digital pattern. The latter was analyzed by estimating two fractal scaling exponents: the box-counting dimension (DBC) and the wet-dry interface fractal dimension (Di). In addition, three prefractal coefficients were determined at the maximum resolution: the total number of boxes intersecting the pattern plane (A), the fractal lacunarity (λ1), and the Shannon entropy (S1). For all the images obtained, based on entropy, cluster analysis, and histogram analysis, the 9x9 spatial filter proved to be the optimal window size. Thresholds were selected from the bimodal character of the histograms. The binary patterns obtained showed wet (white) and dry (black) areas that allowed their analysis. All the parameters obtained showed significant differences between the two sets of spatial patterns. While the fractal exponents provide information about the filling characteristics of the moisture pattern, the prefractal coefficients represent properties of the soil under investigation. Fractal lacunarity was the best discriminator between the apparent soil moisture patterns.
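The box-counting dimension (DBC) mentioned above can be sketched generically: cover the binary pattern with boxes of decreasing side length s, count the boxes N(s) that intersect wet pixels, and fit the slope of log N(s) versus log(1/s). This is an illustrative implementation of the standard estimator, not the authors' code, and the all-wet test pattern is hypothetical.

```python
# Generic box-counting dimension estimator for a binary (wet/dry) pattern.
import numpy as np

def box_counting_dimension(pattern, box_sizes=(1, 2, 4, 8, 16)):
    """Fit log(N(s)) vs log(1/s), where N(s) counts boxes of side s
    that intersect the foreground (wet) pixels."""
    counts = []
    n = pattern.shape[0]
    for s in box_sizes:
        # Partition the image into s x s boxes and count the non-empty ones.
        trimmed = pattern[:n - n % s, :n - n % s]
        boxes = trimmed.reshape(n // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A completely wet pattern fills the plane, so its dimension is ~2.
pattern = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(pattern), 2))  # → 2.0
```

For a fractal wet-dry boundary the fitted slope falls strictly between 1 and 2, which is what makes DBC a useful descriptor of how the moisture pattern fills the profile.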

Relevance: 100.00%

Abstract:

In this PhD thesis proposal, the principles of diffusion MRI (dMRI) are reviewed in their application to mapping human brain connectivity. The background section covers the fundamentals of dMRI, with special focus on the distortions caused by susceptibility inhomogeneity across tissues. A thorough survey of the available correction methodologies for this common dMRI artifact is also presented, and two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.

Relevance: 100.00%

Abstract:

The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of fiber bundles connecting the cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. Extracting the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to defining standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task and enables processing in the native space of diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that addresses the lack of gold-standard data. The three correction methodologies under comparison performed reasonably well, and it is difficult to determine which method is the most advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increasing the sensitivity of the whole pipeline without any loss in specificity.
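The connectome representation described above can be illustrated with a toy example: cortical regions become nodes, streamline counts become weighted edges, and graph-theory summaries are computed from the adjacency matrix. The 4-region matrix and its values are purely illustrative, not data from the thesis.

```python
# Toy connectome: symmetric weighted adjacency matrix where entry (i, j)
# is the streamline count between regions i and j (zero diagonal).
import numpy as np

connectome = np.array([
    [0, 12,  3,  0],
    [12, 0,  7,  5],
    [3,  7,  0,  1],
    [0,  5,  1,  0],
])

# Two simple graph-theory summaries used on connectomes:
node_strength = connectome.sum(axis=1)                     # weighted degree
density = np.count_nonzero(np.triu(connectome)) / (4 * 3 / 2)  # edges present / possible
print(node_strength.tolist(), round(density, 3))
```

Susceptibility-derived distortions perturb exactly these numbers: streamlines terminate in the wrong region, shifting edge weights, which is why the workflows need the quality control protocols discussed above.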

Relevance: 100.00%

Abstract:

Monument conservation is related to the interaction between the original petrological parameters of the rock and external factors in the area where the building is sited, such as weather conditions and pollution. Depending on the environmental conditions and the characteristics of the materials used, different types of weathering predominate. In all cases, the appearance of surface crusts constitutes a first stage, whose origin can often be traced to the properties of the material itself. In the present study, different colours of "patinas" were distinguished by defining the threshold grey levels associated with each "pathology" in the histogram. These data were compared with background information and other parameters, such as mineralogical composition and porosity, as well as other visual signs of deterioration. The result is a map of the pathologies associated with "cover films" on monuments, generated by relating colour characteristics to the properties or zones of interest.
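The grey-level thresholding step described above can be sketched as follows: pixels whose grey value falls inside a band associated with one pathology are flagged, yielding a binary patina map. The band limits and the tiny test image are illustrative, not values from the study.

```python
# Flag pixels whose grey level lies inside a pathology-specific band.
import numpy as np

def patina_map(grey, lo, hi):
    """Return a boolean mask of pixels whose grey level lies in [lo, hi]."""
    return (grey >= lo) & (grey <= hi)

grey = np.array([[ 10,  60, 120],
                 [200,  90,  40],
                 [ 70, 250, 100]], dtype=np.uint8)

mask = patina_map(grey, 50, 110)   # hypothetical band for one patina colour
print(mask.sum())                  # → 4 pixels flagged as this patina
```

Repeating the masking with one band per pathology and overlaying the masks produces the kind of pathology map the abstract describes.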

Relevance: 100.00%

Abstract:

To properly understand and model animal embryogenesis, it is crucial to obtain detailed measurements, in both time and space, of gene expression domains and cell dynamics. This challenge has been confronted in recent years by a surge of atlases that integrate a statistically relevant number of individuals to obtain robust, complete information about the spatiotemporal locations of gene expression patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.

Relevance: 100.00%

Abstract:

Evolvable Hardware (EH) is a technique that uses reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system is a fully FPGA-implemented scalable EH platform in which the Reconfigurable processing Core (RC) can adaptively increase or decrease in size. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core able to change its size in both height and width. The system is used to implement image filters, which are generated autonomously through the evolutionary process, and is complemented with a camera that enables its use in real-time applications.
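The evolutionary loop that generates filters on such a platform can be sketched in software as a simple (1+4) evolution strategy: mutate a candidate 3x3 kernel, keep the fittest, and repeat. The fitness target (a Laplacian edge kernel) and all parameters are illustrative; the real platform evolves hardware configurations on the RC, not Python lists.

```python
# Software sketch of a (1+4) evolution strategy evolving a 3x3 filter kernel.
import random

TARGET = [0, -1, 0, -1, 4, -1, 0, -1, 0]   # e.g. a Laplacian edge filter

def fitness(kernel):
    # Lower is better: squared distance to the reference kernel.
    return sum((a - b) ** 2 for a, b in zip(kernel, TARGET))

def mutate(kernel, rate=0.3):
    # Each coefficient mutates by -1, 0, or +1 with probability `rate`.
    return [k + random.choice([-1, 0, 1]) if random.random() < rate else k
            for k in kernel]

random.seed(0)
parent = [0] * 9                            # start from an all-zero kernel
for generation in range(500):
    offspring = [mutate(parent) for _ in range(4)]
    parent = min(offspring + [parent], key=fitness)  # elitist selection
print(fitness(parent))  # converges towards 0
```

On the actual platform the same select-mutate-evaluate cycle runs with the RC configuration as the genome and the filter's output error as the fitness.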

Relevance: 100.00%

Abstract:

NIR hyperspectral imaging (1000-2500 nm) combined with IDC allowed the detection of peanut traces down to adulteration percentages of 0.01%. Contrary to PLSR, IDC does not require a calibration set; it uses both expert and experimental information and is suitable for the quantification of a compound of interest in complex matrices. The results obtained show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures such as RT-PCR and ELISA.

Relevance: 100.00%

Abstract:

As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance: even though the resources available per chip keep increasing, the frequency of operation has stalled. Moreover, as the level of integration increases, it is difficult to keep defect density under control, so new fault-tolerance techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3) allowing dynamic and context-aware use of resources is implemented on a high-performance wireless sensor node (HiReCookie) to perform an image processing application.

Relevance: 100.00%

Abstract:

Video analytics play a critical role in recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most works reported on image-based vehicle verification use supervised classification approaches resorting to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the potential of combining popular features for vehicle classification has been reported. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, their combination capabilities are explored, and a methodology is presented for the fusion of classifiers built upon them that also takes the vehicle pose into account. The study unveils the limitations of single-feature classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification.
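One common form of classifier fusion, score-level fusion with a weighted sum, can be sketched as follows. Each feature-specific classifier (e.g. HOG-, PCA-, or Gabor-based) outputs a vehicle/non-vehicle confidence, and the weighted combination decides. The scores, weights, and threshold below are illustrative, not the study's fitted values or its exact fusion rule.

```python
# Weighted score-level fusion of per-feature classifier confidences.
def fuse_scores(scores, weights):
    """Weighted average of per-classifier confidence scores in [0, 1]."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical per-classifier confidences for one candidate window:
hog_score, pca_score, gabor_score = 0.9, 0.6, 0.75
fused = fuse_scores([hog_score, pca_score, gabor_score], [0.5, 0.2, 0.3])
print(fused > 0.5)  # classify as "vehicle" if the fused score passes 0.5
```

Pose-aware fusion, as in the study, would additionally select or reweight the classifiers according to the estimated vehicle pose before combining the scores.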

Relevance: 100.00%

Abstract:

This paper presents a computer vision system that successfully discriminates between weed patches and crop rows under uncontrolled lighting in real time. The system consists of two independent subsystems: a fast image processing stage delivering results in real time (Fast Image Processing, FIP), and a slower but more accurate stage (Robust Crop Row Detection, RCRD) used to correct the first subsystem's mistakes. This combination produces a system that achieves very good results under a wide variety of conditions. Tested on several maize videos taken in different fields and in different years, the system successfully detects an average of 95% of weeds and 80% of crops under different illumination, soil humidity, and weed/crop growth conditions. Moreover, the system produces acceptable results even under very difficult conditions, such as in the presence of dramatic sowing errors or abrupt camera movements. The computer vision system has been developed for integration into a treatment system, because the ideal setup for any weed sprayer would include a tool providing information on the weeds and crops present at each point in real time while the tractor mounting the spraying bar is moving.

Relevance: 100.00%

Abstract:

A first study toward constructing a simple model of the mammalian retina is reported. The basic elements of this model are Optical Programmable Logic Cells (OPLCs), previously employed as functional elements for optical computing. The same type of circuit simulates the five types of neurons present in the retina; different responses are obtained by modifying either internal or external connections. Two types of behavior are reported: symmetrical and non-symmetrical with respect to light position. Some higher functions, such as the ability to differentiate between symmetric and non-symmetric light images, are performed by a further simulation of the first layers of the visual cortex. The possibility of applying these models to image processing is discussed.

Relevance: 100.00%

Abstract:

Most present digital image processing methods are concerned with the objective characterization of external properties such as shape, form, or colour. This information describes objective characteristics of different bodies and is used to extract details for several different tasks. On some occasions, however, another type of information is needed, namely when the image processing system is to be applied to operations involving living bodies. In such cases, other kinds of object information may be useful; indeed, they may provide additional knowledge about subjective properties. Some of these properties are object symmetry, parallelism between lines, and the feeling of size. Such properties relate more to the internal sensations of living beings interacting with their environment than to the objective information obtained by artificial systems. This paper presents an elemental system able to detect some of the above-mentioned parameters. A first mathematical model to analyze these situations is reported; this theoretical model opens the possibility of implementing a simple working system. The basis of this system is the use of optical logic cells, previously employed in optical computing.

Relevance: 100.00%

Abstract:

Long-length ultrafine-grained (UFG) Ti rods are produced by equal-channel angular pressing via the conform scheme (ECAP-C) at 200 °C, followed by drawing at 200 °C. The evolution of microstructure, macrotexture, and mechanical properties (yield strength, ultimate tensile strength, failure stress, uniform elongation, elongation to failure) of pure Ti during this thermo-mechanical processing is studied. Special attention is also paid to the effect of microstructure on the mechanical behavior of the material after macrolocalization of plastic flow. The number of ECAP-C passes varies in the range of 1-10. The microstructure becomes more refined with an increasing number of ECAP-C passes; formation of a homogeneous microstructure with a grain/subgrain size of 200 nm and its saturation after 6 ECAP-C passes are observed. Strength properties increase with the number of ECAP-C passes and saturate after 6 passes at a yield strength of 973 MPa, an ultimate tensile strength of 1035 MPa, and a true failure stress of 1400 MPa (from 625, 750, and 1150 MPa in the as-received condition). The uniform elongation decreases after ECAP-C processing, while the reduction of area and true strain to failure do not. The sample after 6 ECAP-C passes is subjected to drawing at 200 °C, resulting in a reduction of the grain/subgrain size to 150 nm, formation of a (10-10) fiber texture with respect to the rod axis, and a further increase of the yield strength up to 1190 MPa, the ultimate tensile strength up to 1230 MPa, and the true failure stress up to 1600 MPa. It is demonstrated that UFG CP Ti has low resistance to macrolocalization of plastic deformation and high resistance to crack formation after necking.

Relevance: 100.00%

Abstract:

A new technology is proposed to address unintentional face detection and recognition in pictures, allowing the individuals appearing in them to express their privacy preferences through the use of different tags. Existing methods for face de-identification are mostly ad hoc solutions that provide only an absolute binary outcome, such as pixelation or a bar mask. As social networks and their numbers of users grow, our privacy preferences may become more complex, rendering these absolute binary solutions obsolete. The proposed technology overcomes this problem by embedding information in a tag placed close to the face without being disruptive. Through a decoding method, the tag provides the preferences to be applied to the images in further stages.