61 results for Digital image processing
at Universidad Politécnica de Madrid
Abstract:
Monument conservation is related to the interaction between the original petrological parameters of the rock and external factors in the area where the building is sited, such as weather conditions, pollution, and so on. Depending on the environmental conditions and the characteristics of the materials used, different types of weathering predominate. In all cases, the appearance of surface crusts constitutes a first stage, whose origin can often be traced to the properties of the material itself. In the present study, different colours of “patinas” were distinguished by defining the threshold grey levels associated with each “pathology” in the histogram. These data were compared with background information and other parameters, such as mineralogical composition, porosity, and so on, as well as other visual signs of deterioration. The result is a map of the pathologies associated with “cover films” on monuments, generated as images that relate colour characteristics to the properties or zones of interest.
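As a rough illustration of the histogram-thresholding step described above, the following Python sketch assigns each pixel to one of several grey-level bands; the band names and limits are hypothetical placeholders, not the thresholds used in the study.

```python
import numpy as np
from skimage import io, color

# Hypothetical grey-level bands for different "patina" classes; in the study
# the thresholds were defined from the image histogram for each pathology.
PATINA_BANDS = {
    "dark_crust": (0, 60),
    "orange_patina": (61, 140),
    "clean_stone": (141, 255),
}

def map_patinas(image_path, bands=PATINA_BANDS):
    """Return a label image assigning each pixel to a grey-level band."""
    grey = color.rgb2gray(io.imread(image_path))      # float values in [0, 1]
    grey8 = (grey * 255).astype(np.uint8)             # 8-bit grey levels
    labels = np.zeros(grey8.shape, dtype=np.uint8)
    for idx, (lo, hi) in enumerate(bands.values(), start=1):
        labels[(grey8 >= lo) & (grey8 <= hi)] = idx   # 1-based class ids
    return labels
```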
Abstract:
Most current digital image processing methods concern the objective characterization of external properties such as shape, form, or colour. This information describes objective characteristics of different bodies and is used to extract details for a range of tasks. On some occasions, however, another type of information is needed. This is the case when the image processing system is to be applied to operations involving living beings. In such cases, a different kind of object information may be useful, since it provides additional knowledge about subjective properties. Examples of these properties are object symmetry, parallelism between lines, and the perceived size. Such properties relate more to the internal sensations of living beings interacting with their environment than to the objective information obtained by artificial systems. This paper presents an elementary system able to detect some of the above-mentioned parameters. A first mathematical model to analyze these situations is reported; this theoretical model makes it possible to implement a simple working system. The basis of the system is the use of optical logic cells previously employed in optical computing.
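The paper implements these detectors with optical logic cells; purely as a digital analogue, one of the cues mentioned above (left-right symmetry) can be scored by correlating an image with its mirror, as in this sketch. The function name and normalization are my own, not part of the paper.

```python
import numpy as np

def horizontal_symmetry_score(image):
    """Correlation between an image and its mirror about the vertical axis;
    1.0 indicates a perfectly left-right symmetric intensity pattern."""
    mirrored = image[:, ::-1]
    a = image.astype(float).ravel() - image.mean()
    b = mirrored.astype(float).ravel() - mirrored.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 1.0
```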
Abstract:
Mining in the Iberian Pyrite Belt (IPB), the biggest VMS metallogenetic province known in the world to date, has to face a deep crisis in spite of the huge reserves still known after ≈5,000 years of production. This is due to several factors, such as the difficult processing of the complex Cu-Pb-Zn-Ag-Au ores, the exhaustion of the oxidation-zone orebodies (the richest in gold, in the gossan), the scarce demand for sulphuric acid in the world market, and stricter environmental regulations. Of these factors, only the first and the last can be addressed by local ore geologists. A reactivation of mining can therefore only be achieved through improved and more efficient ore processing, under the constraint of strict environmental controls. Digital image analysis of the ores, coupled with reflected-light microscopy, provides a quantified and reliable mineralogical and textural characterization of the ores. The automation of the procedure furnishes process engineers, for the first time, with real-time information to improve the process and to prevent or control pollution; it can be applied to metallurgical tailings as well. This is illustrated with several examples from the IPB.
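A minimal sketch of the kind of quantification such a system performs: area (modal) fractions estimated from grey-level ranges in a reflected-light image. The phase names and reflectance ranges below are hypothetical and would have to be calibrated on the actual microscope setup.

```python
import numpy as np

# Hypothetical grey-level (reflectance) ranges for some IPB ore phases.
PHASE_RANGES = {
    "pyrite": (200, 255),
    "chalcopyrite": (150, 199),
    "sphalerite": (80, 149),
    "gangue": (0, 79),
}

def modal_analysis(grey_image, ranges=PHASE_RANGES):
    """Estimate the area fraction of each phase from its grey-level range."""
    total = grey_image.size
    return {
        phase: np.count_nonzero((grey_image >= lo) & (grey_image <= hi)) / total
        for phase, (lo, hi) in ranges.items()
    }
```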
Abstract:
To properly understand and model animal embryogenesis, it is crucial to obtain detailed measurements, both in time and space, of gene expression domains and cell dynamics. This challenge has been confronted in recent years by a surge of atlases that integrate a statistically relevant number of individuals to obtain robust, complete information about the spatiotemporal locations of gene expression patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.
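As a minimal sketch of the atlas-building idea (assuming the hard part, registration of every individual onto a common template, has already been done upstream), a per-voxel expression frequency map could be computed as follows; names are illustrative.

```python
import numpy as np

def expression_frequency_map(registered_masks):
    """Given binary gene-expression masks already registered to a common
    template, return the fraction of individuals expressing the gene per voxel."""
    stack = np.stack([np.asarray(m, dtype=np.float32) for m in registered_masks])
    return stack.mean(axis=0)
```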
Abstract:
This paper presents a computer vision system that successfully discriminates between weed patches and crop rows under uncontrolled lighting in real time. The system consists of two independent subsystems: a fast image processing subsystem that delivers results in real time (Fast Image Processing, FIP) and a slower, more accurate one (Robust Crop Row Detection, RCRD) that is used to correct the first subsystem's mistakes. This combination produces a system that achieves very good results under a wide variety of conditions. Tested on several maize videos taken in different fields and in different years, the system successfully detects an average of 95% of weeds and 80% of crops under different illumination, soil humidity, and weed/crop growth conditions. Moreover, the system produces acceptable results even under very difficult conditions, such as in the presence of dramatic sowing errors or abrupt camera movements. The computer vision system has been developed for integration into a treatment system, because the ideal setup for any weed sprayer would include a tool that provides information on the weeds and crops present at each point in real time while the tractor mounting the spraying bar is moving.
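The FIP and RCRD algorithms themselves are not reproduced here; the sketch below only illustrates a common first step for this kind of system, segmenting vegetation with the excess-green index and locating the image columns where crop plants accumulate. The threshold value and function names are assumptions.

```python
import numpy as np

def excess_green_mask(rgb, threshold=20):
    """Vegetation mask from the ExG index (2G - R - B); threshold is illustrative."""
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    return (2 * g - r - b) > threshold

def densest_columns(mask, n_rows=3):
    """Very rough crop-row cue: the image columns with the most vegetation pixels."""
    profile = mask.sum(axis=0)                 # vegetation count per column
    return np.sort(np.argsort(profile)[-n_rows:])
```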
Abstract:
Matlab, one of the most widely used numerical computing environments in research and teaching, includes among its many tools a specific toolbox for digital image processing. This image processing toolbox consists of a set of additional functions that extend the capabilities of the Matlab numerical environment and allow a large number of digital image processing operations to be performed directly from the main program. However, although MATLAB offers good help documentation both online and within the program itself, the literature available in Spanish is very limited, and in the particular case of the image processing toolbox it is practically nonexistent and highly specialized, requiring users to have a solid background in mathematics and digital image processing. Starting from an analysis of all the functions and possibilities available in the toolbox, the project classifies, summarizes, and explains each of them at user level, defining all possible input and output variables, describing the most common tasks in which each function is used, comparing results, and providing illustrative examples that help to understand their use and application. In addition, the reader is introduced to the general use of Matlab through an explanation of the program's essential operations, and the more advanced concepts of the toolbox are clarified so that extensive prior training is not necessary.
Thus, any student or teacher wishing to get started in digital image processing with Matlab will have a document that serves both to consult and understand the operation of any toolbox function and to implement the most common digital image processing operations.
Abstract:
Digital image correlation (DIC) is applied to analyze the deformation mechanisms under transverse compression in a fiber-reinforced composite. To this end, compression tests in a direction perpendicular to the fibers were carried out inside a scanning electron microscope, and secondary electron images were obtained at different magnifications during the test. Optimum DIC parameters to resolve the displacement and strain fields were computed from numerical simulations of a model composite, and they were applied to micrographs obtained at different magnifications (250×, 2000×, and 6000×). It is shown that DIC of the low-magnification micrographs was able to capture the long-range fluctuations in strain due to the presence of matrix-rich and fiber-rich zones, responsible for the onset of damage. At higher magnification, the strain fields obtained with DIC qualitatively reproduce the non-homogeneous deformation pattern due to the presence of stiff fibers dispersed in a compliant matrix and provide accurate results for the average composite strain. However, comparison with finite element simulations revealed that DIC was not able to accurately capture the average strain in each phase.
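As a bare-bones illustration of the subset-matching step behind DIC (integer-pixel only; real DIC adds sub-pixel interpolation and derives strains from a grid of subsets), one could use scikit-image's normalized cross-correlation as in the sketch below. The function and parameters are illustrative and are not the tool used in the paper.

```python
import numpy as np
from skimage.feature import match_template

def subset_displacement(ref, deformed, center, half_size=15):
    """Integer-pixel displacement of one square subset between two images."""
    y, x = center
    subset = ref[y - half_size:y + half_size + 1, x - half_size:x + half_size + 1]
    corr = match_template(deformed, subset)        # normalized cross-correlation map
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    # the peak gives the top-left corner of the best match; recover the subset centre
    return (peak_y + half_size) - y, (peak_x + half_size) - x   # (dy, dx) in pixels
```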
Abstract:
A first study towards constructing a simple model of the mammalian retina is reported. The basic elements of this model are Optical Programmable Logic Cells (OPLCs), previously employed as functional elements for optical computing. The same type of circuit simulates the five types of neurons present in the retina. Different responses are obtained by modifying either internal or external connections. Two types of behavior are reported: symmetrical and non-symmetrical with respect to the light position. Some higher functions, such as the possibility of differentiating between symmetric and non-symmetric light images, are performed by a further simulation of the first layers of the visual cortex. The possibility of applying these models to image processing is also reported.
Abstract:
In this PhD thesis proposal, the principles of diffusion MRI (dMRI) as applied to the mapping of human brain connectivity are reviewed. The background section covers the fundamentals of dMRI, with special focus on the distortions caused by susceptibility inhomogeneity across tissues. A thorough survey of the available correction methodologies for this common dMRI artifact is also presented, and two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.
Abstract:
A proposal for a model of the primary visual cortex is reported. It is structured on the basis of a simple unit cell able to perform fourteen pairs of different Boolean functions of its two inputs. As a first step, a model of the retina is presented. Different types of responses, according to the different possibilities of interconnecting the building blocks, have been obtained. These responses constitute the basis for an initial configuration of the mammalian primary visual cortex. Some qualitative functions, such as the symmetry or size of an optical input, have been obtained. A proposal to extend this model to some higher functions concludes the paper.
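For reference, the space of Boolean functions of two inputs that such a unit cell draws from can be enumerated in a few lines; this is only a truth-table listing, not the OPLC implementation described in the paper.

```python
from itertools import product

INPUTS = list(product((0, 1), repeat=2))      # (0,0), (0,1), (1,0), (1,1)

def two_input_boolean_functions():
    """Yield all 16 Boolean functions of two inputs as truth tables."""
    for code in range(16):                    # each 4-bit code encodes one function
        yield code, {pair: (code >> i) & 1 for i, pair in enumerate(INPUTS)}

for code, table in two_input_boolean_functions():
    print(f"f{code:02d}:", [table[p] for p in INPUTS])
```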
Abstract:
Evolvable Hardware (EH) is a technique that consists of using reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system is a fully FPGA-implemented, scalable EH platform, in which the Reconfigurable processing Core (RC) can adaptively increase or decrease in size. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core that is able to change its size in both height and width. This system is used to implement image filters, which are generated autonomously by the evolutionary process. The system is complemented with a camera that enables the use of the platform for real-time applications.
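The sketch below is a toy software analogue of such an evolutionary loop (elitism plus mutation) for generating image filters; it evolves a 3x3 convolution kernel in Python rather than a hardware configuration, and all names and parameters are my own.

```python
import numpy as np
from scipy.ndimage import convolve

def evolve_kernel(noisy, target, generations=200, pop_size=20, sigma=0.1, seed=0):
    """Evolve a 3x3 kernel so that convolving `noisy` approaches `target`."""
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, 1.0, size=(pop_size, 3, 3))
    best = population[0]
    for _ in range(generations):
        errors = [np.mean((convolve(noisy, k) - target) ** 2) for k in population]
        best = population[int(np.argmin(errors))]
        # next generation: the elite individual plus mutated copies of it
        population = best + rng.normal(0.0, sigma, size=(pop_size, 3, 3))
        population[0] = best
    return best
```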
Abstract:
NIR hyperspectral imaging (1000-2500 nm) combined with IDC allowed the detection of peanut traces down to adulteration percentages of 0.01%. Unlike PLSR, IDC does not require a calibration set; it uses both expert and experimental information and is suitable for the quantification of a compound of interest in complex matrices. The results obtained show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures such as RT-PCR and ELISA.
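IDC itself is not reproduced here; for the PLSR baseline that the abstract contrasts it with, a calibration-set-based model could be fitted as in the sketch below (scikit-learn's PLSRegression; the variable names and number of latent variables are assumptions).

```python
from sklearn.cross_decomposition import PLSRegression

def fit_plsr(calibration_spectra, peanut_fraction, n_components=10):
    """Fit a PLSR model on a calibration set of NIR spectra with known
    adulteration levels; this calibration step is what IDC avoids."""
    model = PLSRegression(n_components=n_components)
    model.fit(calibration_spectra, peanut_fraction)   # spectra: (n_samples, n_bands)
    return model

# predicted = fit_plsr(X_cal, y_cal).predict(X_new)
```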
Abstract:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance, since, even though the resources available per chip keep increasing, the operating frequency has stalled. Moreover, as the level of integration increases, it is difficult to keep the defect density under control, so new fault-tolerant techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3) that allows dynamic and context-aware use of resources is implemented on a high-performance wireless sensor node (HiReCookie) to run an image processing application.
Abstract:
The main problem in studying vertical drainage from the moisture distribution of a vertisol profile is finding suitable methods for this purpose. Our aim was to design a digital image processing and analysis methodology to characterize the moisture content distribution of a vertisol profile. In this research, twelve soil pits were excavated on a bare Mazic Pellic Vertisol, six of them on 13 May 2011 and the rest on 19 May 2011, after a moderate rainfall event. Digital RGB images were taken of each vertisol pit using a Kodak camera, selecting a size of 1600x945 pixels. Each soil image was processed to homogenize brightness, and then a spatial filter with several window sizes was applied in order to select the optimum one. The RGB images obtained were split into their color channels, and the best maximum and minimum thresholds were selected for each channel to obtain a digital binary pattern. This pattern was analyzed by estimating two fractal scaling exponents, the box-counting dimension (D_BC) and the interface fractal dimension (D). In addition, three pre-fractal scaling coefficients were determined at maximum resolution: the total number of boxes intercepting the foreground pattern (A), the fractal lacunarity (Λ1), and the Shannon entropy (S1). For all the images processed, the 9x9 spatial filter was the optimum based on entropy, cluster, and histogram criteria. Thresholds for each color channel were selected based on bimodal histograms.
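As an illustration of one of the descriptors mentioned above, the box-counting dimension D_BC of a binary pattern can be estimated as in this sketch; the box sizes and function name are illustrative, and the lacunarity and entropy coefficients are not reproduced.

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate D_BC by counting occupied boxes at several box sizes and
    fitting the slope of log N(s) versus log(1/s)."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[:h - h % s, :w - w % s]           # crop to a multiple of s
        blocks = trimmed.reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())       # boxes touching the pattern
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```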