29 results for image processing filters
at Universidad Politécnica de Madrid
Abstract:
Evolvable Hardware (EH) is a technique that consists of using reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system consists of a fully FPGA-implemented scalable EH platform, where the Reconfigurable processing Core (RC) can adaptively increase or decrease in size. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core that is able to change its size in both height and width. This system is used to implement image filters, which are generated autonomously by the evolutionary process. The system is complemented with a camera that enables the use of the platform in real-time applications.
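As an illustration of the kind of evolutionary loop such a platform runs (here sketched in software rather than on the FPGA fabric), the snippet below evolves a 3x3 convolution kernel so that filtering a noisy image approximates a clean reference. The images, population size, and mutation rate are placeholders, not the paper's setup.

```python
# Illustrative sketch (not the paper's hardware implementation): a simple
# evolutionary loop that evolves a 3x3 convolution kernel so that filtering
# a noisy image approximates a clean reference image.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
clean = rng.random((64, 64))                       # stand-in reference image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)  # stand-in noisy input

def fitness(kernel):
    """Mean absolute error between the filtered image and the reference."""
    filtered = convolve(noisy, kernel, mode="reflect")
    return np.mean(np.abs(filtered - clean))

population = [rng.normal(0.0, 0.5, (3, 3)) for _ in range(20)]
for generation in range(100):
    scored = sorted(population, key=fitness)
    parents = scored[:5]                           # elitist selection
    # Offspring are mutated copies of the best individuals.
    population = parents + [p + rng.normal(0.0, 0.05, (3, 3))
                            for p in parents for _ in range(3)]

best = min(population, key=fitness)
print("best kernel:\n", best, "\nerror:", fitness(best))
```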
Abstract:
Video analytics plays a critical role in recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the potential for combining popular features for vehicle classification has been reported. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, also taking the vehicle pose into account. The study unveils the limitations of single-feature-based classification and makes clear that fusion of classifiers is highly beneficial for vehicle verification.
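As a hedged illustration of one of the single-feature classifiers reviewed (HOG features with a linear SVM), the sketch below trains on random stand-in image crops; it is not the study's pipeline, database, or fusion scheme.

```python
# Minimal sketch of a single-feature classifier (HOG + linear SVM); the data
# here are random stand-ins, not the public database used in the study.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))          # 40 grayscale 64x64 image crops
labels = rng.integers(0, 2, 40)            # 1 = vehicle, 0 = non-vehicle

# Describe every crop with a HOG feature vector.
features = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

clf = LinearSVC().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```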
Abstract:
Monument conservation is related to the interaction between the original petrological parameters of the rock and external factors in the area where the building is sited, such as weather conditions, pollution, and so on. Depending on the environmental conditions and the characteristics of the materials used, different types of weathering predominate. In all cases, the appearance of surface crusts constitutes a first stage, whose origin can often be traced to the properties of the material itself. In the present study, different colours of "patinas" were distinguished by defining the threshold levels of greys associated with each "pathology" in the histogram. These data were compared with background information and other parameters, such as mineralogical composition, porosity, and so on, as well as other visual signs of deterioration. The result is a map of the pathologies associated with "cover films" on monuments, in which images are generated by relating colour characteristics to the desired properties or zones of interest.
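A minimal sketch of the grey-level thresholding step described above, assuming a grayscale photograph of the stone surface; the image and the threshold band associated with a given patina are placeholders.

```python
# Minimal sketch of the grey-level thresholding step: pixels whose intensity
# falls inside a band associated with a given "patina" are marked on a map.
# The image and the threshold band are placeholders.
import numpy as np

rng = np.random.default_rng(0)
surface = rng.integers(0, 256, (128, 128))   # stand-in grayscale photograph

# Hypothetical band of grey levels associated with one pathology.
low, high = 90, 140
pathology_map = (surface >= low) & (surface <= high)

coverage = pathology_map.mean() * 100
print(f"surface area flagged as this patina: {coverage:.1f}%")
```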
Abstract:
To properly understand and model animal embryogenesis it is crucial to obtain detailed measurements, both in time and space, of gene expression domains and cell dynamics. This challenge has been addressed in recent years by a surge of atlases that integrate a statistically relevant number of different individuals to obtain robust, complete information about the spatiotemporal locations of gene patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.
Abstract:
This paper presents a computer vision system that successfully discriminates between weed patches and crop rows under uncontrolled lighting in real time. The system consists of two independent subsystems: a fast image-processing subsystem that delivers results in real time (Fast Image Processing, FIP), and a slower, more accurate one (Robust Crop Row Detection, RCRD) that is used to correct the first subsystem's mistakes. This combination produces a system that achieves very good results under a wide variety of conditions. Tested on several maize videos taken in different fields and in different years, the system successfully detects an average of 95% of weeds and 80% of crops under different illumination, soil humidity, and weed/crop growth conditions. Moreover, the system has been shown to produce acceptable results even under very difficult conditions, such as dramatic sowing errors or abrupt camera movements. The computer vision system has been developed for integration into a treatment system, because the ideal setup for any weed sprayer would include a tool that provides information on the weeds and crops present at each point in real time while the tractor mounting the spraying bar is moving.
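The abstract does not detail the fast subsystem, so the sketch below shows a generic per-pixel vegetation segmentation of the kind often used for real-time weed/crop work (an excess-green index threshold); it is an illustration only, not the authors' FIP or RCRD algorithms.

```python
# Generic illustration (not the paper's FIP subsystem): excess-green (ExG)
# thresholding, a common per-pixel vegetation segmentation that can run in
# real time on RGB frames.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random((240, 320, 3))            # stand-in RGB frame in [0, 1]

r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
exg = 2.0 * g - r - b                        # excess-green index
vegetation = exg > 0.1                       # placeholder threshold

print("vegetation pixels:", int(vegetation.sum()))
```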
Abstract:
A first study towards constructing a simple model of the mammalian retina is reported. The basic elements of this model are Optical Programmable Logic Cells (OPLCs), previously employed as a functional element for optical computing. The same type of circuit simulates the five types of neurons present in the retina. Different responses are obtained by modifying either internal or external connections. Two types of behaviour are reported: symmetrical and non-symmetrical with respect to light position. Some higher functions, such as the ability to differentiate between symmetric and non-symmetric light images, are performed by another simulation of the first layers of the visual cortex. The possibility of applying these models to image processing is also reported.
Abstract:
In this PhD thesis proposal, the principles of diffusion MRI (dMRI) as applied to mapping the connectivity of the human brain are reviewed. The background section covers the fundamentals of dMRI, with special focus on the distortions caused by susceptibility inhomogeneity across tissues. An in-depth survey of the available correction methodologies for this common dMRI artifact is also presented. Two methodological approaches to improved correction are introduced. Finally, the proposal describes its objectives, the research plan, and the necessary resources.
Abstract:
Most present digital image processing methods are concerned with the objective characterization of external properties such as shape, form, or colour. This information describes objective characteristics of different bodies and is used to extract the details needed to perform several different tasks. On some occasions, however, another type of information is needed, namely when the image processing system is to be applied to operations related to living bodies. In such cases, other kinds of object information may be useful, as they can give additional knowledge about the object's subjective properties. Some of these properties are object symmetry, parallelism between lines, and the feeling of size. These properties relate more to the internal sensations of living beings interacting with their environment than to the objective information obtained by artificial systems. This paper presents an elementary system able to detect some of the above-mentioned parameters. A first mathematical model to analyze these situations is reported; this theoretical model opens the possibility of implementing a simple working system. The basis of this system is the use of optical logic cells, previously employed in optical computing.
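A minimal numeric sketch of one of the subjective properties mentioned (mirror symmetry), scored by comparing a binary image with its horizontal flip; this illustrates the property itself, not the optical logic cell implementation.

```python
# Minimal numeric sketch (not the optical logic cell implementation): score
# the left-right mirror symmetry of a binary image by comparing it with its
# horizontal flip.
import numpy as np

image = np.zeros((8, 8), dtype=int)
image[2:6, 2:6] = 1                          # a centred square: symmetric

flipped = np.fliplr(image)
symmetry_score = (image == flipped).mean()   # 1.0 means perfectly symmetric
print("mirror symmetry score:", symmetry_score)
```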
Abstract:
NIR hyperspectral imaging (1000-2500 nm) combined with IDC allowed the detection of peanut traces down to adulteration percentages of 0.01%. Contrary to PLSR, IDC does not require a calibration set; it uses both expert and experimental information and is suitable for quantification of a compound of interest in complex matrices. The obtained results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures such as RT-PCR and ELISA.
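For context on the PLSR baseline that the abstract contrasts with IDC, the sketch below fits a PLS regression on synthetic NIR-like spectra; wavelengths, sample counts, and adulteration levels are placeholders, not the study's data.

```python
# Sketch of the PLSR baseline mentioned above (the calibration-set approach
# that IDC avoids), fitted on synthetic NIR-like spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.random((50, 200))              # 50 samples x 200 wavelengths
adulteration = rng.random(50) * 0.1          # stand-in peanut fraction, 0-10 %

pls = PLSRegression(n_components=5).fit(spectra, adulteration)
predicted = pls.predict(spectra[:3]).ravel()
print("predicted adulteration of first 3 samples:", predicted)
```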
Abstract:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance, since, even though the resources available per chip are increasing, the frequency of operation has stalled. Besides, as the level of integration increases, it is difficult to keep defect density under control, so new fault-tolerant techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3), which allows dynamic and context-aware use of resources, is implemented in a high-performance wireless sensor node (HiReCookie) to perform an image processing application.
Abstract:
A new technology is proposed as a solution to the problem of unintentional face detection and recognition in pictures, allowing the individuals appearing in them to express their privacy preferences through the use of different tags. Existing methods for face de-identification are mostly ad hoc solutions that only provide an absolute, binary option in a privacy context, such as pixelation or a bar mask. As the number of social networks and their users increases, privacy preferences may become more complex, rendering these absolute binary solutions obsolete. The proposed technology overcomes this problem by embedding information in a tag placed close to the face without being disruptive. Through a decoding method, the tag provides the preferences to be applied to the images in further stages.
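A minimal sketch of the binary pixelation baseline that the proposal improves upon, applied to a known face bounding box; the coordinates and block size are hypothetical.

```python
# Minimal sketch of the binary pixelation baseline the proposal improves on:
# a known face bounding box is replaced by block averages. Coordinates and
# block size are placeholders.
import numpy as np

rng = np.random.default_rng(0)
picture = rng.random((240, 320, 3))
x, y, w, h, block = 100, 60, 64, 64, 8       # hypothetical face box

face = picture[y:y + h, x:x + w]
# Average each block x block tile, then tile the averages back to full size.
small = face.reshape(h // block, block, w // block, block, 3).mean(axis=(1, 3))
picture[y:y + h, x:x + w] = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
print("face region pixelated:", picture[y:y + h, x:x + w].shape)
```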
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of fiber bundles connecting the cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework to compare the existing methodologies to correct for these artifacts, including whole-brain realistic phantoms. Additionally, we design and implement an image segmentation and registration method that avoids the correction task and enables processing in the native space of diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
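A minimal sketch of the final graph-theory step mentioned above: a region-by-region connectivity matrix (here a random stand-in for tractography counts) is turned into a weighted graph with networkx; the upstream dMRI processing and correction steps are not shown.

```python
# Minimal sketch of the last step in the pipeline described above: turning a
# region-by-region connectivity matrix (here a random stand-in for tractography
# counts) into a graph and computing simple graph-theory measures.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 20
counts = rng.integers(0, 50, (n_regions, n_regions))
counts = np.triu(counts, k=1)                # keep the upper triangle
connectivity = counts + counts.T             # symmetric connectome matrix

graph = nx.from_numpy_array(connectivity)    # weighted, undirected graph
print("edges:", graph.number_of_edges())
print("mean node strength:",
      np.mean([d for _, d in graph.degree(weight="weight")]))
```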
Abstract:
We propose to directly process 3D+t image sequences with mathematical morphology operators, using a new classification of the 3D+t structuring elements. Several methods (filtering, tracking, segmentation) dedicated to the analysis of 3D+t datasets of zebrafish embryogenesis are introduced and validated on a synthetic dataset. Then, we illustrate the application of these methods to the analysis of datasets of zebrafish early development acquired with various microscopy techniques. This processing paradigm produces spatio-temporally coherent results, as it benefits from the intrinsic redundancy of the temporal dimension, and minimizes the need for human intervention in semi-automatic algorithms.
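A minimal sketch of grey-level morphology applied jointly over space and time, using scipy.ndimage on a 4D (t, z, y, x) array with a structuring element that also spans the temporal axis; the paper's classification of 3D+t structuring elements is not reproduced here.

```python
# Minimal sketch of 3D+t morphology: a grey-level opening applied to a 4D
# (t, z, y, x) stack with a structuring element that also spans the temporal
# axis. The data are a random stand-in.
import numpy as np
from scipy.ndimage import grey_opening

rng = np.random.default_rng(0)
sequence = rng.random((5, 16, 64, 64))       # (t, z, y, x) image sequence

# 3 frames x 3x3x3 voxels: the filter exploits temporal redundancy as well.
footprint = np.ones((3, 3, 3, 3), dtype=bool)
opened = grey_opening(sequence, footprint=footprint)
print("filtered sequence shape:", opened.shape)
```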
Abstract:
Background: Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation by means of shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific, and they do not provide a clear path when one wants to shape a new command line tool from a prototype shell script.
Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk becomes the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code.
Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms by using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
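To illustrate the prototyping style described above (single-task command line tools chained together, with intermediate results kept on disk), here is a sketch written in Python for consistency with the other examples; the tool names and the string-based filter descriptions are hypothetical placeholders, not MIA's actual command line interface.

```python
# Illustrative sketch of the shell-script prototyping style described above:
# single-task command line tools are chained, with intermediate results kept
# on disk. The tool names and filter description strings are hypothetical.
import subprocess

steps = [
    # (hypothetical tool invocation, purpose of the step)
    (["imgfilter", "-i", "input.png", "-o", "step1.png", "-f", "gauss:sigma=2"], "smooth"),
    (["imgfilter", "-i", "step1.png", "-o", "step2.png", "-f", "threshold:t=128"], "binarise"),
]

for command, purpose in steps:
    print(f"step '{purpose}':", " ".join(command))
    try:
        subprocess.run(command, check=True)   # each tool reads/writes the disk
    except (FileNotFoundError, subprocess.CalledProcessError):
        print("  (placeholder tool not available; command shown for illustration)")
```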