871 results for Machine vision and image processing
Abstract:
An automatic image processing and analysis technique has been developed for the quantitative characterization of multi-phase materials. The technique was developed using the Khoros system, which offers the basic morphological tools and a flexible, visual programming language. These tools are implemented in a highly user-oriented image processing environment that allows the user to adapt each step of the processing to specific requirements. To illustrate the implementation and performance of this technique, images of two different materials are processed for microstructure characterization. The result is presented through the determination of the volume fraction of the different phases or precipitates.
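The core measurement here, a phase volume fraction, follows from stereology: the area fraction of a phase in a random section estimates its volume fraction. A minimal sketch, using numpy/scikit-image as a stand-in for the Khoros toolchain (the file name and the choice of Otsu thresholding are assumptions, not details from the paper):

```python
# Minimal sketch: estimating a phase volume fraction from a micrograph by
# grey-level thresholding. Input file and threshold choice are assumptions.
import numpy as np
from skimage import io, filters

image = io.imread("micrograph.png", as_gray=True)  # hypothetical input image

# Separate matrix from precipitates with Otsu's threshold.
threshold = filters.threshold_otsu(image)
precipitate_mask = image > threshold

# By the Delesse principle, the area fraction estimates the volume fraction.
volume_fraction = precipitate_mask.mean()
print(f"Precipitate volume fraction: {volume_fraction:.3f}")
```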
Abstract:
This work is an example of the improvement of quantitative fractography by means of digital image processing and light microscopy. Two techniques are presented to investigate the quantitative fracture behavior of heat-treated Ti-4Al-4V alloy specimens under Charpy impact testing. The first technique is the Minkowski method for fractal dimension measurement from surface profiles, which reveals the multifractal character of Ti-4Al-4V fracture. No clear positive correlation between fractal values and Charpy energies was observed for the Ti-4Al-4V specimens, owing to their ductility, microstructural heterogeneities, and the dynamic loading characteristics in the region near the V-notch. The second technique provides a complete elevation map of the fracture surface by extracting the in-focus regions of each picture from a stack of images acquired at successive focus positions and then computing the surface roughness. This extended-focus reconstruction has been used to explain the behavior along the fracture surface. Since both techniques are based on light microscopy, their inherently low cost makes them attractive for failure investigations.
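A minimal sketch of the Minkowski idea on a 1D profile, using the vertical-covering approximation of the Minkowski sausage (the profile is synthetic and the dilation radii are illustrative assumptions; the paper's exact implementation may differ):

```python
# Minkowski (morphological covering) fractal dimension of a fracture profile.
# The dilated-curve area A(r) scales as r^(2 - D), so D = 2 - slope.
import numpy as np
from scipy.ndimage import maximum_filter1d, minimum_filter1d

rng = np.random.default_rng(0)
profile = np.cumsum(rng.normal(size=4096))  # stand-in for a measured profile

radii = np.array([1, 2, 4, 8, 16, 32, 64])
areas = []
for r in radii:
    upper = maximum_filter1d(profile, size=2 * r + 1)
    lower = minimum_filter1d(profile, size=2 * r + 1)
    # Approximate area of the "sausage" obtained by dilating the profile.
    areas.append(np.sum(upper - lower + 2 * r))

slope, _ = np.polyfit(np.log(radii), np.log(areas), 1)
print(f"Estimated fractal dimension: {2 - slope:.3f}")  # ~1.5 for Brownian noise
```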
Abstract:
Mobile robots need autonomy to fulfill their tasks. Such autonomy is related to their capacity to explore and recognize their navigation environments. In this context, the present work considers techniques for the classification and extraction of features from images using artificial neural networks. These images are used in the mapping and localization system of the LACE (Automation and Evolutive Computing Laboratory) mobile robot. The robot uses a sensorial system composed of ultrasound sensors and a catadioptric vision system equipped with a camera and a conical mirror. The mapping system is composed of three modules, two of which are presented in this paper: the classifier and the characterizer modules. Simulation results for these modules are also presented.
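The abstract does not specify the network architecture of the classifier module, so the sketch below uses a generic multilayer perceptron from scikit-learn purely as a stand-in; feature dimensions and class labels are illustrative assumptions:

```python
# Minimal sketch: classifying pre-extracted image feature vectors with an
# MLP, as a stand-in for the paper's (unspecified) neural classifier module.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
features = rng.random((200, 32))       # 200 images, 32 features each (toy data)
labels = rng.integers(0, 3, size=200)  # e.g. corridor / door / open space

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
clf.fit(features, labels)
print(clf.predict(features[:5]))
```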
Abstract:
A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
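The closing decomposition can be written compactly. The symbols below are editorial labels for the five components the abstract lists, not notation taken from the paper:

```latex
\varepsilon_{\text{designed}}
  = \varepsilon_{\text{Bayes}}
  + \Delta_{\text{constraint}}
  + \Delta_{\text{compression}}
  + \Delta_{\text{design}}
  - \Delta_{\text{prior}}
```

where \(\varepsilon_{\text{Bayes}}\) is the error of the unconstrained optimal filter, \(\Delta_{\text{constraint}}\) the cost of constraint, \(\Delta_{\text{compression}}\) the cost of compressing the original signal distribution, \(\Delta_{\text{design}}\) the finite-sample design cost, and \(\Delta_{\text{prior}}\) the error decrease contributed by prior knowledge.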
Abstract:
We outline a method for the registration of images of cross sections using the Generalized Hough Transform (GHT). The approach may be useful in situations where automation is a concern. To overcome the known noise problems of the traditional GHT, we have implemented a slightly modified version of the basic algorithm. The modification consists of eliminating points of no interest before the accumulation step of the algorithm. This procedure minimizes the number of accumulation points while reducing the probability of spurious peaks appearing. We also apply image warping techniques to interpolate images between cross sections, which is needed where the distance between sampled sections is too large. We then suggest that GHT-based registration can help automate the interpolation by simplifying the correspondence between image points. Some results are shown.
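A minimal sketch of the modified accumulation step described above: edge points with weak gradient magnitude are discarded before voting, shrinking the accumulator workload. The threshold, the toy single-entry R-table, and the file name are assumptions; a real R-table would be built from a template shape:

```python
# Translation-only GHT with pre-elimination of uninteresting edge points.
import numpy as np
from skimage import io

image = io.imread("section.png", as_gray=True)  # hypothetical cross-section
gy, gx = np.gradient(image)
magnitude = np.hypot(gx, gy)

# Elimination step: keep only strong edge points before accumulation.
strong = magnitude > 2 * magnitude.mean()
ys, xs = np.nonzero(strong)

# Each surviving point votes for the reference point via a displacement
# table indexed by quantized gradient angle (toy table here).
angles = np.arctan2(gy[ys, xs], gx[ys, xs])
bins = ((angles + np.pi) / (2 * np.pi) * 32).astype(int) % 32
r_table = {b: [(5, 0)] for b in range(32)}  # built from a template in practice

accumulator = np.zeros(image.shape, dtype=int)
for x, y, b in zip(xs, ys, bins):
    for dx, dy in r_table[b]:
        ry, rx = y + dy, x + dx
        if 0 <= ry < image.shape[0] and 0 <= rx < image.shape[1]:
            accumulator[ry, rx] += 1

peak = np.unravel_index(accumulator.argmax(), accumulator.shape)
print("Estimated reference point:", peak)
```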
Abstract:
Human beings perceive images through their properties, like colour, shape, size, and texture. Texture is a fertile source of information about the physical environment: images of low-density crowds tend to present coarse textures, while images of dense crowds tend to present fine textures. This paper describes a new technique for the automatic estimation of crowd density, part of the wider problem of automatic crowd monitoring, using texture information based on grey-level transition probabilities in digitised images. Crowd density feature vectors are extracted from such images and used by a self-organising neural network that is responsible for the crowd density estimation. Results obtained for the estimation of the number of people in a specific area of Liverpool Street Railway Station in London (UK) are presented.
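Grey-level transition probabilities are what the co-occurrence matrix captures. A minimal sketch of such feature extraction with scikit-image; the input frame and the choice of distances, angles, and statistics are illustrative assumptions:

```python
# Grey-level co-occurrence (transition probability) texture features.
import numpy as np
from skimage import io
from skimage.feature import graycomatrix, graycoprops

frame = io.imread("platform.png", as_gray=True)  # hypothetical CCTV frame
quantized = (frame * 15).astype(np.uint8)        # quantize to 16 grey levels

glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                    levels=16, symmetric=True, normed=True)

# Coarse texture (sparse crowd) vs. fine texture (dense crowd) shows up in
# statistics such as contrast, homogeneity, and energy.
feature_vector = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity", "energy")])
print(feature_vector)
```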
Abstract:
OBJECTIVES: Despite the recent success regarding the transplantation of tissue-engineered airways, the mechanical properties of these grafts are not well understood. Mechanical assessment of a tissue-engineered airway graft before implantation may be used in the future as a predictor of function. The aim of this preliminary work was to develop a noninvasive image-processing environment for the assessment of airway mechanics. METHOD: Decellularized, recellularized and normal tracheas (groups DECEL, RECEL, and CONTROL, respectively) immersed in Krebs-Henseleit solution were ventilated by a small-animal ventilator connected to a Fleisch pneumotachograph and two pressure transducers (differential and gauge). A camera connected to a stereomicroscope captured images of the pulsation of the trachea before instillation of saline solution and after instillation of Krebs-Henseleit solution, followed by instillation with Krebs-Henseleit with methacholine 0.1 M (protocols A, K and KMCh, respectively). The data were post-processed with computer software and statistical comparisons between groups and protocols were performed. RESULTS: There were statistically significant variations in the image measurements of the medial region of the trachea between the groups (two-way analysis of variance [ANOVA], p<0.01) and of the proximal region between the groups and protocols (two-way ANOVA, p<0.01). CONCLUSIONS: The technique developed in this study is an innovative method for performing a mechanical assessment of engineered tracheal grafts that will enable evaluation of the viscoelastic properties of neo-tracheas prior to transplantation.
Abstract:
With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, since their precision-recall results were much better than those of several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results in the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks with the very large databases that are common nowadays.
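The exact HTS statistics are defined in the paper; the sketch below only illustrates the general idea of summarizing a shape's Hough-space accumulator statistically, with per-angle means and standard deviations as illustrative stand-ins for the paper's descriptor:

```python
# Shape description from Hough-space statistics (illustrative variant).
import numpy as np
from skimage.draw import ellipse_perimeter
from skimage.transform import hough_line

shape_img = np.zeros((128, 128), dtype=bool)
rr, cc = ellipse_perimeter(64, 64, 30, 45)  # toy shape contour
shape_img[rr, cc] = True

accumulator, angles, dists = hough_line(shape_img)

# Descriptor: normalized per-angle statistics of the Hough accumulator.
descriptor = np.hstack([accumulator.mean(axis=0), accumulator.std(axis=0)])
descriptor /= np.linalg.norm(descriptor)
print(descriptor.shape)
```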
Abstract:
Although hydrophobicity is usually an arduous parameter to determine in the field, it has been pointed out as a good option for monitoring the aging of polymeric outdoor insulators. For this purpose, digital image processing of photos taken of wet insulators is currently the main technique. However, important challenges remain to be overcome: images taken under non-controlled illumination conditions can interfere with the analyses, and no standard surfaces with different levels of hydrophobicity exist. In this paper, the photo image samples were digitally filtered to reduce the influence of illumination, and hydrophobic surface samples were prepared by wetting silicone surfaces with water-alcohol solutions. Furthermore, no previous studies were found that try to quantify and relate these properties in a mathematical function that could be used in the field by electrical companies. Based on such considerations, high-quality images of numerous hydrophobic surfaces were obtained, and three image processing methodologies, the fractal dimension and two Haralick texture descriptors (entropy and homogeneity), combined with several digital filters, were compared. The entropy Haralick descriptor combined with the white top-hat filter presented the best hydrophobicity classification results.
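A minimal sketch of the winning pipeline named above, a white top-hat filter followed by the Haralick entropy of the grey-level co-occurrence matrix; the file name, structuring-element size, and quantization level are assumptions:

```python
# White top-hat filtering followed by a Haralick entropy texture measure.
import numpy as np
from skimage import io, morphology
from skimage.feature import graycomatrix

photo = io.imread("wet_insulator.png", as_gray=True)  # hypothetical sample
filtered = morphology.white_tophat(photo, morphology.disk(5))

levels = 32
quantized = (filtered / filtered.max() * (levels - 1)).astype(np.uint8)
glcm = graycomatrix(quantized, [1], [0], levels=levels,
                    symmetric=True, normed=True)

p = glcm[..., 0, 0]                                   # co-occurrence probabilities
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))       # Haralick entropy
print(f"Texture entropy: {entropy:.3f}")
```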
Abstract:
The human face provides useful information during interaction; therefore, any system integrating vision-based human-computer interaction requires fast and reliable face and facial feature detection. Different approaches have focused on this ability, but only open source implementations have been extensively used by researchers. A good example is the Viola-Jones object detection framework, which has been frequently used, particularly in the context of facial processing.
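The best-known open source implementation of Viola-Jones is OpenCV's cascade classifier, sketched below with its bundled frontal-face model; the input image path is an assumption:

```python
# Viola-Jones face detection via OpenCV's cascade classifier.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("frame.png")                      # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"Detected {len(faces)} face(s)")
```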
Abstract:
This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs that integrate the available experimental data, yielding in-silico models that can be used to study, formulate new hypotheses about, and plan experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, or why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs from adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Pattern and Local Phase Quantization; moreover, the non-binary coding and the multi-threshold approach are introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular/sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
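A minimal sketch of the Local Binary Pattern descriptor discussed in Part II, using scikit-image. Pooling histograms over several radii below is an illustrative assumption, not the thesis's exact multi-threshold formulation:

```python
# Multi-radius LBP histograms as a texture descriptor for cell images.
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern

cell_img = io.imread("hep2_cell.png", as_gray=True)  # hypothetical HEp-2 image

histograms = []
for radius in (1, 2, 3):
    n_points = 8 * radius
    lbp = local_binary_pattern(cell_img, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2),
                           density=True)
    histograms.append(hist)

descriptor = np.hstack(histograms)  # feed this vector to any classifier
print(descriptor.shape)
```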
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters remains difficult, owing, for example, to the algorithms employed, the mathematical model, the patient's weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the sources of variability of this technique. First, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the "maximum slope method" and the "dual-input one-compartment model". Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; however, the slope method is always applicable in a clinical context. The variability related to TAC processing in the application of the slope method was then analyzed. Comparing the results with manual selection allowed the best automatic algorithm for computing BFa to be identified. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion map was analyzed. The ROI approach and the map approach provide related BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; in the pixel-by-pixel approach, too, the slope method gives better results. In conclusion, the development of new automatic algorithms for the consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
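A minimal sketch of the maximum slope method named above: arterial blood flow is the peak gradient of the liver TAC divided by the peak aortic enhancement. The TACs below are synthetic stand-ins for measured curves:

```python
# Maximum slope method for arterial blood flow (BFa) from perfusion CT TACs.
import numpy as np

t = np.arange(0, 60, 0.5)                      # time, s
aorta = 300 * np.exp(-((t - 15) ** 2) / 40)    # toy aortic TAC, HU
liver = 60 / (1 + np.exp(-(t - 20) / 3))       # toy liver TAC, HU

max_slope = np.max(np.gradient(liver, t))      # steepest liver enhancement, HU/s
bfa = max_slope / aorta.max()                  # mL/s per mL of tissue

# Conventional reporting unit: mL/min per 100 mL of tissue.
print(f"BFa ~ {bfa * 60 * 100:.1f} mL/min/100 mL")
```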
Abstract:
To analyze the impact of opacities in the optical pathway and of image compression from 32-bit raw data to 8-bit JPEG images on quantified optical coherence tomography (OCT) image analysis.