155 results for Image processing - Digital techniques
Abstract:
Image segmentation is a process frequently used in several different areas, including Cartography. Feature extraction is a troublesome task, and successful results require complex techniques and good-quality data. The aim of this paper is to study Digital Image Processing techniques, with emphasis on Mathematical Morphology, applied to Remote Sensing imagery, performing image segmentation with morphological operators, mainly the multi-scale morphological gradient operator. In the segmentation process, pre-processing operators of Mathematical Morphology were used, and the multi-scale gradient was implemented to create one of the images used as the marker image. An orbital image from the Landsat satellite, TM sensor, was used, and the routines were implemented in MATLAB. Tests were carried out to verify the performance of the implemented operators, and the results were analyzed. The extraction of linear features using mathematical morphology techniques can contribute to cartographic applications, such as the updating of cartographic products. The best result obtained with morphology was compared with conventional feature extraction techniques. © Springer-Verlag 2004.
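Since the abstract mentions the multi-scale morphological gradient only by name, a minimal sketch follows, assuming the common formulation (an average of dilation-minus-erosion gradients computed with structuring elements of increasing size, each smoothed by an erosion) and using Python with scikit-image rather than the authors' MATLAB routines; the input file name and the watershed-style use of low-gradient regions as markers are illustrative assumptions.

import numpy as np
from scipy import ndimage as ndi
from skimage import io, morphology, filters, segmentation

def multiscale_gradient(image, n_scales=3):
    # Average of dilation-minus-erosion gradients over disks of increasing radius,
    # each result eroded by the next-smaller disk to limit edge thickening.
    acc = np.zeros_like(image, dtype=float)
    for i in range(1, n_scales + 1):
        grad = (morphology.dilation(image, morphology.disk(i))
                - morphology.erosion(image, morphology.disk(i)))
        acc += morphology.erosion(grad, morphology.disk(i - 1))
    return acc / n_scales

band = io.imread("landsat_tm_band.tif").astype(float)        # hypothetical input file
grad = multiscale_gradient(band)
markers, _ = ndi.label(grad < filters.threshold_otsu(grad))  # low-gradient regions as markers
labels = segmentation.watershed(grad, markers)               # segmentation from the marker image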
Abstract:
This article describes the development of a method for analyzing the shape of the stretch zone surface, based on parallax measurement theory and digital image processing techniques. Accurate criteria for defining the boundaries of the stretch zone are established from profiles of fracture surfaces obtained from crack tip opening displacement tests on Al-7050 alloy samples. The analysis of the elevation profile behavior is based on the stretch zone width and height parameters. It is concluded that the geometry of the stretch zone profiles under plane strain conditions can be described by a semi-parabolic relationship. (C) Elsevier B.V., 1999. All rights reserved.
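As a rough illustration of the final step, the sketch below extracts simple width and height parameters from an elevation profile and fits the assumed semi-parabolic model z = a*sqrt(x - x0) by least squares; the paper's boundary criteria and exact parameterization are not reproduced here.

import numpy as np

def stretch_zone_parameters(x, z):
    # Simplified criteria: width = lateral extent of the profile, height = elevation range.
    return float(x.max() - x.min()), float(z.max() - z.min())

def fit_semi_parabola(x, z):
    # Closed-form least-squares coefficient a for the assumed model z = a*sqrt(x - x0).
    s = np.sqrt(x - x.min())
    return float(np.sum(z * s) / np.sum(s * s))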
Abstract:
Purpose: To evaluate the reproducibility and precision of ocular measurements obtained by digital photograph analysis, as well as the transformation of the measurements according to the individual iris diameter as an oculometric reference. Methods: Twenty-four eyes were digitally photographed in a standardized way at two distances. Two researchers analyzed the printed images using a caliper and the digital files using ImageJ 1.37 (TM). Several external ocular parameters were estimated (in mm and as fractions of the iris diameter), and the measurement methods were compared regarding their precision, agreement and correlation. Results: Caliper and digital analyses of the oculometric measures showed significant agreement and correlation; nevertheless, the precision of the digital measures was higher. The numeric transformation of the oculometric measures according to the individual iris diameter showed strong correlation with the caliper measures and high agreement between photographs taken at different distances. Conclusions: Facial digital photographs allowed precise and reproducible oculometric estimates, supporting their usefulness in clinical research. Using the iris diameter as an individual oculometric reference provided high reproducibility when facial photographs were taken at different distances.
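A minimal sketch of the normalization idea, assuming measurements are taken in pixels and that an average adult iris diameter of about 11.7 mm is used when millimeter values are needed; the function names are illustrative, not from the paper.

def to_iris_units(measure_px, iris_diameter_px):
    # Scale-free value expressed in "iris diameters", comparable across photo distances.
    return measure_px / iris_diameter_px

def to_millimeters(measure_px, iris_diameter_px, iris_diameter_mm=11.7):
    # Approximate mm value; 11.7 mm is an assumed average adult iris diameter.
    return measure_px * iris_diameter_mm / iris_diameter_px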
Abstract:
OBJECTIVE: To evaluate the performance of digital image analysis in estimating the area affected by chronic lower limb ulcers. METHODS: Prospective study in which ulcers were measured by the classical planimetric method, tracing their contours on transparent plastic film and subsequently measuring the area on millimeter-grid paper. These values were used as the reference standard for comparison with area estimates obtained from standardized digital photographs of the ulcers and of their plastic-film tracings. To create a reference for converting pixels into millimeters, an adhesive sticker of known size was placed adjacent to the ulcer. RESULTS: Forty-two lesions were evaluated in 20 patients with chronic lower limb ulcers. The ulcer areas ranged from 0.24 to 101.65 cm². A strong correlation was observed between the planimetric measurements and the photographs of the ulcers (R²=0.86, p<0.01), but the correlation between the planimetric measurements and the digital photographs of the ulcer tracings was even higher (R²=0.99, p<0.01). CONCLUSION: Standardized digital photography proved to be a fast, accurate and non-invasive method for estimating the area affected by ulcers. Photographic measurement of the ulcer contours should be preferred over analysis of direct photographs of the ulcers.
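A minimal sketch of the pixel-to-area conversion described above, assuming binary masks of the lesion and of the reference sticker are available from the same standardized photograph; the names and the 1 cm² sticker size are illustrative.

import numpy as np

def ulcer_area_cm2(lesion_mask, sticker_mask, sticker_area_cm2=1.0):
    # Both masks are boolean arrays segmented from the same standardized photograph;
    # the sticker of known size provides the cm2-per-pixel scale.
    cm2_per_pixel = sticker_area_cm2 / np.count_nonzero(sticker_mask)
    return np.count_nonzero(lesion_mask) * cm2_per_pixel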
Abstract:
OBJECTIVES: To evaluate eyelid positioning in patients with an anophthalmic cavity, with and without an external ocular prosthesis, using digital image processing. METHODS: Eighteen patients were evaluated qualitatively and quantitatively at the Botucatu School of Medicine - São Paulo State University (UNESP), with and without the external prosthesis. Using images captured with a video camera and processed with the Scion Image software, the height of the upper eyelid sulcus, the height of the palpebral fissure and the palpebral angles of the inner and outer canthi were measured. RESULTS: Pseudo-strabismus and a deep upper eyelid sulcus were the most frequent findings on external examination. There was a significant difference in all studied variables, with a decrease in the height of the upper eyelid sulcus and increases in the palpebral fissure area and in the inner and outer palpebral angles when the patient was wearing the external prosthesis. CONCLUSION: All evaluated patients presented some type of orbito-palpebral abnormality, reflecting the difficulty of providing the anophthalmic cavity patient with an appearance identical to that of the normal orbit. Digital image processing allowed an objective evaluation of the oculo-palpebral dimensions, which may contribute to the sequential follow-up of patients with an anophthalmic cavity.
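As an illustration of one of the measurements, the sketch below computes a palpebral (canthal) angle from three manually marked landmarks; the actual Scion Image protocol used in the study is an assumption here.

import numpy as np

def canthal_angle_deg(canthus, upper_lid_point, lower_lid_point):
    # Angle at the canthus between the two lid-margin directions, in degrees.
    v1 = np.asarray(upper_lid_point, float) - np.asarray(canthus, float)
    v2 = np.asarray(lower_lid_point, float) - np.asarray(canthus, float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))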
Abstract:
The aim of this study was to analyze the color alterations detected by the CIE L*a*b* system in digital images of shade guide tabs, which were obtained photographically in the automatic and manual modes. This study also sought to examine the observers' agreement in quantifying the coordinates. Four Vita Lumin Vacuum shade guide tabs were used: A3.5, B1, B3 and C4. A Canon EOS digital camera was used to record the digital images of the shade tabs, and the images were processed using Adobe Photoshop software. A total of 80 observations (five replicates of each shade by two observers in two modes, automatic and manual) were obtained, yielding color values of L*, a* and b*. The color difference (ΔE) between the modes was calculated and classified as either clinically acceptable or unacceptable. The results indicated that there was agreement between the two observers in obtaining the L*, a* and b* values for all guides. The B1, B3 and C4 shade tabs had ΔE values classified as clinically acceptable (ΔE = 0.44, ΔE = 2.04 and ΔE = 2.69, respectively). The A3.5 shade tab had a ΔE value classified as clinically unacceptable (ΔE = 4.17), as it presented higher luminosity in the automatic mode (L* = 54.0) than in the manual mode (L* = 50.6). It was concluded that the B1, B3 and C4 shade tabs can be used with either digital camera mode (manual or automatic), unlike the A3.5 shade tab.
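The color difference above is the standard CIE76 ΔE between the L*a*b* coordinates obtained in the two modes; a minimal sketch follows, with the a* and b* values in the usage note left as placeholders since the abstract reports only L*.

import math

def delta_e(lab1, lab2):
    # CIE76 color difference between two (L*, a*, b*) triplets.
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Usage with the reported A3.5 luminosities (a*/b* placeholders, not reported in the abstract):
# delta_e((54.0, a_auto, b_auto), (50.6, a_manual, b_manual))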
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
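As a concrete illustration of statistical window-filter design, the sketch below builds a lookup table that, for each observed window pattern, outputs the ideal value most often paired with it in the training data (a plug-in estimate of the optimal filter); constraining the filter class, as discussed above, amounts to restricting the decisions this table may represent. The binary 1-D setting and the names are illustrative assumptions.

from collections import Counter, defaultdict
import numpy as np

def design_window_filter(observed, ideal, width=3):
    # observed and ideal are binary 1-D training signals of equal length.
    half = width // 2
    votes = defaultdict(Counter)
    for i in range(half, len(observed) - half):
        pattern = tuple(observed[i - half:i + half + 1])
        votes[pattern][ideal[i]] += 1
    # For each window pattern, keep the most frequent ideal value.
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

def apply_window_filter(table, signal, width=3):
    half = width // 2
    out = np.array(signal, copy=True)
    for i in range(half, len(signal) - half):
        out[i] = table.get(tuple(signal[i - half:i + half + 1]), signal[i])
    return out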
Abstract:
We outline a method for the registration of images of cross sections using the concepts of the Generalized Hough Transform (GHT). The approach may be useful in situations where automation is a concern. To overcome the known noise problems of the traditional GHT, we have implemented a slightly modified version of the basic algorithm. The modification consists of eliminating points of no interest before the accumulation step of the algorithm. This procedure minimizes the number of accumulation points while reducing the probability of spurious peaks appearing. We also apply image warping techniques to interpolate images between cross sections, which is needed when the distance between sampled sections is too large. It is then suggested that the GHT registration step can help automate the interpolation by simplifying the correspondence between image points. Some results are shown.
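A much-simplified, hedged sketch of the modification described above: edge points with a weak gradient response are discarded before the accumulation step, which shrinks the accumulator workload and reduces spurious peaks. Only a translation-only voting scheme is shown; the paper's full GHT, R-table and thresholds are not reproduced.

from collections import Counter
import numpy as np
from skimage import feature, filters

def translation_by_ght(reference, moving, min_gradient=0.1):
    # Keep only edge points with a strong gradient response before accumulating.
    def strong_edges(img):
        return np.argwhere(feature.canny(img) & (filters.sobel(img) > min_gradient))
    pts_ref, pts_mov = strong_edges(reference), strong_edges(moving)
    acc = Counter()
    for pr in pts_ref:
        for pm in pts_mov:
            acc[tuple(pr - pm)] += 1      # vote for a candidate displacement
    return max(acc, key=acc.get)          # accumulator peak = estimated translation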
Abstract:
The grinding process is usually the last finishing process of a precision component in the manufacturing industries. The process is used to manufacture parts of different materials, and it therefore demands low roughness, control of dimensional and shape errors, and optimum tool life, with minimum cost and time. Damage to the parts is very expensive, since the previous processes and the grinding itself are wasted when the part is damaged at this stage. This work aims to investigate the efficiency of digital signal processing tools applied to acoustic emission signals for detecting thermal damage in the grinding process. To accomplish this goal, an experimental study was carried out over 15 runs on a surface grinding machine operating with an aluminum oxide grinding wheel and ABNT 1045 and VC131 steels. The acoustic emission signals were acquired from a fixed sensor placed on the workpiece holder. A high sampling rate acquisition system at 2.5 MHz was used to collect the raw acoustic emission signal instead of the root mean square value usually employed. In each test, the AE data were analyzed off-line, and the results were compared with the inspection of each workpiece for burn and other metallurgical anomalies. A number of statistical signal processing tools were evaluated.
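The sketch below illustrates the kind of per-test statistics that can be computed from the raw acoustic emission signal sampled at 2.5 MHz; the specific statistics and burn-detection thresholds evaluated in the paper are assumptions here.

import numpy as np
from scipy import stats

def ae_statistics(raw_signal):
    # Basic time-domain descriptors of the raw acoustic emission signal for one pass.
    x = np.asarray(raw_signal, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "rms": float(rms),
        "peak": float(np.max(np.abs(x))),
        "crest_factor": float(np.max(np.abs(x)) / rms),
        "kurtosis": float(stats.kurtosis(x)),
    }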
Abstract:
This paper presents results from an efficient approach to the automatic detection and extraction of human faces from images with any color, texture or objects in the background, which consists in finding isosceles triangles formed by the eyes and the mouth.
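A minimal sketch of the geometric test implied above: two eye candidates and a mouth candidate are accepted as a face hypothesis when they form an approximately isosceles triangle; the tolerance value is an assumption.

import math

def is_isosceles_face(eye_left, eye_right, mouth, tol=0.15):
    # Accept the triple when the two eye-to-mouth sides have nearly equal length.
    side_l = math.dist(eye_left, mouth)
    side_r = math.dist(eye_right, mouth)
    return abs(side_l - side_r) <= tol * max(side_l, side_r)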
Abstract:
This paper presents a dynamic programming approach for semi-automated road extraction from medium- and high-resolution images. The method is a modified version of a pre-existing dynamic programming method for road extraction from low-resolution images. The basic assumption of this pre-existing method is that roads manifest as lines in low-resolution images (pixel footprint > 2 m) and as such can be modeled and extracted as linear features. On the other hand, roads manifest as ribbon features in medium- and high-resolution images (pixel footprint ≤ 2 m) and, as a result, the focus of road extraction becomes the road centerlines. The original method cannot accurately extract road centerlines from medium- and high-resolution images. In view of this, we propose a modification of the merit function of the original approach, carried out by a constraint function embedding road edge properties. Experimental results demonstrated the modified algorithm's potential for extracting road centerlines from medium- and high-resolution images.
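A much-simplified sketch of centerline optimization by dynamic programming is given below: each column of a merit image holds candidate centerline rows, and the path maximizing the accumulated merit under a smoothness constraint is recovered by backtracking. The real merit function, with its road-edge constraint term, and the node topology used in the paper differ from this illustration.

import numpy as np

def dp_centerline(merit):
    # merit[c, r] is the score of placing the centerline at row r in image column c;
    # the path is constrained to move at most one row between consecutive columns.
    n_cols, n_rows = merit.shape
    best = merit.astype(float)
    back = np.zeros(merit.shape, dtype=int)
    for c in range(1, n_cols):
        for r in range(n_rows):
            lo, hi = max(0, r - 1), min(n_rows, r + 2)
            prev = int(np.argmax(best[c - 1, lo:hi])) + lo
            back[c, r] = prev
            best[c, r] += best[c - 1, prev]
    path = [int(np.argmax(best[-1]))]
    for c in range(n_cols - 1, 0, -1):
        path.append(int(back[c, path[-1]]))
    return path[::-1]                     # one centerline row per column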
Abstract:
Brazilian cartography shows a great deficiency in the updating of cartographic products. Thus, Remote Sensing techniques, together with Digital Image Processing (DIP), can contribute to mitigating this problem. Mathematical Morphology theory was used in this work, the principal operator being the pruning operator, with which the features of interest that can be used in the cartographic updating process were extracted. The results obtained are positive and show the potential of mathematical morphology theory in cartography, mainly in updating.
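Since pruning itself is only named above, a minimal sketch of one common implementation follows: end points of a skeleton (pixels with a single 8-neighbour) are removed iteratively to delete short spurs; this is not necessarily the authors' routine.

import numpy as np
from scipy import ndimage as ndi

def prune(skeleton, iterations=5):
    # Iteratively delete end points: skeleton pixels with exactly one 8-neighbour.
    skel = skeleton.astype(bool)
    kernel = np.ones((3, 3), dtype=int)
    for _ in range(iterations):
        neighbours = ndi.convolve(skel.astype(int), kernel, mode="constant") - skel
        endpoints = skel & (neighbours == 1)
        if not endpoints.any():
            break
        skel = skel & ~endpoints
    return skel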
Abstract:
The aim of this paper is to present a photogrammetric method for determining the dimensions of flat surfaces, such as billboards, based on a single digital image. A mathematical model was adapted to generate linear equations for vertical and horizontal lines in the object space. These lines are identified and measured in the image, and the rotation matrix is computed using an indirect method. The distance between the camera and the surface is measured with a lasermeter, providing the coordinates of the camera perspective center. The eccentricity of the lasermeter center relative to the camera perspective center is modeled by three translations, which are computed using a calibration procedure. Some experiments were performed to test the proposed method, and the achieved results are within a relative error of about 1 percent for areas and distances in the object space. This accuracy fulfills the requirements of the intended applications. © 2005 American Society for Photogrammetry and Remote Sensing.
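As a fronto-parallel simplification of the measurement principle, an image length converts to an object-space length through the pinhole scale distance/focal-length once the lasermeter distance is known; the sketch below shows only this scaling and omits the rotation matrix recovered from the vertical and horizontal lines.

def object_length_mm(length_px, pixel_size_mm, focal_length_mm, distance_mm):
    # Pinhole scaling: object length = image length * (distance / focal length).
    return length_px * pixel_size_mm * distance_mm / focal_length_mm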
Abstract:
Several lines of research in road extraction have been carried out in the last 6 years by the Photogrammetry and Computer Vision Research Group (GP-F&VC - Grupo de Pesquisa em Fotogrametria e Visão Computacional). Several semi-automatic road extraction methodologies have been developed, including sequential and optimization techniques. The GP-F&VC has also been developing fully automatic methodologies for road extraction. This paper presents an overview of the GP-F&VC research in road extraction from digital images, along with examples of results obtained with the developed methodologies.