915 results for computational image processing


Relevance: 90.00%

Abstract:

An image encompasses information that must be organized in order to interpret and understand its content. Many computational techniques exist for extracting the main information from an image; they can be divided into three areas: color, texture, and shape analysis. Shape analysis is one of the most important, because it describes characteristics of objects based on their boundary points. We propose an image characterization method, via shape analysis, based on the spectral properties of the graph Laplacian. The procedure builds graphs G from the boundary points of the object, with connections between vertices determined by thresholds T_l. From the graphs we obtain the adjacency matrix A and the degree matrix D, which define the Laplacian matrix L = D - A. The spectral decomposition of the Laplacian matrix (its eigenvalues) is investigated as a descriptor of image characteristics. Two approaches are considered: (a) analysis of a feature vector based on thresholds and histograms, with two parameters, the class interval IC_l and the threshold T_l; (b) analysis of a feature vector based on several thresholds for fixed eigenvalues, namely the second and the last eigenvalue of the matrix L. The techniques were tested on three image collections: synthetic images (Generic), intestinal parasites (SADPI), and plant leaves (CNShape), each with its own characteristics and challenges. To evaluate the results we employed a support vector machine (SVM) classifier, which assesses our approaches by measuring how well the categories separate. The first approach achieved an accuracy of 90% on the Generic collection, 88% on SADPI, and 72% on CNShape. The second approach achieved 97% on the Generic collection, 83% on SADPI, and 86% on CNShape. The results show that image classification based on the Laplacian spectrum categorizes images satisfactorily.
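For concreteness, the pipeline described above can be sketched in a few lines of Python (a minimal sketch, not the authors' code: the distance-threshold graph construction follows the abstract, while the histogram binning and the eigvalsh call are illustrative assumptions):

    import numpy as np

    def laplacian_spectrum_descriptor(boundary_points, threshold, n_bins=32):
        """Feature vector from the Laplacian spectrum of a boundary-point graph.

        boundary_points: (n, 2) array of contour coordinates.
        threshold:       distance threshold T_l; closer points are linked.
        n_bins:          histogram bin count, playing the role of IC_l.
        """
        pts = np.asarray(boundary_points, dtype=float)
        # Pairwise Euclidean distances between boundary points.
        diff = pts[:, None, :] - pts[None, :, :]
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        # Adjacency matrix A: connect vertices closer than T_l.
        A = (dist < threshold).astype(float)
        np.fill_diagonal(A, 0.0)
        # Degree matrix D and Laplacian L = D - A.
        D = np.diag(A.sum(axis=1))
        L = D - A
        # Eigenvalues of the symmetric Laplacian, in ascending order.
        eigvals = np.linalg.eigvalsh(L)
        # A histogram of the spectrum serves as the image's feature vector.
        hist, _ = np.histogram(eigvals, bins=n_bins)
        return hist / hist.sum()

The resulting descriptors would then be fed to an SVM classifier, matching the evaluation protocol described above.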

Relevance: 90.00%

Abstract:

Poster presented at SPIE Photonics Europe, Brussels, 16-19 April 2012.

Relevance: 90.00%

Abstract:

Measurement of concrete strain through non-invasive methods is of great importance in civil engineering and structural analysis. Traditional methods use laser speckle and high-quality cameras, which may prove too expensive for many applications. Here we present a method for measuring concrete deformations with a standard reflex camera and image processing that tracks objects on the concrete's surface. Two different approaches are presented. In the first, purpose-drawn marks are placed on the surface; in the second, we track small surface defects left by air bubbles during the hardening process. The method has been tested on a concrete sample under several loading/unloading cycles. A stop-motion sequence of the process was captured and analyzed, and the results compare well with the values given by a strain gauge. The tracking accuracy of our methods is below 8 μm, on the order of more expensive commercial devices.
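As an illustration of the tracking step, a minimal sub-pixel tracker built on normalized cross-correlation might look as follows (a sketch assuming OpenCV and grayscale images; the function and its parameters are hypothetical, not the authors' implementation):

    import cv2

    def track_mark(reference, frame, x, y, half=20, search=40):
        """Track one surface mark from `reference` to `frame`.

        (x, y) is the mark's position in the reference image; `half` is the
        template half-size and `search` the search-window half-size (pixels).
        """
        tmpl = reference[y - half:y + half, x - half:x + half]
        win = frame[y - search:y + search, x - search:x + search]
        # Normalized cross-correlation is robust to global lighting changes.
        ncc = cv2.matchTemplate(win, tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (px, py) = cv2.minMaxLoc(ncc)

        def subpixel(c, p):
            # Parabolic interpolation through the peak and its two neighbors.
            if 0 < p < len(c) - 1:
                denom = c[p - 1] - 2.0 * c[p] + c[p + 1]
                if denom != 0.0:
                    return p + 0.5 * (c[p - 1] - c[p + 1]) / denom
            return float(p)

        # Displacement of the best match relative to the reference position.
        dx = subpixel(ncc[py, :], px) - (search - half)
        dy = subpixel(ncc[:, px], py) - (search - half)
        return x + dx, y + dy

Strain then follows from the change in separation of two tracked marks divided by their initial separation; the parabolic fit around the correlation peak is what pushes the resolution below one pixel.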

Relevance: 90.00%

Abstract:

"UILU-ENG 84 1703"--Cover.

Relevance: 90.00%

Abstract:

"September 1991."

Relevance: 90.00%

Abstract:

Cox's theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of particular interest is the thesis that the only coherent means of representing uncertainty is via the probability calculus. In this paper I examine the logical assumptions of Cox's theorem and I show how these impinge on the philosophical conclusions thought to be supported by the theorem. I show that the more controversial thesis is not supported by Cox's theorem.
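For reference, one common textbook rendering of the functional assumptions at issue (notation varies across presentations of Cox's theorem, and the paper's exact formulation may differ) is:

    % Cox's two functional premises: belief in a conjunction and in a
    % negation are functions of the component beliefs.
    \[
      b(A \wedge B \mid C) = F\bigl(b(A \mid C),\, b(B \mid A \wedge C)\bigr),
      \qquad
      b(\neg A \mid C) = S\bigl(b(A \mid C)\bigr).
    \]
    % Associativity of conjunction then forces F to satisfy
    \[
      F\bigl(F(x, y), z\bigr) = F\bigl(x, F(y, z)\bigr),
    \]
    % whose regular solutions are, after a monotone rescaling g, ordinary
    % multiplication: g(F(x, y)) = g(x) g(y), i.e. the product rule.

The hidden premises here (that F and S depend on those arguments alone, plus regularity conditions such as continuity and a dense domain of belief values) are the kind of logical assumptions the paper examines.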

Relevance: 90.00%

Abstract:

Multiplication and comultiplication of beliefs represent a generalisation of multiplication and comultiplication of probabilities as well as of binary logic AND and OR. Our approach follows that of subjective logic, where belief functions are expressed as opinions that are interpreted as being equivalent to beta probability distributions. We compare different types of opinion product and coproduct, and show that they represent very good approximations of the analytical product and coproduct of beta probability distributions. We also define division and codivision of opinions, and compare our framework with other logic frameworks for combining uncertain propositions.
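A rough numerical sketch of that correspondence (assuming the standard subjective-logic evidence mapping with prior weight W = 2; the product below is a deliberate simplification that omits the base-rate cross terms of the full operator):

    import numpy as np

    def opinion_to_beta(b, d, u, a):
        """Map a binomial opinion (b, d, u, a) to Beta(alpha, beta) parameters.

        Uses the usual evidence mapping with prior weight W = 2:
        r = W*b/u positive and s = W*d/u negative observations.
        """
        W = 2.0
        r, s = W * b / u, W * d / u
        return r + W * a, s + W * (1.0 - a)

    # Two example opinions (b, d, u, a); the values are illustrative.
    x = (0.6, 0.2, 0.2, 0.5)
    y = (0.4, 0.4, 0.2, 0.5)

    # Monte Carlo product of the corresponding Beta-distributed variables.
    rng = np.random.default_rng(0)
    ax, bx = opinion_to_beta(*x)
    ay, by = opinion_to_beta(*y)
    samples = rng.beta(ax, bx, 100_000) * rng.beta(ay, by, 100_000)

    # Simplified opinion product: belief multiplies, disbelief combines
    # disjunctively, and the remainder is uncertainty.
    b_xy = x[0] * y[0]
    d_xy = x[1] + y[1] - x[1] * y[1]
    u_xy = 1.0 - b_xy - d_xy
    E_xy = b_xy + 0.5 * u_xy  # expected probability with base rate 1/2

    print(samples.mean(), E_xy)  # roughly 0.35 vs. 0.36 for these inputs

Even this stripped-down product lands near the Monte Carlo mean of the Beta product; quantifying that approximation for the full operators is what the paper does analytically.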

Relevance: 90.00%

Abstract:

Extraction and reconstruction of rectal wall structures from ultrasound images is helpful for surgeons in rectal clinical diagnosis and for 3-D reconstruction of rectal structures. The primary task is to extract the boundary of the muscular layers of the rectal wall. However, due to the low SNR of ultrasound imaging and the thin muscular layer structure of the rectum, this boundary detection task remains a challenge. An active contour model is an effective high-level model that has been used successfully for object representation and recognition in many image-processing applications. We present a novel multigradient field active contour algorithm with an extended ability for multiple-object detection, which overcomes some limitations of ordinary active contour models ("snakes"). The core of the algorithm is the proposed multigradient vector fields, which replace the image forces in the kinetic function to impose alternative constraints on the deformation of the active contour, thereby partially solving the initialization limitation of active contours for rectal wall boundary detection. An adaptive expanding force is also added to the model to help the active contour pass through homogeneous regions of the image. The efficacy of the model is explained and tested on boundary detection in a ring-shaped image, a synthetic image, and an ultrasound image. The experimental results show that the proposed multigradient field active contour is feasible for multilayer boundary detection of the rectal wall.
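The multigradient formulation itself is not reproduced here, but the conventional snake it extends is available off the shelf; a minimal scikit-image example (illustrative test image, initialization, and parameters) shows the machinery being discussed:

    import numpy as np
    from skimage import data, filters
    from skimage.segmentation import active_contour

    # Pre-smooth: snakes follow intensity gradients, and low-SNR data
    # (such as ultrasound) benefits from Gaussian filtering first.
    image = data.coins()
    smoothed = filters.gaussian(image, sigma=3)

    # Initialize the contour as a circle; sensitivity to this choice is
    # exactly the limitation the multigradient model aims to reduce.
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([100 + 60 * np.sin(s), 200 + 60 * np.cos(s)])  # (row, col)

    snake = active_contour(
        smoothed,
        init,
        alpha=0.015,  # elasticity: resists stretching
        beta=10.0,    # rigidity: resists bending
        gamma=0.001,  # step size of the iterative energy minimization
    )
    print(snake.shape)  # (200, 2) array of contour coordinates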

Relevance: 90.00%

Abstract:

Most face recognition systems only work well under quite constrained environments. In particular, the illumination conditions, facial expressions, and head pose must be tightly controlled for good recognition performance. In 2004, we proposed a new face recognition algorithm, Adaptive Principal Component Analysis (APCA) [4], which performs well against both lighting variation and expression change. But like other eigenface-derived face recognition algorithms, APCA only performs well with frontal face images. The work presented in this paper extends our previous work to also accommodate variations in head pose. Following the approach of Cootes et al., we develop a face model and a rotation model which can be used to interpret facial features and synthesize realistic frontal face images when given a single novel face image. We use a Viola-Jones based face detector to detect the face in real time and thus solve the initialization problem for our Active Appearance Model search. Experiments show that our approach achieves good recognition rates on face images across a wide range of head poses; indeed, recognition rates improve by up to a factor of 5 compared with standard PCA.
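The APCA and Active Appearance Model components are too involved for a short example, but the Viola-Jones front end is standard; here is a sketch using OpenCV's stock cascade (the parameter values are common defaults, not the paper's):

    import cv2

    # OpenCV ships the trained Viola-Jones frontal-face cascade.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )

    def detect_faces(frame_bgr):
        """Return (x, y, w, h) boxes for faces found in a BGR frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)  # normalize contrast before detection
        return cascade.detectMultiScale(
            gray,
            scaleFactor=1.1,   # image-pyramid step between scales
            minNeighbors=5,    # overlapping hits needed to keep a detection
            minSize=(60, 60),  # ignore implausibly small candidates
        )

In the pipeline described above, each detected box would seed the Active Appearance Model search, whose fitted shape parameters then drive the frontal-view synthesis.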

Relevance: 90.00%

Abstract:

A novel algorithm for registration of dynamic contrast-enhanced (DCE) MRI data of the breast is presented. It is based on iterated dynamic programming, an algorithm originally devised to solve the stereo matching problem. Using artificially distorted DCE-MRI breast images, it is shown that the proposed algorithm can correct for movement and distortion over a larger range than is likely to occur during routine clinical examination. In addition, using a clinical DCE-MRI data set with an expertly labeled suspicious region, it is shown that the proposed algorithm significantly reduces the variability of the enhancement curves at the pixel level, yielding more pronounced uptake and washout phases.
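Iterated dynamic programming reduces the 2-D registration problem to repeated 1-D alignments; the core scanline subproblem can be sketched as follows (a toy version with squared-difference data costs and a linear smoothness penalty, not the authors' implementation):

    import numpy as np

    def align_scanline(row_a, row_b, max_shift=10, smooth=0.1):
        """Optimal per-pixel horizontal shift mapping row_a onto row_b.

        Classic stereo-style DP: the data cost is the squared intensity
        difference; `smooth` penalizes shift changes between neighbors.
        """
        n = len(row_a)
        shifts = np.arange(-max_shift, max_shift + 1)
        k = len(shifts)
        cost = np.full((n, k), np.inf)
        back = np.zeros((n, k), dtype=int)

        def data_cost(i, s):
            j = i + s
            return (row_a[i] - row_b[j]) ** 2 if 0 <= j < n else np.inf

        for s in range(k):
            cost[0, s] = data_cost(0, shifts[s])
        for i in range(1, n):
            for s in range(k):
                # Transition cost: changing the shift between neighbors.
                prev = cost[i - 1] + smooth * np.abs(shifts - shifts[s])
                back[i, s] = int(np.argmin(prev))
                cost[i, s] = data_cost(i, shifts[s]) + prev[back[i, s]]

        # Backtrack the cheapest shift sequence.
        out = np.zeros(n, dtype=int)
        s = int(np.argmin(cost[-1]))
        for i in range(n - 1, -1, -1):
            out[i] = shifts[s]
            s = back[i, s]
        return out

Roughly speaking, the iterated scheme alternates such 1-D passes (for example over rows and then columns) until the estimated deformation field stabilizes.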

Relevance: 90.00%

Abstract:

Image segmentation is one of the most computationally intensive operations in image processing and computer vision, because a large volume of data is involved and many different features have to be extracted from the image data. This thesis investigates practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of the hardware architectures, and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions: the Row-Column method adopts the array architecture, and the Vector-Radix method the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree-based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed; many of the obtained speed-up and efficiency measures are close to their respective theoretical maxima, and where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
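The thesis implements these methods in Occam on Transputer networks; as a language-neutral illustration of the split phase of the quad-tree Split and Merge algorithm (the concurrent mapping onto the pyramid architecture is the thesis's contribution and is not reproduced here), consider this Python sketch:

    import numpy as np

    def split(region, img, threshold, leaves):
        """Recursively split `region` = (top, left, size) until homogeneous.

        A region counts as homogeneous when its intensity standard deviation
        falls below `threshold`; homogeneous blocks are collected in
        `leaves`, and the merge phase would then join similar neighbors.
        """
        top, left, size = region
        block = img[top:top + size, left:left + size]
        if size == 1 or block.std() < threshold:
            leaves.append(region)
            return
        h = size // 2
        for dt, dl in ((0, 0), (0, h), (h, 0), (h, h)):  # four quadrants
            split((top + dt, left + dl, h), img, threshold, leaves)

    # Usage: the image side must be a power of two for a clean quad-tree.
    img = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(float)
    leaves = []
    split((0, 0, 256), img, threshold=12.0, leaves=leaves)
    print(len(leaves), "homogeneous blocks")

The quad-tree recursion is also what makes the algorithm a natural fit for the pyramid architecture, where each processor level can own one level of the tree.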

Relevance: 90.00%

Abstract:

Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
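The three-stage model can be caricatured numerically (the exponents and patch size below are typical of this literature, not the paper's fitted values):

    import numpy as np

    def model_response(contrasts, patch_size=16, m=4.0):
        """Three-stage summation model over a 1-D field of local contrasts.

        1. square-law transduction of each local filter response;
        2. linear pooling within each patch (the contrast integrator);
        3. Minkowski (probability) summation across patches, exponent m.
        """
        c = np.asarray(contrasts, dtype=float)
        transduced = c ** 2                               # stage 1
        n = (len(c) // patch_size) * patch_size
        patches = transduced[:n].reshape(-1, patch_size)
        integrators = patches.sum(axis=1)                 # stage 2
        return (integrators ** m).sum() ** (1.0 / m)      # stage 3

    # A 'Swiss cheese' style comparison: full field vs. interdigitated
    # checks where half of the local contrasts are zeroed out.
    full = np.full(128, 0.01)
    cheese = full.copy()
    cheese[::2] = 0.0
    print(model_response(full) / model_response(cheese))  # -> 2.0

The factor-of-two response advantage for the full stimulus comes from the linear pooling at stage 2; with probability summation alone across pixels the advantage would be much smaller. Converting such response ratios into the 4 to 6 dB threshold differences reported above depends on the transducer slope and on how internal noise pools, which is the model-fitting exercise the paper undertakes.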