959 results for Techniques: Image Processing


Relevance: 90.00%

Abstract:

One of the key aspects of 3D-image registration is the computation of the joint intensity histogram. We propose a new approach that computes this histogram using uniformly distributed random lines to stochastically sample the overlapping volume between two 3D images. Intensity values are captured along the lines at evenly spaced positions, starting from a different random initial offset for each line. This method provides accurate, robust and fast mutual information-based registration. Interpolation effects are drastically reduced thanks to the stochastic nature of the line generation, and the alignment process is also accelerated. The results show that the proposed method outperforms the classic computation of the joint histogram.
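As an illustration, the line-based sampling of a joint histogram can be sketched in a few lines of Python. This is a 2D analogue of the 3D method; function and parameter names are ours, and nearest-neighbour capture stands in for whatever interpolation the authors use:

```python
import numpy as np

def joint_histogram_random_lines(img_a, img_b, n_lines=200, step=1.0,
                                 bins=8, rng=None):
    """Estimate the joint intensity histogram of two aligned uint8 images by
    sampling along uniformly distributed random lines."""
    rng = np.random.default_rng(rng)
    h, w = img_a.shape
    hist = np.zeros((bins, bins), dtype=np.int64)
    diag = np.hypot(h, w)
    for _ in range(n_lines):
        p = rng.uniform([0, 0], [h - 1, w - 1])        # random point on the line
        theta = rng.uniform(0.0, 2 * np.pi)            # random direction
        d = np.array([np.sin(theta), np.cos(theta)])
        t = np.arange(rng.uniform(0.0, step), diag, step)  # per-line random offset
        pts = p + t[:, None] * d
        inside = ((pts[:, 0] >= 0) & (pts[:, 0] <= h - 1) &
                  (pts[:, 1] >= 0) & (pts[:, 1] <= w - 1))
        idx = np.round(pts[inside]).astype(int)        # nearest-neighbour capture
        a = img_a[idx[:, 0], idx[:, 1]].astype(np.int64) * bins // 256
        b = img_b[idx[:, 0], idx[:, 1]].astype(np.int64) * bins // 256
        np.add.at(hist, (a, b), 1)
    return hist

def mutual_information(hist):
    """Mutual information (nats) of the normalized joint histogram."""
    p = hist / hist.sum()
    px = p.sum(1, keepdims=True)
    py = p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

A registration loop would then transform one image over a search space and keep the transform that maximizes `mutual_information` of the sampled histogram.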

Relevance: 90.00%

Abstract:

In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. By inverting the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where the input variable now represents the histogram bins and the output is given by the set of regions obtained from the split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Experiments on 2-D and 3-D images illustrate the behavior of the proposed algorithms.

Relevance: 90.00%

Abstract:

In this paper, an information-theoretic framework for image segmentation is presented. The approach is based on the information channel that goes from the image intensity histogram to the regions of the partitioned image. It allows us to define a new family of segmentation methods that maximize the mutual information of the channel. Firstly, a greedy top-down algorithm that partitions an image into homogeneous regions is introduced. Secondly, a histogram quantization algorithm that clusters color bins in a greedy bottom-up way is defined. Finally, the regions produced by the partitioning algorithm can optionally be merged using the quantized histogram.
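A toy version of the top-down, MI-maximizing partitioning on a 1-D "image" illustrates the channel idea. The real algorithm works on 2-D regions; `greedy_split`, its parameters and the binning are our own illustrative choices:

```python
import numpy as np

def partition_mi(pixels, cuts, bins=4):
    """I(region; intensity) for a 1-D signal split at the given cut positions."""
    edges = [0, *sorted(cuts), len(pixels)]
    joint = np.zeros((len(edges) - 1, bins))
    for r, (a, b) in enumerate(zip(edges, edges[1:])):
        joint[r] = np.bincount(pixels[a:b], minlength=bins)[:bins]
    p = joint / joint.sum()
    pr = p.sum(1, keepdims=True)
    pb = p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pr * pb)[nz])).sum())

def greedy_split(pixels, n_regions, bins=4):
    """Top-down: repeatedly add the cut that maximizes the MI gain."""
    cuts = []
    while len(cuts) + 1 < n_regions:
        candidates = [c for c in range(1, len(pixels)) if c not in cuts]
        cuts.append(max(candidates,
                        key=lambda c: partition_mi(pixels, cuts + [c], bins)))
    return sorted(cuts)
```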

Relevance: 90.00%

Abstract:

Skin cancer is currently considered one of the most frequent types of cancer, due among other factors to increased exposure to ultraviolet (UV) radiation. Recently, the use of Confocal Microscopy (CM) for the evaluation and diagnosis of skin cancer has attracted considerable interest. Its main advantage is the ability to visualize the region of interest at the cellular level in real time, providing information similar to that obtained from a biopsy but without the suffering a biopsy entails for the patient. Its main drawback, however, is that CM images in their current format (a set of 2D slices at different skin depths) are difficult for physicians to interpret. The confocal microscope is one of the most recent diagnostic techniques and has become an established tool for obtaining high-resolution images and 3D reconstructions of a wide variety of biological samples. It can scan different planes along the Z axis, producing 2D images at different depths together with the capture parameters (such as depth, laser power, x, y, z positioning, etc.). Software tools can integrate this information into a 3D model of the region of interest. The main goal of this project is to develop a tool that aids the interpretation of CM images and thereby improves skin cancer diagnosis.

Relevance: 90.00%

Abstract:

Cardiac data processing is one of the most complex tasks of its kind, if not the most complex. The main problem is that, unlike other parts of the body, the patient's heart is in continuous motion. This motion appears as noise in the images produced by the acquisition devices. The noise not only makes it harder for cardiologists and specialists to detect pathologies, but in many cases also limits the application of certain techniques and methods. For example, 3D visualization methods (which generate a 3D representation of an organ) that can easily be applied to brain data are not applicable to cardiac data. The Grup d'Informàtica Gràfica of the Universitat de Girona is collaborating with the Institut de Diagnòstic per la Imatge (IDI) of the Dr. Josep Trueta hospital on the development of new software tools to support diagnosis. One of IDI's current priorities is the treatment of cardiac diseases. A platform called Starviewer is available that integrates the basic operations for manipulating and visualizing medical data. The goal of this project is to develop, and integrate into the Starviewer platform, the modules needed to process, manipulate and visualize cardiac data from magnetic resonance imaging.

Relevance: 90.00%

Abstract:

Scientific visualization studies and defines algorithms and data structures that make data sets comprehensible through images. In medical applications, the data to be interpreted come from different acquisition devices and are represented in a voxel model. The usefulness of this voxel model depends on viewing it from the ideal point of view, i.e. the one that conveys the most information. In addition, the Magic Mirrors technique allows the voxel model to be seen from several points of view at once, with each mirror showing a different property value. In this project we implement an algorithm that determines the ideal point of view for visualizing a voxel model, as well as the ideal points of view for the mirrors, so as to obtain as much information as possible from the voxel model. The algorithm is based on information theory to decide which visualization is best. It also determines the optimal color assignment for the voxel model.
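A common information-theoretic choice for such "best view" problems is viewpoint entropy. A minimal sketch, assuming each candidate viewpoint is summarized by the projected areas of the visible voxels or faces (not necessarily the exact measure used in the project):

```python
import numpy as np

def viewpoint_entropy(visible_areas):
    """Shannon entropy (bits) of the projected-area distribution seen from
    one viewpoint: higher entropy = more evenly distributed information."""
    p = np.asarray(visible_areas, dtype=float)
    p = p / p.sum()
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

def best_viewpoint(area_table):
    """Rows = candidate viewpoints, columns = per-element projected areas;
    return the index of the viewpoint with the highest entropy."""
    return int(max(range(len(area_table)),
                   key=lambda i: viewpoint_entropy(area_table[i])))
```

For the mirrors, the same score can be re-evaluated over the remaining candidate viewpoints after the main view is fixed.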

Relevance: 90.00%

Abstract:

One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems that avoids excessive hand and finger movements. This also provides a valuable alternative communication pathway for people with motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup: a webcam, a laptop and an infrared light source were used together with a simple frame to fix the user's head. Detailed information is given on the image processing techniques used to locate the centre of the pupil, and different methods to calculate the point of gaze are discussed. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
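The pupil-locating step of such a tracker is often a simple threshold-and-centroid operation, since the pupil appears as the darkest region under infrared lighting. A minimal sketch (real pipelines add blurring, morphology and glint removal; `threshold` is an illustrative value):

```python
import numpy as np

def pupil_centre(gray, threshold=40):
    """Estimate the pupil centre as the centroid of the darkest pixels
    of a greyscale eye image; returns (x, y) or None if nothing is dark."""
    mask = gray < threshold          # pupil = darkest blob under IR light
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Point-of-gaze estimation then maps this centre to screen coordinates, typically through a polynomial fitted during a calibration phase.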

Relevance: 90.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval, as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is underpinned by an image labelling framework called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two high-level image feature vector models are developed from the CCL labelling results, for image data clustering and retrieval respectively. A subset of the Corel image collection has been used to evaluate the proposed method. The experimental results to date indicate that our semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
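As a rough illustration of a keyword-based high-level feature vector, the following sketch weights each vocabulary keyword by the fraction of image regions labelled with it and ranks images by cosine similarity. The weighting and similarity measure are generic choices, not the paper's exact model:

```python
import numpy as np

def ccl_feature_vector(region_labels, vocabulary):
    """One dimension per vocabulary keyword, weighted by the fraction of
    image regions the labelling assigned that keyword."""
    v = np.zeros(len(vocabulary))
    for label in region_labels:
        if label in vocabulary:
            v[vocabulary.index(label)] += 1
    return v / max(len(region_labels), 1)

def similarity(u, v):
    """Cosine similarity between two feature vectors, for retrieval ranking."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```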

Relevance: 90.00%

Abstract:

Visually impaired people have a very different view of the world: environments that appear simple to a ‘normally’ sighted person can be difficult for people with visual impairments to access and move around, a problem that can be hard to fully comprehend by people with ‘normal’ vision even when guidelines for inclusive design are available. This paper investigates ways in which image processing techniques can be used to simulate the characteristics of a number of common visual impairments, in order to provide planners, designers and architects with a visual representation of how people with visual impairments view their environment. The aim is to promote greater understanding of the issues, the creation of more accessible buildings and public spaces, and increased accessibility for visually impaired people in everyday situations.
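Two common impairment characteristics, reduced acuity and reduced contrast sensitivity, can be crudely simulated with a blur plus a contrast compression. A sketch with illustrative parameters (repeated 3x3 box blurs approximate a Gaussian blur):

```python
import numpy as np

def simulate_low_vision(gray, blur_passes=3, contrast=0.5):
    """Crude low-vision filter for a uint8 greyscale image: separable box
    blurs reduce acuity, then intensities are compressed towards mid-grey."""
    img = gray.astype(float)
    k = np.ones(3) / 3.0
    for _ in range(blur_passes):
        # separable blur: convolve every column, then every row
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 0, img)
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    img = 127.5 + contrast * (img - 127.5)   # compress contrast about mid-grey
    return np.clip(img, 0, 255).astype(np.uint8)
```

Other impairments would use different filters, e.g. central or peripheral field loss as a spatially varying mask, and colour deficiencies as a channel remapping.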

Relevance: 90.00%

Abstract:

Methods for producing nonuniform transformations, or regradings, of discrete data are discussed. The transformations are useful in image processing, principally for enhancement and normalization of scenes. Regradings which “equidistribute” the histogram of the data, that is, which transform it into a constant function, are determined. Techniques for smoothing the regrading, dependent upon a continuously variable parameter, are presented. Generalized methods for constructing regradings such that the histogram of the data is transformed into any prescribed function are also discussed. Numerical algorithms for implementing the procedures and applications to specific examples are described.
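A hedged sketch of two of these ideas: an equidistributing regrading is essentially histogram equalization through the normalized cumulative histogram, and a continuously variable parameter can smooth it by blending with the identity mapping (the paper's actual smoothing technique may differ):

```python
import numpy as np

def regrade(gray, alpha=1.0, levels=256):
    """Grey-level regrading for a uint8 image: alpha=0 is the identity,
    alpha=1 maps through the cumulative histogram (equidistribution)."""
    hist = np.bincount(gray.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / hist.sum()
    ident = np.arange(levels) / (levels - 1)
    lut = np.round(((1 - alpha) * ident + alpha * cdf) * (levels - 1))
    return lut.astype(np.uint8)[gray]

def equalize(gray, levels=256):
    """Fully equidistributing regrading (classic histogram equalization)."""
    return regrade(gray, alpha=1.0, levels=levels)
```

Transforming the histogram into an arbitrary prescribed function generalizes this: compose the data's CDF with the inverse CDF of the target distribution instead of with the identity.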

Relevance: 90.00%

Abstract:

Sea surface temperature has been an important application of remote sensing from space for three decades. This chapter first describes well-established methods that have delivered valuable routine observations of sea surface temperature for meteorology and oceanography. Increasingly demanding requirements, often related to climate science, have highlighted some limitations of these approaches. Practitioners have had to revisit techniques of estimation, of characterising uncertainty, and of validating observations—and even to reconsider the meaning(s) of “sea surface temperature”. The current understanding of these issues is reviewed, drawing attention to ongoing questions. Lastly, the prospect for thermal remote sensing of sea surface temperature over coming years is discussed.

Relevance: 90.00%

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N(s) elliptical Gaussian sources characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology can also quantitatively determine the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower for a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
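The ingredients — a six-parameter elliptical Gaussian model, a sum-of-squared-differences performance function, and a cross-entropy search — can be sketched as follows for a single source. Conventions (e.g. how eccentricity defines the minor axis) and all hyperparameters here are illustrative:

```python
import numpy as np

def elliptical_gaussian(shape, x0, y0, peak, a, ecc, theta):
    """One source: 2-D peak position, peak intensity, major axis a,
    eccentricity (minor axis = a * ecc) and orientation angle theta."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    xr = (xx - x0) * np.cos(theta) + (yy - y0) * np.sin(theta)
    yr = (yy - y0) * np.cos(theta) - (xx - x0) * np.sin(theta)
    return peak * np.exp(-(xr / a) ** 2 - (yr / (a * ecc)) ** 2)

def performance(observed, sources):
    """Sum of squared differences between the model and observed images."""
    model = sum(elliptical_gaussian(observed.shape, *s) for s in sources)
    return float(((model - observed) ** 2).sum())

def cross_entropy_fit(observed, n_samples=200, elite=20, iters=30, seed=0):
    """Cross-entropy search for one source: sample parameter vectors, keep
    the elite fraction, refit the sampling distribution, repeat."""
    rng = np.random.default_rng(seed)
    h, w = observed.shape
    mu = np.array([w / 2, h / 2, observed.max(), 3.0, 0.5, 0.0])
    sigma = np.array([w / 4, h / 4, observed.max() / 2, 2.0, 0.3, 1.0])
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(n_samples, 6))
        samples[:, 3:5] = np.abs(samples[:, 3:5]) + 1e-3   # keep axes positive
        scores = [performance(observed, [s]) for s in samples]
        best = samples[np.argsort(scores)[:elite]]
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-6
    return mu
```

For N sources the parameter vector grows to 6N, which is where the method's cost (and its advantage over per-component fitting) comes from.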

Relevance: 90.00%

Abstract:

A novel mathematical framework inspired by Morse theory for the topological characterization of triangles in 2D meshes is introduced, useful for applications that involve creating mesh models of objects whose geometry is not known a priori. The framework guarantees precise control of the topological changes introduced by triangle insertion/removal operations and enables the definition of intuitive high-level operators for managing the mesh while keeping its topological integrity. An application is described in the implementation of an innovative approach to the detection of 2D objects in images that integrates the topological control enabled by geometric modeling with traditional image processing techniques. (C) 2008 Published by Elsevier B.V.

Relevance: 90.00%

Abstract:

In this paper we present a novel approach for multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated according to a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution we apply several combinatorial optimization methods using multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often unfeasible in real image processing applications. Markov Random Field model parameters are estimated by the Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. (C) 2010 Elsevier B.V. All rights reserved.
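One of the simpler sub-optimal combinatorial optimizers in the family the paper combines is Iterated Conditional Modes (ICM). A toy sketch with a pixel-wise Gaussian likelihood and a Potts smoothness term over 4-neighbours (the paper's GMRF likelihood and MPL parameter estimation are not reproduced here):

```python
import numpy as np

def icm(obs, means, beta=1.0, sweeps=5):
    """Approximate the MAP labelling: each pixel greedily takes the label
    minimizing its local energy = Gaussian data term + Potts disagreement
    penalty with its 4-neighbours."""
    labels = np.abs(obs[..., None] - np.asarray(means)).argmin(-1)  # ML start
    h, w = obs.shape
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                neigh = [labels[yy, xx]
                         for yy, xx in ((y - 1, x), (y + 1, x),
                                        (y, x - 1), (y, x + 1))
                         if 0 <= yy < h and 0 <= xx < w]
                costs = [0.5 * (obs[y, x] - m) ** 2
                         + beta * sum(int(k != n) for n in neigh)
                         for k, m in enumerate(means)]
                labels[y, x] = int(np.argmin(costs))
    return labels
```

ICM converges fast but only to a local optimum, which is exactly why running several such sub-optimal algorithms from multiple initializations, as the paper proposes, pays off.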

Relevance: 90.00%

Abstract:

The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique to create two-stage operators that circumvents that difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the process of creating a set of first-level operators that are later combined in a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
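A single-stage, locally defined operator of the kind being stacked can be sketched as a lookup table from window patterns to the most frequent training output. The window shape and fallback rule below are illustrative, and the information-theoretic grouping of pixels is not reproduced:

```python
import numpy as np
from collections import Counter

CROSS = ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))  # illustrative window

def learn_w_operator(pairs, window=CROSS):
    """Estimate, for every window pattern seen in (input, ideal-output)
    training pairs, the most frequent output value."""
    votes = {}
    for src, dst in pairs:
        h, w = src.shape
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                patt = tuple(src[y + dy, x + dx] for dy, dx in window)
                votes.setdefault(patt, Counter())[dst[y, x]] += 1
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}

def apply_w_operator(op, img, window=CROSS):
    """Apply the learned operator translation-invariantly over the image."""
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patt = tuple(img[y + dy, x + dx] for dy, dx in window)
            out[y, x] = op.get(patt, patt[0])   # unseen pattern: identity
    return out
```

A stacked design trains several such first-level operators on different windows and feeds their outputs, as extra channels, into a second-level operator trained the same way.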