908 results for Image-to-Image Variation


Relevance:

50.00%

Publisher:

Abstract:

When preparing an article on image restoration in astronomy, some topics obviously have to be dropped to keep the work at a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to find the restoration. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. The prior models used to restore astronomical images are then examined. We describe the algorithms used to find the restoration for the most common combinations of degradation and image models, and then comment on important issues such as acceleration of algorithms, stopping rules, and parameter estimation. We also comment on the huge amount of information available to, and made available by, the astronomical community.
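As a concrete illustration of the kind of algorithm surveyed here, the sketch below implements Richardson-Lucy deconvolution, a classical iterative scheme that follows from the Bayesian treatment of Poisson noise; the point-spread function, image and iteration count are arbitrary assumptions, not values from the article.

```python
# Minimal sketch of Richardson-Lucy deconvolution (Poisson noise model).
# The PSF, synthetic image and iteration count are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Iteratively estimate the latent image under a Poisson noise model."""
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)      # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Example: restore a synthetic star field blurred by a Gaussian PSF.
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[rng.integers(0, 64, 20), rng.integers(0, 64, 20)] = 100.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
observed = rng.poisson(fftconvolve(truth, psf, mode="same") + 1.0).astype(float)
restored = richardson_lucy(observed, psf)
```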

Relevance:

50.00%

Publisher:

Abstract:

High-resolution side scan sonar has been used to map the seafloor of the Ría de Pontevedra. Four backscatter patterns have been mapped within the Ría: (1) a pattern of isolated reflections, correlated with granite and metamorphic outcrops and located close to the coastal prominences and the Ons and Onza Islands; (2) a pattern of strong reflectivity, usually located around the basement outcrops and near the coastline and produced by coarse-grained sediment; (3) a pattern of weak backscatter, correlated with fine sand to mud and covering large areas in the central and deep part of the Ría, where bottom currents are weak; it is generally featureless, except where pockmarks and anthropogenic features are present; and (4) patches of strong and weak backscatter, located at the boundary between coarse- and fine-grained sediments and caused by strong bottom currents. The presence of megaripples associated with both the strong-reflectivity pattern and the sedimentary patches indicates bedload transport of sediment during high-energy conditions (storms). Side scan sonar records, together with supplementary bathymetry, bottom samples and hydrodynamic data, reveal that the distribution of seafloor sediment is strongly related to oceanographic processes and to the particular morphology and topography of the Ría.

Relevance:

50.00%

Publisher:

Abstract:

Aim: When planning SIRT using 90Y microspheres, the partition model is used to refine the activity calculated by the body surface area (BSA) method, to potentially improve the safety and efficacy of treatment. For partition model dosimetry, accurate determination of the mean tumor-to-normal liver ratio (TNR) is critical, since it directly impacts absorbed dose estimates. This work aimed at developing and assessing a reliable methodology for the calculation of 99mTc-MAA SPECT/CT-derived TNR ratios based on phantom studies. Materials and methods: The NEMA IQ phantom (6 hot spheres) and the Kyoto liver phantom, with different hot/background activity concentration ratios, were imaged on a SPECT/CT system (GE Infinia Hawkeye 4). For each reconstruction of the IQ phantom, TNR quantification was assessed in terms of relative recovery coefficients (RC), and image noise was evaluated in terms of the coefficient of variation (COV) in the filled background. RCs were compared for OSEM reconstructions with Hann, Butterworth and Gaussian filters, as well as for FBP. For OSEM, RCs were assessed by varying parameters independently, such as the number of iterations (i) and subsets (s) and the cut-off frequency of the filter (fc). The influence of attenuation and scatter corrections was also investigated. Furthermore, 2D-ROI and 3D-VOI contouring were compared. For this purpose, dedicated Matlab routines were developed in-house for automatic 2D-ROI/3D-VOI determination, to reduce intra-user and intra-slice variability. The best reconstruction parameters and the RCs obtained with the IQ phantom were used to recover corrected TNR for the Kyoto phantom for an arbitrary hot-lesion size. In addition, we computed TNR volume histograms to better assess uptake heterogeneity. Results: The highest RCs were obtained with OSEM (i=2, s=10) coupled with the Butterworth filter (fc=0.8). Indeed, we observed a global 20% RC improvement over other OSEM settings and a 50% increase compared to the best FBP reconstruction. In any case, both attenuation and scatter corrections must be applied, improving RC while preserving good image noise (COV<10%). Both 2D-ROI and 3D-VOI analyses led to similar results. Nevertheless, we recommend using 3D-VOIs, since tumor uptake regions are intrinsically 3D. RC-corrected TNR values lie within 17% of the true value, substantially improving the evaluation of small-volume (<15 mL) regions. Conclusions: This study reports the multi-parameter optimization of 99mTc-MAA SPECT/CT image reconstruction for planning 90Y dosimetry in SIRT. In phantoms, accurate quantification of TNR was obtained using OSEM coupled with a Butterworth filter and RC correction.
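The sketch below illustrates, in schematic form, how a phantom-derived recovery coefficient can be applied to a measured TNR to compensate partial-volume losses; the RC curve, volumes and count values are placeholders, not data from this study.

```python
# Hedged sketch of recovery-coefficient (RC) correction of a measured TNR.
# The calibration values below are placeholders, not the paper's phantom data.
import numpy as np

# Hypothetical phantom calibration: sphere volume (mL) -> recovery coefficient
phantom_volumes_ml = np.array([0.5, 1.2, 2.6, 5.6, 11.5, 26.5])
phantom_rc         = np.array([0.20, 0.35, 0.50, 0.65, 0.75, 0.85])

def corrected_tnr(mean_tumor_counts, mean_liver_counts, lesion_volume_ml):
    """Correct the measured TNR using an RC interpolated at the lesion volume."""
    rc = np.interp(lesion_volume_ml, phantom_volumes_ml, phantom_rc)
    measured_tnr = mean_tumor_counts / mean_liver_counts
    return measured_tnr / rc   # partial volume underestimates the tumor uptake

print(corrected_tnr(mean_tumor_counts=4200.0, mean_liver_counts=800.0,
                    lesion_volume_ml=10.0))
```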

Relevance:

50.00%

Publisher:

Abstract:

Calculations of the binding energy of bound positron states in metal surfaces, with explicit inclusion of plasmon dispersion and single-particle effects, are presented. The binding energy is greatly reduced with respect to the undispersed case.

Relevance:

50.00%

Publisher:

Abstract:

A surface dielectric function of a semi-infinite plane-bounded metal is defined in the spirit of the plasmon-pole dielectric function of the bulk. It is modeled in such a way that the surface-plasmon dispersion relation is recovered for small momentum transfer. This function is employed to compute the image potential at all distances outside the surface. Interaction with bulk modes is neglected for simplicity and clarity. The interaction of a massive point charge with a metal surface is also considered in the context of a boson model for surface-plasmon excitation. We present a new definition of the image potential for this case.
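For orientation, these are the standard limiting forms (in Gaussian units) that such a surface response model is built to reproduce: the non-dispersive surface-plasmon frequency and the classical image potential far from the surface. They are textbook results quoted for context, not the paper's full expressions.

```latex
\[
  \omega_s = \frac{\omega_p}{\sqrt{2}}, \qquad
  V_{\mathrm{im}}(z) \to -\frac{e^{2}}{4z} \quad (z \to \infty).
\]
```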

Relevance:

50.00%

Publisher:

Abstract:

The Cave by José Saramago takes as a clear reference the image of the cave in Book VII of Plato's Republic; Saramago, however, is not an idealist or metaphysical writer. This article, taking advantage of the applicability with which Plato endowed his image, defends the urge to remain open to the messages sent by the earth, by matter; the urge not to become prisoners of the golden caves of Western society; and, finally, the urge to find our freedom in Nature, phýsis, and not far from or beyond it, metá.

Relevance:

50.00%

Publisher:

Abstract:

In the search for high efficiency in root studies, computational systems have been developed to analyze digital images. ImageJ and Safira are public-domain systems that may be used for image analysis of washed roots. However, differences between the root properties measured with ImageJ and Safira are expected. This study compared values of root length and surface area obtained with these public-domain systems with values obtained by a reference method. Root samples were collected in a banana plantation, in an area of a shallower Typic Carbonatic Haplic Cambisol (CXk) and an area of a deeper Typic Haplic Ta Eutrophic Cambisol (CXve), at six depths with five replications. Root images were digitized and the ImageJ and Safira systems were used to determine root length and surface area. The line-intersect method modified by Tennant was used as the reference; values of root length and surface area measured with the different systems were analyzed by Pearson's correlation coefficient and compared by confidence intervals and t-tests. Both ImageJ and Safira had positive correlation coefficients with the reference method for root length and surface area data in CXk and CXve. The correlation coefficient ranged from 0.54 to 0.80, with the lowest value observed for ImageJ in the measurement of the surface area of roots sampled in CXve. The 95% confidence interval revealed that root length measurements with Safira did not differ from those of the reference method in CXk (-77.3 to 244.0 mm). Regarding surface area measurements, Safira did not differ from the reference method for samples collected in CXk (-530.6 to 565.8 mm²) or in CXve (-4231 to 612.1 mm²). However, measurements with ImageJ differed from those obtained by the reference method, underestimating length and surface area in samples collected in CXk and CXve. Both ImageJ and Safira allow identification of increases or decreases in root length and surface area; however, Safira results for root length and surface area are closer to those obtained with the reference method.
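For reference, the modified line-intersect (Tennant) calculation used as the benchmark reduces to a single proportionality between root length and the number of root/grid-line intersections; the sketch below shows that calculation with illustrative numbers (the intersection count and grid size are assumptions, not values from the study).

```python
# Minimal sketch of the modified line-intersect (Tennant) root-length estimate.
# The sample numbers are illustrative only.
def tennant_root_length(intersections, grid_size_mm):
    """Root length (mm) from intersection counts on a square grid (Tennant, 1975)."""
    return (11.0 / 14.0) * intersections * grid_size_mm

print(tennant_root_length(intersections=220, grid_size_mm=10.0))  # ~1728.6 mm
```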

Relevance:

50.00%

Publisher:

Abstract:

Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are now available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of images of very high spatial and/or spectral resolution is addressed with data-driven approaches relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting the spectral and contextual properties of the objects they represent, is studied through the different models presented. The accent is put on algorithmic efficiency and on the simplicity of the proposed approaches, so as to increase their potential for adoption by users. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the proposed models have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model that learns the relevant image features addresses the problem of the high dimensionality and collinearity of the image features: a ranking of the variables (the spectral bands) is optimized jointly with the base classifier, so that only the features relevant to the problem are used, and an accurate classifier is obtained automatically. The scarcity of labeled information, and the uncertainty about its relevance to the problem, are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine, or use the unlabeled pixels to improve the description of the data and the robustness of the model. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs, never considered in remote sensing until now, is addressed in the last model, which, by integrating the similarity between outputs into the model, opens new challenges and opportunities for remote sensing image processing.
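As a schematic illustration of the active-learning component described here (iteratively enlarging the training set with the pixels the current classifier is least certain about), the sketch below runs margin-based uncertainty sampling with an RBF-kernel SVM on synthetic data; it is not the thesis' algorithm, and every dataset and parameter is an assumption.

```python
# Schematic margin-based active learning with an SVM; synthetic data stand in
# for satellite image pixels, and all parameters are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                        # stand-in for pixel spectra
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)         # stand-in for land-cover labels

# Seed the labeled set with a few pixels of each class, as a user might.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                                   # 20 "user interaction" rounds
    clf = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
    margins = np.abs(clf.decision_function(X[pool]))  # small margin = most uncertain
    query = pool.pop(int(np.argmin(margins)))         # ask the user to label this pixel
    labeled.append(query)

print("training-set size:", len(labeled), "accuracy:", clf.score(X, y))
```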

Relevance:

50.00%

Publisher:

Abstract:

Three-dimensional imaging and quantification of myocardial function are essential steps in the evaluation of cardiac disease. We propose a tagged magnetic resonance imaging methodology called zHARP that encodes and automatically tracks myocardial displacement in three dimensions. Unlike other motion encoding techniques, zHARP encodes both in-plane and through-plane motion in a single image plane without affecting the acquisition speed. Postprocessing unravels this encoding in order to directly track the 3-D displacement of every point within the image plane throughout an entire image sequence. Experimental results include a phantom validation experiment, which compares zHARP to phase contrast imaging, and an in vivo study of a normal human volunteer. Results demonstrate that the simultaneous extraction of in-plane and through-plane displacements from tagged images is feasible.
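The harmonic-phase (HARP) extraction step that this family of methods builds on can be sketched on a synthetic tagged image: isolate one spectral peak with a bandpass window and take the phase of the filtered complex image. This is a generic illustration, not the zHARP encoding or tracking itself, and the tag frequency and window width are arbitrary assumptions.

```python
# Generic HARP-style harmonic phase extraction on a synthetic tagged image.
# Tag frequency and bandpass width are illustrative assumptions.
import numpy as np

n, k_tag = 128, 12                                       # image size, tag frequency (cycles/FOV)
xx, yy = np.meshgrid(np.arange(n), np.arange(n))
image = 1.0 + 0.5 * np.cos(2 * np.pi * k_tag * xx / n)   # vertical tag pattern

spectrum = np.fft.fftshift(np.fft.fft2(image))
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
window = np.exp(-((kx - k_tag) ** 2 + ky ** 2) / (2 * 4.0 ** 2))  # Gaussian bandpass at +k_tag

harmonic = np.fft.ifft2(np.fft.ifftshift(spectrum * window))
harp_phase = np.angle(harmonic)        # a material property that can be tracked across frames
print(harp_phase.shape)
```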

Relevance:

50.00%

Publisher:

Abstract:

When dealing with multi-angular image sequences, problems of reflectance changes, due either to illumination and acquisition geometry or to interactions with the atmosphere, naturally arise. These phenomena interplay with the scene and lead to a modification of the measured radiance: for example, depending on the angle of acquisition, tall objects may be seen from the top or from the side, and different light scatterings may affect the surfaces. This results in shifts in the acquired radiance that make the problem of multi-angular classification harder and might lead to catastrophic results, since surfaces with the same reflectance return significantly different signals. In this paper, rather than performing atmospheric or bidirectional reflectance distribution function (BRDF) correction, a non-linear manifold learning approach is used to align the data structures. This method maximizes the similarity between the different acquisitions by deforming their manifolds, thus enhancing the transferability of classification models among the images of the sequence.
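As a much-simplified stand-in for working in a shared non-linear manifold space, the sketch below jointly embeds pixels from two synthetic "acquisitions" with Laplacian eigenmaps; the paper's method goes further and actually deforms each acquisition's manifold to maximize their similarity, so this is only meant to fix ideas, with all data and parameters assumed.

```python
# Joint non-linear embedding of two acquisitions (synthetic data); a simplified
# stand-in for manifold alignment, with all parameters chosen arbitrarily.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(1)
acq1 = rng.normal(size=(300, 5))              # stand-in spectra, acquisition 1
acq2 = 1.3 * acq1 + 0.2                       # same scene, shifted radiance (angle/atmosphere)

X = np.vstack([acq1, acq2])
embedding = SpectralEmbedding(n_components=2, n_neighbors=15).fit_transform(X)
print(embedding.shape)                        # (600, 2): both acquisitions in one space
```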

Relevance:

50.00%

Publisher:

Abstract:

The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high- and low-contrast structures, the visualisation of the images and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue-equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low- and high-contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera to include the visualisation process in the image quality assessment. A mathematical model observer (non-prewhitening matched filter), which calculates the detectability of high- and low-contrast structures from spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that, for a given patient dose, the detection of high- and low-contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large range of tissue compositions, combined with a camera, to compare conventional and digital imaging modalities can be applied to other radiological imaging techniques. In particular, it could be used to optimise the process of radiographic reading of soft-copy images.
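The non-prewhitening matched-filter observer referred to here is usually written in the Fourier form below, where ΔS is the (contrast- and MTF-weighted) expected signal difference and NPS is the noise power spectrum; this is the textbook expression, quoted for context rather than as the paper's exact implementation.

```latex
\[
  {d'}^{2}_{\mathrm{NPW}}
  = \frac{\left[\displaystyle\iint \lvert \Delta S(u,v)\rvert^{2}\,\mathrm{d}u\,\mathrm{d}v\right]^{2}}
         {\displaystyle\iint \lvert \Delta S(u,v)\rvert^{2}\,\mathrm{NPS}(u,v)\,\mathrm{d}u\,\mathrm{d}v}.
\]
```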

Relevance:

50.00%

Publisher:

Abstract:

We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
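Not MBIS itself, but the core model it builds on can be illustrated in a few lines: a multivariate (multichannel) Gaussian mixture fitted to voxel intensities, without the B-spline bias-field correction or the graph-cut MRF regularization described above; the data and class means are synthetic assumptions.

```python
# Bare-bones multichannel Gaussian mixture clustering of voxels (synthetic data);
# no bias-field correction and no MRF regularization, unlike MBIS.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Stand-in for co-registered two-channel intensities (e.g. T1w/T2w), 3 tissue classes.
voxels = np.vstack([rng.normal(loc=m, scale=0.1, size=(500, 2))
                    for m in ([0.2, 0.8], [0.5, 0.5], [0.8, 0.2])])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(voxels)
labels = gmm.predict(voxels)              # hard segmentation
posteriors = gmm.predict_proba(voxels)    # soft tissue memberships
print(np.bincount(labels))
```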

Relevance:

50.00%

Publisher:

Abstract:

In mammography, the image contrast and the dose delivered to the patient are determined by the x-ray spectrum and by the scatter-to-primary ratio S/P. The quality of the mammographic procedure is therefore highly dependent on the choice of anode and filter material and on the method used to reduce the amount of scattered radiation reaching the detector. Synchrotron radiation is a useful tool for studying the effect of beam energy on the optimization of the mammographic process because it delivers a high flux of monochromatic photons. Moreover, because the beam is naturally collimated into a thin, flat sheet in one direction, a slot can be used instead of a grid for scatter reduction. We have measured the ratio S/P and the transmission factors of grids and slots for monoenergetic synchrotron radiation. In this way the effects of beam energy and of the scatter rejection method were separated, and their respective importance for image quality and dose was analyzed. Our results show that conventional mammographic spectra are not far from optimum and that the use of a slot instead of a grid has an important effect on the optimization of the mammographic process. We propose a simple numerical model to quantify this effect.
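For context, the standard description of how scatter degrades subject contrast, and hence why S/P and the scatter-rejection method matter, is the contrast degradation factor below; this is a textbook relation, not the paper's numerical model.

```latex
\[
  C \;=\; \frac{C_{0}}{1 + S/P},
\]
```

Reducing S/P with a slot or grid therefore raises the recorded contrast C toward its scatter-free value C0, at the price of whatever fraction of the primary beam the device absorbs.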

Relevance:

50.00%

Publisher:

Abstract:

During conventional x-ray coronary angiography, multiple projections of the coronary arteries are acquired to define coronary anatomy precisely. Due to time constraints, coronary magnetic resonance angiography (MRA) usually provides only one or two views of the major coronary vessels. A coronary MRA approach that allowed for reconstruction of arbitrary isotropic orientations might therefore be desirable. The purpose of the study was to develop a three-dimensional (3D) coronary MRA technique with isotropic image resolution in a relatively short scanning time that allows for reconstruction of arbitrary views of the coronary arteries without constraints given by anisotropic voxel size. Eight healthy adult subjects were examined using a real-time navigator-gated and corrected free-breathing interleaved echoplanar (TFE-EPI) 3D-MRA sequence. Two 3D datasets were acquired for the left and right coronary systems in each subject, one with anisotropic (1.0 x 1.5 x 3.0 mm, 10 slices) and one with "near" isotropic (1.0 x 1.5 x 1.0 mm, 30 slices) image resolution. All other imaging parameters were maintained. In all cases, the entire left main (LM) and extensive portions of the left anterior descending (LAD) and the right coronary artery (RCA) were visualized. Objective assessment of coronary vessel sharpness was similar (41% +/- 5% vs. 42% +/- 5%; P = NS) between in-plane and through-plane views with "isotropic" voxel size but differed (32% +/- 7% vs. 23% +/- 4%; P < 0.001) with nonisotropic voxel size. In reconstructed views oriented in the through-plane direction, the vessel border was 86% more defined (P < 0.01) for isotropic compared with anisotropic images. A smaller (30%; P < 0.001) improvement was seen for in-plane reconstructions. Vessel diameter measurements were view independent (2.81 +/- 0.45 mm vs. 2.66 +/- 0.52 mm; P = NS) for isotropic, but differed (2.71 +/- 0.51 mm vs. 3.30 +/- 0.38 mm; P < 0.001) between anisotropic views. Average scanning time was 2:31 +/- 0:57 minutes for anisotropic and 7:11 +/- 3:02 minutes for isotropic image resolution (P < 0.001). We present a new approach for "near" isotropic 3D coronary artery imaging, which allows for reconstruction of arbitrary views of the coronary arteries. The good delineation of the coronary arteries in all views suggests that isotropic 3D coronary MRA might be a preferred technique for the assessment of coronary disease, although at the expense of prolonged scan times. Comparative studies with conventional x-ray angiography are needed to investigate the clinical utility of the isotropic strategy.
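A quick arithmetic check of the reported protocols: both acquisitions cover the same 30 mm slab, but the isotropic one needs three times as many 1.0 mm slices, consistent with the roughly threefold longer average scan time quoted.

```python
# Consistency check using only the numbers quoted in the abstract.
aniso_slab_mm = 10 * 3.0          # 10 slices x 3.0 mm
iso_slab_mm   = 30 * 1.0          # 30 slices x 1.0 mm
aniso_time_s  = 2 * 60 + 31       # 2:31 min
iso_time_s    = 7 * 60 + 11       # 7:11 min
print(aniso_slab_mm, iso_slab_mm)              # 30.0 30.0 -> same coverage
print(round(iso_time_s / aniso_time_s, 2))     # ~2.85, close to the 3x slice count
```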

Relevance:

50.00%

Publisher:

Abstract:

In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem: the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of non-flat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. We thus show the advantages of the proposed FastGAF method: it compares favorably against Demons, both in terms of registration speed and quality, and over the range of example applications it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
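The splitting mentioned above can be written schematically as follows (notation assumed here, not taken from the paper): the data term D and the regularity term R are carried by two deformation fields u and v that are forced to agree through an augmented Lagrangian,

```latex
\[
  \mathcal{L}_{\rho}(u, v, \lambda)
  \;=\; D(u) + R(v)
  \;+\; \langle \lambda,\, u - v \rangle
  \;+\; \frac{\rho}{2}\,\lVert u - v \rVert^{2},
\]
```

which is minimized by alternating updates of u and v together with a dual ascent step on the multiplier λ.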