Abstract:
Three-dimensional imaging for the quantification of myocardial motion is a key step in the evaluation of cardiac disease. A tagged magnetic resonance imaging method that automatically tracks myocardial displacement in three dimensions is presented. Unlike other techniques, this method tracks both in-plane and through-plane motion from a single image plane without lengthening the image acquisition. A small z-encoding gradient is added to the refocusing lobe of the slice-selection gradient pulse in a slice-following CSPAMM acquisition, and a z-encoding gradient of opposite polarity is added in the orthogonal tag direction. These additional z-gradients encode the instantaneous through-plane position of the slice. The vertical and horizontal tags are used to resolve in-plane motion, while the added z-gradients are used to resolve through-plane motion. Postprocessing automatically decodes the acquired data and tracks the three-dimensional displacement of every material point within the image plane for each cine frame. Experiments include both phantom and in vivo human validation. These studies demonstrate that the simultaneous extraction of both in-plane and through-plane displacements and pathlines from tagged images is achievable. This capability should open new avenues for the automatic quantification of cardiac motion and strain for scientific and clinical purposes.
Abstract:
We present a novel approach for analyzing single-trial electroencephalography (EEG) data using topographic information. The method allows event-related potentials to be visualized using all recording electrodes, overcoming the limitation of previous approaches that required electrode selection and waveform filtering. We apply this method to EEG data from an auditory object recognition experiment that we have previously analyzed at the ERP level. Temporally structured periods during which a given topography predominated were identified statistically, without any prior information about the temporal behavior. In addition to providing novel methods for EEG analysis, the data indicate that ERPs are reliably observable at the single-trial level when examined topographically.
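Two quantities commonly used in this kind of topographic EEG analysis are the global field power (GFP, the spatial standard deviation across electrodes) and the global map dissimilarity between GFP-normalized topographies. The following is a rough NumPy sketch of these standard measures, not the authors' implementation; the array layout (one value per electrode) is an assumption:

```python
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation across electrodes."""
    return np.sqrt(((v - v.mean()) ** 2).mean())

def dissimilarity(u, v):
    """Global map dissimilarity between two topographies.

    Each map is re-referenced to its mean and scaled to unit GFP,
    so the measure is sensitive to topography, not amplitude:
    0 means identical maps, 2 means maps of inverted polarity.
    """
    un = (u - u.mean()) / gfp(u)
    vn = (v - v.mean()) / gfp(v)
    return np.sqrt(((un - vn) ** 2).mean())
```

Because maps are normalized before comparison, a topography and any positively scaled copy of it have zero dissimilarity, which is what lets maps be compared across trials with widely varying amplitudes.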
Abstract:
Standard data fusion methods may not be satisfactory for merging a high-resolution panchromatic image and a low-resolution multispectral image because they can distort the spectral characteristics of the multispectral data. The authors developed a technique, based on multiresolution wavelet decomposition, for the merging and fusion of such images. The method consists of adding the wavelet coefficients of the high-resolution image to the multispectral (low-resolution) data. Having studied several possibilities, they conclude that the method producing the best results consists of adding the high-order coefficients of the wavelet transform of the panchromatic image to the intensity component (defined as L = (R+G+B)/3) of the multispectral image. The method is thus an improvement on standard intensity-hue-saturation (IHS or LHS) mergers. They used the "à trous" algorithm, which allows a dyadic wavelet to be used to merge nondyadic data in a simple and efficient scheme. They applied the method to merge SPOT and Landsat TM images. The technique presented is clearly better than the IHS and LHS mergers at preserving both spectral and spatial information.
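The additive scheme can be illustrated with a short sketch. The code below is a simplified, hypothetical illustration in NumPy, not the paper's implementation: it uses a B3-spline-like smoothing kernel but, unlike the true à trous transform, does not insert holes into the kernel between levels, and the band-wise back-substitution relies on the LHS model L = (R+G+B)/3 (shifting all three bands equally shifts L by the same amount):

```python
import numpy as np

def smooth(img):
    """Separable B3-spline-like smoothing (a stand-in for the a trous kernel)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img.astype(float))
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def atrous_planes(img, levels=2):
    """Decompose img into detail (wavelet) planes plus a smooth residual.

    By construction, sum(planes) + residual reconstructs img exactly.
    """
    planes, current = [], img.astype(float)
    for _ in range(levels):
        low = smooth(current)
        planes.append(current - low)   # detail plane at this scale
        current = low                  # residual carried to the next level
    return planes, current

def additive_fusion(r, g, b, pan, levels=2):
    """Add the panchromatic detail planes to the intensity L=(R+G+B)/3.

    Shifting each band by the same detail image raises L by that detail,
    which is the LHS-style back-substitution step.
    """
    planes, _ = atrous_planes(pan, levels)
    detail = sum(planes)               # high-frequency content of the pan image
    return r + detail, g + detail, b + detail
```

The exact-reconstruction property of the decomposition (details plus residual sum back to the original) is what guarantees that only the panchromatic high frequencies, and nothing else, are injected into the multispectral bands.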
Abstract:
When preparing an article on image restoration in astronomy, it is obvious that some topics have to be dropped to keep the work at a reasonable length. We have decided to concentrate on image and noise models and on the algorithms used to find the restoration. We start by describing the Bayesian paradigm and then proceed to study the noise and blur models used by the astronomical community. The prior models used to restore astronomical images are then examined, and we describe the algorithms used to find the restoration for the most common combinations of degradation and image models. We then comment on important issues such as acceleration of algorithms, stopping rules, and parameter estimation, as well as on the huge amount of information available to, and made available by, the astronomical community.
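For one of the most common combinations in astronomy, Poisson noise with a known shift-invariant blur, the Bayesian/EM treatment leads to the classic Richardson-Lucy iteration. A minimal NumPy sketch (circular boundary conditions via the FFT; an illustrative example, not code from the article):

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=20, eps=1e-12):
    """Richardson-Lucy deconvolution: the EM estimate under a Poisson
    noise model with a known, shift-invariant point spread function.

    The PSF is assumed centered at the origin in the periodic (FFT)
    convention and normalized to unit sum, so flux is conserved.
    """
    otf = np.fft.fft2(psf)                       # optical transfer function
    estimate = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        # forward model: blur the current estimate
        predicted = np.real(np.fft.ifft2(np.fft.fft2(estimate) * otf))
        ratio = observed / (predicted + eps)
        # adjoint (correlation) step uses the conjugate OTF
        estimate *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return estimate
```

The multiplicative update keeps the estimate nonnegative, which is one reason this iteration is popular for photon-counting astronomical data; its slow convergence is also why acceleration schemes and stopping rules, discussed above, matter in practice.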
Abstract:
Usual image fusion methods inject features from a high-spatial-resolution panchromatic sensor into every low-spatial-resolution multispectral band, trying to preserve spectral signatures while improving the spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. In these methods, however, features from regions of the electromagnetic spectrum not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during the process. This produces undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique that takes the physical electromagnetic spectrum responses of the sensors into account during the fusion process and thus produces images closer to that of the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT) and the QGS program, using a beating-heart phantom, and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL, yielding an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. Plotting the measures against the expectation maximization-equivalent iterations (the I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results of similar accuracy, without clinically relevant differences in the LV volumes or the estimated LVEF.
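The iterations × subsets trade-off can be made concrete with a toy 1-D OSEM reconstruction. This is a generic sketch under assumed names and a random system matrix, not the scanner's or QGS's implementation: each sub-iteration applies the multiplicative EM update using only an interleaved subset of the measurements, so I iterations over S subsets cost roughly I × S EM-equivalent iterations:

```python
import numpy as np

def osem(measurements, system, iterations=10, subsets=8, eps=1e-12):
    """Toy 1-D OSEM. 'system' is an (n_meas, n_vox) nonnegative matrix.

    One full pass over all subsets is worth about 'subsets'
    EM-equivalent iterations, which is why reconstruction accuracy
    tracks the I x S product rather than I alone.
    """
    x = np.ones(system.shape[1])                       # flat initial estimate
    for _ in range(iterations):
        for s in range(subsets):
            rows = np.arange(s, system.shape[0], subsets)   # interleaved subset
            A, y = system[rows], measurements[rows]
            forward = A @ x                                  # project estimate
            # multiplicative EM update restricted to this subset,
            # normalized by the subset's sensitivity (column sums)
            x *= (A.T @ (y / (forward + eps))) / (A.T @ np.ones(len(rows)) + eps)
    return x
```

The update is multiplicative, so the estimate stays nonnegative throughout; stopping at an intermediate I × S value, as the study above recommends, plays the role of regularization.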
Abstract:
PURPOSE: To evaluate the causes of recurrent pathologic instability after anterior cruciate ligament (ACL) surgery and the effectiveness of revision reconstruction with a quadriceps tendon autograft using a 2-incision technique. TYPE OF STUDY: Retrospective follow-up study. METHODS: Between 1999 and 2001, 31 patients underwent ACL revision reconstruction because of recurrent pathologic instability during sports or daily activities. Twenty-eight patients were reviewed after a mean follow-up of 4.2 years (range, 3.3 to 5.6 years). The mean age at revision surgery was 27 years (range, 18 to 41 years). The average time from the primary procedure to revision surgery was 26 months (range, 9 to 45 months). Clinical, functional, and radiographic evaluations were performed, along with magnetic resonance imaging (MRI) or computed tomography (CT) scanning. The International Knee Documentation Committee (IKDC), Lysholm, and Tegner scales were used, and KT-1000 arthrometer measurements (MEDmetric, San Diego, CA) were made by an experienced physician. RESULTS: Of the failures, 79% had radiographic evidence of tunnel malposition. In only 6 cases (21%) was the radiologic anatomy of tunnel placement judged to be correct on both the femoral and tibial sides. MRI or CT showed a too-centrally placed femoral tunnel in 6 cases. After revision surgery, the position of the tunnels was corrected. A significant improvement in the Lachman and pivot-shift findings was observed: 17 patients had a negative Lachman test, and 11 patients had a grade I Lachman with a firm end point. Preoperatively, the pivot-shift test was positive in all cases; at the last follow-up, a grade 1+ was found in 7 patients (25%). Postoperatively, KT-1000 testing showed a mean manual maximum translation of 8.6 mm (SD, 2.34) for the affected knee; 97% of patients had a maximum manual side-to-side translation of <5 mm. At the final postoperative evaluation, 26 patients (93%) graded their knees as normal or nearly normal according to the IKDC score. The mean Lysholm score was 93.6 (SD, 8.77), and the mean Tegner activity score was 6.1 (SD, 1.37). No patient required further revision. Five patients (18%) complained of hypersensitive scars from the reconstructive surgery that made kneeling difficult. CONCLUSIONS: Results were satisfactory after ACL revision surgery using a quadriceps tendon autograft and a 2-incision technique at a minimum 3 years' follow-up; 93% of patients returned to sports activities. LEVEL OF EVIDENCE: Level IV, case series, no control group.
Abstract:
The figurative painter accesses very complex levels of knowledge. Producing a painting requires, first, a deep analysis of the image of reality and, afterwards, the study of the reconstruction of that reality. This is not a process of copying but one of comprehending the concepts that appear in the representation. The drawing guides us in producing the surface and in distributing the colours that, in the end, are the data with which the visual mechanism builds visual reality. Knowing colour and its behaviour has always been a requirement for the figurative painter, and from that knowledge we can draw wider conclusions.
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples to regularize the training kernel representation locally by means of cluster kernels. It learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations through a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available, and the method scales almost linearly with the number of unlabeled samples while providing out-of-sample predictions.
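The cluster-kernel idea can be sketched as follows. This is a generic "bagged cluster kernel" in the spirit of such methods, not the paper's exact construction: the simple k-means routine, the number of runs, and the multiplicative combination with an RBF base kernel are all illustrative assumptions.

```python
import numpy as np

def kmeans_labels(X, k, iters=20, rng=None):
    """Plain k-means; returns a cluster label per sample."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def cluster_kernel(X, k=2, runs=10, gamma=1.0, seed=0):
    """Bagged cluster kernel over labeled and unlabeled samples.

    K_bag[i, j] is the fraction of k-means runs in which samples i and j
    fall in the same cluster; multiplying it with an RBF base kernel
    boosts similarity within clusters and suppresses it across them.
    """
    n = len(X)
    K_bag = np.zeros((n, n))
    for r in range(runs):
        labels = kmeans_labels(X, k, rng=seed + r)
        K_bag += (labels[:, None] == labels[None, :])
    K_bag /= runs
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) * K_bag
```

Because the unlabeled samples shape K_bag, the resulting kernel adapts to the image's own cluster structure before any labeled training takes place, which is the sense in which the kernel is "learned directly from the image."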
Abstract:
Tractography algorithms provide the ability to non-invasively reconstruct fiber pathways in the white matter (WM) by exploiting the directional information described by diffusion magnetic resonance imaging. These methods can be divided into two major classes: local and global. Local methods reconstruct each fiber tract iteratively by considering only directional information at the voxel level and in its neighborhood. Global methods, on the other hand, reconstruct all the fiber tracts of the whole brain simultaneously by solving a global energy minimization problem. The latter have shown improvements over previous techniques, but these algorithms still suffer from an important shortcoming that is crucial in the context of brain connectivity analyses. Because no anatomical priors are usually considered during the reconstruction process, the recovered fiber tracts are not guaranteed to connect cortical regions and, in fact, most of them stop prematurely in the WM; this violates important properties of neural connections, which are known to originate in the gray matter (GM) and develop in the WM. Hence, this shortcoming poses serious limitations on the use of these techniques for assessing the structural connectivity between brain regions and, de facto, can potentially bias any subsequent analysis. Moreover, the estimated tracts are not quantitative: every fiber contributes with the same weight toward the predicted diffusion signal. In this work, we propose a novel approach to global tractography specifically designed for connectivity analysis applications, which (i) explicitly enforces anatomical priors on the tracts during the optimization and (ii) considers the effective contribution of each tract, i.e., its volume, to the acquired diffusion magnetic resonance imaging (MRI) data. We evaluated our approach on both a realistic diffusion MRI phantom and in vivo data, and compared its performance to that of existing tractography algorithms.
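For contrast with the global formulation proposed above, the "local" class of methods can be sketched in a few lines as deterministic streamline tracking. This toy follows an assumed 2-D principal-direction field with Euler steps and is not any specific package's tracker; real trackers add curvature thresholds, anisotropy masks, and interpolation:

```python
import numpy as np

def track(vector_field, seed, step=0.5, n_steps=100):
    """Minimal deterministic streamline tracker.

    vector_field has shape (nx, ny, 2): one principal diffusion
    direction per voxel. The streamline is grown step by step from
    'seed' using only the local direction at the current voxel,
    illustrating why local methods can stop prematurely wherever
    the directional information vanishes.
    """
    pos = np.array(seed, float)
    path = [pos.copy()]
    for _ in range(n_steps):
        i, j = np.clip(pos.astype(int), 0, np.array(vector_field.shape[:2]) - 1)
        d = vector_field[i, j]
        if np.linalg.norm(d) < 1e-6:
            break                      # no direction defined: tract ends here
        pos += step * d / np.linalg.norm(d)
        path.append(pos.copy())
    return np.array(path)
```

The early-termination branch is exactly the failure mode criticized above: nothing forces the streamline to reach gray matter, so a tract simply ends wherever the local directional evidence runs out.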