342 results for image reconstruction


Relevance: 20.00%

Abstract:

Three-dimensional imaging for the quantification of myocardial motion is a key step in the evaluation of cardiac disease. A tagged magnetic resonance imaging method that automatically tracks myocardial displacement in three dimensions is presented. Unlike other techniques, this method tracks both in-plane and through-plane motion from a single image plane without affecting the duration of image acquisition. A small z-encoding gradient is added to the refocusing lobe of the slice-selection gradient pulse in a slice-following CSPAMM acquisition. A z-encoding gradient of opposite polarity is added to the orthogonal tag direction. The additional z-gradients encode the instantaneous through-plane position of the slice. The vertical and horizontal tags are used to resolve in-plane motion, while the added z-gradients are used to resolve through-plane motion. Postprocessing automatically decodes the acquired data and tracks the three-dimensional displacement of every material point within the image plane for each cine frame. Experiments include both phantom and in vivo human validation. These studies demonstrate that the simultaneous extraction of both in-plane and through-plane displacements and pathlines from tagged images is achievable. This capability should open new avenues for the automatic quantification of cardiac motion and strain for scientific and clinical purposes.
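The through-plane decoding step can be sketched numerically. This is a hedged illustration under assumed names and values (the encoding strength k_z is not from the abstract): two acquisitions carry z-encoding phases of opposite polarity on top of a shared in-plane tag phase, and the phase of their complex product cancels the tag phase and isolates the through-plane position.

```python
import numpy as np

# Illustrative sketch (k_z and signal model are assumptions, not the paper's):
# the +polarity acquisition carries phase phi_tag + k_z*z, the -polarity one
# carries phi_tag - k_z*z. The angle of I_plus * conj(I_minus) is 2*k_z*z,
# from which the instantaneous through-plane position z is recovered.

k_z = 0.2                        # assumed z-encoding strength, rad/mm
z_true = np.linspace(-5, 5, 11)  # through-plane positions to recover, mm

rng = np.random.default_rng(0)
phi_tag = rng.uniform(-np.pi, np.pi, z_true.shape)  # arbitrary in-plane tag phase

I_plus = np.exp(1j * (phi_tag + k_z * z_true))   # +polarity z-encoded signal
I_minus = np.exp(1j * (phi_tag - k_z * z_true))  # -polarity z-encoded signal

# Phase difference isolates the through-plane encoding: angle = 2*k_z*z
z_est = np.angle(I_plus * np.conj(I_minus)) / (2 * k_z)
```

Note that the encoded phase 2·k_z·z must stay within (-π, π) for the positions of interest, otherwise the recovered z wraps.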


We present a novel approach for analyzing single-trial electroencephalography (EEG) data using topographic information. The method allows event-related potentials to be visualized using all recording electrodes, overcoming the limitation of previous approaches that required electrode selection and waveform filtering. We apply this method to EEG data from an auditory object recognition experiment that we previously analyzed at the ERP level. Temporally structured periods wherein a given topography predominated were identified statistically, without any prior information about the temporal behavior. In addition to providing novel methods for EEG analysis, the data indicate that ERPs are reliably observable at the single-trial level when examined topographically.


The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. 
However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
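The two quantities the study varies and reports reduce to simple arithmetic, checked here against the phantom values quoted above:

```python
# LVEF and EM-equivalent iterations as used in the study (values from the text).

def lvef(edv_ml, esv_ml):
    # Left ventricular ejection fraction from end-diastolic and
    # end-systolic volumes: (EDV - ESV) / EDV
    return (edv_ml - esv_ml) / edv_ml

def em_equivalent_iterations(iterations, subsets):
    # OSEM convergence is indexed by the I x S product
    return iterations * subsets

phantom_lvef = lvef(112, 37)            # true phantom value: 75/112 ≈ 0.67
best = em_equivalent_iterations(10, 8)  # optimal setting found: 10 I x 8 S = 80
```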


PURPOSE: To evaluate the cause of recurrent pathologic instability after anterior cruciate ligament (ACL) surgery and the effectiveness of revision reconstruction using a quadriceps tendon autograft with a 2-incision technique. TYPE OF STUDY: Retrospective follow-up study. METHODS: Between 1999 and 2001, 31 patients underwent ACL revision reconstruction because of recurrent pathologic instability during sports or daily activities. Twenty-eight patients were reviewed after a mean follow-up of 4.2 years (range, 3.3 to 5.6 years). The mean age at revision surgery was 27 years (range, 18 to 41 years). The average time from the primary procedure to revision surgery was 26 months (range, 9 to 45 months). A clinical, functional, and radiographic evaluation was performed; magnetic resonance imaging (MRI) or computed tomography (CT) scanning was also performed. The International Knee Documentation Committee (IKDC), Lysholm, and Tegner scales were used, and KT-1000 arthrometer measurements (MEDmetric, San Diego, CA) were made by an experienced physician. RESULTS: Of the failures, 79% had radiographic evidence of malposition of their tunnels. In only 6 cases (21%) was the radiologic anatomy of tunnel placement judged to be correct on both the femoral and tibial sides. MRI or CT showed a too-centrally placed femoral tunnel in 6 cases. After revision surgery, the position of the tunnels was corrected. A significant improvement in the Lachman and pivot-shift phenomena was observed. In particular, 17 patients had a negative Lachman test, and 11 patients had a grade I Lachman with a firm end point. Preoperatively, the pivot-shift test was positive in all cases; at last follow-up, a grade 1+ was found in 7 patients (25%). Postoperatively, KT-1000 testing showed a mean manual maximum translation of 8.6 mm (SD, 2.34) for the affected knee; 97% of patients had a maximum manual side-to-side translation <5 mm.
At the final postoperative evaluation, 26 patients (93%) graded their knees as normal or nearly normal according to the IKDC score. The mean Lysholm score was 93.6 (SD, 8.77) and the mean Tegner activity score was 6.1 (SD, 1.37). No patient required further revision. Five patients (18%) complained of hypersensitive scars from the reconstructive surgery that made kneeling difficult. CONCLUSIONS: There were satisfactory results after ACL revision surgery using quadriceps tendon and a 2-incision technique at a minimum 3 years' follow-up; 93% of patients returned to sports activities. LEVEL OF EVIDENCE: Level IV, case series, no control group.


A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
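A minimal sketch of one common cluster-kernel construction is the "bagged" kernel built from repeated k-means runs over unlabeled samples. The paper's exact construction may differ, and all names, parameters, and data here are illustrative: each entry of the bagged kernel is the fraction of runs in which two samples fall in the same cluster, and multiplying it with a base RBF kernel reinforces similarity inside clusters discovered directly from the unlabeled data.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def rbf_kernel(X, gamma=1.0):
    # Base kernel: predefined RBF similarity between all sample pairs
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def bagged_cluster_kernel(X, k=2, runs=20):
    # K_bag[i, j] = fraction of k-means runs where i and j share a cluster
    K = np.zeros((len(X), len(X)))
    for _ in range(runs):
        _, labels = kmeans2(X, k, minit="points")
        K += labels[:, None] == labels[None, :]
    return K / runs

np.random.seed(0)
# Two well-separated unlabeled groups of 10 samples each
X = np.vstack([np.random.randn(10, 2), np.random.randn(10, 2) + 5.0])
K = rbf_kernel(X) * bagged_cluster_kernel(X, k=2)
```

The resulting matrix K is a valid kernel (symmetric, unit diagonal) and can be passed to any kernel classifier; similarity across the two clusters is suppressed relative to similarity within them.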


Tractography algorithms provide us with the ability to non-invasively reconstruct fiber pathways in the white matter (WM) by exploiting the directional information described with diffusion magnetic resonance. These methods can be divided into two major classes, local and global. Local methods reconstruct each fiber tract iteratively by considering only directional information at the voxel level and in its neighborhood. Global methods, on the other hand, reconstruct all the fiber tracts of the whole brain simultaneously by solving a global energy minimization problem. The latter have shown improvements over previous techniques, but these algorithms still suffer from an important shortcoming that is crucial in the context of brain connectivity analyses. As no anatomical priors are usually considered during the reconstruction process, the recovered fiber tracts are not guaranteed to connect cortical regions and, as a matter of fact, most of them stop prematurely in the WM; this violates important properties of neural connections, which are known to originate in the gray matter (GM) and develop in the WM. Hence, this shortcoming poses serious limitations for the use of these techniques in the assessment of the structural connectivity between brain regions and, de facto, can potentially bias any subsequent analysis. Moreover, the estimated tracts are not quantitative: every fiber contributes the same weight toward the predicted diffusion signal. In this work, we propose a novel approach for global tractography that is specifically designed for connectivity analysis applications and that (i) explicitly enforces anatomical priors on the tracts in the optimization and (ii) considers the effective contribution, i.e., the volume, of each tract to the acquired diffusion magnetic resonance imaging (MRI) image. We evaluated our approach on both a realistic diffusion MRI phantom and in vivo data, and also compared its performance to existing tractography algorithms.
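The idea of weighting each candidate tract by its effective contribution to the measured signal can be sketched as a non-negative least-squares fit. This is a deliberately simplified illustration with synthetic data (the paper's forward model is richer): each column of A holds the diffusion signal predicted by one candidate tract, y is the acquired signal, and the fitted non-negative weights play the role of per-tract volumes, so tracts that do not explain the data receive weight near zero.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_meas, n_tracts = 30, 5

# Predicted signal of each candidate tract (columns), synthetic for this sketch
A = rng.random((n_meas, n_tracts))

# Ground truth: only 3 of the 5 candidate tracts actually contribute volume
w_true = np.array([0.8, 0.0, 1.5, 0.0, 0.3])
y = A @ w_true  # noiseless "acquired" diffusion signal

# Non-negative least squares recovers the per-tract contributions
w_est, residual = nnls(A, y)
```

In the noiseless, well-conditioned case the fit recovers the true weights exactly, and the zero-weight tracts are identified as non-contributing.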


Aim: When planning SIRT using 90Y microspheres, the partition model is used to refine the activity calculated by the body surface area (BSA) method to potentially improve the safety and efficacy of treatment. For this partition model dosimetry, accurate determination of the mean tumor-to-normal liver ratio (TNR) is critical, since it directly impacts absorbed dose estimates. This work aimed at developing and assessing a reliable methodology for the calculation of 99mTc-MAA SPECT/CT-derived TNR ratios based on phantom studies. Materials and methods: IQ NEMA (6 hot spheres) and Kyoto liver phantoms with different hot/background activity concentration ratios were imaged on a SPECT/CT (GE Infinia Hawkeye 4). For each reconstruction with the IQ phantom, TNR quantification was assessed in terms of relative recovery coefficients (RC), and image noise was evaluated in terms of the coefficient of variation (COV) in the filled background. RCs were compared using OSEM with Hann, Butterworth, and Gaussian filters, as well as FBP reconstruction algorithms. Regarding OSEM, RCs were assessed by varying different parameters independently, such as the number of iterations (i) and subsets (s) and the cut-off frequency of the filter (fc). The influence of the attenuation and diffusion corrections was also investigated. Furthermore, 2D-ROI and 3D-VOI contouring were compared. For this purpose, dedicated MATLAB routines were developed in-house for automatic 2D-ROI/3D-VOI determination to reduce intra-user and intra-slice variability. The best reconstruction parameters and RCs obtained with the IQ phantom were used to recover the corrected TNR for the Kyoto phantom for arbitrary hot-lesion sizes. In addition, we computed TNR volume histograms to better assess uptake heterogeneity. Results: The highest RCs were obtained with OSEM (i=2, s=10) coupled with the Butterworth filter (fc=0.8).
Indeed, we observed a global 20% RC improvement over other OSEM settings and a 50% increase compared to the best FBP reconstruction. In any case, both attenuation and diffusion corrections must be applied, thus improving RC while preserving good image noise (COV<10%). Both 2D-ROI and 3D-VOI analyses led to similar results. Nevertheless, we recommend using 3D-VOIs, since tumor uptake regions are intrinsically 3D. RC-corrected TNR values lie within 17% of the true value, substantially improving the evaluation of small-volume (<15 mL) regions. Conclusions: This study reports the multi-parameter optimization of 99mTc-MAA SPECT/CT image reconstruction for planning 90Y dosimetry in SIRT. In phantoms, accurate quantification of TNR was obtained using OSEM coupled with the Butterworth filter and RC correction.
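The recovery-coefficient correction described above amounts to simple arithmetic; the numbers below are illustrative, not the study's. Partial-volume effects make small hot regions appear colder than they are, so the measured TNR is divided by the RC obtained on a phantom sphere of comparable size.

```python
# Recovery-coefficient correction of a measured TNR (illustrative values only;
# real RC values depend on sphere size and reconstruction parameters).

def recovery_coefficient(measured_ratio, true_ratio):
    # RC measured on a phantom sphere: fraction of the true contrast recovered
    return measured_ratio / true_ratio

def corrected_tnr(measured_tnr, rc):
    # Divide the measured TNR by the matching RC to undo partial-volume loss
    return measured_tnr / rc

rc = recovery_coefficient(measured_ratio=4.2, true_ratio=6.0)  # RC = 0.7 (assumed)
tnr = corrected_tnr(measured_tnr=3.5, rc=rc)                   # 3.5 / 0.7 = 5.0
```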


BACKGROUND: Outcome following foot and ankle surgery can be assessed by disease- and region-specific scores. Many scoring systems exist, making comparison among studies difficult. The present study focused on outcome measures for a common foot and ankle abnormality and compared the results obtained by 2 disease-specific and 2 body region-specific scores. METHODS: We reviewed 41 patients who underwent lateral ankle ligament reconstruction. Four outcome scales were administered simultaneously: the Cumberland Ankle Instability Tool (CAIT) and the Chronic Ankle Instability Scale (CAIS), which are disease-specific, and the American Orthopaedic Foot & Ankle Society (AOFAS) hindfoot scale and the Foot and Ankle Ability Measure (FAAM), which are both body region-specific. The degree of correlation between scores was assessed by Pearson's correlation coefficient. Nonparametric tests, the Kruskal-Wallis test and the Mann-Whitney test for pairwise comparison of the scores, were performed. RESULTS: A significant difference (P < .005) was observed between the CAIS and the AOFAS score (P = .0002), between the CAIS and the FAAM 1 (P = .0001), and between the CAIT and the AOFAS score (P = .0003). CONCLUSIONS: This study compared the performances of 4 disease- and body region-specific scoring systems. We demonstrated a correlation between the 4 administered scoring systems and notable differences between the results given by each of them. Disease-specific scores appeared more accurate than body region-specific scores. A strong correlation between the AOFAS score and the other scales was observed. The FAAM seemed a good compromise because it offered the possibility of evaluating patients according to their own functional demand. CLINICAL RELEVANCE: The present study contributes to the development of more critical and accurate outcome assessment methods in foot and ankle surgery.
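The statistical pipeline described in the Methods can be sketched with synthetic scores. The data below are illustrative, not the study's; only the choice of tests (Pearson correlation, Kruskal-Wallis, Mann-Whitney) comes from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic scores for 41 patients on the four scales (illustrative only)
cait = rng.normal(70, 10, 41)
cais = cait + rng.normal(0, 5, 41)   # correlated with CAIT by construction
aofas = rng.normal(85, 8, 41)
faam = rng.normal(80, 12, 41)

# Pearson correlation between two scales
r, p_r = stats.pearsonr(cait, cais)

# Kruskal-Wallis test across all four scales
h, p_kw = stats.kruskal(cait, cais, aofas, faam)

# Mann-Whitney U test for one pairwise comparison
u, p_mw = stats.mannwhitneyu(cais, aofas)
```

With multiple pairwise comparisons, the significance threshold would in practice be adjusted (the study's P < .005 criterion suggests such a correction).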


PURPOSE: To determine the lower limit of dose reduction with hybrid and fully iterative reconstruction algorithms in detection of endoleaks and in-stent thrombus of thoracic aorta with computed tomographic (CT) angiography by applying protocols with different tube energies and automated tube current modulation. MATERIALS AND METHODS: The calcification insert of an anthropomorphic cardiac phantom was replaced with an aortic aneurysm model containing a stent, simulated endoleaks, and an intraluminal thrombus. CT was performed at tube energies of 120, 100, and 80 kVp with incrementally increasing noise indexes (NIs) of 16, 25, 34, 43, 52, 61, and 70 and a 2.5-mm section thickness. NI directly controls radiation exposure; a higher NI allows for greater image noise and decreases radiation. Images were reconstructed with filtered back projection (FBP) and hybrid and fully iterative algorithms. Five radiologists independently analyzed lesion conspicuity to assess sensitivity and specificity. Mean attenuation (in Hounsfield units) and standard deviation were measured in the aorta to calculate signal-to-noise ratio (SNR). Attenuation and SNR of different protocols and algorithms were analyzed with analysis of variance or Welch test depending on data distribution. RESULTS: Both sensitivity and specificity were 100% for simulated lesions on images with 2.5-mm section thickness and an NI of 25 (3.45 mGy), 34 (1.83 mGy), or 43 (1.16 mGy) at 120 kVp; an NI of 34 (1.98 mGy), 43 (1.23 mGy), or 61 (0.61 mGy) at 100 kVp; and an NI of 43 (1.46 mGy) or 70 (0.54 mGy) at 80 kVp. SNR values showed similar results. With the fully iterative algorithm, mean attenuation of the aorta decreased significantly in reduced-dose protocols in comparison with control protocols at 100 kVp (311 HU at 16 NI vs 290 HU at 70 NI, P ≤ .0011) and 80 kVp (400 HU at 16 NI vs 369 HU at 70 NI, P ≤ .0007). 
CONCLUSION: Endoleaks and in-stent thrombus of thoracic aorta were detectable to 1.46 mGy (80 kVp) with FBP, 1.23 mGy (100 kVp) with the hybrid algorithm, and 0.54 mGy (80 kVp) with the fully iterative algorithm.
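The SNR measure used above, mean aortic attenuation (HU) divided by its standard deviation (image noise), can be sketched as follows. The ROI values are simulated, not the study's; only the means and the general noise behavior are taken from the text.

```python
import numpy as np

def snr(roi_hu):
    # Signal-to-noise ratio: mean attenuation in the ROI over its standard
    # deviation, as described in the Methods
    roi_hu = np.asarray(roi_hu, dtype=float)
    return roi_hu.mean() / roi_hu.std()

rng = np.random.default_rng(0)
# Simulated aortic ROIs: a low-NI (low-noise) and a high-NI (high-noise)
# protocol at 100 kVp; means match the reported 311 HU and 290 HU,
# noise levels are assumptions for illustration
low_noise = rng.normal(311, 15, 1000)
high_noise = rng.normal(290, 60, 1000)
```

Raising the noise index lowers the dose but raises the standard deviation, so SNR falls even though the mean attenuation changes only modestly.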


Thesis abstract: The doctoral thesis "VOIR LE MONDE COMME UNE IMAGE. Le schème de l'image mimétique dans la philosophie de Platon (Cratyle, Sophiste, Timée)" ("Seeing the World as an Image: The Scheme of the Mimetic Image in Plato's Philosophy") by Alexandre NEVSKY studies Plato's philosophical conception of the image. Starting from the question "What is the image for Plato?", the study first analyzes precisely how the idea of the image functions in the logical articulation of Plato's inquiry, relying above all on three major dialogues in which this idea is explicitly thematized by Plato himself, namely the Cratylus, the Sophist, and the Timaeus. Through a detailed analysis of these texts, Alexandre Nevsky seeks to show that the idea of the image functions as a heuristic scheme whose internal logic determines the key moments in the course of each dialogue examined, and thus constitutes a genuine method of philosophical investigation for Plato. Following this Platonic strategy, the author shows what role the scheme of the image plays, according to Plato, first in the constitution of language (the Cratylus), then in that of discourse (the Sophist), and finally in that of the world (the Timaeus). This approach allows him to revisit the traditional interpretation of certain key passages, famous for their difficulties, by highlighting how the new Platonic perspective introduced by the scheme of the image makes it possible to formulate an original philosophical solution to the initial problem.
Among the examples rediscussed are the curious theory of phonetic imitation and the problem of the proper correctness of the name-image; the philosophical definition of the notion of image and the Platonic distinction between the likeness-image and the appearance-image; the paradoxical logic of introducing a third kind into the ontological structure of being; and the question of the exact meaning to be given to Plato's "likely discourse" on the birth of the universe. In a second step, the study attempts to uncover, behind the heuristic method based on the scheme of the image, a genuine conception of the mimetic image in Plato. One of the main ideas of the thesis is to show that this conception presents Plato's philosophical solution to the problem of archaic appearance. For to the sophistic question of how a thing, whether discourse or the world, can be an appearance, something that does not exist, Plato gives a wholly original answer: it can, insofar as it is an image. The image is not a mere illusory appearance; it is the reflection of another reality, independent and true, which must be assumed, according to Plato, even when its exact nature still escapes us. Plato's conception of the image thus appears as an indispensable counterpart of the theory of intelligible Forms, and also as its preliminary stage, at which the philosopher's soul, in its ascent toward truth, finds itself in an intermediate space, already beyond the illusions of the phenomenal world but still short of the metaphysical commitments of the theory of Forms.