885 results for CONTRAST SENSITIVITY
Abstract:
Computed tomography (CT) is one of the most valuable modalities for in vivo imaging because it is fast, high-resolution, cost-effective, and non-invasive. Moreover, CT is heavily used not only in the clinic (for both diagnostics and treatment planning) but also in preclinical research as micro-CT. Although CT is inherently effective for lung and bone imaging, soft tissue imaging requires the use of contrast agents. For small animal micro-CT, nanoparticle contrast agents are used in order to avoid rapid renal clearance. A variety of nanoparticles have been used for micro-CT imaging, but the majority of research has focused on the use of iodine-containing nanoparticles and gold nanoparticles. Both nanoparticle types can act as highly effective blood pool contrast agents or can be targeted using a wide variety of targeting mechanisms. CT imaging can be further enhanced by adding spectral capabilities to separate multiple co-injected nanoparticles in vivo. Spectral CT, using both energy-integrating and energy-resolving detectors, has been used with multiple contrast agents to enable functional and molecular imaging. This review focuses on new developments for in vivo small animal micro-CT using novel nanoparticle probes applied in preclinical research.
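In the simplest dual-energy case, the separation of two co-injected contrast agents described above reduces to solving a small linear system per voxel. A minimal sketch in Python (the attenuation matrix below is purely illustrative, not measured coefficients for gold or iodine):

```python
import numpy as np

def material_decompose(mu_low, mu_high, A):
    """Two-material decomposition: given attenuation images acquired at a
    low and a high energy, and a 2x2 matrix A whose columns hold the
    per-unit-concentration attenuation of each material at the two
    energies, solve A @ c = mu for the two concentration maps."""
    mu = np.stack([np.ravel(mu_low), np.ravel(mu_high)])
    c = np.linalg.solve(A, mu)
    shape = np.shape(mu_low)
    return c[0].reshape(shape), c[1].reshape(shape)
```

In practice the attenuation matrix is calibrated from phantom scans of known concentrations; noise makes the per-voxel solve ill-conditioned when the two energy spectra are close, which is why well-separated K-edges (e.g. gold vs. iodine) are favored.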
Abstract:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. CONCLUSIONS: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
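The low-rank-plus-sparse separation underpinning this reconstruction can be illustrated, outside the CT setting, with a generic robust-PCA sketch. This uses an inexact augmented-Lagrangian iteration rather than the authors' split Bregman solver, and every parameter choice below is illustrative:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, n_iter=300, tol=1e-7):
    """Split D into a low-rank part L and a sparse part S via an
    inexact augmented-Lagrangian scheme (illustrative sketch)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    norm_D = np.linalg.norm(D)
    norm_2 = np.linalg.norm(D, 2)
    Y = D / max(norm_2, np.abs(D).max() / lam)  # dual variable
    mu = 1.25 / norm_2
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(Z) / norm_D < tol:
            break
    return L, S
```

In the 5D CT setting the analogue of L is the well-sampled time- and energy-averaged image, while the analogues of S are the spatially sparse temporal and spectral contrast components.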
Abstract:
Background. In clinical practice and in clinical trials, echocardiography and scintigraphy are the modalities most often used for the evaluation of global left ventricular ejection fraction (LVEF) and left ventricular (LV) volumes. Currently, poor-quality imaging and geometrical assumptions are the main limitations of LVEF measured by echocardiography. Contrast agents and 3D echocardiography are new methods that may alleviate these limitations. Methods. We therefore sought to examine the accuracy of contrast real-time 3D echocardiography (RT3DE) for the evaluation of LV volumes and LVEF relative to MIBI gated SPECT as an independent reference. In 43 patients referred for chest pain, contrast RT3DE and MIBI gated SPECT were prospectively performed on the same day. The accuracy and variability of LV volume and LVEF measurements were evaluated. Results. Owing to good endocardial delineation, LV volume and LVEF measurements by contrast RT3DE were feasible in 99% of the patients. The mean LV end-diastolic volume (LVEDV) of the group by scintigraphy was 143 ± 65 mL and was underestimated by triplane contrast RT3DE (128 ± 60 mL; p < 0.001) and, to a lesser extent, by full-volume contrast RT3DE (132 ± 62 mL; p < 0.001). Limits of agreement with scintigraphy were similar for the triplane and full-volume modalities, with the best results for full-volume. Results were similar for calculation of LV end-systolic volume (LVESV). The mean LVEF was 44 ± 16% with scintigraphy and was not significantly different with either triplane contrast RT3DE (45 ± 15%) or full-volume contrast RT3DE (45 ± 15%). There was an excellent correlation between two different observers for LVEDV, LVESV and LVEF measurements, and interobserver agreement was also good for both contrast RT3DE techniques. Conclusion. Contrast RT3DE allows an accurate assessment of LVEF compared with the LVEF measured by SPECT, and shows low variability between observers.
Although RT3DE triplane provides accurate evaluation of left ventricular function, RT3DE full-volume is superior to triplane modality in patients with suspected coronary artery disease. © 2009 Cosyns et al; licensee BioMed Central Ltd.
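The volume and ejection-fraction figures above are tied together by the standard relation LVEF = (LVEDV − LVESV) / LVEDV. A minimal sketch (the 80 mL end-systolic volume below is back-calculated from the reported means, not a value reported by the study):

```python
def lvef(lvedv_ml, lvesv_ml):
    """Left ventricular ejection fraction (%) from end-diastolic
    and end-systolic volumes."""
    return 100.0 * (lvedv_ml - lvesv_ml) / lvedv_ml

# The mean scintigraphic LVEDV of 143 mL and LVEF of 44% imply a
# mean LVESV of roughly 143 * (1 - 0.44) ~ 80 mL (back-calculated).
print(round(lvef(143.0, 80.0), 1))  # → 44.1
```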
Abstract:
BACKGROUND: The detection of latent tuberculosis infection (LTBI) is a major component of tuberculosis (TB) control strategies. In addition to the tuberculin skin test (TST), novel blood tests, based on the in vitro release of IFN-gamma in response to the Mycobacterium tuberculosis-specific antigens ESAT-6 and CFP-10 (IGRAs), are used for TB diagnosis. However, neither IGRAs nor the TST can separate acute TB from LTBI, and there is concern that responses in IGRAs may decline with time after infection. We have therefore evaluated the potential of the novel antigen heparin-binding hemagglutinin (HBHA) for the in vitro detection of LTBI. METHODOLOGY AND PRINCIPAL FINDINGS: HBHA was compared to purified protein derivative (PPD) and ESAT-6 in IGRAs on lymphocytes drawn from 205 individuals living in Belgium, a country with low TB prevalence where BCG vaccination is not routinely used. Among these subjects, 89 had active TB, 65 had LTBI (based on well-standardized TST reactions), and 51 were negative controls. HBHA was significantly more sensitive than ESAT-6 and more specific than PPD for the detection of LTBI. PPD-based tests yielded 90.00% sensitivity and 70.00% specificity for the detection of LTBI, whereas the sensitivity and specificity of the ESAT-6-based tests were 40.74% and 90.91%, and those of the HBHA-based tests were 92.06% and 93.88%, respectively. The QuantiFERON-TB Gold In-Tube (QFT-IT) test applied to 20 LTBI subjects yielded 50% sensitivity. The HBHA IGRA was not influenced by prior BCG vaccination, and, in contrast to the QFT-IT test, remote (>2 years) infections were detected as well as recent (<2 years) infections by the HBHA-specific test. CONCLUSIONS: The use of ESAT-6- and CFP-10-based IGRAs may underestimate the incidence of LTBI, whereas the use of HBHA may combine the operational advantages of IGRAs with high sensitivity and specificity for latent infection.
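Sensitivity and specificity figures like those reported here come from a 2x2 confusion matrix. A minimal sketch, using hypothetical counts chosen only to reproduce the reported HBHA percentages (the study's actual per-test denominators are not given here):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), in %."""
    return 100.0 * tp / (tp + fn), 100.0 * tn / (tn + fp)

# hypothetical counts that reproduce the reported HBHA figures
# (92.06% sensitivity, 93.88% specificity)
se, sp = sens_spec(tp=58, fn=5, tn=46, fp=3)
print(round(se, 2), round(sp, 2))  # → 92.06 93.88
```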
Abstract:
Most air quality modelling work has so far been oriented towards deterministic simulations of ambient pollutant concentrations. This traditional approach, which is based on the use of one selected model and one set of discrete input values, does not reflect the uncertainties due to errors in model formulation and input data. Given the complexity of urban environments and the inherent limitations of mathematical modelling, it is unlikely that a single model based on routinely available meteorological and emission data will give satisfactory short-term predictions. In this study, different methods involving the use of more than one dispersion model, in association with different emission simulation methodologies and meteorological data sets, were explored for producing best estimates of CO and benzene concentrations, together with related confidence bounds. The different approaches were tested using experimental data obtained during intensive monitoring campaigns in busy street canyons in Paris, France. Three relatively simple dispersion models (STREET, OSPM and AEOLIUS) that are likely to be used for regulatory purposes were selected for this application. A sensitivity analysis was conducted in order to identify internal model parameters that might significantly affect results. Finally, a probabilistic methodology for assessing urban air quality was proposed.
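The multi-model idea above can be sketched as a simple ensemble: run each model/input combination, then take a central estimate and percentile bounds across the members. A minimal illustration (the concentration values and the mapping to particular models are hypothetical):

```python
import numpy as np

def ensemble_bounds(predictions, lo=5.0, hi=95.0):
    """Combine predictions from several model/input combinations
    (shape: members x times) into a median best estimate plus
    lower/upper percentile confidence bounds."""
    P = np.asarray(predictions, dtype=float)
    return (np.median(P, axis=0),
            np.percentile(P, lo, axis=0),
            np.percentile(P, hi, axis=0))

# hypothetical hourly CO estimates (mg/m3) from three canyon models
co = [[2.1, 3.4, 5.0],   # e.g. a STREET-like run
      [1.8, 2.9, 4.2],   # e.g. an OSPM-like run
      [2.5, 3.1, 4.6]]   # e.g. an AEOLIUS-like run
best, low, high = ensemble_bounds(co)
```

With only three members the percentile bounds are crude; in practice the member set would also span emission methodologies and meteorological inputs, as described in the study.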
Abstract:
Three hundred participants, including volunteers from an obsessional support group, filled in questionnaires relating to disgust sensitivity, health anxiety, anxiety, fear of death, fear of contamination and obsessionality as part of an investigation into the involvement of disgust sensitivity in types of obsessions. Overall, the data supported the hypothesis that a relationship does exist between disgust sensitivity and the targeted variables. A significant predictive relationship was found between disgust sensitivity and total scores on the obsessive compulsive inventory (OCI; Psychological Assessment 10 (1998) 206) for both frequency and distress of symptomatology. Disgust sensitivity scores were significantly related to health anxiety scores and general anxiety scores and to all the obsessional subscales, with the exception of hoarding. Additionally, multiple regression analyses revealed that disgust sensitivity may be more specifically related to washing compulsions: frequency of washing behaviour was best predicted by disgust sensitivity scores. Washing distress scores were best predicted by health anxiety scores, though disgust sensitivity entered in the second model. It is suggested that further research on the relationship between disgust sensitivity and obsessionality could be helpful in refining the theoretical understanding of obsessions.
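The multiple-regression analyses reported here can be sketched generically: regress washing-frequency scores on disgust-sensitivity and health-anxiety predictors by ordinary least squares. The data below are synthetic and noiseless; only the method, not the study's coefficients, is illustrated:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns
    [intercept, slope_1, ..., slope_k]."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# synthetic scores: washing frequency driven mainly by disgust sensitivity
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 2))        # columns: [disgust, health anxiety]
y = 4.0 + 2.0 * X[:, 0] + 0.5 * X[:, 1]  # noiseless, for illustration
beta = fit_ols(X, y)
```

Stepwise entry of predictors, as in the study's second model, amounts to comparing fits with and without each candidate column of X.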
Abstract:
The design and development of a comprehensive computational model of a copper stockpile leach process is summarized. The computational fluid dynamics software framework PHYSICA+ was used to model transport phenomena, mineral reaction kinetics, bacterial effects, and the heat, energy and acid balances for the overall leach process. In this paper, the performance of the model is investigated, in particular its sensitivity to particle size and ore permeability. A combination of literature and laboratory sources was used to parameterize the model. The simulation results from the leach model are compared with closely controlled column pilot-scale tests. The main performance characteristics (e.g. copper recovery rate) predicted by the model compare reasonably well with the experimental data and clearly reflect the qualitative behavior of the process in many respects. The model is used to provide a measure of the sensitivity of leach behavior to ore permeability, and simulation results are examined for several different particle size distributions.
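The particle-size sensitivity examined here follows the classical expectation that, for diffusion-controlled leaching, time to a given conversion scales with the square of particle radius. A minimal shrinking-core sketch (a textbook model, not the PHYSICA+ formulation; the rate constant k and all values are arbitrary):

```python
def conversion(t, R, k):
    """Fraction leached at time t for a diffusion-controlled
    shrinking-core particle of radius R, found by inverting
    g(x) = 1 - 3*(1-x)**(2/3) + 2*(1-x) = k*t/R**2 by bisection
    (g is monotone increasing from 0 at x=0 to 1 at x=1)."""
    target = min(k * t / R**2, 1.0)
    lo_x, hi_x = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo_x + hi_x)
        g = 1.0 - 3.0 * (1.0 - mid) ** (2.0 / 3.0) + 2.0 * (1.0 - mid)
        if g < target:
            lo_x = mid
        else:
            hi_x = mid
    return 0.5 * (lo_x + hi_x)
```

Because the driving term is k*t/R**2, doubling the particle radius roughly quadruples the time needed to reach a given copper recovery, which is the kind of particle-size sensitivity the full model explores.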
Abstract:
This study examines the L2 acquisition of word order variation in Spanish by three groups of L1 English learners in an instructed setting. The three groups represent learners at three different L2 proficiency levels: beginner, intermediate and advanced. The aim of the study is to analyse the acquisition of word order variation in a situation where the target input is highly ambiguous, since two apparently optional forms exist in the target grammar, in order to examine how this optionality is disambiguated by learners from the earlier stages of learning to the more advanced. Our results support the hypothesis that an account based on a discourse-pragmatics deficit cannot satisfactorily explain learners’ non-targetlike representations in the contexts analysed in our study.