319 results for Quantitative contrast
Abstract:
Significant progress has been made in the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to larger scales still represents a major challenge, yet meeting this challenge is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure targeted at larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on, on the one hand, spatially exhaustive but poorly resolved measurements of a pertinent geophysical parameter and, on the other hand, locally highly resolved measurements of the same geophysical and hydraulic parameters. To this end, my algorithm first links the low- and high-resolution geophysical data through a downscaling procedure. The downscaled regional-scale geophysical data are then related to the high-resolution hydraulic parameter field. I first illustrate the application of this new data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of hydraulic and electrical conductivity together with spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of the method is tested by performing flow and transport simulations through both the original and the simulated hydraulic conductivity fields and comparing the results. These results indicate that the proposed data integration procedure yields conductivity estimates consistent with the larger-scale structure as well as reliable predictions of the transport characteristics over medium- to regional-scale distances. The results for the field scenario indicate that the newly developed data integration approach adequately captures the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also demonstrate the remarkable flexibility and robustness of this new data integration approach, which can therefore be expected to be applicable to a wide range of geophysical and hydrological data across all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, with the objective of allowing a more accurate and realistic quantification of the uncertainties associated with the inferred models. Considering a series of crosshole georadar tomography examples, I investigate two classes of spatial resampling strategies with regard to their ability to generate realizations from the Bayesian posterior distribution efficiently and accurately.
The results show that, despite its popularity, sequential resampling is rather inefficient at generating independent posterior samples for realistic synthetic case studies, notably for the fairly common and important case of strong spatial correlations between the model parameters. To address this issue, I have developed a new perturbation approach based on gradual deformation. This approach is flexible with regard to the number of model parameters and the perturbation strength. Compared to sequential resampling, this new approach proves to be highly effective in reducing the number of iterations required to generate independent samples from the Bayesian posterior distribution. - Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
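To make the gradual-deformation idea concrete, the following minimal Python sketch (after the general method of Hu, 2000, not the thesis implementation; all function and variable names are hypothetical) shows how a prior-preserving MCMC proposal for a zero-mean multi-Gaussian parameter field can be constructed:

    import numpy as np

    def gradual_deformation_proposal(m_current, prior_sampler, theta, rng):
        # Combine the current realization with an independent realization of the
        # same zero-mean multi-Gaussian prior using cos/sin weights. The result
        # has the same mean and covariance as the prior; theta controls the
        # perturbation strength (theta -> 0: tiny step, theta = pi/2: full redraw).
        m_independent = prior_sampler(rng)
        return np.cos(theta) * m_current + np.sin(theta) * m_independent

    def metropolis_step(m_current, log_likelihood, prior_sampler, theta, rng):
        # Because the proposal preserves the prior and is reversible with respect
        # to it, the Metropolis-Hastings acceptance ratio reduces to the
        # likelihood ratio.
        m_proposed = gradual_deformation_proposal(m_current, prior_sampler, theta, rng)
        if np.log(rng.uniform()) < log_likelihood(m_proposed) - log_likelihood(m_current):
            return m_proposed
        return m_current

In such a scheme the angle theta plays the role of the tunable perturbation strength referred to above, trading step size against acceptance rate.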
Abstract:
OBJECTIVE: Prospective studies have shown that quantitative ultrasound (QUS) techniques predict the risk of fracture of the proximal femur with standardised risk ratios similar to those of dual-energy x-ray absorptiometry (DXA). Few studies have investigated these devices for the prediction of vertebral fractures. The Basel Osteoporosis Study (BOS) is a population-based prospective study to assess the performance of QUS devices and DXA in predicting incident vertebral fractures. METHODS: 432 women aged 60-80 years were followed up for 3 years. Incident vertebral fractures were assessed radiologically. Bone measurements using DXA (spine and hip) and QUS measurements (calcaneus and proximal phalanges) were performed. Measurements were assessed for their value in predicting incident vertebral fractures using logistic regression. RESULTS: QUS measurements at the calcaneus and DXA measurements discriminated between women with and without incident vertebral fracture (20% height reduction). The relative risks (RRs) for vertebral fracture, adjusted for age, were 2.3 for the Stiffness Index (SI) and 2.8 for the Quantitative Ultrasound Index (QUI) at the calcaneus and 2.0 for bone mineral density at the lumbar spine. The predictive value (AUC (95% CI)) of QUS measurements at the calcaneus remained highly significant (0.70 for SI, 0.72 for the QUI, and 0.67 for DXA at the lumbar spine) even after adjustment for other confounding variables. CONCLUSIONS: QUS of the calcaneus and bone mineral density measurements were shown to be significant predictors of incident vertebral fracture. The RRs for QUS measurements at the calcaneus are of a similar magnitude to those for DXA measurements.
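As a rough illustration of the type of analysis described, and not the study's actual code, the Python sketch below fits an age-adjusted logistic regression of incident vertebral fracture on the calcaneal Stiffness Index and computes the model's AUC; the file and column names are hypothetical, and the odds ratio per SD decrease only approximates the reported relative risk when fractures are comparatively rare.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.metrics import roc_auc_score

    # Hypothetical table: one row per woman, with the incident-fracture indicator,
    # age, and the calcaneal Stiffness Index.
    df = pd.read_csv("bos_followup.csv")
    df["si_z"] = (df["SI"] - df["SI"].mean()) / df["SI"].std()

    # Age-adjusted logistic regression of incident vertebral fracture on SI.
    X = sm.add_constant(df[["si_z", "age"]])
    fit = sm.Logit(df["fracture"], X).fit()

    # Odds ratio per standard-deviation decrease in SI, and the model's AUC.
    print(np.exp(-fit.params["si_z"]))
    print(roc_auc_score(df["fracture"], fit.predict(X)))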
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
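The per-sample efficiency estimation described above can be sketched in a few lines of Python: after baseline subtraction, a straight line is fitted to log10(fluorescence) over a window of cycles in the log-linear phase, and the slope gives the per-cycle amplification factor. This is a generic illustration of that regression step under stated assumptions, not the published baseline-reconstruction algorithm; the function and argument names are hypothetical.

    import numpy as np

    def pcr_efficiency(fluorescence, baseline, window):
        # fluorescence: raw fluorescence per cycle (1-D array)
        # baseline:     estimated baseline fluorescence to subtract
        # window:       (first, last) cycle indices bounding the log-linear phase
        corrected = fluorescence - baseline
        lo, hi = window
        cycles = np.arange(lo, hi)
        logf = np.log10(corrected[lo:hi])
        slope, _ = np.polyfit(cycles, logf, 1)
        # During exponential amplification log10(F) is linear in cycle number,
        # so 10**slope is the efficiency (ideally close to 2.0).
        return 10 ** slope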
Abstract:
Introduction: Lesion detection in multiple sclerosis (MS) is an essential part of its clinical diagnosis. In addition, radiological characterisation of MS lesions is an important research field that aims at distinguishing different MS types, monitoring drug response and assessing prognosis. To date, various MR protocols have been proposed to obtain optimal lesion contrast for early and comprehensive diagnosis of MS. In this study, we compare the sensitivity of five different MR contrasts for lesion detection: (i) the DIR sequence (Double Inversion Recovery, [4]), (ii) the Dark-fluid SPACE acquisition scheme, a 3D variant of the 2D FLAIR sequence [1], (iii) the MP2RAGE [2], an MP-RAGE variant that provides homogeneous T1 contrast and quantitative T1 values, and the sequences currently used for clinical MS diagnosis (2D FLAIR, MP-RAGE). Furthermore, we investigate the T1 relaxation times of cortical and sub-cortical regions in the brain hemispheres and the cerebellum at 3T. Methods: 10 early-stage female MS patients (age: 31.6±4.7 y; disease duration: 3.8±1.9 y; disability score, EDSS: 1.8±0.4) and 10 healthy controls (age- and gender-matched: 31.2±5.8 y) were included in the study after obtaining informed written consent according to the local ethics protocol. All experiments were performed at 3T (Magnetom Trio, A Tim System, Siemens, Germany) using a 32-channel head coil [5]. The imaging protocol included the following sequences (all, except for the axial 2D FLAIR, with 1x1x1.2 mm3 voxels and a 256x256x160 matrix): DIR (TI1/TI2/TR XX/3652/10000 ms, iPAT=2, TA 12:02 min), MP-RAGE (TI/TR 900/2300 ms, iPAT=3, TA 3:47 min); MP2RAGE (TI1/TI2/TR 700/2500/5000 ms, iPAT=3, TA 8:22 min, cf. [2]); 3D FLAIR SPACE (only for patients 4-6, TI/TR 1800/5000 ms, iPAT=2, TA 5:52 min, cf. [1]); axial FLAIR (0.9x0.9x2.5 mm3, 256x256x44 matrix, TI/TR 2500/9000 ms, iPAT=2, TA 4:05 min). Lesions were identified by two experienced readers (a neurologist and a radiologist), manually contoured and assigned to regional locations (see table 1). Regional lesion masks (RLM) from each contrast were compared for the number and volumes of lesions. In addition, the RLM were merged into a single "master" mask, which represented the union of the lesions of all contrasts. T1 values were derived for each location from this mask for patients 5-10 (the 3D FLAIR contrast was missing for patients 1-4). Results & Discussion: The DIR sequence appears the most sensitive for total lesion count, followed by the MP2RAGE (table 1). The 3D FLAIR SPACE sequence turns out to be more sensitive than the 2D FLAIR, presumably due to reduced partial volume effects. For sub-cortical hemispheric lesions, the DIR contrast appears to be as sensitive as the MP2RAGE and SPACE, and it is the most sensitive for cerebellar MS plaques. The DIR sequence is also the one that reveals cortical hemispheric lesions best. T1 relaxation times at 3T in the WM and GM of the hemispheres and the cerebellum, as obtained with the MP2RAGE sequence, are shown in table 2. Extending previous studies, we confirm overall longer T1 values and higher standard deviations in lesion tissue compared to non-lesion tissue and to control tissue in healthy controls. We hypothesize a biological (different degrees of axonal loss and demyelination) rather than technical origin. Conclusion: In this study, we applied five MR contrasts, including two novel sequences, to identify the contrast with the highest sensitivity for early MS diagnosis.
In addition, we characterized for the first time the T1 relaxation time in cortical and sub-cortical regions of the hemispheres and the cerebellum. The results are in agreement with previous publications and allow a meaningful biological interpretation of the data.
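As a minimal sketch of the mask bookkeeping described above (counting and measuring lesions per contrast and merging the regional lesion masks into a single master mask), assuming binary NumPy arrays as lesion masks and hypothetical variable names:

    import numpy as np
    from scipy import ndimage

    def lesion_count_and_volume(mask, voxel_volume_mm3):
        # Count connected lesion components in a binary mask (26-connectivity
        # assumed here) and return the total lesion volume in mm3.
        _, n_lesions = ndimage.label(mask, structure=np.ones((3, 3, 3)))
        return n_lesions, mask.sum() * voxel_volume_mm3

    # Merging the per-contrast regional lesion masks into one master mask amounts
    # to a voxel-wise logical OR (hypothetical array names):
    # master_mask = np.logical_or.reduce([mask_dir, mask_mp2rage, mask_space,
    #                                     mask_flair2d, mask_mprage])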
Abstract:
RATIONALE AND OBJECTIVES: Dose reduction may compromise patient care because of a decrease in image quality. Therefore, the amount of dose savings achievable with new dose-reduction techniques needs to be thoroughly assessed. To avoid repeated studies in one patient, chest computed tomography (CT) scans at different dose levels were performed on corpses, comparing model-based iterative reconstruction (MBIR) as a tool to enhance image quality against current standard full-dose imaging. MATERIALS AND METHODS: Twenty-five human cadavers were scanned (CT HD750) after contrast medium injection at decreasing dose levels D0-D5, and each acquisition was reconstructed with MBIR. The data at the full-dose level, D0, were additionally reconstructed with standard adaptive statistical iterative reconstruction (ASIR), which served as the full-dose baseline reference (FDBR). Two radiologists independently compared the image quality (IQ) of D0-D5 with that of the FDBR in 3-mm multiplanar reformations for soft-tissue evaluation (-2, diagnostically inferior; -1, inferior; 0, equal; +1, superior; +2, diagnostically superior). For statistical analysis, the intraclass correlation coefficient (ICC) and the Wilcoxon test were used. RESULTS: Mean CT dose index values (mGy) were as follows: D0/FDBR = 10.1 ± 1.7, D1 = 6.2 ± 2.8, D2 = 5.7 ± 2.7, D3 = 3.5 ± 1.9, D4 = 1.8 ± 1.0, and D5 = 0.9 ± 0.5. Mean IQ ratings were as follows: D0 = +1.8 ± 0.2, D1 = +1.5 ± 0.3, D2 = +1.1 ± 0.3, D3 = +0.7 ± 0.5, D4 = +0.1 ± 0.5, and D5 = -1.2 ± 0.5. All values differed significantly from baseline (P < .05), except the mean IQ for D4 (P = .61). The ICC was 0.91. CONCLUSIONS: Compared to ASIR, MBIR allowed for a significant dose reduction of 82% without impairment of IQ. This resulted in a calculated mean effective dose below 1 mSv.
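A minimal Python sketch of the reported statistics (a Wilcoxon signed-rank test of the IQ ratings against the baseline value of 0 and an intraclass correlation for inter-reader agreement) is given below; it assumes a hypothetical long-format ratings table and the availability of the pingouin package, and it is not the study's actual analysis code.

    import pandas as pd
    from scipy.stats import wilcoxon
    import pingouin as pg  # assumed available for the ICC computation

    # Hypothetical table: one row per (cadaver, reader, dose level) with the IQ
    # score (-2 .. +2) assigned relative to the full-dose ASIR reference.
    ratings = pd.read_csv("iq_ratings.csv")

    # Does the median IQ rating at dose level D4 differ from the baseline (0)?
    d4 = ratings.query("dose == 'D4'").groupby("cadaver")["iq"].mean()
    print(wilcoxon(d4))

    # Inter-reader agreement (intraclass correlation) across all rated images.
    print(pg.intraclass_corr(data=ratings, targets="image_id", raters="reader",
                             ratings="iq"))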
Abstract:
Shigella, a Gram-negative invasive enteropathogenic bacterium responsible for bacillary dysentery, causes the rupture, invasion, and inflammatory destruction of the human colonic mucosa. We explored the mechanisms of protection mediated by Shigella LPS-specific secretory IgA (SIgA), the major mucosal Ab induced upon natural infection. Bacteria, SIgA, or SIgA-S. flexneri immune complexes were administered into rabbit ligated intestinal loops containing a Peyer's patch. After 8 h, localizations of bacteria, SIgA, and SIgA-S. flexneri immune complexes were examined by immunohistochemistry and confocal microscopy imaging. We found that anti-Shigella LPS SIgA, mainly via immune exclusion, prevented Shigella-induced inflammation responsible for the destruction of the intestinal barrier. Besides this luminal trapping, a small proportion of SIgA-S. flexneri immune complexes were shown to enter the rabbit Peyer's patch and were internalized by dendritic cells of the subepithelial dome region. Local inflammatory status was analyzed by quantitative RT-PCR using newly designed primers for rabbit pro- and anti-inflammatory mediator genes. In Peyer's patches exposed to immune complexes, limited up-regulation of the expression of proinflammatory genes, including TNF-alpha, IL-6, Cox-2, and IFN-gamma, was observed, consistent with preserved morphology. In contrast, in Peyer's patches exposed to Shigella alone, high expression of the same mediators was measured, indicating that neutralizing SIgA dampens the proinflammatory properties of Shigella. These results show that in the form of immune complexes, SIgA guarantees both immune exclusion and neutralization of translocated bacteria, thus preserving the intestinal barrier integrity by preventing bacterial-induced inflammation. These findings add to the multiple facets of the noninflammatory properties of SIgA.
Abstract:
Different interferometric techniques were developed over the last decade to obtain full-field, quantitative, and absolute phase imaging, such as phase shifting, Fourier phase microscopy, Hilbert phase microscopy, and digital holographic microscopy (DHM). Although these techniques are very similar, DHM combines several advantages. In contrast to phase shifting, DHM is capable of single-shot hologram recording, allowing real-time absolute phase imaging. On the other hand, unlike Fourier phase or Hilbert phase microscopy, DHM does not require recording in-focus images of the specimen on the digital detector (CCD or CMOS camera), because the focus can be adjusted numerically through wavefront propagation. Consequently, the depth of field of high-NA microscope objectives is numerically extended. For example, two different biological cells, floating at different depths in a liquid, can be brought into focus numerically from the same digital hologram. Moreover, the numerical propagation, combined with digital optics and automatic fitting procedures, permits vibration-insensitive full-field phase imaging and complete compensation for, a priori, any image distortion and/or phase aberrations introduced, for example, by imperfections of holders or the perfusion chamber. Examples of real-time full-field phase images of biological cells have been demonstrated. ©2008 COPYRIGHT SPIE
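The numerical focus adjustment mentioned above is commonly implemented with the angular spectrum method; the Python sketch below shows one such implementation under the usual scalar-diffraction assumptions (it is illustrative and not the specific DHM reconstruction code referred to here).

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, dz):
        # field:      complex 2-D array (reconstructed object wave in one plane)
        # wavelength: illumination wavelength (same units as dx and dz)
        # dx:         pixel pitch in that plane
        # dz:         propagation distance (positive or negative)
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        k = 2.0 * np.pi / wavelength
        kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        # Multiply the angular spectrum by the free-space transfer function,
        # suppressing evanescent components.
        transfer = np.exp(1j * kz * dz) * (kz_sq > 0)
        return np.fft.ifft2(np.fft.fft2(field) * transfer)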
Abstract:
Digital holographic microscopy (DHM) is a technique that allows obtaining, from a single recorded hologram, quantitative phase images of living cells with interferometric accuracy. Specifically, the optical phase shift induced by the specimen on the transmitted wavefront can be regarded as a powerful endogenous contrast agent, depending on both the thickness and the refractive index of the sample. Thanks to a decoupling procedure, the cell thickness and the intracellular refractive index can be measured separately. Consequently, the mean corpuscular volume (MCV) and the mean corpuscular hemoglobin concentration (MCHC), two highly relevant clinical parameters, have been measured non-invasively at the single-cell level. The nanometric axial and microsecond temporal sensitivities of DHM have made it possible to measure red blood cell membrane fluctuations (CMF) over the whole cell surface. ©2009 COPYRIGHT SPIE--The International Society for Optical Engineering.
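One published variant of such a decoupling procedure records the phase twice, with two perfusion media of different known refractive indices, and solves the resulting pair of equations; the Python sketch below illustrates that idea (hypothetical function name, and not necessarily the exact procedure used in this work).

    import numpy as np

    def decouple_thickness_and_index(phi1, phi2, n_m1, n_m2, wavelength):
        # Each measurement satisfies phi_i = (2*pi/wavelength) * (n_cell - n_mi) * d,
        # so two media of known refractive indices n_m1 and n_m2 give two equations
        # in the two unknowns: the thickness d and the intracellular index n_cell.
        d = wavelength * (phi1 - phi2) / (2.0 * np.pi * (n_m2 - n_m1))
        n_cell = n_m1 + wavelength * phi1 / (2.0 * np.pi * d)
        return d, n_cell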
Abstract:
The application of two approaches for high-throughput, high-resolution X-ray phase contrast tomographic imaging in use at the Tomographic Microscopy and Coherent Radiology Experiments (TOMCAT) beamline of the SLS is discussed and illustrated. Differential phase contrast (DPC) imaging, using a grating interferometer and a phase-stepping technique, is integrated into the beamline environment at TOMCAT in terms of fast data acquisition and reconstruction and the ability to scan samples in an aqueous environment. A second phase contrast method is a modified transport-of-intensity approach that can yield the 3D distribution of the refractive index decrement of a weakly absorbing object from a single tomographic dataset. The two methods are complementary to one another: the DPC method is characterised by a higher sensitivity and by moderate resolution with larger samples; the modified transport-of-intensity approach is particularly suited to small specimens when high resolution (around 1 μm) is required. Both are being applied to investigations in the biological and materials science fields.
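For illustration, a phase-stepping scan can be analysed pixel by pixel through Fourier decomposition of the intensity-versus-grating-position curve; the Python sketch below shows this standard retrieval step under simplifying assumptions (hypothetical array names, not the TOMCAT pipeline itself).

    import numpy as np

    def phase_stepping_analysis(stack):
        # stack: intensities of shape (n_steps, ny, nx), recorded while one grating
        # is stepped over one period. Per pixel, the intensity oscillates roughly
        # sinusoidally with grating position: the zeroth Fourier coefficient gives
        # the mean intensity, the argument of the first gives the stepping-curve phase.
        coeffs = np.fft.fft(stack, axis=0)
        mean_intensity = np.abs(coeffs[0]) / stack.shape[0]
        stepping_phase = np.angle(coeffs[1])
        return mean_intensity, stepping_phase

    # Usage sketch: comparing a sample scan with a sample-free reference scan
    # yields transmission and the differential phase contrast signal.
    # i_s, p_s = phase_stepping_analysis(sample_scan)
    # i_r, p_r = phase_stepping_analysis(reference_scan)
    # transmission = i_s / i_r
    # dpc = np.angle(np.exp(1j * (p_s - p_r)))  # wrapped to (-pi, pi]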
Abstract:
Inter-individual differences in gene expression are likely to account for an important fraction of phenotypic differences, including susceptibility to common disorders. Recent studies have shown extensive variation in gene expression levels in humans and other organisms, and that a fraction of this variation is under genetic control. We investigated the patterns of gene expression variation in a 25 Mb region of human chromosome 21, which has been associated with many Down syndrome (DS) phenotypes. TaqMan real-time PCR was used to measure expression variation of 41 genes in lymphoblastoid cells of 40 unrelated individuals. For 25 genes found to be differentially expressed, additional analysis was performed in 10 CEPH families to determine heritabilities and map loci harboring regulatory variation. Seventy-six percent of the differentially expressed genes had significant heritabilities, and genome-wide linkage analysis led to the identification of significant eQTLs for nine genes. Most eQTLs were in trans, with the best result (P = 7.46 × 10^-8) obtained for TMEM1 on chromosome 12q24.33. A cis-eQTL identified for CCT8 was validated by performing an association study in 60 individuals from the HapMap project. SNP rs965951, located within CCT8, was found to be significantly associated with its expression levels (P = 2.5 × 10^-5), confirming cis-regulatory variation. The results of our study provide a representative view of expression variation of chromosome 21 genes, identify loci involved in their regulation, and suggest that genes for which expression differences are significantly larger than 1.5-fold in control samples are unlikely to be involved in DS phenotypes present in all affected individuals.
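As a rough illustration of a cis-eQTL association test of the kind reported for rs965951 and CCT8 (not the study's actual analysis pipeline), an additive-model regression of expression level on allele count can be written as:

    import pandas as pd
    from scipy.stats import linregress

    # Hypothetical table: one row per HapMap individual, with the normalized CCT8
    # expression level and the rs965951 genotype coded as 0/1/2 copies of one allele.
    df = pd.read_csv("cct8_rs965951.csv")

    # Additive-model association: regress expression on allele count.
    fit = linregress(df["rs965951_dosage"], df["CCT8_expression"])
    print(fit.slope, fit.pvalue)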
Abstract:
Nonagenarians and centenarians represent a quickly growing age group worldwide. In parallel, the prevalence of dementia increases substantially, but how to define dementia in this oldest-old age segment remains unclear. Although the idea that the risk of Alzheimer's disease (AD) decreases after age 90 has now been questioned, the oldest-old still represent a population relatively resistant to degenerative brain processes. Brain aging is characterised by the formation of neurofibrillary tangles (NFTs) and senile plaques (SPs) as well as neuronal and synaptic loss in both cognitively intact individuals and patients with AD. In nondemented cases NFTs are usually restricted to the hippocampal formation, whereas the progressive involvement of the association areas in the temporal neocortex parallels the development of overt clinical signs of dementia. In contrast, there is little correlation between the quantitative distribution of SP and AD severity. The pattern of lesion distribution and neuronal loss changes in extreme aging relative to the younger-old. In contrast to younger cases where dementia is mainly related to severe NFT formation within adjacent components of the medial and inferior aspects of the temporal cortex, oldest-old individuals display a preferential involvement of the anterior part of the CA1 field of the hippocampus whereas the inferior temporal and frontal association areas are relatively spared. This pattern suggests that both the extent of NFT development in the hippocampus as well as a displacement of subregional NFT distribution within the Cornu ammonis (CA) fields may be key determinants of dementia in the very old. Cortical association areas are relatively preserved. The progression of NFT formation across increasing cognitive impairment was significantly slower in nonagenarians and centenarians compared to younger cases in the CA1 field and entorhinal cortex. The total amount of amyloid and the neuronal loss in these regions were also significantly lower than those reported in younger AD cases. Overall, there is evidence that pathological substrates of cognitive deterioration in the oldest-old are different from those observed in the younger-old. Microvascular parameters such as mean capillary diameters may be key factors to consider for the prediction of cognitive decline in the oldest-old. Neuropathological particularities of the oldest-old may be related to "longevity-enabling" genes although little or nothing is known in this promising field of future research.