184 results for 3D quantitative findings
Abstract:
OBJECTIVE: The purpose of this study was to compare the use of different variables to measure the clinical wear of two denture tooth materials in two analysis centers. METHODS: Twelve edentulous patients were provided with full dentures. Two different denture tooth materials (experimental material and control) were placed randomly in accordance with the split-mouth design. For wear measurements, impressions were made after an adjustment phase of 1-2 weeks and after 6, 12, 18, and 24 months. The occlusal wear of the posterior denture teeth of 11 subjects was assessed in two study centers by use of plaster replicas and 3D laser-scanning methods. In both centers sequential scans of the occlusal surfaces were digitized and superimposed. Wear was described by use of four different variables. Statistical analysis was performed after log-transformation of the wear data by use of the Pearson and Lin correlations and by use of a mixed linear model. RESULTS: Mean occlusal vertical wear of the denture teeth after 24 months was between 120 μm and 212 μm, depending on the wear variable and material. For three of the four variables, wear of the experimental material was statistically significantly less than that of the control. Comparison of the two study centers, however, revealed that correlation of the wear variables was only moderate, whereas strong correlation was observed among the different wear variables evaluated by each center. SIGNIFICANCE: Moderate correlation was observed for clinical wear measurements by optical 3D laser scanning in two different study centers. For the two denture tooth materials, wear measurements limited to the attrition zones led to the same qualitative assessment.
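The vertical wear values above come from superimposing sequential digitized occlusal surfaces and averaging the height loss between them. A minimal sketch of that idea (not the centers' actual matching software; the grid, units, masking, and clipping of negative differences are illustrative assumptions):

```python
import numpy as np

def mean_vertical_wear(baseline, follow_up, mask=None):
    """Mean vertical height loss between two superimposed occlusal
    height maps sampled on a common grid (same units as the input).
    `mask` optionally restricts the calculation to attrition zones."""
    loss = baseline - follow_up          # positive where material was lost
    if mask is not None:
        loss = loss[mask]
    # Negative differences (registration noise) are clipped to zero.
    return float(np.mean(np.clip(loss, 0.0, None)))

# Toy example: a flat 4x4 surface (heights in mm) that lost 0.15 mm
# over half of its area between the two scans.
base = np.full((4, 4), 2.0)
worn = base.copy()
worn[:, :2] -= 0.15
wear_all = mean_vertical_wear(base, worn)   # averaged over the full map
```

Restricting the mask to the attrition zones, as in the study's fourth variable, simply changes which cells enter the average.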
Abstract:
The feasibility of three-dimensional (3D) whole-heart imaging of the coronary venous (CV) system was investigated. The hypothesis that coronary magnetic resonance venography (CMRV) can be improved by using an intravascular contrast agent (CA) was tested. A simplified model of the contrast in T2-prepared steady-state free precession (SSFP) imaging was applied to calculate optimal T2-preparation durations for the various deoxygenation levels expected in venous blood. Non-contrast-agent (nCA)- and CA-enhanced images were compared for the delineation of the coronary sinus (CS) and its main tributaries. A quantitative analysis of the resulting contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) in both approaches was performed. Precontrast visualization of the CV system was limited by the poor CNR between large portions of the venous blood and the surrounding tissue. Postcontrast, a significant increase in CNR between the venous blood and the myocardium (Myo) resulted in a clear delineation of the target vessels. The CNR improvement was 347% (P < 0.05) for the CS, 260% (P < 0.01) for the mid cardiac vein (MCV), and 430% (P < 0.05) for the great cardiac vein (GCV). The improvement in SNR was on average 155%, but was not statistically significant for the CS and the MCV. The signal of the Myo could be significantly reduced to about 25% (P < 0.001).
Abstract:
A factor limiting preliminary rockfall hazard mapping at the regional scale is often the lack of knowledge of potential source areas. Nowadays, high-resolution topographic data (LiDAR) can account for realistic landscape details even at large scale. With such fine-scale morphological variability, quantitative geomorphometric analyses become a relevant approach for delineating potential rockfall instabilities. Using the digital elevation model (DEM)-based "slope families" concept over areas of similar lithology, together with the cliff and scree zones available from the 1:25,000 topographic map, a rockfall hazard susceptibility map was drawn up for the canton of Vaud, Switzerland, in order to provide a relevant hazard overview. Slope surfaces above morphometrically defined threshold angles were considered as rockfall source zones. 3D modelling (CONEFALL) was then applied to each of the estimated source zones in order to assess the maximum runout length. Comparisons with known events and other rockfall hazard assessments are in good agreement, showing that it is possible to assess rockfall activity over large areas from DEM-based parameters and topographical elements.
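The runout assessment described above relies on the cone (energy-line) principle that CONEFALL implements: a cell belongs to the runout zone of a source if the straight line from the source down to the cell dips at least as steeply as a chosen cone slope angle. A minimal sketch of that geometric test (not the CONEFALL code itself; the 33° default is an illustrative value that would in practice be calibrated against known events):

```python
import math

def in_runout(source_xyz, cell_xyz, cone_angle_deg=33.0):
    """Cone-method test: True if the line from the source down to the
    cell dips at least as steeply as the cone slope angle, i.e. the
    elevation drop over horizontal distance exceeds tan(angle)."""
    sx, sy, sz = source_xyz
    cx, cy, cz = cell_xyz
    horiz = math.hypot(cx - sx, cy - sy)
    drop = sz - cz                      # positive if the cell lies below
    if horiz == 0.0:
        return drop >= 0.0              # cell directly below the source
    return drop / horiz >= math.tan(math.radians(cone_angle_deg))

# Source at 100 m elevation; a cell 100 m away and 80 m lower is
# reached (drop/dist = 0.8 > tan 33°), one only 50 m lower is not.
reached = in_runout((0, 0, 100.0), (100, 0, 20.0))
stopped = in_runout((0, 0, 100.0), (100, 0, 50.0))
```

The maximum runout length along a profile is then simply the farthest cell for which this test still holds.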
Abstract:
This article describes the composition of fingermark residue as being a complex system with numerous compounds coming from different sources and evolving over time from the initial composition (corresponding to the composition right after deposition) to the aged composition (corresponding to the evolution of the initial composition over time). This complex system will additionally vary due to effects of numerous influence factors grouped in five different classes: the donor characteristics, the deposition conditions, the substrate nature, the environmental conditions and the applied enhancement techniques. The initial and aged compositions as well as the influence factors are thus considered in this article to provide a qualitative and quantitative review of all compounds identified in fingermark residue up to now. The analytical techniques used to obtain these data are also enumerated. This review highlights the fact that despite the numerous analytical processes that have already been proposed and tested to elucidate fingermark composition, advanced knowledge is still missing. Thus, there is a real need to conduct future research on the composition of fingermark residue, focusing particularly on quantitative measurements, aging kinetics and effects of influence factors. The results of future research are particularly important for advances in fingermark enhancement and dating technique developments.
Abstract:
BACKGROUND: To report the clinical, histopathological and immunohistochemical findings of two novel mutations within the TGFBI gene. METHODS: The genotype of 41 affected members of 16 families and nine sporadic cases was investigated by direct sequencing of the TGFBI gene. Clinical, histological and immunohistochemical characteristics of corneal opacification were reported and compared with the coding region changes in the TGFBI gene. RESULTS: A novel mutation, Leu509Pro, was detected in one family with a geographic pattern-like clinical phenotype. Histopathologically, we found amyloid together with non-amyloid deposits and immunohistochemical staining with the keratoepithelin (KE) antibodies KE2 and KE15. In two families and one sporadic case the novel mutation Gly623Arg, with a late-onset, map-like corneal dystrophy, was identified. Here amyloid and immunohistochemical staining with only KE2 antibodies occurred. Furthermore, five already known mutations are reported: Arg124Cys, Arg555Trp, Arg124His, His626Arg, and Ala546Asp in 13 families and five sporadic cases of German origin. The underlying gene defect within the TGFBI gene was not identified in any of the four probands with Thiel-Behnke corneal dystrophy. CONCLUSIONS: The two novel mutations within the TGFBI gene add another two phenotypes with atypical immunohistochemical and histopathological features to those so far reported.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
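The gradual-deformation perturbation mentioned above combines two independent Gaussian realizations with weights cos(θ) and sin(θ); since cos²θ + sin²θ = 1, the result honors the same Gaussian model for any θ, so the perturbation strength can be tuned continuously. A minimal sketch under the assumption of standard-normal fields (the thesis operates on spatially correlated geostatistical fields, for which the same identity holds):

```python
import numpy as np

def gradual_deformation(m1, m2, theta):
    """Gradual-deformation combination of two independent realizations
    of the same Gaussian model. theta = 0 returns m1 unchanged; small
    theta gives a small, model-consistent perturbation of m1."""
    return np.cos(theta) * m1 + np.sin(theta) * m2

rng = np.random.default_rng(0)
m1 = rng.standard_normal(10_000)      # current MCMC state
m2 = rng.standard_normal(10_000)      # independent proposal field
m_new = gradual_deformation(m1, m2, theta=0.3)
```

In an MCMC proposal step, θ plays the role of the step size: the chain stays within the prior model while the acceptance rate is controlled by how far each proposal moves from the current state.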
Abstract:
OBJECTIVE: Prospective studies have shown that quantitative ultrasound (QUS) techniques predict the risk of fracture of the proximal femur with standardised risk ratios similar to those of dual-energy x-ray absorptiometry (DXA). Few studies have investigated these devices for the prediction of vertebral fractures. The Basel Osteoporosis Study (BOS) is a population-based prospective study to assess the performance of QUS devices and DXA in predicting incident vertebral fractures. METHODS: 432 women aged 60-80 years were followed up for 3 years. Incident vertebral fractures were assessed radiologically. Bone measurements using DXA (spine and hip) and QUS measurements (calcaneus and proximal phalanges) were performed. Measurements were assessed for their value in predicting incident vertebral fractures using logistic regression. RESULTS: QUS measurements at the calcaneus and DXA measurements discriminated between women with and without incident vertebral fracture (20% height reduction). The relative risks (RRs) for vertebral fracture, adjusted for age, were 2.3 for the Stiffness Index (SI) and 2.8 for the Quantitative Ultrasound Index (QUI) at the calcaneus, and 2.0 for bone mineral density at the lumbar spine. The predictive value (AUC (95% CI)) of QUS measurements at the calcaneus remained highly significant (0.70 for the SI, 0.72 for the QUI, and 0.67 for DXA at the lumbar spine) even after adjustment for other confounding variables. CONCLUSIONS: QUS of the calcaneus and bone mineral density measurements were shown to be significant predictors of incident vertebral fracture. The RRs for QUS measurements at the calcaneus are of similar magnitude to those for DXA measurements.
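The AUC values quoted above have a direct rank-based interpretation: the probability that a randomly chosen fracture case receives a higher predicted risk than a randomly chosen control. A minimal sketch of that computation via the Mann-Whitney statistic (the scores below are illustrative, not BOS data):

```python
def auc(case_scores, control_scores):
    """AUC as the Mann-Whitney probability that a random case scores
    higher than a random control; ties count one half."""
    wins = 0.0
    for c in case_scores:
        for k in control_scores:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Hypothetical predicted fracture risks from a fitted logistic model.
cases = [0.9, 0.8, 0.6]          # women with incident fracture
controls = [0.7, 0.4, 0.3, 0.2]  # women without
discrimination = auc(cases, controls)
```

An AUC of 0.5 means no discrimination; values such as the study's 0.70-0.72 mean the risk score ranks a case above a control about 70% of the time.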
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
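The per-sample efficiency estimation described above reduces to fitting a regression line to log-transformed, baseline-subtracted fluorescence in the log-linear window: since F_c = F_0 · E^c, the slope of log(F) versus cycle number gives log(E). A minimal sketch of that fit (synthetic data; the published algorithm additionally reconstructs the baseline downward from the early plateau, which is not reproduced here):

```python
import numpy as np

def pcr_efficiency(cycles, fluorescence):
    """PCR efficiency E from a regression line through log-transformed,
    baseline-subtracted fluorescence in the log-linear phase.
    F_c = F_0 * E**c, so the slope of log10(F) vs. cycle is log10(E)."""
    slope, _intercept = np.polyfit(cycles, np.log10(fluorescence), 1)
    return 10.0 ** slope

# Toy data: exact doubling per cycle (E = 2) over the log-linear window.
cycles = np.arange(15, 21)
fluor = 1e-3 * 2.0 ** cycles
eff = pcr_efficiency(cycles, fluor)
```

The exponential error propagation the article warns about is visible here: any baseline error left in `fluorescence` bends the log-linear phase, biases the fitted slope, and is then raised to the power of the cycle number in the back-calculated starting concentration F_0 = F_c / E^c.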
Abstract:
PURPOSE: To evaluate the effects of recent advances in magnetic resonance imaging (MRI) radiofrequency (RF) coil and parallel imaging technology on brain volume measurement consistency. MATERIALS AND METHODS: In all, 103 whole-brain MRI volumes were acquired on a clinical 3T MRI system equipped with a 12- and a 32-channel head coil, using the T1-weighted protocol employed in the Alzheimer's Disease Neuroimaging Initiative study with parallel imaging accelerations ranging from 1 to 5. An experienced reader performed qualitative ratings of the images. For quantitative analysis, differences in composite width (CW, a measure of image similarity) and boundary shift integral (BSI, a measure of whole-brain atrophy) were calculated. RESULTS: Intra- and intersession comparisons of CW and BSI measures from scans with equal acceleration demonstrated excellent scan-rescan accuracy, even at the highest acceleration applied. Pairs of scans acquired with different accelerations exhibited poor scan-rescan consistency only when differences in the acceleration factor were maximized. A change in the coil hardware between compared scans was found to bias the BSI measure. CONCLUSION: The most important findings are that the accelerated acquisitions appear to be compatible with the assessment of high-quality quantitative information and that, for the highest scan-rescan accuracy in serial scans, the acquisition protocol should be kept as consistent as possible over time. J. Magn. Reson. Imaging 2012;36:1234-1240. ©2012 Wiley Periodicals, Inc.
Abstract:
OBJECTIVE: To report the biopsy findings of osteoid osteoma (OO) and OO-mimicking lesions, assess their distinctive multidetector computed tomography (MDCT) features and evaluate treatment by radiofrequency ablation (RFA). METHODS: In this multicentric retrospective study, 80 patients (54 male, 26 female, mean age 24.1 years, range 5-48) with presumed (clinical and MDCT features) OO were treated by percutaneous RFA between May 2002 and June 2009. Per-procedural biopsies were always performed. The following MDCT features were assessed: skeletal distribution and location within the bone, size, central calcification, surrounding osteosclerosis and periosteal reaction. Clinical success of RFA was evaluated. RESULTS: Histopathological diagnoses were: 54 inconclusive biopsies, 16 OO, 10 OO-mimicking lesions (5 chronic osteomyelitis, 3 chondroblastoma, 1 eosinophilic granuloma, 1 fibrous dysplasia). OO-mimicking lesions were significantly greater in size (p = 0.001) and presented non-significant trends towards medullary location (p = 0.246), moderate surrounding osteosclerosis (p = 0.189) and less periosteal reaction (p = 0.197), compared with OO. Primary success for ablation of OO-mimicking lesions was 100% at 1 month, 85.7% at 6 and 12 months, and 66.7% at 24 months. Secondary success was 100%. CONCLUSION: Larger size, medullary location, less surrounding osteosclerosis and periosteal reaction on MDCT may help differentiate OO-mimicking lesions from OO. OO-mimicking lesions are safely and successfully treated by RFA.
Abstract:
Introduction Lesion detection in multiple sclerosis (MS) is an essential part of its clinical diagnosis. In addition, radiological characterisation of MS lesions is an important research field that aims at distinguishing different MS types, monitoring drug response and prognosis. To date, various MR protocols have been proposed to obtain optimal lesion contrast for early and comprehensive diagnosis of MS. In this study, we compare the sensitivity of five different MR contrasts for lesion detection: (i) the DIR sequence (Double Inversion Recovery, [4]), (ii) the Dark-fluid SPACE acquisition scheme, a 3D variant of the 2D FLAIR sequence [1], (iii) the MP2RAGE [2], an MP-RAGE variant that provides homogeneous T1 contrast and quantitative T1 values, and the sequences currently used for clinical MS diagnosis (2D FLAIR, MP-RAGE). Furthermore, we investigate the T1 relaxation times of cortical and sub-cortical regions in the brain hemispheres and the cerebellum at 3T. Methods 10 early-stage female MS patients (age: 31.6 ± 4.7 y; disease duration: 3.8 ± 1.9 y; disability score, EDSS: 1.8 ± 0.4) and 10 healthy controls (age- and gender-matched: 31.2 ± 5.8 y) were included in the study after obtaining informed written consent according to the local ethics protocol. All experiments were performed at 3T (Magnetom Trio, a Tim System, Siemens, Germany) using a 32-channel head coil [5]. The imaging protocol included the following sequences (all except the axial 2D FLAIR with 1x1x1.2 mm3 voxels and a 256x256x160 matrix): DIR (TI1/TI2/TR XX/3652/10000 ms, iPAT=2, TA 12:02 min); MP-RAGE (TI/TR 900/2300 ms, iPAT=3, TA 3:47 min); MP2RAGE (TI1/TI2/TR 700/2500/5000 ms, iPAT=3, TA 8:22 min, cf. [2]); 3D FLAIR SPACE (only for patients 4-6, TI/TR 1800/5000 ms, iPAT=2, TA 5:52 min, cf. [1]); axial FLAIR (0.9x0.9x2.5 mm3, 256x256x44 matrix, TI/TR 2500/9000 ms, iPAT=2, TA 4:05 min).
Lesions were identified by two experienced raters (a neurologist and a radiologist), manually contoured and assigned to regional locations (see table 1). Regional lesion masks (RLM) from each contrast were compared for number and volume of lesions. In addition, the RLM were merged into a single "master" mask, which represented the sum of the lesions of all contrasts. T1 values were derived for each location from this mask for patients 5-10 (the 3D FLAIR contrast was missing for patients 1-4). Results & Discussion The DIR sequence appears to be the most sensitive for total lesion count, followed by the MP2RAGE (table 1). The 3D FLAIR SPACE sequence turns out to be more sensitive than the 2D FLAIR, presumably due to reduced partial volume effects. For sub-cortical hemispheric lesions, the DIR contrast appears to be equally sensitive to the MP2RAGE and SPACE, but it is the most sensitive for cerebellar MS plaques. The DIR sequence is also the one that reveals cortical hemispheric lesions best. T1 relaxation times at 3T in the WM and GM of the hemispheres and the cerebellum, as obtained with the MP2RAGE sequence, are shown in table 2. Extending previous studies, we confirm overall longer T1 values and higher standard deviations in lesion tissue compared with non-lesion tissue and control tissue in healthy controls. We hypothesize a biological (different degrees of axonal loss and demyelination) rather than a technical origin. Conclusion In this study, we applied 5 MR contrasts, including two novel sequences, to investigate which contrast has the highest sensitivity for early MS diagnosis. In addition, we characterized for the first time the T1 relaxation time in cortical and sub-cortical regions of the hemispheres and the cerebellum. The results are in agreement with previous publications and allow a meaningful biological interpretation of the data.
Abstract:
Navigator-gated and corrected 3D coronary MR angiography (MRA) allows submillimeter image acquisition during free breathing. However, cranial diaphragmatic drift and relative phase shifts of chest-wall motion are limiting factors for image quality and scanning duration. We hypothesized that image acquisition in the prone position would minimize artifacts related to chest-wall motion and suppress diaphragmatic drift. Twelve patients with radiographically confirmed coronary artery disease and six healthy adult volunteers were studied in both the prone and the supine position during free-breathing navigator-gated and corrected 3D coronary MRA. Image quality and the diaphragmatic positions were objectively compared. In the prone position, there was a 36% improvement in signal-to-noise ratio (SNR; 15.5 +/- 2.7 vs. 11.4 +/- 2.6; P < 0.01) and a 34% improvement in contrast-to-noise ratio (CNR; 12.5 +/- 3.3 vs. 9.3 +/- 2.5; P < 0.01). The prone position also resulted in a 17% improvement in coronary vessel definition (P < 0.01). Cranial end-expiratory diaphragmatic drift occurred less frequently in the prone position (23% +/- 17% vs. 40% +/- 26% supine; P < 0.05), and navigator efficiency was higher. Prone coronary MRA results in improved SNR and CNR with enhanced coronary vessel definition. Cranial end-expiratory diaphragmatic drift also was reduced, and navigator efficiency was enhanced. When feasible, prone imaging is recommended for free-breathing coronary MRA.