Abstract:
Time periods composing the stance phase of gait can be clinically meaningful parameters for revealing differences between normal and pathological gait. This study aimed, first, to describe a novel method for detecting stance and inner-stance temporal events based on foot-worn inertial sensors; second, to extract and validate relevant metrics from those events; and third, to investigate their suitability as clinical outcomes for gait evaluations. Forty-two subjects, including healthy subjects and patients before and after surgical treatment for ankle osteoarthritis, performed 50-m walking trials while wearing foot-worn inertial sensors and pressure insoles as a reference system. Several hypotheses were evaluated to detect heel-strike, toe-strike, heel-off, and toe-off based on kinematic features. Detected events were compared with the reference system on 3193 gait cycles and showed good accuracy and precision. Absolute and relative stance periods, namely loading response, foot-flat, and push-off, were then estimated, validated, and compared statistically between populations. Besides significant differences observed in stance duration, the analysis revealed differing tendencies, notably a shorter foot-flat in healthy subjects. The results indicated which features in inertial sensors' signals should be preferred for precisely and accurately detecting temporal events against a reference standard. The system is suitable for clinical evaluations and provides temporal analysis of gait beyond the common swing/stance decomposition, through a quantitative estimation of inner-stance phases such as foot-flat.
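As a worked illustration of how the detected events yield inner-stance metrics, the sketch below splits one stance phase (heel-strike to toe-off) into the three sub-phases named in the abstract, expressed as fractions of stance time. The function name and ordering check are illustrative, not taken from the paper's software:

```python
def stance_subphases(hs, ts, ho, to):
    """Split stance (heel-strike to toe-off) into the three inner-stance
    periods, returned as fractions of total stance time.

    hs, ts, ho, to: event times in seconds within one gait cycle
    (heel-strike, toe-strike, heel-off, toe-off)."""
    stance = to - hs
    if stance <= 0 or not (hs <= ts <= ho <= to):
        raise ValueError("events must be ordered hs <= ts <= ho <= to")
    return {
        "loading_response": (ts - hs) / stance,  # heel-strike -> toe-strike
        "foot_flat": (ho - ts) / stance,         # toe-strike -> heel-off
        "push_off": (to - ho) / stance,          # heel-off -> toe-off
    }
```

The three fractions sum to one by construction, which makes them directly comparable between subjects with different walking speeds.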
Abstract:
We review methods to estimate the average crystal (grain) size and the crystal (grain) size distribution in solid rocks. Average grain sizes often provide the basis for stress estimates or rheological calculations requiring the quantification of grain sizes in a rock's microstructure. The primary data for grain size analysis are either 1D (i.e., line-intercept methods), 2D (area analysis), or 3D (e.g., computed tomography, serial sectioning). These data have been subjected to different treatments over the years, and several studies assume a certain probability function (e.g., logarithmic, square root) to calculate statistical parameters such as the mean, median, mode, or skewness of a crystal size distribution. The calculated average grain sizes have to be compatible between the different grain size estimation approaches in order to be properly applied, for example, in paleo-piezometers or grain-size-sensitive flow laws. Such compatibility is tested for different data treatments using one- and two-dimensional measurements. We propose an empirical conversion matrix for different datasets. These conversion factors provide the option to make different datasets compatible with each other, even though the primary calculations were obtained in different ways. To present an average grain size for unimodal grain size distributions, we propose using the area-weighted mean for 2D measurements and the volume-weighted mean for 3D measurements. The shape of the crystal size distribution (CSD) is important for studies of nucleation and growth of minerals. The shape of the CSD of garnet populations is compared between different 2D and 3D measurements, namely serial sectioning and computed tomography.
The comparison of directly measured 3D data, stereological data, and directly presented 2D data shows problems with the quality of the smallest grain sizes and the overestimation of small grain sizes by stereological tools, depending on the type of CSD.
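The area-weighted mean proposed for 2D measurements can be sketched as follows, assuming each sectioned grain is reduced to an equivalent circular diameter; the helper name and the diameter convention are illustrative choices, not from the paper:

```python
def area_weighted_mean_diameter(areas):
    """Area-weighted mean of equivalent circular diameters computed from
    2D grain section areas (one value per grain). Each grain's diameter
    d = 2*sqrt(A/pi) is weighted by its sectional area A, so large grains
    dominate the average, as an area-weighted mean implies."""
    import math
    if not areas:
        raise ValueError("need at least one grain area")
    total = sum(areas)
    return sum(a * 2.0 * math.sqrt(a / math.pi) for a in areas) / total
```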
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. 
However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (< or =10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
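The acceptable parameter window reported above (80-160 EM-equivalent iterations, i.e. the I × S product, with a gaussian FWHM cutoff of at most 10 mm) can be sketched as a simple filter over candidate reconstruction settings; the function is illustrative and unrelated to the actual QGS or reconstruction software:

```python
def candidate_settings(iterations, subsets, cutoffs_mm):
    """Enumerate OSEM settings and keep those inside the range the study
    found acceptable: 80 <= I*S <= 160 EM-equivalent iterations and a
    gaussian FWHM postprocessing cutoff of at most 10 mm."""
    keep = []
    for i in iterations:
        for s in subsets:
            for c in cutoffs_mm:
                if 80 <= i * s <= 160 and c <= 10:
                    keep.append((i, s, c))
    return keep
```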
Abstract:
A fundamental tenet of neuroscience is that cortical functional differentiation is related to the cross-areal differences in cyto-, receptor-, and myeloarchitectonics that are observed in ex-vivo preparations. An ongoing challenge is to create noninvasive magnetic resonance (MR) imaging techniques that offer sufficient resolution, tissue contrast, accuracy and precision to allow for characterization of cortical architecture over an entire living human brain. One exciting development is the advent of fast, high-resolution quantitative mapping of basic MR parameters that reflect cortical myeloarchitecture. Here, we outline some of the theoretical and technical advances underlying this technique, particularly in terms of measuring and correcting for transmit and receive radio frequency field inhomogeneities. We also discuss new directions in analytic techniques, including higher resolution reconstructions of the cortical surface. We then discuss two recent applications of this technique. The first compares individual and group myelin maps to functional retinotopic maps in the same individuals, demonstrating a close relationship between functionally and myeloarchitectonically defined areal boundaries (as well as revealing an interesting disparity in a highly studied visual area). The second combines tonotopic and myeloarchitectonic mapping to localize primary auditory areas in individual healthy adults, using a similar strategy as combined electrophysiological and post-mortem myeloarchitectonic studies in non-human primates.
Abstract:
The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.
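A heavily simplified stand-in for the profile training described above: building a per-position log-odds matrix from aligned selected sites and scoring candidate sequences against it. The abstract's generalized profile is trained with hidden Markov model algorithms; the pseudocount-smoothed matrix below only illustrates the underlying idea of a quantitative binding-site model:

```python
import math

def profile_from_sites(sites, pseudocount=1.0):
    """Estimate a per-position log-odds profile from equal-length aligned
    binding sites, scored against a uniform 0.25 background."""
    bases = "ACGT"
    profile = []
    for pos in range(len(sites[0])):
        counts = {b: pseudocount for b in bases}
        for site in sites:
            counts[site[pos]] += 1
        total = sum(counts.values())
        profile.append({b: math.log(counts[b] / total / 0.25) for b in bases})
    return profile

def score(profile, seq):
    """Sum of per-position log-odds; higher means closer to the selected sites."""
    return sum(col[b] for col, b in zip(profile, seq))
```

Note that this independent-position model cannot capture the non-independent base preferences revealed by the covariance analysis; that is precisely why the full profile/HMM machinery is needed.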
Abstract:
Purpose: Previous studies of the visual outcome in bilateral non-arteritic anterior ischemic optic neuropathy (NAION) have yielded conflicting results, specifically regarding congruity between fellow eyes. Prior studies have used measures of acuity and computerized perimetry, but none has compared Goldmann visual field outcomes between fellow eyes. In order to better define the concordance of visual loss in this condition, we reviewed our cases of bilateral sequential NAION, including measures of visual acuity, pupillary function, and both pattern and severity of visual field loss. Methods: We performed a retrospective chart review of 102 patients with a diagnosis of bilateral sequential NAION. Of the 102 patients, 86 were included in the study for analysis of final visual outcome between the affected eyes. Visual function was assessed using visual acuity, Goldmann visual fields, color vision, and RAPD. A quantitative total visual field score and score per quadrant was analyzed for each eye using the numerical Goldmann visual field scoring method previously described by Esterman and colleagues. Based upon these scores, we calculated the total deviation and pattern deviation between fellow eyes and between eyes of different patients. Statistical significance was determined using nonparametric tests. Results: A statistically significant correlation was found between fellow eyes for multiple parameters, including logMAR visual acuity (P = 0.0101), global visual field (P = 0.0001), superior visual field (P = 0.0001), and inferior visual field (P = 0.0001). In addition, the mean deviation of both total (P = 7 × 10⁻¹⁰) and pattern (P = 4 × 10⁻⁹) deviation analyses was significantly less between fellow eyes ("intra"-eyes) than between eyes of different patients ("inter"-eyes). Conclusions: Visual function between fellow eyes showed a fair to moderate correlation that was statistically significant.
The pattern of vision loss was also more similar in fellow eyes than between eyes of different patients. These results may allow better prediction of the visual outcome for the second eye in patients with NAION. These findings may also be useful for evaluating the efficacy of therapeutic interventions.
Abstract:
Supplementation of elderly institutionalized women with vitamin D and calcium decreased hip fractures and increased hip bone mineral density. Quantitative ultrasound (QUS) measurements can be performed in nursing homes, and easily repeated for follow-up. However, the effect of the correction of vitamin D deficiency on QUS parameters is not known. Therefore, 248 institutionalized women aged 62-98 years were included in a 2-year open controlled study. They were randomized into a treated group (n = 124), receiving 440 IU of vitamin D3 combined with 500 mg calcium (1250 mg calcium carbonate, Novartis) twice daily, and a control group (n = 124). One hundred and three women (42%), aged 84.5 +/- 7.5 years, completed the study: 50 in the treated group, 53 in the controls. QUS of the calcaneus, which measures BUA (broadband ultrasound attenuation) and SOS (speed of sound), and biochemical analysis were performed before and after 1 and 2 years of treatment. Only the results of the women with a complete follow-up were taken into account. Both groups had low initial mean serum 25-hydroxyvitamin D levels (11.9 +/- 1.2 and 11.7 +/- 1.2 micrograms/l; normal range 6.4-40.2 micrograms/l) and normal mean serum parathyroid hormone (PTH) levels (43.1 +/- 3.2 and 44.6 +/- 3.5 ng/l; normal range 10-70 ng/l, normal mean 31.8 +/- 2.3 ng/l). The treatment led to a correction of the metabolic disturbances, with an increase in 25-hydroxyvitamin D by 123% (p < 0.01) and a decrease in PTH by 18% (p < 0.05) and of alkaline phosphatase by 15% (p < 0.01). In the controls there was a worsening of the hypovitaminosis D, with a decrease of 25-hydroxyvitamin D by 51% (p < 0.01) and an increase in PTH by 51% (p < 0.01), while the serum calcium level decreased by only 2% (p < 0.01). After 2 years of treatment BUA increased significantly by 1.6% in the treated group (p < 0.05), and decreased by 2.3% in the controls (p < 0.01). 
Therefore, the difference in BUA between the treated subjects and the controls (3.9%) was significant after 2 years (p < 0.01). However, SOS decreased by the same amount in both groups (approximately 0.5%). In conclusion, BUA, but not SOS, reflected the positive effect on bone of supplementation with calcium and vitamin D3 in a population of elderly institutionalized women.
Abstract:
We hypothesized that combining clinical risk factors (CRF) with the heel stiffness index (SI) measured via quantitative ultrasound (QUS) would improve the detection of women both at low and high risk for hip fracture. Categorizing women by risk score improved the specificity of detection to 42.4%, versus 33.8% using CRF alone and 38.4% using the SI alone. This combined CRF-SI score could be used wherever and whenever DXA is not readily accessible. INTRODUCTION AND HYPOTHESIS: Several strategies have been proposed to identify women at high risk for osteoporosis-related fractures; we wanted to investigate whether combining clinical risk factors (CRF) and heel QUS parameters could provide a more accurate tool to identify women at both low and high risk for hip fracture than either CRF or QUS alone. METHODS: We pooled two Caucasian cohorts, EPIDOS and SEMOF, into a large database named "EPISEM", in which 12,064 women, 70 to 100 years old, were analyzed. Amongst all the CRF available in EPISEM, we used only the ones which were statistically significant in a Cox multivariate model. Then, we constructed a risk score by combining the QUS-derived heel stiffness index (SI) and the following seven CRF: patient age, body mass index (BMI), fracture history, fall history, diabetes history, chair-test results, and past estrogen treatment. RESULTS: Using the composite SI-CRF score, 42% of the women who did not report a hip fracture were found to be at low risk at baseline, and 57% of those who subsequently sustained a fracture were at high risk. Using the SI alone, the corresponding percentages were 38% and 52%; using CRF alone, 34% and 53%. The number of subjects in the intermediate group was reduced from 5,400 (including 112 hip fractures) with the CRF alone and 5,032 (including 111 hip fractures) with the QUS alone to 4,549 (including 100 hip fractures) with the combination score.
CONCLUSIONS: Combining clinical risk factors with heel bone ultrasound appears to correctly identify more women at low risk for hip fracture than either the stiffness index or the CRF alone; it improves the detection of women at both low and high risk.
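The triage logic behind the reported specificity figures can be sketched as follows; the cut-off values and the score scale are placeholders, not the EPISEM thresholds:

```python
def classify(score, low_cut, high_cut):
    """Three-way triage: below low_cut -> 'low' risk, above high_cut ->
    'high' risk, otherwise 'intermediate'. Cut-offs are study-specific;
    the values used here are illustrative only."""
    if score < low_cut:
        return "low"
    if score > high_cut:
        return "high"
    return "intermediate"

def specificity(scores_no_fracture, low_cut, high_cut):
    """Fraction of women without a hip fracture correctly triaged as low
    risk, matching the definition behind the 42.4% figure above."""
    labels = [classify(s, low_cut, high_cut) for s in scores_no_fracture]
    return labels.count("low") / len(labels)
```

Shrinking the intermediate group, as the combined score does, moves more women into the actionable low- and high-risk categories.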
Abstract:
Dual-energy X-ray absorptiometry (DXA) is commonly used in the care of patients for diagnostic classification of osteoporosis, low bone mass (osteopenia), or normal bone density; assessment of fracture risk; and monitoring changes in bone density over time. The development of other technologies for the evaluation of skeletal health has been associated with uncertainties regarding their applications in clinical practice. Quantitative ultrasound (QUS), a technology for measuring properties of bone at peripheral skeletal sites, is more portable and less expensive than DXA, without the use of ionizing radiation. The proliferation of QUS devices that are technologically diverse, measuring and reporting variable bone parameters in different ways, examining different skeletal sites, and having differing levels of validating data for association with DXA-measured bone density and fracture risk, has created many challenges in applying QUS for use in clinical practice. The International Society for Clinical Densitometry (ISCD) 2007 Position Development Conference (PDC) addressed clinical applications of QUS for fracture risk assessment, diagnosis of osteoporosis, treatment initiation, monitoring of treatment, and quality assurance/quality control. The ISCD Official Positions on QUS resulting from this PDC, the rationale for their establishment, and recommendations for further study are presented here.
Abstract:
Geophysical techniques can help to bridge the inherent gap, with regard to spatial resolution and range of coverage, that plagues classical hydrological methods. This has led to the emergence of the new and rapidly growing field of hydrogeophysics. Given the differing sensitivities of various geophysical techniques to hydrologically relevant parameters, and their inherent trade-off between resolution and range, the fundamental usefulness of multi-method hydrogeophysical surveys for reducing uncertainties in data analysis and interpretation is widely accepted. A major challenge arising from such endeavors is the quantitative integration of the resulting vast and diverse database in order to obtain a unified model of the probed subsurface region that is internally consistent with all available data. To address this problem, we have developed a strategy for hydrogeophysical data integration based on Monte-Carlo-type conditional stochastic simulation that we consider particularly suitable for local-scale studies characterized by high-resolution and high-quality datasets. Monte-Carlo-based optimization techniques are flexible and versatile, can account for a wide variety of data and constraints of differing resolution and hardness, and thus have the potential of providing, in a geostatistical sense, highly detailed and realistic models of the pertinent target parameter distributions. Compared to more conventional approaches of this kind, our approach provides significant advancements in the way that the larger-scale deterministic information resolved by the hydrogeophysical data can be accounted for, which represents an inherently problematic, and as yet unresolved, aspect of Monte-Carlo-type conditional simulation techniques. We present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure.
Our procedure is first tested on pertinent synthetic data and then applied to corresponding field data collected at the Boise Hydrogeophysical Research Site near Boise, Idaho, USA.
Abstract:
A pressing need exists to disentangle age-related changes from pathologic neurodegeneration. This study aims to characterize the spatial pattern and age-related differences of biologically relevant measures in vivo over the course of normal aging. Quantitative multiparameter maps that provide neuroimaging biomarkers for myelination and iron levels, parameters sensitive to aging, were acquired from 138 healthy volunteers (age range: 19-75 years). Whole-brain voxel-wise analysis revealed a global pattern of age-related degeneration. Significant demyelination occurred principally in the white matter. The observed age-related differences in myelination were anatomically specific. In line with invasive histologic reports, higher age-related differences were seen in the genu of the corpus callosum than the splenium. Iron levels were significantly increased in the basal ganglia, red nucleus, and extensive cortical regions but decreased along the superior occipitofrontal fascicle and optic radiation. This whole-brain pattern of age-associated microstructural differences in the asymptomatic population provides insight into the neurobiology of aging. The results help build a quantitative baseline from which to examine and draw a dividing line between healthy aging and pathologic neurodegeneration.
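In its simplest form, the voxel-wise age analysis reduces to fitting each voxel's quantitative parameter value against subject age. The NumPy sketch below shows that reduction under stated assumptions (ordinary least squares, no covariates); it is not the study's actual whole-brain statistical pipeline:

```python
import numpy as np

def age_slopes(maps, ages):
    """Voxel-wise linear age effect: for each voxel, the least-squares
    slope of a quantitative parameter (e.g. MT as a myelin marker)
    against age. maps: (n_subjects, n_voxels); ages: (n_subjects,).
    A negative slope indicates age-related decline at that voxel."""
    ages = np.asarray(ages, float)
    X = np.column_stack([np.ones_like(ages), ages])  # intercept + age
    coef, *_ = np.linalg.lstsq(X, np.asarray(maps, float), rcond=None)
    return coef[1]  # slope per voxel
```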
Abstract:
BACKGROUND: PCR has the potential to detect and precisely quantify specific DNA sequences, but it is not yet often used as a fully quantitative method. A number of data collection and processing strategies have been described for the implementation of quantitative PCR. However, they can be experimentally cumbersome, their relative performances have not been evaluated systematically, and they often remain poorly validated statistically and/or experimentally. In this study, we evaluated the performance of known methods and compared them with newly developed data processing strategies in terms of resolution, precision, and robustness. RESULTS: Our results indicate that simple methods that do not rely on the estimation of the efficiency of the PCR amplification may provide reproducible and sensitive data, but that they do not quantify DNA with precision. Other evaluated methods based on sigmoidal or exponential curve fitting were generally of both poor resolution and precision. A statistical analysis of the parameters that influence efficiency indicated that it depends mostly on the selected amplicon and to a lesser extent on the particular biological sample analyzed. Thus, we devised various strategies based on individual or averaged efficiency values, which were used to assess the regulated expression of several genes in response to a growth factor. CONCLUSION: Overall, qPCR data analysis methods differ significantly in their performance, and this analysis identifies methods that provide DNA quantification estimates of high precision, robustness, and reliability. These methods allow reliable estimation of relative expression ratios of twofold or higher, and our analysis provides an estimate of the number of biological samples that have to be analyzed to achieve a given precision.
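Among the efficiency-based strategies of the kind evaluated above, the classic efficiency-corrected ratio (Pfaffl-style) is the simplest; a minimal sketch, with a hypothetical function name:

```python
def expression_ratio(e_target, e_ref, dct_target, dct_ref):
    """Efficiency-corrected relative expression ratio:

        ratio = E_target**dCt_target / E_ref**dCt_ref

    E is the per-cycle amplification efficiency (2.0 = perfect doubling);
    dCt = Ct(control) - Ct(treated) for the target and reference amplicons.
    Using measured efficiencies instead of assuming E = 2 is what the
    efficiency-based strategies improve on."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)
```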
Abstract:
A system to record very high resolution (VHR) 2D and 3D seismic data on lakes was developed at the Institute of Geophysics, University of Lausanne. Several seismic surveys carried out on Lake Geneva helped us to better understand the geology of the area and to identify its sedimentary sequences. More sophisticated analysis of the data, such as the AVO (Amplitude Versus Offset) method, provides a means of deciphering the detailed structure of the complex Quaternary sedimentary fill of the Lake Geneva trough and adds a quantitative component to the geological interpretation of a seismic interface. To study the physical parameters of the sediments, we applied the AVO method at selected places, the Aubonne and Dranse river deltas, where the configuration of the strata is relatively smooth and the discontinuities between them are easy to pick. A specific acquisition layout was developed to record large incidence angles: 2D and 3D seismic data were acquired with streamers deployed end to end, providing incidence angles of up to about 40°. GPS receivers specially developed for this purpose and attached along the streamer enabled us, after post-processing of the navigation data, to determine individual hydrophone positions with an accuracy of ±0.5 m. To ensure that the system provides correct amplitude information, the streamer sensors were calibrated in an anechoic chamber using a loudspeaker as a source, characterizing their amplitude response as a function of frequency; amplitude variations of up to 10 dB were found between the hydrophones and the reference signal, and an amplitude correction for each hydrophone was computed and applied before processing. Amplitude-preserving processing was then carried out, with a surface-consistent algorithm correcting the shot-to-shot amplitude variations of the air-gun source. Intercept versus gradient cross-plots from the Aubonne and Dranse deltas allow amplitude anomalies to be classified by sediment type and potential gas content, and show that both geological discontinuities (lacustrine sediments/moraine and moraine/molasse) have well-defined trends. A 3D volume collected on the Aubonne river delta was processed to obtain AVO attributes; amplitude maps of the water-bottom reflectivity revealed high-reflectivity anomalies that characterize the channels. Inversion of the Zoeppritz equation at the water bottom using the Levenberg-Marquardt algorithm was carried out to estimate V_P, V_S, and ρ of the sediments immediately under the lake bottom, and a statistical study of the inversion results allows the amplitude variation with offset to be simulated. The inversion gave, under the water layer, a mud layer with V_P = 1461 m/s, ρ = 1182 kg/m³, and V_S = 383 m/s.
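The intercept and gradient used in such cross-plots come from fitting reflection amplitude against incidence angle. A minimal sketch using the two-term Shuey approximation R(θ) ≈ A + B·sin²θ, a standard AVO simplification of the Zoeppritz equations rather than the thesis's exact processing:

```python
import math

def intercept_gradient(angles_deg, reflectivities):
    """Least-squares fit of the two-term Shuey approximation
    R(theta) ~ A + B*sin^2(theta), returning the intercept A and
    gradient B. Plain linear regression on x = sin^2(theta)."""
    xs = [math.sin(math.radians(a)) ** 2 for a in angles_deg]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(reflectivities) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (r - my) for x, r in zip(xs, reflectivities))
    gradient = sxy / sxx
    intercept = my - gradient * mx
    return intercept, gradient
```

Each (A, B) pair from a gather becomes one point on the intercept-gradient cross-plot, where anomalies cluster by lithology and gas content.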
Abstract:
For the detection and management of osteoporosis and osteoporosis-related fractures, quantitative ultrasound (QUS) is emerging as a relatively low-cost and readily accessible alternative to dual-energy X-ray absorptiometry (DXA) measurement of bone mineral density (BMD) in certain circumstances. The following is a brief but thorough review of the existing literature with respect to the use of QUS in six settings: 1) assessing fragility fracture risk; 2) diagnosing osteoporosis; 3) initiating osteoporosis treatment; 4) monitoring osteoporosis treatment; 5) osteoporosis case finding; and 6) quality assurance and control. Many QUS devices exist that differ considerably with respect to the parameters they measure and the strength of empirical evidence supporting their use. In general, heel QUS appears to be the most tested and most effective. Overall, some, but not all, heel QUS devices are effective in assessing fracture risk in some, but not all, populations, the evidence being strongest for Caucasian females over 55 years old. Otherwise, the evidence is fair with respect to certain devices allowing for an accurate diagnosis of the likelihood of osteoporosis, and generally fair to poor in terms of QUS use when initiating or monitoring osteoporosis treatment. A reasonable protocol is proposed herein for case-finding purposes, which relies on a combined assessment of clinical risk factors (CRF) and heel QUS. Finally, several recommendations are made for quality assurance and control.
Abstract:
Multi-center studies using magnetic resonance imaging facilitate studying small effect sizes, global population variance, and rare diseases. The reliability and sensitivity of these multi-center studies crucially depend on the comparability of the data generated at different sites and time points. The level of inter-site comparability is still controversial for conventional anatomical T1-weighted MRI data. Quantitative multi-parameter mapping (MPM) was designed to provide MR parameter measures that are comparable across sites and time points, i.e., 1 mm high-resolution maps of the longitudinal relaxation rate (R1 = 1/T1), effective proton density (PD*), magnetization transfer saturation (MT), and effective transverse relaxation rate (R2* = 1/T2*). MPM was validated at 3T for use in multi-center studies by scanning five volunteers at three different sites. We determined the inter-site bias and the inter-site and intra-site coefficients of variation (CoV) for typical morphometric measures [i.e., gray matter (GM) probability maps used in voxel-based morphometry] and the four quantitative parameters. The inter-site bias and CoV were smaller than 3.1% and 8%, respectively, except for the inter-site CoV of R2* (<20%). The GM probability maps based on the MT parameter maps had a 14% higher inter-site reproducibility than maps based on conventional T1-weighted images. The low inter-site bias and variance in the parameters and derived GM probability maps confirm the high comparability of the quantitative maps across sites and time points. The reliability, short acquisition time, high resolution, and detailed insights into brain microstructure provided by MPM make it an efficient tool for multi-center imaging studies.
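The inter-site coefficient of variation reported above is, for a given parameter and measurement, simply the standard deviation across sites divided by the mean; a minimal sketch:

```python
def coefficient_of_variation(values):
    """CoV in percent: sample standard deviation across sites divided by
    the mean, the comparability metric reported in the abstract (e.g.
    inter-site CoV below 8% for most MPM parameters)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * var ** 0.5 / mean
```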