40 results for estimated parameter
at Université de Lausanne, Switzerland
Abstract:
In radionuclide metrology, Monte Carlo (MC) simulation is widely used to compute parameters associated with primary measurements or calibration factors. Although MC methods are used to estimate uncertainties, the uncertainty associated with radiation transport in MC calculations is usually difficult to estimate. Counting statistics is the most obvious component of MC uncertainty and has to be checked carefully, particularly when variance reduction is used. However, in most cases fluctuations associated with counting statistics can be reduced using sufficient computing power. Cross-section data have intrinsic uncertainties that induce correlations when apparently independent codes are compared. Their effect on the uncertainty of the estimated parameter is difficult to determine and varies widely from case to case. Finally, the most significant uncertainty component for radionuclide applications is usually that associated with the detector geometry. Recent 2D and 3D x-ray imaging tools may be utilized, but comparison with experimental data as well as adjustments of parameters are usually inevitable.
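As a toy illustration of the counting-statistics component discussed above, the sketch below shows the 1/√N scaling of the relative statistical uncertainty of an MC-estimated detection efficiency. The detection probability and history counts are invented for the example, not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
p_detect = 0.35  # hypothetical detection probability for the toy detector

def efficiency_estimate(n_histories: int) -> float:
    """One toy MC run: fraction of simulated decays registered by the detector."""
    return rng.binomial(n_histories, p_detect) / n_histories

# The relative statistical uncertainty of a counting estimate scales as
# 1/sqrt(N): 100x more histories buys roughly a 10x smaller statistical error.
rel_sd = {}
for n in (10_000, 1_000_000):
    estimates = [efficiency_estimate(n) for _ in range(200)]
    rel_sd[n] = float(np.std(estimates) / np.mean(estimates))
    print(f"N = {n:>9,}: relative SD of the efficiency estimate ~ {rel_sd[n]:.5f}")
```

This is exactly the component that "can be reduced using sufficient computing power"; the geometry and cross-section components do not shrink this way.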
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models called S-systems have recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have been recently proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We introduce a new method for estimating S-system parameters from time-course profiles. We show that the performance of this method compares favorably with competing methods for ideal profiles, and that it also allows the determination of parameters for noisy profiles.
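For context, the canonical S-system form is dX_i/dt = α_i ∏_j X_j^(g_ij) − β_i ∏_j X_j^(h_ij), i.e. a power-law production term minus a power-law degradation term per variable. A minimal forward simulation of a hypothetical 2-variable S-system is sketched below; the parameter values are invented for illustration and are not taken from the cited 4- and 30-dimensional systems.

```python
import numpy as np

# Canonical S-system: dX_i/dt = alpha_i * prod_j X_j**g_ij - beta_i * prod_j X_j**h_ij
# All parameter values below are illustrative only.
alpha = np.array([2.0, 3.0])
beta = np.array([1.2, 0.8])
g = np.array([[0.0, -0.5],   # production kinetic orders g_ij
              [0.6,  0.0]])
h = np.array([[0.7, 0.0],    # degradation kinetic orders h_ij
              [0.0, 0.9]])

def s_system_rhs(x: np.ndarray) -> np.ndarray:
    """Right-hand side of the S-system ODEs at state x (x > 0)."""
    production = alpha * np.prod(x ** g, axis=1)
    degradation = beta * np.prod(x ** h, axis=1)
    return production - degradation

def simulate(x0, dt=1e-3, steps=50_000):
    """Forward-Euler time course; a fine step keeps the sketch dependency-free."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(steps):
        x = x + dt * s_system_rhs(x)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate([1.0, 1.0])
print("state at final time:", traj[-1])
```

Parameter estimation then amounts to choosing (α, β, g, h) so that such simulated time courses match the observed profiles, which is the hard search problem the abstract addresses.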
Abstract:
Although polychlorinated biphenyls (PCBs) have been banned in many countries for more than three decades, exposures to PCBs continue to be of concern due to their long half-lives and carcinogenic effects. In National Institute for Occupational Safety and Health studies, we are using semiquantitative plant-specific job exposure matrices (JEMs) to estimate historical PCB exposures for workers (n = 24,865) exposed to PCBs from 1938 to 1978 at three capacitor manufacturing plants. A subcohort of these workers (n = 410) employed in two of these plants had serum PCB concentrations measured at up to four times between 1976 and 1989. Our objectives were to evaluate the strength of association between an individual worker's measured serum PCB levels and the same worker's cumulative exposure estimated through 1977 with the (1) JEM and (2) duration of employment, and to calculate the explained variance the JEM provides for serum PCB levels using (3) simple linear regression. Consistent strong and statistically significant associations were observed between the cumulative exposures estimated with the JEM and serum PCB concentrations for all years. The strength of association between duration of employment and serum PCBs was good for highly chlorinated (Aroclor 1254/HPCB) but not less chlorinated (Aroclor 1242/LPCB) PCBs. In the simple regression models, cumulative occupational exposure estimated using the JEMs explained 14-24% of the variance of the Aroclor 1242/LPCB and 22-39% for Aroclor 1254/HPCB serum concentrations. We regard the cumulative exposure estimated with the JEM as a better estimate of PCB body burdens than serum concentrations quantified as Aroclor 1242/LPCB and Aroclor 1254/HPCB.
Abstract:
BACKGROUND AND AIM: There is an ongoing debate on which obesity marker better predicts cardiovascular disease (CVD). In this study, the relationships between obesity markers and high (>5%) 10-year risk of fatal CVD were assessed. METHODS AND RESULTS: A cross-sectional study was conducted including 3047 women and 2689 men aged 35-75 years. Body fat percentage was assessed by tetrapolar bioimpedance. CVD risk was assessed using the SCORE risk function and gender- and age-specific cut points for body fat were derived. The diagnostic accuracy of each obesity marker was evaluated through receiver operating characteristics (ROC) analysis. In men, body fat presented a higher correlation (r=0.31) with 10-year CVD risk than waist/hip ratio (WHR, r=0.22), waist (r=0.22) or BMI (r=0.19); the corresponding values in women were 0.18, 0.15, 0.11 and 0.05, respectively (all p<0.05). In both genders, body fat showed the highest area under the ROC curve (AUC): in men, the AUCs (95% confidence intervals) were 76.0 (73.8-78.2), 67.3 (64.6-69.9), 65.8 (63.1-68.5) and 60.6 (57.9-63.5) for body fat, WHR, waist and BMI, respectively. In women, the corresponding values were 72.3 (69.2-75.3), 66.6 (63.1-70.2), 64.1 (60.6-67.6) and 58.8 (55.2-62.4). The use of the body fat percentage criterion enabled the capture of three times more subjects with high CVD risk than the BMI criterion, and almost twice as many as the WHR criterion. CONCLUSION: Obesity defined by body fat percentage is more strongly related to 10-year risk of fatal CVD than obesity markers based on WHR, waist or BMI.
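The AUC values reported above can be computed without tracing the ROC curve, via the Mann-Whitney identity: the AUC equals the probability that a randomly chosen high-risk subject has a larger marker value than a randomly chosen low-risk subject. A sketch with invented body-fat-like data (the group means and sizes are illustrative, not the study's):

```python
import numpy as np

def roc_auc(marker: np.ndarray, high_risk: np.ndarray) -> float:
    """AUC via the Mann-Whitney identity: P(marker_pos > marker_neg),
    counting ties as 0.5."""
    pos = marker[high_risk == 1]
    neg = marker[high_risk == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical data: a body-fat-like marker separating the groups moderately well.
rng = np.random.default_rng(0)
low = rng.normal(25.0, 5.0, size=500)    # low-risk subjects
high = rng.normal(32.0, 5.0, size=100)   # high-risk subjects
marker = np.concatenate([low, high])
labels = np.concatenate([np.zeros(500), np.ones(100)])
auc = roc_auc(marker, labels)
print(f"AUC = {auc:.3f}")
```

Comparing such AUCs across markers (body fat, WHR, waist, BMI) is precisely the analysis the abstract reports.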
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. 
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. 
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
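A minimal sketch of the gradual-deformation idea on a toy linear inverse problem (all sizes, names and noise levels invented, and far simpler than the georadar setting): for independent N(0, I) fields, m' = m·cos θ + z·sin θ is again N(0, I) and the proposal is reversible with respect to that prior, so the Metropolis acceptance ratio reduces to the likelihood ratio and θ tunes the perturbation strength.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: infer a Gaussian parameter vector m from noisy linear
# data d = G m + e. Sizes and noise level are illustrative only.
n = 50
G = rng.normal(size=(20, n))
m_true = rng.normal(size=n)
d_obs = G @ m_true + rng.normal(scale=0.5, size=20)

def log_likelihood(m):
    r = d_obs - G @ m
    return -0.5 * np.sum(r ** 2) / 0.5 ** 2

def propose(m, theta):
    """Gradual-deformation proposal: preserves the N(0, I) prior for any theta."""
    z = rng.normal(size=m.shape)
    return m * np.cos(theta) + z * np.sin(theta)

m = rng.normal(size=n)
ll = ll0 = log_likelihood(m)
accepted = 0
n_iter = 5000
for _ in range(n_iter):
    m_prop = propose(m, theta=0.1)
    ll_prop = log_likelihood(m_prop)
    # Prior and proposal terms cancel by construction; only the likelihood remains.
    if np.log(rng.uniform()) < ll_prop - ll:
        m, ll = m_prop, ll_prop
        accepted += 1
print(f"acceptance rate: {accepted / n_iter:.2f}")
print(f"log-likelihood: {ll0:.1f} -> {ll:.1f}")
```

Because the perturbation strength is a single continuous knob, it can be tuned to keep acceptance rates reasonable regardless of the number of model parameters, which is the flexibility the thesis highlights.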
Abstract:
Historically, it has been difficult to monitor the acute impact of anticancer therapies on hematopoietic organs on a whole-body scale. Deeper understanding of the effect of treatments on bone marrow would be of great potential value in the rational design of intensive treatment regimens. 3'-deoxy-3'-(18)F-fluorothymidine ((18)F-FLT) is a functional radiotracer used to study cellular proliferation. It is trapped in cells in proportion to thymidine-kinase 1 enzyme expression, which is upregulated during DNA synthesis. This study investigates the potential of (18)F-FLT to monitor acute effects of chemotherapy on cellular proliferation and its recovery in bone marrow, spleen, and liver during treatment with 2 different chemotherapy regimens.
Abstract:
PURPOSE: All kinds of blood manipulations aim to increase the total hemoglobin mass (tHb-mass). To establish tHb-mass as an effective screening parameter for detecting blood doping, the knowledge of its normal variation over time is necessary. The aim of the present study, therefore, was to determine the intraindividual variance of tHb-mass in elite athletes during a training year emphasizing off, training, and race seasons at sea level. METHODS: tHb-mass and hemoglobin concentration ([Hb]) were determined in 24 endurance athletes five times during a year and were compared with a control group (n = 6). An analysis of covariance was used to test the effects of training phases, age, gender, competition level, body mass, and training volume. Three error models, based on 1) a total percentage error of measurement, 2) the combination of a typical percentage error (TE) of analytical origin with an absolute SD of biological origin, and 3) between-subject and within-subject variance components as obtained by an analysis of variance, were tested. RESULTS: In addition to the expected influence of performance status, the main results were that the effects of training volume (P = 0.20) and training phases (P = 0.81) on tHb-mass were not significant. We found that within-subject variations mainly have an analytical origin (TE approximately 1.4%) and a very small SD (7.5 g) of biological origin. CONCLUSION: tHb-mass shows very low individual oscillations during a training year (<6%), and these oscillations are below the expected changes in tHb-mass due to erythropoietin (EPO) application or blood infusion (approximately 10%). The high stability of tHb-mass over a period of 1 year suggests that it should be included in an athlete's biological passport and analyzed by recently developed probabilistic inference techniques that define subject-based reference ranges.
Abstract:
X-ray is a technology that is used for numerous applications in the medical field. The process of X-ray projection gives a 2-dimension (2D) grey-level texture from a 3-dimension (3D) object. Until now no clear demonstration or correlation has positioned the 2D texture analysis as a valid indirect evaluation of the 3D microarchitecture. TBS is a new texture parameter based on the measure of the experimental variogram. TBS evaluates the variation between 2D image grey-levels. The aim of this study was to evaluate existing correlations between 3D bone microarchitecture parameters - evaluated from μCT reconstructions - and the TBS value, calculated on 2D projected images. 30 dried human cadaveric vertebrae were acquired on a micro-scanner (eXplorer Locus, GE) at an isotropic resolution of 93 μm. 3D vertebral body models were used. The following 3D microarchitecture parameters were used: bone volume fraction (BV/TV), trabecular thickness (TbTh), trabecular space (TbSp), trabecular number (TbN) and connectivity density (ConnD). 3D/2D projections were computed by taking into account the Beer-Lambert law at X-ray energies of 50, 100 and 150 keV. TBS was assessed on the 2D projected images. Correlations between TBS and the 3D microarchitecture parameters were evaluated using linear regression analysis. Paired t-tests were used to assess the X-ray energy effects on TBS. Multiple linear regressions (backward) were used to evaluate relationships between TBS and 3D microarchitecture parameters using a bootstrap process. BV/TV of the sample ranged from 18.5 to 37.6% with an average value of 28.8%. Correlation analysis showed that TBS was strongly correlated with ConnD (0.856≤r≤0.862; p<0.001) and with TbN (0.805≤r≤0.810; p<0.001), and negatively with TbSp (−0.714≤r≤−0.726; p<0.001), regardless of X-ray energy. Results show that lower TBS values are related to "degraded" microarchitecture, with low ConnD, low TbN and high TbSp. The opposite is also true. 
X-ray energy had no effect on TBS, nor on the correlations between TBS and the 3D microarchitecture parameters. In this study, we demonstrated that TBS was significantly correlated with the 3D microarchitecture parameters ConnD and TbN, and negatively with TbSp, no matter which X-ray energy was used. This article is part of a Special Issue entitled ECTS 2011. Disclosure of interest: None declared.
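An experimental variogram of a 2D grey-level image, the quantity TBS is built on, can be sketched as below. The exact TBS formula is not given in the abstract, so the final slope-based index is only a schematic stand-in, and the test image is synthetic.

```python
import numpy as np

def experimental_variogram(img: np.ndarray, max_lag: int) -> np.ndarray:
    """Axis-aligned experimental variogram of a 2D grey-level image:
    gamma(h) = mean of (z(x+h) - z(x))**2 / 2 over pixel pairs at lag h."""
    gammas = []
    for lag in range(1, max_lag + 1):
        dx = img[:, lag:] - img[:, :-lag]      # horizontal pixel pairs
        dy = img[lag:, :] - img[:-lag, :]      # vertical pixel pairs
        sq = np.concatenate([dx.ravel() ** 2, dy.ravel() ** 2])
        gammas.append(0.5 * sq.mean())
    return np.array(gammas)

# Synthetic projected image: a Brownian-sheet-like texture, for illustration only.
rng = np.random.default_rng(3)
base = np.cumsum(np.cumsum(rng.normal(size=(64, 64)), axis=0), axis=1)
img = base / base.std()

gamma = experimental_variogram(img, max_lag=8)
# A TBS-like index can be derived from the initial log-log slope of the
# variogram; this is a schematic stand-in, not the published definition.
lags = np.arange(1, 9)
slope = np.polyfit(np.log(lags), np.log(gamma), 1)[0]
print(f"log-log variogram slope: {slope:.2f}")
```

The intuition matching the abstract's finding: a rapidly rising variogram at short lags reflects abrupt grey-level variation, i.e. a dense, well-connected trabecular texture, whereas a flat one reflects a sparse, "degraded" texture.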
Abstract:
BACKGROUND: Despite recent advances in acute stroke treatment, basilar artery occlusion (BAO) is associated with a death or disability rate of close to 70%. Randomised trials have shown the safety and efficacy of intravenous thrombolysis (IVT) given within 4.5 h and have shown promising results of intra-arterial thrombolysis given within 6 h of symptom onset of acute ischaemic stroke, but these results do not directly apply to patients with an acute BAO because few, if any, of these patients were included in randomised acute stroke trials. Recently, the results of the Basilar Artery International Cooperation Study (BASICS), a prospective registry of patients with acute symptomatic BAO, challenged the often-held assumption that intra-arterial treatment (IAT) is superior to IVT. Our observations in the BASICS registry underscore that we continue to lack a proven treatment modality for patients with an acute BAO and that current clinical practice varies widely. DESIGN: BASICS is a randomised controlled, multicentre, open-label, phase III intervention trial with blinded outcome assessment, investigating the efficacy and safety of additional IAT after IVT in patients with BAO. The trial aims to include 750 patients, aged 18 to 85 years, with CT angiography or MR angiography confirmed BAO treated with IVT. Patients will be randomised between additional IAT followed by optimal medical care versus optimal medical care alone. IVT has to be initiated within 4.5 h from the estimated time of BAO and IAT within 6 h. The primary outcome parameter will be favourable outcome at day 90, defined as a modified Rankin Scale score of 0-3. DISCUSSION: The BASICS registry was observational and has all the limitations of a non-randomised study. 
As the IAT approach becomes increasingly available and frequently utilised, an adequately powered randomised controlled phase III trial investigating the added value of this therapy in patients with an acute symptomatic BAO is needed (clinicaltrials.gov: NCT01717755).
Abstract:
In the context of Systems Biology, computer simulations of gene regulatory networks provide a powerful tool to validate hypotheses and to explore possible system behaviors. Nevertheless, modeling a system poses some challenges of its own: especially the step of model calibration is often difficult due to insufficient data. For example when considering developmental systems, mostly qualitative data describing the developmental trajectory is available while common calibration techniques rely on high-resolution quantitative data. Focusing on the calibration of differential equation models for developmental systems, this study investigates different approaches to utilize the available data to overcome these difficulties. More specifically, the fact that developmental processes are hierarchically organized is exploited to increase convergence rates of the calibration process as well as to save computation time. Using a gene regulatory network model for stem cell homeostasis in Arabidopsis thaliana the performance of the different investigated approaches is evaluated, documenting considerable gains provided by the proposed hierarchical approach.
Abstract:
Models of codon evolution have attracted particular interest because of their unique capabilities to detect selection forces and their high fit when applied to sequence evolution. We describe here a novel approach for modeling codon evolution based on the Kronecker product of matrices. The 61 × 61 codon substitution rate matrix is created using the Kronecker product of three 4 × 4 nucleotide substitution matrices, the equilibrium frequency of codons, and the selection rate parameter. The entries of the nucleotide substitution matrices and the selection rate are treated as parameters of the model, which are optimized by maximum likelihood. Our fully mechanistic model allows the instantaneous substitution matrix between codons to be fully estimated with only 19 parameters instead of 3,721, by exploiting the biological interdependence existing between positions within codons. We illustrate the properties of our model using computer simulations and assess its relevance by comparing the AICc measures of our model and other models of codon evolution on simulations and a large range of empirical data sets. We show that our model fits most biological data better than current codon models. Furthermore, the parameters in our model can be interpreted in a similar way as the exchangeability rates found in empirical codon models.
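The matrix bookkeeping behind such a construction can be sketched as follows: the Kronecker product of three 4 × 4 per-position matrices yields a 64 × 64 matrix indexed by codons, and restricting rows and columns to the 61 sense codons gives the 61 × 61 grid. The per-position matrices here are random symmetric placeholders, and the published model additionally folds in codon equilibrium frequencies and a selection parameter, so this shows only the index arithmetic, not the actual model.

```python
import numpy as np
from itertools import product

bases = "TCAG"
stop_codons = {"TAA", "TAG", "TGA"}  # standard genetic code
all_codons = ["".join(c) for c in product(bases, repeat=3)]      # 64 codons
sense_codons = [c for c in all_codons if c not in stop_codons]   # 61 codons

def random_symmetric(rng):
    """Placeholder symmetric 4x4 per-position matrix (values illustrative).
    Diagonal set to 1 so single- and double-position changes keep nonzero
    products in the Kronecker construction."""
    m = rng.uniform(0.5, 1.5, size=(4, 4))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 1.0)
    return m

rng = np.random.default_rng(5)
m1, m2, m3 = (random_symmetric(rng) for _ in range(3))

# kron ordering matches product(bases, repeat=3): row 16*i1 + 4*i2 + i3
# corresponds to the codon (bases[i1], bases[i2], bases[i3]).
K = np.kron(np.kron(m1, m2), m3)                 # 64 x 64
idx = [all_codons.index(c) for c in sense_codons]
Q = K[np.ix_(idx, idx)]                          # 61 x 61
print(Q.shape)
```

The parameter economy the abstract reports (19 parameters instead of 3,721) comes precisely from the fact that the 61 × 61 matrix is generated by the three small per-position matrices plus the selection rate.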
Abstract:
Part I of this series of articles focused on the construction of graphical probabilistic inference procedures, at various levels of detail, for assessing the evidential value of gunshot residue (GSR) particle evidence. The proposed models - in the form of Bayesian networks - address the issues of background presence of GSR particles, analytical performance (i.e., the efficiency of evidence searching and analysis procedures) and contamination. The use and practical implementation of Bayesian networks for case pre-assessment is also discussed. This paper, Part II, concentrates on Bayesian parameter estimation. This topic complements Part I in that it offers means for producing estimates useable for the numerical specification of the proposed probabilistic graphical models. Bayesian estimation procedures are given a primary focus of attention because they allow the scientist to combine (his/her) prior knowledge about the problem of interest with newly acquired experimental data. The present paper also considers further topics such as the sensitivity of the likelihood ratio due to uncertainty in parameters and the study of likelihood ratio values obtained for members of particular populations (e.g., individuals with or without exposure to GSR).
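The kind of Bayesian estimation described above can be illustrated with the simplest conjugate case: a Beta prior on the probability that an individual from the general population carries background GSR particles, updated with binomial survey counts. All numbers below are invented for illustration, and the 1/γ likelihood-ratio formula is only a schematic stand-in for a case-specific LR.

```python
import numpy as np

# Illustrative (invented) survey: k of n sampled individuals from the general
# population carried at least one GSR-like particle.
n, k = 120, 7

# With a Beta(a0, b0) prior on the background-presence probability, Beta-Binomial
# conjugacy gives the posterior Beta(a0 + k, b0 + n - k) in closed form.
a0, b0 = 1.0, 1.0                      # uniform prior, an illustrative choice
a_post, b_post = a0 + k, b0 + n - k

post_mean = a_post / (a_post + b_post)
post_sd = np.sqrt(a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1)))
print(f"posterior mean = {post_mean:.4f}, posterior SD = {post_sd:.4f}")

# Sensitivity of a likelihood ratio to uncertainty in this parameter can be
# probed by pushing posterior draws through the LR formula; LR = 1/gamma here
# is only a schematic stand-in for the case-specific expression.
draws = np.random.default_rng(7).beta(a_post, b_post, size=10_000)
lr_draws = 1.0 / draws
lo, hi = np.percentile(lr_draws, [2.5, 97.5])
print(f"95% interval for the schematic LR: [{lo:.1f}, {hi:.1f}]")
```

This is the combination the abstract emphasizes: prior knowledge plus newly acquired experimental data, with the resulting parameter uncertainty propagated into the likelihood ratio.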
Abstract:
Aims: In a head-to-head study, we compared the effects of strontium ranelate (SrRan) and alendronate (ALN), anti-osteoporotic agents with antifracture efficacy, on bone microstructure, a component of bone quality and hence of bone strength. Methods: In a randomised, double-dummy, double-blind controlled trial, 88 postmenopausal osteoporotic women were randomised to SrRan 2 g/day or ALN 70 mg/week for 2 years. Microstructure of the distal radius and distal tibia was assessed by HR-pQCT after 3, 6, 12, 18 and 24 months of treatment. The primary endpoint was the relative change from baseline in HR-pQCT variables. An ITT analysis was applied. Results: Baseline characteristics were similar in both groups (mean ± SD): age: 63.6±7.5 vs. 63.7±7.6 yrs; L1-L4 T-score: -2.7±0.8 vs. -2.8±0.8; cortical thickness (CTh): 721±242 vs. 753±263 μm; trabecular bone fraction (BV/TV): 9.5±2.5 vs. 9.3±2.7%; cortical density: 750±87 vs. 745±78 mg/cm³. Over 2 years, changes in distal radius values were within 1 to 2%, with no significant between-group differences except for cortical density. In contrast, distal tibia CTh, BV/TV, and trabecular and cortical densities increased significantly more in the SrRan group than in the ALN group (Table). No significant between-group differences were observed for the remaining measured parameters (trabecular number, trabecular spacing, and trabecular thickness). After 2 years, L1-L4 and hip aBMD increases were similar to results from pivotal trials (L1-L4: +6.5% and +5.6%; total hip: +4.1% and +2.9%, in the SrRan and ALN groups, respectively). In the SrRan group, bALP increased by a median of 18% (p<0.001) and sCTX decreased by a median of -16% (p=0.005), while in the ALN group, bALP and CTX decreased by medians of -31% (p<0.001) and -59% (p<0.001), respectively. 
Relative changes from baseline to last observation (%):

Parameter                       SrRan       ALN         Estimated between-group difference   p value
CTh (μm)                        6.29±9.53   0.93±6.23   5.411±1.836                          0.004
BV/TV (%)                       2.48±5.13   0.84±3.81   1.783±0.852                          0.040
Trabecular density (mgHA/cm³)   2.47±5.07   0.88±4.00   1.729±0.859                          0.048
Cortical density (mgHA/cm³)     1.43±2.77   0.36±2.14   1.137±0.530                          0.045

The two treatments were well tolerated. Conclusions: Within the constraints related to HR-pQCT technology, it appears that strontium ranelate has greater effects than alendronate on distal tibia cortical thickness, trabecular and cortical bone densities in women with postmenopausal osteoporosis after two years of treatment. A concomitant significant increase in bone formation marker is observed in the SrRan group.
Abstract:
In Quantitative Microbial Risk Assessment, it is vital to understand how lag times of individual cells are distributed over a bacterial population. Such identified distributions can be used to predict the time by which, in a growth-supporting environment, a few pathogenic cells can multiply to a poisoning concentration level. We model the lag time of a single cell, inoculated into a new environment, by the delay of the growth function characterizing the generated subpopulation. We introduce an easy-to-implement procedure, based on the method of moments, to estimate the parameters of the distribution of single cell lag times. The advantage of the method is especially apparent for cases where the initial number of cells is small and random, and the culture is detectable only in the exponential growth phase.
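A method-of-moments sketch for such a problem, under an assumed Gamma lag-time distribution (an illustrative family choice, not necessarily the one used in the paper): matching the theoretical mean kθ and variance kθ² to sample moments gives closed-form estimates.

```python
import numpy as np

# Method of moments under an assumed Gamma(k, theta) lag-time distribution:
#   mean = k * theta,  variance = k * theta**2
# so  theta_hat = var / mean  and  k_hat = mean**2 / var.
rng = np.random.default_rng(11)
true_k, true_theta = 4.0, 1.5          # illustrative "true" values
lags = rng.gamma(true_k, true_theta, size=20_000)  # synthetic single-cell lag times

mean, var = lags.mean(), lags.var()
theta_hat = var / mean
k_hat = mean ** 2 / var
print(f"k_hat = {k_hat:.2f}, theta_hat = {theta_hat:.2f}")
```

The paper's setting is harder than this sketch: single-cell lag times are not observed directly but only through the delay of the subpopulation growth curve, which is why the moment-matching there is done on quantities recoverable from the exponential phase.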