81 results for estimation of parameters
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Due to population pressure in hazardous zones, the socio-economic impact is much higher than in the past. The development of indicative susceptibility hazard maps is therefore of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model has been developed for regional susceptibility assessments using a digital elevation model (DEM) with a GIS-based approach. The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its flexibility: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. The quality of the DEM proved to be the most important factor for obtaining reliable propagation results, as well as for identifying potential debris flow sources.
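As a minimal illustration of DEM-based flow propagation (not Flow-R's actual spreading algorithms), the following sketch follows the steepest descent from a hypothetical source cell on a tiny DEM; all values are made up.

```python
import numpy as np

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])  # toy elevation grid

def steepest_descent_path(dem, start):
    """Follow the steepest downward neighbour until reaching a local minimum."""
    path, (i, j) = [start], start
    while True:
        best, step = 0.0, None
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < dem.shape[0] and 0 <= nj < dem.shape[1]:
                    drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)  # slope-weighted drop
                    if drop > best:
                        best, step = drop, (ni, nj)
        if step is None:          # no lower neighbour: deposition point
            return path
        path.append(step)
        i, j = step

print(steepest_descent_path(dem, (0, 0)))  # [(0, 0), (1, 1), (2, 2)]
```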
Abstract:
Objectives: The study objective was to derive reference pharmacokinetic curves of antiretroviral drugs (ART) from available population pharmacokinetic (Pop-PK) studies, to be used for optimizing therapeutic-drug-monitoring-guided dosage adjustment. Methods: A systematic search of Pop-PK studies of 8 ART in adults was performed in PubMed. To simulate reference PK curves, a summary of the PK parameters was obtained for each drug using a meta-analysis approach. Most models used a one-compartment structure, which was therefore chosen as the reference model. Models with bi-exponential disposition were simplified to one compartment, since the first distribution phase was rapid and not determinant for describing the terminal elimination phase, the most relevant for this project. Different absorption models were standardized to first-order absorption processes. Apparent clearance (CL), apparent volume of distribution of the terminal phase (Vz) and absorption rate constant (ka), together with inter-individual variability, were pooled into summary mean values weighted by the number of plasma levels; intra-individual variability was weighted by the number of individuals in each study. Simulations based on the summary PK parameters served to construct concentration percentile curves (NONMEM®). Concordance between individual and summary parameters was assessed graphically using forest plots. To test robustness, the difference between simulated curves based on published and on summary parameters was calculated using efavirenz as a probe drug. Results: CL was readily accessible from all studies. For one-compartment studies, Vz was the central volume of distribution; for two-compartment studies, Vz was CL/λz. ka was used directly or, for more complicated absorption models, derived from the mean absorption time (MAT), assuming MAT = 1/ka. The value of CL for each drug was in excellent agreement across all Pop-PK models, suggesting that the minimal concentration derived from the summary models was adequately characterized. The comparison of the concentration-time profile for efavirenz between published and summary PK parameters revealed no more than a 20% difference. Although our approach appears adequate for estimating the elimination phase, the simplification of the absorption phase might lead to a small bias shortly after drug intake. Conclusions: Simulated reference percentile curves based on such an approach represent a useful tool for interpreting drug concentrations. This Pop-PK meta-analysis approach should be further validated and could be extended to more sophisticated computerized tools for the Bayesian TDM of ART.
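A minimal sketch of the pooling and simulation steps described above, with hypothetical study values (the actual work used NONMEM®): CL estimates are weighted by the number of plasma levels, and reference percentiles are simulated from a one-compartment model with first-order absorption.

```python
import numpy as np

# Hypothetical per-study estimates for one drug: (CL in L/h, number of plasma levels)
studies = [(9.5, 420), (10.8, 310), (9.9, 150)]
cl = np.array([c for c, _ in studies])
w = np.array([n for _, n in studies], dtype=float)
cl_pooled = np.sum(w * cl) / w.sum()          # summary mean weighted by plasma levels

def conc(t, dose, cl, v, ka, f=1.0):
    """One-compartment model with first-order absorption."""
    ke = cl / v                                # elimination rate constant
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Reference percentiles, assuming 30% log-normal inter-individual variability on CL
rng = np.random.default_rng(0)
t = np.linspace(0.0, 24.0, 97)
cl_i = cl_pooled * np.exp(rng.normal(0.0, 0.3, size=1000))
curves = np.array([conc(t, dose=600.0, cl=c, v=250.0, ka=0.6) for c in cl_i])
p5, p50, p95 = np.percentile(curves, [5, 50, 95], axis=0)
```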
Abstract:
A variety of behavioural traits have substantial effects on the gene dynamics and genetic structure of local populations. The mating system is a plastic trait that varies with environmental conditions in the domestic cat (Felis catus), allowing an intraspecific comparison of the impact of this feature on the genetic characteristics of a population. To assess the potential effect of the heterogeneity of males' contribution to the next generation on variance effective size, we applied the ecological approach of Nunney & Elam (1994), based on a demographic and behavioural study, and the genetic 'temporal methods' of Waples (1989) and Berthier et al. (2002) using microsatellite markers. The two cat populations studied were nearly closed, similar in size and survival parameters, but differed in their mating system. Immigration appeared extremely restricted in both cases due to environmental and social constraints. As expected, the ratio of effective size to census number (Ne/N) was higher in the promiscuous cat population (harmonic mean = 42%) than in the polygynous one (33%) when Ne was calculated with the ecological method. Only the genetic results based on Waples' estimator were consistent with the ecological results, but they failed to show an effect of the mating system. Results based on the estimator of Berthier et al. (2002) were extremely variable, with Ne sometimes exceeding census size. Such low reliability in the genetic estimates calls for caution when they are used for conservation purposes.
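Two of the quantities above can be illustrated with a short sketch using hypothetical inputs; the temporal formula follows Waples (1989) for sampling plan II, which is an assumption about the exact variant used.

```python
def harmonic_mean(ratios):
    """Harmonic mean of yearly Ne/N ratios (the style of summary quoted above)."""
    return len(ratios) / sum(1.0 / r for r in ratios)

def waples_ne(fc, t, s0, st):
    """Temporal estimate Ne = t / (2*(Fc - 1/(2*S0) - 1/(2*St))), plan II.

    fc: standardized variance of allele-frequency change, t: generations
    between samples, s0/st: sample sizes at the two time points.
    """
    return t / (2.0 * (fc - 1.0 / (2 * s0) - 1.0 / (2 * st)))

print(harmonic_mean([0.46, 0.39, 0.42]))       # ~0.42, hypothetical yearly ratios
print(waples_ne(fc=0.05, t=4, s0=40, st=40))   # 80.0 with these made-up inputs
```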
Abstract:
Time-lapse geophysical data acquired during transient hydrological experiments are being increasingly employed to estimate subsurface hydraulic properties at the field scale. In particular, crosshole ground-penetrating radar (GPR) data, collected while water infiltrates into the subsurface either by natural or artificial means, have been demonstrated in a number of studies to contain valuable information concerning the hydraulic properties of the unsaturated zone. Previous work in this domain has considered a variety of infiltration conditions and different amounts of time-lapse GPR data in the estimation procedure. However, the particular benefits and drawbacks of these different strategies, as well as the impact of a variety of key and common assumptions, remain unclear. Using a Bayesian Markov chain Monte Carlo (MCMC) stochastic inversion methodology, we examine in this paper the information content of time-lapse zero-offset-profile (ZOP) GPR traveltime data, collected under three different infiltration conditions, for the estimation of van Genuchten-Mualem (VGM) parameters in a layered subsurface medium. Specifically, we systematically analyze synthetic and field GPR data acquired under natural loading and two rates of forced infiltration, and we consider the value of incorporating different amounts of time-lapse measurements into the estimation procedure. Our results confirm that, for all infiltration scenarios considered, the ZOP GPR traveltime data contain important information about subsurface hydraulic properties as a function of depth, with forced infiltration offering the greatest potential for VGM parameter refinement because it stresses the hydrological system more strongly. Considering greater amounts of time-lapse data in the inversion procedure is also found to help refine VGM parameter estimates. Quite importantly, however, inconsistencies observed in the field results point to the strong possibility that posterior uncertainties are being influenced by model structural errors, which in turn underlines the fundamental importance of a systematic analysis of such errors in future related studies.
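The inversion logic can be sketched as a basic Metropolis sampler; the forward model, data, and error level below are placeholders standing in for the study's actual traveltime simulator and field measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):
    """Placeholder forward model mapping VGM-type parameters to traveltimes."""
    return np.array([theta[0] + 0.5 * theta[1], theta[0] - 0.2 * theta[1]])

d_obs = np.array([1.2, 0.7])   # synthetic "observed" traveltimes
sigma = 0.05                   # assumed data error (standard deviation)

def log_like(theta):
    r = forward(theta) - d_obs
    return -0.5 * np.sum((r / sigma) ** 2)

theta = np.array([1.0, 0.5])
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02, size=2)            # random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta = prop                                       # Metropolis acceptance
    samples.append(theta.copy())
post = np.array(samples[1000:])                            # discard burn-in
print(post.mean(axis=0), post.std(axis=0))                 # posterior summary
```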
Abstract:
Ground-penetrating radar (GPR) has the potential to provide valuable information on hydrological properties of the vadose zone because of its strong sensitivity to soil water content. In particular, recent evidence has suggested that the stochastic inversion of crosshole GPR data within a coupled geophysical-hydrological framework may allow for effective estimation of subsurface van Genuchten-Mualem (VGM) parameters and their corresponding uncertainties. An important and still unresolved issue, however, is how best to integrate GPR data into a stochastic inversion in order to estimate the VGM parameters and their uncertainties, thus improving hydrological predictions. Recognizing the importance of this issue, the research presented in this thesis first introduces a fully Bayesian Markov chain Monte Carlo (MCMC) strategy for the stochastic inversion of steady-state GPR data to estimate the VGM parameters and their uncertainties. Within this study, the choice of the prior parameter probability distributions, from which potential model configurations are drawn and tested against observed data, was also investigated. Analysis of both synthetic and field data collected at the Eggborough (UK) site indicates that the geophysical data alone contain valuable information regarding the VGM parameters; however, significantly better results are obtained when these data are combined with a realistic, informative prior. A subsequent study explores the dynamic infiltration case in detail, specifically to what extent time-lapse ZOP GPR data, collected during a forced infiltration experiment at the Arrenaes field site (Denmark), can help to quantify VGM parameters and their uncertainties using the MCMC inversion strategy. The findings indicate that the stochastic inversion of time-lapse GPR data does indeed allow for a substantial refinement of the inferred posterior VGM parameter distributions. In turn, this significantly improves knowledge of the hydraulic properties, which are required to predict hydraulic behaviour. Finally, another aspect that needed to be addressed was the comparison of time-lapse GPR data collected under different infiltration conditions (i.e., natural loading and forced infiltration) for estimating the VGM parameters with the MCMC inversion strategy. The results show that, for the synthetic example, data collected during a forced infiltration test help to refine soil hydraulic properties better than data collected under natural infiltration conditions. For the data collected at the Arrenaes field site, further complications arose due to model error, showing the importance of including a rigorous analysis of the propagation of model error with time and depth when considering time-lapse data. Although the efforts in this thesis focused on GPR data, the corresponding findings are likely to have general applicability to other types of geophysical data and field environments. Moreover, the results obtained give confidence for future developments in the integration of geophysical data with stochastic inversions to improve the characterization of the unsaturated zone, but they also reveal important issues linked with stochastic inversions, namely model errors, that should be addressed in future research.
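The physical basis of this sensitivity can be illustrated with the widely used empirical relation of Topp et al. (1980) between relative permittivity and volumetric water content, together with the ZOP traveltime; whether the thesis relied on this exact petrophysical relation is an assumption here.

```python
import numpy as np

def topp_theta(eps_r):
    """Volumetric water content from relative permittivity (Topp et al., 1980)."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

def zop_traveltime(eps_r, spacing):
    """ZOP traveltime (ns) between boreholes `spacing` metres apart."""
    c = 0.2998                        # speed of light in m/ns
    return spacing * np.sqrt(eps_r) / c

# Wetter soil -> higher permittivity -> slower wave -> longer traveltime
print(topp_theta(15.0))               # ~0.28 volumetric water content
print(zop_traveltime(15.0, spacing=5.0))  # ~64.6 ns
```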
Abstract:
BACKGROUND: Guidelines for the prevention of coronary heart disease (CHD) recommend the use of Framingham-based risk scores that were developed in white middle-aged populations. It remains unclear whether and how CHD risk prediction might be improved among older adults. We aimed to compare the prognostic performance of the Framingham risk score (FRS), directly and after recalibration, with refit functions derived from the present cohort, and to assess the utility of adding other routinely available risk parameters to the FRS. METHODS: Among 2193 black and white older adults (mean age, 73.5 years) without pre-existing cardiovascular disease from the Health ABC cohort, we examined adjudicated CHD events, defined as incident myocardial infarction, CHD death, and hospitalization for angina or coronary revascularization. RESULTS: During 8-year follow-up, 351 participants experienced CHD events. The FRS discriminated poorly between persons who experienced CHD events and those who did not (C-index: 0.577 in women; 0.583 in men) and underestimated absolute risk by 51% in women and 8% in men. Recalibration of the FRS improved absolute risk prediction, particularly for women. For both genders, refitting the functions substantially improved absolute risk prediction, with discrimination similar to the FRS. Results did not differ between whites and blacks. The addition of lifestyle variables, waist circumference and creatinine did not improve risk prediction beyond the risk factors of the FRS. CONCLUSIONS: The FRS underestimates CHD risk in older adults, particularly in women, although traditional risk factors remain the best predictors of CHD. Re-estimated risk functions using these factors improve the accuracy of absolute risk estimation.
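Recalibration of a Cox-type score keeps the risk-factor coefficients but replaces the baseline survival and mean linear predictor with the new cohort's values; below is a minimal sketch with illustrative numbers, not the Health ABC estimates.

```python
import numpy as np

def absolute_risk(lp, mean_lp, s0):
    """Event risk over the score's horizon: 1 - S0 ** exp(lp - mean_lp)."""
    return 1.0 - s0 ** np.exp(lp - mean_lp)

# Same individual linear predictor, evaluated with original vs. recalibrated
# baseline survival S0 and cohort-mean linear predictor (all values made up).
orig = absolute_risk(lp=2.1, mean_lp=1.8, s0=0.95)
recal = absolute_risk(lp=2.1, mean_lp=1.6, s0=0.90)
print(orig, recal)   # ~0.07 vs ~0.16: recalibration raises predicted risk here
```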
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models, called S-systems, has recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have recently been proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and it also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We thus introduce a new method for estimating S-system parameters from time-course profiles, show that its performance compares favourably with competing methods for ideal profiles, and demonstrate that it also allows the determination of parameters for noisy profiles.
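For reference, an S-system couples power-law production and degradation terms, dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij; the following small simulation uses arbitrary illustrative parameter values, not those of the published test systems.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([2.0, 1.0])   # production rate constants
beta = np.array([1.0, 1.0])    # degradation rate constants
g = np.array([[0.0, -0.5],     # production kinetic orders g_ij
              [0.8,  0.0]])
h = np.array([[0.6,  0.0],     # degradation kinetic orders h_ij
              [0.0,  0.7]])

def s_system(t, x):
    prod = alpha * np.prod(x ** g, axis=1)   # alpha_i * prod_j x_j**g_ij
    deg = beta * np.prod(x ** h, axis=1)     # beta_i * prod_j x_j**h_ij
    return prod - deg

sol = solve_ivp(s_system, (0.0, 10.0), [1.0, 2.0], t_eval=np.linspace(0, 10, 50))
print(sol.y[:, -1])   # state approaching the system's positive steady state
```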
Abstract:
Richer and healthier agents tend to hold riskier portfolios and spend proportionally less on health expenditures. Potential explanations include health and wealth effects on preferences, expected longevity or disposable total wealth. Using HRS data, we perform a structural estimation of a dynamic model of consumption, portfolio and health expenditure choices with recursive utility, as well as health-dependent income and mortality risk. Our estimates of the deep parameters highlight the importance of health capital, mortality risk control, convex health and mortality adjustment costs and binding liquidity constraints to rationalize the stylized facts. They also provide new perspectives on expected longevity and on the values of life and health.
Abstract:
Background: Alcohol is a major risk factor for the burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. A Monte Carlo approach was applied to derive the variance of each AAF by generating random sets of all the parameters; a large number of random samples was thus created for each AAF to estimate its variance. The derivation of the distributions of the different parameters is presented, along with sensitivity analyses that estimate the number of samples required to determine the variance with predetermined precision and identify which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for the variances of AAFs.
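A sketch of the Monte Carlo step, using the standard categorical AAF (Levin) formula with hypothetical prevalences and relative risks; the paper's continuous gamma-based exposure model is more elaborate than this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 150_000   # sample count suggested by the sensitivity analysis above

# Uncertain prevalences for abstainers + three drinking categories, and
# log-normally distributed relative risks (all parameter values made up)
p = rng.dirichlet([60.0, 25.0, 10.0, 5.0], size=n)[:, 1:]   # drop abstainer share
rr = rng.lognormal(mean=np.log([1.3, 2.0, 3.5]), sigma=0.1, size=(n, 3))

# Levin formula: AAF = sum_i p_i*(RR_i - 1) / (1 + sum_i p_i*(RR_i - 1))
excess = np.sum(p * (rr - 1.0), axis=1)
aaf = excess / (1.0 + excess)
lo, hi = np.percentile(aaf, [2.5, 97.5])
print(aaf.mean(), (lo, hi))   # point estimate and Monte Carlo 95% CI
```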
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how best to quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the research presented in this thesis first develops a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrological parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, first assuming that the relationship between porosity and hydraulic conductivity was well defined; the relationship was then itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the more elementary techniques considered. Further, the developed calibration procedure proved very effective, even at the scale of tomographic resolution, for predictions of transport; this also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
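The flavor of SA-based conditional simulation can be conveyed by a toy example: cell values of a porosity field are swapped (which preserves the histogram) until a target spatial statistic is matched. The real approach also conditions on log data and geophysical constraints; everything below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
field = rng.normal(0.25, 0.05, size=(20, 20))   # initial porosity realization
target_corr = 0.6                                # assumed target lag-1 correlation

def lag1_corr(f):
    """Horizontal lag-1 spatial correlation of the grid."""
    return np.corrcoef(f[:, :-1].ravel(), f[:, 1:].ravel())[0, 1]

def energy(f):
    return (lag1_corr(f) - target_corr) ** 2     # objective: mismatch to target

T = 1e-3
for step in range(20000):
    a, b = rng.integers(0, 20, 2), rng.integers(0, 20, 2)
    f_new = field.copy()
    f_new[tuple(a)], f_new[tuple(b)] = field[tuple(b)], field[tuple(a)]  # swap
    dE = energy(f_new) - energy(field)
    if dE < 0 or rng.uniform() < np.exp(-dE / T):   # Metropolis-style acceptance
        field = f_new
    T *= 0.9997                                      # geometric cooling schedule

print(lag1_corr(field))   # should approach the target correlation
```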
Abstract:
Hydrological models developed for extreme precipitation of the PMP (probable maximum precipitation) type are difficult to calibrate because of the scarcity of data available for such events. This article presents the calibration process and results for a fine-scale distributed hydrological model developed to estimate probable maximum floods resulting from a PMP. The calibration is carried out on two Swiss catchments for two summer storm events. The work concentrates on estimating the model parameters, which fall into two groups: the first is needed to compute flow speeds, while the second determines the initial and final infiltration capacities for each terrain type. The results, validated with the Nash-Sutcliffe criterion, show good agreement between simulated and observed flows. We also apply the model to two Romanian catchments, presenting the river network and the estimated flows.
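For reference, the Nash-Sutcliffe efficiency used for validation compares model errors against the variance of the observations; the flow series below are made up.

```python
import numpy as np

def nse(q_obs, q_sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    q_obs, q_sim = np.asarray(q_obs), np.asarray(q_sim)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Hypothetical observed vs. simulated discharges (m^3/s)
print(nse([5.0, 12.0, 30.0, 18.0, 9.0], [6.0, 11.0, 27.0, 19.0, 10.0]))  # ~0.96
```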
Abstract:
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), using the QGS program and a beating-heart phantom, and to optimize the reconstruction parameters for clinical applications. METHODS: A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known: an end-diastolic volume (EDV) of 112 mL, an end-systolic volume (ESV) of 37 mL, and a stroke volume (SV) of 75 mL, producing an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. RESULTS: Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation-maximization-equivalent iterations (the I × S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. CONCLUSION: The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation-maximization-equivalent iterations) and low cutoff values (≤10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
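The reported quantities follow from simple arithmetic on the phantom's known volumes and the reconstruction settings.

```python
def lvef(edv, esv):
    """Ejection fraction from end-diastolic and end-systolic volumes."""
    return (edv - esv) / edv

print(lvef(112, 37))   # ~0.67, the phantom's true LVEF
print(10 * 8)          # EM-equivalent iterations for the optimal 10 I x 8 S = 80
```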
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, preventing the synthesis of data from multiple studies. Transmission-disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, although methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that simultaneously tests for, and if necessary controls for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to accurately estimate genetic effects in the population, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
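One classic way to test and control for stratification in family data, sketched below with simulated genotypes, is to split each genotype into between-family and within-family components and compare their regression coefficients: stratification inflates only the between-family effect, while the within-family effect remains a valid genetic estimate. This illustrates the general idea, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(11)
n_fam, sibs = 200, 2
geno = rng.binomial(2, 0.3, size=(n_fam, sibs)).astype(float)  # allele counts
trait = 0.5 * geno + rng.normal(0, 1, size=(n_fam, sibs))      # true effect 0.5

b = geno.mean(axis=1, keepdims=True)   # between-family component (family mean)
w = geno - b                            # within-family deviation

X = np.column_stack([np.ones(n_fam * sibs),
                     b.repeat(sibs, axis=1).ravel(),
                     w.ravel()])
coef, *_ = np.linalg.lstsq(X, trait.ravel(), rcond=None)
# With no stratification simulated, both coefficients should be near 0.5;
# a significant difference between them would signal confounding.
print(coef[1], coef[2])
```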
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the development process of the claims: the variability of the speed with which the claims are settled, and the variability in claims severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which firstly identifies and quantifies these two influences, and secondly determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model expresses the variability of the settlement speed in the reporting process and the development of the claims severity as functions of two of these distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance, covering different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. The negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
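For reference, the Chain-Ladder method whose adequacy the indicator is meant to judge projects a cumulative run-off triangle to ultimate claims via volume-weighted development factors; the figures below are synthetic.

```python
import numpy as np

# Cumulative paid claims by accident year (rows) and development year (columns)
tri = [np.array([100., 160., 184., 193.]),   # oldest year, fully developed
       np.array([110., 170., 196.]),
       np.array([120., 185.]),
       np.array([130.])]

# Development factors f_j = sum_i C[i, j+1] / sum_i C[i, j] over shared rows
n = len(tri)
f = [sum(r[j + 1] for r in tri if len(r) > j + 1) /
     sum(r[j] for r in tri if len(r) > j + 1) for j in range(n - 1)]

# Project each accident year to ultimate; reserve = ultimate - latest paid
for r in tri:
    ult = r[-1] * np.prod(f[len(r) - 1:])
    print(f"latest {r[-1]:7.1f}  ultimate {ult:7.1f}  reserve {ult - r[-1]:6.1f}")
```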
Abstract:
A method is proposed for the estimation of the absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo, and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the surface buried upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference between the solvation free energies of the complex and of the isolated ligand and protein, deltaGsolv. The method uses the buried surface to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method were developed on a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained, with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structure, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
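A least-squares sketch of fitting such a linear model, with made-up descriptor values standing in for the training set; an intercept is included here for illustration, and whether the original model has one is an assumption.

```python
import numpy as np

# Hypothetical descriptors for six complexes (A^2 for surface, kcal/mol otherwise)
sas_bur = np.array([820., 900., 760., 880., 840., 910.])
e_elec = np.array([-42., -55., -38., -50., -46., -57.])
dg_solv = np.array([30., 41., 27., 37., 33., 43.])
dg_obs = np.array([-11.2, -12.5, -10.1, -12.0, -11.6, -12.8])  # observed binding

# dG_bind = a*SAS_bur + b*E_elec + c*dG_solv + d, fitted by least squares
X = np.column_stack([sas_bur, e_elec, dg_solv, np.ones_like(sas_bur)])
coef, *_ = np.linalg.lstsq(X, dg_obs, rcond=None)
pred = X @ coef
print(coef)
print(np.corrcoef(pred, dg_obs)[0, 1])   # training-set correlation coefficient
```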