76 results for Parameter Estimation
in University of Queensland eSpace - Australia
Abstract:
Optimal sampling times are found for a study in which one of the primary purposes is to develop a model of the pharmacokinetics of itraconazole in patients with cystic fibrosis for both capsule and solution doses. The optimal design is expected to produce reliable estimates of population parameters for two different structural PK models. Data collected at these sampling times are also expected to provide the researchers with sufficient information to reasonably discriminate between the two competing structural models.
Abstract:
We describe methods for estimating the parameters of Markovian population processes in continuous time, thus increasing their utility in modelling real biological systems. A general approach, applicable to any finite-state continuous-time Markovian model, is presented, and this is specialised to a computationally more efficient method applicable to a class of models called density-dependent Markov population processes. We illustrate the versatility of both approaches by estimating the parameters of the stochastic SIS logistic model from simulated data. This model is also fitted to data from a population of Bay checkerspot butterfly (Euphydryas editha bayensis), allowing us to assess the viability of this population. (c) 2006 Elsevier Inc. All rights reserved.
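A minimal sketch of simulating the stochastic SIS logistic model mentioned above, using the standard Gillespie (exact stochastic simulation) algorithm; the rate parameterisation is the usual density-dependent one and the parameter values are illustrative, not the authors':

```python
import random

def simulate_sis(N, beta, mu, i0, t_max, rng):
    """Gillespie simulation of the stochastic SIS logistic model.
    Transition rates from state I (number infected):
      I -> I+1 at rate beta * I * (N - I) / N   (infection)
      I -> I-1 at rate mu * I                   (recovery)
    Returns event times and the corresponding infected counts."""
    t, i = 0.0, i0
    times, states = [0.0], [i0]
    while t < t_max and i > 0:
        up = beta * i * (N - i) / N
        down = mu * i
        total = up + down
        t += rng.expovariate(total)        # time to next event
        if t > t_max:
            break
        i += 1 if rng.random() < up / total else -1
        times.append(t)
        states.append(i)
    return times, states

rng = random.Random(42)
times, states = simulate_sis(N=100, beta=2.0, mu=1.0, i0=5, t_max=50.0, rng=rng)
```

Parameter estimation would then compare trajectories like this one (or the transition likelihoods they imply) against observed counts.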
Abstract:
We consider the task of estimating the randomly fluctuating phase of a continuous-wave beam of light. Using the theory of quantum parameter estimation, we show that this can be done more accurately when feedback is used (adaptive phase estimation) than by any scheme not involving feedback (nonadaptive phase estimation) in which the beam is measured as it arrives at the detector. Such schemes not involving feedback include all those based on heterodyne detection or instantaneous canonical phase measurements. We also demonstrate that the superior accuracy of adaptive phase estimation is present in a regime conducive to observing it experimentally.
Abstract:
A generic method for the estimation of parameters for Stochastic Ordinary Differential Equations (SODEs) is introduced and developed. This algorithm, called the GePERs method, utilises a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, with numerical simulations used to form the KS statistic. Some of the factors that improve the precision of the estimates are also examined. The method is used to estimate parameters of diffusion equations and jump-diffusion equations, and is applied to the problem of model selection for the Queensland electricity market. (C) 2003 Elsevier B.V. All rights reserved.
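The core idea, a stochastic KS-based objective minimised by an evolutionary search, can be sketched as follows. This is not the GePERs code: the model (unit-time Brownian increments with unknown sigma), the population sizes, and the mutation scale are all illustrative assumptions.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical CDFs, evaluated over the pooled sample."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def objective(sigma, data, rng, n_sim=2000):
    """Stochastic objective: KS distance between observed increments and
    increments freshly simulated from dX = sigma dW over unit time steps."""
    return ks_statistic(data, rng.normal(0.0, sigma, size=n_sim))

rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.7, size=1000)      # synthetic "observed" increments

# toy evolutionary loop over a population of candidate sigma values
pop = rng.uniform(0.1, 2.0, size=20)
for _ in range(30):
    scores = np.array([objective(s, data, rng) for s in pop])
    parents = pop[np.argsort(scores)[:5]]                          # selection
    children = np.abs(parents.repeat(3) + rng.normal(0.0, 0.05, size=15))
    pop = np.concatenate([parents, children])                      # elitism + mutation
best = float(pop[np.argmin([objective(s, data, rng) for s in pop])])
```

Because each objective evaluation resamples the simulated distribution, the objective is itself stochastic, which is why a population-based optimiser is a natural fit here.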
Abstract:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054] J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
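Simulation experiments of this kind start from discretised sample paths of the CKLS short-rate model dr = (alpha + beta*r)dt + sigma*r^gamma dW, where -beta is the speed of mean reversion and gamma the levels effect. A minimal Euler-Maruyama sketch (illustrative parameter values, not those used in the paper):

```python
import numpy as np

def simulate_ckls(alpha, beta, sigma, gamma, r0, dt, n, rng):
    """Euler-Maruyama discretisation of the CKLS short-rate model
        dr = (alpha + beta * r) dt + sigma * r**gamma dW.
    Returns an array of n + 1 simulated short rates."""
    r = np.empty(n + 1)
    r[0] = r0
    for t in range(n):
        drift = (alpha + beta * r[t]) * dt
        diffusion = sigma * max(r[t], 0.0) ** gamma * np.sqrt(dt) * rng.normal()
        r[t + 1] = max(r[t] + drift + diffusion, 1e-8)   # keep the rate positive
    return r

rng = np.random.default_rng(1)
# long-run mean alpha / -beta = 0.05; gamma = 0.5 is the CIR special case
path = simulate_ckls(alpha=0.01, beta=-0.2, sigma=0.1, gamma=0.5,
                     r0=0.05, dt=1 / 12, n=600, rng=rng)
```

GMM estimation would then be applied to many such paths, and the distribution of the resulting estimates of beta and gamma compared with the true values.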
Abstract:
The water retention curve (WRC) is a hydraulic characteristic of concrete required for advanced modeling of water (and thus solute) transport in variably saturated, heterogeneous concrete. Unfortunately, determination by a direct experimental method (for example, measuring equilibrium moisture levels of large samples stored in constant humidity cells) is a lengthy process, taking over 2 years for large samples. A surrogate approach is presented in which the WRC is conveniently estimated from mercury intrusion porosimetry (MIP) and validated by water sorption isotherms: the well-known Barrett, Joyner and Halenda (BJH) method of estimating the pore size distribution (PSD) from the water sorption isotherm is shown to complement the PSD derived from conventional MIP. This provides a basis for predicting the complete WRC from MIP data alone. The van Genuchten equation is used to model the combined water sorption and MIP results. It is a convenient tool for describing water retention characteristics over the full moisture content range. The van Genuchten parameter estimation based solely on MIP is shown to give a satisfactory approximation to the WRC, with a simple restriction on one of the parameters.
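The van Genuchten equation referred to here has the standard closed form theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*h)^n)^m with m = 1 - 1/n. A small sketch, using illustrative parameter values rather than any fitted in the study:

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten (1980) water retention curve:
       theta(h) = theta_r + (theta_s - theta_r) / (1 + (alpha*h)^n)^m,
    with m = 1 - 1/n and h the suction (matric) head, h > 0.
    theta_r / theta_s are the residual / saturated moisture contents."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# illustrative parameters for a low-porosity material (assumed values)
h = np.logspace(-2, 6, 50)          # suction head over many orders of magnitude
theta = van_genuchten(h, theta_r=0.02, theta_s=0.12, alpha=5e-4, n=1.6)
```

Fitting (theta_r, theta_s, alpha, n) to MIP-derived retention data by least squares is then a standard nonlinear regression problem; the abstract's "simple restriction on one of the parameters" would enter as a constraint on that fit.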
Abstract:
CXTANNEAL is a program for analysing contaminant transport in soils. The code, written in Fortran 77, is a modified version of CXTFIT, a commonly used package for estimating solute transport parameters in soils. The main improvement in the present code is the inclusion of simulated annealing as the optimization technique for curve fitting. Tests with hypothetical data show that CXTANNEAL performs better than the original code in searching for optimal parameter estimates. To reduce the computational time, a parallel version of CXTANNEAL (CXTANNEAL_P) was also developed. (C) 1999 Elsevier Science Ltd. All rights reserved.
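Simulated annealing for curve fitting, the technique CXTANNEAL adds, can be sketched in a few lines. This is a generic illustration in Python (the original is Fortran 77), fitting an assumed exponential-decay model rather than a solute transport model, with illustrative annealing settings:

```python
import math
import random

def sse(params, xs, ys, model):
    """Sum of squared errors of a parametric model against data."""
    return sum((model(x, params) - y) ** 2 for x, y in zip(xs, ys))

def anneal(xs, ys, model, start, step, t0=1.0, cooling=0.995, iters=4000, seed=0):
    """Minimal simulated annealing for least-squares curve fitting:
    perturb the parameters randomly; accept worse fits with probability
    exp(-delta / T); cool the temperature T geometrically."""
    rng = random.Random(seed)
    cur, cur_err = list(start), sse(start, xs, ys, model)
    best, best_err = list(cur), cur_err
    temp = t0
    for _ in range(iters):
        cand = [p + rng.gauss(0.0, step) for p in cur]
        err = sse(cand, xs, ys, model)
        if err < cur_err or rng.random() < math.exp((cur_err - err) / temp):
            cur, cur_err = cand, err
            if err < best_err:
                best, best_err = list(cand), err
        temp *= cooling
    return best, best_err

# toy data from y = a * exp(-b * x) with a = 2, b = 0.5
model = lambda x, p: p[0] * math.exp(-p[1] * x)
xs = [i * 0.5 for i in range(20)]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]
fit, err = anneal(xs, ys, model, start=[1.0, 1.0], step=0.05)
```

The acceptance of occasional uphill moves at high temperature is what lets the method escape local minima that trap plain gradient-based or greedy fitting.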
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithms. These tests assess the ability of the algorithms to find the global minimum and the accuracy of values obtained for model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
Abstract:
The catalytic properties of enzymes are usually evaluated by measuring and analyzing reaction rates. However, analyzing the complete time course can be advantageous because it contains additional information about the properties of the enzyme. Moreover, for systems that are not at steady state, the analysis of time courses is the preferred method. One of the major barriers to the wide application of time courses is that it may be computationally more difficult to extract information from these experiments. Here the basic approach to analyzing time courses is described, together with some examples of the essential computer code to implement these analyses. A general method that can be applied to both steady state and non-steady-state systems is recommended. (C) 2001 Academic Press.
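The basic computation behind time-course analysis is integrating the rate law numerically and comparing the predicted progress curve with the measured one. A minimal sketch using the Michaelis-Menten rate law as an assumed example (simple Euler integration; the parameter values are illustrative, and this is not the code from the article):

```python
import numpy as np

def progress_curve(vmax, km, s0, times):
    """Numerically integrate the Michaelis-Menten rate law
       dS/dt = -vmax * S / (km + S)
    (Euler steps) to predict substrate concentration at the given times."""
    dt = 0.01
    s, t = s0, 0.0
    out = []
    for target in times:
        while t < target:
            s += -vmax * s / (km + s) * dt
            t += dt
        out.append(s)
    return np.array(out)

# synthetic progress-curve "data" (vmax = 1.0, km = 0.5, s0 = 5.0)
times = np.linspace(0.0, 8.0, 17)
data = progress_curve(1.0, 0.5, 5.0, times)

def ssq(vmax, km):
    """Least-squares objective for fitting (vmax, km) to the full time course."""
    return float(np.sum((progress_curve(vmax, km, 5.0, times) - data) ** 2))
```

Minimising ssq over (vmax, km) with any standard optimiser then extracts the kinetic parameters from the whole curve rather than from initial rates alone.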
Abstract:
In this paper we study the n-fold multiplicative model involving Weibull distributions and examine some properties of the model. These include the shapes for the density and failure rate functions and the WPP plot. These allow one to decide if a given data set can be adequately modelled by the model. We also discuss the estimation of model parameters based on the WPP plot. (C) 2001 Elsevier Science Ltd. All rights reserved.
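The WPP (Weibull probability paper) plot used for estimation here relies on a standard transform: plotting ln(-ln(1 - F(t))) against ln(t) turns a two-parameter Weibull CDF into a straight line with slope equal to the shape parameter and intercept -shape * ln(scale). A minimal sketch for the single-Weibull case (the n-fold multiplicative model produces characteristic departures from this straight line; plotting positions and sample values here are illustrative):

```python
import math

def wpp_points(data):
    """WPP coordinates for a sample: x = ln(t), y = ln(-ln(1 - F_hat(t))),
    using Benard's median-rank approximation for the plotting positions."""
    ts = sorted(data)
    n = len(ts)
    return [(math.log(t), math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4))))
            for i, t in enumerate(ts, start=1)]

def wpp_fit(pts):
    """Least-squares line through the WPP points -> (shape, scale) estimates."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    intercept = my - slope * mx
    return slope, math.exp(-intercept / slope)

# exact Weibull quantiles (shape 1.5, scale 2.0) at the plotting positions,
# so the WPP points fall exactly on a straight line
sample = [2.0 * (-math.log(1.0 - (i - 0.3) / 10.4)) ** (1.0 / 1.5)
          for i in range(1, 11)]
beta_hat, eta_hat = wpp_fit(wpp_points(sample))
```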
Abstract:
1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are becoming increasingly used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpins both the data-dependent and -independent methods with the use of illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
Abstract:
Objectives: The aims of this study were to investigate the population pharmacokinetics of tacrolimus in adult kidney transplant recipients and to identify factors that explain variability. Methods: Population analysis was performed on retrospective data from 70 patients who received oral tacrolimus twice daily. Morning blood trough concentrations were measured by liquid chromatography-tandem mass spectrometry. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F), with the use of NONMEM (GloboMax LLC, Hanover, Md). Factors screened for influence on these parameters were weight, age, gender, postoperative day, days of tacrolimus therapy, liver function tests, creatinine clearance, hematocrit fraction, corticosteroid dose, and potential interacting drugs. Results: CL/F was greater in patients with abnormally low hematocrit fraction (data from 21 patients only), and it decreased with increasing days of therapy and AST concentrations (P