867 results for Bayesian hierarchical model
Abstract:
We used a light-use efficiency model of photosynthesis coupled with a dynamic carbon allocation and tree-growth model to simulate annual growth of the gymnosperm Callitris columellaris in the semi-arid Great Western Woodlands, Western Australia, over the past 100 years. Parameter values were derived from independent observations, except for the sapwood specific respiration rate, fine-root turnover time, fine-root specific respiration rate and the ratio of fine-root mass to foliage area, which were estimated by Bayesian optimization. The model reproduced the general pattern of interannual variability in radial growth (tree-ring width), including the response to the shift in precipitation regimes that occurred in the 1960s. Simulated and observed responses to climate were consistent. Both showed a significant positive response of tree-ring width to total photosynthetically active radiation received and to the ratio of modeled actual to equilibrium evapotranspiration, and a significant negative response to vapour pressure deficit. However, the simulations showed an enhancement of radial growth in response to increasing atmospheric CO2 concentration ([CO2]) during recent decades that is not present in the observations. The discrepancy disappeared when the model was recalibrated on successive 30-year windows: the ratio of fine-root mass to foliage area then increased by 14% (from 0.127 to 0.144 kg C m-2) as [CO2] increased, while the other three estimated parameters remained constant. The absence of a signal of increasing [CO2] has been noted in many tree-ring records, despite the enhancement of photosynthetic rates and water-use efficiency resulting from increasing [CO2]. Our simulations suggest that this behaviour could be explained as a consequence of a shift towards below-ground carbon allocation.
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we explore only the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC's accepted values produced slightly better fits than the literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven unnarrowed parameters, ABC revealed that three were correlated with other parameters, while the remaining four were found to be not estimable given the available data. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version that removed the earthworm's movement and much of the energy budget. We were able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs.
We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM’s structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
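The rejection-ABC scheme described in the abstract above (draw parameters from the prior, simulate, keep the draws whose simulations are closest to the observations) can be sketched in a few lines. A minimal toy sketch: the Gaussian model, the choice of summary statistics and the acceptance fraction are illustrative assumptions, not the earthworm IBM.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy stand-in for an IBM run: data drawn from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    """Reduce a dataset to a vector of summary statistics (mean, sd)."""
    return np.array([data.mean(), data.std()])

def rejection_abc(observed, prior_draws, accept_frac=0.01):
    """Keep the prior draws whose simulated summaries are closest
    to the observed summaries (Euclidean distance)."""
    s_obs = summary(observed)
    dists = np.array([np.linalg.norm(summary(simulate(t)) - s_obs)
                      for t in prior_draws])
    n_keep = max(1, int(accept_frac * len(prior_draws)))
    keep = np.argsort(dists)[:n_keep]
    return prior_draws[keep]

# Observed data generated with true theta = 2.0
observed = simulate(2.0, n=200)
prior = rng.uniform(-5, 5, size=20000)   # draws from a flat prior
posterior = rejection_abc(observed, prior)
print(posterior.mean())
```

The accepted draws approximate the posterior; shrinking `accept_frac` (at the cost of more simulations) tightens the approximation, which is the tolerance-tuning problem the abctools abstract below addresses.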
Abstract:
Approximate Bayesian computation (ABC) is a popular family of algorithms that perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values that produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and to accept when the difference between these falls below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools, which provides methods for tuning ABC algorithms, including recent dimension-reduction algorithms to tune the choice of summary statistics and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.
Abstract:
The benefits of breastfeeding for children's health have been highlighted in many studies. The innovative aspect of the present study lies in its use of a multilevel model, a technique that has rarely been applied to studies on breastfeeding. The data were drawn from a larger study, the Family Budget Survey (Pesquisa de Orçamentos Familiares), carried out between 2002 and 2003 in Brazil with a sample of 48 470 households. A representative national sample of 1477 infants aged 0-6 months was used. The statistical analysis was performed using a multilevel model, with two levels grouped by region. In Brazil, breastfeeding prevalence was 58%. The factors that negatively influenced breastfeeding were more than four residents living in the same household [odds ratio (OR) = 0.68, 90% confidence interval (CI) = 0.51-0.89] and mothers aged 30 years or more (OR = 0.68, 90% CI = 0.53-0.89). The factors that positively influenced breastfeeding were higher socio-economic level (OR = 1.37, 90% CI = 1.01-1.88), families with more than two infants under 5 years (OR = 1.25, 90% CI = 1.00-1.58) and residence in rural areas (OR = 1.25, 90% CI = 1.00-1.58). Although the majority of mothers were aware of the value of maternal milk and breastfed their babies, the prevalence of breastfeeding remains lower than the rate advised by the World Health Organization, and the number of residents living in the same household and maternal age of 30 years or older were both associated with cessation of breastfeeding before 6 months.
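The odds ratios above come from exponentiating logistic-regression coefficients, and the Wald confidence interval is exponentiated likewise. A minimal sketch of that conversion; the coefficient and standard error below are hypothetical values chosen only to be roughly consistent with the reported OR = 0.68 (90% CI 0.51-0.89) for mothers aged 30 or more, not figures from the study.

```python
import math

def odds_ratio_ci(beta, se, level=0.90):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a Wald confidence interval.
    level=0.90 matches the 90% intervals reported above."""
    z = {0.90: 1.645, 0.95: 1.960}[level]   # standard normal quantiles
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient and SE for illustration only:
or_, lo, hi = odds_ratio_ci(beta=-0.386, se=0.169)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR below 1 (here about 0.68) corresponds to a negative coefficient, i.e. lower odds of breastfeeding for that group.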
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parameterizations of the deceleration parameter (constant, linear and abrupt transition) and two models explicitly parameterized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates are given for the present-day deceleration and cosmic jerk parameters (q0 and j0) and for the transition redshift (zt) between a past phase of cosmic deceleration and the current phase of acceleration. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernova data. For the most realistic kinematic models the 1σ confidence limits imply the following ranges of values: q0 ∈ [-0.96, -0.46], j0 ∈ [-3.2, -0.3] and zt ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions q0 = -0.57 +/- 0.04, j0 = -1 and zt = 0.71 +/- 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
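For reference, the kinematic quantities quoted above can be written out explicitly. A minimal sketch, using the sign convention under which flat ΛCDM gives j0 = -1 (jerk sign conventions vary in the literature), with the ΛCDM numbers below assuming a matter density Ωm ≈ 0.28 for illustration:

```latex
% Definitions of the deceleration and jerk parameters
q \equiv -\frac{\ddot{a}\,a}{\dot{a}^{2}}, \qquad
j \equiv -\frac{\dddot{a}}{a H^{3}}, \qquad
H \equiv \frac{\dot{a}}{a}.

% Linear parameterization and its transition redshift, q(z_t)=0:
q(z) = q_{0} + q_{1} z
\quad\Longrightarrow\quad
z_{t} = -\,\frac{q_{0}}{q_{1}}.

% Flat Lambda-CDM comparison values:
q(z) = -1 + \frac{3}{2}\,
  \frac{\Omega_{m}(1+z)^{3}}{\Omega_{m}(1+z)^{3} + 1 - \Omega_{m}},
\qquad
q_{0} = \tfrac{3}{2}\Omega_{m} - 1, \qquad
z_{t} = \left(\frac{2(1-\Omega_{m})}{\Omega_{m}}\right)^{1/3} - 1.
```

With Ωm ≈ 0.28 these give q0 ≈ -0.58 and zt ≈ 0.73, consistent with the ΛCDM values quoted in the abstract.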
Abstract:
In this paper, we present different 'frailty' models to analyze longitudinal data in the presence of covariates. These models incorporate the extra-Poisson variability and the possible correlation among the repeated count data for each individual. Considering a CD4 count data set from HIV-infected patients, we develop a hierarchical Bayesian analysis of the different proposed models using Markov chain Monte Carlo methods. We also discuss some Bayesian discrimination aspects for the choice of the best model.
Abstract:
In this paper, we introduce a Bayesian analysis for bioequivalence data assuming multivariate pharmacokinetic measures. With the introduction of correlation parameters between the pharmacokinetic measures or between the random effects in the bioequivalence models, we observe a clear improvement in the bioequivalence results. These results are of great practical interest, since they can yield higher accuracy and reliability for the bioequivalence tests usually required by regulatory offices. An example is introduced to illustrate the proposed methodology by comparing the usual univariate bioequivalence methods with the multivariate approach. We also consider some existing Bayesian discrimination methods to choose the best model to be used in bioequivalence studies.
Abstract:
In this paper, we consider the problem of estimating the number of times an air quality standard is exceeded in a given period of time. A non-homogeneous Poisson model is proposed to analyse this issue. The rate at which the Poisson events occur is given by a rate function lambda(t), t >= 0, which also depends on some parameters that need to be estimated. Two forms of lambda(t) are considered: one of the Weibull form and the other of the exponentiated-Weibull form. Parameter estimation is performed in a Bayesian formulation based on the Gibbs sampling algorithm. Prior distributions for the parameters are assigned in two stages. In the first stage, non-informative prior distributions are used; using the information provided by the first stage, more informative prior distributions are used in the second. The theoretical development is applied to data provided by the monitoring network of Mexico City. The rate function that best fits the data varies according to the region of the city and/or the threshold considered: in some cases the best fit is the Weibull form and in others the exponentiated-Weibull form.
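A non-homogeneous Poisson process with a Weibull-form rate can be simulated by the standard time-change of a unit-rate Poisson process, and the expected number of exceedances up to time t is the integral of the rate. A minimal sketch, assuming the common Weibull parameterization lambda(t) = (alpha/sigma)(t/sigma)^(alpha-1) with cumulative mean m(t) = (t/sigma)^alpha; the parameter values and time horizon are illustrative, not taken from the Mexico City analysis:

```python
import math
import random

def weibull_mean_count(t, alpha, sigma):
    """Cumulative mean m(t): the integral from 0 to t of the
    Weibull-form rate lambda(u) = (alpha/sigma)*(u/sigma)**(alpha-1)."""
    return (t / sigma) ** alpha

def simulate_nhpp_weibull(t_max, alpha, sigma, rng):
    """Simulate exceedance times on [0, t_max] by inverting the
    cumulative mean function (time-change of a unit-rate process)."""
    times, m = [], 0.0
    while True:
        m += rng.expovariate(1.0)          # unit-rate Poisson increment
        t = sigma * m ** (1.0 / alpha)     # invert m(t) = (t/sigma)**alpha
        if t > t_max:
            return times
        times.append(t)

rng = random.Random(1)
alpha, sigma, t_max = 1.5, 30.0, 365.0     # illustrative values
counts = [len(simulate_nhpp_weibull(t_max, alpha, sigma, rng))
          for _ in range(2000)]
print(sum(counts) / len(counts), weibull_mean_count(t_max, alpha, sigma))
```

With alpha > 1 the rate increases over time, so exceedances cluster late in the window; alpha < 1 gives the opposite. The exponentiated-Weibull form adds a further shape parameter on top of this.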
Abstract:
In this paper we present a hierarchical Bayesian analysis for a predator-prey model in ecology, using Markov chain Monte Carlo methods. We consider the introduction of a random effect in the model and the presence of a covariate vector. An application is presented using a data set on the plankton dynamics of Lake Geneva for the year 1990. We also discuss some aspects of discrimination among the proposed models.
Abstract:
In this paper we discuss inference aspects of skew-normal nonlinear regression models, following both classical and Bayesian approaches, extending the usual normal nonlinear regression models. The univariate skew-normal distribution used in this work was introduced by Sahu et al. (Can J Stat 29:129-150, 2003); it is attractive because estimation of the skewness parameter does not present the same degree of difficulty as with the Azzalini (Scand J Stat 12:171-178, 1985) distribution and, moreover, it allows easy implementation of the EM algorithm. As an illustration of the proposed methodology, we consider a data set previously analyzed in the literature under normality.
Abstract:
The purpose of this paper is to develop a Bayesian approach for log-Birnbaum-Saunders Student-t regression models under right-censored survival data. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the considered model. To attenuate the influence of outlying observations on the parameter estimates, we present Birnbaum-Saunders models in which a Student-t distribution is assumed to explain the cumulative damage. We also discuss model selection criteria to compare the fitted models, and develop case-deletion influence diagnostics for the joint posterior distribution based on the Kullback-Leibler divergence. The developed procedures are illustrated with a real data set.
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data appear very frequently. In this paper, we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model represents a parametric family that includes other regression models broadly used in lifetime data analysis. For the parameters of the proposed model under interval censoring, we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, various simulations are performed for different parameter settings, sample sizes and censoring percentages; in addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data.
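The key ingredient of any parametric interval-censored likelihood is that an observation known only to fall in (L, R] contributes F(R) - F(L), where F is the model CDF. A minimal sketch, assuming the usual exponentiated-Weibull CDF F(t) = (1 - exp(-(t/sigma)^alpha))^theta; the parameter values and inspection intervals are hypothetical illustrations, not the paper's regression model:

```python
import math

def exp_weibull_cdf(t, alpha, theta, sigma):
    """Exponentiated-Weibull CDF:
    F(t) = (1 - exp(-(t/sigma)**alpha))**theta.
    theta = 1 recovers the ordinary Weibull distribution."""
    return (1.0 - math.exp(-((t / sigma) ** alpha))) ** theta

def interval_censored_loglik(intervals, alpha, theta, sigma):
    """Each observation (L, R] only says the event occurred in that
    interval; its likelihood contribution is F(R) - F(L)."""
    ll = 0.0
    for left, right in intervals:
        p = exp_weibull_cdf(right, alpha, theta, sigma) - \
            exp_weibull_cdf(left, alpha, theta, sigma)
        ll += math.log(p)
    return ll

# Hypothetical inspection intervals (in months) bracketing each event:
data = [(0.0, 6.0), (6.0, 12.0), (12.0, 18.0), (6.0, 18.0)]
print(interval_censored_loglik(data, alpha=1.2, theta=0.8, sigma=10.0))
```

Maximizing this log-likelihood (or placing priors on alpha, theta, sigma and sampling the posterior) gives the frequentist and Bayesian analyses the abstract describes; a regression version would let sigma depend on covariates through a location-scale link.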
Abstract:
Considering the Wald, score and likelihood ratio asymptotic test statistics, we analyze a multivariate null-intercept errors-in-variables regression model in which the explanatory and response variables are subject to measurement errors and a possible structure of dependency between the measurements taken within the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, taking the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three test statistics for different sample sizes, parameter values and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set from Hadgu and Koch (1999) and a quality control data set.
Abstract:
Hierarchical assemblies of CaMoO4 (CM) nano-octahedrons were obtained by microwave-assisted hydrothermal synthesis at 120 °C for different times. These structures were structurally, morphologically and optically characterized by X-ray diffraction, micro-Raman spectroscopy, field-emission gun scanning electron microscopy, ultraviolet-visible absorption spectroscopy and photoluminescence measurements. First-principles calculations were carried out to understand the structural and electronic order-disorder effects as a function of the particle/region size. Supercells of different dimensions were constructed to simulate the geometric distortions along both the y and z planes of the scheelite structure. Based on these experimental results and with the help of detailed structural simulations, we were able to model the nature of the order-disorder in this important class of materials and discuss the consequent implications for its physical properties, in particular the photoluminescence properties of CM nanocrystals.
Abstract:
The main objective of this paper is to discuss Bayes estimation of the regression coefficients in the elliptically distributed simple regression model with measurement errors. The posterior distribution of the line parameters is obtained in closed form under the following assumptions: the ratio of the error variances is known, an informative prior distribution is used for the error variance, and non-informative prior distributions are used for the regression coefficients and for the incidental parameters. We prove that the posterior distribution of the regression coefficients has at most two real modes. Situations with a single mode are more likely than those with two modes, especially in large samples. The precision of the modal estimators is studied by deriving the Hessian matrix, which, although complicated, can be computed numerically. The posterior mean is estimated using the Gibbs sampling algorithm and approximations by normal distributions. The results are applied to a real data set, and connections with results in the literature are reported.