10 results for Bayesian models
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Changepoint regression models were originally developed in connection with applications in quality control, where a change from the in-control to the out-of-control state has to be detected based on the available random observations. Up to now, various changepoint models have been suggested for different applications such as reliability, econometrics, or medicine. In many practical situations the covariate cannot be measured precisely, and an alternative is the errors-in-variables regression model. In this paper we study the regression model with errors in variables and a changepoint from a Bayesian approach. The simulation study shows that the proposed procedure produces suitable estimates of the changepoint and of all other model parameters.
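As a concrete illustration of the kind of model this abstract describes, here is a minimal PyMC sketch of a Bayesian changepoint regression with a covariate observed with error. The priors, file names, and the single-changepoint structure are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch (assumed priors and data files), not the authors' code.
import numpy as np
import pymc as pm

y = np.loadtxt("y.txt")   # responses (hypothetical file)
w = np.loadtxt("w.txt")   # covariate observed with measurement error
n = len(y)

with pm.Model():
    # Latent "true" covariate behind the noisy measurement w.
    x = pm.Normal("x", mu=w.mean(), sigma=10.0, shape=n)
    sigma_u = pm.HalfNormal("sigma_u", sigma=1.0)
    pm.Normal("w_obs", mu=x, sigma=sigma_u, observed=w)

    # Unknown changepoint index and regime-specific regression lines.
    k = pm.DiscreteUniform("k", lower=1, upper=n - 1)
    alpha = pm.Normal("alpha", mu=0.0, sigma=10.0, shape=2)
    beta = pm.Normal("beta", mu=0.0, sigma=10.0, shape=2)
    mu = pm.math.switch(k > np.arange(n),
                        alpha[0] + beta[0] * x,
                        alpha[1] + beta[1] * x)
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)

    # NUTS handles the continuous block; Metropolis handles the discrete k.
    trace = pm.sample(2000, tune=2000)
```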
Abstract:
The objective of this paper is to model variations in test-day milk yields of first lactations of Holstein cows by random regression (RR) using B-spline functions and Bayesian inference, in order to fit adequate and parsimonious models for the estimation of genetic parameters. The authors used 152,145 test-day milk yield records from 7,317 first lactations of Holstein cows. The model included additive genetic, permanent environmental, and residual random effects; in addition, contemporary group and the linear and quadratic effects of age of cow at calving were included as fixed effects. The average lactation curve of the population was modelled with a fourth-order orthogonal Legendre polynomial. The authors concluded that a cubic B-spline with seven random regression coefficients for both the additive genetic and permanent environmental effects was the best model according to residual mean square and residual variance estimates. They also argued that a lower-order model (a quadratic B-spline with seven random regression coefficients for both random effects) could be adopted, because it yielded practically the same genetic parameter estimates with greater parsimony.
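To make the "cubic B-spline with seven coefficients" concrete, here is a small scipy sketch of such a basis over days in milk 5 to 305. The knot placement (equally spaced internal knots) and the DIM range are illustrative assumptions, not taken from the paper.

```python
# Sketch of a cubic B-spline covariable matrix with seven coefficients,
# as used for random regression; knot placement is an assumption.
import numpy as np
from scipy.interpolate import BSpline

a, b, k = 5.0, 305.0, 3                     # DIM range, cubic degree
n_coef = 7                                  # 7 random regression coefficients
n_internal = n_coef - k - 1                 # = 3 internal knots
t = np.r_[[a] * (k + 1),
          np.linspace(a, b, n_internal + 2)[1:-1],
          [b] * (k + 1)]                    # clamped knot vector, 11 knots
dim = np.linspace(a, b, 301)                # test days
Z = BSpline.design_matrix(dim, t, k).toarray()  # (301, 7) covariable matrix
assert Z.shape == (301, n_coef)
```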
Abstract:
In this work we compared estimates of the parameters of ARCH models obtained with a complete Bayesian method and with an empirical Bayesian method, in which we adopted a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of these models that maps the parameter space onto the real line; this permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters; the empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
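For concreteness, here is a sketch of the kind of reparameterization the abstract refers to, written for an ARCH(1) model; the log/logit transform shown is one natural choice consistent with the description, not necessarily the authors' exact mapping.

```latex
% ARCH(1): y_t = \sqrt{h_t}\,\epsilon_t with \epsilon_t \sim N(0,1) and
h_t = \omega + \alpha_1 y_{t-1}^2, \qquad \omega > 0, \quad 0 \le \alpha_1 < 1.
% Map the constrained parameters onto the real line so that normal priors
% can be placed on the transformed parameters:
\lambda_0 = \log \omega, \qquad
\lambda_1 = \log \frac{\alpha_1}{1 - \alpha_1}, \qquad
\lambda_j \sim N(\mu_j, \sigma_j^2).
```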
Abstract:
The purpose of this paper is to develop a Bayesian analysis for right-censored survival data when immune or cured individuals may be present in the population from which the data are taken. In our approach, the number of competing causes of the event of interest follows the Conway-Maxwell-Poisson distribution, which generalizes the Poisson distribution. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the proposed model. Some discussion of model selection and an illustration with a real data set are also presented.
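For reference, the Conway-Maxwell-Poisson law for the number M of competing causes, and the cure fraction it implies under the usual promotion-time structure (the notation here is ours):

```latex
% Conway-Maxwell-Poisson pmf (reduces to the Poisson when \nu = 1;
% \nu < 1 gives over-dispersion, \nu > 1 under-dispersion):
P(M = m) = \frac{\lambda^m}{(m!)^{\nu}\, Z(\lambda, \nu)}, \qquad
Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^j}{(j!)^{\nu}},
\qquad m = 0, 1, 2, \ldots
% Cured individuals correspond to M = 0, so the cure fraction is
p_0 = P(M = 0) = \frac{1}{Z(\lambda, \nu)},
% and, with i.i.d. latent activation times with survival S(t),
S_{\mathrm{pop}}(t) = \frac{Z(\lambda S(t), \nu)}{Z(\lambda, \nu)}.
```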
Abstract:
Coexistence of sympatric species is mediated by resource partitioning. Pumas occur sympatrically with jaguars throughout most of the jaguar's range, but few studies have investigated space partitioning between the two species. Here, camera trapping and occupancy models accounting for imperfect detection were employed in a Bayesian framework to investigate space partitioning between the jaguar and the puma in Emas National Park (ENP), central Brazil. Jaguars were estimated to occupy 54.1% and pumas 39.3% of the sample sites. Jaguar occupancy was negatively correlated with distance to water and positively correlated with the amount of dense habitat surrounding the camera trap. Puma occupancy showed only a weak negative correlation with distance to water and with jaguar presence. Both species were less often present at the same site than expected under independent distributions. Jaguars had a significantly higher detection probability at cameras on roads than at off-road locations; for pumas, detection was similar on and off roads. The results indicate that both differences in habitat use and active avoidance shape space partitioning between jaguars and pumas in ENP. Given its larger size, the jaguar is likely the competitively dominant of the two species. Owing to its habitat preferences, suitable jaguar habitat outside the park is probably sparse; consequently, the jaguar population is likely largely confined to the park, while the puma population is known to extend into ENP's surroundings.
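The statement that the two cats were less often present at the same site than expected under independence can be made precise with a species interaction factor; the parameterization below is a standard one for two-species occupancy models and is our illustration, since the abstract does not spell out the authors' exact formulation.

```latex
% With marginal occupancy probabilities \psi_J (jaguar) and \psi_P (puma)
% and joint probability \psi_{JP}, independence would imply
\psi_{JP} = \psi_J\,\psi_P ;
% avoidance corresponds to a species interaction factor below one:
\mathrm{SIF} = \frac{\psi_{JP}}{\psi_J\,\psi_P} < 1 .
```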
Abstract:
In this paper we use Markov chain Monte Carlo (MCMC) methods to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy-tailed and asymmetric distributions of the error term, using a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood based on the MCMC output, and from these approximations we compute Bayes factors and posterior model probabilities.
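A widely used construction of this type is that of Fernandez and Steel (1998); whether it is the exact method adopted in the paper is our assumption. Given a symmetric unimodal density f and a skewness parameter gamma > 0:

```latex
f_\gamma(x) = \frac{2}{\gamma + 1/\gamma}
  \left[ f(x/\gamma)\, \mathbf{1}_{\{x \ge 0\}}
       + f(\gamma x)\, \mathbf{1}_{\{x < 0\}} \right].
% \gamma = 1 recovers symmetry; \gamma > 1 skews the density to the right.
% Competing models M_i are then ranked through Bayes factors and posterior
% model probabilities built from the MCMC marginal-likelihood estimates:
B_{ij} = \frac{\hat m_i}{\hat m_j}, \qquad
P(M_i \mid y) = \frac{\hat m_i\, P(M_i)}{\sum_k \hat m_k\, P(M_k)} .
```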
Weibull and generalised exponential overdispersion models with an application to ozone air pollution
Abstract:
We consider the problem of estimating the mean and variance of the time between occurrences of an event of interest (inter-occurrence times), where some form of dependence between two consecutive time intervals is allowed. Two basic density functions are taken into account: the Weibull and the generalised exponential density functions. In order to capture the dependence between two consecutive inter-occurrence times, we assume that the shape and/or scale parameters of the two density functions are given by autoregressive models. Expressions for the mean and variance of the inter-occurrence times are presented. The models are applied to ozone data from two regions of Mexico City. The estimation of the parameters is performed from a Bayesian point of view via Markov chain Monte Carlo (MCMC) methods.
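As a sketch of the dependence structure described above, assume the scale parameter evolves as a log-scale AR(1) process; the paper allows the shape and/or scale to do so, and the exact link function is not stated in the abstract, so this is one consistent choice.

```latex
T_i \mid \alpha_i, \sigma_i \sim \mathrm{Weibull}(\alpha_i, \sigma_i), \qquad
\log \sigma_i = \phi_0 + \phi_1 \log \sigma_{i-1} + \eta_i, \quad
\eta_i \sim N(0, \tau^2).
% Generalised exponential alternative for the inter-occurrence density:
f(t \mid \alpha, \sigma) = \frac{\alpha}{\sigma}
  \left(1 - e^{-t/\sigma}\right)^{\alpha - 1} e^{-t/\sigma}, \qquad t > 0.
```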
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is an interest in studying latent variables (or latent traits). Usually such latent traits are assumed to be random variables and a convenient distribution is assigned to them. A very common choice for such a distribution has been the standard normal. Recently, Azevedo et al. [Bayesian inference for a skew-normal IRT model under the centred parameterization, Comput. Stat. Data Anal. 55 (2011), pp. 353-365] proposed a skew-normal distribution under the centred parameterization (SNCP), as studied in [R.B. Arellano-Valle and A. Azzalini, The centred parametrization for the multivariate skew-normal distribution, J. Multivariate Anal. 99(7) (2008), pp. 1362-1382], to model the latent trait distribution. This approach allows one to represent any asymmetric behaviour of the latent trait distribution. They also developed a Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm based on the density of the SNCP and showed that the algorithm recovers all parameters properly; their results indicated that, in the presence of asymmetry, the proposed model and estimation algorithm perform better than the usual model and estimation methods. Our main goal in this paper is to propose another type of MHWGS algorithm, based on a stochastic representation (hierarchical structure) of the SNCP studied in [N. Henze, A probabilistic representation of the skew-normal distribution, Scand. J. Statist. 13 (1986), pp. 271-275]. Our algorithm has only one Metropolis-Hastings step, in contrast to the algorithm developed by Azevedo et al., which has two such steps. This not only makes the implementation easier but also reduces the number of proposal densities to be used, which can be a problem in the implementation of MHWGS algorithms, as can be seen in [R.J. Patz and B.W. Junker, A straightforward approach to Markov chain Monte Carlo methods for item response models, J. Educ. Behav. Stat. 24(2) (1999), pp. 146-178; R.J. Patz and B.W. Junker, The applications and extensions of MCMC in IRT: Multiple item types, missing data, and rated responses, J. Educ. Behav. Stat. 24(4) (1999), pp. 342-366; A. Gelman, G.O. Roberts, and W.R. Gilks, Efficient Metropolis jumping rules, Bayesian Stat. 5 (1996), pp. 599-607]. Moreover, we consider a modified beta prior (which generalizes the one considered in [3]) and a Jeffreys prior for the asymmetry parameter, and we study the sensitivity of these priors as well as the use of different kernel densities for this parameter. Finally, we assess the impact of the number of examinees, the number of items, and the asymmetry level on parameter recovery. Results of the simulation study indicate that our approach performs as well as that in [3] in terms of parameter recovery, mainly when using the Jeffreys prior, and that the asymmetry level has the highest impact on parameter recovery, even though this impact is relatively small. A real data analysis is presented jointly with the development of model-fit assessment tools, and the results are compared with those obtained by Azevedo et al. Overall, the hierarchical approach allows MCMC algorithms to be implemented more easily, facilitates convergence diagnostics, and can be very useful for fitting more complex skew IRT models.
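The stochastic representation the new algorithm exploits is Henze's (1986), stated here for the direct parameterization (the paper works with the centred one): with X_0 and X_1 independent standard normals and delta defined from the shape parameter lambda,

```latex
\delta = \frac{\lambda}{\sqrt{1 + \lambda^2}}, \qquad
Z = \delta\,|X_0| + \sqrt{1 - \delta^2}\, X_1 \;\sim\; \mathrm{SN}(\lambda).
% Conditional on the half-normal variable |X_0|, Z is normal; this is the
% hierarchy that reduces the sampler to a single Metropolis-Hastings step.
```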
Abstract:
To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or with multivariate models: VAR (VEC or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC models is quite advanced, Bauwens et al. (1999) highlighted that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results". The present article applies the Full Bayesian Significance Test (FBST), which is especially designed to deal with sharp hypotheses, to cointegration rank selection tests in VECM time series models. The FBST implementation is illustrated using both simulated data sets and data sets available in the literature, with standard non-informative priors.
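For reference, the FBST evidence value of Pereira and Stern for a sharp hypothesis H: theta in Theta_H, with posterior density p(theta | y) and reference density r(theta):

```latex
s^{*} = \sup_{\theta \in \Theta_H} \frac{p(\theta \mid y)}{r(\theta)}, \qquad
T = \left\{ \theta : \frac{p(\theta \mid y)}{r(\theta)} > s^{*} \right\}, \qquad
\mathrm{ev}(H) = 1 - \int_{T} p(\theta \mid y)\, d\theta .
% Small ev(H) is evidence against the sharp hypothesis (here, against a
% candidate cointegrating rank).
```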
Abstract:
In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic model of radiation carcinogenesis: latent time distributions and their properties. Math Biosci 1993; 113: 51-75] and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de São Carlos, São Carlos-SP, Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest, as it includes a destructive process of tumour cells after an initial treatment, or the capacity of an individual exposed to irradiation to repair altered cells that result in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Some discussion of model selection and an illustration with the cutaneous melanoma data set analysed by Rodrigues et al. are also presented.
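The destructive mechanism described above can be sketched as binomial thinning of the initial number of altered cells (the notation is ours, consistent with the destructive weighted Poisson formulation the abstract cites):

```latex
% N = initial number of altered cells, following a compound weighted
% Poisson law; each cell independently escapes elimination/repair with
% probability p, so only the damaged remainder
D \mid N = n \;\sim\; \mathrm{Binomial}(n, p)
% is recorded, and the cure fraction is P(D = 0).
```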