915 results for Bayesian p-values
Abstract:
In this paper we make use of stochastic volatility models to analyse the behaviour of a series of weekly average ozone measurements. The models considered here have been used previously in problems related to financial time series. Two models are considered and their parameters are estimated using a Bayesian approach based on Markov chain Monte Carlo (MCMC) methods. Both models are applied to the data provided by the monitoring network of the Metropolitan Area of Mexico City. The selection of the best model for that specific data set is performed using the Deviance Information Criterion and the Conditional Predictive Ordinate method.
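The abstract does not report the exact model specification, but a canonical stochastic volatility model of the kind used for financial time series is the AR(1) log-volatility model. The sketch below simulates from it; all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def simulate_sv(n, mu=-1.0, phi=0.95, sigma_eta=0.2, seed=0):
    """Simulate a basic stochastic volatility series:
    y_t = exp(h_t / 2) * eps_t,  h_t = mu + phi * (h_{t-1} - mu) + eta_t,
    with eps_t ~ N(0, 1) and eta_t ~ N(0, sigma_eta^2)."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    # start the log-volatility at a draw from its stationary distribution
    h[0] = mu + rng.normal(0, sigma_eta / np.sqrt(1 - phi**2))
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + rng.normal(0, sigma_eta)
    y = np.exp(h / 2) * rng.normal(0, 1, n)
    return y, h
```

In a Bayesian MCMC analysis, the latent log-volatilities h_t are sampled jointly with (mu, phi, sigma_eta) from the posterior; the simulator above only illustrates the data-generating side of such a model.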
Abstract:
In this paper we deal with a Bayesian analysis for right-censored survival data suitable for populations with a cure rate. We consider a cure rate model based on the negative binomial distribution, encompassing as a special case the promotion time cure model. Bayesian analysis is based on Markov chain Monte Carlo (MCMC) methods. We also present some discussion on model selection and an illustration with a real dataset.
Abstract:
The purpose of this paper is to develop a Bayesian analysis for nonlinear regression models under scale mixtures of skew-normal distributions. This novel class of models provides a useful generalization of symmetrical nonlinear regression models, since the error distributions cover both skewed and heavy-tailed distributions such as the skew-t, skew-slash and skew-contaminated normal distributions. The main advantage of this class of distributions is that it admits a convenient hierarchical representation that allows the implementation of Markov chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution. In order to examine the robustness of this flexible class against outlying and influential observations, we present Bayesian case deletion influence diagnostics based on the Kullback-Leibler divergence. Further, some discussion on model selection criteria is given. The newly developed procedures are illustrated with two simulation studies and a real data set previously analyzed under normal and skew-normal nonlinear regression models. (C) 2010 Elsevier B.V. All rights reserved.
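A minimal sketch of the hierarchical representation mentioned above, assuming the standard stochastic representation of the skew-normal and the usual gamma mixing that yields the skew-t member of the family (parameter values are illustrative, not from the paper):

```python
import numpy as np

def rskew_t(n, mu=0.0, sigma=1.0, alpha=3.0, nu=5.0, seed=0):
    """Draw from a skew-t distribution via its hierarchical representation:
    Z = delta*|U0| + sqrt(1 - delta^2)*U1 is skew-normal (delta = alpha/sqrt(1+alpha^2)),
    W ~ Gamma(nu/2, rate=nu/2) is the mixing weight, and Y = mu + sigma*Z/sqrt(W).
    It is this conditionally normal structure that makes Gibbs-type MCMC tractable."""
    rng = np.random.default_rng(seed)
    delta = alpha / np.sqrt(1 + alpha**2)
    u0 = np.abs(rng.normal(size=n))                 # half-normal latent variable
    u1 = rng.normal(size=n)
    z = delta * u0 + np.sqrt(1 - delta**2) * u1     # skew-normal draws
    w = rng.gamma(nu / 2, 2 / nu, size=n)           # gamma mixing, mean 1
    return mu + sigma * z / np.sqrt(w)
```

Conditional on the latent variables (u0, w), the observation is normal, which is why data augmentation makes posterior simulation straightforward for the whole scale-mixture family.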
Abstract:
The multivariate skew-t distribution (J Multivar Anal 79:93-113, 2001; J R Stat Soc, Ser B 65:367-389, 2003; Statistics 37:359-363, 2003) includes the Student t, skew-Cauchy and Cauchy distributions as special cases, and the normal and skew-normal ones as limiting cases. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis of repeated measures, pretest/post-test data, under a multivariate null intercept measurement error model (J Biopharm Stat 13(4):763-771, 2003) where the random errors and the unobserved value of the covariate (latent variable) follow Student t and skew-t distributions, respectively. The results and methods are numerically illustrated with an example in the field of dentistry.
Abstract:
The purpose of this paper is to develop a Bayesian approach for log-Birnbaum-Saunders Student-t regression models under right-censored survival data. Markov chain Monte Carlo (MCMC) methods are used to develop a Bayesian procedure for the considered model. In order to attenuate the influence of outlying observations on the parameter estimates, we present Birnbaum-Saunders models in which a Student-t distribution is assumed to explain the cumulative damage. Some discussion on model selection for comparing the fitted models is also given, and case deletion influence diagnostics are developed for the joint posterior distribution based on the Kullback-Leibler divergence. The developed procedures are illustrated with a real data set. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Generally, normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subject variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions are an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in this type of model. The methods developed are illustrated using a real data set from the Framingham cholesterol study. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The main goal of this paper is to investigate a cure rate model that encompasses some well-known proposals found in the literature. In our work the number of competing causes of the event of interest follows the negative binomial distribution. The model is conveniently reparametrized through the cured fraction, which is then linked to covariates by means of the logistic link. We explore the use of Markov chain Monte Carlo methods to develop a Bayesian analysis of the proposed model. The procedure is illustrated with a numerical example.
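As a sketch of the reparametrization described above: the cured fraction is the probability that no competing cause is present, p0 = P(N = 0), which for a negative binomial with mean theta and dispersion phi is (1 + phi*theta)^(-1/phi). The exact parametrization in the paper may differ; the functions below only illustrate the logistic link and its inversion back to theta.

```python
import numpy as np

def cure_fraction(x, beta):
    """Cured fraction linked to covariates via a logistic link:
    logit(p0_i) = x_i' beta."""
    eta = x @ beta
    return 1.0 / (1.0 + np.exp(-eta))

def nb_theta_from_p0(p0, phi):
    """Invert p0 = (1 + phi*theta)^(-1/phi) -- the zero probability of a
    negative binomial with mean theta and dispersion phi -- to recover theta.
    As phi -> 0 this tends to the promotion time (Poisson) case p0 = exp(-theta)."""
    return (p0 ** (-phi) - 1.0) / phi
```

For phi = 1 (geometric case) a cured fraction of 0.5 corresponds to theta = 1, since P(N = 0) = 1/(1 + theta).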
Abstract:
Considering the Wald, score, and likelihood ratio asymptotic test statistics, we analyze a multivariate null intercept errors-in-variables regression model, where the explanatory and response variables are subject to measurement errors, and a possible structure of dependency between the measurements taken within the same individual is incorporated, representing a longitudinal structure. This model was proposed by Aoki et al. (2003b) and analyzed under the Bayesian approach. In this article, considering the classical approach, we analyze the asymptotic test statistics and present a simulation study to compare the behavior of the three test statistics for different sample sizes, parameter values and nominal levels of the test. Closed-form expressions for the score function and the Fisher information matrix are also presented. We consider two real numerical illustrations: the odontological data set from Hadgu and Koch (1999), and a quality control data set.
Abstract:
The skew-normal distribution is a class of distributions that includes the normal distribution as a special case. In this paper, we explore the use of Markov Chain Monte Carlo (MCMC) methods to develop a Bayesian analysis in a multivariate, null intercept, measurement error model [R. Aoki, H. Bolfarine, J.A. Achcar, and D. Leao Pinto Jr, Bayesian analysis of a multivariate null intercept errors-in-variables regression model, J. Biopharm. Stat. 13(4) (2003b), pp. 763-771] where the unobserved value of the covariate (latent variable) follows a skew-normal distribution. The results and methods are applied to a real dental clinical trial presented in [A. Hadgu and G. Koch, Application of generalized estimating equations to a dental randomized clinical trial, J. Biopharm. Stat. 9 (1999), pp. 161-178].
Abstract:
The adsorption kinetics curves of poly(xylylidene tetrahydrothiophenium chloride) (PTHT), a poly-p-phenylenevinylene (PPV) precursor, and the sodium salt of dodecylbenzene sulfonic acid (DBS), onto (PTHT/DBS)_n layer-by-layer (LBL) films were characterized by means of UV-vis spectroscopy. The amount of PTHT/DBS and PTHT adsorbed on each layer was shown to be practically independent of adsorption time. A Langmuir-type metastable equilibrium model was used to fit the adsorption isotherm data and to estimate adsorption/desorption coefficient ratios, k = k_ads/k_des, yielding values of 2 × 10^5 and 4 × 10^6 for PTHT and PTHT/DBS layers, respectively. The desorption coefficient was estimated, using literature values for the poly(o-methoxyaniline) desorption coefficient, and found to be in the range of 10^-9 to 10^-6 s^-1, indicating that quasi-equilibrium is rapidly attained.
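For reference, the Langmuir isotherm implied by a constant adsorption/desorption ratio K = k_ads/k_des can be sketched as follows; the concentration value used in the example is purely illustrative, since the abstract does not report concentration units.

```python
import numpy as np

def langmuir_coverage(c, K):
    """Langmuir isotherm: fractional surface coverage
    theta = K*c / (1 + K*c), where K = k_ads / k_des."""
    c = np.asarray(c, dtype=float)
    return K * c / (1.0 + K * c)

# K of the order reported above (2e5 for PTHT, 4e6 for PTHT/DBS);
# c = 1e-5 is a hypothetical concentration in matching units.
theta_ptht = langmuir_coverage(1e-5, 2e5)  # K*c = 2, so coverage = 2/3
```

The half-coverage point occurs at c = 1/K, so a larger K (faster adsorption relative to desorption) saturates the layer at lower concentrations.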
Abstract:
Semi-empirical weighted oscillator strengths (gf) and lifetimes presented in this work for all experimentally known electric dipole P XII spectral lines and energy levels were computed within a multiconfiguration Hartree-Fock relativistic approach. In this calculation, the electrostatic parameters were optimized by a least-squares procedure in order to improve the adjustment to experimental energy levels. The method produces lifetime and gf values that are in agreement with intensity observations used for the interpretation of spectrograms of solar and laboratory plasmas.
Abstract:
Combined fluid inclusion (FI) microthermometry, Raman spectroscopy, X-ray diffraction, C-O-H isotopes and oxygen fugacities of granulites from the central Ribeira Fold Belt, SE Brazil, provided the following results: i) magnetite-hematite fO2 estimates range from 10^-11.5 bar (QFM + 1) to 10^-18.3 bar (QFM - 1) for the temperature range 896 °C to 656 °C, implying an fO2 decrease from metamorphic peak temperatures to retrograde conditions; ii) five main types of fluid inclusions were observed: a) CO2 and CO2-N2 (0-11 mol%) high- to medium-density (1.01-0.59 g/cm^3) FI; b) CO2 and CO2-N2 (0-36 mol%) low-density (0.19-0.29 g/cm^3) FI; c) CO2 (94-95 mol%)-N2 (3 mol%)-CH4 (2-3 mol%)-H2O (water φ_v(25 °C) = 0.1) FI; d) low-salinity H2O-CO2 FI; and e) late low-salinity H2O FI; iii) Raman analyses evidence two graphite types in khondalites: an early highly ordered graphite (T ~ 450 °C) overgrown by a disordered kind (T ~ 330 °C); iv) quartz δ18O results of 10.3-10.7 ‰ imply high-temperature CO2 δ18O values of 14.4-14.8 ‰, suggesting the involvement of a metamorphic fluid, whereas lower-temperature biotite δ18O and δD results of 7.5-8.5 ‰ and -54 to -67 ‰, respectively, imply H2O δ18O values of 10-11 ‰ and δD(H2O) of -23 to -36 ‰, suggesting δ18O depletion and an increasing fluid/rock ratio from metamorphic peak to retrograde conditions. Isotopic results are compatible with low-temperature H2O influx and fO2 decrease that promoted graphite deposition in retrograde granulites, simultaneous with low-density CO2, CO2-N2 and CO2-N2-CH4-H2O fluid inclusions at T = 450-330 °C.
Graphite δ13C results of -10.9 to -11.4 ‰ imply CO2 δ13C values of -0.8 to -1.3 ‰, suggesting decarbonation of Cambrian marine carbonates with a small admixture of lighter biogenic or mantle-derived fluids. Based on these results, it is suggested that metamorphic fluids from the central segment of the Ribeira Fold Belt evolved to CO2-N2 fluids during granulitic metamorphism at high fO2, followed by a rapid pressure drop at T ~ 400-450 °C during late exhumation that caused fO2 reduction induced by temperature decrease and water influx, turning carbonic fluids into CO2-H2O (depleting biotite δ18O and δD values), and progressively into H2O. When fO2 decreased substantially through the mixing of carbonic and aqueous fluids, graphite was deposited, forming khondalites. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Managing software maintenance is rarely a precise task, due to uncertainties concerning resource and service descriptions. Even when a well-established maintenance process is followed, the risk of delaying tasks remains if new services are not precisely described or if resources change during process execution. Moreover, the delay of a task at an early process stage may translate into a different delay at the end of the process, depending on complexity or service reliability requirements. This paper presents a knowledge-based representation (Bayesian networks) of maintenance project delays based on specialists' experience, and a corresponding tool to help manage software maintenance projects. (c) 2006 Elsevier Ltd. All rights reserved.
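A toy discrete Bayesian network in the spirit described above can be queried by simple enumeration. The nodes and probability values below are entirely hypothetical, chosen only to illustrate how specialist beliefs about delay causes might be encoded; the paper's actual network structure is not given in the abstract.

```python
# Hypothetical prior beliefs elicited from specialists:
p_imprecise = 0.3   # P(service description is imprecise)
p_res_change = 0.2  # P(resources change during execution)

# Hypothetical conditional probability table for a task delay
# given its two parent nodes (imprecise description, resource change):
p_delay = {
    (True, True): 0.9,
    (True, False): 0.6,
    (False, True): 0.5,
    (False, False): 0.1,
}

def prob_delay():
    """Marginal P(delay), obtained by enumerating the parent states."""
    total = 0.0
    for imprecise in (True, False):
        for change in (True, False):
            p_parents = (p_imprecise if imprecise else 1 - p_imprecise) * \
                        (p_res_change if change else 1 - p_res_change)
            total += p_parents * p_delay[(imprecise, change)]
    return total
```

With these illustrative numbers, prob_delay() evaluates to 0.324; in a real tool, the same enumeration (or a dedicated inference engine) would be run over a network elicited from maintenance specialists.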
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moment estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in item response models (IRM) and are usually called latent traits. A usual assumption for parameter estimation of an IRM, considering one group of examinees, is that the latent traits are random variables following a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption fails, the parameter estimates tend to be biased and misleading inferences can be obtained. It is therefore important to model the distribution of the latent traits properly. In this paper we present an alternative latent trait model based on the so-called skew-normal distribution; see Genton (2004). We use the centred parameterization, which was proposed by Azzalini (1985). This approach ensures model identifiability, as pointed out by Azevedo et al. (2009b). A Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed in order to assess parameter recovery in the proposed model and estimation method, and the effect of the asymmetry level of the latent trait distribution on parameter estimation. A comparison of our approach with other estimation methods (which assume symmetric normality for the latent trait distribution) was also considered. The results indicated that our proposed algorithm recovers all parameters properly. Specifically, the greater the asymmetry level, the better the performance of our approach compared with other approaches, mainly in the presence of small sample sizes (numbers of examinees).
Furthermore, we analyzed a real data set which presents indications of asymmetry in the latent trait distribution. The results obtained using our approach confirmed the presence of strong negative asymmetry in the latent trait distribution. (C) 2010 Elsevier B.V. All rights reserved.
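A minimal sketch of the setting described above, assuming a standard two-parameter logistic (2PL) item and skew-normal latent traits; the item parameters and the asymmetry level below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import skewnorm

def simulate_2pl_item(n_examinees=200, a=1.2, b=0.0, alpha=-4.0, seed=0):
    """Simulate dichotomous responses to one 2PL item when the latent
    traits follow a negatively asymmetric skew-normal distribution
    (shape alpha < 0) instead of the usual standard normal."""
    rng = np.random.default_rng(seed)
    theta = skewnorm.rvs(alpha, size=n_examinees, random_state=rng)  # latent traits
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL correct-response probability
    return (rng.uniform(size=n_examinees) < p).astype(int)
```

Fitting such data under a forced standard-normal latent trait distribution is the misspecification the abstract warns about; modeling the skewness directly (e.g., via the centred parameterization) avoids the resulting bias.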