71 results for Bayesian model selection
Abstract:
This paper addresses the investment decisions of 373 large Brazilian firms from 1997 to 2004 in the presence of financial constraints, using panel data. A Bayesian econometric model incorporating ridge regression was used to handle multicollinearity among the variables in the model. Prior distributions are assumed for the parameters, classifying the model into random or fixed effects. We used a Bayesian approach to estimate the parameters, considering normal and Student-t distributions for the errors, and assumed that the initial values for the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparisons. Twenty models were tested, and the results indicated that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably due to their lower profitability indexes, higher fixed costs and higher degree of property diversification.
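As a rough illustration of the ridge idea invoked above (a sketch on made-up collinear data, not the authors' estimation code), the ridge estimate can be read as the posterior mode of a linear model with a zero-mean Gaussian prior on the coefficients:

import numpy as np

def bayesian_ridge_mode(X, y, lam):
    """MAP estimate of beta under y ~ N(X beta, sigma^2 I) with prior
    beta ~ N(0, (sigma^2/lam) I): the classical ridge solution."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy collinear design: two nearly identical regressors.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1 + 1e-3 * rng.normal(size=200)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=200)
print(bayesian_ridge_mode(X, y, lam=0.0))   # near-singular: unstable estimates
print(bayesian_ridge_mode(X, y, lam=10.0))  # prior shrinkage: stable estimates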
Abstract:
To develop a longitudinal model predicting hand-and-foot syndrome (HFS) dynamics in patients receiving capecitabine, data from two large phase III studies were used. Of 595 patients in the capecitabine arms, 400 were randomly selected to build the model and the other 195 were assigned to model validation. A score for the risk of developing HFS was modeled using a proportional odds model, a sigmoidal maximum-effect model driven by capecitabine accumulation (estimated through a kinetic-pharmacodynamic model) and a Markov process. The lower the calculated creatinine clearance value at inclusion, the higher the risk of HFS. Model validation was performed by visual and statistical predictive checks. The predictive dynamic model of HFS in patients receiving capecitabine allows the prediction of toxicity risk based on cumulative capecitabine dose and previous HFS grade. This dose-toxicity model will be useful in developing Bayesian individual treatment adaptations and may be of use in the clinic.
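A minimal sketch of the proportional-odds piece of such a model (illustrative cutpoints and risk score, not the fitted values from the paper): the probability of reaching at least grade g is a logistic function of a grade-specific cutpoint plus a score eta that would aggregate exposure and previous-grade (Markov) effects.

import numpy as np

def hfs_grade_probs(eta, cutpoints):
    """Proportional-odds sketch: P(grade >= g) = logistic(cutpoint_g + eta).
    Returns P(grade = 0), ..., P(grade = G) by differencing."""
    p_ge = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) + eta)))
    p_ge = np.concatenate([[1.0], p_ge, [0.0]])
    return -np.diff(p_ge)

# Hypothetical numbers: higher cumulative exposure raises the risk score eta.
print(hfs_grade_probs(eta=-1.0, cutpoints=[-0.5, -2.0, -4.0]))
print(hfs_grade_probs(eta=+1.0, cutpoints=[-0.5, -2.0, -4.0]))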
Abstract:
Fuzzy Bayesian tests were performed to evaluate whether the mothers' seroprevalence and the children's seroconversion to measles vaccine could be considered "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which allows an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, yielding the simplest (zero-order) Takagi-Sugeno-Kang model. This fuzzy approach is compared with a classical one, in which a classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation normally seen as competitive.
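For orientation, a zero-order Takagi-Sugeno-Kang system of the kind described reduces to a membership-weighted average of constant rule outputs; the sketch below uses made-up firing degrees and coverage values:

import numpy as np

def tsk_zero_order(memberships, consequents):
    """Zero-order TSK inference: each rule fires to a fuzzy membership
    degree and outputs a constant; the crisp output is the
    membership-weighted average of those constants."""
    w = np.asarray(memberships, dtype=float)
    c = np.asarray(consequents, dtype=float)
    return float(np.sum(w * c) / np.sum(w))

# Hypothetical rules mapping "low"/"high" seroprevalence (etc.) to a
# recommended vaccine-coverage percentage.
firing = [0.2, 0.7, 0.1]          # degrees to which each rule applies
coverage = [80.0, 95.0, 99.0]     # constant consequent of each rule
print(tsk_zero_order(firing, coverage))  # -> 92.4, the recommended coverage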
Abstract:
The benefits of breastfeeding for children's health have been highlighted in many studies. The innovative aspect of the present study lies in its use of a multilevel model, a technique that has rarely been applied to studies on breastfeeding. The data were drawn from a larger study, the Family Budget Survey (Pesquisa de Orcamentos Familiares), carried out in Brazil between 2002 and 2003 with a sample of 48,470 households. A representative national sample of 1477 infants aged 0-6 months was used. The statistical analysis was performed using a multilevel model with two levels, grouped by region. In Brazil, breastfeeding prevalence was 58%. The factors that negatively influenced breastfeeding were more than four residents living in the same household [odds ratio (OR) = 0.68, 90% confidence interval (CI) = 0.51-0.89] and mothers aged 30 years or more (OR = 0.68, 90% CI = 0.53-0.89). The factors that positively influenced breastfeeding were higher socio-economic level (OR = 1.37, 90% CI = 1.01-1.88), families with more than two infants under 5 years (OR = 1.25, 90% CI = 1.00-1.58) and residence in rural areas (OR = 1.25, 90% CI = 1.00-1.58). Although the majority of mothers were aware of the value of maternal milk and breastfed their babies, the prevalence of breastfeeding remains lower than the rate advised by the World Health Organization, and the number of residents living in the same household and maternal age of 30 years or older were both associated with cessation of breastfeeding before 6 months.
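To make the two-level structure concrete, here is a small simulation sketch (all coefficients illustrative; log 0.68 is about -0.39, matching the maternal-age OR above) of infants nested in regions with a region-level random intercept; an odds ratio is recovered from a fitted coefficient as OR = exp(beta).

import numpy as np

rng = np.random.default_rng(1)
n_regions, n_per_region = 5, 300
region_effect = rng.normal(0.0, 0.4, size=n_regions)   # level-2 variation
beta = np.array([0.3, -0.39])  # intercept; log-odds for "mother >= 30 yr"

prevalence = []
for r in range(n_regions):
    older_mother = rng.integers(0, 2, size=n_per_region)
    logit = beta[0] + beta[1] * older_mother + region_effect[r]
    breastfed = rng.random(n_per_region) < 1 / (1 + np.exp(-logit))
    prevalence.append(breastfed.mean())
print("simulated breastfeeding prevalence by region:", np.round(prevalence, 2))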
Abstract:
In 2004 the National Household Survey (Pesquisa Nacional por Amostra de Domicilios - PNAD) estimated the prevalence of food and nutrition insecurity in Brazil. However, PNAD data cannot be disaggregated at the municipal level. The objective of this study was to build a statistical model to predict severe food insecurity for Brazilian municipalities based on the PNAD dataset. Exclusion criteria were: incomplete food security data (19.30%); informants younger than 18 years old (0.07%); collective households (0.05%); and households headed by indigenous persons (0.19%). The modeling was carried out in three stages, beginning with the selection of variables related to food insecurity using univariate logistic regression. The variables chosen to construct the municipal estimates were selected from those included in the PNAD as well as the 2000 Census. Multivariate logistic regression was then performed, removing non-significant variables and adjusting the odds ratios by multiple logistic regression. The Wald test was applied to check the significance of the coefficients in the logistic equation. The final model included the variables: per capita income; years of schooling; race and gender of the household head; urban or rural residence; access to public water supply; presence of children; total number of household inhabitants; and state of residence. The adequacy of the model was tested using the Hosmer-Lemeshow test (p = 0.561) and the ROC curve (area = 0.823). These tests indicated that the model has strong predictive power and can be used to determine household food insecurity in Brazilian municipalities, suggesting that similar predictive models may be useful tools in other Latin American countries.
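The pipeline described (screening, multiple logistic regression, ROC check) can be sketched on synthetic data as follows; the variable names and coefficients are invented stand-ins for the survey predictors:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 2000
income = rng.normal(size=n)        # stand-in for per capita income
schooling = rng.normal(size=n)     # stand-in for years of schooling
logit = -1.5 - 1.2 * income - 0.8 * schooling
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # severe food insecurity

X = np.column_stack([income, schooling])
model = LogisticRegression().fit(X, y)
print("adjusted odds ratios:", np.exp(model.coef_))
print("ROC AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))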
Abstract:
Scrotal circumference data from 47,605 Nellore young bulls, measured at around 18 mo of age (SC18), were analyzed simultaneously with 27,924 heifer pregnancy (HP) and 80,831 stayability (STAY) records to estimate their additive genetic relationships. Additionally, the possibility that economically relevant traits measured directly in females could replace SC18 as a selection criterion was assessed. Heifer pregnancy was defined as the observation that a heifer conceived and remained pregnant, as assessed by rectal palpation at 60 d. Females were exposed to sires for the first time at about 14 mo of age (between 11 and 16 mo). Stayability was defined as whether or not a cow calved every year up to 5 yr of age, given the opportunity to breed. A Bayesian linear-threshold-threshold analysis via the Gibbs sampler was used to estimate the variance and covariance components of the multitrait model. Heritability estimates were 0.42 +/- 0.01, 0.53 +/- 0.03 and 0.10 +/- 0.01 for SC18, HP and STAY, respectively. The genetic correlation estimates were 0.29 +/- 0.05 between SC18 and HP, 0.19 +/- 0.05 between SC18 and STAY, and 0.64 +/- 0.07 between HP and STAY. The residual correlation estimate between HP and STAY was -0.08 +/- 0.03. The heritability values indicate the existence of considerable genetic variance for SC18 and HP. However, the genetic correlations between SC18 and the female reproductive traits analyzed in the present study can only be considered moderate. The small residual correlation between HP and STAY suggests that environmental effects common to both traits are not major. The large heritability estimate for HP and the high genetic correlation between HP and STAY confirm that EPD for HP can be used to select bulls for the production of precocious, fertile and long-lived daughters. Moreover, SC18 could be incorporated in multitrait analyses to improve the accuracy of predicting the HP genetic merit of young bulls.
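The quantities reported above follow directly from the estimated (co)variance components; a small sketch of the definitions (toy inputs, not the paper's posterior draws):

import numpy as np

def heritability(var_additive, var_residual):
    """Narrow-sense heritability: the share of phenotypic variance that is
    additive genetic, h^2 = sigma_a^2 / (sigma_a^2 + sigma_e^2)."""
    return var_additive / (var_additive + var_residual)

def genetic_correlation(cov_a_xy, var_a_x, var_a_y):
    """Genetic correlation between traits x and y:
    r_g = cov_a(x, y) / sqrt(var_a(x) * var_a(y))."""
    return cov_a_xy / np.sqrt(var_a_x * var_a_y)

# Illustrative variance components only.
print(heritability(0.42, 0.58))              # ~0.42, as reported for SC18
print(genetic_correlation(0.20, 0.42, 0.53)) # ~0.42 on these toy inputs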
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parameterizations of the deceleration parameter (constant, linear and abrupt transition) and two models explicitly parametrized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat Lambda CDM model. Analytical expressions and estimates are given for the present-day deceleration and cosmic jerk parameters (q_0 and j_0) and for the transition redshift (z_t) between a past phase of cosmic deceleration and the current phase of acceleration. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernova data. For the most realistic kinematic models the 1-sigma confidence limits imply the ranges q_0 in [-0.96, -0.46], j_0 in [-3.2, -0.3] and z_t in [0.36, 0.84], which are compatible with the Lambda CDM predictions q_0 = -0.57 +/- 0.04, j_0 = -1 and z_t = 0.71 +/- 0.08. We find that even very simple kinematic models describe the data as well as the concordance Lambda CDM model, and that the current observations are not powerful enough to discriminate among all of them.
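As a worked example of the linear parameterization mentioned above, take q(z) = q_0 + q_1 z; the transition redshift solves q(z_t) = 0, so z_t = -q_0 / q_1. The sketch below plugs in the Lambda CDM-like values quoted in the abstract purely for illustration:

import numpy as np

q0, zt = -0.57, 0.71
q1 = -q0 / zt                   # slope implied by q(z_t) = 0
z = np.linspace(0.0, 1.5, 7)
print("q(z) =", np.round(q0 + q1 * z, 2))
# Negative values: accelerating expansion today; positive values at
# higher redshift: the decelerating past phase.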
Abstract:
Attention is a critical mechanism for visual scene analysis. By means of attention, it is possible to break down the analysis of a complex scene into the analysis of its parts through a selection process. Empirical studies demonstrate that attentional selection operates on visual objects as wholes. We present a neurocomputational model of object-based selection in the framework of oscillatory correlation. By segmenting an input scene and integrating the segments with their conspicuity obtained from a saliency map, the model selects salient objects rather than salient locations. The proposed system is composed of three modules: a saliency map providing saliency values of image locations, image segmentation for breaking the input scene into a set of objects, and object selection, which allows one object of the scene to be selected at a time. The object selection system has been applied to real gray-level and color images, and the simulation results show the effectiveness of the system.
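The key contrast with location-based attention can be sketched in a few lines: given a segmentation label map and a saliency map, select the segment with the highest aggregate saliency rather than the single most salient pixel (toy arrays below; aggregating by the mean is one simple choice, not necessarily the paper's):

import numpy as np

def select_salient_object(labels, saliency):
    """Object-based selection sketch: pick the segment (object) whose
    mean saliency is highest, ignoring the background (label 0)."""
    ids = np.unique(labels)
    ids = ids[ids != 0]
    means = [saliency[labels == i].mean() for i in ids]
    return ids[int(np.argmax(means))]

# Toy 4x4 scene with two objects (labels 1 and 2) on background 0.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 0, 0],
                   [2, 2, 0, 0]])
saliency = np.where(labels == 2, 0.9, np.where(labels == 1, 0.4, 0.1))
print(select_salient_object(labels, saliency))  # -> 2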
Abstract:
In this paper, we compare the performance of two statistical approaches for the analysis of data from social research. In the first approach, we use normal models with joint regression modelling for the mean and for the variance heterogeneity. In the second approach, we use hierarchical models. In the first case, individual and social variables are included as explanatory variables in the regression modelling of both the mean and the variance, while in the second case the variance at level 1 of the hierarchical model depends on the individuals (their age), and at level 2 the variance is assumed to change according to socioeconomic stratum. Applying these methodologies, we analyze a Colombian height data set to find differences that can be explained by socioeconomic conditions. We also present some theoretical and empirical results concerning the two models. From this comparative study, we conclude that it is better to jointly model the mean and the variance heterogeneity in all cases. We also observe that the Gibbs sampling chain used in the Markov chain Monte Carlo estimation of the joint mean-variance model converges quickly.
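The first approach can be written as mu_i = x_i' beta with log(sigma_i^2) = z_i' gamma, so covariates explain the variance as well as the mean. The sketch below codes its negative log-likelihood on synthetic data (all names and values hypothetical); it could be minimized with, e.g., scipy.optimize.minimize:

import numpy as np

def neg_log_lik(params, X, Z, y):
    """Joint mean-variance normal model: mean mu_i = X_i' beta and
    log-variance log(sigma_i^2) = Z_i' gamma."""
    p = X.shape[1]
    beta, gamma = params[:p], params[p:]
    mu = X @ beta
    log_var = Z @ gamma
    return 0.5 * np.sum(log_var + (y - mu) ** 2 / np.exp(log_var))

# Hypothetical design: intercept plus age, for both mean and variance.
rng = np.random.default_rng(3)
age = rng.uniform(5, 18, size=100)
X = Z = np.column_stack([np.ones(100), age])
y = 80 + 6 * age + rng.normal(scale=np.exp(0.5 * (0.1 * age)), size=100)
print(neg_log_lik(np.array([80, 6, 0.0, 0.1]), X, Z, y))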
Abstract:
In this paper, we introduce a Bayesian analysis for bioequivalence data assuming multivariate pharmacokinetic measures. With the introduction of correlation parameters between the pharmacokinetic measures, or between the random effects in the bioequivalence models, we observe a good improvement in the bioequivalence results. These results are of great practical interest since they can yield higher accuracy and reliability for the bioequivalence tests usually required by regulatory agencies. An example is introduced to illustrate the proposed methodology by comparing the usual univariate bioequivalence methods with the multivariate approach. We also consider some standard Bayesian discrimination methods to choose the best model to be used in bioequivalence studies.
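For contrast with the multivariate Bayesian approach, the usual univariate benchmark is classical average bioequivalence: a 90% confidence interval for the geometric mean ratio of one pharmacokinetic measure, declared bioequivalent if contained in [0.80, 1.25]. A self-contained sketch on hypothetical paired log-AUC data (this is the standard frequentist test, not the paper's model):

import numpy as np
from scipy import stats

def average_bioequivalence(log_test, log_ref, level=0.90):
    """90% CI for the geometric mean ratio of a PK measure (e.g. AUC);
    bioequivalent if the whole interval lies in [0.80, 1.25]."""
    d = np.asarray(log_test) - np.asarray(log_ref)   # paired log-scale data
    n = d.size
    half = stats.t.ppf((1 + level) / 2, n - 1) * d.std(ddof=1) / np.sqrt(n)
    lo, hi = np.exp(d.mean() - half), np.exp(d.mean() + half)
    return lo, hi, (0.80 <= lo and hi <= 1.25)

rng = np.random.default_rng(4)
log_ref = rng.normal(np.log(100), 0.2, size=24)     # hypothetical log-AUCs
log_test = log_ref + rng.normal(0.02, 0.1, size=24)
print(average_bioequivalence(log_test, log_ref))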
Abstract:
In this paper, we consider the problem of estimating the number of times an air quality standard is exceeded in a given period of time. A non-homogeneous Poisson model is proposed to analyse this issue. The rate at which the Poisson events occur is given by a rate function lambda(t), t >= 0, which depends on parameters that need to be estimated. Two forms of lambda(t) are considered: one of the Weibull form and the other of the exponentiated-Weibull form. Parameter estimation is carried out using a Bayesian formulation based on the Gibbs sampling algorithm. The assignment of the prior distributions for the parameters is made in two stages: in the first stage, non-informative prior distributions are considered; using the information provided by the first stage, more informative prior distributions are used in the second. The theoretical development is applied to data provided by the monitoring network of Mexico City. The rate function that best fits the data varies according to the region of the city and/or the threshold considered: in some cases the best fit is the Weibull form, and in others the exponentiated-Weibull.
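A sketch of the Weibull-rate case (illustrative parameter values, not the paper's posteriors): with lambda(t) = (alpha/beta) * (t/beta)^(alpha-1), the expected number of exceedances on [0, T] is the integrated rate Lambda(T) = (T/beta)^alpha.

import numpy as np

def weibull_rate(t, alpha, beta):
    """Weibull intensity lambda(t) = (alpha/beta) * (t/beta)**(alpha - 1)."""
    return (alpha / beta) * (t / beta) ** (alpha - 1)

def expected_exceedances(T, alpha, beta):
    """Mean count of a non-homogeneous Poisson process on [0, T]:
    Lambda(T) = (T/beta)**alpha."""
    return (T / beta) ** alpha

# Hypothetical parameters; expected threshold exceedances over 365 days.
alpha, beta = 0.9, 5.0
print(weibull_rate(100.0, alpha, beta))          # instantaneous rate at day 100
print(expected_exceedances(365.0, alpha, beta))  # expected count over the year
# The exponentiated-Weibull form adds a further shape parameter to this rate.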
Abstract:
In this paper we make use of stochastic volatility models to analyse the behaviour of a series of weekly ozone averages. The models considered here have been used previously in problems related to financial time series. Two models are considered, and their parameters are estimated using a Bayesian approach based on Markov chain Monte Carlo (MCMC) methods. Both models are applied to data provided by the monitoring network of the Metropolitan Area of Mexico City. The selection of the best model for that specific data set is performed using the deviance information criterion and the conditional predictive ordinate method.
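For orientation, a canonical stochastic volatility model of this family pairs an observed series with a latent AR(1) log-volatility; the simulation sketch below uses illustrative parameter values, not the paper's posterior estimates:

import numpy as np

# h_t = mu + phi * (h_{t-1} - mu) + sigma_eta * eta_t   (latent log-volatility)
# y_t = exp(h_t / 2) * eps_t,  with eta_t, eps_t ~ N(0, 1)
rng = np.random.default_rng(5)
mu, phi, sigma_eta, n = -1.0, 0.95, 0.2, 300
h = np.empty(n)
h[0] = mu
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=n)   # observed (e.g. ozone) series
print(y[:5].round(3))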
Abstract:
In this paper, we formulate a flexible density function from the selection mechanism viewpoint (see, for example, Bayarri and DeGroot (1992) and Arellano-Valle et al. (2006)) which has appealing biological and physical interpretations. The new density function contains as special cases many models that have been proposed recently in the literature. In constructing this model, we assume that the number of competing causes of the event of interest has a general discrete distribution characterized by its probability generating function. This function plays an important role in the selection procedure as well as in computing the conditional personal cure rate. Finally, we illustrate how various models can be deduced as special cases of the proposed model.
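To see why the probability generating function matters in this construction: if the number N of competing causes has pgf G and each cause's event time has survival S(t), the population survival is S_pop(t) = G(S(t)) and the cure fraction is G(0) = P(N = 0). The sketch below takes N Poisson, recovering the familiar promotion-time cure model as one special case (parameter values are illustrative):

import numpy as np

def pop_survival(t, theta, base_survival):
    """Competing-causes survival with Poisson(theta) causes:
    G(s) = exp(-theta * (1 - s)), so S_pop(t) = exp(-theta * (1 - S(t)))."""
    s = base_survival(t)
    return np.exp(-theta * (1.0 - s))

base = lambda t: np.exp(-0.5 * t)        # hypothetical exponential S(t)
t = np.array([0.0, 1.0, 5.0, 50.0])
print(pop_survival(t, theta=1.2, base_survival=base))
print("cure fraction:", np.exp(-1.2))    # limit of S_pop(t) as t -> infinity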
Abstract:
In this paper we discuss inference aspects of skew-normal nonlinear regression models following both classical and Bayesian approaches, extending the usual normal nonlinear regression models. The univariate skew-normal distribution used in this work was introduced by Sahu et al. (Can J Stat 29:129-150, 2003); it is attractive because estimation of the skewness parameter is less difficult than for the Azzalini (Scand J Stat 12:171-178, 1985) distribution and, moreover, it allows easy implementation of the EM algorithm. As an illustration of the proposed methodology, we consider a data set previously analyzed in the literature under normality.
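For orientation only, SciPy ships the Azzalini-type skew-normal, whose density is f(y) = (2/omega) * phi(z) * Phi(alpha * z) with z = (y - mu)/omega; the Sahu et al. (2003) form adopted in the paper uses a different parameterization, but both reduce to the normal when the skewness parameter is zero:

import numpy as np
from scipy import stats

y = np.linspace(-4, 4, 9)
print(stats.skewnorm.pdf(y, a=0.0))    # a = 0: ordinary standard normal
print(stats.skewnorm.pdf(y, a=3.0))    # a > 0: right-skewed density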
Abstract:
In interval-censored survival data, the event of interest is not observed exactly but is only known to occur within some time interval. Such data arise very frequently. In this paper we are concerned only with parametric forms, and so a location-scale regression model based on the exponentiated Weibull distribution is proposed for modeling interval-censored data. We show that the proposed log-exponentiated Weibull regression model for interval-censored data represents a parametric family that includes other regression models broadly used in lifetime data analysis. For the parameters of the proposed model we employ a frequentist analysis, a jackknife estimator, a parametric bootstrap and a Bayesian analysis. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and present some ways to assess global influence. Furthermore, various simulations are performed for different parameter settings, sample sizes and censoring percentages; in addition, the empirical distribution of some modified residuals is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a modified deviance residual in log-exponentiated Weibull regression models for interval-censored data.
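The interval-censored likelihood underlying all of these analyses has a simple form: an event known only to lie in (L_i, R_i] contributes F(R_i) - F(L_i), with F the exponentiated-Weibull CDF. A minimal sketch with hypothetical inspection intervals and parameter values:

import numpy as np

def expweib_cdf(t, alpha, k, lam):
    """Exponentiated-Weibull CDF: F(t) = (1 - exp(-(t/lam)**k))**alpha."""
    return (1.0 - np.exp(-(t / lam) ** k)) ** alpha

def interval_censored_loglik(L, R, alpha, k, lam):
    """Each event known only to lie in (L_i, R_i] contributes
    log[F(R_i) - F(L_i)] to the log-likelihood."""
    return np.sum(np.log(expweib_cdf(R, alpha, k, lam)
                         - expweib_cdf(L, alpha, k, lam)))

L = np.array([0.0, 2.0, 4.0])   # hypothetical interval lower bounds
R = np.array([2.0, 4.0, 8.0])   # hypothetical interval upper bounds
print(interval_censored_loglik(L, R, alpha=1.5, k=1.2, lam=5.0))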