995 results for Non informative priors


Relevance: 100.00%

Publisher:

Abstract:

We analyze crash data collected by the Iowa Department of Transportation using Bayesian methods. The data set includes monthly crash counts, estimated monthly traffic volumes, site length and other information collected at 30 paired sites in Iowa over more than 20 years, during which an intervention experiment was set up. The intervention consisted of converting 15 undivided road segments from four lanes to three, while an additional 15 segments, thought to be comparable in terms of traffic-safety-related characteristics, were not converted. The main objective of this work is to find out whether the intervention reduces the number of crashes and the crash rates at the treated sites. We fitted a hierarchical Poisson regression model with a change-point to the number of monthly crashes per mile at each of the sites. Explanatory variables in the model included estimated monthly traffic volume, time, an indicator reflecting whether the site was a “treatment” or a “control” site, and various interactions. We accounted for seasonal effects in the number of crashes at a site by including smooth trigonometric functions with three different periods to reflect the four seasons of the year. A change-point at the month and year in which the intervention was completed at treated sites was also included. The number of crashes at a site can be assumed to follow a Poisson distribution. To estimate the association between crashes and the explanatory variables, we used a log link function and added a random effect to account for overdispersion and for autocorrelation among observations obtained at the same site. We used proper but non-informative priors for all parameters in the model, and carried out all calculations using Markov chain Monte Carlo methods implemented in WinBUGS.
We evaluated the effect of the four-to-three-lane conversion by comparing the expected number of crashes per year per mile during the years preceding and following the conversion for treatment and control sites. We estimated this difference using the observed traffic volumes at each site and also on a per-100,000,000-vehicles basis. We also conducted a prospective analysis to forecast the expected number of crashes per mile at each site in the study one year, three years and five years after the four-to-three-lane conversion. Posterior predictive distributions of the number of crashes, the crash rate and the percent reduction in crashes per mile were obtained for each site for the months of January and June one, three and five years after completion of the intervention. The model appears to fit the data well. We found that at most sites, the intervention was effective and reduced the number of crashes. Overall, and for the observed traffic volumes, the reduction in the expected number of crashes per year and mile at converted sites was 32.3% (31.4% to 33.5% with 95% probability), while at the control sites the reduction was estimated to be 7.1% (5.7% to 8.2% with 95% probability). When the reduction in the expected number of crashes per year, mile and 100,000,000 AADT was computed, the estimates were 44.3% (43.9% to 44.6%) and 25.5% (24.6% to 26.0%) for converted and control sites, respectively. In both cases, the percent reduction in the expected number of crashes during the years following the conversion was significantly larger at converted sites than at control sites, even though the number of crashes appears to decline over time at all sites. Results indicate that the reduction in the expected number of crashes per mile has a steeper negative slope at converted than at control sites.
Consistent with this, the forecasted reduction in the number of crashes per year and mile during the years after completion of the conversion is more pronounced at converted sites than at control sites. Seasonal effects on the number of crashes have been well documented. In this data set we found that, as expected, the expected number of monthly crashes per mile tends to be higher during winter months than during the rest of the year. Perhaps more interestingly, we found that there is an interaction between the four-to-three-lane conversion and season: the reduction in the number of crashes appears to be more pronounced during months when the weather is nice than during other times of the year, even though a reduction was estimated for the entire year. Thus, it appears that the four-to-three-lane conversion, while effective year-round, is particularly effective in reducing the expected number of crashes in nice weather.
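As a rough illustration of the model structure described above (not the authors' WinBUGS code), the mean of the monthly crash counts can be sketched with a log link, three trigonometric seasonal periods, and a change-point at the conversion month; every coefficient and parameter value below is invented for the sketch.

```python
import numpy as np

# Hypothetical sketch of the mean structure: monthly crashes per mile follow
# a Poisson with a log link, seasonal trigonometric terms with three periods,
# and a change-point at the month the conversion was completed. All numbers
# are illustrative, not the paper's estimates.
rng = np.random.default_rng(42)

months = np.arange(120)                   # 10 years of monthly data
conversion_month = 60                     # change-point for a treated site
log_volume = np.log(1e4 + 100 * months)   # assumed monthly traffic volumes

def log_rate(t, treated):
    # three seasonal periods, as in the paper's smooth trigonometric terms
    season = sum(0.1 * np.cos(2 * np.pi * t / p) for p in (12, 6, 4))
    post = (t >= conversion_month).astype(float)   # after the change-point
    return (-3.0 + 0.2 * (log_volume - log_volume.mean())
            + season
            - 0.4 * post * treated)                # illustrative intervention effect

lam_treated = np.exp(log_rate(months, treated=1.0))
lam_control = np.exp(log_rate(months, treated=0.0))
crashes = rng.poisson(lam_treated)        # simulated monthly crash counts
```

Before the change-point the two expected-rate curves coincide; after it, the treated rate is a constant multiplicative factor exp(-0.4) of the control rate, which is what the change-point-plus-indicator parameterization encodes.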

Relevance: 100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 100.00%

Publisher:

Abstract:

In Bayesian inference it is often desirable to have a posterior density that reflects mainly the information in the sample data. To achieve this it is important to employ prior densities that add little information to the sample. The literature offers many such prior densities, for example Jeffreys (1967), Lindley (1956, 1961), Hartigan (1964), Bernardo (1979), Zellner (1984), and Tibshirani (1989). In the present article, we compare the posterior densities of the reliability function R(t) of a Weibull distribution obtained using the Jeffreys, maximal data information (Zellner, 1984), Tibshirani, and reference priors.
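The kind of comparison described above can be sketched with a grid posterior for the Weibull reliability R(t) = exp(-(t/s)^k) under one choice of prior; here a Jeffreys-type prior p(s) ∝ 1/s for the scale s with the shape k treated as known. The data, grid, and all parameter values are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Grid-posterior sketch for the Weibull reliability R(t) = exp(-(t/s)^k)
# under a Jeffreys-type prior p(s) proportional to 1/s (shape k known).
# Synthetic data; nothing here reproduces the article's analysis.
rng = np.random.default_rng(0)
k = 2.0                                    # assumed known shape
data = 3.0 * rng.weibull(k, size=50)       # synthetic lifetimes, true scale 3

s_grid = np.linspace(0.5, 10.0, 2000)
ds = s_grid[1] - s_grid[0]
loglik = np.array([np.sum(np.log(k / s) + (k - 1) * np.log(data / s)
                          - (data / s) ** k) for s in s_grid])
log_post = loglik - np.log(s_grid)         # Jeffreys-type prior: 1/s
post = np.exp(log_post - log_post.max())
post /= post.sum() * ds                    # normalize on the grid

t = 2.0
R_grid = np.exp(-(t / s_grid) ** k)        # reliability along the grid
R_mean = np.sum(R_grid * post) * ds        # posterior mean of R(t)
```

Swapping the prior line for another candidate (maximal data information, reference, and so on) and re-running the same grid computation is one way to see how the posterior of R(t) shifts with the prior choice.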

Relevance: 100.00%

Publisher:

Abstract:

To estimate causal relationships, time series econometricians must be aware of spurious correlation, a problem first mentioned by Yule (1926). To deal with this problem, one can work either with differenced series or with multivariate models: VAR and vector error-correction (VEC or VECM) models. These models usually include at least one cointegration relation. Although the Bayesian literature on VAR/VEC models is quite advanced, Bauwens et al. (1999) highlighted that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results". The present article applies the Full Bayesian Significance Test (FBST), especially designed to deal with sharp hypotheses, to cointegration rank selection tests in VECM time series models. It illustrates the FBST implementation using both simulated data sets and data sets available in the literature. As an illustration, standard non-informative priors are used.
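The FBST evidence value can be sketched generically: given posterior draws, the e-value for a sharp hypothesis is the posterior mass outside the highest-density set that is tangential to the hypothesis. The one-dimensional normal "posterior" and the kernel-density approximation below are illustrative stand-ins for an actual VECM cointegration-rank posterior, which would come from an MCMC run.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch of the FBST evidence value (e-value) for a sharp hypothesis
# H0: theta = 0, computed from posterior draws. The normal sample below is an
# illustrative stand-in for a real posterior, not a VECM computation.
rng = np.random.default_rng(1)
draws = rng.normal(loc=0.3, scale=0.15, size=4000)  # stand-in posterior sample

kde = gaussian_kde(draws)
f0 = kde(0.0)[0]            # posterior density at the sharp hypothesis
f_draws = kde(draws)        # posterior density at each draw

# Tangential set: points with posterior density above f0. The e-value is the
# posterior mass outside that set; small values count against H0.
ev = 1.0 - np.mean(f_draws > f0)
```

With the posterior centred two standard deviations away from the hypothesized value, the e-value comes out small, which is the FBST's way of flagging evidence against the sharp hypothesis.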

Relevance: 100.00%

Publisher:

Abstract:

The generalized exponential distribution, proposed by Gupta and Kundu (1999), is a good alternative to standard lifetime distributions such as the exponential, Weibull or gamma. Several authors have considered the problem of Bayesian estimation of the parameters of the generalized exponential distribution, assuming independent gamma priors and other informative priors. In this paper, we consider a Bayesian analysis of the generalized exponential distribution assuming the conventional non-informative prior distributions, such as the Jeffreys and reference priors, to estimate the parameters. These priors are compared with independent gamma priors for both parameters. The comparison is carried out by examining the frequentist coverage probabilities of Bayesian credible intervals. We show that the maximal data information prior leads to an improper posterior distribution for the parameters of a generalized exponential distribution. It is also shown that the choice of the parameter of interest is very important for the reference prior: different choices lead to different reference priors in this case. Numerical inference for the parameters is illustrated by considering data sets of different sizes and using MCMC (Markov chain Monte Carlo) methods.
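The comparison criterion named above, frequentist coverage of Bayesian credible intervals, can be sketched on a simpler model where the posterior is available in closed form. Here the exponential rate with a Jeffreys prior p(r) ∝ 1/r (posterior Gamma(n, Σx)) stands in for the generalized exponential posteriors of the paper; the true rate, sample size, and replicate count are illustrative.

```python
import numpy as np
from scipy import stats

# Sketch of estimating the frequentist coverage of a 95% equal-tailed
# Bayesian credible interval. Exponential model with Jeffreys prior, whose
# posterior for the rate r is Gamma(n, sum(x)); an illustrative stand-in
# for the generalized exponential case in the paper.
rng = np.random.default_rng(7)
true_rate, n, reps = 1.5, 30, 500

hits = 0
for _ in range(reps):
    x = rng.exponential(1.0 / true_rate, size=n)
    post = stats.gamma(a=n, scale=1.0 / x.sum())   # Jeffreys posterior for r
    lo, hi = post.ppf(0.025), post.ppf(0.975)
    hits += lo <= true_rate <= hi

coverage = hits / reps   # should be close to the nominal 0.95
```

Running the same loop with an informative prior in place of the Jeffreys prior is the paper's style of comparison: the prior whose credible intervals hold their nominal frequentist coverage across sample sizes is preferred.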

Relevance: 90.00%

Publisher:

Abstract:

In this paper, we consider the problem of estimating the number of times an air quality standard is exceeded in a given period of time. A non-homogeneous Poisson model is proposed to analyse this issue. The rate at which the Poisson events occur is given by a rate function lambda(t), t >= 0. This rate function also depends on some parameters that need to be estimated. Two forms of lambda(t), t >= 0 are considered: one of the Weibull form and the other of the exponentiated-Weibull form. Parameter estimation is performed using a Bayesian formulation based on the Gibbs sampling algorithm. The prior distributions for the parameters are assigned in two stages. In the first stage, non-informative prior distributions are used. Using the information provided by the first stage, more informative prior distributions are used in the second. The theoretical development is applied to data provided by the monitoring network of Mexico City. The rate function that best fits the data varies according to the region of the city and/or the threshold that is considered. In some cases the best fit is the Weibull form and in others the best option is the exponentiated-Weibull form. Copyright (C) 2007 John Wiley & Sons, Ltd.
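The Weibull-form case above can be sketched directly: the integrated rate m(t) gives the expected number of exceedances in [0, t], and a sample path of the non-homogeneous Poisson process can be simulated by mapping uniform points back through the inverse of m. The parameter values are illustrative, not the fitted Mexico City ones.

```python
import numpy as np

# Non-homogeneous Poisson process with Weibull-form rate
# lambda(t) = (alpha / sigma) * (t / sigma) ** (alpha - 1), whose integrated
# rate m(t) = (t / sigma) ** alpha is the expected number of exceedances in
# [0, t]. Illustrative parameters only.
rng = np.random.default_rng(3)
alpha, sigma, T = 1.8, 30.0, 365.0

def m(t):
    return (t / sigma) ** alpha            # expected exceedances by time t

# Simulate one path: draw N(T) ~ Poisson(m(T)), then place the events by
# mapping sorted uniforms on [0, m(T)] through the inverse of m.
n_events = rng.poisson(m(T))
u = np.sort(rng.uniform(0.0, m(T), size=n_events))
event_times = sigma * u ** (1.0 / alpha)   # inverse of m applied to u
```

With alpha > 1 the rate increases over time, so simulated exceedances cluster toward the end of the window; the Bayesian step in the paper is then to infer alpha and sigma from observed exceedance times.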

Relevance: 90.00%

Publisher:

Abstract:

The exponential-logarithmic distribution is a new lifetime distribution with decreasing failure rate and interesting applications in the biological and engineering sciences; thus, a Bayesian analysis of its parameters is desirable. Bayesian estimation requires the selection of prior distributions for all parameters of the model. In this case, researchers usually seek a prior that carries little information about the parameters, allowing the data to be highly informative relative to the prior. Assuming some non-informative prior distributions, we present a Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. The Jeffreys prior is derived for the parameters of the exponential-logarithmic distribution and compared with other common priors such as the beta, gamma, and uniform distributions. In this article, we show through a simulation study that the maximum likelihood estimate may not exist except under restrictive conditions. In addition, the posterior density is sometimes bimodal when an improper prior density is used. © 2013 Copyright Taylor and Francis Group, LLC.
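The distribution itself can be sketched numerically. Using the density of Tahmasbi and Rezaei (2008), f(x; p, b) = b(1-p)e^{-bx} / ((-ln p)(1 - (1-p)e^{-bx})) with survival S(x) = ln(1 - (1-p)e^{-bx}) / ln p, the code below checks that the density integrates to one and exhibits the decreasing failure rate the abstract mentions. The parameter values are illustrative.

```python
import numpy as np

# Exponential-logarithmic density and survival function (Tahmasbi and
# Rezaei, 2008), checked numerically. Parameters p in (0,1) and rate b > 0
# are illustrative choices.
p, b = 0.3, 1.2

def el_pdf(x):
    u = (1 - p) * np.exp(-b * x)
    return b * u / ((-np.log(p)) * (1 - u))

def el_sf(x):
    return np.log(1 - (1 - p) * np.exp(-b * x)) / np.log(p)

# crude Riemann check that the density integrates to one
x = np.linspace(0.0, 60.0, 200001)
total = np.sum(el_pdf(x)) * (x[1] - x[0])

# hazard h(x) = f(x) / S(x): decreasing, tending to b as x grows
h0 = el_pdf(0.0) / el_sf(0.0)
h5 = el_pdf(5.0) / el_sf(5.0)
```

The hazard starts above b and decays toward b, which is the decreasing-failure-rate behaviour that motivates the distribution.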

Relevance: 90.00%

Publisher:

Abstract:

The standard analyses of survival data involve the assumption that survival and censoring are independent. When censoring and survival are related, the phenomenon is known as informative censoring. This paper examines the effects of an informative censoring assumption on the hazard function and the estimated hazard ratio provided by the Cox model.

The limiting factor in all analyses of informative censoring is the problem of non-identifiability. Non-identifiability implies that it is impossible to distinguish a situation in which censoring and death are independent from one in which there is dependence. However, it is possible that informative censoring occurs. A review of the literature indicates how others have approached the problem and covers the relevant theoretical background.

Three models are examined in detail. The first model uses conditionally independent marginal hazards to obtain the unconditional survival function and hazards. The second model is based on the Gumbel Type A method for combining independent marginal distributions into bivariate distributions using a dependency parameter. Finally, a formulation based on a compartmental model is presented and its results described. For the latter two approaches, the resulting hazard is used in the Cox model in a simulation study.

The unconditional survival distribution formed from the first model involves dependency, but the crude hazard resulting from this unconditional distribution is identical to the marginal hazard, and inferences based on the hazard are valid. The hazard ratios formed from two distributions following the Gumbel Type A model are biased by a factor that depends on the amount of censoring in the two populations and on the strength of the dependency between death and censoring in the two populations. The Cox model estimates this biased hazard ratio.

In general, the hazard resulting from the compartmental model is not constant, even if the individual marginal hazards are constant, unless censoring is non-informative. The hazard ratio tends to a specific limit. Methods of evaluating situations in which informative censoring is present are described, and the relative utility of the three models examined is discussed.

Relevance: 90.00%

Publisher:

Abstract:

Defining the pharmacokinetics of drugs in overdose is complicated. Deliberate self-poisoning is generally impulsive and associated with poor accuracy in the dose history. In addition, early blood samples are rarely collected to characterize the whole plasma concentration-time profile, and the effect of decontamination on the pharmacokinetics is uncertain. The aim of this study was to explore a fully Bayesian methodology for population pharmacokinetic analysis of data arising from deliberate self-poisoning with citalopram. Prior information on the pharmacokinetic parameters was elicited from 14 published studies on citalopram taken in therapeutic doses. The data set included concentration-time data from 53 patients studied after 63 citalopram overdose events (dose range: 20-1700 mg). Activated charcoal was administered between 0.5 and 4 h after 17 overdose events. The clinical investigator graded the veracity of the patients' dosing history on a 5-point ordinal scale. Inclusion of informative priors stabilised the pharmacokinetic model, and the population mean values could be estimated well. There were no indications of non-linear clearance after excessive doses. The final model included an estimated uncertainty in the dose amount, which in a simulation study was shown not to affect the model's ability to characterise the effects of activated charcoal. The effect of activated charcoal on clearance and bioavailability was pronounced, resulting in a 72% increase and a 22% decrease, respectively. These findings suggest that charcoal administration is potentially beneficial after citalopram overdose. The methodology explored seems promising for exploring the dose-exposure relationship in toxicological settings.
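The practical meaning of the reported charcoal effects can be sketched with a one-compartment oral-absorption model, applying the two effects as multipliers (clearance up 72%, bioavailability down 22%). The base parameter values, dose, and model choice are hypothetical round numbers, not the study's estimates or its actual population model.

```python
import numpy as np

# One-compartment oral sketch with the reported charcoal effects applied as
# multipliers. Dose, CL, V, ka and F are assumed illustrative values.
dose = 400.0                              # mg, illustrative overdose amount
CL, V, ka, F = 20.0, 1000.0, 1.0, 0.8     # L/h, L, 1/h, unitless (assumed)

def conc(t, charcoal):
    cl = CL * (1.72 if charcoal else 1.0)   # +72% clearance with charcoal
    f = F * (0.78 if charcoal else 1.0)     # -22% bioavailability
    ke = cl / V
    return f * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Total exposure (AUC) in this model is F * dose / CL, so the combined effect
# of charcoal on exposure is the ratio 0.78 / 1.72, roughly a 55% reduction.
auc_no = F * dose / CL
auc_charcoal = (0.78 * F) * dose / (1.72 * CL)
exposure_ratio = auc_charcoal / auc_no
```

Under these assumptions the charcoal-treated concentration curve sits below the untreated one at every time point, which is consistent with the abstract's conclusion that charcoal is potentially beneficial.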

Relevance: 90.00%

Publisher:

Abstract:

Uplift models are statistical models that were initially developed in the field of marketing. They involve two groups, a control group and a treatment group, both compared with respect to a binary response variable (the possible responses being "yes" or "no"). These models aim to detect the effect of the treatment on the individuals under study. Since not all of these individuals are customers, we call them "prospects". This effect can be negative, null or positive depending on the characteristics of the individuals making up the different groups. This thesis aims to compare uplift models from a Bayesian and from a frequentist point of view. The uplift models used in practice are those of Lo (2002) and Lai (2004), which were originally formulated from a frequentist point of view. In this thesis, the Bayesian approach is therefore applied and compared with the frequentist approach. Simulations are carried out on data generated with logistic regressions. The parameters of these regressions are then estimated with Monte Carlo simulations in the Bayesian approach and compared with those obtained in the frequentist approach. Parameter estimation has a direct influence on the model's ability to correctly predict the effect of the treatment on individuals. We consider three prior distributions for Bayesian parameter estimation, chosen to be non-informative: the transformed beta, the Cauchy and the normal distributions. Over the course of the study, we observe that Bayesian methods have a real positive impact on the targeting of individuals in small samples.
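Lo's (2002) interaction approach, one of the two uplift models named above, can be sketched as a single logistic regression with a treatment indicator and treatment-covariate interactions; the uplift score is the difference between the predicted response probabilities under treatment and under control. The synthetic data and the plain gradient-ascent fit below are illustrative, not the thesis's simulation design.

```python
import numpy as np

# Sketch of the interaction approach to uplift: one logistic regression with
# treatment indicator T and T-by-covariate interactions; uplift is
# p(y=1 | x, T=1) - p(y=1 | x, T=0). Synthetic data, illustrative fit.
rng = np.random.default_rng(5)
n = 4000
x = rng.normal(size=n)
T = rng.integers(0, 2, size=n).astype(float)
logit = -0.5 + 0.8 * x + T * (0.6 - 0.4 * x)   # treatment effect varies with x
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

def design(x, T):
    return np.column_stack([np.ones_like(x), x, T, T * x])

X = design(x, T)
w = np.zeros(4)
for _ in range(3000):                          # plain gradient ascent on the
    p = 1.0 / (1.0 + np.exp(-X @ w))           # logistic log-likelihood
    w += 0.1 * X.T @ (y - p) / n

x_new = np.array([-1.0, 0.0, 1.0])
p1 = 1.0 / (1.0 + np.exp(-design(x_new, np.ones(3)) @ w))
p0 = 1.0 / (1.0 + np.exp(-design(x_new, np.zeros(3)) @ w))
uplift = p1 - p0                               # predicted incremental response
```

The Bayesian variant studied in the thesis replaces the point fit of w with a posterior over w under one of the three non-informative priors, yielding a posterior distribution for each prospect's uplift rather than a single score.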


Relevance: 80.00%

Publisher:

Abstract:

Diagnostic methods have been an important tool in regression analysis for detecting anomalies, such as departures from error assumptions and the presence of outliers and influential observations, in fitted models. Assuming censored data, we considered a classical analysis and a Bayesian analysis assuming non-informative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Several influence measures, such as local influence, total local influence of an individual, local influence on predictions and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach is illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
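The cure-fraction structure mentioned above can be sketched with the standard mixture formulation S_pop(t) = pi + (1 - pi) * S0(t), where pi is the cured proportion and S0 the survival function of the susceptible group. The exponential choice for S0 and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Mixture cure-fraction survival: a proportion pi_cure never experiences the
# event, so the population survival plateaus at pi_cure instead of reaching
# zero. Exponential susceptible-group survival is an illustrative choice.
pi_cure = 0.3            # assumed cured fraction
rate = 0.5               # assumed event rate for susceptibles

def s_pop(t):
    return pi_cure + (1 - pi_cure) * np.exp(-rate * t)

t = np.linspace(0.0, 50.0, 6)
surv = s_pop(t)          # decreasing curve starting at 1
plateau = s_pop(1e6)     # long-run survival tends to the cure fraction
```

The plateau is what distinguishes cure models from ordinary survival models, and it is also why influence diagnostics matter here: a few long-term survivors can dominate the estimate of pi.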

Relevance: 80.00%

Publisher:

Abstract:

In this study we elicit agents' prior information sets regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects relative to their pre-existing information levels can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
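The three updating rules being tested can be sketched in a normal-normal setting: the Bayesian posterior mean is a precision-weighted average of prior and signal, incomplete updating moves only a fraction of the way toward that posterior, and non-informative updating stays at the prior. All numbers, including the partial-updating weight gamma, are illustrative.

```python
import numpy as np

# Normal-normal sketch of the three updating rules compared in the study.
# mu0/tau0: prior mean and sd over the good's quality; signal/tau_s: the
# information treatment and its sd; gamma: assumed partial-updating weight.
mu0, tau0 = 10.0, 2.0
signal, tau_s = 16.0, 1.0
gamma = 0.5

w = (1 / tau_s**2) / (1 / tau0**2 + 1 / tau_s**2)   # weight on the signal
bayes_mean = (1 - w) * mu0 + w * signal             # full Bayesian update
partial_mean = mu0 + gamma * (bayes_mean - mu0)     # incomplete updating
no_update = mu0                                     # non-informative updating
```

The experiment's hypothesis tests amount to asking which of these three predicted posterior beliefs (and the WTP they imply) best matches respondents' elicited posterior information sets.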

Relevance: 80.00%

Publisher:

Abstract:

This paper studies a model of announcements by a privately informed government about the future state of economic activity in an economy subject to recurrent shocks and with distortions due to income taxation. Although transparent communication would ex ante be desirable, we find that even a benevolent government may ex post be non-informative, in an attempt to countervail the tax distortion with a "second best" compensating distortion in information. This result provides a rationale for independent national statistical offices committed to truthful communication. We also find that whether inequality in income distribution favors or harms government transparency depends on the labor supply elasticity.

Relevance: 80.00%

Publisher:

Abstract:

We propose to analyze shapes as "compositions" of distances in Aitchison geometry, as an alternate and complementary tool to classical shape analysis, especially when size is non-informative.

Shapes are typically described by the location of user-chosen landmarks. However, the shape, considered as invariant under scaling, translation, mirroring and rotation, does not uniquely define the location of landmarks. A simple approach is to use distances between landmarks instead of the locations of the landmarks themselves. Distances are positive numbers defined up to joint scaling, a mathematical structure quite similar to compositions. The shape fixes only ratios of distances. Perturbations correspond to relative changes in the size of subshapes and of aspect ratios. The power transform increases the expression of the shape by increasing distance ratios. In analogy to subcompositional consistency, results should not depend too much on the choice of distances, because different subsets of the pairwise distances of landmarks uniquely define the shape.

Various compositional analysis tools can be applied to sets of distances directly, or after minor modifications concerning the singularity of the covariance matrix, and they yield results with direct interpretations in terms of shape changes. The remaining problem is that not all sets of distances correspond to a valid shape. Nevertheless, interpolated or predicted shapes can be back-transformed by multidimensional scaling (when all pairwise distances are used) or free geodetic adjustment (when sufficiently many distances are used).
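The core idea above, that pairwise distances are defined only up to joint scaling, can be sketched with a centred log-ratio (clr) transform, the standard coordinate system of Aitchison geometry: jointly rescaling all distances leaves the clr coordinates unchanged, so size drops out and only shape information remains. The triangle below is an illustrative example.

```python
import numpy as np

# Treat the pairwise landmark distances of a shape as a composition and map
# them to centred log-ratio (clr) coordinates, which are invariant under a
# joint rescaling of all distances. Landmark coordinates are illustrative.
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])

# all pairwise distances: the "composition" describing the shape
i, j = np.triu_indices(len(landmarks), k=1)
d = np.linalg.norm(landmarks[i] - landmarks[j], axis=1)

def clr(v):
    g = np.exp(np.mean(np.log(v)))   # geometric mean of the parts
    return np.log(v / g)             # clr coordinates sum to zero

shape1 = clr(d)
shape2 = clr(2.5 * d)   # the same shape at two and a half times the size
```

Because shape1 and shape2 coincide, any compositional analysis run on clr coordinates sees only the shape; perturbation and the power transform then act as the natural "shift" and "scaling" operations in this geometry.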