20 results for Bayesian hypothesis testing
at Scottish Institute for Research in Economics (SIRE), United Kingdom
Abstract:
In this study we elicit agents’ prior information set regarding a public good, exogenously give information treatments to survey respondents and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects relative to their pre-existing information levels can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
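To fix ideas, here is a minimal sketch (not the authors' design) contrasting the three updating rules on a conjugate beta-binomial belief; the prior parameters, signal counts, and partial-updating weight are all hypothetical.

```python
# Illustrative sketch, not the experimental design: three updating rules
# applied to a beta-binomial belief about some attribute of a public good.
from scipy import stats

prior = stats.beta(a=2, b=2)        # elicited prior belief (hypothetical)
signals, trials = 8, 10             # favourable signals in the information treatment

# Non-informative updating: the posterior simply equals the prior.
post_none = prior

# Full Bayesian updating: conjugate beta-binomial update.
post_bayes = stats.beta(a=2 + signals, b=2 + (trials - signals))

# Incomplete updating: only a fraction lam of the information is incorporated.
lam = 0.5
post_partial = stats.beta(a=2 + lam * signals, b=2 + lam * (trials - signals))

for name, d in [("none", post_none), ("bayes", post_bayes), ("partial", post_partial)]:
    print(f"{name:8s} posterior mean = {d.mean():.3f}")
```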
Abstract:
This paper considers trade secrecy as an appropriation mechanism in the context of the US Economic Espionage Act (EEA) of 1996. We examine the relation between trade secret intensity and firm size, using a cross section of 95 court cases. The paper builds on extant work in three respects. First, we create a unique body of evidence, using EEA prosecutions from 1996 to 2008. Second, we use an econometric approach to measurement, estimation and hypothesis testing. This allows us to test the robustness of findings comprehensively. Third, we focus on objectively measured valuations, instead of the subjective, self-reported values used elsewhere. We find a stable, robust value for the elasticity of trade secret intensity with respect to firm size, which indicates that a 10% reduction in firm size leads to a 7% increase in trade secret intensity. We find that this result is not sensitive to industrial sector, sample trimming, or functional form.
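As a back-of-envelope check of the headline number: with an elasticity of roughly -0.7, a 10% fall in size maps into about a 7% rise in intensity.

```python
# Back-of-envelope check of the reported elasticity (about -0.7):
# intensity ~ size**elasticity, so a 10% fall in size raises intensity by ~7%.
elasticity = -0.7
size_change = -0.10                        # 10% reduction in firm size
intensity_change = (1 + size_change) ** elasticity - 1
print(f"{intensity_change:+.1%}")          # exact: about +7.7%; the log-approximation gives the quoted +7%
```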
Abstract:
We develop tests of the proportional hazards assumption, with respect to a continuous covariate, in the presence of unobserved heterogeneity with unknown distribution at the individual observation level. The proposed tests are especially powerful against ordered alternatives useful for modeling non-proportional hazards situations. In contrast to the case when the heterogeneity distribution is known up to finite dimensional parameters, the null hypothesis for the current problem is similar to a test for absence of covariate dependence. However, the two testing problems differ in the nature of the relevant alternative hypotheses. We develop tests for both problems against ordered alternatives. Small sample performance and an application to real data highlight the usefulness of the framework and methodology.
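For readers who want a concrete starting point, a standard off-the-shelf proportional hazards check (not the paper's test, which additionally handles unobserved heterogeneity of unknown distribution) can be run with the lifelines package; the dataset and options below are purely illustrative.

```python
# Off-the-shelf PH check with lifelines on its bundled Rossi recidivism data;
# this is the conventional test, not the heterogeneity-robust test of the paper.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from lifelines.statistics import proportional_hazard_test

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
result = proportional_hazard_test(cph, df, time_transform="rank")
print(result.summary)       # per-covariate test statistics and p-values
```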
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models which permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows for the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
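The core idea of a drifting cointegrating space can be sketched in a few lines: a cointegrating vector whose direction follows a (normalized) random walk. The step size and dimensions below are hypothetical.

```python
# Sketch of the idea (hypothetical parameter values): a cointegrating vector
# whose direction drifts as a random walk, so the cointegrating *space* evolves.
import numpy as np

rng = np.random.default_rng(0)
T = 200
beta = np.empty((T, 2))
beta[0] = [1.0, -1.0]
for t in range(1, T):
    step = beta[t - 1] + 0.02 * rng.standard_normal(2)   # random-walk innovation
    beta[t] = step / np.linalg.norm(step)                # normalize: only the space matters

# Direction of the cointegrating space at the start and end of the sample.
angle = np.degrees(np.arctan2(beta[:, 1], beta[:, 0]))
print(f"direction drifts from {angle[0]:.1f} deg to {angle[-1]:.1f} deg")
```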
Abstract:
We report experiments designed to test between Nash equilibria that are stable and unstable under learning. The “TASP” (Time Average of the Shapley Polygon) gives a precise prediction about what happens when there is divergence from equilibrium under fictitious-play-like learning processes. We use two 4 × 4 games, each with a unique mixed Nash equilibrium; one is stable and one is unstable under learning. Both games are versions of Rock-Paper-Scissors with the addition of a fourth strategy, Dumb. Nash equilibrium places a weight of 1/2 on Dumb in both games, but the TASP places no weight on Dumb when the equilibrium is unstable. We also vary the level of monetary payoffs, with higher payoffs predicted to increase instability. We find that the high payoff unstable treatment differs from the others. Frequency of Dumb is lower and play is further from Nash than in the other treatments. That is, we find support for the comparative statics prediction of learning theory, although the frequency of Dumb is substantially greater than zero in the unstable treatments.
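The learning dynamics at issue are easy to simulate. The sketch below runs fictitious play on plain Rock-Paper-Scissors (textbook payoffs, not the experimental 4 × 4 games) and reports time-averaged play, the object that TASP-style reasoning studies.

```python
# Fictitious play on plain Rock-Paper-Scissors (textbook payoffs, not the
# experimental games): each side best-responds to the empirical frequencies.
import numpy as np

A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])   # row player's payoffs
counts = np.array([1.0, 0.0, 0.0])                   # empirical counts of past play
play_counts = np.zeros(3)

for _ in range(100_000):
    br = np.argmax(A @ (counts / counts.sum()))      # best reply to the empirical mix
    play_counts[br] += 1
    counts[br] += 1                                  # symmetric self-play shares one history

print("time-averaged play:", play_counts / play_counts.sum())  # near (1/3, 1/3, 1/3)
```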
Abstract:
We test the real interest rate parity hypothesis using data for the G7 countries over the period 1970-2008. Our contribution is twofold. First, we utilize the ARDL bounds approach of Pesaran et al. (2001), which allows us to overcome uncertainty about the order of integration of real interest rates. Second, we test for structural breaks in the underlying relationship using the multiple structural breaks test of Bai and Perron (1998, 2003). Our results indicate significant parameter instability and suggest that, despite the advances in economic and financial integration, real interest rate parity has not fully recovered from a breakdown in the 1980s.
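A minimal sketch of the unrestricted error-correction regression behind the bounds test, on simulated data: the joint F-statistic on the lagged levels terms is what gets compared with the Pesaran et al. I(0)/I(1) critical bounds (not computed here).

```python
# Minimal sketch (simulated data) of the error-correction regression underlying
# the bounds test; the critical bounds themselves are tabulated, not computed here.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 300
x = np.cumsum(rng.standard_normal(T))      # I(1) foreign real rate (simulated)
y = 0.8 * x + rng.standard_normal(T)       # cointegrated domestic rate (simulated)

dy, dx = np.diff(y), np.diff(x)
X = sm.add_constant(np.column_stack([y[:-1], x[:-1], dx]))   # lagged levels + short-run term
res = sm.OLS(dy, X).fit()
print(res.f_test("x1 = 0, x2 = 0"))        # joint test on the lagged levels terms
```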
Abstract:
This paper uses an infinite hidden Markov model (IHMM) to analyze U.S. inflation dynamics with a particular focus on the persistence of inflation. The IHMM is a Bayesian nonparametric approach to modeling structural breaks. It allows for an unknown number of breakpoints and is a flexible and attractive alternative to existing methods. We find a clear structural break during the recent financial crisis. Prior to that, inflation persistence was high and fairly constant.
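The IHMM itself is involved, but the notion of time-varying persistence is easy to illustrate: below, a rolling AR(1) coefficient (a crude gauge, not the paper's method) tracks a simulated inflation series with one break in persistence.

```python
# Crude persistence gauge, not the paper's IHMM: a rolling AR(1) coefficient
# on a simulated series whose persistence falls once, at t = 200.
import numpy as np

rng = np.random.default_rng(2)
rho = np.r_[np.full(200, 0.9), np.full(200, 0.3)]   # true persistence path
pi = np.zeros(400)
for t in range(1, 400):
    pi[t] = rho[t] * pi[t - 1] + rng.standard_normal()

window = 60
for t0 in (50, 150, 250, 330):
    seg = pi[t0:t0 + window]
    est = np.polyfit(seg[:-1], seg[1:], 1)[0]       # OLS slope of pi_t on pi_{t-1}
    print(f"window at t={t0}: AR(1) coefficient ~ {est:.2f}")
```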
Abstract:
In recent years there has been increasing concern about the identification of parameters in dynamic stochastic general equilibrium (DSGE) models. Given the structure of DSGE models it may be difficult to determine whether a parameter is identified. For the researcher using Bayesian methods, a lack of identification may not be evident since the posterior of a parameter of interest may differ from its prior even if the parameter is unidentified. We show that this can be the case even if the priors assumed on the structural parameters are independent. We suggest two Bayesian identification indicators that do not suffer from this difficulty and are relatively easy to compute. The first applies to DSGE models where the parameters can be partitioned into those that are known to be identified and the rest where it is not known whether they are identified. In such cases the marginal posterior of an unidentified parameter will equal the posterior expectation of the prior for that parameter conditional on the identified parameters. The second indicator is more generally applicable and considers the rate at which the posterior precision gets updated as the sample size (T) is increased. For identified parameters the posterior precision rises with T, whilst for an unidentified parameter its posterior precision may be updated but its rate of update will be slower than T. This result assumes that the identified parameters are √T-consistent, but similar differential rates of updates for identified and unidentified parameters can be established in the case of super-consistent estimators. These results are illustrated by means of simple DSGE models.
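The second indicator can be seen in a toy conjugate-normal example: with y_i ~ N(theta_1 + theta_2, 1) and independent N(0, 1) priors, only the sum is identified, so the posterior precision of theta_1 is updated once but plateaus, while the precision of an identified location parameter grows linearly in T.

```python
# Toy conjugate-normal illustration of the rate-based indicator (not a DSGE model):
# y_i ~ N(theta1 + theta2, 1) with independent N(0, 1) priors identifies only the sum.
import numpy as np

for T in (10, 100, 1000, 10000):
    prec_identified = 1 + T                    # y_i ~ N(theta, 1): precision grows like T
    post_cov = np.linalg.inv(np.eye(2) + T * np.ones((2, 2)))
    prec_theta1 = 1 / post_cov[0, 0]           # updated from 1, but plateaus near 2
    print(f"T={T:6d}  identified: {prec_identified:7d}   unidentified: {prec_theta1:6.3f}")
```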
Abstract:
This paper considers the instrumental variable regression model when there is uncertainty about the set of instruments, exogeneity restrictions, the validity of identifying restrictions and the set of exogenous regressors. This uncertainty can result in a huge number of models. To avoid statistical problems associated with standard model selection procedures, we develop a reversible jump Markov chain Monte Carlo algorithm that allows us to do Bayesian model averaging. The algorithm is very flexible and can be easily adapted to analyze any of the different priors that have been proposed in the Bayesian instrumental variables literature. We show how to calculate the probability of any relevant restriction (e.g. the posterior probability that over-identifying restrictions hold) and discuss diagnostic checking using the posterior distribution of discrepancy vectors. We illustrate our methods in a returns-to-schooling application.
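With only a handful of candidate instruments the model space can be enumerated directly, which conveys the flavor of the model averaging without the reversible jump machinery; the sketch below weights first-stage instrument sets by BIC-approximated marginal likelihoods on simulated data.

```python
# Drastic simplification of RJMCMC model averaging: enumerate first-stage
# instrument sets and weight them by BIC-approximated marginal likelihoods.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
z = rng.standard_normal((n, 3))                       # candidate instruments (simulated)
x = z[:, 0] + 0.5 * z[:, 1] + rng.standard_normal(n)  # the third instrument is irrelevant

bics = {}
for k in range(1, 4):
    for subset in itertools.combinations(range(3), k):
        bics[subset] = sm.OLS(x, sm.add_constant(z[:, subset])).fit().bic
best = min(bics.values())
weights = {s: np.exp(-0.5 * (b - best)) for s, b in bics.items()}  # BIC ~ -2 log marg. lik.
total = sum(weights.values())
for s, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(s, f"{w / total:.3f}")                      # approximate posterior model probabilities
```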
Abstract:
We propose a nonlinear heterogeneous panel unit root test for testing the null hypothesis of unit root processes against the alternative that allows a proportion of units to be generated by globally stationary ESTAR processes and a remaining non-zero proportion to be generated by unit root processes. The proposed test is simple to implement and accommodates cross sectional dependence. We show that the distribution of the test statistic is free of nuisance parameters as (N, T) → ∞. Monte Carlo simulation shows that our test has correct size and, under the hypothesis that data are generated by globally stationary ESTAR processes, has better power than the recent test proposed in Pesaran [2007]. Various applications are provided.
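The single-series building block resembles a KSS-type regression (Kapetanios, Shin and Snell): under a globally stationary ESTAR alternative, Δy_t = δy_{t-1}^3 + e_t with δ < 0, and the t-ratio on δ is the statistic. Below is a sketch on simulated ESTAR data; the panel pooling and cross-sectional dependence corrections are omitted.

```python
# KSS-type nonlinear unit root regression on one simulated ESTAR series;
# the paper's panel test pools such statistics across units.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 500
y = np.zeros(T)
for t in range(1, T):                     # mean reversion strengthens far from zero
    y[t] = y[t - 1] * np.exp(-0.5 * y[t - 1] ** 2) + rng.standard_normal()

res = sm.OLS(np.diff(y), y[:-1] ** 3).fit()
print("t-ratio on y_{t-1}^3:", res.tvalues[0])   # strongly negative under stationarity
```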
Abstract:
This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases, factor methods have been traditionally used but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs and examine their forecast performance using a US macroeconomic data set containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Our empirical results show the importance of using forecast metrics which use the entire predictive density, instead of using only point forecasts.
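Minnesota-type shrinkage, which is what makes large Bayesian VARs feasible, reduces in its simplest form to a ridge-style posterior mean that pulls VAR coefficients toward a random-walk prior; the two-variable, one-lag sketch below uses hypothetical hyperparameters.

```python
# Simplest ridge form of Minnesota-style shrinkage (hypothetical hyperparameters,
# two variables, one lag, no constant): posterior mean shrinks toward a random walk.
import numpy as np

rng = np.random.default_rng(5)
T, n = 120, 2
Y = np.cumsum(rng.standard_normal((T, n)), axis=0)   # two persistent simulated series
X, y = Y[:-1], Y[1:]                                 # one lag

prior_mean = np.eye(n)                               # random-walk prior on own lags
lam = 0.2                                            # overall tightness
prior_prec = np.eye(n) / lam**2
B_post = np.linalg.solve(X.T @ X + prior_prec, X.T @ y + prior_prec @ prior_mean)
print(np.round(B_post, 2))                           # near the identity under a tight prior
```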
Abstract:
The Conservative Party emerged from the 2010 United Kingdom General Election as the largest single party, but their support was not geographically uniform. In this paper, we estimate a hierarchical Bayesian spatial probit model that tests for the presence of regional voting effects. This model allows for the estimation of individual region-specific effects on the probability of Conservative Party success, incorporating information on the spatial relationships between the regions of the mainland United Kingdom. After controlling for a range of important covariates, we find that these spatial relationships are significant and that our individual region-specific effects estimates provide additional evidence of North-South variations in Conservative Party support.
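Stripped of the spatial prior, the model reduces to a probit with region-specific intercepts; the sketch below fits that simplification on simulated data (the paper's hierarchical Bayesian version additionally ties neighbouring regions' effects together).

```python
# Non-spatial simplification on simulated data: probit with region intercepts.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_regions, n = 8, 2000
region = rng.integers(0, n_regions, size=n)
region_effect = rng.normal(0, 0.5, size=n_regions)   # latent region-specific effects
x = rng.standard_normal(n)                           # an individual-level covariate
latent = 0.8 * x + region_effect[region] + rng.standard_normal(n)
win = (latent > 0).astype(float)                     # success indicator

dummies = np.eye(n_regions)[region]                  # one intercept per region
res = sm.Probit(win, np.column_stack([x, dummies])).fit(disp=0)
print(np.round(res.params[1:], 2))                   # estimated region effects
```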
Abstract:
This paper considers Bayesian variable selection in regressions with a large number of possibly highly correlated macroeconomic predictors. I show that acknowledging the correlation structure in the predictors can improve forecasts over existing popular Bayesian variable selection algorithms.
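The underlying dilution problem is easy to demonstrate: with two nearly collinear predictors, posterior inclusion mass gets split between them when models are weighted without regard to the correlation structure. The toy enumeration below uses BIC-approximated model weights on simulated data.

```python
# Toy demonstration (not the paper's algorithm): two nearly collinear predictors
# split posterior inclusion mass under correlation-blind model weighting.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)      # nearly collinear with x1
x3 = rng.standard_normal(n)                  # irrelevant and uncorrelated
y = x1 + rng.standard_normal(n)
X = np.column_stack([x1, x2, x3])

bics = {}
for k in range(4):
    for s in itertools.combinations(range(3), k):
        Z = sm.add_constant(X[:, s]) if s else np.ones((n, 1))
        bics[s] = sm.OLS(y, Z).fit().bic
best = min(bics.values())
w = {s: np.exp(-0.5 * (b - best)) for s, b in bics.items()}
tot = sum(w.values())
for j in range(3):
    pip = sum(wt for s, wt in w.items() if j in s) / tot
    print(f"P(include x{j + 1} | data) ~ {pip:.2f}")   # mass split between x1 and x2
```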
Abstract:
In contrast to previous results combining all ages, we find positive effects of comparison income on happiness for the under 45s, and negative effects for those over 45. In the BHPS these coefficients are several times the magnitude of own income effects. In the GSOEP they cancel to give no effect of comparison income on life satisfaction in the whole sample, when controlling for fixed effects and time-in-panel, and with flexible age-group dummies. The residual age-happiness relationship is hump-shaped in all three countries. Results are consistent with a simple life cycle model of relative income under uncertainty.
Abstract:
This paper proposes a novel way of testing exogeneity of an explanatory variable without any parametric assumptions in the presence of a "conditional" instrumental variable. A testable implication is derived that if an explanatory variable is endogenous, the conditional distribution of the outcome given the endogenous variable is not independent of its instrumental variable(s). The test rejects the null hypothesis with probability one if the explanatory variable is endogenous, and it detects alternatives converging to the null at rate n^{-1/2}. We propose a consistent nonparametric bootstrap test to implement this testable implication. We show that the proposed bootstrap test can be asymptotically justified in the sense that it produces asymptotically correct size under the null of exogeneity, and it has unit power asymptotically. Our nonparametric test can be applied to cases in which the outcome is generated by an additively non-separable structural relation or in which the outcome is discrete, which has not been studied in the literature.
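A drastically simplified version of the idea (not the paper's statistic): if X is endogenous, the residual variation of Y given X still co-moves with the instrument. Below, the conditional mean is estimated crudely by binning and dependence is assessed with a permutation analogue of the bootstrap test; all functional forms are hypothetical.

```python
# Simplified illustration: residualize Y on X by decile-bin means, then test
# whether the residuals co-move with the instrument Z via a permutation test.
import numpy as np

rng = np.random.default_rng(8)
n = 1000
z = rng.standard_normal(n)
u = rng.standard_normal(n)                   # unobservable driving endogeneity
x = z + u
y = x + u + 0.5 * rng.standard_normal(n)     # X endogenous through u

edges = np.quantile(x, np.linspace(0, 1, 11)[1:-1])
bins = np.digitize(x, edges)
resid = y - np.array([y[bins == b].mean() for b in bins])   # Y minus crude E[Y|X]
stat = abs(np.corrcoef(resid, z)[0, 1])

perm = [abs(np.corrcoef(resid, rng.permutation(z))[0, 1]) for _ in range(999)]
print("p-value:", (1 + sum(p >= stat for p in perm)) / 1000)  # small => reject exogeneity
```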