941 results for Bayesian rationality
Abstract:
We forecast quarterly US inflation based on the generalized Phillips curve using econometric methods which incorporate dynamic model averaging. These methods not only allow for coefficients to change over time, but also allow for the entire forecasting model to change over time. We find that dynamic model averaging leads to substantial forecasting improvements over simple benchmark regressions and more sophisticated approaches such as those using time-varying coefficient models. We also provide evidence on which sets of predictors are relevant for forecasting in each period.
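The model-probability recursion that dynamic model averaging typically relies on can be sketched as follows; the forgetting factor, function name, and numbers below are illustrative assumptions rather than the paper's own code.

```python
import numpy as np

def dma_update_weights(prev_probs, pred_likelihoods, alpha=0.99):
    """One step of dynamic-model-averaging weight updating.

    prev_probs       : model probabilities from the previous period (sum to 1)
    pred_likelihoods : one-step-ahead predictive densities p_k(y_t | y^{t-1})
    alpha            : forgetting factor in (0, 1]; alpha = 1 recovers
                       standard Bayesian model averaging
    """
    # Prediction step: flatten the previous probabilities toward uniform.
    predicted = prev_probs ** alpha
    predicted /= predicted.sum()

    # Updating step: reweight by each model's predictive density.
    updated = predicted * pred_likelihoods
    return updated / updated.sum()

# Example with three hypothetical Phillips-curve specifications.
probs = np.array([1 / 3, 1 / 3, 1 / 3])
likes = np.array([0.8, 1.2, 0.5])   # hypothetical predictive densities
print(dma_update_weights(probs, likes))
```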
Abstract:
Agents have two forecasting models, one consistent with the unique rational expectations equilibrium, another that assumes a time-varying parameter structure. When agents use Bayesian updating to choose between models in a self-referential system, we find that learning dynamics lead to selection of one of the two models. However, there are parameter regions for which the non-rational forecasting model is selected in the long-run. A key structural parameter governing outcomes measures the degree of expectations feedback in Muth's model of price determination.
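As a rough illustration of recursive Bayesian updating over two competing forecasting models (the paper embeds this in a self-referential equilibrium; here the error variance and forecast errors are hypothetical):

```python
import numpy as np

def update_model_prob(prob_m1, err_m1, err_m2, sigma2=1.0):
    """Recursive Bayesian updating of the probability placed on model 1.

    prob_m1        : current probability assigned to model 1
    err_m1, err_m2 : latest forecast errors of the two models
    sigma2         : assumed forecast-error variance (hypothetical)
    """
    like1 = np.exp(-0.5 * err_m1**2 / sigma2)
    like2 = np.exp(-0.5 * err_m2**2 / sigma2)
    post1 = prob_m1 * like1
    post2 = (1.0 - prob_m1) * like2
    return post1 / (post1 + post2)

p = 0.5
for e1, e2 in [(0.2, 0.9), (0.1, 1.1), (0.4, 0.3)]:  # hypothetical errors
    p = update_model_prob(p, e1, e2)
print(p)
```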
Abstract:
This paper introduces a new model of trend (or underlying) inflation. In contrast to many earlier approaches, which allow for trend inflation to evolve according to a random walk, ours is a bounded model which ensures that trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. The bounds placed on trend inflation mean that standard econometric methods for estimating linear Gaussian state space models cannot be used and we develop a posterior simulation algorithm for estimating the bounded trend inflation model. In an empirical exercise with CPI inflation we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model.
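A minimal sketch of the bounded random-walk idea, assuming fixed bounds and simple rejection sampling (the paper's actual estimation uses a purpose-built posterior simulator; all numbers here are hypothetical):

```python
import numpy as np

def simulate_bounded_trend(tau0, lower, upper, sigma, periods, rng=None):
    """Simulate a trend-inflation path restricted to [lower, upper].

    Each step draws a normal innovation and rejects draws that would push
    trend inflation outside the interval, i.e. a truncated random walk.
    """
    rng = rng or np.random.default_rng(0)
    path = [tau0]
    for _ in range(periods):
        while True:
            candidate = path[-1] + rng.normal(scale=sigma)
            if lower <= candidate <= upper:
                path.append(candidate)
                break
    return np.array(path)

# Hypothetical bounds of 0% to 5% for annualised trend inflation.
print(simulate_bounded_trend(tau0=2.0, lower=0.0, upper=5.0,
                             sigma=0.2, periods=10))
```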
Abstract:
In this paper we develop methods for estimation and forecasting in large time-varying parameter vector autoregressive models (TVP-VARs). To overcome computational constraints with likelihood-based estimation of large systems, we rely on Kalman filter estimation with forgetting factors. We also draw on ideas from the dynamic model averaging literature and extend the TVP-VAR so that its dimension can change over time. A final extension lies in the development of a new method for estimating, in a time-varying manner, the parameter(s) of the shrinkage priors commonly used with large VARs. These extensions are operationalized through the use of forgetting factor methods and are, thus, computationally simple. An empirical application involving forecasting inflation, real output, and interest rates demonstrates the feasibility and usefulness of our approach.
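The forgetting-factor device can be sketched for a single time-varying-parameter regression; the forgetting factor and measurement variance below are hypothetical, and the paper's multivariate implementation is more involved:

```python
import numpy as np

def ff_kalman_step(beta, P, x, y, lam=0.99, h=1.0):
    """One forgetting-factor Kalman filter step for y_t = x_t' beta_t + e_t.

    lam : forgetting factor; dividing P by lam stands in for the usual
          state-noise covariance Q and keeps the recursion simple
    h   : measurement-error variance (assumed known here)
    """
    # Prediction: inflate the state covariance instead of adding Q.
    P_pred = P / lam
    # Update.
    f = x @ P_pred @ x + h                 # predictive variance of y_t
    K = P_pred @ x / f                     # Kalman gain
    beta_new = beta + K * (y - x @ beta)
    P_new = P_pred - np.outer(K, x @ P_pred)
    return beta_new, P_new

beta = np.zeros(2)
P = np.eye(2)
x = np.array([1.0, 0.5])                   # hypothetical regressors
beta, P = ff_kalman_step(beta, P, x, y=1.3)
print(beta)
```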
Abstract:
Incorporating adaptive learning into macroeconomics requires assumptions about how agents incorporate their forecasts into their decision-making. We develop a theory of bounded rationality that we call finite-horizon learning. This approach generalizes the two existing benchmarks in the literature: Euler-equation learning, which assumes that consumption decisions are made to satisfy the one-step-ahead perceived Euler equation; and infinite-horizon learning, in which consumption today is determined optimally from an infinite-horizon optimization problem with given beliefs. In our approach, agents hold a finite forecasting/planning horizon. We find for the Ramsey model that the unique rational expectations equilibrium is E-stable at all horizons. However, transitional dynamics can differ significantly depending upon the horizon.
Abstract:
We develop methods for Bayesian inference in vector error correction models which are subject to a variety of switches in regime (e.g. Markov switches in regime or structural breaks). An important aspect of our approach is that we allow both the cointegrating vectors and the number of cointegrating relationships to change when the regime changes. We show how Bayesian model averaging or model selection methods can be used to deal with the high-dimensional model space that results. Our methods are used in an empirical study of the Fisher effect.
Abstract:
Employing an endogenous growth model with human capital, this paper explores how productivity shocks in the goods and human capital producing sectors contribute to explaining aggregate fluctuations in output, consumption, investment and hours. Given the importance of accounting for both the dynamics and the trends in the data not captured by the theoretical growth model, we introduce a vector error correction model (VECM) of the measurement errors and estimate the model’s posterior density function using Bayesian methods. To contextualize our findings with those in the literature, we also assess whether the endogenous growth model or the standard real business cycle model better explains the observed variation in these aggregates. In addressing these issues we contribute to both the methods of analysis and the ongoing debate regarding the effects of innovations to productivity on macroeconomic activity.
Abstract:
We model a boundedly rational agent who suffers from limited attention. The agent considers each feasible alternative with a given (unobservable) probability, the attention parameter, and then chooses the alternative that maximises a preference relation within the set of considered alternatives. We show that this random choice rule is the only one for which the impact of removing an alternative on the choice probability of any other alternative is asymmetric and menu independent. Both the preference relation and the attention parameters are identified uniquely by stochastic choice data.
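Under the model as described, an alternative is chosen when it is noticed and every preferred alternative in the menu goes unnoticed; a small sketch of that choice probability (ignoring the case in which nothing is noticed, with hypothetical attention parameters):

```python
def choice_probability(alternative, menu, attention, preference):
    """Probability that `alternative` is chosen from `menu` under a
    limited-attention random choice rule of this kind.

    attention  : dict mapping each alternative to its attention parameter
    preference : list of alternatives ordered from best to worst
    """
    better = [b for b in menu
              if preference.index(b) < preference.index(alternative)]
    prob = attention[alternative]
    for b in better:
        prob *= 1.0 - attention[b]   # every better option must go unnoticed
    return prob

# Hypothetical attention parameters and the preference x > y > z.
gamma = {"x": 0.3, "y": 0.8, "z": 0.9}
pref = ["x", "y", "z"]
print(choice_probability("y", {"x", "y", "z"}, gamma, pref))
# 0.8 * (1 - 0.3) = 0.56
```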
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
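A toy contrast between dynamic model selection and dynamic model averaging, with hypothetical model probabilities and point forecasts:

```python
import numpy as np

def dms_forecast(model_probs, model_forecasts):
    """Dynamic model selection: at each date use only the forecast of the
    model with the highest current probability (dynamic model averaging
    would instead take the probability-weighted average)."""
    best = int(np.argmax(model_probs))
    return model_forecasts[best], best

probs = np.array([0.2, 0.5, 0.3])          # hypothetical model probabilities
forecasts = np.array([1.8, 2.1, 2.4])      # hypothetical point forecasts
print(dms_forecast(probs, forecasts))      # selects the second model
```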
Abstract:
Most of the literature estimating DSGE models for monetary policy analysis assumes that policy follows a simple rule. In this paper we allow policy to be described by various forms of optimal policy: commitment, discretion and quasi-commitment. We find that, even after allowing for Markov switching in shock variances, the inflation target and/or rule parameters, the data-preferred description of policy is that the US Fed operates under discretion with a marked increase in conservatism after the 1970s. Parameter estimates are similar to those obtained under simple rules, except that the degree of habits is significantly lower and the prevalence of cost-push shocks greater. Moreover, we find that the greatest welfare gains from the ‘Great Moderation’ arose from the reduction in the variances in shocks hitting the economy, rather than increased inflation aversion. However, much of the high inflation of the 1970s could have been avoided had policy makers been able to commit, even without adopting stronger anti-inflation objectives. More recently the Fed appears to have temporarily relaxed policy following the 1987 stock market crash, and has lost, without regaining, its post-Volcker conservatism following the bursting of the dot-com bubble in 2000.
Abstract:
We analyze and quantify co-movements in real effective exchange rates while considering the regional location of countries. More specifically, using the dynamic hierarchical factor model (Moench et al. (2011)), we decompose exchange rate movements into several latent components: a worldwide factor, two regional factors, and country-specific elements. Then, we provide evidence that the worldwide common factor is closely related to monetary policies in large advanced countries, while regional common factors tend to be captured by those in the rest of the countries in a region. However, a substantial proportion of the variation in the real exchange rates is reported to be country-specific; even in Europe, country-specific movements exceed worldwide and regional common factors.
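Assuming mutually orthogonal components, as in a hierarchical factor model, the country-specific share of exchange-rate variation can be read off a simple variance decomposition; the component paths below are simulated placeholders, not the paper's data:

```python
import numpy as np

def variance_shares(world, regional, idiosyncratic):
    """Share of a series' variance attributable to each latent component,
    assuming the components are mutually orthogonal."""
    parts = {"world": np.var(world),
             "regional": np.var(regional),
             "country-specific": np.var(idiosyncratic)}
    total = sum(parts.values())
    return {k: v / total for k, v in parts.items()}

rng = np.random.default_rng(1)
# Hypothetical component paths for one country's real exchange rate.
shares = variance_shares(0.3 * rng.standard_normal(200),
                         0.5 * rng.standard_normal(200),
                         1.0 * rng.standard_normal(200))
print(shares)
```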
Abstract:
This paper revisits the problem of adverse selection in the insurance market of Rothschild and Stiglitz [28]. We propose a simple extension of the game-theoretic structure in Hellwig [14] under which Nash-type strategic interaction between the informed customers and the uninformed firms always results in a particular separating equilibrium. The equilibrium allocation is unique and Pareto-efficient in the interim sense subject to incentive-compatibility and individual rationality. In fact, it is the unique neutral optimum in the sense of Myerson [22].
Abstract:
An important disconnect in the news-driven view of the business cycle formalized by Beaudry and Portier (2004) is the lack of agreement between different methodologies (VAR and DSGE) over the empirical plausibility of this view. We argue that this disconnect can be largely resolved once we augment a standard DSGE model with a financial channel that provides amplification to news shocks. Both methodologies suggest news shocks to the future growth prospects of the economy to be significant drivers of U.S. business cycles in the post-Greenspan era (1990-2011), explaining as much as 50% of the forecast error variance in hours worked at cyclical frequencies.
Abstract:
In this study we elicit agents’ prior information set regarding a public good, exogenously give information treatments to survey respondents and subsequently elicit willingness to pay for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects relative to their pre-existing information levels can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
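One way to contrast the three updating rules is the normal-prior, normal-signal case, in which the Bayesian posterior mean is a precision-weighted average of prior and signal; the kappa parameter and numbers below are illustrative assumptions, not the paper's specification:

```python
def normal_bayes_update(prior_mean, prior_var, signal, signal_var, kappa=1.0):
    """Posterior mean for a normal prior combined with a normal signal.

    kappa = 1 gives the standard Bayesian update, kappa = 0 keeps the
    prior unchanged (non-informative updating), and 0 < kappa < 1 mimics
    incomplete updating by shrinking the weight placed on the signal.
    """
    bayes_weight = prior_var / (prior_var + signal_var)
    weight = kappa * bayes_weight
    return (1.0 - weight) * prior_mean + weight * signal

# Hypothetical prior belief and information signal about the good.
print(normal_bayes_update(0.4, 0.04, 0.9, 0.02, kappa=1.0))  # Bayesian
print(normal_bayes_update(0.4, 0.04, 0.9, 0.02, kappa=0.5))  # incomplete
print(normal_bayes_update(0.4, 0.04, 0.9, 0.02, kappa=0.0))  # no updating
```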
Abstract:
We model the choice behaviour of an agent who suffers from imperfect attention. We define inattention axiomatically through preference over menus and endowed alternatives: an agent is inattentive if it is better to be endowed with an alternative a than to be allowed to pick a from a menu in which a is the best alternative. This property and vNM rationality on the domain of menus and alternatives imply that the agent notices each alternative with a given menu-dependent probability (attention parameter) and maximises a menu-independent utility function over the alternatives he notices. Preference for flexibility restricts the model to menu-independent attention parameters as in Manzini and Mariotti [19]. Our theory explains anomalies (e.g. the attraction and compromise effect) that the Random Utility Model cannot accommodate.