937 results for Bayesian smoothing
Abstract:
The objective of this study is the empirical identification of the monetary policy rules pursued in individual EU countries before and after the launch of the European Monetary Union. In particular, we estimate an augmented version of the Taylor rule (TR) for 25 EU countries over two periods (1992-1998, 1999-2006). Single-equation estimation methods are used to identify the policy rules of individual central banks, while a dynamic panel setting is employed for the rule of the European Central Bank. We find that most central banks did follow some interest rate rule, but its form usually differed from the original TR (which proposes that the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries are the presence of interest rate smoothing as well as a response to the foreign interest rate. Any response to domestic macroeconomic variables is missing from the rules of countries with inflexible exchange rate regimes, whose rules consist of mimicking foreign interest rates. While we find a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices is generally negligible. The Taylor principle (the response of the interest rate to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) is confirmed only in large economies and in economies troubled by unsustainable inflation rates. Finally, deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with actual turbulent periods).
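The smoothing-augmented rule described in this abstract can be illustrated with a minimal single-equation estimate. The sketch below is our own OLS illustration, not the paper's code; the variable names and the long-run Taylor-principle check are assumptions about how such a rule is typically fitted:

```python
import numpy as np

def fit_taylor_rule(i, pi, gap, i_foreign):
    """OLS fit of an augmented Taylor rule with interest rate smoothing:
        i_t = c + rho * i_{t-1} + b_pi * pi_t + b_y * gap_t + b_f * i*_t + e_t
    Returns the coefficient vector and the long-run inflation response
    b_pi / (1 - rho); the Taylor principle requires this to exceed unity."""
    y = i[1:]
    X = np.column_stack([np.ones(len(y)), i[:-1], pi[1:], gap[1:], i_foreign[1:]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rho = beta[1]
    longrun_pi = beta[2] / (1.0 - rho)
    return beta, longrun_pi
```

With smoothing present (rho close to one), the short-run inflation coefficient understates the policy response, which is why the long-run ratio is the quantity checked against unity.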
Abstract:
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
Abstract:
Most of the literature estimating DSGE models for monetary policy analysis assumes that policy follows a simple rule. In this paper we allow policy to be described by various forms of optimal policy - commitment, discretion and quasi-commitment. We find that, even after allowing for Markov switching in shock variances, the inflation target and/or rule parameters, the data-preferred description of policy is that the US Fed operates under discretion with a marked increase in conservatism after the 1970s. Parameter estimates are similar to those obtained under simple rules, except that the degree of habits is significantly lower and the prevalence of cost-push shocks greater. Moreover, we find that the greatest welfare gains from the ‘Great Moderation’ arose from the reduction in the variances of shocks hitting the economy, rather than from increased inflation aversion. However, much of the high inflation of the 1970s could have been avoided had policy makers been able to commit, even without adopting stronger anti-inflation objectives. More recently, the Fed appears to have temporarily relaxed policy following the 1987 stock market crash, and has lost, without regaining, its post-Volcker conservatism following the bursting of the dot-com bubble in 2000.
Abstract:
This paper revisits the argument that the stabilisation bias that arises under discretionary monetary policy can be reduced if policy is delegated to a policymaker with redesigned objectives. We study four delegation schemes: price level targeting, interest rate smoothing, speed limits and straight conservatism. These can all increase social welfare in models with a unique discretionary equilibrium. We investigate how these schemes perform in a model with capital accumulation where uniqueness does not necessarily apply. We discuss how multiplicity arises and demonstrate that no delegation scheme is able to eliminate all potential bad equilibria. Price level targeting has two interesting features. It can create a new equilibrium that is welfare dominated, but it can also alter equilibrium stability properties and make coordination on the best equilibrium more likely.
Abstract:
We analyze and quantify co-movements in real effective exchange rates while considering the regional location of countries. More specifically, using the dynamic hierarchical factor model (Moench et al. (2011)), we decompose exchange rate movements into several latent components; worldwide and two regional factors as well as country-specific elements. Then, we provide evidence that the worldwide common factor is closely related to monetary policies in large advanced countries while regional common factors tend to be captured by those in the rest of the countries in a region. However, a substantial proportion of the variation in the real exchange rates is reported to be country-specific; even in Europe country-specific movements exceed worldwide and regional common factors.
Abstract:
An important disconnect in the news-driven view of the business cycle formalized by Beaudry and Portier (2004) is the lack of agreement between the VAR and DSGE methodologies over the empirical plausibility of this view. We argue that this disconnect can be largely resolved once we augment a standard DSGE model with a financial channel that provides amplification to news shocks. Both methodologies suggest that news shocks to the future growth prospects of the economy are significant drivers of U.S. business cycles in the post-Greenspan era (1990-2011), explaining as much as 50% of the forecast error variance in hours worked at cyclical frequencies.
Abstract:
In this study we elicit agents’ prior information set regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects relative to their pre-existing information levels can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
Abstract:
An expanding literature articulates the view that Taylor rules are helpful in predicting exchange rates. In a changing world, however, Taylor rule parameters may be subject to structural instabilities, for example during the Global Financial Crisis. This paper forecasts exchange rates using such Taylor rules with Time Varying Parameters (TVP) estimated by Bayesian methods. In core out-of-sample results, we improve upon a random walk benchmark for at least half, and for as many as eight out of ten, of the currencies considered. This contrasts with a constant-parameter Taylor rule model that yields a more limited improvement upon the benchmark. In further results, Purchasing Power Parity and Uncovered Interest Rate Parity TVP models beat a random walk benchmark, implying our methods have some generality in exchange rate prediction.
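The core mechanics of a TVP forecasting regression can be sketched with a Kalman filter under random-walk coefficient dynamics. This is an illustrative simplification of the Bayesian TVP machinery the abstract describes; the state and measurement variances `q` and `r` are hypothetical tuning values, not the paper's estimates:

```python
import numpy as np

def tvp_forecast(y, X, q=0.01, r=1.0):
    """One-step-ahead forecasts from a regression with random-walk
    time-varying coefficients, computed by a Kalman filter:
        y_t = x_t' b_t + e_t,   b_t = b_{t-1} + v_t.
    q is the coefficient innovation variance, r the measurement variance."""
    T, k = X.shape
    b = np.zeros(k)        # filtered coefficient mean
    P = np.eye(k)          # filtered coefficient covariance
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        P = P + q * np.eye(k)           # predict: coefficients drift
        preds[t] = x @ b                # one-step-ahead forecast
        S = x @ P @ x + r               # forecast error variance
        K = P @ x / S                   # Kalman gain
        b = b + K * (y[t] - preds[t])   # update coefficient mean
        P = P - np.outer(K, x @ P)      # update coefficient covariance
    return preds
```

Forecast accuracy would then be compared against the naive random walk forecast (yesterday's value), the benchmark used in the abstract.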
Abstract:
This paper develops and estimates a model of demand for environmental public goods which allows consumers to learn about their preferences through consumption experiences. We develop a theoretical model of Bayesian updating, perform comparative statics on the model, and show how the theoretical model can be consistently incorporated into a reduced-form econometric model. We then estimate the model using data collected for two environmental goods. We find that the theoretical prediction that additional experience makes consumers more certain about their preferences, in both mean and variance, is supported in each case.
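The comparative static highlighted in this abstract, that more experience shrinks preference uncertainty, falls out directly of conjugate normal-normal updating. The function below is a generic textbook sketch of that mechanism, not the paper's model; names and values are illustrative:

```python
def bayes_update(mu0, var0, signal, noise_var):
    """Conjugate normal-normal update of beliefs about a preference parameter.
    Each consumption experience delivers a noisy 'signal'; posterior precision
    is the sum of prior and signal precisions, so posterior variance strictly
    shrinks with every experience."""
    precision = 1.0 / var0 + 1.0 / noise_var
    var1 = 1.0 / precision
    mu1 = var1 * (mu0 / var0 + signal / noise_var)
    return mu1, var1
```

Because the posterior variance depends only on the precisions, repeated experiences make the consumer monotonically more certain, which is the pattern the estimation then tests in the data.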
Abstract:
We develop methods for Bayesian model averaging (BMA) or selection (BMS) in Panel Vector Autoregressions (PVARs). Our approach allows us to select between or average over all possible combinations of restricted PVARs where the restrictions involve interdependencies between and heterogeneities across cross-sectional units. The resulting BMA framework can find a parsimonious PVAR specification, thus dealing with overparameterization concerns. We use these methods in an application involving the euro area sovereign debt crisis and show that our methods perform better than alternatives. Our findings contradict a simple view of the sovereign debt crisis which divides the euro zone into groups of core and peripheral countries and worries about financial contagion within the latter group.
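Averaging over or selecting among many restricted specifications, as the abstract describes for PVARs, ultimately reduces to turning model evidence into posterior model probabilities. The snippet below is a generic BIC-based approximation of such weights, offered only as a simplified illustration of the BMA/BMS idea, not the paper's actual PVAR machinery:

```python
import numpy as np

def bma_weights(bics):
    """Approximate posterior model probabilities from BIC values.
    exp(-BIC/2) approximates the marginal likelihood; subtracting the
    minimum BIC first keeps the exponentials numerically stable."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))
    return w / w.sum()
```

Model selection (BMS) keeps the single highest-weight specification, while model averaging (BMA) combines all specifications using these weights.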
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy maker preferences and shock volatilities. This reveals that there have been several changes in Euro area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro-area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro-area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro-area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
Abstract:
We analyse the role of time variation in coefficients and other sources of uncertainty in exchange rate forecasting regressions. Our techniques incorporate the notion that the relevant set of predictors, and their corresponding weights, change over time. We find that predictive models which allow for sudden, rather than smooth, changes in coefficients significantly beat the random walk benchmark in an out-of-sample forecasting exercise. Using an innovative variance decomposition scheme, we identify uncertainty in coefficient estimation, and uncertainty about the precise degree of coefficient variability, as the main factors hindering the models' forecasting performance. The uncertainty regarding the choice of predictor is small.
Abstract:
Time-lapse crosshole ground-penetrating radar (GPR) data, collected while infiltration occurs, can provide valuable information regarding the hydraulic properties of the unsaturated zone. In particular, the stochastic inversion of such data provides estimates of parameter uncertainties, which are necessary for hydrological prediction and decision making. Here, we investigate the effect of different infiltration conditions on the stochastic inversion of time-lapse, zero-offset-profile GPR data. Inversions are performed using a Bayesian Markov-chain-Monte-Carlo methodology. Our results clearly indicate that considering data collected during a forced infiltration test helps to better refine soil hydraulic properties compared to data collected under natural infiltration conditions.
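The Markov-chain-Monte-Carlo machinery behind such a stochastic inversion can be illustrated with a minimal random-walk Metropolis sampler. This generic sketch samples from any user-supplied log posterior; the step size and iteration counts are arbitrary illustrative choices, not details from the study:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a scalar parameter.
    Proposes Gaussian steps and accepts with probability
    min(1, exp(log_post(prop) - log_post(current)))."""
    rng = np.random.default_rng(seed)
    theta = theta0
    lp = log_post(theta)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain[i] = theta                           # record current state
    return chain
```

In a real inversion, `log_post` would combine a prior over soil hydraulic parameters with the likelihood of the observed GPR travel times, and the chain's spread would quantify the parameter uncertainties the abstract refers to.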