25 results for "Switching time"
Abstract:
This paper examines the empirical relationship between trade openness and economic growth in India over the period 1970-2010. Trade openness is a multi-dimensional concept, and hence measures of both trade barriers and trade volumes are used as proxies for openness. The estimation results from the vector autoregressive (VAR) method suggest that growth in trade volumes accelerates economic growth in the case of India. We find no evidence from our analysis that trade barriers lower growth.
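The VAR step behind such an analysis can be illustrated with a minimal OLS sketch of a VAR(1). The two-variable toy system below (think of the variables as, say, openness and growth) and its coefficients are invented for illustration; they are not the paper's Indian series or estimates.

```python
import numpy as np

def fit_var1(Y):
    """OLS fit of a VAR(1): Y_t = c + A @ Y_{t-1} + e_t.
    Y is (T, k); returns intercept c (k,) and coefficient matrix A (k, k)."""
    X = np.column_stack([np.ones(len(Y) - 1), Y[:-1]])  # constant + first lag
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)       # (k+1, k) stacked coefficients
    return B[0], B[1:].T

# Hypothetical stationary two-variable system:
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.2],
                   [0.1, 0.4]])
Y = np.zeros((200, 2))
for t in range(1, 200):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.1, size=2)
c_hat, A_hat = fit_var1(Y)
```

With 200 observations the OLS estimates recover the simulated dynamics closely; inference on whether trade volumes Granger-cause growth would then be built on top of such estimates.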
Abstract:
We use a dynamic multipath general-to-specific algorithm to capture structural instability in the link between euro area sovereign bond yield spreads against Germany and their underlying determinants over the period January 1999 – August 2011. We offer new evidence of significant heterogeneity across countries, both in terms of the risk factors determining spreads over time and in terms of the magnitude of their impact on spreads. Our findings suggest that the relationship between euro area sovereign risk and the underlying fundamentals is strongly time-varying, turning from inactive to active at the onset of the global financial crisis and intensifying further during the sovereign debt crisis. As a general rule, the set of financial and macro determinants of spreads in the euro area is rather unstable but generally becomes richer and stronger in significance as the crisis evolves.
Abstract:
This paper is an investigation into the dynamics of asset markets with adverse selection a la Akerlof (1970). The particular question asked is: can market failure at some later date precipitate market failure at an earlier date? The answer is yes: there can be "contagious illiquidity" from the future back to the present. The mechanism works as follows. If the market is expected to break down in the future, then agents holding assets they know to be lemons (assets with low returns) will be forced to hold them for longer - they cannot quickly resell them. As a result, the effective difference in payoff between a lemon and a good asset is greater. But it is known from the static Akerlof model that the greater the payoff differential between lemons and non-lemons, the more likely is the market to break down. Hence market failure in the future is more likely to lead to market failure today. Conversely, if the market is not anticipated to break down in the future, assets can be readily sold and hence an agent discovering that his or her asset is a lemon can quickly jettison it. In effect, there is little difference in payoff between a lemon and a good asset. The logic of the static Akerlof model then runs the other way: the small payoff differential is unlikely to lead to market breakdown today. The conclusion of the paper is that the nature of today's market - liquid or illiquid - hinges critically on the nature of tomorrow's market, which in turn depends on the next day's, and so on. The tail wags the dog.
Abstract:
This paper investigates dynamic completeness of financial markets in which the underlying risk process is a multi-dimensional Brownian motion and the risky securities' dividends are geometric Brownian motions. A sufficient condition, that the instantaneous dispersion matrix of the relative dividends is non-degenerate, was established recently in the literature for single-commodity, pure-exchange economies with many heterogeneous agents, under the assumption that the intermediate flows of all dividends, utilities, and endowments are analytic functions. For the current setting, a different mathematical argument, in which analyticity is not needed, shows that a slightly weaker condition suffices for general pricing kernels. That is, dynamic completeness obtains irrespective of preferences, endowments, and other structural elements (such as whether or not the budget constraints involve only pure exchange, or whether or not the time horizon is finite with lump-sum dividends available on the terminal date).
Abstract:
Using survey expectations data and Markov-switching models, this paper evaluates the characteristics and evolution of investors' forecast errors about the yen/dollar exchange rate. Since our model is derived from the uncovered interest rate parity (UIRP) condition and our data cover a period of low interest rates, this study is also related to the forward premium puzzle and the currency carry trade strategy. We obtain the following results. First, with the same forecast horizon, exchange rate forecasts are homogeneous among different industry types, but within the same industry, exchange rate forecasts differ if the forecast horizon is different. In particular, investors tend to undervalue the future exchange rate over long-term forecast horizons, whereas in the short run they tend to overvalue it. Second, while forecast errors are found to be partly driven by interest rate spreads, evidence against the UIRP is provided regardless of the forecast horizon; the forward premium puzzle becomes more significant in shorter-term forecast errors. Consistent with this finding, our coefficients on interest rate spreads provide indirect evidence of the yen carry trade over only a short-term forecast horizon. Furthermore, the carry trade seems to be active when there is a clear indication that the interest rate will be low in the future.
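The filtering step at the core of such Markov-switching models can be sketched for a two-regime Gaussian mean-switching specification. The regime means, volatilities, transition matrix, and toy series below are made up for illustration; they are not the paper's estimates.

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered probabilities Pr(S_t = s | y_1..y_t) for a two-state
    Gaussian Markov-switching mean model: y_t | S_t = s ~ N(mu[s], sigma[s]^2).
    P[i, j] = Pr(S_t = j | S_{t-1} = i)."""
    probs = np.zeros((len(y), 2))
    pred = np.array([P[1, 0], P[0, 1]])            # chain's stationary distribution
    pred = pred / pred.sum()
    for t, yt in enumerate(y):
        lik = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        post = pred * lik
        probs[t] = post / post.sum()               # filtering step (Bayes update)
        pred = probs[t] @ P                        # one-step-ahead prediction
    return probs

# Toy series that starts in a "low" regime and jumps to a "high" one:
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(-1.0, 0.3, 50), rng.normal(1.0, 0.3, 50)])
probs = hamilton_filter(y, mu=np.array([-1.0, 1.0]),
                        sigma=np.array([0.3, 0.3]),
                        P=np.array([[0.95, 0.05], [0.05, 0.95]]))
```

The filtered probabilities track the simulated regime switch at the midpoint of the sample; in the paper's setting the observable would be a forecast-error series rather than this synthetic one.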
Abstract:
We compare three methods for the elicitation of time preferences in an experimental setting: the Becker-DeGroot-Marschak procedure (BDM); the second price auction; and the multiple price list format. The first two methods have rarely been used to elicit time preferences. All methods used are perfectly equivalent from a decision-theoretic point of view, and they should induce the same 'truthful' revelation in dominant strategies. In spite of this, we find that framing does matter: the money discount rates elicited with the multiple price list tend to be higher than those elicited with the other two methods. In addition, our results shed some light on attitudes towards time, and they permit a broad classification of subjects depending on how the size of the elicited values varies with the time horizon.
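The multiple price list logic can be sketched as follows. The payoff amounts and the one-year delay below are invented for illustration; the paper's actual lists may differ.

```python
def implied_rate(sooner, later, delay_years):
    """Annual rate r at which a subject is indifferent between `sooner`
    now and `later` after `delay_years`: sooner = later / (1 + r)**delay."""
    return (later / sooner) ** (1.0 / delay_years) - 1.0

def switch_point_bounds(sooner, later_options, delay_years, switch_row):
    """In a multiple price list the delayed amount rises down the rows.
    If a subject takes `sooner` on rows 0..switch_row-1 and the delayed
    payment from `switch_row` onwards, the elicited discount rate is
    bracketed by the rates implied by the two rows around the switch."""
    lo = implied_rate(sooner, later_options[switch_row - 1], delay_years)
    hi = implied_rate(sooner, later_options[switch_row], delay_years)
    return lo, hi

# Hypothetical list: 100 today versus a larger amount in one year,
# with the subject switching to the delayed payment at row 2 (110):
lo, hi = switch_point_bounds(100.0, [102.0, 105.0, 110.0, 120.0, 150.0],
                             delay_years=1.0, switch_row=2)
# The subject's annual discount rate is bracketed between 5% and 10%.
```

Interval elicitation of this kind is what makes the comparison with the BDM and second-price-auction point estimates non-trivial.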
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy maker preferences and shock volatilities. This reveals that there have been several changes in Euro area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro-area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro-area. As a result, although both 'good luck' and 'good policy' played a role in the moderation of inflation and output volatility in the Euro-area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
Abstract:
Time-varying parameter (TVP) models have enjoyed increasing popularity in empirical macroeconomics. However, TVP models are parameter-rich and risk over-fitting unless the dimension of the model is small. Motivated by this concern, this paper proposes several time-varying dimension (TVD) models in which the dimension of the model can change over time, allowing the model to automatically choose a more parsimonious TVP representation, or to switch between different parsimonious representations. Our TVD models all fall in the category of dynamic mixture models. We discuss the properties of these models and present methods for Bayesian inference. An application involving US inflation forecasting illustrates and compares the different TVD models. We find our TVD approaches exhibit better forecasting performance than several standard benchmarks and shrink towards parsimonious specifications.
Abstract:
In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows the coefficient on each predictor to be: i) time-varying, ii) constant over time, or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
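The state-space backbone of such TVP regressions can be sketched with a plain Kalman filter for random-walk coefficients. This deliberately omits the Bayesian Lasso shrinkage layer the paper develops; the variances q and r and the simulated data are illustrative assumptions.

```python
import numpy as np

def tvp_kalman_filter(y, X, q, r):
    """Kalman filter for y_t = X_t @ beta_t + e_t,  beta_t = beta_{t-1} + u_t,
    with Var(e_t) = r and Var(u_t) = q * I.  Returns (T, k) filtered betas."""
    T, k = X.shape
    beta, P = np.zeros(k), np.eye(k)      # loose prior on the initial state
    betas = np.zeros((T, k))
    for t in range(T):
        P = P + q * np.eye(k)             # predict the state covariance
        x = X[t]
        S = x @ P @ x + r                 # one-step forecast variance
        K = P @ x / S                     # Kalman gain
        beta = beta + K * (y[t] - x @ beta)
        P = P - np.outer(K, x) @ P        # covariance update
        betas[t] = beta
    return betas

# Simulated regression with (nearly) constant coefficients:
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=300)
betas = tvp_kalman_filter(y, X, q=1e-6, r=0.01)
```

With q near zero the filter collapses towards recursive least squares, which is exactly the "constant over time" category; the paper's contribution is letting the data decide, per coefficient, between this, genuine time variation, and exact zero.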