833 results for US credibility
Abstract:
Employing the financial accelerator (FA) model of Bernanke, Gertler and Gilchrist (1999), enhanced to include a shock to the FA mechanism, we construct and study shocks to the efficiency of the financial sector in post-war US business cycles. We find that financial shocks are very tightly linked with the onset of recessions, more so than TFP or monetary shocks. The financial shock invariably remains contractionary for some time after recessions have ended. The shock accounts for a large part of the variance of GDP and is strongly negatively correlated with the external finance premium. Second-moment comparisons across variants of the model with and without a (stochastic) FA mechanism suggest that the stochastic FA model helps us understand the data.
Abstract:
This paper investigates the relationship between short-term and long-term inflation expectations in the US and the UK, with a focus on inflation pass-through (i.e. how changes in short-term expectations affect long-term expectations). An econometric methodology is used which allows us to uncover the relationship between inflation pass-through and various explanatory variables. We relate our empirical results to theoretical models of anchored, contained and unmoored inflation expectations. For neither country do we find anchored or unmoored inflation expectations. For the US, contained inflation expectations are found. For the UK, our findings are not consistent with the specific model of contained inflation expectations presented here, but are consistent with a broader view of expectations being constrained by the existence of an inflation target.
Abstract:
The Environmental Kuznets Curve (EKC) hypothesis focuses on the argument that rising prosperity will eventually be accompanied by falling pollution levels as a result of one or more of three factors: (1) structural change in the economy; (2) demand for environmental quality increasing at a more-than-proportional rate; (3) technological progress. Here, we focus on the third of these. In particular, energy efficiency is commonly regarded as a key element of climate policy in terms of achieving reductions in economy-wide CO2 emissions over time. However, a growing literature suggests that improvements in energy efficiency will lead to rebound (or backfire) effects that partially (or wholly) offset energy savings from efficiency improvements. Where efficiency improvements are aimed at the production side of the economy, the net impact of increased efficiency in any input to production will depend on the combination and relative strength of substitution, output/competitiveness, composition and income effects that occur in response to changes in effective and actual factor prices, as well as on the structure of the economy in question, including which sectors are targeted with the efficiency improvement. In this paper we consider whether increasing labour productivity will have a more beneficial, or more predictable, impact on CO2/GDP ratios than improvements in energy efficiency. We do this by using CGE models of the Scottish regional and UK national economies to analyse the impacts of a simple 5% exogenous (and costless) increase in energy or labour augmenting technological progress.
Abstract:
In this paper we attempt an empirical application of the multi-region input-output (MRIO) method in order to enumerate the pollution content of interregional trade flows between five Mid-West regions/states in the US (Illinois, Indiana, Iowa, Michigan and Wisconsin) and the rest of the US. This allows us to analyse some very important issues concerning the nature and significance of interregional environmental spillovers within the US Mid-West and the existence of pollution 'trade balances' between states. Our results raise questions about the extent to which State-level authorities can control local emissions, since some emissions are driven by changes in demand elsewhere in the Mid-West and the US. This implies a need for policy co-ordination between national and state level authorities in the US to meet emissions reductions targets. The existence of environmental trade balances between states also raises the issue of net losses/gains in pollutants as a result of interregional trade within the US and of whether, if certain activities can be carried out using less polluting technology in one region relative to others, it is better for the US as a whole if this type of relationship exists.
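The MRIO accounting behind abstracts like this one can be sketched numerically: emissions embodied in a region's final demand are e'(I − A)⁻¹y, where A is the interregional technical-coefficients matrix, e the vector of direct emission intensities and y final demand. The figures below are illustrative made-up numbers for a two-region, two-sector economy, not data from the paper.

```python
import numpy as np

# Interregional technical coefficients: rows/cols are
# (region 1, sector 1), (1, 2), (2, 1), (2, 2). Illustrative values only.
A = np.array([
    [0.20, 0.05, 0.10, 0.00],
    [0.03, 0.15, 0.00, 0.08],
    [0.07, 0.00, 0.25, 0.04],
    [0.00, 0.06, 0.02, 0.18],
])
e = np.array([0.9, 0.3, 1.2, 0.4])   # direct CO2 per unit of gross output
L = np.linalg.inv(np.eye(4) - A)     # Leontief inverse

# Final demand located in region 1 only: emissions it induces everywhere
y1 = np.array([10.0, 8.0, 0.0, 0.0])
embodied = e * (L @ y1)              # emissions by producing sector/region
spillover = embodied[2:].sum()       # emissions occurring in region 2
```

The positive `spillover` term is the interregional environmental spillover the abstract refers to: region 1's consumption causes emissions in region 2's industries.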
Abstract:
Hong Kong’s currency is pegged to the US dollar in a currency board arrangement. In autumn 2003, the Hong Kong dollar appreciated from close to 7.80 per US dollar to 7.70, as investors feared that the currency board would be abandoned. In the wake of this appreciation, the monetary authorities revamped the one-sided currency board mechanism into a symmetric two-sided system with a narrow exchange rate band. This paper reviews the characteristics of the new currency board arrangement and embeds a theoretical soft-edge target zone model, typifying many intermediate regimes, to explain the notable achievement of speculative peace and credibility since May 2005.
Abstract:
In Evans, Guse, and Honkapohja (2008) the intended steady state is locally but not globally stable under adaptive learning, and unstable deflationary paths can arise after large pessimistic shocks to expectations. In the current paper a modified model is presented that includes a locally stable stagnation regime as a possible outcome arising from large expectation shocks. Policy implications are examined. Sufficiently large temporary increases in government spending can dislodge the economy from the stagnation regime and restore the natural stabilizing dynamics. More specific policy proposals are presented and discussed.
Abstract:
This paper presents a theoretical framework analysing the signalling channel of exchange rate interventions as an informational trigger. We develop an implicit target zone framework with learning in order to model the signalling channel. The theoretical premise of the model is that interventions convey signals that communicate information about the exchange rate objectives of the central bank. The model is used to analyse the impact of Japanese FX interventions during the period 1999–2011 on yen/US dollar dynamics.
Abstract:
In this paper we investigate the ability of a number of different ordered probit models to predict ratings based on firm-specific data on business and financial risks. We investigate models based on momentum, drift and ageing and compare them against alternatives that take into account the initial rating of the firm and its previous actual rating. Using data on US bond issuing firms rated by Fitch over the years 2000 to 2007 we compare the performance of these models in predicting the rating in-sample and out-of-sample using root mean squared errors, Diebold-Mariano tests of forecast performance and contingency tables. We conclude that initial and previous states have a substantial influence on rating prediction.
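The core mechanics of the ordered probit models this abstract compares can be illustrated with a toy prediction step: a latent index x'β is mapped into rating categories by estimated cut-points, and category probabilities are differences of normal CDFs. The coefficients, cut-points and covariates below are hypothetical, and this sketch omits the momentum, ageing and initial/previous-rating terms the paper actually studies.

```python
import math
import numpy as np

def ncdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def rating_probs(x, beta, cuts):
    """P(rating = k) = Phi(c_k - x'b) - Phi(c_{k-1} - x'b),
    with c_0 = -inf and c_K = +inf."""
    xb = float(np.dot(x, beta))
    bounds = [-math.inf] + list(cuts) + [math.inf]
    return [ncdf(hi - xb) - ncdf(lo - xb) for lo, hi in zip(bounds, bounds[1:])]

x = np.array([0.4, -1.2])      # e.g. standardised leverage, profitability
beta = np.array([-0.8, 0.6])   # hypothetical coefficients
cuts = [-1.0, 0.0, 1.0]        # three cut-points -> four rating buckets
probs = rating_probs(x, beta, cuts)
pred = int(np.argmax(probs))   # point prediction = modal category
```

In-sample and out-of-sample comparisons of the kind described would then score such predicted categories against realised ratings, e.g. via root mean squared errors over the category index.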
Abstract:
When the Bank of England (and the Federal Reserve Board) introduced their quantitative easing (QE) operations, they emphasised the effects on money and credit, but much of their empirical research on the effects of QE focuses on long-term interest rates. We use a flow of funds matrix with an independent central bank to show the implications of QE and other monetary developments, and argue that the financial crisis, the fiscal expansion and QE are likely to have constituted major exogenous shocks to money and credit in the UK which could not be digested immediately by the usual adjustment mechanisms. We present regressions of a reduced form model which considers the growth of nominal spending as determined by the growth of nominal money and other variables. These results suggest that money was not important during the Great Moderation but has had a much larger role in the period of the crisis and QE. We then use these estimates to illustrate the effects of the financial crisis and QE. We conclude that it would be useful to incorporate money and/or credit in wider macroeconometric models of the UK economy.
Abstract:
VAR methods have been used to model the inter-relationships between inflows and outflows into unemployment and vacancies using tools such as impulse response analysis. In order to investigate whether such impulse responses change over the course of the business cycle or over time, this paper uses TVP-VARs for US and Canadian data. For the US, we find interesting differences between the most recent recession and earlier recessions and expansions. In particular, we find the immediate effect of a negative shock on both inflow and outflow hazards to be larger in 2008 than in earlier times. Furthermore, the effect of this shock takes longer to decay. For Canada, we find less evidence of time-variation in impulse responses.
Abstract:
In this paper we examine the out-of-sample forecast performance of high-yield credit spreads for real-time and revised data on employment and industrial production in the US. We evaluate models using both a point forecast and a probability forecast exercise. Our main findings suggest the use of a few factors obtained by pooling information from a number of sector-specific high-yield credit spreads. This can be justified by observing that, especially for employment, there is a gain from using a principal components model fitted to high-yield credit spreads compared to the predictions produced by benchmarks, such as AR and ARDL models that use either the term spread or the aggregate high-yield spread as an exogenous regressor. Moreover, forecasts based on real-time data are generally comparable to forecasts based on revised data. JEL Classification: C22; C53; E32. Keywords: Credit spreads; Principal components; Forecasting; Real-time data.
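The diffusion-index idea in this abstract can be sketched as follows: extract a few principal components from a panel of sector-specific spreads and feed their lags into an ARDL-type forecasting regression. All data below are simulated and the specification is a minimal stand-in, not the paper's actual dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_sectors, k = 200, 12, 2
factors_true = rng.standard_normal((T, k))               # latent common factors
loadings = rng.standard_normal((k, n_sectors))
spreads = factors_true @ loadings + 0.3 * rng.standard_normal((T, n_sectors))
y = factors_true[:, 0] + 0.2 * rng.standard_normal(T)    # toy activity series

# Principal components of the demeaned spread panel via SVD
X = spreads - spreads.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U[:, :k] * S[:k]                                   # first k components

# ARDL(1) with lagged factors: y_t = a + b*y_{t-1} + c'F_{t-1} + error
Z = np.column_stack([np.ones(T - 1), y[:-1], pcs[:-1]])
coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
forecast = float(np.concatenate(([1.0, y[-1]], pcs[-1])) @ coef)
```

A forecast comparison of the kind reported would repeat this recursively out of sample and score the resulting errors against AR and spread-augmented ARDL benchmarks.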
Abstract:
Most of the literature estimating DSGE models for monetary policy analysis assumes that policy follows a simple rule. In this paper we allow policy to be described by various forms of optimal policy - commitment, discretion and quasi-commitment. We find that, even after allowing for Markov switching in shock variances, the inflation target and/or rule parameters, the data-preferred description of policy is that the US Fed operates under discretion with a marked increase in conservatism after the 1970s. Parameter estimates are similar to those obtained under simple rules, except that the degree of habits is significantly lower and the prevalence of cost-push shocks greater. Moreover, we find that the greatest welfare gains from the ‘Great Moderation’ arose from the reduction in the variances of shocks hitting the economy, rather than increased inflation aversion. However, much of the high inflation of the 1970s could have been avoided had policy makers been able to commit, even without adopting stronger anti-inflation objectives. More recently the Fed appears to have temporarily relaxed policy following the 1987 stock market crash, and has lost, without regaining, its post-Volcker conservatism following the bursting of the dot-com bubble in 2000.
Abstract:
This paper studies the behavior of a central bank that seeks to conduct policy optimally while having imperfect credibility and harboring doubts about its model. Taking the Smets-Wouters model as the central bank's approximating model, the paper's main findings are as follows. First, a central bank's credibility can have large consequences for how policy responds to shocks. Second, central banks that have low credibility can benefit from a desire for robustness, because this desire motivates the central bank to follow through on policy announcements that would otherwise not be time-consistent. Third, even relatively small departures from perfect credibility can produce important declines in policy performance. Finally, as a technical contribution, the paper develops a numerical procedure to solve the decision problem facing an imperfectly credible policymaker that seeks robustness.
Abstract:
This paper uses sequential stochastic dominance procedures to compare the joint distribution of health and income across space and time. It is the first application of which we are aware of methods for comparing multidimensional distributions of income and health using procedures that are robust to aggregation techniques. The paper's approach is more general than comparisons of health gradients and does not require the estimation of health-equivalent incomes. We illustrate the approach by contrasting Canada and the US using comparable data. Canada dominates the US over the lower bidimensional welfare distribution of health and income, though not generally in terms of the uni-dimensional distribution of health or income. The paper also finds that welfare for both Canadians and Americans has not unambiguously improved during the last decade over the joint distribution of income and health, despite the fact that the uni-dimensional distributions of income have clearly improved during that period.
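The basic dominance check underlying comparisons like these can be illustrated in one dimension: distribution A first-order dominates B if A's empirical CDF lies weakly below B's everywhere. The paper itself uses sequential, bidimensional procedures over health and income jointly; this univariate sketch on simulated samples only conveys the idea.

```python
import numpy as np

def ecdf(sample, grid):
    """Empirical CDF of `sample` evaluated at each point of `grid`."""
    return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)

def first_order_dominates(a, b, grid):
    """A dominates B if F_A(z) <= F_B(z) for every z on the grid."""
    return bool(np.all(ecdf(a, grid) <= ecdf(b, grid) + 1e-12))

rng = np.random.default_rng(1)
income_a = rng.normal(1.0, 1.0, 5000)   # richer simulated population
income_b = rng.normal(0.0, 1.0, 5000)
grid = np.linspace(-4.0, 5.0, 200)
a_dominates_b = first_order_dominates(income_a, income_b, grid)
```

Because the criterion compares whole CDFs rather than summary statistics, a verdict like "Canada dominates the US over the lower welfare distribution" is robust to the choice of aggregation within the dominated range.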