954 results for Aggregate shocks
Abstract:
In this paper we examine the out-of-sample performance of high-yield credit spreads in forecasting real-time and revised data on employment and industrial production in the US. We evaluate models using both a point forecast and a probability forecast exercise. Our main findings support the use of a few factors obtained by pooling information from a number of sector-specific high-yield credit spreads. This can be justified by observing that, especially for employment, there is a gain from using a principal components model fitted to high-yield credit spreads relative to the predictions produced by benchmarks such as an AR model and ARDL models that use either the term spread or the aggregate high-yield spread as an exogenous regressor. Moreover, forecasts based on real-time data are generally comparable to forecasts based on revised data.
JEL Classification: C22; C53; E32
Keywords: Credit spreads; Principal components; Forecasting; Real-time data.
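As a rough illustration of the factor-based forecasting approach described above, the sketch below pools a panel of sector-specific spreads into a small number of principal components and compares a factor-augmented regression with an AR benchmark; the data are simulated placeholders and the lag and horizon choices are assumptions, not the paper's specification.

```python
# Illustrative sketch only (hypothetical data and lag choices; not the paper's
# exact specification): pool sector-specific high-yield spreads into a few
# principal components and use them to forecast employment growth, against an
# AR(1) benchmark.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T, n_sectors, k, h = 240, 20, 2, 12          # sample size, sectors, factors, horizon
spreads = rng.normal(size=(T, n_sectors))    # placeholder for sector HY credit spreads
emp_growth = rng.normal(size=T)              # placeholder for employment growth

factors = PCA(n_components=k).fit_transform(spreads)

# Direct h-step-ahead regression: y_{t+h} on its own current value and the factors
X_factor = np.column_stack([emp_growth[:-h], factors[:-h]])
X_ar = emp_growth[:-h].reshape(-1, 1)
y = emp_growth[h:]

factor_model = LinearRegression().fit(X_factor, y)
ar_model = LinearRegression().fit(X_ar, y)

print("In-sample R^2, factor-augmented:", round(factor_model.score(X_factor, y), 3))
print("In-sample R^2, AR benchmark:    ", round(ar_model.score(X_ar, y), 3))
```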
Abstract:
The objective of this study is the empirical identification of the monetary policy rules pursued in individual countries of the EU before and after the launch of the European Monetary Union. In particular, we estimate an augmented version of the Taylor rule (TR) for 25 EU countries over two periods (1992-1998, 1999-2006). While single-equation estimation methods are used to identify the policy rules of individual central banks, the rule of the European Central Bank is estimated in a dynamic panel setting. We find that most central banks did follow some interest rate rule, but its form usually differed from the original TR (in which the domestic interest rate responds only to the domestic inflation rate and the output gap). Crucial features of the policy rules in many countries were interest rate smoothing and a response to the foreign interest rate. Responses to domestic macroeconomic variables were absent from the rules of countries with inflexible exchange rate regimes, whose rules consisted of mimicking foreign interest rates. While we find a response to long-term interest rates and the exchange rate in the rules of some countries, the importance of monetary growth and asset prices is generally negligible. The Taylor principle (the response of interest rates to the domestic inflation rate must exceed unity as a necessary condition for achieving price stability) is confirmed only in large economies and in economies troubled by unsustainable inflation rates. Finally, deviations of the actual interest rate from the rule-implied target rate can be interpreted as policy shocks (these deviations often coincided with turbulent periods).
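A generic augmented rule of the kind estimated here, with interest-rate smoothing and a foreign-rate term, can be written as follows; the notation and the exact set of regressors are illustrative assumptions, since the abstract does not spell out the specification:
\[
  i_t = \rho\, i_{t-1} + (1-\rho)\bigl(\alpha + \beta_\pi \pi_t + \beta_y y_t\bigr) + \beta_f\, i_t^{f} + \varepsilon_t ,
\]
where \(i_t\) is the domestic policy rate, \(\pi_t\) domestic inflation, \(y_t\) the output gap and \(i_t^{f}\) the foreign interest rate; the Taylor principle corresponds to a long-run inflation response \(\beta_\pi > 1\).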
Abstract:
Do intermediate goods help explain relative and aggregate productivity differences across countries? Three observations suggest they do: (i) intermediates are relatively expensive in poor countries; (ii) goods industries demand intermediates more intensively than service industries; (iii) goods industries are more prominent intermediate suppliers in poor countries. I build a standard multi-sector growth model accommodating these features to show that inefficient intermediate production strongly depresses aggregate labor productivity and increases the price ratio of final goods to services. When the model is applied to data, low- and high-income countries in fact reveal similar relative efficiency levels between goods and services despite clear differences in relative sectoral labor productivity. Moreover, the main empirical exercise suggests that poorer countries are substantially less efficient at producing intermediates relative to final goods and services. Closing the cross-country efficiency gap in intermediate input production would strongly narrow the aggregate labor productivity difference across countries and make final goods in poorer countries relatively cheap compared to services.
Abstract:
Most of the literature estimating DSGE models for monetary policy analysis assumes that policy follows a simple rule. In this paper we allow policy to be described by various forms of optimal policy - commitment, discretion and quasi-commitment. We find that, even after allowing for Markov switching in shock variances, the inflation target and/or rule parameters, the data-preferred description of policy is that the US Fed operates under discretion with a marked increase in conservatism after the 1970s. Parameter estimates are similar to those obtained under simple rules, except that the degree of habits is significantly lower and the prevalence of cost-push shocks greater. Moreover, we find that the greatest welfare gains from the ‘Great Moderation’ arose from the reduction in the variances of the shocks hitting the economy, rather than from increased inflation aversion. However, much of the high inflation of the 1970s could have been avoided had policy makers been able to commit, even without adopting stronger anti-inflation objectives. More recently, the Fed appears to have temporarily relaxed policy following the 1987 stock market crash, and to have lost, without regaining, its post-Volcker conservatism following the bursting of the dot-com bubble in 2000.
Abstract:
This paper develops a dynamic general equilibrium model to highlight the role of human capital accumulation by agents differentiated by skill type in the joint determination of social mobility and the skill premium. We first show that our model captures the empirical co-movement of the skill premium, the relative supply of skilled to unskilled workers and aggregate output in U.S. data for 1970-2000. We next show that endogenous social mobility and human capital accumulation are key channels through which the effects of capital tax cuts and increases in public spending on both pre- and post-college education are transmitted. In particular, social mobility creates additional incentives for agents, which enhance the beneficial effects of policy reforms. Moreover, the dynamics of human capital accumulation imply that, post reform, the skill premium is higher in the short to medium run than in the long run.
Abstract:
Bilateral oligopoly is a simple model of exchange in which a finite set of sellers seek to exchange the goods they are endowed with for money with a finite set of buyers, and no price-taking assumptions are imposed. If trade takes place via a strategic market game, bilateral oligopoly can be thought of as two linked proportional-sharing contests: in one the sellers share the aggregate bid from the buyers in proportion to their supply, and in the other the buyers share the aggregate supply in proportion to their bids. The analysis can be separated into two ‘partial games’. First, fix the aggregate bid at B; in the first partial game the sellers contest this fixed prize in proportion to their supply, and the aggregate supply in the equilibrium of this game is X̃(B). Next, fix the aggregate supply at X; in the second partial game the buyers contest this fixed prize in proportion to their bids, and the aggregate bid in the equilibrium of this game is B̃(X). The analysis of these two partial games takes into account competition within each side of the market. Equilibrium in bilateral oligopoly must take into account competition between sellers and buyers and requires, for example, B̃(X̃(B)) = B. When all traders have Cobb-Douglas preferences, X̃(B) does not depend on B and B̃(X) does not depend on X: whilst there is competition within each side of the market, there is no strategic interdependence between the sides of the market. The Cobb-Douglas assumption provides a tractable framework in which to explore the features of fully strategic trade, but it misses perhaps the most interesting feature of bilateral oligopoly, the implications of which are investigated.
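To fix notation (ours, not necessarily the authors'): a buyer bidding \(b_i\) out of the aggregate bid \(B\) receives the share \((b_i/B)\,X\) of the aggregate supply, while a seller supplying \(x_j\) out of the aggregate supply \(X\) receives the share \((x_j/X)\,B\) of the aggregate bid. Equilibrium then requires the fixed-point condition
\[
  \tilde{B}\bigl(\tilde{X}(B)\bigr) = B ,
\]
with the Cobb-Douglas case corresponding to \(\tilde{X}(\cdot)\) and \(\tilde{B}(\cdot)\) being constant functions of their arguments, so that the two sides of the market are strategically independent.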
Abstract:
Adverse selection may thwart trade between an informed seller, who knows the probability p that an item of antiquity is genuine, and an uninformed buyer, who does not know p. The buyer might not be wholly uninformed, however. Suppose he can perform a simple inspection, a test of his own: the probability that an item passes the test is g if the item is genuine, but only f < g if it is fake. Given that the buyer is no expert, his test may have little power: f may be close to g. Unfortunately, without much power, the buyer's test will not resolve the difficulty of adverse selection; gains from trade may remain unexploited. But now consider a "store", where the seller groups a number of items, perhaps all with the same quality, the same probability p of being genuine. (We show that in equilibrium the seller will choose to group items in this manner.) Now the buyer can conduct his test across a large sample, perhaps all, of a group of items in the seller's store. He can thereby assess the overall quality of these items; he can invert the aggregate of his test results to uncover the underlying p; he can form a "prior". There is thus no longer asymmetric information between seller and buyer: gains from trade can be exploited. This is our theory of retailing: by grouping items together - setting up a store - a seller is able to supply buyers with priors, as well as the items themselves. We show that the weaker the power of the buyer's test (the closer f is to g), the greater the seller's profit. So the seller has no incentive to assist the buyer - e.g., by performing her own tests on the items, or by cleaning them to reveal more about their true age. The paper ends with an analysis of which sellers should specialise in which qualities. We show that quality will be low in busy locations and high in expensive locations.
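The inversion step mentioned above can be written out explicitly (the pass-rate symbol \(\bar{q}\) is our notation): if a large group of items shares the same probability p of being genuine, the observed pass rate satisfies, approximately,
\[
  \bar{q} \approx p\,g + (1-p)\,f
  \qquad\Longrightarrow\qquad
  \hat{p} = \frac{\bar{q}-f}{g-f},
\]
which is well defined whenever g > f, that is, whenever the test has any power at all.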
Abstract:
Most of the expansion of global trade during the last three decades has been of the North-South kind - between capital-abundant developed and labour-abundant developing countries. Based on this observation, I argue that the recent growth of world trade is best understood from a factor-proportions perspective. I present novel evidence documenting that differences in capital-labour ratios across countries have increased in the wake of two shocks to the global economy: (i) the opening up of China and (ii) financial globalisation and the resulting upstream capital flows towards capital-abundant regions. I analyse their impact on specialisation and the volume of trade in a dynamic model which combines factor-proportions trade in goods with international trade in financial assets. Calibrating this model, I find that it can account for 60% of world trade growth between 1980 and 2007. It is also capable of predicting international investment patterns that are consistent with the data.
Abstract:
This paper studies the behavior of a central bank that seeks to conduct policy optimally while having imperfect credibility and harboring doubts about its model. Taking the Smets-Wouters model as the central bank's approximating model, the paper's main findings are as follows. First, a central bank's credibility can have large consequences for how policy responds to shocks. Second, central banks that have low credibility can benefit from a desire for robustness because this desire motivates the central bank to follow through on policy announcements that would otherwise not be time-consistent. Third, even relatively small departures from perfect credibility can produce important declines in policy performance. Finally, as a technical contribution, the paper develops a numerical procedure to solve the decision problem facing an imperfectly credible policymaker that seeks robustness.
Abstract:
We study a business cycle model in which a benevolent fiscal authority must determine the optimal provision of government services while lacking credibility, lump-sum taxes, and the ability to finance deficits with bonds. Households and the fiscal authority have risk-sensitive preferences. We find that outcomes are affected importantly by the household's risk sensitivity, but not by the fiscal authority's. Further, while household risk sensitivity induces a strong precautionary saving motive, which raises capital and lowers the return on assets, its effects on fluctuations and the business cycle are generally small, although more pronounced for negative shocks. Holding the stochastic steady state constant, increases in household risk sensitivity lower the risk-free rate and raise the return on equity, increasing the equity premium. Finally, although risk sensitivity has little effect on the provision of government services, it does cause the fiscal authority to lower the income tax rate. An additional contribution of this paper is to present a method for computing Markov-perfect equilibria in models where private agents and the government are risk-sensitive decision-makers.
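Risk-sensitive preferences of the kind invoked here are commonly written as an exponential certainty-equivalent recursion; the abstract does not report the exact specification, so the following is only an illustrative sketch in our own notation:
\[
  V_t = u(c_t, g_t) - \frac{\beta}{\theta}\,\log \mathbb{E}_t\!\left[\exp\bigl(-\theta\, V_{t+1}\bigr)\right],
\]
where a larger \(\theta\) places more weight on bad outcomes, and the standard expected-utility case \(V_t = u(c_t, g_t) + \beta\,\mathbb{E}_t[V_{t+1}]\) is recovered as \(\theta \to 0\).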
Abstract:
This paper analyses optimal income taxes over the business cycle under a balanced-budget restriction, for low-, middle- and high-income households. A model incorporating capital-skill complementarity in production and differential access to capital and labour markets is developed to capture the cyclical characteristics of the US economy, as well as the empirical observations on wage (skill premium) and wealth inequality. We find that the tax rate for high-income agents is optimally the least volatile and the tax rate for low-income agents the least countercyclical. In contrast, the path of optimal taxes for the middle-income group is found to be very volatile and countercyclical. We further find that the optimal response to output-enhancing capital equipment technology and spending cuts is to increase the progressivity of income taxes. Finally, in response to positive TFP shocks, taxation becomes more progressive after about two years.
Abstract:
Using the framework of Desmet and Rossi-Hansberg (forthcoming), we present a model of spatial takeoff that is calibrated using spatially disaggregated occupational data for England in c.1710. The model predicts changes in the spatial distribution of agricultural and manufacturing employment which match data for c.1817 and 1861. The model also matches a number of aggregate changes that characterise the first industrial revolution. Using counterfactual geographical distributions, we show that the initial concentration of productivity can matter for whether and when an industrial takeoff occurs. Subsidies to innovation in either sector can bring forward the date of takeoff, while subsidies to the use of land by manufacturing firms can significantly delay a takeoff because they decrease the spatial concentration of activity.
Abstract:
The purpose of this paper is to highlight the curiously circular course followed by mainstream macroeconomic thinking in recent times. Having broken from classical orthodoxy in the late 1930s via Keynes’s General Theory, over the last three or four decades the mainstream conventional wisdom, regressing rather than progressing, has come to embrace a conception of the working of the macroeconomy which is again of a classical, essentially pre-Keynesian, character. At the core of the analysis presented in the typical contemporary macro textbook is the (neo)classical model of the labour market, which represents employment as determined (given conditions of productivity) by the terms of labour supply. While it is allowed that changes in aggregate demand may temporarily affect output and employment, the contention is that in due course employment will automatically return to its ‘natural’ (full employment) level. Unemployment is therefore identified as a merely frictional or voluntary phenomenon: involuntary unemployment - in other words, persistent demand-deficient unemployment - is entirely absent from the picture. Variations in aggregate demand are understood to have a lasting impact only on the price level, not on output and employment. This in effect amounts to a return to a Pigouvian conception such as that targeted by Keynes in the General Theory. We take the view that this reversion to ideas which should by now be obsolete reflects not the discovery of logical or empirical deficiencies in Keynes’s analysis, but rather doctrinaire blindness and a failure of scholarship, on account of which essential features of Keynes’s theory have been overlooked or misrepresented. There is an urgent need for a critical appraisal of the current conventional macroeconomic wisdom.
Abstract:
Genuine Savings has emerged as a widely-used indicator of sustainable development. In this paper, we use long-term data stretching back to 1870 to undertake empirical tests of the relationship between Genuine Savings (GS) and future well-being for three countries: Britain, the USA and Germany. Our tests are based on an underlying theoretical relationship between GS and changes in the present value of future consumption. Based on both single country and panel results, we find evidence supporting the existence of a cointegrating (long run equilibrium) relationship between GS and future well-being, and fail to reject the basic theoretical result on the relationship between these two macroeconomic variables. This provides some support for the GS measure of weak sustainability. We also show the effects of modelling shocks, such as World War Two and the Great Depression.
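The underlying theoretical relationship referred to above is, in its simplest constant-discount-rate form, that genuine savings today equals the present value of subsequent changes in consumption; the continuous-time statement below is a textbook version in our notation, not necessarily the exact test equation used in the paper:
\[
  GS_t = \int_t^{\infty} e^{-r(s-t)}\,\dot{C}(s)\,ds ,
\]
so positive (negative) genuine savings should signal rising (falling) future consumption possibilities, which is the hypothesis the cointegration tests examine.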