Abstract:
We construct new series for common native language and common spoken language for 195 countries, which we use together with series for common official language and linguistic proximity in order to draw inferences about (1) the aggregate impact of all linguistic factors on bilateral trade, (2) whether the linguistic influences come from ethnicity and trust or from ease of communication, and (3) insofar as they come from ease of communication, to what extent translation and interpreters play a role. The results show that the impact of linguistic factors, taken together, is at least twice as great as the usual dummy variable for common language, based on official language, would suggest. In addition, ease of communication is far more important than ethnicity and trust. Further, insofar as ease of communication is at work, translation and interpreters are extremely important. Finally, ethnicity and trust come into play largely because of immigrants, and their influence is otherwise difficult to detect.
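To make the comparison concrete, the "usual dummy variable" claim can be read against a stylized gravity specification (a generic sketch, not the authors' exact equation; the variable names are illustrative):

\ln X_{ij} = \alpha + \beta_1 \ln GDP_i + \beta_2 \ln GDP_j + \beta_3 \ln dist_{ij} + \gamma_1 COL_{ij} + \gamma_2 CSL_{ij} + \gamma_3 CNL_{ij} + \gamma_4 LP_{ij} + \varepsilon_{ij}

where X_{ij} is bilateral trade, COL_{ij}, CSL_{ij} and CNL_{ij} indicate a common official, spoken and native language respectively, and LP_{ij} measures linguistic proximity; the conventional approach retains only the single dummy COL_{ij}.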
Abstract:
In this paper we examine the out-of-sample forecast performance of high-yield credit spreads with respect to real-time and revised data on employment and industrial production in the US. We evaluate models using both a point forecast and a probability forecast exercise. Our main findings support the use of a few factors obtained by pooling information from a number of sector-specific high-yield credit spreads. This can be justified by observing that, especially for employment, there is a gain from using a principal components model fitted to high-yield credit spreads compared to the predictions produced by benchmarks such as an AR model and ARDL models that use either the term spread or the aggregate high-yield spread as an exogenous regressor. Moreover, forecasts based on real-time data are generally comparable to forecasts based on revised data.
JEL Classification: C22; C53; E32
Keywords: Credit spreads; Principal components; Forecasting; Real-time data.
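As a rough illustration of this kind of factor-based forecast (a sketch only: the data, the function name and the lag structure are illustrative assumptions, not the authors' code):

```python
# Hypothetical sketch of a factor-based forecast: pool sector-specific
# high-yield spreads into a few principal components and use them,
# alongside the current value of the target, to forecast the next period.
# Assumes aligned, complete time series indexed the same way.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression


def factor_forecast(spreads: pd.DataFrame, target: pd.Series, n_factors: int = 2) -> float:
    """One-step-ahead forecast of `target` from principal components of `spreads`."""
    # Standardise the panel of sector-specific spreads and extract common factors.
    z = (spreads - spreads.mean()) / spreads.std()
    factors = pd.DataFrame(
        PCA(n_components=n_factors).fit_transform(z.values),
        index=spreads.index,
    )

    # Regressors at time t: the current value of the target plus the factors.
    regressors = pd.concat([target.rename("y"), factors], axis=1)
    y_next = target.shift(-1).rename("y_next")  # the value to be forecast

    sample = regressors.join(y_next).dropna()
    model = LinearRegression().fit(
        sample.drop(columns="y_next").values, sample["y_next"].values
    )

    # Forecast the next period from the most recent observed regressors.
    return float(model.predict(regressors.iloc[[-1]].values)[0])
```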
Abstract:
Do intermediate goods help explain relative and aggregate productivity differences across countries? Three observations suggest they do: (i) intermediates are relatively expensive in poor countries; (ii) goods industries demand intermediates more intensively than service industries; (iii) goods industries are more prominent intermediate suppliers in poor countries. I build a standard multi-sector growth model accommodating these features to show that inefficient intermediate production strongly depresses aggregate labor productivity and increases the price ratio of final goods to services. When the model is applied to data, low- and high-income countries in fact reveal similar relative efficiency levels between goods and services, despite clear differences in relative sectoral labor productivity. Moreover, the main empirical exercise suggests that poorer countries are substantially less efficient at producing intermediate goods relative to final goods and services. Closing the cross-country efficiency gap in intermediate input production would strongly narrow the aggregate labor productivity difference across countries and would make final goods in poorer countries relatively cheap compared to services.
Abstract:
This paper develops a dynamic general equilibrium model to highlight the role of human capital accumulation by agents differentiated by skill type in the joint determination of social mobility and the skill premium. We first show that our model captures the empirical co-movement of the skill premium, the relative supply of skilled to unskilled workers, and aggregate output in U.S. data from 1970 to 2000. We next show that endogenous social mobility and human capital accumulation are key channels through which the effects of capital tax cuts and increases in public spending on both pre- and post-college education are transmitted. In particular, social mobility creates additional incentives for agents that enhance the beneficial effects of policy reforms. Moreover, the dynamics of human capital accumulation imply that, post reform, the skill premium is higher in the short to medium run than in the long run.
Abstract:
Bilateral oligopoly is a simple model of exchange in which a finite set of sellers seek to exchange the goods they are endowed with for money with a finite set of buyers, and no price-taking assumptions are imposed. If trade takes place via a strategic market game, bilateral oligopoly can be thought of as two linked proportional-sharing contests: in one the sellers share the aggregate bid from the buyers in proportion to their supply and in the other the buyers share the aggregate supply in proportion to their bids. The analysis can be separated into two ‘partial games’. First, fix the aggregate bid at B; in the first partial game the sellers contest this fixed prize in proportion to their supply, and the aggregate supply in the equilibrium of this game is X̃(B). Next, fix the aggregate supply at X; in the second partial game the buyers contest this fixed prize in proportion to their bids, and the aggregate bid in the equilibrium of this game is B̃(X). The analysis of these two partial games takes into account competition within each side of the market. Equilibrium in bilateral oligopoly must take into account competition between sellers and buyers and requires, for example, B̃(X̃(B)) = B. When all traders have Cobb-Douglas preferences, X̃(B) does not depend on B and B̃(X) does not depend on X: whilst there is competition within each side of the market there is no strategic interdependence between the sides of the market. The Cobb-Douglas assumption provides a tractable framework in which to explore the features of fully strategic trade, but it misses perhaps the most interesting feature of bilateral oligopoly, the implications of which are investigated.
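In display form, the two sharing rules and the consistency requirement read as follows (a minimal formalisation of the abstract's notation; the individual supplies x_i and bids b_j are introduced here for illustration):

X = \sum_i x_i, \qquad B = \sum_j b_j, \qquad \text{seller } i \text{ receives } \frac{x_i}{X}\, B, \qquad \text{buyer } j \text{ receives } \frac{b_j}{B}\, X

and equilibrium of the full game requires the partial-game equilibria to be mutually consistent:

\tilde{B}(\tilde{X}(B)) = B, \qquad \tilde{X}(\tilde{B}(X)) = X.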
Abstract:
Adverse selection may thwart trade between an informed seller, who knows the probability p that an item of antiquity is genuine, and an uninformed buyer, who does not know p. The buyer might not be wholly uninformed, however. Suppose he can perform a simple inspection, a test of his own: the probability that an item passes the test is g if the item is genuine, but only f < g if it is fake. Given that the buyer is no expert, his test may have little power: f may be close to g. Unfortunately, without much power, the buyer's test will not resolve the difficulty of adverse selection; gains from trade may remain unexploited. But now consider a "store", where the seller groups a number of items, perhaps all with the same quality, the same probability p of being genuine. (We show that in equilibrium the seller will choose to group items in this manner.) Now the buyer can conduct his test across a large sample, perhaps all, of a group of items in the seller's store. He can thereby assess the overall quality of these items; he can invert the aggregate of his test results to uncover the underlying p; he can form a "prior". There is thus no longer asymmetric information between seller and buyer: gains from trade can be exploited. This is our theory of retailing: by grouping items together - setting up a store - a seller is able to supply buyers with priors, as well as the items themselves. We show that the weaker the power of the buyer's test (the closer f is to g), the greater the seller's profit. So the seller has no incentive to assist the buyer - e.g., by performing her own tests on the items, or by cleaning them to reveal more about their true age. The paper ends with an analysis of which sellers should specialise in which qualities. We show that quality will be low in busy locations and high in expensive locations.
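The inversion step can be made explicit (a short sketch using the f, g and p defined above; the appeal to a large sample is the standard law-of-large-numbers argument):

r = p\, g + (1 - p)\, f, \qquad \hat{p} = \frac{\hat{r} - f}{g - f},

where r is the expected pass rate across a group in which a fraction p of items is genuine and \hat{r} is the pass rate the buyer actually observes. The recovered \hat{p} is well defined whenever f < g, however small the gap between f and g; this is why even a low-powered test becomes informative once items are grouped.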
Abstract:
Using the framework of Desmet and Rossi-Hansberg (forthcoming), we present a model of spatial takeoff that is calibrated using spatially-disaggregated occupational data for England in c.1710. The model predicts changes in the spatial distribution of agricultural and manufacturing employment which match data for c.1817 and 1861. The model also matches a number of aggregate changes that characterise the first industrial revolution. Using counterfactual geographical distributions, we show that the initial concentration of productivity can matter for whether and when an industrial takeoff occurs. Subsidies to innovation in either sector can bring forward the date of takeoff, while subsidies to the use of land by manufacturing firms can significantly delay a takeoff because they decrease the spatial concentration of activity.
Abstract:
The purpose of this paper is to highlight the curiously circular course followed by mainstream macroeconomic thinking in recent times. Having broken from classical orthodoxy in the late 1930s via Keynes’s General Theory, over the last three or four decades the mainstream conventional wisdom, regressing rather than progressing, has come to embrace a conception of the working of the macroeconomy which is again of a classical, essentially pre-Keynesian, character. At the core of the analysis presented in the typical contemporary macro textbook is the (neo)classical model of the labour market, which represents employment as determined (given conditions of productivity) by the terms of labour supply. While it is allowed that changes in aggregate demand may temporarily affect output and employment, the contention is that in due course employment will automatically return to its ‘natural’ (full employment) level. Unemployment is therefore identified as a merely frictional or voluntary phenomenon: involuntary unemployment - in other words, persistent demand-deficient unemployment - is entirely absent from the picture. Variations in aggregate demand are understood to have a lasting impact only on the price level, not on output and employment. This in effect amounts to a return to a Pigouvian conception of the kind targeted by Keynes in the General Theory. We take the view that this reversion to ideas which should by now be obsolete reflects not the discovery of logical or empirical deficiencies in Keynes’s analysis, but rather doctrinaire blindness and a failure of scholarship, on account of which essential features of Keynes’s theory have been overlooked or misrepresented. There is an urgent need for a critical appraisal of the current conventional macroeconomic wisdom.
Abstract:
This paper undertakes a normative investigation of the quantitative properties of optimal tax smoothing in a business cycle model with state contingent debt, capital-skill complementarity, endogenous skill formation and stochastic shocks to public consumption as well as total factor and capital equipment productivity. Our main finding is that an empirically relevant restriction which does not allow the relative supply of skilled labour to adjust in response to aggregate shocks significantly changes the cyclical properties of optimal labour taxes. Under a restricted relative skill supply, the government finds it optimal to adjust labour income tax rates so that the average net returns to skilled and unskilled labour hours exhibit the same dynamic behaviour as under flexible skill supply.
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy maker preferences and shock volatilities. This reveals that there have been several changes in Euro area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
Abstract:
This paper studies unemployed workers’ decisions to change occupations, and their impact on fluctuations in aggregate unemployment and its underlying duration distribution. We develop an analytically and computationally tractable stochastic equilibrium model with heterogeneous labor markets. In this model, three different types of unemployment arise: search, rest and reallocation unemployment. We document new evidence on unemployed workers’ gross occupational mobility and use it to calibrate the model. We show that rest unemployment is the main driver of unemployment fluctuations over the business cycle and causes cyclical unemployment to be highly volatile. The resulting unemployment duration distribution generated by the model responds realistically to the business cycle, creating substantial longer-term unemployment in downturns. Finally, rest unemployment also makes our model simultaneously consistent with procyclical occupational mobility of the unemployed, countercyclical job separations into unemployment and a negatively-sloped Beveridge curve.
Abstract:
Operating overheads are widespread and lead to concentrated bursts of activity. To transfer resources between active and idle spells, agents demand financial assets. Futures contracts and lotteries are unsuitable, as they have substantial overheads of their own. We show that money – under efficient monetary policy – is a liquid asset that leads to efficient allocations. Under all other policies, agents follow inefficient “money cycle” patterns of saving, activity, and inactivity. Agents spend their money too quickly – a “hot potato effect of inflation”. We show that inflation can stimulate inefficiently high aggregate output.
Abstract:
This paper evaluates the effects of policy interventions on sectoral labour markets and the aggregate economy in a business cycle model with search and matching frictions. We extend the canonical model by including capital-skill complementarity in production, labour markets with skilled and unskilled workers, and on-the-job learning (OJL) within and across skill types. We first find that the model does a good job of matching the cyclical properties of sectoral employment and the wage-skill premium. We next find that vacancy subsidies for skilled and unskilled jobs lead to output multipliers which are greater than unity with OJL and less than unity without OJL. In contrast, the positive output effects from cutting skilled and unskilled income taxes are close to zero. Finally, we find that the sectoral and aggregate effects of vacancy subsidies do not depend on whether they are financed via public debt or distorting taxes.