867 results for Time-Consistent Policy
Abstract:
Official calculations of automatic stabilizers are seriously flawed since they rest on the assumption that the only element of social spending that reacts automatically to the cycle is unemployment compensation. This calls into question many estimates of discretionary fiscal policy. In response, we propose a simultaneous estimate of automatic and discretionary fiscal policy. This leads us, quite naturally, to a tripartite decomposition of the budget balance into revenues, social spending and other spending as a bare minimum. Our headline result for a panel of 20 OECD countries over 1981-2003 is automatic stabilization of 0.59 percentage points of the primary surplus balance. All of this stabilization survives discretionary responses during contractions, but arguably only about three fifths of it does so in expansions, with discretionary behavior cancelling the rest. We pay close attention to the impact of the Maastricht Treaty and the SGP on the EU members of our sample, and to real-time data.
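A stylized rendering of this decomposition (our sketch, not necessarily the authors' exact specification) writes the primary balance b as revenues r minus social spending s minus other spending o, and lets each component respond both to the output gap (the automatic part) and to discretionary controls:

    b_{it} = r_{it} - s_{it} - o_{it}
    \Delta x_{it} = \alpha_x + \beta_x \,\mathrm{gap}_{it} + \gamma_x' z_{it} + \varepsilon_{x,it}, \qquad x \in \{r, s, o\}

The combined cyclical coefficients of the three components then give the automatic stabilization figure (0.59 percentage points of primary surplus in the headline result), while the z terms absorb discretionary responses.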
Abstract:
In an input-output context the impact of any particular industrial sector is commonly measured by the output multiplier for that industry. Although such measures are routinely calculated and often used to guide regional industrial policy, their behaviour over time has attracted little academic study. The output multipliers derived from any one table will have a distribution; for some industries the multiplier will be relatively high, for some relatively low. The recent publication of consistent input-output tables for the Scottish economy makes it possible to examine trends in this distribution over the ten-year period 1998-2007. This is done by comparing the means and other summary measures of the distributions, the histograms and the cumulative densities. The results indicate a tendency for the multipliers to increase over the period. A Markov chain modelling approach suggests that this drift is a slow but long-term phenomenon which does not appear to tend to an equilibrium state. The prime reason for the increase in the output multipliers is traced to a decline in the relative importance of imported intermediate inputs (both from the rest of the UK and the rest of the world) used by Scottish industries. This suggests that models calibrated on the set of tables might have to be interpreted with caution.
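For readers unfamiliar with the construction, the (type I) output multiplier for industry j is the j-th column sum of the Leontief inverse. A minimal Python sketch, using a made-up three-sector technical coefficients matrix rather than the Scottish tables:

    import numpy as np

    # Technical coefficients A[i, j]: input from sector i required per unit
    # of output of sector j. Illustrative numbers only.
    A = np.array([
        [0.10, 0.30, 0.05],
        [0.20, 0.10, 0.15],
        [0.05, 0.25, 0.10],
    ])

    # Leontief inverse: total (direct plus indirect) output generated
    # per unit of final demand.
    leontief = np.linalg.inv(np.eye(A.shape[0]) - A)

    # Output multiplier of sector j = j-th column sum of the Leontief inverse.
    multipliers = leontief.sum(axis=0)
    print(multipliers)

A fall in the import share of intermediates raises the domestic coefficients in A and hence the column sums, which is exactly the mechanism the abstract identifies.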
Abstract:
During the past four decades both between-group and within-group wage inequality increased significantly in the US. I provide a microfounded justification for this pattern by introducing private employer learning into a model of signaling with credit constraints. In particular, I show that when financial constraints relax, talented individuals can acquire education and leave the uneducated pool; this decreases unskilled-inexperienced wages and boosts wage inequality. This explanation is consistent with US data from 1970 to 1997, which indicate that the rise of the skill and experience premia coincides with a fall in unskilled-inexperienced wages, while skilled or experienced wages do not change much. The model accounts for: (i) the increase in the skill premium despite the growing supply of skills; (ii) the understudied aspect of rising inequality related to the increase in the experience premium; (iii) the sharp growth of the skill premium for inexperienced workers and its moderate expansion for experienced ones; (iv) the puzzling coexistence of an increasing experience premium within the group of unskilled workers and its stable pattern among the skilled ones. The results hold under various robustness checks and provide some interesting policy implications about the potential conflict between inequality of opportunity and substantial economic inequality, as well as the role of minimum wage policy in determining equilibrium wage inequality.
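For reference, the employer-learning mechanism is usually formalized (in the Farber-Gibbons tradition; a generic form, not necessarily this paper's exact model) by setting the competitive wage equal to expected productivity \theta conditional on schooling s and the accumulated performance history:

    w_{it} = \mathbb{E}\left[\theta_i \mid s_i, y_{i1}, \ldots, y_{it}\right]

When credit constraints relax, high-\theta workers select into education, lowering the conditional mean of the uneducated pool; inexperienced unskilled wages fall immediately, while wages of experienced workers, already tied to observed output, move little.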
Abstract:
This paper revisits the argument that the stabilisation bias arising under discretionary monetary policy can be reduced if policy is delegated to a policymaker with redesigned objectives. We study four delegation schemes: price level targeting, interest rate smoothing, speed limits and straight conservatism. All of these can increase social welfare in models with a unique discretionary equilibrium. We investigate how these schemes perform in a model with capital accumulation, where uniqueness does not necessarily apply. We discuss how multiplicity arises and demonstrate that no delegation scheme is able to eliminate all potential bad equilibria. Price level targeting has two interesting features: it can create a new equilibrium that is welfare-dominated, but it can also alter equilibrium stability properties and make coordination on the best equilibrium more likely.
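In the delegation literature, the four schemes are conventionally written as redesigned period losses handed to the policymaker (the standard forms, hedged as such rather than as this paper's exact objectives; \pi_t is inflation, x_t the output gap, p_t the log price level, i_t the policy rate):

    society:                  L_t = \pi_t^2 + \lambda x_t^2
    price level targeting:    L_t = (p_t - p_t^*)^2 + \lambda x_t^2
    interest rate smoothing:  L_t = \pi_t^2 + \lambda x_t^2 + \nu (i_t - i_{t-1})^2
    speed limits:             L_t = \pi_t^2 + \lambda (x_t - x_{t-1})^2
    conservatism:             L_t = \pi_t^2 + \tilde\lambda x_t^2, \quad \tilde\lambda < \lambda

Each scheme distorts the delegated objective away from the social loss in a way that mimics the history dependence of commitment policy.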
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule), allowing for Markov switching in policymaker preferences and shock volatilities. This reveals several changes in Euro-area policy making: a strengthening of the anti-inflation stance in the early years of the ERM, which was lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro-area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is discretion, with no evidence of commitment in the Euro area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro area, the welfare gains would have been substantially higher had policymakers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
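Schematically (our stylization, not the paper's full model), the Markov-switching layer makes the preference weight, like the shock volatilities, depend on a latent regime s_t with first-order Markov dynamics:

    \min \; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[ \pi_t^2 + \lambda(s_t)\, x_t^2 \right], \qquad \Pr(s_{t+1} = j \mid s_t = i) = p_{ij}

The problem is solved period by period under discretion and once and for all under commitment; comparing the fit of the two (and of a simple rule) across regimes is what underlies the 'no evidence of commitment' conclusion.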
Abstract:
This paper has three objectives. First, it aims to reveal the logic of interest rate setting pursued by the monetary authorities of 12 new EU members. Estimating an augmented Taylor rule, we find that this setting was not always consistent with official monetary policy. Second, we seek to shed light on the inflation process of these countries. To this end, we estimate an open-economy Phillips curve (PC). Our main finding is that inflation rates were driven not only by backward-looking persistence but also contained a forward-looking component. Finally, we assess the viability of existing monetary arrangements for price stability, using the conditional inflation variance obtained from a GARCH estimation of the PC. We conclude that inflation targeting is preferable to an exchange rate peg because it allowed the inflation rate to decrease and anchored its volatility.
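In generic form (our notation; the country-specific augmentations vary), the three estimating equations are an augmented Taylor rule with smoothing, a hybrid open-economy Phillips curve, and a GARCH(1,1) conditional variance:

    i_t = \rho\, i_{t-1} + (1-\rho)\left[\bar{r} + \phi_\pi \pi_t + \phi_y y_t + \phi_e \Delta e_t\right] + u_t
    \pi_t = \alpha\, \pi_{t-1} + \gamma\, \mathbb{E}_t \pi_{t+1} + \kappa\, y_t + \delta\, \Delta e_t + \varepsilon_t
    \sigma_{\varepsilon,t}^2 = \omega + a\, \varepsilon_{t-1}^2 + b\, \sigma_{\varepsilon,t-1}^2

Here \alpha captures backward persistence and \gamma the forward-looking component, while \sigma_{\varepsilon,t}^2 is the conditional inflation variance whose behaviour is compared across monetary arrangements.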
Abstract:
Domestic action on climate change is increasingly important in the light of the difficulties with international agreements, and requires a combination of solutions in terms of institutions and policy instruments. One way of achieving government carbon policy goals may be the creation of an independent body to advise on, set or monitor policy. This paper critically assesses the Committee on Climate Change (CCC), created in 2008 as an independent body to help move the UK towards a low-carbon economy. We look at the motivation for its creation in terms of information provision, advice, monitoring and policy delegation. In particular we consider its ability to overcome a time-inconsistency problem by comparing and contrasting it with another independent body, the Monetary Policy Committee of the Bank of England. In practice the Committee on Climate Change appears to be the ‘inverse’ of the Monetary Policy Committee, in that it advises on what the policy goal should be rather than being responsible for achieving it. The CCC incorporates both advisory and monitoring functions to inform government and achieve a credible carbon policy over a long time frame. This is a framework similar to that adopted by Stern (2006), but the CCC operates on a continuing basis; we therefore believe the CCC is best viewed as a "Rolling Stern plus" body. There are also concerns as to how binding the carbon budgets actually are and how they interact with other energy policy goals and instruments, such as Renewables Obligation Certificates and the EU Emissions Trading Scheme. The CCC could potentially be reformed to include an explicit information-provision role, consumption-based accounting of emissions, and control of a policy instrument such as a balanced-budget carbon tax.
Abstract:
The possibility of low-probability extreme natural events has reignited the debate over the optimal intensity and timing of climate policy. In this paper, we contribute to the literature by assessing the implications of such events for environmental policy in a continuous-time real options model with “tail risk”. In a nutshell, our results indicate the importance of tail risk and call for foresighted, pre-emptive climate policies.
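"Tail risk" in continuous-time real-options models of this kind is commonly captured by appending a Poisson jump to the diffusion driving damages or the climate state; a generic form (our sketch, not necessarily the authors' specification) is

    dX_t = \mu X_t\, dt + \sigma X_t\, dW_t + J\, X_{t^-}\, dN_t, \qquad N_t \sim \mathrm{Poisson}(\lambda t)

with jump intensity \lambda and jump size J. As either grows, the option value of waiting shrinks, pushing the optimal policy towards earlier, pre-emptive action.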
Abstract:
An abundant literature on climate change economics points out that the future participation of developing countries in international environmental policies will depend on their payoffs inside and outside specific agreements. These studies analyze coalition stability, typically through a game-theoretic approach. Though these contributions represent a cornerstone of the research investigating plausible future international coalitions and the reasons behind the difficulties incurred over time in implementing emissions-stabilizing actions, they cannot satisfactorily disentangle the role that equality plays in inducing poor regions to tackle global warming. If we focus on the Stern Review findings that climate change will generate heavy damages and that policy action will be costly over a finite time horizon, we understand why there is a great incentive to free ride in order to exploit the emissions reduction efforts of others. The reluctance of poor countries to join international agreements rests mainly on the historical responsibility of rich regions for generating atmospheric carbon concentration, whereas rich countries claim that emissions-stabilizing policies will be effective only if developing countries join them. Scholars have recently pointed out that perceived fairness in the distribution of emissions would facilitate widespread participation in international agreements. In this paper we survey the literature on the distributional aspects of emissions, focusing on contributions that investigate past trends in the distribution of emissions through empirical data and future trajectories through simulations from integrated assessment models. We explain the methodologies used to elaborate the data and the link between real data and those coming from simulations. Results from this strand of research are interpreted in order to discuss future negotiations for post-Kyoto agreements, the focus of the next Conference of the Parties in Copenhagen at the end of 2009. Particular attention is devoted to the role that technological change will play in shaping the distribution of emissions over time, and to how spillovers and the diffusion of experience could influence equality issues and future outcomes of policy negotiations.
Abstract:
This paper is the first to examine the implications of switching to part-time (PT) work for women's subsequent earnings trajectories, distinguishing by type of contract: permanent or fixed-term. Using a rich longitudinal Spanish data set from the Social Security records of over 76,000 prime-aged women strongly attached to the Spanish labor market, we find that PT work aggravates the segmentation of the labor market insofar as there is a PT pay penalty, and this penalty is larger and more persistent for women with fixed-term contracts. The paper discusses problems arising in the empirical estimation (including one not previously discussed in the literature: the differential measurement error of the LHS variable by PT status) and how to address them. It concludes with policy implications relevant for Continental Europe and its dual structure of employment protection.
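A stylized version of the earnings-trajectory comparison (our notation, not the paper's exact specification) is an event-study wage equation estimated separately by contract type, with individual fixed effects \alpha_i absorbing permanent heterogeneity:

    \ln w_{it} = \alpha_i + \theta_t + \sum_{k \ge 0} \beta_k\, D_{it}^{(k)} + \gamma' X_{it} + \varepsilon_{it}

where D_{it}^{(k)} flags k years since the switch to PT work; a penalty that is "larger and more persistent" for fixed-term contracts corresponds to more negative \beta_k that decay more slowly.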
Abstract:
This paper provides evidence on the sources of differences in inequality in educational scores across European Union member states by decomposing them into their determining factors. Using PISA data from the 2000 and 2006 waves, the paper shows that inequalities emerge in all countries and in both periods, but decreased in Germany whilst increasing in France and Italy. The decomposition shows that educational inequalities do not only reflect background-related inequality, but above all schools' characteristics. The findings allow policy makers to target areas that may contribute to reducing educational inequalities.
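Decompositions of this kind typically regress scores on pupil background and school characteristics and attribute inequality shares factor by factor; in a Fields-type formulation (a common choice, not necessarily the one used here),

    y_i = \beta' x_i + \varepsilon_i, \qquad s_k = \frac{\operatorname{Cov}(\beta_k x_{ik},\, y_i)}{\operatorname{Var}(y_i)}

so the factor shares s_k (plus the residual share) sum to one, and the finding that schools' characteristics matter above all means their share exceeds that of family background.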
Abstract:
We examine the evolution of monetary policy rules in a group of inflation targeting countries (Australia, Canada, New Zealand, Sweden and the United Kingdom), applying a moment-based estimator to a time-varying parameter model with endogenous regressors. Using this novel flexible framework, our main findings are threefold. First, monetary policy rules change gradually, pointing to the importance of a time-varying estimation framework. Second, the interest rate smoothing parameter is much lower than what previous time-invariant estimates of policy rules typically report. External factors matter for all countries, although the importance of the exchange rate diminishes after the adoption of inflation targeting. Third, the response of interest rates to inflation is particularly strong during periods when central bankers want to break a record of high inflation, such as in the U.K. or in Australia at the beginning of the 1980s. Contrary to common wisdom, the response becomes less aggressive after the adoption of inflation targeting, suggesting the positive effect of this regime on anchoring inflation expectations. This result is supported by our finding that inflation persistence as well as the policy neutral rate typically decreased after the adoption of inflation targeting.
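The estimated rule has the generic time-varying state-space form (our stylization): the coefficients drift as random walks, and the moment-based estimator corrects for the endogeneity of regressors such as inflation,

    i_t = \rho_t\, i_{t-1} + (1-\rho_t)\left[\phi_{\pi,t}\, \pi_t + \phi_{y,t}\, y_t + \phi_{e,t}\, e_t\right] + u_t
    \beta_t = \beta_{t-1} + \eta_t, \qquad \beta_t = (\rho_t, \phi_{\pi,t}, \phi_{y,t}, \phi_{e,t})'

so gradual policy change shows up as smooth paths of \beta_t rather than discrete breaks, and the smoothing parameter \rho_t can be read off period by period.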
Abstract:
We examine whether and how the main central banks responded to episodes of financial stress over the last three decades. We employ a new methodology for the estimation of monetary policy rules that allows for time-varying response coefficients and corrects for endogeneity. This flexible framework, applied to the U.S., the U.K., Australia, Canada and Sweden together with a new financial stress dataset developed by the International Monetary Fund, allows us not only to test whether the central banks responded to financial stress but also to detect the periods and types of stress that most worried monetary authorities, and to quantify the intensity of the policy response. Our findings suggest that central banks often change policy
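In the same spirit as the previous framework, the policy rule here is augmented with the IMF financial stress index (FSI), again with time-varying coefficients (our stylization, not the paper's exact specification):

    i_t = \rho_t\, i_{t-1} + (1-\rho_t)\left[\phi_{\pi,t}\, \pi_t + \phi_{y,t}\, y_t\right] + \delta_t\, \mathrm{FSI}_t + u_t

A significantly negative \delta_t in a given period indicates loosening in response to stress at that time, which is how the framework dates the episodes and quantifies the intensity of the response.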
Abstract:
This dissertation focuses on the practice of regulatory governance through a study of the functioning of formally independent regulatory agencies (IRAs), with special attention to their de facto independence. The research goals are grounded in a "neo-positivist" (or "reconstructed positivist") position (Hawkesworth 1992; Radaelli 2000b; Sabatier 2000). This perspective starts from the ontological assumption that even if subjective perceptions are constitutive elements of political phenomena, a real world exists beyond any social construction and can, however imperfectly, become the object of scientific inquiry. Epistemologically, it follows that hypothetico-deductive theories with explanatory aims can be tested by employing a proper methodology and set of analytical techniques. It is thus possible to make scientific inferences and general conclusions to a certain extent, according to a Bayesian conception of knowledge, in order to update prior scientific beliefs in the truth of the related hypotheses (Howson 1998), while acknowledging that the conditions of truth are at least partially subjective and historically determined (Foucault 1988; Kuhn 1970). At the same time, a sceptical position is adopted towards the supposed disjunction between facts and values and the possibility of discovering abstract universal laws in social science.

It has been observed that the current version of capitalism corresponds to the golden age of regulation, and that since the 1980s no government activity in OECD countries has grown faster than regulatory functions (Jacobs 1999). In an apparent paradox, the ongoing dynamics of liberalisation, privatisation, decartelisation, internationalisation, and regional integration have hardly led to the crumbling of the state; instead they have promoted a wave of regulatory growth in the face of new risks and new opportunities (Vogel 1996). Accordingly, a new order of regulatory capitalism is rising, implying a new division of labour between state and society and entailing the expansion and intensification of regulation (Levi-Faur 2005). The previous order, relying on public ownership and public intervention and/or on sectoral self-regulation by private actors, is being replaced by a more formalised, expert-based, open, and independently regulated model of governance.

Independent regulatory agencies (IRAs), that is, formally independent administrative agencies with regulatory powers that benefit from public authority delegated by political decision makers, represent the main institutional feature of regulatory governance (Gilardi 2008). IRAs constitute a relatively new technology of regulation in western Europe, at least for certain domains, but they are increasingly widespread across countries and sectors. For instance, independent regulators have been set up to regulate very diverse issues, such as general competition, banking and finance, telecommunications, civil aviation, railway services, food safety, the pharmaceutical industry, electricity, environmental protection, and personal data privacy. Two attributes of IRAs deserve special mention. On the one hand, they are formally separated from democratic institutions and elected politicians, raising normative and empirical concerns about their accountability and legitimacy. On the other hand, IRAs often accumulate executive, (quasi-)legislative, and adjudicatory functions together with their regulatory competencies, and hard questions about their role as political actors, as well as about their performance, remain unaddressed.