Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy maker preferences and shock volatilities. This reveals that there have been several changes in Euro area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro-area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro-area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro-area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
Abstract:
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of “legal uncertainty”, relating it to ideas in the literature on Law and Economics, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard and also penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
Abstract:
We determine the optimal combination of a universal benefit, B, and categorical benefit, C, for an economy in which individuals differ in both their ability to work - modelled as an exogenous zero quantity constraint on labour supply - and, conditional on being able to work, their productivity at work. C is targeted at those unable to work, and is conditioned in two dimensions: ex-ante an individual must be unable to work and be awarded the benefit, whilst ex-post a recipient must not subsequently work. However, the ex-ante conditionality may be imperfectly enforced due to Type I (false rejection) and Type II (false award) classification errors, whilst, in addition, the ex-post conditionality may be imperfectly enforced. If there are no classification errors - and thus no enforcement issues - it is always optimal to set C>0, whilst B=0 only if the benefit budget is sufficiently small. However, when classification errors occur, B=0 only if there are no Type I errors and the benefit budget is sufficiently small, while the conditions under which C>0 depend on the enforcement of the ex-post conditionality. We consider two discrete alternatives. Under No Enforcement C>0 only if the test administering C has some discriminatory power. In addition, social welfare is decreasing in the propensity to make each type of error. However, under Full Enforcement C>0 for all levels of discriminatory power. Furthermore, whilst social welfare is decreasing in the propensity to make Type I errors, there are certain conditions under which it is increasing in the propensity to make Type II errors. This implies that there may be conditions under which it would be welfare enhancing to lower the chosen eligibility threshold, supporting the suggestion by Goodin (1985) to "err on the side of kindness".
Abstract:
In a market in which sellers compete by posting mechanisms, we study how the properties of the meeting technology affect the mechanism that sellers select. In general, sellers have incentive to use mechanisms that are socially efficient. In our environment, sellers achieve this by posting an auction with a reserve price equal to their own valuation, along with a transfer that is paid by (or to) all buyers with whom the seller meets. However, we define a novel condition on meeting technologies, which we call “invariance,” and show that the transfer is equal to zero if and only if the meeting technology satisfies this condition.
Abstract:
The aim of this paper is to study inequality and deprivations as reflected in the human sex ratio (commonly defined as the number of males per 100 females). The particular focus is on three emerging economies, viz., Russia, India and China. The paper compares and contrasts the experiences of these countries and discusses policy issues. It is noted that while the feminist perspective on the issues surrounding the sex ratio is important, it would be wrong to view these issues always or exclusively through the prism of that perspective. It is also suggested that India and China probably have better prospects of sustained economic growth in the foreseeable future than does Russia.
Abstract:
Time-inconsistency is an essential feature of many policy problems (Kydland and Prescott, 1977). This paper presents and compares three methods for computing Markov-perfect optimal policies in stochastic nonlinear business cycle models. The methods considered include value function iteration, generalized Euler equations, and parameterized shadow prices. In the context of a business cycle model in which a fiscal authority chooses government spending and income taxation optimally, while lacking the ability to commit, we show that the solutions obtained using value function iteration and generalized Euler equations are somewhat more accurate than that obtained using parameterized shadow prices. Among these three methods, we show that value function iteration can be applied easily, even to environments that include a risk-sensitive fiscal authority and/or inequality constraints on government spending. We show that the risk-sensitive fiscal authority lowers government spending and income taxation, reducing the disincentive households face to accumulate wealth.
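As an illustrative sketch of the first method mentioned above, the fragment below applies value function iteration to a textbook deterministic growth model with log utility, not the paper's fiscal-policy model; the discount factor, capital share and grid bounds are arbitrary illustration choices.

```python
import numpy as np

def value_function_iteration(beta=0.95, alpha=0.3, n=50, tol=1e-8, max_iter=2000):
    """Solve max sum_t beta^t log(c_t) s.t. k' = k^alpha - c by iterating
    the Bellman operator on a discrete capital grid (illustrative model)."""
    k_grid = np.linspace(0.05, 0.5, n)
    V = np.zeros(n)
    # consumption implied by each (k, k') pair; infeasible pairs get -inf utility
    c = k_grid[:, None] ** alpha - k_grid[None, :]
    util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    for _ in range(max_iter):
        V_new = np.max(util + beta * V[None, :], axis=1)  # Bellman update
        if np.max(np.abs(V_new - V)) < tol:               # sup-norm convergence
            V = V_new
            break
        V = V_new
    policy = np.argmax(util + beta * V[None, :], axis=1)  # optimal k' index
    return k_grid, V, k_grid[policy]
```

For this special case the exact policy is k' = alpha*beta*k^alpha, which makes the grid-based solution easy to check; the generalized Euler-equation and parameterized-shadow-price approaches compared in the paper instead work directly with the policy maker's first-order conditions.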
Abstract:
This paper considers the optimal degree of discretion in monetary policy when the central bank conducts policy based on its private information about the state of the economy and is unable to commit. Society seeks to maximize social welfare by imposing restrictions on the central bank's actions over time, and the central bank takes these restrictions and the New Keynesian Phillips curve as constraints. By solving a dynamic mechanism design problem we find that it is optimal to grant "constrained discretion" to the central bank by imposing both upper and lower bounds on permissible inflation, and that these bounds must be set in a history-dependent way. The optimal degree of discretion varies over time with the severity of the time-inconsistency problem, and, although no discretion is optimal when the time-inconsistency problem is very severe, our numerical experiment suggests that no discretion is a transient phenomenon, and that some discretion is granted eventually.
Abstract:
We re-examine the dynamics of returns and dividend growth within the present-value framework of stock prices. We find that the finite sample order of integration of returns is approximately equal to the order of integration of the first-differenced price-dividend ratio. As such, the traditional return forecasting regressions based on the price-dividend ratio are invalid. Moreover, the nonstationary long memory behaviour of the price-dividend ratio induces antipersistence in returns. This suggests that expected returns should be modelled as an ARFIMA process and we show this improves the forecast ability of the present-value model in-sample and out-of-sample.
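The long memory behaviour referred to above rests on the fractional difference operator (1-L)^d that defines an ARFIMA process. A minimal sketch of its weights and of applying it to a series (illustrative only, not the authors' estimation code):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of (1-L)^d via the binomial recursion
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1-L)^d to x, truncating the filter at the sample start."""
    w = frac_diff_weights(d, len(x))
    # y_t = sum_{j=0}^{t} w_j * x_{t-j}
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])
```

Setting d = 1 recovers the ordinary first difference, while fractional 0 < d < 1/2 gives the slowly decaying weights that generate long memory; antipersistence corresponds to d < 0.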