Abstract:
Using a standard open economy DSGE model, it is shown that the timing of asset trade relative to policy decisions has a potentially important impact on the welfare evaluation of monetary policy at the individual country level. If asset trade in the initial period takes place before the announcement of policy, a national policymaker can choose a policy rule which reduces the work effort of households in the policymaker’s country in the knowledge that consumption is fully insured by optimally chosen international portfolio positions. But if asset trade takes place after the policy announcement, this insurance is absent and households in the policymaker’s country bear the full consumption consequences of the chosen policy rule. The welfare incentives faced by national policymakers are very different between the two cases. Numerical examples confirm that asset market timing has a significant impact on the optimal policy rule.
Abstract:
In the theoretical macroeconomics literature, fiscal policy is almost uniformly taken to mean taxing and spending by a ‘benevolent government’ that exploits the potential aggregate demand externalities inherent in the imperfectly competitive nature of goods markets. Whilst shown to raise aggregate output and employment, these policies crowd out private consumption and hence typically reduce welfare. In this paper we consider the use of ‘tax-and-subsidise’ instead of ‘tax-and-spend’ policies on account of their widespread use by governments, even in the recent recession, to stimulate economic activity. Within a static general equilibrium macro-model with imperfectly competitive goods markets we examine the effect of wage and output subsidies and show that, for a small open economy, positive tax and subsidy rates exist which maximise welfare, rendering non-intervention a suboptimal state. We also show that, within a two-country setting, a Nash non-cooperative symmetric equilibrium with positive tax and subsidy rates exists, and that cooperation between trading partners in setting these rates is more expansionary and leads to an improvement upon the non-cooperative solution.
Abstract:
This paper investigates underlying changes in the UK economy over the past thirty-five years using a small open economy DSGE model. Using Bayesian analysis, we find that UK monetary policy, nominal price rigidity and exogenous shocks are all subject to regime shifting. A model incorporating these changes is used to estimate the realised monetary policy and derive the optimal monetary policy for the UK. This allows us to assess the effectiveness of the realised policy in terms of stabilising economic fluctuations and, in turn, to provide an indication of whether there is room for monetary authorities to further improve their policies.
Abstract:
The stylized facts suggest a negative relationship between tax progressivity and the skill premium from the early 1960s until the early 1990s, and a positive one thereafter. They also generally imply rising tax progressivity, except for the 1980s. In this paper, we ask whether optimal tax policy is consistent with these observations, taking into account the demographic and technological factors that have also affected the skill premium. To this end, we construct a dynamic general equilibrium model in which the skill premium and the progressivity of the tax system are endogenously determined, with the latter being optimally chosen by a benevolent government. We find that optimal policy delivers both a progressive tax system and model predictions which are generally consistent, except for the 1980s, with the stylized facts relating to the skill premium and progressivity. To capture the patterns in the data over the 1980s requires that we adopt a government policy which is biased towards the interests of skilled agents. Thus, in addition to demographic and technological factors, changes in the preferences of policy-makers appear to be a potentially important factor in determining the evolution of the observed skill premium.
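A convenient way to see what "progressivity" means operationally is a parametric net-tax schedule of the form T(y) = y - lambda * y^(1 - tau), where tau > 0 delivers an average tax rate that rises with income. The sketch below only illustrates that generic measure; the function name and the parameter values are placeholders and are not taken from the paper's model.

```python
def net_tax(y, lam=0.85, tau=0.15):
    """Parametric schedule T(y) = y - lam * y**(1 - tau); tau indexes progressivity.
    The values of lam and tau here are illustrative, not estimates from the paper."""
    return y - lam * y ** (1 - tau)

for y in (0.5, 1.0, 2.0):
    # Average tax rate rises with income when tau > 0, i.e. the schedule is progressive.
    print(y, round(net_tax(y) / y, 3))
```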
Abstract:
The quintessence of recent natural science studies is that the 2°C target can only be achieved with massive emission reductions in the next few years. The central contribution of this paper is to incorporate this limited time to act into a non-perpetual real options framework for analysing optimal climate policy under uncertainty. The window-of-opportunity modelling setup shows that the limited time to act may spark a trend reversal in the direction of low-carbon alternatives. However, the implementation of a climate policy is impeded by high uncertainty about possible climate pathways.
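The "limited time to act" mechanism can be illustrated with a minimal non-perpetual option valuation on a binomial lattice: once the exercise window closes, the option to adopt a low-carbon alternative is lost, so a shorter window erodes the value of waiting. The sketch below is a generic finite-horizon example with made-up parameters, not the paper's calibrated climate-policy model.

```python
import numpy as np

def finite_window_option_value(v0, investment_cost, up, down, p_up, discount, horizon):
    """Value of an option to adopt a low-carbon technology that can only be exercised
    within a finite window of `horizon` periods (binomial lattice). All parameters are
    illustrative placeholders, not calibrated to the paper."""
    # Terminal payoffs: exercise if still profitable, otherwise the option expires worthless.
    values = np.array([max(v0 * up**j * down**(horizon - j) - investment_cost, 0.0)
                       for j in range(horizon + 1)])
    # Backward induction: at each node compare immediate exercise with waiting.
    for t in range(horizon - 1, -1, -1):
        continuation = discount * (p_up * values[1:t + 2] + (1 - p_up) * values[:t + 1])
        exercise = np.array([v0 * up**j * down**(t - j) - investment_cost
                             for j in range(t + 1)])
        values = np.maximum(exercise, continuation)
    return values[0]

# Shrinking the window (horizon) lowers the value of waiting, which is the mechanism
# behind the "limited time to act" trend reversal discussed above.
print(finite_window_option_value(v0=100, investment_cost=95, up=1.2, down=0.85,
                                 p_up=0.55, discount=0.96, horizon=5))
```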
Abstract:
This paper analyses optimal income taxes over the business cycle under a balanced-budget restriction, for low, middle and high income households. A model incorporating capital-skill complementarity in production and differential access to capital and labour markets is developed to capture the cyclical characteristics of the US economy, as well as the empirical observations on wage (skill premium) and wealth inequality. We find that the tax rate for high income agents is optimally the least volatile and the tax rate for low income agents the least countercyclical. In contrast, the path of optimal taxes for the middle income group is found to be very volatile and countercyclical. We further find that the optimal response to output-enhancing capital equipment technology and spending cuts is to increase the progressivity of income taxes. Finally, in response to positive TFP shocks, taxation becomes more progressive after about two years.
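The capital-skill complementarity technology referred to here (and in the next abstract) is commonly written as a two-level CES nesting in the spirit of Krusell, Ohanian, Ríos-Rull and Violante (2000), in which equipment capital substitutes more easily for unskilled than for skilled labour. The sketch below uses arbitrary parameter values and a finite-difference skill premium purely to show the mechanism, not to reproduce the paper's calibration.

```python
import numpy as np

def output_ces(k_struct, k_equip, skilled, unskilled,
               alpha=0.3, mu=0.4, lam=0.5, sigma=0.4, rho=-0.5):
    """Two-level CES production with capital-skill complementarity: sigma > rho makes
    equipment more complementary with skilled than with unskilled labour.
    Parameter values are placeholders, not the paper's estimates."""
    inner = lam * k_equip**rho + (1 - lam) * skilled**rho
    outer = mu * unskilled**sigma + (1 - mu) * inner**(sigma / rho)
    return k_struct**alpha * outer**((1 - alpha) / sigma)

def skill_premium(k_struct, k_equip, skilled, unskilled, eps=1e-6, **params):
    """Skill premium = ratio of marginal products, computed by finite differences."""
    mps = (output_ces(k_struct, k_equip, skilled + eps, unskilled, **params)
           - output_ces(k_struct, k_equip, skilled, unskilled, **params)) / eps
    mpu = (output_ces(k_struct, k_equip, skilled, unskilled + eps, **params)
           - output_ces(k_struct, k_equip, skilled, unskilled, **params)) / eps
    return mps / mpu

# With sigma > rho, more equipment capital raises the skill premium (capital-skill
# complementarity); compare the two printed values.
print(skill_premium(1.0, 1.0, 0.4, 0.6), skill_premium(1.0, 2.0, 0.4, 0.6))
```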
Abstract:
This paper examines whether efficiency considerations require that optimal labour income taxation is progressive or regressive in a model with skill heterogeneity, endogenous skill acquisition and a production sector with capital-skill complementarity. We find that wage inequality driven by the resource requirements of skill creation implies progressive labour income taxation in the steady state as well as along the transition path from the exogenous-policy to the optimal-policy steady state. These results are explained by a lower labour supply elasticity for skilled than for unskilled labour, which results from the introduction of the skill acquisition technology.
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy maker preferences and shock volatilities. This reveals that there have been several changes in Euro area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
Abstract:
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of “legal uncertainty”, relating it to ideas in the literature on Law and Economics, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard, and penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
Abstract:
We determine the optimal combination of a universal benefit, B, and categorical benefit, C, for an economy in which individuals differ in both their ability to work - modelled as an exogenous zero quantity constraint on labour supply - and, conditional on being able to work, their productivity at work. C is targeted at those unable to work, and is conditioned in two dimensions: ex-ante an individual must be unable to work and be awarded the benefit, whilst ex-post a recipient must not subsequently work. However, the ex-ante conditionality may be imperfectly enforced due to Type I (false rejection) and Type II (false award) classification errors, whilst, in addition, the ex-post conditionality may be imperfectly enforced. If there are no classification errors - and thus no enforcement issues - it is always optimal to set C>0, whilst B=0 only if the benefit budget is sufficiently small. However, when classification errors occur, B=0 only if there are no Type I errors and the benefit budget is sufficiently small, while the conditions under which C>0 depend on the enforcement of the ex-post conditionality. We consider two discrete alternatives. Under No Enforcement, C>0 only if the test administering C has some discriminatory power. In addition, social welfare is decreasing in the propensity to make each type of error. However, under Full Enforcement, C>0 for all levels of discriminatory power. Furthermore, whilst social welfare is decreasing in the propensity to make Type I errors, there are certain conditions under which it is increasing in the propensity to make Type II errors. This implies that there may be conditions under which it would be welfare enhancing to lower the chosen eligibility threshold, supporting the suggestion by Goodin (1985) to "err on the side of kindness".
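To fix ideas on how the two classification errors interact with the benefit budget, the sketch below does some back-of-the-envelope accounting for a population in which everyone is assessed for C: Type I errors shrink coverage of the unable, Type II errors create leakage to the able, and a crude "retention" parameter stands in for how the ex-post "must not work" condition is enforced. The function and its parameter values are illustrative and do not reproduce the paper's welfare analysis.

```python
def per_capita_spend_and_coverage(frac_unable, type1, type2, B, C, retention=1.0):
    """Back-of-the-envelope accounting for a universal benefit B plus a categorical
    benefit C awarded by an imperfect test; illustrative only, not the paper's model.

    type1: probability a truly unable person is wrongly rejected (false rejection)
    type2: probability an able person is wrongly awarded C (false award)
    retention: share of false awards actually kept, a crude stand-in for how the
               ex-post "must not work" condition is enforced
    """
    reach = frac_unable * (1 - type1)                 # unable people who receive C
    leakage = (1 - frac_unable) * type2 * retention   # able people who keep C
    spend = B + C * (reach + leakage)                 # expected per-capita outlay
    coverage = reach / frac_unable                    # share of the unable who get C
    return spend, coverage

print(per_capita_spend_and_coverage(frac_unable=0.1, type1=0.15, type2=0.05,
                                    B=1.0, C=2.0))
```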
Abstract:
Time-lapse crosshole ground-penetrating radar (GPR) data, collected while infiltration occurs, can provide valuable information regarding the hydraulic properties of the unsaturated zone. In particular, the stochastic inversion of such data provides estimates of parameter uncertainties, which are necessary for hydrological prediction and decision making. Here, we investigate the effect of different infiltration conditions on the stochastic inversion of time-lapse, zero-offset-profile GPR data. Inversions are performed using a Bayesian Markov-chain-Monte-Carlo methodology. Our results clearly indicate that considering data collected during a forced infiltration test helps to better refine soil hydraulic properties compared to data collected under natural infiltration conditions.
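The stochastic inversion referred to above boils down to sampling a posterior over hydraulic parameters with a Markov-chain-Monte-Carlo scheme. The sketch below is a generic random-walk Metropolis sampler with a toy one-line forward model standing in for the coupled hydrological and petrophysical simulator a real GPR inversion would require; all names and numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta, times):
    """Placeholder forward model mapping two hydraulic parameters to synthetic travel
    times; purely illustrative, not the hydrogeophysical model used in the paper."""
    k_sat, alpha = theta
    return k_sat * np.exp(-alpha * times)

def log_posterior(theta, times, data, sigma):
    if np.any(np.asarray(theta) <= 0):      # simple positivity prior
        return -np.inf
    resid = data - forward_model(theta, times)
    return -0.5 * np.sum((resid / sigma) ** 2)   # Gaussian likelihood, flat prior

def metropolis(times, data, sigma, theta0, step, n_iter=5000):
    """Random-walk Metropolis sampler: the (much simplified) core of an MCMC inversion."""
    theta = np.array(theta0, dtype=float)
    logp = log_posterior(theta, times, data, sigma)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_posterior(prop, times, data, sigma)
        if np.log(rng.uniform()) < logp_prop - logp:    # accept/reject step
            theta, logp = prop, logp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Synthetic example: the spread of the chain quantifies parameter uncertainty.
times = np.linspace(0.0, 10.0, 25)
data = forward_model((2.0, 0.3), times) + 0.05 * rng.standard_normal(times.size)
chain = metropolis(times, data, sigma=0.05, theta0=(1.0, 0.5), step=0.05)
print(chain[2500:].mean(axis=0), chain[2500:].std(axis=0))
```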
Abstract:
In a market in which sellers compete by posting mechanisms, we study how the properties of the meeting technology affect the mechanism that sellers select. In general, sellers have an incentive to use mechanisms that are socially efficient. In our environment, sellers achieve this by posting an auction with a reserve price equal to their own valuation, along with a transfer that is paid by (or to) all buyers with whom the seller meets. However, we define a novel condition on meeting technologies, which we call “invariance,” and show that the transfer is equal to zero if and only if the meeting technology satisfies this condition.
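The efficiency claim, that a second-price auction with a reserve equal to the seller's own valuation allocates the good exactly when some buyer values it more than the seller, can be checked with a small Monte Carlo exercise. The sketch below assumes a Poisson number of buyers with uniform valuations as one simple meeting technology; it does not model the paper's invariance condition or the buyer transfer.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_surplus(seller_value, mean_buyers, n_sims=50_000):
    """Monte Carlo total surplus when a seller posts a second-price auction with
    reserve equal to her own valuation and the number of buyers she meets is Poisson.
    Purely illustrative; not the paper's model."""
    surplus = np.empty(n_sims)
    for i, n in enumerate(rng.poisson(mean_buyers, n_sims)):
        best = rng.uniform(0.0, 1.0, n).max() if n > 0 else 0.0   # buyer values ~ U[0, 1]
        # With reserve = own valuation the good changes hands exactly when some buyer
        # values it more than the seller, which is the efficient allocation.
        surplus[i] = max(best, seller_value)
    return surplus.mean()

print(expected_surplus(seller_value=0.3, mean_buyers=2.0))
```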
Abstract:
Acute cardiovascular dysfunction occurs perioperatively in more than 20% of cardiosurgical patients, yet current acute heart failure (HF) classification is not applicable to this period. Indicators of major perioperative risk include unstable coronary syndromes, decompensated HF, significant arrhythmias and valvular disease. Clinical risk factors include history of heart disease, compensated HF, cerebrovascular disease, presence of diabetes mellitus, renal insufficiency and high-risk surgery. EuroSCORE reliably predicts perioperative cardiovascular alteration in patients aged less than 80 years. Preoperative B-type natriuretic peptide level is an additional risk stratification factor. Aggressively preserving heart function during cardiosurgery is a major goal. Volatile anaesthetics and levosimendan seem to be promising cardioprotective agents, but large trials are still needed to assess the best cardioprotective agent(s) and optimal protocol(s). The aim of monitoring is early detection and assessment of mechanisms of perioperative cardiovascular dysfunction. Ideally, volume status should be assessed by 'dynamic' measurement of haemodynamic parameters. Assess heart function first by echocardiography, then using a pulmonary artery catheter (especially in right heart dysfunction). If volaemia and heart function are in the normal range, cardiovascular dysfunction is very likely related to vascular dysfunction. In treating myocardial dysfunction, consider the following options, either alone or in combination: low-to-moderate doses of dobutamine and epinephrine, milrinone or levosimendan. In vasoplegia-induced hypotension, use norepinephrine to maintain adequate perfusion pressure. Exclude hypovolaemia in patients under vasopressors, through repeated volume assessments. Optimal perioperative use of inotropes/vasopressors in cardiosurgery remains controversial, and further large multinational studies are needed. Cardiosurgical perioperative classification of cardiac impairment should be based on time of occurrence (precardiotomy, failure to wean, postcardiotomy) and haemodynamic severity of the patient's condition (crash and burn, deteriorating fast, stable but inotrope dependent). In heart dysfunction with suspected coronary hypoperfusion, an intra-aortic balloon pump is highly recommended. A ventricular assist device should be considered before end organ dysfunction becomes evident. Extra-corporeal membrane oxygenation is an elegant solution as a bridge to recovery and/or decision making. This paper offers practical recommendations for management of perioperative HF in cardiosurgery based on European experts' opinion. It also emphasizes the need for large surveys and studies to assess the optimal way to manage perioperative HF in cardiac surgery.
Abstract:
Time-inconsistency is an essential feature of many policy problems (Kydland and Prescott, 1977). This paper presents and compares three methods for computing Markov-perfect optimal policies in stochastic nonlinear business cycle models. The methods considered include value function iteration, generalized Euler equations, and parameterized shadow prices. In the context of a business cycle model in which a fiscal authority chooses government spending and income taxation optimally, while lacking the ability to commit, we show that the solutions obtained using value function iteration and generalized Euler equations are somewhat more accurate than that obtained using parameterized shadow prices. Among these three methods, we show that value function iteration can be applied easily, even to environments that include a risk-sensitive fiscal authority and/or inequality constraints on government spending. We show that the risk-sensitive fiscal authority lowers government spending and income taxation, reducing the disincentive households face to accumulate wealth.
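Of the three methods, value function iteration is the simplest to illustrate. The sketch below applies it to a textbook deterministic savings problem just to show the mechanics of the Bellman update and the convergence check; it omits the shocks, the government and the lack of commitment that define the paper's Markov-perfect problem, and all parameter values are placeholders.

```python
import numpy as np

# Minimal value function iteration on a textbook deterministic savings problem.
beta, alpha, delta = 0.96, 0.36, 0.08        # discount factor, capital share, depreciation
k_grid = np.linspace(0.5, 10.0, 200)         # capital grid
V = np.zeros_like(k_grid)                    # initial guess for the value function

def utility(c):
    u = np.full_like(c, -1e10)               # heavy penalty on infeasible consumption
    np.log(c, out=u, where=c > 0)            # log utility where consumption is positive
    return u

for _ in range(2000):
    # Consumption implied by every (k today, k' tomorrow) pair on the grid.
    c = k_grid[:, None] ** alpha + (1 - delta) * k_grid[:, None] - k_grid[None, :]
    V_new = np.max(utility(c) + beta * V[None, :], axis=1)   # Bellman update
    converged = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if converged:
        break

policy = k_grid[np.argmax(utility(c) + beta * V[None, :], axis=1)]
print(policy[:5])   # optimal next-period capital at the lowest grid points
```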