631 results for Algorithme DP
Abstract:
We present a form of soft paternalism called "autonomy-enhancing paternalism" that seeks to increase individual well-being by facilitating the individual's ability to make critically reflected, autonomous decisions. The focus of autonomy-enhancing paternalism is on helping individuals to become better decision-makers, rather than on helping them by making better decisions for them. Autonomy-enhancing paternalism acknowledges that behavioral interventions can change the strength of decision-making anomalies over time, and favors those interventions that improve, rather than reduce, individuals' ability to make good and unbiased decisions. In this way it prevents manipulation of the individual by the soft paternalist, accounts for the heterogeneity of individuals, and counteracts slippery-slope arguments by decreasing the probability of future paternalistic interventions. Moreover, autonomy-enhancing paternalism can be defended on the basis of both liberal values and welfare considerations.
Abstract:
Modern macroeconomic theory utilises optimal control techniques to model the maximisation of individual well-being using a lifetime utility function. Agents face choices over current and future consumption (with resultant implied savings decisions), seeking to maximise the present value of current plus future well-being. However, such inter-temporal welfare-maximising assumptions remain empirically untested. In the work presented here we test whether welfare was in fact maximised in the US between 1870 and 2000, and find empirical support for the optimising basis of growth theory, but only once a comprehensive view of what constitutes a country's wealth or capital is taken into account.
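As a rough illustration of the kind of intertemporal optimisation the abstract invokes, the sketch below solves a discretized finite-horizon consumption-savings problem by backward-induction dynamic programming. The log utility, discount factor, interest rate and wealth grid are assumptions chosen for illustration; this is not the model estimated in the paper.

```python
# Illustrative only: a discretized finite-horizon consumption-savings problem
# solved by backward-induction dynamic programming. Functional forms (log
# utility), the discount factor and the interest rate are assumed values.
import numpy as np

beta, r, T = 0.96, 0.03, 50           # discount factor, interest rate, horizon
grid = np.linspace(0.1, 10.0, 200)    # wealth grid

V = np.zeros(len(grid))               # terminal value: no bequest motive
policy = []
for t in range(T - 1, -1, -1):
    V_new = np.empty(len(grid))
    c_opt = np.empty(len(grid))
    for i, w in enumerate(grid):
        c = np.linspace(1e-3, w, 100)         # feasible consumption choices
        w_next = (1 + r) * (w - c)            # wealth carried into t+1
        V_next = np.interp(w_next, grid, V)   # interpolate continuation value
        val = np.log(c) + beta * V_next
        j = np.argmax(val)
        V_new[i], c_opt[i] = val[j], c[j]
    V, policy = V_new, c_opt              # after the loop: period-0 value and policy

print("Consumption out of wealth 5.0 in period 0:", np.interp(5.0, grid, policy))
```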
Abstract:
Many authors have proposed incorporating measures of well-being into evaluations of public policy. Yet few evaluations use experimental designs or examine multiple aspects of well-being, so the causal impact of public policies on well-being is largely unknown. In this paper we examine the effect of an intensive early intervention program on maternal well-being in a targeted disadvantaged community. Using a randomized controlled trial design, we estimate and compare treatment effects on global well-being using measures of life satisfaction, on experienced well-being using both the Day Reconstruction Method (DRM) and a measure of mood yesterday, and on a standardized measure of parenting stress. The intervention has no significant impact on negative measures of well-being, such as experienced negative affect as measured by the DRM, or on global measures of well-being such as life satisfaction or a global measure of parenting stress. Significant treatment effects are observed on experienced measures of positive affect using the DRM and on a measure of mood yesterday. The DRM treatment effects are primarily concentrated during time spent without the target child, which may reflect the increased effort and burden associated with additional parental investment. Our findings suggest that a maternal-focused intervention may produce meaningful improvements in experienced well-being. Incorporating measures of experienced affect may thus alter cost-benefit calculations for public policies.
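For readers unfamiliar with how treatment effects are read off a randomized controlled trial, the sketch below computes a difference-in-means estimate with a conventional (unequal-variance) standard error on simulated well-being scores. The data, effect size and variable names are hypothetical; the paper's actual outcome measures (DRM affect, life satisfaction, parenting stress) are not reproduced here.

```python
# Illustrative only: difference-in-means treatment effect with a conventional
# (unequal-variance) standard error, on simulated well-being scores.
import numpy as np

rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, n)                        # random assignment
wellbeing = 5.0 + 0.4 * treated + rng.normal(0, 1, n)  # assumed true effect 0.4

y1, y0 = wellbeing[treated == 1], wellbeing[treated == 0]
ate = y1.mean() - y0.mean()
se = np.sqrt(y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0))
print(f"estimated effect {ate:.3f}, "
      f"95% CI [{ate - 1.96*se:.3f}, {ate + 1.96*se:.3f}]")
```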
Abstract:
We estimate a New Keynesian DSGE model for the Euro area under alternative descriptions of monetary policy (discretion, commitment or a simple rule) after allowing for Markov switching in policy-maker preferences and shock volatilities. This reveals that there have been several changes in Euro-area policy making, with a strengthening of the anti-inflation stance in the early years of the ERM, which was then lost around the time of German reunification and only recovered following the turmoil in the ERM in 1992. The ECB does not appear to have been as conservative as aggregate Euro-area policy was under Bundesbank leadership, and its response to the financial crisis has been muted. The estimates also suggest that the most appropriate description of policy is that of discretion, with no evidence of commitment in the Euro area. As a result, although both ‘good luck’ and ‘good policy’ played a role in the moderation of inflation and output volatility in the Euro area, the welfare gains would have been substantially higher had policy makers been able to commit. We consider a range of delegation schemes as devices to improve upon the discretionary outcome, and conclude that price-level targeting would have achieved welfare levels close to those attained under commitment, even after accounting for the existence of the Zero Lower Bound on nominal interest rates.
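A small sketch of the regime-switching ingredient mentioned in the abstract: a two-state Markov chain governing the policy maker's inflation weight. The transition matrix and the regime weights below are assumed values for illustration, not estimates from the paper.

```python
# Illustrative only: simulating a two-state Markov chain for the policy maker's
# inflation weight, the kind of regime process the estimation allows for.
# The transition matrix and weights are assumed values, not estimates.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.95, 0.05],     # row s: Pr(next state | current state s)
              [0.10, 0.90]])
weights = np.array([0.5, 2.0])  # inflation weight in the loss function, per regime

T, state = 200, 0
path = np.empty(T)
for t in range(T):
    path[t] = weights[state]
    state = rng.choice(2, p=P[state])

print("share of periods in the hawkish regime:", np.mean(path == weights[1]))
```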
Abstract:
This paper employs an unobserved component model that incorporates a set of economic fundamentals to obtain the Euro-Dollar permanent equilibrium exchange rates (PEER) for the period 1975Q1 to 2008Q4. The results show that for most of the sample period, the Euro-Dollar exchange rate closely followed the values implied by the PEER. The only significant deviations from the PEER occurred in the years immediately before and after the introduction of the single European currency. The forecasting exercise shows that incorporating economic fundamentals provides a better long-run exchange rate forecasting performance than a random walk process.
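The forecast comparison described in the abstract can be illustrated with a toy out-of-sample exercise: a fundamentals-based regression forecast against a random-walk (no-change) forecast, judged by RMSE. The simulated data and the simple direct regression below stand in for, and are much cruder than, the paper's unobserved-component PEER model.

```python
# Illustrative only: out-of-sample RMSE comparison of a simple fundamentals
# regression against a random-walk (no-change) forecast. Data are simulated;
# this is not the unobserved-component PEER model of the paper.
import numpy as np

rng = np.random.default_rng(2)
T = 160
fundamental = np.cumsum(rng.normal(0, 0.02, T))   # simulated fundamentals
fx = fundamental + rng.normal(0, 0.05, T)         # exchange rate around them

h, split = 4, 120                                 # 4-quarter horizon, hold-out start
rw_err, fund_err = [], []
for t in range(split, T - h):
    rw_err.append(fx[t + h] - fx[t])              # random walk: no change
    X, y = fundamental[:t - h], fx[h:t]           # direct h-step regression
    beta = np.polyfit(X, y, 1)
    fund_err.append(fx[t + h] - np.polyval(beta, fundamental[t]))

rmse = lambda e: np.sqrt(np.mean(np.square(e)))
print("RMSE, random walk:", rmse(rw_err), " fundamentals-based:", rmse(fund_err))
```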
Abstract:
Vector Autoregressive Moving Average (VARMA) models have many theoretical properties which should make them popular among empirical macroeconomists. However, they are rarely used in practice due to over-parameterization concerns, difficulties in ensuring identification and computational challenges. With the growing interest in multivariate time series models of high dimension, these problems with VARMAs become even more acute, accounting for the dominance of VARs in this field. In this paper, we develop a Bayesian approach for inference in VARMAs which surmounts these problems. It jointly ensures identification and parsimony in the context of an efficient Markov chain Monte Carlo (MCMC) algorithm. We use this approach in a macroeconomic application involving up to twelve dependent variables. We find our algorithm to work successfully and provide insights beyond those provided by VARs.
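To fix ideas about the model class, the sketch below simulates a bivariate VARMA(1,1) process. The coefficient matrices are assumed for illustration; the paper's Bayesian MCMC approach to identification and parsimony is not attempted here.

```python
# Illustrative only: simulating a bivariate VARMA(1,1) process,
#   y_t = A y_{t-1} + e_t + M e_{t-1},
# to show the model class the paper works with. Coefficient matrices are assumed.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])      # autoregressive part
M = np.array([[0.3, 0.0],
              [0.2, 0.3]])      # moving-average part

T, k = 300, 2
y = np.zeros((T, k))
e_prev = np.zeros(k)
for t in range(1, T):
    e = rng.normal(0, 1, k)
    y[t] = A @ y[t - 1] + e + M @ e_prev
    e_prev = e

print("sample means:", y.mean(axis=0),
      " lag-1 autocorrelation of series 1:", np.corrcoef(y[1:, 0], y[:-1, 0])[0, 1])
```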
Abstract:
The paper considers the use of artificial regression in calculating different types of score test when the log
Abstract:
The delays in the release of macroeconomic variables such as GDP mean that policymakers do not know their current values. Thus, nowcasts, which are estimates of current values of macroeconomic variables, are becoming increasingly popular. This paper takes up the challenge of nowcasting Scottish GDP growth. Nowcasting in Scotland, currently a government office region within the United Kingdom, is complicated due to data limitations. For instance, key nowcast predictors such as industrial production are unavailable. Accordingly, we use data on some non-traditional variables and investigate whether UK aggregates can help nowcast Scottish GDP growth. Such data limitations are shared by many other sub-national regions, so we hope this paper can provide lessons for other regions interested in developing nowcasting models.
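A minimal bridge-equation-style nowcast, of the general kind used in this literature: regress past quarterly GDP growth on indicators already observed for the current quarter, then predict the not-yet-released quarter. The data and the three indicators below are simulated placeholders, not the Scottish series used in the paper.

```python
# Illustrative only: a bridge-equation style nowcast -- regress past quarterly
# GDP growth on indicators that are already observed for the current quarter,
# then predict the not-yet-released quarter. Data and indicators are simulated.
import numpy as np

rng = np.random.default_rng(4)
T = 80                                        # quarters of history
indicators = rng.normal(0, 1, (T + 1, 3))     # stand-ins, e.g. surveys, UK aggregates
gdp_growth = indicators[:T] @ np.array([0.3, 0.2, 0.1]) + rng.normal(0, 0.2, T)

X = np.column_stack([np.ones(T), indicators[:T]])
beta, *_ = np.linalg.lstsq(X, gdp_growth, rcond=None)

x_now = np.concatenate([[1.0], indicators[T]])  # current quarter's indicators
print("nowcast of current-quarter GDP growth:", x_now @ beta)
```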
Abstract:
This paper provides a general treatment of the implications of legal uncertainty for welfare. We distinguish legal uncertainty from decision errors: though the former can be influenced by the latter, the latter are neither necessary nor sufficient for the existence of legal uncertainty. We show that an increase in decision errors will always reduce welfare. However, for any given level of decision errors, information structures involving more legal uncertainty can improve welfare. This always holds, even when there is complete legal uncertainty, provided sanctions on socially harmful actions are set at their optimal level. This radically transforms one’s perception of the “costs” of legal uncertainty. We also provide general proofs for two results previously established under restrictive assumptions. The first is that Effects-Based enforcement procedures may welfare-dominate Per Se (or object-based) procedures, and will always do so when sanctions are optimally set. The second is that optimal sanctions may well be higher under enforcement procedures involving more legal uncertainty.
Abstract:
In this paper we summarise some of our recent work on consumer behaviour, drawing on recent developments in behavioural economics in which consumers are embedded in a social context, so that their behaviour is shaped by their interactions with other consumers. For the purposes of this paper we also allow consumption to cause environmental damage. Analysing the social context of consumption naturally lends itself to the use of game-theoretic tools, and means that we seek to develop links between economics and sociology rather than between economics and psychology, which has been the more predominant field for work in behavioural economics. We shall be concerned with three sets of issues: conspicuous consumption, consumption norms and altruistic behaviour. Our aim is to show that building links between sociological and economic approaches to the study of consumer behaviour can lead to significant and surprising implications for conventional economic policy prescriptions, especially with respect to environmental policy.
Abstract:
Free‐riding is often associated with self‐interested behaviour. However, if there is a global mixed pollutant, free‐riding will arise if individuals calculate that their emissions are negligible relative to the total, so that total emissions, and hence any damage that they and others suffer, will be unaffected by whatever consumption choice they make. In this context consumer behaviour and the optimal environmental tax are independent of the degree of altruism. For behaviour to change, individuals need to make their decisions in a different way. We propose a new theory of moral behaviour whereby individuals recognise that they will be worse off by not acting in their own self‐interest, and balance this cost against the hypothetical moral value of adopting a Kantian form of behaviour, that is, calculating the consequences of their action by asking what would happen if everyone else acted in the same way as they did. We show that: (a) if individuals behave this way, then altruism matters, and the greater the degree of altruism, the more individuals cut back their consumption of a ’dirty’ good; (b) nevertheless, the optimal environmental tax is exactly the same as that emerging from the classical analysis in which individuals act in a self‐interested fashion.
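A toy numerical version of the decision rule described in the abstract: an individual with altruism weight alpha chooses consumption of the dirty good by weighing private benefit against the damage that would arise if everyone consumed the same amount (the Kantian thought experiment). All functional forms and parameters are assumptions for illustration, not the paper's model.

```python
# Illustrative only: an individual with altruism weight alpha picks dirty-good
# consumption x by maximising private benefit net of the damage that WOULD
# arise if everyone consumed x (the Kantian thought experiment).
import numpy as np

p, d, n = 1.0, 0.002, 100           # price, per-unit damage, population size
x = np.linspace(0.05, 2.0, 2000)    # candidate consumption levels

def choice(alpha):
    self_interest = np.log(x) - p * x        # own emissions treated as negligible
    kantian_damage = alpha * d * n * x       # "what if everyone did this?"
    return x[np.argmax(self_interest - kantian_damage)]

for alpha in (0.0, 0.5, 1.0):
    print(f"altruism {alpha:.1f}: chosen consumption {choice(alpha):.3f}")
```

In this toy setup, alpha = 0 collapses to the self-interested choice, and raising alpha cuts consumption back, which is the flavour of result (a) above.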
Abstract:
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of “legal uncertainty”, relating it to ideas in the literature on Law and Economics, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in the earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard, and penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures, a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
Abstract:
We determine the optimal combination of a universal benefit, B, and a categorical benefit, C, for an economy in which individuals differ both in their ability to work - modelled as an exogenous zero quantity constraint on labour supply - and, conditional on being able to work, in their productivity at work. C is targeted at those unable to work and is conditioned in two dimensions: ex ante, an individual must be unable to work and be awarded the benefit, whilst ex post, a recipient must not subsequently work. However, the ex-ante conditionality may be imperfectly enforced due to Type I (false rejection) and Type II (false award) classification errors, whilst, in addition, the ex-post conditionality may be imperfectly enforced. If there are no classification errors - and thus no enforcement issues - it is always optimal to set C>0, whilst B=0 only if the benefit budget is sufficiently small. However, when classification errors occur, B=0 only if there are no Type I errors and the benefit budget is sufficiently small, while the conditions under which C>0 depend on the enforcement of the ex-post conditionality. We consider two discrete alternatives. Under No Enforcement, C>0 only if the test administering C has some discriminatory power. In addition, social welfare is decreasing in the propensity to make each type of error. However, under Full Enforcement, C>0 for all levels of discriminatory power. Furthermore, whilst social welfare is decreasing in the propensity to make Type I errors, there are certain conditions under which it is increasing in the propensity to make Type II errors. This implies that there may be conditions under which it would be welfare-enhancing to lower the chosen eligibility threshold, supporting the suggestion by Goodin (1985) to "err on the side of kindness".
Abstract:
This paper investigates how well-being varies with individual wage rates when individuals care about relative consumption, so that there are Veblen effects – keeping up with the Joneses – leading individuals to over-work. In the case where individuals compare themselves with their peers – those with the same wage rate – it is shown that keeping up with the Joneses leads some individuals to work who would otherwise have chosen not to. Moreover, for these individuals well-being is a decreasing function of the wage rate, contrary to standard theory. So those who are worst off in society are no longer those on the lowest wage.
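The mechanism can be seen in a toy peer-comparison model: each worker on wage w chooses hours taking peer consumption as given, and in the symmetric equilibrium well-being can fall as the wage rises once the comparison weight is strong enough. The quadratic effort cost, linear comparison term and parameter values below are assumptions, not the paper's specification.

```python
# Illustrative only: a toy "keeping up with the Joneses" labour-supply model.
# Each individual with wage w chooses hours h to maximise
#     u = (c - theta * c_peer) - 0.5 * gamma * h**2,   with c = w * h,
# taking peer consumption c_peer (others on the same wage) as given.
# In the symmetric equilibrium c_peer = w * h.
import numpy as np

gamma, theta = 1.0, 0.7
wages = np.linspace(0.5, 2.0, 4)

for w in wages:
    h = w / gamma                                   # individual best response
    wellbeing = (w * h - theta * w * h) - 0.5 * gamma * h**2
    print(f"wage {w:.2f}: hours {h:.2f}, equilibrium well-being {wellbeing:.3f}")
```

With the comparison weight theta above 0.5 in this toy setup, equilibrium well-being declines as the wage rises, echoing the abstract's claim for the affected individuals.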
Abstract:
In this paper we set out the welfare-economics-based case for imposing cartel penalties on the cartel overcharge rather than on the more conventional bases of revenue or profits (illegal gains). To do this we undertake a systematic comparison of a penalty based on the cartel overcharge with three other penalty regimes: fixed penalties, penalties based on revenue, and penalties based on profits. Our analysis is the first to compare these regimes in terms of their impact on both (i) the prices charged by those cartels that do form and (ii) the number of stable cartels that form (deterrence). We show that the class of penalties based on profits is identical to the class of fixed penalties in all welfare-relevant respects. For the other three types of penalty we show that, for those cartels that do form, penalties based on the overcharge produce lower prices than those based on profits, while penalties based on revenue produce the highest prices. Further, in conjunction with the above result, our analysis of cartel stability (and thus deterrence) shows that penalties based on the overcharge out-perform those based on profits, which in turn out-perform those based on revenue, in terms of their impact on each of the following welfare criteria: (a) average overcharge; (b) average consumer surplus; (c) average total welfare.
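The pricing part of the comparison can be illustrated numerically: a cartel facing linear demand picks the price that maximises profit net of an expected penalty, under each of the penalty bases discussed above. The demand curve, cost and penalty functions below are assumed for illustration only; they are not the paper's specifications.

```python
# Illustrative only: the price a cartel would choose under different penalty
# bases, with linear demand q = a - p and marginal cost c. Expected detection
# is folded into the penalty rates; all functional forms are assumptions.
import numpy as np

a, c = 10.0, 2.0
p = np.linspace(c, a, 2000)                   # candidate cartel prices
q = a - p
profit = (p - c) * q

regimes = {
    "no penalty / fixed":       profit - 1.0,               # lump sum: argmax unchanged
    "profit-based (k=0.3)":     profit - 0.3 * profit,      # rescales objective: argmax unchanged
    "revenue-based (k=0.3)":    profit - 0.3 * p * q,       # proportional to revenue
    "overcharge-based (k=0.5)": profit - 0.5 * (p - c) ** 2, # convex in the overcharge
}
for name, payoff in regimes.items():
    print(f"{name:26s} chosen price {p[np.argmax(payoff)]:.2f}")
```

In this toy setup the profit-based penalty merely rescales the cartel's objective, leaving the chosen price unchanged just as a lump-sum penalty does, while the assumed overcharge-based and revenue-based penalties push the price down and up respectively.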