963 results for University of St. Andrews.
Abstract:
I develop a model of endogenous bounded rationality due to search costs, arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from revealed preferences through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model therefore explains why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, with dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
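A minimal sketch of the mechanism, assuming search depth is chosen by weighing an illustrative benefit of deeper look-ahead against a linear cost; the function and numbers are mine, not the paper's calibration:

def chosen_depth(benefit, cost_per_step, max_depth=10):
    # Depth of costly look-ahead the agent picks: search deepens only while
    # its marginal benefit exceeds the (assumed linear) marginal cost.
    return max(range(max_depth + 1),
               key=lambda d: benefit(d) - cost_per_step * d)

benefit = lambda d: 1 - 0.5 ** d                  # diminishing returns to thinking ahead
print(chosen_depth(benefit, cost_per_step=0.20))  # 2: costly search, larger departure from rationality
print(chosen_depth(benefit, cost_per_step=0.01))  # 6: cheap search, choices approach the rational benchmark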
Abstract:
Recent risk sharing tests strongly reject the hypothesis of complete markets, because in the data (1) individual consumption comoves with income and (2) consumption dispersion increases over the life cycle. In this paper, I revisit the implications of these risk sharing tests in the context of a complete market model with discount rate heterogeneity, which is extended to introduce individual choices of effort in education. I find that a complete market model with discount rate heterogeneity can pass both types of risk sharing test. The endogenous positive correlation between income growth and patience makes individual consumption comove with income, even if markets are complete. I also show that this model is quantitatively able to account for both the observed comovement of consumption and income and the increase in consumption dispersion over the life cycle.
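In standard CRRA notation (mine, not necessarily the paper's), the complete-markets Euler equation makes the mechanism visible:

\[ \frac{c_{i,t+1}}{c_{i,t}} = \left(\beta_i R_{t+1}\right)^{1/\sigma}, \]

so more patient agents (higher \beta_i) have steeper consumption profiles; if patience also raises effort in education and hence income growth, consumption growth and income growth are positively correlated across agents even though markets are complete.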
Abstract:
Although standard incomplete market models can account for the magnitude of the rise in consumption inequality over the life cycle, they generate unrealistically concave age profiles of consumption inequality and unrealistically little wealth inequality. In this paper, I investigate the role of discount rate heterogeneity on consumption inequality in the context of incomplete market life cycle models. The distribution of discount rates is estimated using moments from the wealth distribution. I find that the model with heterogeneous income profiles (HIP) and discount rate heterogeneity can successfully account for the empirical age profile of consumption inequality, both in its magnitude and in its non-concave shape. By generating realistic wealth inequality, this simulated model also highlights the importance of ex ante heterogeneities as main sources of lifetime inequality.
Abstract:
Using the framework of Desmet and Rossi-Hansberg (forthcoming), we present a model of spatial takeoff that is calibrated using spatially-disaggregated occupational data for England in c.1710. The model predicts changes in the spatial distribution of agricultural and manufacturing employment which match data for c.1817 and 1861. The model also matches a number of aggregate changes that characterise the first industrial revolution. Using counterfactual geographical distributions, we show that the initial concentration of productivity can matter for whether and when an industrial takeoff occurs. Subsidies to innovation in either sector can bring forward the date of takeoff, while subsidies to the use of land by manufacturing firms can significantly delay a takeoff because they decrease the spatial concentration of activity.
Abstract:
We model the choice behaviour of an agent who suffers from imperfect attention. We define inattention axiomatically through preferences over menus and endowed alternatives: an agent is inattentive if it is better to be endowed with an alternative a than to be allowed to pick a from a menu in which a is the best alternative. This property and vNM rationality on the domain of menus and alternatives imply that the agent notices each alternative with a given menu-dependent probability (attention parameter) and maximises a menu-independent utility function over the alternatives he notices. Preference for flexibility restricts the model to menu-independent attention parameters as in Manzini and Mariotti [19]. Our theory explains anomalies (e.g. the attraction and compromise effects) that the Random Utility Model cannot accommodate.
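A minimal sketch of this kind of choice rule, assuming each alternative is noticed independently with its attention parameter; the names, utilities and probabilities are illustrative, not the paper's axiomatisation:

from itertools import combinations

def choice_prob(menu, target, utility, attention):
    # Probability that `target` is picked from `menu`: it must be noticed and
    # must beat every other noticed alternative; if nothing is noticed the
    # agent keeps the endowment/default.
    others = [a for a in menu if a != target]
    prob = 0.0
    for r in range(len(others) + 1):
        for noticed in combinations(others, r):
            p = attention[target]
            for a in others:
                p *= attention[a] if a in noticed else 1 - attention[a]
            if all(utility[target] > utility[a] for a in noticed):
                prob += p
    return prob

u = {"a": 3, "b": 2}
gamma = {"a": 0.6, "b": 0.9}                   # menu-dependent in general
print(choice_prob(["a", "b"], "a", u, gamma))  # 0.60: a wins whenever it is noticed
print(choice_prob(["a", "b"], "b", u, gamma))  # 0.36: b wins only when a goes unnoticed
# the remaining 0.04 is the probability that nothing is noticed (default kept)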
Abstract:
We introduce attention games. Alternatives ranked by quality (producers, politicians, sexual partners...) desire to be chosen and compete for the imperfect attention of a chooser by investing in their own salience. We prove that if alternatives can control the attention they get, then "the showiest is the best": the equilibrium ordering of salience (weakly) reproduces the quality ranking and the best alternative is the one that gets picked most often. This result also holds under more general conditions. However, if those conditions fail, then even the worst alternative can be picked most often.
Abstract:
Two logically distinct and permissive extensions of iterative weak dominance are introduced for games with possibly vector-valued payoffs. The first, iterative partial dominance, builds on an easy-to-check condition but may lead to solutions that do not include any (generalized) Nash equilibria. However, the second and intuitively more demanding extension, iterative essential dominance, is shown to be an equilibrium refinement. The latter result includes Moulin’s (1979) classic theorem as a special case when all players’ payoffs are real-valued. Therefore, essential dominance solvability can be a useful solution concept for making sharper predictions in multicriteria games that feature a plethora of equilibria.
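For readers unfamiliar with the baseline notion, a short sketch of ordinary iterative weak dominance with scalar payoffs; the paper's partial and essential dominance replace the scalar comparison with conditions suited to vector-valued payoffs (the game below is an arbitrary example, not one from the paper):

def iterated_weak_dominance(A, B):
    # Iteratively delete weakly dominated pure strategies in a two-player game
    # with scalar payoff matrices A (row player) and B (column player).
    rows, cols = list(range(len(A))), list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):
            if any(all(A[j][c] >= A[i][c] for c in cols) and
                   any(A[j][c] > A[i][c] for c in cols) for j in rows if j != i):
                rows.remove(i); changed = True
        for i in list(cols):
            if any(all(B[r][j] >= B[r][i] for r in rows) and
                   any(B[r][j] > B[r][i] for r in rows) for j in cols if j != i):
                cols.remove(i); changed = True
    return rows, cols

A = [[3, 1], [2, 1]]
B = [[3, 2], [1, 1]]
print(iterated_weak_dominance(A, B))   # ([0], [0]): this game is dominance solvable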
Abstract:
This paper provides a general treatment of the implications of legal uncertainty for welfare. We distinguish legal uncertainty from decision errors: though the former can be influenced by the latter, the latter are neither necessary nor sufficient for the existence of legal uncertainty. We show that an increase in decision errors will always reduce welfare. However, for any given level of decision errors, information structures involving more legal uncertainty can improve welfare. When sanctions on socially harmful actions are set at their optimal level, this holds always, even when there is complete legal uncertainty. This radically transforms one’s perception of the “costs” of legal uncertainty. We also provide general proofs for two results previously established under restrictive assumptions. The first is that Effects-Based enforcement procedures may welfare-dominate Per Se (or object-based) procedures and will always do so when sanctions are optimally set. The second is that optimal sanctions may well be higher under enforcement procedures involving more legal uncertainty.
Abstract:
In this paper we summarise some of our recent work on consumer behaviour, drawing on recent developments in behavioural economics, in which consumers are embedded in a social context, so their behaviour is shaped by their interactions with other consumers. For the purpose of this paper we also allow consumption to cause environmental damage. Analysing the social context of consumption naturally lends itself to the use of game-theoretic tools, and means that we seek to develop links between economics and sociology rather than economics and psychology, which has been the predominant field for work in behavioural economics. We shall be concerned with three sets of issues: conspicuous consumption, consumption norms and altruistic behaviour. Our aim is to show that building links between sociological and economic approaches to the study of consumer behaviour can lead to significant and surprising implications for conventional economic policy prescriptions, especially with respect to environmental policy.
Abstract:
Free‐riding is often associated with self‐interested behaviour. However, if there is a global mixed pollutant, free‐riding will arise if individuals calculate that their emissions are negligible relative to the total, so total emissions, and hence any damage that they and others suffer, will be unaffected by whatever consumption choice they make. In this context consumer behaviour and the optimal environmental tax are independent of the degree of altruism. For behaviour to change, individuals need to make their decisions in a different way. We propose a new theory of moral behaviour whereby individuals recognise that they will be worse off by not acting in their own self‐interest, and balance this cost against the hypothetical moral value of adopting a Kantian form of behaviour, that is, by calculating the consequences of their action by asking what would happen if everyone else acted in the same way as they did. We show that: (a) if individuals behave this way, then altruism matters, and the greater the degree of altruism the more individuals cut back their consumption of a ‘dirty’ good; (b) nevertheless, the optimal environmental tax is exactly the same as that emerging from classical analysis in which individuals act in self‐interested fashion.
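One way to write the trade-off down (purely illustrative notation, not the paper's): let x^N be the self-interested consumption of the dirty good, n the number of individuals, d(.) the damage from total emissions, gamma the degree of altruism (the weight attached to that damage) and mu the weight placed on the moral term. A "moral" individual chooses x to maximise

\[ \big[u(x) - u(x^{N})\big] \;+\; \mu\Big[\big(u(x) - \gamma\, d(n x)\big) - \big(u(x^{N}) - \gamma\, d(n x^{N})\big)\Big], \]

where the first bracket is the private cost of deviating from self-interest and the second is the hypothetical moral value of everyone acting as she does. With concave u and convex d, a higher gamma lowers the chosen x, in line with result (a).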
Abstract:
In this paper we make three contributions to the literature on optimal Competition Law enforcement procedures. The first (which is of general interest beyond competition policy) is to clarify the concept of “legal uncertainty”, relating it to ideas in the literature on Law and Economics, but formalising the concept through various information structures which specify the probability that each firm attaches – at the time it takes an action – to the possibility of its being deemed anti-competitive were it to be investigated by a Competition Authority. We show that the existence of Type I and Type II decision errors by competition authorities is neither necessary nor sufficient for the existence of legal uncertainty, and that information structures with legal uncertainty can generate higher welfare than information structures with legal certainty – a result echoing a similar finding obtained in a completely different context and under different assumptions in the earlier Law and Economics literature (Kaplow and Shavell, 1992). Our second contribution is to revisit and significantly generalise the analysis in our previous paper, Katsoulacos and Ulph (2009), involving a welfare comparison of Per Se and Effects-Based legal standards. In that analysis we considered just a single information structure under an Effects-Based standard, and penalties were exogenously fixed. Here we allow for (a) different information structures under an Effects-Based standard and (b) endogenous penalties. We obtain two main results: (i) considering all information structures, a Per Se standard is never better than an Effects-Based standard; (ii) optimal penalties may be higher when there is legal uncertainty than when there is no legal uncertainty.
Abstract:
We determine the optimal combination of a universal benefit, B, and a categorical benefit, C, for an economy in which individuals differ in both their ability to work - modelled as an exogenous zero quantity constraint on labour supply - and, conditional on being able to work, their productivity at work. C is targeted at those unable to work, and is conditioned in two dimensions: ex ante, an individual must be unable to work and be awarded the benefit, whilst ex post, a recipient must not subsequently work. However, the ex-ante conditionality may be imperfectly enforced due to Type I (false rejection) and Type II (false award) classification errors, whilst, in addition, the ex-post conditionality may be imperfectly enforced. If there are no classification errors - and thus no enforcement issues - it is always optimal to set C>0, whilst B=0 only if the benefit budget is sufficiently small. However, when classification errors occur, B=0 only if there are no Type I errors and the benefit budget is sufficiently small, while the conditions under which C>0 depend on the enforcement of the ex-post conditionality. We consider two discrete alternatives. Under No Enforcement, C>0 only if the test administering C has some discriminatory power. In addition, social welfare is decreasing in the propensity to make each type of error. However, under Full Enforcement, C>0 for all levels of discriminatory power. Furthermore, whilst social welfare is decreasing in the propensity to make Type I errors, there are conditions under which it is increasing in the propensity to make Type II errors. This implies that there may be conditions under which it would be welfare-enhancing to lower the chosen eligibility threshold, supporting the suggestion by Goodin (1985) to "err on the side of kindness".
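A back-of-the-envelope illustration of how the two classification errors shape the targeting of C (the numbers are invented; the paper's welfare analysis is far richer):

def targeting(p_unable, e1, e2):
    # Share of categorical-benefit awards that reach the truly unable, given a
    # Type I error rate e1 (false rejection) and Type II error rate e2 (false award).
    awarded_unable = p_unable * (1 - e1)
    awarded_able = (1 - p_unable) * e2
    return awarded_unable / (awarded_unable + awarded_able)

print(targeting(p_unable=0.1, e1=0.05, e2=0.05))   # ~0.68: an accurate test keeps C well targeted
print(targeting(p_unable=0.1, e1=0.00, e2=0.30))   # ~0.27: a lax test has little discriminatory power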
Abstract:
This paper investigates how well-being varies with individual wage rates when individuals care about relative consumption, so that there are Veblen effects – Keeping up with the Joneses – leading individuals to over-work. In the case where individuals compare themselves with their peers – those with the same wage rate – it is shown that Keeping up with the Joneses leads some individuals to work who would otherwise have chosen not to. Moreover, for these individuals, well-being is a decreasing function of the wage rate, contrary to standard theory. So those who are worst off in society are no longer those on the lowest wage.
Abstract:
In this paper we set out the welfare-economics-based case for imposing cartel penalties on the cartel overcharge rather than on the more conventional bases of revenue or profits (illegal gains). To do this we undertake a systematic comparison of a penalty based on the cartel overcharge with three other penalty regimes: fixed penalties, penalties based on revenue, and penalties based on profits. Our analysis is the first to compare these regimes in terms of their impact on both (i) the prices charged by those cartels that do form and (ii) the number of stable cartels that form (deterrence). We show that the class of penalties based on profits is identical to the class of fixed penalties in all welfare-relevant respects. For the other three types of penalty we show that, for those cartels that do form, penalties based on the overcharge produce lower prices than those based on profits, while penalties based on revenue produce the highest prices. Further, in conjunction with the above result, our analysis of cartel stability (and thus deterrence) shows that penalties based on the overcharge out-perform those based on profits, which in turn out-perform those based on revenue, in terms of their impact on each of the following welfare criteria: (a) average overcharge; (b) average consumer surplus; (c) average total welfare.
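The price ranking among cartels that do form can be illustrated in a stripped-down linear-demand example (the demand curve, cost, but-for price and penalty rate below are my assumptions, not the paper's model):

import numpy as np

# Linear demand Q(p) = 1 - p, unit cost c, but-for price p0 (the price absent
# the cartel), and k = detection probability times penalty rate.
c, p0, k = 0.2, 0.3, 0.4
prices = np.linspace(c, 1.0, 10001)
Q = 1 - prices
profit = (prices - c) * Q

regimes = {
    "fixed / profit-based": profit - k * profit,             # rescaling profit leaves the argmax unchanged
    "revenue-based":        profit - k * prices * Q,
    "overcharge-based":     profit - k * (prices - p0) * Q,
}
for name, payoff in regimes.items():
    print(f"{name:>22}: cartel price = {prices[payoff.argmax()]:.3f}")
# overcharge-based (0.567) < fixed/profit-based (monopoly price 0.600) < revenue-based (0.667)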