795 results for individual rationality
Abstract:
We consider general allocation problems with indivisibilities where agents' preferences possibly exhibit externalities. In such contexts, many different core notions have been proposed. One is the gamma-core, whereby blocking is only allowed via allocations where the non-blocking agents receive their endowment. We show that if there exists an allocation rule satisfying ‘individual rationality’, ‘efficiency’, and ‘strategy-proofness’, then for any problem for which the gamma-core is non-empty, the allocation rule must choose a gamma-core allocation and all agents are indifferent between all allocations in the gamma-core. We apply our result to housing markets, coalition formation and networks.
Abstract:
We study the problem of assigning indivisible and heterogeneous objects (e.g., houses, jobs, offices, school or university admissions, etc.) to agents. Each agent receives at most one object and monetary compensations are not possible. We consider mechanisms satisfying a set of basic properties (unavailable-type-invariance, individual-rationality, weak non-wastefulness, or truncation-invariance). In the house allocation problem, where at most one copy of each object is available, deferred-acceptance (DA) mechanisms allocate objects based on exogenously fixed priorities of objects over agents and the agent-proposing deferred-acceptance algorithm. For house allocation we show that DA-mechanisms are characterized by our basic properties together with (i) strategy-proofness and population-monotonicity or (ii) strategy-proofness and resource-monotonicity. Once we allow for multiple identical copies of objects, on the one hand the first characterization breaks down: there are unstable mechanisms satisfying our basic properties together with strategy-proofness and population-monotonicity. On the other hand, our basic properties together with strategy-proofness and resource-monotonicity characterize the most general class of DA-mechanisms, based on objects' fixed choice functions that are acceptant, monotonic, substitutable, and consistent. These choice functions are used by objects to reject agents in the agent-proposing deferred-acceptance algorithm. Therefore, in the general model resource-monotonicity is the ‘stronger’ comparative-statics requirement: together with our basic requirements and strategy-proofness it characterizes choice-based DA-mechanisms, whereas population-monotonicity (together with our basic properties and strategy-proofness) does not.
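The agent-proposing deferred-acceptance algorithm underlying these mechanisms can be sketched for the unit-copy house allocation case. This is a minimal illustration, not the paper's formal construction; the function name and data layout are our own, and each object simply holds the highest-priority proposer seen so far:

```python
def agent_proposing_da(prefs, priorities):
    """Agent-proposing deferred acceptance for house allocation.

    prefs[a]      -- agent a's objects, most preferred first.
    priorities[o] -- object o's agents, highest priority first.
    Returns a dict mapping each matched agent to an object.
    """
    next_choice = {a: 0 for a in prefs}   # next position on each agent's list
    held = {o: None for o in priorities}  # tentative assignment at each object
    free = list(prefs)                    # agents currently unassigned
    while free:
        a = free.pop()
        while next_choice[a] < len(prefs[a]):
            o = prefs[a][next_choice[a]]
            next_choice[a] += 1
            rank = priorities[o]
            if a not in rank:
                continue                  # object o never accepts agent a
            incumbent = held[o]
            if incumbent is None or rank.index(a) < rank.index(incumbent):
                if incumbent is not None:
                    free.append(incumbent)  # displaced agent proposes again
                held[o] = a
                break
            # otherwise a is rejected and proposes to the next object
    return {a: o for o, a in held.items() if a is not None}
```

For instance, if two agents both prefer h1 but h1 gives priority to agent B, the algorithm assigns h1 to B and h2 to A; A's envy is blocked by B's higher priority, which is exactly the stability notion DA delivers.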
Abstract:
The operational dock-door assignment problem (TBP), e.g. at a distribution or cross-docking center, is a logistics problem in which arriving and departing vehicles must be assigned, in time and space, to the inbound and outbound dock doors so that handling is as cost-efficient as possible. Previous work on the TBP has disregarded aspects of cooperation. This paper presents a procedure that overcomes the drawback of one-sidedly optimal dock-door assignments. It draws on combinatorial auctions and models the TBP as an allocation problem in which carriers compete for bundles of consecutive unit time intervals at the doors. A Vickrey-Clarke-Groves mechanism ensures both the incentive compatibility and the individual rationality of the auction procedure. The procedure was implemented in ILOG OPL Studio 3.6.1, and the results obtained on test data show that running times are short enough for the procedure to be used in operational (short-term) planning, making transport-logistics processes more economical for all parties involved.
Abstract:
Chapter 1: Under the average common value function, we select almost uniquely the mechanism that gives the seller the largest portion of the true value in the worst situation, among all direct mechanisms that are feasible, ex-post implementable and individually rational. Chapter 2: Strategy-proof, budget-balanced, anonymous, envy-free linear mechanisms assign p identical objects to n agents. The efficiency loss is the largest ratio of surplus loss to efficient surplus over all profiles of non-negative valuations. The smallest efficiency loss is uniquely achieved by the following simple allocation rule: assign one object to each of the p−1 agents with the highest valuations, a large probability to the agent with the pth highest valuation, and the remaining probability to the agent with the (p+1)th highest valuation. When “envy-freeness” is replaced by the weaker condition of “voluntary participation”, the optimal mechanism differs only when p is much less than n. Chapter 3: One group is to be selected among a set of agents. Agents have preferences over the size of the group if they are selected; and preferences over size, as well as over the “stand-outside” option, are single-peaked. We take a mechanism design approach and search for group selection mechanisms that are efficient, strategy-proof and individually rational. Two classes of such mechanisms are presented. The proposing mechanism allows agents to either maintain or shrink the group size following a fixed priority, and is characterized by group strategy-proofness. The voting mechanism enlarges the group size in each voting round, and achieves at least half of the maximum group size compatible with individual rationality.
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices, in the face of needs to be fulfilled and a limited amount of resources to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they are able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized, and incorporated into models of decision making in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function).
If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout.
Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the agent's decision, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller party may also win, provided its relative size is not too small; a larger share of self-delusional voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
Abstract:
In this paper we explore the effect of bounded rationality on the convergence of individual behavior toward equilibrium. In the context of a Cournot game with a unique and symmetric Nash equilibrium, firms are modeled as adaptive economic agents through a genetic algorithm. Computational experiments show that (1) there is remarkable heterogeneity across identical but boundedly rational agents; (2) such individual heterogeneity is not simply a consequence of the random elements contained in the genetic algorithm; (3) the more rational agents are in terms of memory abilities and pre-play evaluation of strategies, the less heterogeneous they are in their actions. At the limit case of full rationality, the outcome converges to the standard result of uniform individual behavior.
Abstract:
I develop a model of endogenous bounded rationality due to search costs arising implicitly from the problem's complexity. The decision maker is not required to know the entire structure of the problem when making choices but can think ahead, through costly search, to reveal more of it. However, the costs of search are not assumed exogenously; they are inferred from revealed preferences through her choices. Thus, bounded rationality and its extent emerge endogenously: as problems become simpler or as the benefits of deeper search become larger relative to its costs, the choices more closely resemble those of a rational agent. For a fixed decision problem, the costs of search will vary across agents. For a given decision maker, they will vary across problems. The model explains, therefore, why the disparity between observed choices and those prescribed under rationality varies across agents and problems. It also suggests, under reasonable assumptions, an identifying prediction: a relation between the benefits of deeper search and the depth of the search. As long as calibration of the search costs is possible, this can be tested on any agent-problem pair. My approach provides a common framework for depicting the underlying limitations that force departures from rationality in different and unrelated decision-making situations. Specifically, I show that it is consistent with violations of timing independence in temporal framing problems, dynamic inconsistency and diversification bias in sequential versus simultaneous choice problems, and with plausible but contrasting risk attitudes across small- and large-stakes gambles.
Abstract:
This doctoral thesis addresses the relationship between knowledge and practice in the field of teachers' professional practice. The problem is approached from cultural-historical psychology, which implies a break with the “technical rationality” from which the problem has traditionally been addressed. This new approach requires considerable theoretical elaboration, which the thesis articulates through two fundamental constructs: the situation and practical concepts. The data consist of the face-to-face and online conversations between a student teacher on practicum and his tutor at the placement school. The study analyzes two cases, each lasting about four months. The analysis is carried out through discourse analysis, using several units of analysis and several dimensions, which make possible an integrated analysis that considers, in a related way, joint activity (the social plane) and the individual use of concepts (the individual plane). Preliminary results suggest a structural and genetic specificity of practical concepts (in contrast with spontaneous and scientific concepts), and the existence of joint-activity mechanisms that particularly foster the development of these concepts.
Abstract:
There is evidence showing that individual behavior often deviates from the classical principle of maximization. This evidence raises at least two important questions: (i) how severe the deviations are and (ii) which method is the best for extracting relevant information from choice behavior for the purposes of welfare analysis. In this paper we address these two questions by identifying, from a foundational analysis, a new measure of the rationality of individuals that enables the analysis of individual welfare in potentially inconsistent subjects, all based on standard revealed preference data. We call this measure the minimal index.
Chinese Basic Pension Substitution Rate: A Monte Carlo Demonstration of the Individual Account Model
Abstract:
At the end of 2005, the State Council of China passed “The Decision on Adjusting the Individual Account of the Basic Pension System”, which adjusted the individual account in the 1997 basic pension system. In this essay, we analyze this adjustment and use life annuity actuarial theory to establish a basic pension substitution rate model. Monte Carlo simulation is then used to verify the rationality of the model. Finally, some suggestions on the substitution rate are put forward in light of current policy.
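As a rough illustration of the approach (not the essay's actual model), a substitution rate for the individual account can be simulated by accumulating contributions against stochastic wage growth and dividing the terminal balance by an annuity factor. All parameter values below are illustrative stand-ins; the 8% contribution rate and the 139-month annuity divisor are figures commonly cited for the 2005 reform, but the essay's calibration may differ:

```python
import random

def simulate_substitution_rate(years=35, wage_growth=0.06, account_rate=0.03,
                               contrib=0.08, annuity_months=139,
                               trials=2000, seed=1):
    """Monte Carlo sketch of an individual-account substitution rate.

    Each trial accumulates contrib * wage into the account at rate
    account_rate, while the wage grows stochastically.  The first-year
    annual pension is 12 * balance / annuity_months, and the
    substitution rate is that pension over the final-year wage.
    """
    rng = random.Random(seed)
    rates = []
    for _ in range(trials):
        wage, balance = 1.0, 0.0
        for _ in range(years):
            balance = balance * (1 + account_rate) + contrib * wage
            wage *= 1 + wage_growth + rng.gauss(0, 0.01)  # wage shock
        pension = 12 * balance / annuity_months  # annual pension
        rates.append(pension / wage)
    return sum(rates) / len(rates)  # mean substitution rate
```

With these stand-in parameters the simulated rate comes out well below the final wage, consistent with the individual account covering only part of total retirement income.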
Abstract:
We consider the situation where there are several alternatives for investing a quantity of money to achieve a set of objectives. The choice of which alternative to apply depends on how citizens and political representatives perceive that such objectives should be achieved. All citizens with the right to vote can express their preferences in the decision-making process. These preferences may be incomplete. Political representatives represent the citizens who have not taken part in the decision-making process. The weight given to political representatives depends on the number of citizens who have intervened in the decision-making process. The methodology we propose requires participants to specify, for each alternative, how they rate the different attributes and the relative importance of the attributes. On the basis of this information, an expected utility interval is output for each alternative. To do this, an evidential reasoning approach is applied. This approach improves the insightfulness and rationality of the decision-making process, using a belief decision matrix for problem modeling and the Dempster-Shafer theory of evidence for attribute aggregation. Finally, we propose using the distances of each expected utility interval from the maximum and the minimum utilities to rank the alternative set. The basic idea is that an alternative is ranked first if its distance to the maximum utility is the smallest and its distance to the minimum utility is the greatest. If only one of these conditions is satisfied, a distance ratio is then used.
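The final ranking step can be sketched under one plausible reading of the interval distances: measure an alternative's distance to the maximum utility from its upper endpoint and its distance to the minimum utility from its lower endpoint, then order alternatives by the resulting distance ratio (smaller is better), which agrees with the two conditions whenever they agree. The function name and the exact distance conventions are our assumptions, not the paper's definitions:

```python
def rank_alternatives(intervals):
    """Rank alternatives given expected-utility intervals (lo, hi).

    d_max = top - hi    : distance to the maximum utility (smaller better)
    d_min = lo - bottom : distance to the minimum utility (larger better)
    The distance ratio d_max / d_min combines the two criteria.
    """
    top = max(hi for _, hi in intervals.values())
    bottom = min(lo for lo, _ in intervals.values())

    def ratio(name):
        lo, hi = intervals[name]
        d_max = top - hi
        d_min = lo - bottom
        if d_min == 0:                  # lower endpoint sits at the bottom
            return 0.0 if d_max == 0 else float('inf')
        return d_max / d_min

    return sorted(intervals, key=ratio)  # best alternative first
```

For example, an alternative whose upper endpoint reaches the overall maximum and whose lower endpoint stays well above the overall minimum ranks first.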
Abstract:
Thesis (Ph. D.)--University of California, 1912.
Abstract:
Two studies in the context of English-French relations in Québec suggest that individuals who strongly identify with a group derive the individual-level costs and benefits that drive expectancy-value processes (rational decision-making) from group-level costs and benefits. In Study 1, high identifiers linked group- and individual-level outcomes of conflict choices whereas low identifiers did not. Group-level expectancy-value processes, in Study 2, mediated the relationship between social identity and perceptions that collective action benefits the individual actor, and between social identity and intentions to act. These findings suggest the rational underpinnings of identity-driven political behavior, a relationship sometimes obscured in intergroup theory that focuses on cognitive processes of self-stereotyping. But the results also challenge the view that individuals' cost-benefit analyses are independent of identity processes. The findings suggest the importance of modeling the relationship of group and individual levels of expectancy-value processes as both hierarchical and contingent on social identity processes.
Abstract:
An individual faced with intergroup conflict chooses from a vast array of possible actions, ranging from grumbling among ingroup friends, to voting and demonstrating, to rioting and revolution. The present paper conceptualises these intergroup choices as rationally shaped by perceptions of the benefits and costs associated with the action (expectancy-value processes). However, in presenting a model of agentic normative influence, it is argued that in intergroup contexts group-level costs and benefits play a critical role in individuals' decision-making. In the context of English-French conflict in Quebec, Canada, four studies provide evidence that group-level costs and benefits influence individuals' decision-making in intergroup conflict; that the individual level of analysis need not mediate the group level of analysis; that group-level costs and benefits mediate the relationship between social identity and intentions to engage in collective action; and that perceptions of outgroup and ingroup norms for intergroup behaviours are relatively invariant and predictably related to perceptions of the group- and individual-level benefits and costs associated with individualistic versus collective actions. By modelling the relationship between group norms and group-level costs and benefits, social psychologists may begin to address the processes that underlie identity-behaviour relationships in collective action and intergroup conflict.
Abstract:
This study evaluated in vitro the shear bond strength (SBS) of a resin-based pit-and-fissure sealant [Fluroshield (F), Dentsply/Caulk] associated with either an etch-and-rinse [Adper Single Bond 2 (SB), 3M/ESPE] or a self-etching adhesive system [Clearfil S3 Bond (S3), Kuraray Co., Ltd.] to saliva-contaminated enamel, comparing two curing protocols: individual light curing of the adhesive system and the sealant, or simultaneous curing of both materials. Mesial and distal enamel surfaces from 45 sound third molars were randomly assigned to 6 groups (n=15), according to the bonding technique: I - F was applied to 37% phosphoric acid etched enamel. The other groups were contaminated with fresh human saliva (0.01 mL; 10 s) after acid etching: II - SB and F were light cured separately; III - SB and F were light cured together; IV - S3 and F were light cured separately; V - S3 and F were light cured simultaneously; VI - F was applied to saliva-contaminated, acid-etched enamel without an intermediate bonding agent layer. SBS was tested to failure in a universal testing machine at 0.5 mm/min. Data were analyzed by one-way ANOVA and Fisher's test (α=0.05). The debonded specimens were examined with a stereomicroscope to assess the failure modes. Three representative specimens from each group were observed under scanning electron microscopy for a qualitative analysis. Mean SBS in MPa were: I-12.28 (±4.29); II-8.57 (±3.19); III-7.97 (±2.16); IV-12.56 (±3.11); V-11.45 (±3.77); and VI-7.47 (±1.99). In conclusion, individual or simultaneous curing of the intermediate bonding agent layer and the resin sealant did not seem to affect bond strength to saliva-contaminated enamel. S3/F presented significantly higher SBS than that of the groups treated with the SB etch-and-rinse adhesive system, and similar SBS to that of the control group, in which the sealant was applied under ideal dry, noncontaminated conditions.