859 results for conditional expected utility
Abstract:
Traditionally, the goal of calibrating a rainfall-runoff model has been to obtain a parameter set (or a probability distribution of the parameters) that maximizes the fit of the simulated data to the observations, while only partially addressing the model's intended applications. This thesis proposes a calibration methodology motivated by the observation that agreement between observed and simulated data is not always the most appropriate criterion for calibrating a hydrological model. For practical applications, a better representation of one particular aspect of the hydrograph may be more useful than another. The proposed calibration method evaluates model performance by estimating its utility for the intended application. Using suitable functions, the utility of the simulation is assessed at each time step. Calibration is then carried out by maximizing an objective function given by the sum of the utilities estimated at the individual time steps. The analyses show that such objective functions make it possible to improve model performance where it matters most for the intended application.
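As a purely illustrative sketch (not the thesis's actual code), the objective described above can be written as a sum of per-time-step utilities of the simulated series; the utility function, the placeholder model, and all parameter names below are assumptions.

import numpy as np

def utility(sim, obs):
    # Hypothetical per-time-step utility: penalizes errors more heavily on high
    # flows, as one example of emphasizing a specific aspect of the hydrograph.
    weight = 1.0 + obs / (obs.mean() + 1e-9)
    return -weight * (sim - obs) ** 2

def objective(params, model, forcing, obs):
    # Objective = sum of the utilities at the individual time steps;
    # calibration maximizes this quantity (the optimizer is not prescribed here).
    sim = model(params, forcing)  # the rainfall-runoff model itself is a placeholder
    return utility(sim, obs).sum()

# Example with a trivial placeholder model:
obs = np.array([1.0, 3.0, 8.0, 2.0])
model = lambda params, forcing: params[0] * forcing
print(objective(np.array([0.8]), model, obs.copy(), obs))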
Abstract:
In this work I discuss several key aspects of welfare economics and policy analysis and propose two original contributions to the growing field of behavioral public policymaking. After providing a historical perspective on welfare economics and an overview of policy analysis processes in the introductory chapter, in chapter 2 I discuss a debated issue of policymaking, the choice of the social welfare function. I contribute to this debate with an original methodological contribution based on the analysis of the quantitative relationship among different social welfare functional forms commonly used by policy analysts. In chapter 3 I then discuss a lottery-based behavioral policy to counter indirect tax evasion. I show that the predictions of my model, based on non-expected utility, are consistent with observed, and so far unexplained, empirical evidence of the policy's success. Finally, in chapter 4 I investigate by means of a laboratory experiment the effects of social influence on individuals' likelihood of engaging in altruistic punishment. I show that bystanders' decision to engage in punishment is influenced by the punishment behavior of their peers, and I suggest ways to enact behavioral policies that exploit this finding.
Abstract:
This article seeks to contribute to the illumination of the so-called 'paradox of voting', using the German Bundestag elections of 1998 as an empirical case. Downs' model of voter participation is extended to include elements of the theory of subjective expected utility (SEU). This allows a theoretical and empirical exploration of the crucial mechanisms behind individual voters' decisions to participate or abstain in the German general election of 1998. It is argued that the vanishingly small probability that an individual citizen's vote decides the election outcome does not necessarily reduce the probability of electoral participation. The empirical analysis is largely based on data from the ALLBUS 1998. It confirms the predictions derived from SEU theory. The voters' expected benefits and their subjective expectation of being able to influence government policy by voting are the crucial mechanisms explaining participation. By contrast, the explanatory contribution of perceived information and opportunity costs is low.
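As background, the SEU extensions of Downs' model discussed here build on the standard calculus of voting; the generic formulation below is illustrative and not necessarily the exact specification estimated in the article.

\[
  R \;=\; p \cdot B \;-\; C \;+\; D ,
\]

where \(p\) is the citizen's subjective probability of casting the decisive vote, \(B\) the expected benefit if the preferred party wins, \(C\) the information and opportunity costs of voting, and \(D\) the non-instrumental (duty or expressive) benefits; the citizen votes when \(R > 0\).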
Abstract:
This study contributes to explaining why, in post-war West Germany, an ever larger share of pupils went on to the Gymnasium after completing primary school. Starting from a decision-theoretic model of subjective expected utility, it identifies mechanisms underlying parental educational decisions. The assumption is that both increasing educational motivation and changes in the subjective assessment of the costs and benefits of higher education were important preconditions for rising educational participation, but also consequences of educational expansion. The empirical analyses for three points in time in the 1960s, 1970s, and 1980s largely confirm these assumptions. They also show that, alongside parents' educational intentions and their children's prior educational careers, structural features of educational expansion and its self-reinforcing dynamics play an important role in the actual educational transition. Finally, there is evidence that the persistence of class-specific educational inequalities rests on a constant balance of benefits and costs across social classes.
Abstract:
Although many studies find that voting in Africa approximates an ethnic census, in that voting is primarily along ethnic lines, hardly any of them have sought to explain ethnic voting within a rational choice framework. Using data on voter opinions from a survey conducted two weeks before the December 2007 Kenyan elections, we find that the expected benefits associated with a win by each of the presidential candidates varied significantly across voters from different ethnic groups. We hypothesize that the decision to participate in the elections was influenced by these expected benefits, as per the minimax-regret voting model. We test the predictions of this model using data on voter turnout in the December 2007 elections and find that turnout across ethnic groups varied systematically with expected benefits. The results suggest that individuals participated in the elections primarily to avoid the maximum regret should a candidate from another ethnic group win. They therefore lend credence to the minimax-regret model proposed by Ferejohn and Fiorina (1974) and refute the Downsian expected utility model.
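For reference, a generic statement of the minimax-regret decision rule of Ferejohn and Fiorina follows; the notation is illustrative, not the authors'.

\[
  a^{*} \;=\; \arg\min_{a \in \{\text{vote},\,\text{abstain}\}}\;
  \max_{s \in S}\;\Bigl[\,\max_{a'} u(a', s) \;-\; u(a, s)\Bigr],
\]

where \(S\) is the set of possible election outcomes and \(u(a, s)\) the payoff of action \(a\) in outcome \(s\); unlike the Downsian expected utility calculus, no probabilities over outcomes enter the decision.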
Abstract:
The aim of this research is to determine whether the forensic analysis of traffic forecast studies for toll highways can bring to light the reasons behind their lack of accuracy. The methodology is empirical, based on an ex-post-facto comparison of the traffic studies contained in the preliminary designs of the Radial 3 and Radial 5 toll highways with the traffic actually observed. After an introductory chapter presenting the main features of toll highways, a review of the literature on the accuracy of traffic forecasts is carried out, from both a global and a Spanish perspective. This review identifies a set of factors that can contribute to forecast failures, together with measures proposed to improve future forecasts. The core of the research focuses on the ratio of actual to forecast traffic for Radial 3 and Radial 5 on the outskirts of Madrid. The methodology, variables, and assumptions used in the traffic studies are analysed critically. The trip-assignment stage is then scrutinised to quantify, against the observed traffic for the year 2006, the contribution to the forecast errors of the input variables on the one hand and of the assignment method (diversion curve) on the other. Finally, building on these findings, a set of limitations of the assignment method used to split traffic between alternative routes in urban environments is identified. Its underlying assumptions, the rational agent and expected utility maximization, are questioned from the perspective of the theory of decision under risk proposed by Kahneman and Tversky (Prospect Theory). To overcome these limitations, a new semi-empirical diversion curve is proposed that relates the proportion of traffic using the toll highway to the average speed on the free alternative highway.
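The abstract does not give the functional form of the proposed curve; the sketch below only illustrates the kind of relationship described, with an assumed logistic shape and made-up parameter values.

import numpy as np

def toll_share(v_free, v_ref=70.0, k=0.08):
    # Hypothetical semi-empirical diversion curve (illustrative only): share of
    # traffic choosing the toll highway as a function of the average speed
    # v_free (km/h) on the free alternative. v_ref and k are made-up calibration
    # parameters; the curve actually proposed in the thesis may differ in form.
    return 1.0 / (1.0 + np.exp(k * (v_free - v_ref)))

# As congestion lowers the average speed on the free road, the modelled share
# of traffic diverting to the toll road rises.
for v in (50.0, 70.0, 90.0):
    print(v, round(float(toll_share(v)), 2))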
Abstract:
Irrigators face the risk of not having enough water to meet their crops' demand. There are different mechanisms to cope with this risk, including water markets (option contracts) and insurance. A farmer will purchase such an instrument when the expected utility gain it delivers is positive. This paper presents a theoretical assessment of the farmer's expected utility under two different option contracts, a drought insurance contract, and a combination of an option contract and the insurance. We analyze the conditions that determine the farmer's preference for one instrument or the other and perform a numerical application relevant for a Spanish region.
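In generic terms (the paper's notation may differ), the purchase condition described above reads:

\[
  \mathbb{E}\bigl[\,u\bigl(\pi(\theta, I) - p_I\bigr)\bigr]
  \;>\;
  \mathbb{E}\bigl[\,u\bigl(\pi(\theta, 0)\bigr)\bigr],
\]

where \(\theta\) is the random water availability, \(\pi(\theta, I)\) the farm profit when instrument \(I\) (option contract, insurance, or their combination) is in place, \(p_I\) its premium or price, and \(u\) the concave utility function of a risk-averse farmer.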
Abstract:
We consider the situation where there are several alternatives for investing a quantity of money to achieve a set of objectives. The choice of which alternative to apply depends on how citizens and political representatives perceive that such objectives should be achieved. All citizens with the right to vote can express their preferences in the decision-making process, and these preferences may be incomplete. Political representatives stand in for the citizens who have not taken part in the decision-making process, and the weight assigned to them depends on the number of citizens who have intervened. The methodology we propose requires participants to specify, for each alternative, how they rate the different attributes and the relative importance of the attributes. On the basis of this information, an expected utility interval is output for each alternative. To do this, an evidential reasoning approach is applied. This approach improves the insightfulness and rationality of the decision-making process by using a belief decision matrix for problem modeling and the Dempster-Shafer theory of evidence for attribute aggregation. Finally, we propose using the distances of each expected utility interval from the maximum and the minimum utilities to rank the set of alternatives. The basic idea is that an alternative is ranked first if its distance to the maximum utility is the smallest and its distance to the minimum utility is the greatest. If only one of these conditions is satisfied, a distance ratio is used instead.
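A minimal sketch of the interval-based ranking rule described above, assuming each alternative already has an expected utility interval (lo, hi); the function name and the tie-breaking via the distance ratio are an illustrative reading, not the authors' implementation.

def rank_alternatives(intervals, u_min, u_max):
    # Rank alternatives by their expected utility intervals (lo, hi): an
    # alternative is preferred when it is closer to the maximum utility and
    # farther from the minimum utility; the ratio of the two distances is used
    # when only one of the conditions holds.
    def ratio(interval):
        lo, hi = interval
        d_max = u_max - hi          # distance to the maximum utility (smaller is better)
        d_min = lo - u_min          # distance to the minimum utility (larger is better)
        return d_max / d_min if d_min > 0 else float("inf")
    return sorted(range(len(intervals)), key=lambda i: ratio(intervals[i]))

# Three hypothetical alternatives with expected utility intervals on a [0, 1] scale:
print(rank_alternatives([(0.2, 0.6), (0.4, 0.7), (0.1, 0.9)], 0.0, 1.0))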
Abstract:
We model social choices as acts mapping states of the world to (social) outcomes. A (social choice) rule assigns an act to every profile of subjective expected utility preferences over acts. A rule is strategy-proof if no agent ever has an incentive to misrepresent her beliefs about the world or her valuation of the outcomes; it is ex-post efficient if the act selected at any given preference profile picks a Pareto-efficient outcome in every state of the world. We show that every two-agent ex-post efficient and strategy-proof rule is a top selection: the chosen act picks the most preferred outcome of some (possibly different) agent in every state of the world. The states in which an agent’s top outcome is selected cannot vary with the reported valuations of the outcomes but may change with the reported beliefs. We give a complete characterization of the ex-post efficient and strategy-proof rules in the two-agent, two-state case, and we identify a rich class of such rules in the two-agent case.
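For concreteness, the two properties can be stated in generic notation (not necessarily the paper's) for a rule \(f\) mapping preference profiles to acts:

\[
  f(\succsim_i, \succsim_{-i}) \;\succsim_i\; f(\succsim_i', \succsim_{-i})
  \quad\text{for all agents } i \text{ and all } \succsim_i,\ \succsim_i',\ \succsim_{-i}
  \qquad\text{(strategy-proofness)},
\]

and ex-post efficiency requires that, for every profile and every state of the world, the outcome picked by the selected act in that state is Pareto-efficient with respect to the agents' state-contingent utilities.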
Abstract:
Voters try to avoid wasting their votes even in PR systems. In this paper we argue that this type of strategic voting can be observed and predicted even under proportional representation. Contrary to the literature, we do not regard weak institutional incentive structures as rendering the study of strategic voting a hopeless endeavor. The crucial question for strategic voting is how institutional incentives constrain an individual's decision-making process. Based on expected utility maximization, we put forward a micro-logic of an individual's expectation-formation process driven by institutional and dispositional incentives. All well-known institutional incentives to vote strategically that are channelled through the district magnitude are moderated by dispositional factors before they become relevant for voting decisions. Employing data from Finland, a particularly hard testing ground because of its electoral system, we find considerable evidence for the observable implications of our theory.
Abstract:
This paper introduces the rank-dependent quality-adjusted life-years (QALY) model, a new method to aggregate QALYs in economic evaluations of health care. The rank-dependent QALY model permits the formalization of influential concepts of equity in the allocation of health care, such as the fair innings approach, and it includes as special cases many of the social welfare functions that have been proposed in the literature. An important advantage of the rank-dependent QALY model is that it offers a straightforward procedure to estimate equity weights for QALYs. We characterize the rank-dependent QALY model and argue that its central condition has normative appeal.
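A generic rank-dependent aggregation of the kind described (the paper's exact axiomatization and notation may differ) weights individual QALY levels by their rank:

\[
  W \;=\; \sum_{k=1}^{n} \pi_k \, q_{(k)},
  \qquad
  \pi_k \;=\; w\!\left(\tfrac{n-k+1}{n}\right) - w\!\left(\tfrac{n-k}{n}\right),
\]

where \(q_{(1)} \le \dots \le q_{(n)}\) are the QALY levels ordered from worst off to best off and \(w\) is an increasing weighting function with \(w(0) = 0\) and \(w(1) = 1\); a convex \(w\) places larger equity weights on the worse off, which is one way fair-innings-type concerns can be formalized.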
Abstract:
We present a definition of increasing uncertainty, in which an elementary increase in the uncertainty of any act corresponds to the addition of an 'elementary bet' that increases consumption by a fixed amount in (relatively) 'good' states and decreases consumption by a fixed (and possibly different) amount in (relatively) 'bad' states. This definition naturally gives rise to a dual definition of comparative aversion to uncertainty. We characterize this definition for a popular class of generalized models of choice under uncertainty.
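In generic notation (illustrative, not the authors'), an elementary increase in uncertainty adds to an act \(f\) a bet of the following form:

\[
  (f + b)(s) \;=\;
  \begin{cases}
    f(s) + \alpha, & s \in E \;(\text{relatively good states}),\\
    f(s) - \beta,  & s \notin E \;(\text{relatively bad states}),
  \end{cases}
\]

with fixed amounts \(\alpha, \beta > 0\); one decision maker is then more uncertainty averse than another if she dislikes every such elementary addition whenever the other does.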
Abstract:
In this paper, we consider the relationship between supermodularity and risk aversion. We show that supermodularity of the certainty equivalent implies that the certainty equivalent of any random variable is less than its mean. We also derive conditions under which supermodularity of the certainty equivalent is equivalent to aversion to mean-preserving spreads in the sense of Rothschild and Stiglitz.
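In symbols (generic notation), supermodularity of the certainty equivalent \(CE\) over vectors of state-contingent payoffs means \(CE(x \vee y) + CE(x \wedge y) \ge CE(x) + CE(y)\); the paper's first result then states that for any random payoff \(X\):

\[
  CE(X) \;\le\; \mathbb{E}[X],
\]

i.e. the decision maker weakly prefers the mean of a gamble to the gamble itself.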
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary and are resistant to overfitting or underfitting. However, most existing work assumes that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present the systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler divergence is developed as the expectation, with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed; they are proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
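A minimal Monte Carlo sketch of an expected-KL-style criterion for a Gaussian process is given below; the kernel, hyperparameters, measurement model, and function names are all assumptions, and this is not the dissertation's estimator.

import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    # Squared-exponential kernel (illustrative choice of kernel and hyperparameters).
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    # Posterior mean and covariance of a zero-mean GP at the test inputs.
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    A = np.linalg.solve(K, Ks)
    return A.T @ y_train, rbf(x_test, x_test) - Ks.T @ A

def kl_gauss(mu0, cov0, mu1, cov1, jitter=1e-6):
    # KL( N(mu0, cov0) || N(mu1, cov1) ) between multivariate Gaussians.
    n = len(mu0)
    cov0 = cov0 + jitter * np.eye(n)
    cov1 = cov1 + jitter * np.eye(n)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - n + logdet1 - logdet0)

def expected_kl(x_sensor, x_test, n_samples=200, noise=0.1, seed=0):
    # Monte Carlo estimate of the expected KL divergence between the posterior
    # GP (after observing at x_sensor) and the prior GP, averaged over future
    # measurements sampled from the prior predictive distribution.
    rng = np.random.default_rng(seed)
    mu_prior, cov_prior = np.zeros(len(x_test)), rbf(x_test, x_test)
    Kss = rbf(x_sensor, x_sensor) + noise**2 * np.eye(len(x_sensor))
    total = 0.0
    for _ in range(n_samples):
        y = rng.multivariate_normal(np.zeros(len(x_sensor)), Kss)
        mu_post, cov_post = gp_posterior(x_sensor, y, x_test, noise)
        total += kl_gauss(mu_post, cov_post, mu_prior, cov_prior)
    return total / n_samples

# A candidate sensor location whose measurements are expected to change the GP
# belief more (larger expected KL) is considered more informative.
x_test = np.linspace(0.0, 5.0, 10)
print(expected_kl(np.array([1.0, 2.5]), x_test))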
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. Its efficiency is demonstrated in a numerical experiment with ocean-current data obtained from moored buoys. A sweep-line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep-line algorithm. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
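A minimal sketch of greedy selection over a discrete control space, using a generic information-value function (for instance, an expected-KL criterion like the one sketched above); the function names, the budget structure, and the toy value function are assumptions, not the dissertation's algorithm.

def greedy_plan(candidate_controls, info_value, budget):
    # Greedily pick up to `budget` sensor controls, each time choosing the
    # candidate with the largest marginal information value given the controls
    # already selected. `info_value(controls)` returns the value of a control
    # set, e.g. a sum of expected KL divergences.
    selected, remaining = [], list(candidate_controls)
    for _ in range(budget):
        best = max(remaining,
                   key=lambda c: info_value(selected + [c]) - info_value(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy coverage-style value function: number of distinct unit cells observed.
coverage = lambda controls: len({int(c) for c in controls})
print(greedy_plan([0.5, 1.2, 2.7, 4.1], coverage, budget=2))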