984 results for Continuous Utility Functions
Abstract:
We propose a new method for ranking alternatives in multicriteria decision-making problems when there is imprecision concerning the alternative performances, the component utility functions and the weights. We assume the decision maker's preferences are represented by an additive multiattribute utility function, in which weights can be modeled by independent normal variables, fuzzy numbers, value intervals or an ordinal relation. The approaches are based either on dominance measures or on exploring the weight space in order to describe which ratings would make each alternative the preferred one. On the one hand, the approaches based on dominance measures compute the minimum utility difference between pairs of alternatives and then derive a measure by which to rank the alternatives. On the other hand, the approaches based on exploring the weight space compute confidence factors describing the reliability of the analysis. The methods are compared using Monte Carlo simulation.
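The weight-space exploration described above can be illustrated with a small Monte Carlo sketch. The data, the uniform Dirichlet sampling, and the function names below are illustrative assumptions, not the paper's exact procedure: each alternative's "confidence factor" is estimated as the share of sampled weight vectors under which it attains the highest additive utility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 alternatives rated on 2 criteria, with component
# utilities already scaled to [0, 1] (an additive multiattribute model).
utilities = np.array([
    [0.9, 0.2],
    [0.5, 0.6],
    [0.1, 0.9],
])

def rank_acceptability(utilities, n_samples=10_000, rng=rng):
    """Share of sampled weight vectors under which each alternative attains
    the highest additive utility -- a 'confidence factor' in the spirit of
    the weight-space approaches (uniform Dirichlet sampling is our choice)."""
    n_alt, n_crit = utilities.shape
    w = rng.dirichlet(np.ones(n_crit), size=n_samples)  # points on the simplex
    winners = (w @ utilities.T).argmax(axis=1)          # best alternative per sample
    return np.bincount(winners, minlength=n_alt) / n_samples

acc = rank_acceptability(utilities)
```

For these ratings every alternative is best for some region of the weight simplex, so all three factors come out positive; with ordinal information on the weights one would instead sample only from the constrained region of the simplex.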
Abstract:
The efficient set of a Multicriteria Decision-Making problem plays a fundamental role in the solution process, since the Decision Maker's preferred choice should belong to this set. However, computing that set may be difficult, especially in continuous and/or nonlinear problems. Chapter one introduces Multicriteria Decision-Making; we review the basic concepts and tools used in later developments.
Chapter two studies Decision-Making problems under certainty. The basic tool is the vector value function, which represents imprecision in the DM's preferences. We propose a characterization of the value efficient set and different approximations with nesting and convergence properties. Several interactive algorithms complement the theoretical results. Chapter three is devoted to problems under uncertainty. The development is parallel to the former and uses vector utility functions to model the DM's preferences. We introduce utility efficiency for simple distributions, its characterization and some approximations, which we partially extend to discrete and continuous classes of distributions. Chapter four studies the problem under fuzziness, at an exploratory level. We conclude with several open problems.
Abstract:
We study the stability of nonlinear impulsive differential equations with "supremum". A special type of stability, combining two different measures and a dot product on a cone, is defined. Perturbing cone-valued piecewise continuous Lyapunov functions are applied, and both the Razumikhin method and the comparison method for scalar impulsive ordinary differential equations are employed.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Experimental data commonly show persistent oscillations in aggregate outcomes and high levels of heterogeneity in individual behavior. Furthermore, it is not unusual to find significant deviations from aggregate Nash equilibrium predictions. In this paper, we employ an evolutionary model with boundedly rational agents to explain these findings, using data from common property resource experiments (Casari and Plott, 2003). Instead of positing individual-specific utility functions, we model decision makers as selfish and identical. Agent interaction is simulated using an individual-learning genetic algorithm, in which agents have constraints on their working memory, a limited ability to maximize, and experiment with new strategies. We show that the model replicates most of the patterns found in common property resource experiments.
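A minimal sketch of an individual-learning genetic algorithm in this spirit follows. The payoff function, parameters, and names are hypothetical, not Casari and Plott's calibration: each selfish, identical agent carries a small memory of candidate extraction strategies (bounded working memory), keeps the better-performing half (limited maximization), and experiments via mutation.

```python
import random

random.seed(1)

# Illustrative parameters (not the paper's calibration):
N_AGENTS = 8       # identical, selfish decision makers
POP = 6            # strategies each agent can hold ("working memory")
GENS = 200         # rounds of play and learning
CAPACITY = 100.0

def payoff(own, total):
    """Stylized common-pool payoff: private benefit from own extraction,
    eroded as aggregate extraction approaches capacity (hypothetical form)."""
    return own * (CAPACITY - total) / CAPACITY

# Each agent starts with random candidate extraction levels.
agents = [[random.uniform(0.0, 20.0) for _ in range(POP)] for _ in range(N_AGENTS)]

for _ in range(GENS):
    # Limited ability to maximize: each agent plays a randomly drawn strategy.
    played = [random.choice(pop) for pop in agents]
    total = sum(played)
    for i, pop in enumerate(agents):
        others = total - played[i]
        # Individual learning: evaluate own memory against others' current play.
        fitness = [payoff(s, others + s) for s in pop]
        ranked = [s for _, s in sorted(zip(fitness, pop), reverse=True)]
        keep = ranked[: POP // 2]
        # Experimentation: refill memory with mutated copies of the survivors.
        pop[:] = keep + [max(0.0, s + random.gauss(0.0, 1.0)) for s in keep]
```

With this payoff the symmetric Nash equilibrium extraction is CAPACITY/(N_AGENTS + 1) ≈ 11.1 per agent, which gives a benchmark against which the simulated, persistently fluctuating play can be compared.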
Abstract:
In this paper we propose the infimum of the Arrow-Pratt index of absolute risk aversion as a measure of the global risk aversion of a utility function. We then show that, for any given pair of distributions, there exists a threshold level of global risk aversion such that all increasing concave utility functions with at least that much global risk aversion rank the two distributions in the same way. Furthermore, this threshold level is sharp in the sense that, for any lower level of global risk aversion, we can find two utility functions in this class yielding opposite preference relations for the two distributions.
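The abstract's central objects can be written explicitly (the symbol ρ is our notation, not necessarily the paper's):

```latex
A_u(x) = -\frac{u''(x)}{u'(x)}, \qquad \rho(u) = \inf_{x} A_u(x).
```

The result then says that for each pair of distributions $F, G$ there is a sharp threshold $\rho^{*}$ such that every increasing concave $u$ with $\rho(u) \ge \rho^{*}$ ranks $F$ and $G$ in the same way.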
Abstract:
My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach to representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. With the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the Prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by considering that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build a representative-agent optimization model whose solution reproduced some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new and its potential remains to be explored. The three essays composing my thesis use and extend this setting to study investor behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery-pursuit.
Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model that takes investors' crash aversion into account. Using the skewed t distribution, optimal allocations are characterized as a trade-off between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation a systematic over-performance is obtained relative to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss-aversion level. Dynamic strategies therefore reduce to the replicating portfolio using exchange-traded, well-selected options and the risky stock.
Abstract:
In this paper we describe the existence of financial illusion in public accounting and comment on its effects for the future sustainability of local public services. We relate these features to the lack of incentives among public managers to improve financial reporting and thus the management of public assets. Financial illusion pays off for politicians and managers since it allows for larger public expenditure increases and managerial slack, both of which are arguments in their utility functions. This preference is strengthened by the short time perspective of politically appointed public managers. Both factors run against public accountability. The hypothesis is tested for Spain using a unique sample. We take data from around forty Catalan local authorities with population above 20,000 for the financial years 1993-98. We build this database from the Catalan Auditing Office reports in such a way that it can be linked to other local social and economic variables in order to test our assumptions. The results confirm a statistical relationship between the financial illusion index (FI, as constructed in the paper) and higher current expenditure. This is reflected in important overruns and longer delays in paying suppliers, as well as in greater difficulties in financing capital. Mechanisms for FI creation have to do, among other factors, with delays in paying suppliers (and hence higher future financial costs per unit of service), inadequate provision for bad debts, and a lack of appropriate capital funding either for replacement or for new equipment. It is therefore crucial to monitor the way capital transfers are accounted for in local public balance sheets.
As a result, for most of the municipalities we analyse, the funds guaranteeing the continuity and sustainability of public service provision are today at risk. Given the managerial incentives currently at work in public institutions, we conclude that the public regulation recently enforced to assure better information systems in local public management may not be enough to change the current state of affairs.
Abstract:
Scoring rules that elicit an entire belief distribution through the elicitation of point beliefs are time-consuming and demand considerable cognitive effort. Moreover, the results are valid only when agents are risk-neutral or when one uses probabilistic rules. We investigate a class of rules in which the agent has to choose an interval and is rewarded (deterministically) on the basis of the chosen interval and the realization of the random variable. We formulate an efficiency criterion for such rules and present a specific interval scoring rule. For single-peaked beliefs, our rule gives information about both the location and the dispersion of the belief distribution. These results hold for all concave utility functions.
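A deterministic interval rule of the general kind described can be sketched as follows. This particular functional form and its parameter are a hypothetical illustration, not the paper's specific rule:

```python
def interval_score(lo, hi, x, width_penalty=0.5):
    """Hypothetical deterministic interval scoring rule (NOT the paper's
    specific rule): a unit reward if the realization x falls inside the
    reported interval [lo, hi], minus a penalty proportional to the width."""
    hit = 1.0 if lo <= x <= hi else 0.0
    return hit - width_penalty * (hi - lo)
```

Widening the interval raises the chance of capturing the realization but costs width, so an expected-score maximizer's chosen interval reveals both where a single-peaked belief is centered and how dispersed it is, which is the trade-off such rules exploit.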
Abstract:
This paper shows an equivalence result between the utility functions of secular agents who abide by a moral obligation to accumulate wealth and those of religious agents who believe that salvation is immutable and preordained by God. This result formalizes Weber's renowned thesis on the connection between the worldly asceticism of Protestants and the religious premises of Calvinism. Furthermore, ongoing economies are often modeled with preference relations, such as "keeping up with the Joneses", that are not associated with religion. Our results relate these secular economies of today to economies of the past shaped by religious ideas.
Abstract:
Kahneman and Tversky asserted a fundamental asymmetry between gains and losses, namely a reflection effect, which occurs when an individual prefers a sure gain of $pz to an uncertain gain of $z with probability p, while preferring an uncertain loss of $z with probability p to a certain loss of $pz. We focus on this class of choices (actuarially fair), and explore the extent to which the reflection effect, understood as occurring at a range of wealth levels, is compatible with single-self preferences. We decompose the reflection effect into two components: a probability switch effect, which is compatible with single-self preferences, and a translation effect, which is not. To argue the first point, we analyze two classes of single-self, nonexpected-utility preferences, which we label homothetic and weakly homothetic. In both cases, we characterize the switch effect as well as the dependence of risk attitudes on wealth. We also discuss two types of utility functions of a form reminiscent of expected utility but with distorted probabilities. Type I always distorts the probability of the worst outcome downwards, yielding attraction to small risks for all probabilities. Type II distorts low probabilities upwards and high probabilities downwards, implying risk aversion when the probability of the worst outcome is low. By combining homothetic or weakly homothetic preferences with Type I or Type II distortion functions, we present four explicit examples: all four display a switch effect and, hence, a form of reflection effect consistent with single-self preferences.
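A quick numerical check shows why the reflection effect cannot arise from a single concave expected-utility function, which is what motivates the distorted-probability specifications above. Log utility and the numbers here are arbitrary choices for illustration: by Jensen's inequality such an agent is risk averse on both the gain side and the loss side of an actuarially fair choice, whereas reflection requires risk seeking over losses.

```python
import math

def eu(lottery, u):
    """Expected utility of a list of (probability, wealth) pairs."""
    return sum(p * u(w) for p, w in lottery)

u = math.log                      # any strictly concave utility would do
w0, z, p = 100.0, 50.0, 0.5       # arbitrary wealth, stake, probability

# Gains: sure gain of p*z vs. gain of z with probability p (actuarially fair).
sure_gain  = eu([(1.0, w0 + p * z)], u)
risky_gain = eu([(p, w0 + z), (1 - p, w0)], u)

# Losses: sure loss of p*z vs. loss of z with probability p.
sure_loss  = eu([(1.0, w0 - p * z)], u)
risky_loss = eu([(p, w0 - z), (1 - p, w0)], u)

# Jensen's inequality: the concave EU maximizer prefers the sure outcome
# on BOTH sides, so it cannot produce the reflection effect by itself.
assert sure_gain > risky_gain and sure_loss > risky_loss
```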
Abstract:
Let there be a positive (exogenous) probability that, at each date, the human species will disappear. We postulate an Ethical Observer (EO) who maximizes intertemporal welfare under this uncertainty, with expected-utility preferences. Various social welfare criteria entail alternative von Neumann-Morgenstern utility functions for the EO: utilitarian, Rawlsian, and an extension of the latter that corrects for the size of population. Our analysis covers, first, a cake-eating economy (without production), where the utilitarian and the Rawlsian recommend the same allocation. Second, we consider a productive economy with education and capital, where it turns out that the recommendations of the two EOs are in general different. But when the utilitarian program diverges, we prove that it is optimal for the extended Rawlsian to ignore the uncertainty concerning the possible disappearance of the human species in the future. We conclude by discussing the implications for intergenerational welfare maximization in the presence of global warming.
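One standard way to write the first two of the EO's objectives, assuming the species survives each date independently with probability $1-\pi$ (a sketch in our own notation; the paper's exact criteria, including the population-size correction, may differ):

```latex
W_{\mathrm{util}} = \mathbb{E}\!\left[\sum_{t=0}^{T} u(c_t)\right]
                  = \sum_{t=0}^{\infty} (1-\pi)^{t}\, u(c_t),
\qquad
W_{\mathrm{Rawls}} = \mathbb{E}\!\left[\min_{0 \le t \le T} u(c_t)\right],
```

where $T$ is the random date at which the species disappears, so extinction risk acts like a discount factor in the utilitarian criterion.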
Abstract:
McCausland (2004a) describes a new theory of random consumer demand. Theoretically consistent random demand can be represented by a "regular" "L-utility" function on the consumption set X. The present paper is about Bayesian inference for regular L-utility functions. We express prior and posterior uncertainty in terms of distributions over the infinite-dimensional parameter set of a flexible functional form. We propose a class of proper priors on the parameter set. The priors are flexible, in the sense that they put positive probability in the neighborhood of any L-utility function that is regular on a large subset bar(X) of X, and regular, in the sense that they assign zero probability to the set of L-utility functions that are irregular on bar(X). We propose methods of Bayesian inference for an environment with indivisible goods, leaving the more difficult case of infinitely divisible goods for another paper. We analyse individual choice data from the consumer experiment described in Harbaugh et al. (2001).
Abstract:
This thesis consists of three essays on mechanism design and auctions. In the first essay I study the design of efficient Bayesian mechanisms in environments where agents' utility functions depend on the chosen alternative even when they do not participate in the mechanism. In addition to an allocation rule and a payment rule, the planner can issue threats in order to induce agents to participate in the mechanism and to maximize his own surplus; the planner can also presume the type of a non-participating agent. I prove that the solution to the design problem can be found through a max-min choice of presumed types and threats. I apply this to the design of an efficient multiple-object auction when ownership of the good by one buyer imposes negative externalities on the other buyers. The second essay considers the juste retour (fair return) rule employed by the European Space Agency (ESA). It guarantees each member state a return proportional to its contribution, in the form of contracts awarded to firms from that state. The juste retour rule conflicts with the principle of free competition, since contracts are not necessarily awarded to the firms submitting the lowest bids. This has raised debate over the use of the rule: large states with strong national space programs view its strict application as an obstacle to competitiveness and profitability. A priori the rule seems more costly to the agency than traditional auctions. We prove, on the contrary, that an appropriate implementation of the juste retour rule can make it less costly than traditional free-competition auctions.
We consider both the complete-information case, where firms' technology levels are public knowledge, and the incomplete-information case, where firms privately observe their production costs. Finally, in the third essay I derive an optimal procurement mechanism in an environment where a buyer of heterogeneous items faces potential suppliers from different groups and is constrained to choose a list of winners compatible with quotas assigned to the different groups. The optimal allocation rule assigns priority levels to suppliers on the basis of the individual costs they report to the decision maker. The way these priority levels are determined is subjective but known to all before the procurement auction takes place. The reported costs induce scores for each potential list of winners. The items are then purchased from the list with the best score, provided that score is not greater than the buyer's value. I also show that, in general, it is not optimal to purchase the items through separate auctions.
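The third essay's allocation rule can be sketched as follows. The numbers, the quotas, and the linear priority mapping are all hypothetical; only the structure follows the summary: publicly known priority levels derived from reported costs, scores for each quota-feasible winner list, and purchase from the best-scoring list if its score does not exceed the buyer's value.

```python
from itertools import combinations

# Hypothetical instance: four suppliers in two groups, one winner per group.
suppliers = [            # (group, reported_cost)
    ("A", 10.0), ("A", 14.0), ("B", 9.0), ("B", 16.0),
]
quota = {"A": 1, "B": 1}  # required number of winners from each group
buyer_value = 40.0

def priority(cost):
    """Priority level as a public, increasing function of the reported
    cost -- the exact mapping is the designer's (subjective) choice."""
    return 1.2 * cost

# Winner lists compatible with the group quotas.
feasible = [
    combo for combo in combinations(range(len(suppliers)), sum(quota.values()))
    if all(sum(suppliers[i][0] == g for i in combo) == q for g, q in quota.items())
]
# Reported costs induce a score for each feasible list.
scores = {combo: sum(priority(suppliers[i][1]) for i in combo) for combo in feasible}
best = min(scores, key=scores.get)
# Buy from the best list only if its score does not exceed the buyer's value.
winners = best if scores[best] <= buyer_value else None
```

Here the cheapest supplier from each group wins; with asymmetric priority mappings across groups, the rule could instead favor a list that does not minimize total reported cost, which is exactly the freedom the mechanism exploits.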