990 results for conditional expected utility


Relevance: 80.00%

Abstract:

One of the most tangled fields of research is the definition and modeling of affective concepts, i.e. concepts concerning emotions and feelings. The subject can be approached from many disciplines; the main problem is the lack of generally accepted definitions. However, linguists, for example, have recently started to check the consistency of their theories with the help of computer simulations. Definitions of affective concepts are needed for performing similar simulations in the behavioral sciences. In this thesis, preliminary computational definitions of affects for a simple utility-maximizing agent are given. The definitions have been produced by synthesizing ideas from theories in several fields of research. The class of affects is defined as a superclass of emotions and feelings. An affect is defined as a process in which a change in an agent's expected utility causes a bodily change. If the process is currently under the attention of the agent (i.e. the agent is conscious of it), the process is a feeling. If it is not, but can in principle be taken into attention (i.e. it is preconscious), the process is an emotion. Thus, affects do not presuppose consciousness, but emotions and feelings do. Affects directed at unexpected materialized (i.e. past) events are delight and fright: delight is the consequence of an unexpected positive event, and fright is the consequence of an unexpected negative event. Affects directed at expected materialized (i.e. past) events are happiness (an expected positive event materialized), disappointment (an expected positive event did not materialize), sadness (an expected negative event materialized) and relief (an expected negative event did not materialize). Affects directed at expected unrealized (i.e. future) events are fear and hope. Some other affects can be defined as directed towards the originators of the events.
The affect classification has also been implemented as a computer program, whose purpose is to ensure the coherence of the definitions and to illustrate the capabilities of the model. The exact content of the bodily changes associated with specific affects is not considered relevant to the logical structure of affective phenomena. The utility function also need not be defined, since only its dynamics are examined.
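The classification above amounts to a small decision procedure. The sketch below is an illustrative reconstruction, not the thesis's actual program; the function name and boolean parameters are hypothetical.

```python
# Hypothetical reconstruction of the affect classification: an affect is
# labeled by the sign of the triggering event, whether it was expected,
# whether it lies in the past, and (for expected past events) whether it
# materialized.

def classify_affect(event_positive: bool, expected: bool, past: bool,
                    materialized: bool = False) -> str:
    """Return the affect label for a change in an agent's expected utility."""
    if not past:                      # future-directed affects
        return "hope" if event_positive else "fear"
    if not expected:                  # unexpected past events
        return "delight" if event_positive else "fright"
    if event_positive:                # expected positive past event
        return "happiness" if materialized else "disappointment"
    return "sadness" if materialized else "relief"
```

For example, an expected positive event that fails to materialize yields disappointment, while an expected negative event that fails to materialize yields relief.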

Relevance: 80.00%

Abstract:

Optimal Punishment of Economic Crime: A Study on Bankruptcy Crime. This thesis investigates whether the punishment practice for bankruptcy crimes is optimal in light of Gary S. Becker's theory of optimal punishment. According to Becker, a punishment is optimal if it eliminates the expected utility of the crime for the offender and, on the other hand, minimizes the cost of the crime to society. The offender's decision process is observed through their expected utility from the crime, calculated from the offender's probability of getting caught, the cost of getting caught and the profit from the crime. All quantities, including the punishment, are measured in monetary terms. The cost of crime to society is observed by defining the disutility the crime causes society, calculated from the costs of crime prevention, crime damages, punishment execution and the probability of getting caught. If the goal is to eliminate the profits of crime, the punishments for bankruptcy crimes are not optimal. If debtors decided whether or not to commit the crime solely on the basis of economic considerations, the crime rate would be many times higher than it currently is. The prospective offender relies heavily on non-economic aspects in their decision. Most probably, social pressure and a personal commitment to obey the law are major factors in the prospective criminal's decision-making. Becker's function for measuring the cost to society proved unhelpful in assessing the optimality of a punishment: its premise that the costs to society correlate with the offender's costs from the punishment proves unrealistic when bankruptcy crimes are examined. However, it was observed that the majority of the costs of crime to society are caused by crime damages. This finding supports preventive criminal policy.
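Becker's deterrence condition can be illustrated with a few lines of arithmetic. The sketch below is a simplified, risk-neutral reading of the model described above; the function names are illustrative, not from the thesis.

```python
# Illustrative sketch (not from the thesis) of Becker-style expected utility
# for a risk-neutral offender: p is the probability of getting caught,
# gain is the crime's monetary profit, fine is the monetary punishment.

def offender_expected_utility(p: float, gain: float, fine: float) -> float:
    """Expected monetary payoff: caught with probability p, else keep the gain."""
    return p * (gain - fine) + (1 - p) * gain

def minimal_deterrent_fine(p: float, gain: float) -> float:
    """Smallest fine driving the expected payoff to zero: fine = gain / p."""
    return gain / p

# With a 10% detection probability, deterring a 100k gain needs a 1M fine.
```

This inverse relationship between detection probability and the deterrent fine is the core of Becker's optimality condition: low detection probabilities must be offset by disproportionately large punishments.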

Relevance: 80.00%

Abstract:

This document is useful for graduate students (Master's and PhD) in Economics or Microeconomics courses analyzing decision problems under risk or uncertainty. The document begins by explaining Expected Utility Theory. It then studies risk aversion, the coefficients of absolute and relative risk aversion, the "more risk averse than" relation between economic agents, and wealth effects on decisions for several preference relations frequently used in economic analysis. Section 4 focuses on comparing risky alternatives in terms of return and risk, considering first- and second-order stochastic dominance and some later extensions of these order relations. The document concludes with twelve solved exercises applying the concepts and results of the previous sections to decision problems in various contexts.
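The absolute and relative risk-aversion coefficients studied here can be illustrated numerically. The following is a generic sketch (not taken from the document) using CRRA utility and finite differences; for CRRA with parameter g the exact values are A(w) = g/w and R(w) = g.

```python
# Numeric illustration of the Arrow-Pratt coefficients A(w) = -u''(w)/u'(w)
# and R(w) = w * A(w), evaluated by central finite differences for the CRRA
# utility u(w) = w**(1 - g) / (1 - g).

def crra(w: float, g: float) -> float:
    return w ** (1 - g) / (1 - g)

def risk_aversion(u, w: float, h: float = 1e-4):
    """Finite-difference Arrow-Pratt coefficients (A(w), R(w))."""
    du = (u(w + h) - u(w - h)) / (2 * h)            # first derivative
    d2u = (u(w + h) - 2 * u(w) + u(w - h)) / h ** 2  # second derivative
    a = -d2u / du
    return a, w * a

# For g = 2 and w = 10, A(10) is approximately 2/10 = 0.2 and R(10) = 2.
```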

Relevance: 80.00%

Abstract:

This work presents the basic elements for the analysis of decision under uncertainty: Expected Utility Theory and its criticisms, and risk aversion and its measurement. The concepts of certainty equivalent, risk premium, absolute risk aversion and relative risk aversion, and the "more risk averse than" relation are discussed. The work is completed with several applications of decision making under uncertainty to different economic problems: investment in risky assets and portfolio selection, risk sharing, investment to reduce risk, insurance, taxes and income underreporting, deposit insurance and the value of information.
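As a hedged illustration of the concepts listed above (not the author's code), the certainty equivalent and risk premium of a discrete lottery follow directly from the definitions CE = u^{-1}(E[u(X)]) and RP = E[X] - CE:

```python
# Minimal sketch: certainty equivalent and risk premium of a discrete
# lottery, given a utility function u and its inverse u_inv.
import math

def certainty_equivalent(outcomes, probs, u, u_inv):
    """Return (CE, risk premium) where CE = u_inv(E[u(X)])."""
    eu = sum(p * u(x) for x, p in zip(outcomes, probs))
    mean = sum(p * x for x, p in zip(outcomes, probs))
    ce = u_inv(eu)
    return ce, mean - ce

# 50/50 lottery over 50 and 150 under log utility: CE = sqrt(50 * 150),
# about 86.6, so the risk premium is about 13.4.
ce, rp = certainty_equivalent([50.0, 150.0], [0.5, 0.5], math.log, math.exp)
```

The positive risk premium reflects risk aversion: a risk-averse agent accepts strictly less than the lottery's expected value of 100 for certainty.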

Relevance: 80.00%

Abstract:

Throughout history, few phenomena have attracted as much interest from citizens, consumed as many state resources, and contributed as much to delaying a state's development as the corruption of public agents. In this thesis I use the concept of improbity (improbidade) to define a particular type of phenomenon and to distinguish it from others generally covered by the concept of corruption. From there I seek to answer the following questions: which elements influence a public agent's decision-making process when considering engaging in improbity? Which factors are associated with the occurrence of improbity in Brazilian municipal public administration? From the perspective of new institutional economics, the first part of the thesis carries out a conceptual and methodological analysis of the phenomenon, embodied in an analytical framework of improbity, arguing that: a) improbities constitute a genus of opportunistic behavior, within which a species called the corrupt transaction stands out; b) the public agent's decision-making process, set in a context of bounded rationality, is guided equally by elements of cost-benefit analysis (maximization of expected utility), by the dynamics of learning processes, and by the individual's own ethical-moral barrier. The remaining parts of the thesis present the results of a systematic empirical investigation based on information from a random sample of 960 Brazilian municipalities audited by the Controladoria-Geral da União. The analysis highlights the factors associated with the occurrence of improbity, both from the point of view of the traditional literature (modernization, social capital and rent-seeking) and from the newly proposed analytical perspective based on governance mechanisms.
Testing the traditional models of the literature shows: a) a negative association between the occurrence of improbity and the municipalities' indicators of institutional performance and socioeconomic development (consistent with the effects commonly attributed to improbity); b) a negative association between the occurrence of improbity and the municipalities' indicators of modernization and social capital (consistent with the causes commonly attributed to improbity); c) no association between the occurrence of improbity in Brazilian municipalities and indicators of incentives for rent-seeking behavior (contrary to the classical proposition that the larger the state, the greater the occurrence of improbity, owing to the incentives arising from its monopoly). Based on these results, I incorporate neo-institutionalist assumptions into the analysis of improbity, interpreting it as a consequence of inadequate contract governance structures. Thus, beyond the impact of the institutional arrangement, which encompasses internal and external parliamentary, administrative and judicial control of the federal public resources transferred to municipalities, I present evidence that the observed variation in the count of improbities across Brazilian municipalities is directly related to the quality of their governance mechanisms. Among these, those of a democratic nature stand out: mechanisms of social control (the municipal public-policy councils); mechanisms promoting transparency (the quality of e-government); and accountability mechanisms (political-electoral competition). According to the analytical framework of improbity, the existence and operation of these mechanisms raise the transaction costs facing a public agent who, even after overcoming the limitations of the ethical-moral and learning barriers, still considers engaging in this genus of opportunistic behavior.

Relevance: 80.00%

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing the values of the objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches.
The first is based on linear programming; the second uses a distance-based algorithm (which measures the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference-inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
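As a minimal illustration of the dominance tests discussed above (a generic, assumed formulation, not the thesis's algorithms), plain Pareto dominance over utility vectors, and the pruning of a set to its maximal elements, can be sketched as:

```python
# Pareto dominance when maximizing a vector of objectives, and pruning a
# set of utility vectors down to its maximal (undominated) elements.

def dominates(u, v):
    """u Pareto-dominates v: >= in every objective and > in at least one."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_maximal(vectors):
    """Keep only the vectors not dominated by any other vector in the set."""
    return [u for u in vectors if not any(dominates(v, u) for v in vectors)]

# pareto_maximal([(1, 3), (2, 2), (0, 1), (2, 3)]) keeps only (2, 3).
```

Imprecise trade-offs, as described in the abstract, refine this relation: additional elicited preferences let more vectors be eliminated than pure Pareto dominance allows, shrinking the maximal set.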

Relevance: 80.00%

Abstract:

Patients with life-threatening conditions sometimes appear to make risky treatment decisions as their condition declines, contradicting the risk-averse behavior predicted by expected utility theory. Prospect theory accommodates such decisions by describing how individuals evaluate outcomes relative to a reference point and how they exhibit risk-seeking behavior over losses relative to that point. The authors show that a patient's reference point for his or her health is a key factor in determining which treatment option the patient selects, and they examine under what circumstances the more risky option is selected. The authors argue that patients' reference points may take time to adjust following a change in diagnosis, with implications for predicting under what circumstances a patient may select experimental or conventional therapies or select no treatment.
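A common way to make the reference-point mechanism concrete is the standard Tversky-Kahneman value function, shown below as an illustration only (the parameter values are the conventional 1992 estimates, not the authors' own model). The function is concave over gains and convex and steeper over losses, so a gamble over losses can be preferred to a sure loss of equal expected value.

```python
# Illustrative prospect-theory value function relative to a reference point:
# concave over gains, convex and steeper (loss-averse) over losses.

def pt_value(outcome: float, reference: float,
             alpha: float = 0.88, lam: float = 2.25) -> float:
    """Value of an outcome measured relative to a reference point."""
    x = outcome - reference
    if x >= 0:
        return x ** alpha            # diminishing sensitivity over gains
    return -lam * (-x) ** alpha      # convex and amplified over losses
```

With these parameters, a 50/50 gamble between losing 100 and losing nothing is valued above a sure loss of 50: the convexity over losses produces exactly the risk-seeking treatment choices described above, and shifting the reference point (e.g., a delayed adjustment after a new diagnosis) can move the same option between the gain and loss domains.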

Relevance: 80.00%

Abstract:

Partially ordered preferences generally lead to choices that do not abide by standard expected-utility guidelines; often such preferences are revealed by imprecision in probability values. We investigate five criteria for strategy selection in decision trees with imprecise probabilities: "extensive" Γ-maximin and Γ-maximax, interval dominance, maximality and E-admissibility. We present algorithms that generate strategies for all these criteria; our main contribution is an algorithm for E-admissibility that runs over admissible strategies rather than over sets of probability distributions.
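Two of these criteria can be sketched for a finite set of candidate probability distributions (a credal set); the formulation below is an assumed simplification, not the paper's algorithms. Γ-maximin picks the act with the best worst-case expected utility; interval dominance discards any act whose upper expectation falls below some act's lower expectation.

```python
# Gamma-maximin and interval dominance over a finite credal set.
# acts[i] lists the utility of act i in each state; each distribution in
# credal_set assigns a probability to each state.

def expectations(utilities, credal_set):
    """Expected utilities of one act under every distribution in the set."""
    return [sum(p * u for p, u in zip(dist, utilities)) for dist in credal_set]

def gamma_maximin(acts, credal_set):
    """Index of the act maximizing the minimum expected utility."""
    return max(range(len(acts)),
               key=lambda i: min(expectations(acts[i], credal_set)))

def interval_undominated(acts, credal_set):
    """Indices of acts not interval-dominated by any other act."""
    lo = [min(expectations(a, credal_set)) for a in acts]
    hi = [max(expectations(a, credal_set)) for a in acts]
    return [i for i in range(len(acts))
            if all(hi[i] >= lo[j] for j in range(len(acts)))]
```

Interval dominance is the weakest of the five criteria in the sense that it eliminates the fewest acts; maximality and E-admissibility refine the surviving set further.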

Relevance: 80.00%

Abstract:

This paper examines the role of higher-order moments in portfolio choice within an expected-utility framework. We consider two-, three-, four- and five-parameter density functions for portfolio returns and derive exact conditions under which investors would all be optimally plungers rather than diversifiers. Through comparative statics we show the importance of higher-order risk preference properties, such as riskiness, prudence and temperance, in determining plunging behaviour. Empirical estimates for the S&P500 provide evidence for the optimality of diversification.

Relevance: 80.00%

Abstract:

My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach to representing investors' preferences regarding extreme returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extreme outcomes. Given the failure of traditional expected-utility models to reproduce the observed stylized features of financial markets, the Prospect Theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected-utility paradigm by positing that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build a representative-agent optimization model whose solution reproduced some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new, and much of its potential remains to be explored. The three essays composing my thesis use and extend this setting to study investors' behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extreme thresholds are modeled and estimated. A new utility function that reconciles expected-utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery pursuit.
Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model taking into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a trade-off among the distribution's four moments. The specification highlights the preference for odd moments and the aversion to even moments. Qualitatively, optimal portfolios are analyzed in terms of firm characteristics; in a setting that reflects real-time asset allocation, systematic over-performance relative to the aggregate stock market is obtained. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor contingent on the loss-aversion level. Dynamic strategies therefore reduce to a replicating portfolio built from well-chosen exchange-traded options and the risky stock.

Relevance: 80.00%

Abstract:

Context and objective. Tax evasion generated annual losses of between 2 and 44 billion in Canada between 1976 and 1995. With the growth of tax evasion in the 1980s and 1990s, several legislatures attacked the phenomenon with measures such as amnesties, tax reforms and new laws. These measures rest on distinct theoretical principles, and their very effectiveness has been questioned. Although several authors claim that white-collar criminals are responsive to penal sanctions, this claim rests on little empirical evidence. The objective of this thesis is therefore to carry out a systematic review of evaluative studies in order to take stock of tax legislation and assess its effects on tax fraud. Methodology. The systematic review is considered the most rigorous methodology for drawing conclusions about the effect produced by a relatively homogeneous population of studies. Eighteen databases were consulted, and eight studies were retained out of 23,723 references. These eight studies contain nine evaluations that estimated the impact of laws on 17 indicators of tax fraud. All studies were coded according to the type of law and their methodological rigor. The vote-count method was used to assess the effectiveness of the laws. Results. Of the 17 indicators, seven indicate that the laws had no effect on tax evasion, while six show perverse effects. Only four results favor the laws, which suggests that they are not very effective. However, when the results are split by type of law, tax reforms appear to be an effective measure, unlike new laws and amnesties. Conclusion.
The results show that measures based on Becker's economic model which also make the system fairer are promising. Amnesties, which aim to reach fraudsters by offering them economic advantages and suspending penalties, are not only ineffective but may threaten the principle of self-assessment based on fairness.

Relevance: 80.00%

Abstract:

A classical argument of de Finetti holds that Rationality implies Subjective Expected Utility (SEU). In contrast, the Knightian distinction between Risk and Ambiguity suggests that a rational decision maker would obey the SEU paradigm when the information available is in some sense good, and would depart from it when the information available is not good. Unlike de Finetti's, however, this view does not rely on a formal argument. In this paper, we study the set of all information structures that might be available to a decision maker, and show that they are of two types: those compatible with SEU theory and those for which SEU theory must fail. We also show that the former correspond to "good" information, while the latter correspond to information that is not good. Thus, our results provide a formalization of the distinction between Risk and Ambiguity. As a consequence of our main theorem (Theorem 2, Section 8), behavior not conforming to SEU theory is bound to emerge in the presence of Ambiguity. We give two examples of situations of Ambiguity. One concerns the uncertainty on the class of measure-zero events; the other is a variation on Ellsberg's three-color urn experiment. We also briefly link our results to two other strands of literature: the study of ambiguous events and the problem of unforeseen contingencies. We conclude the paper by reconsidering de Finetti's argument in light of our findings.
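The Ellsberg three-color urn mentioned above can be checked numerically. The sketch below is an illustration, not the paper's formalism: an urn holds 30 red balls and 60 black-or-yellow balls in unknown proportion, and typical subjects bet on red over black yet on black-or-yellow over red-or-yellow. Scanning candidate subjective probabilities for black confirms that no single probability rationalizes both choices under SEU with a fixed stake.

```python
# Numeric check of the Ellsberg three-color urn: with P(red) = 1/3 fixed,
# betting on red over black requires P(black) < 1/3, while betting on
# black-or-yellow over red-or-yellow requires P(black) > 1/3.

def seu_consistent(p_black: float) -> bool:
    """Can SEU with P(red) = 1/3 and P(black) = p_black rationalize both bets?"""
    p_red = 1 / 3
    p_yellow = 2 / 3 - p_black
    prefers_red = p_red > p_black                        # bet 1: red over black
    prefers_by = p_black + p_yellow > p_red + p_yellow   # bet 2: B-or-Y over R-or-Y
    return prefers_red and prefers_by

# No candidate probability for black rationalizes the pair of choices:
assert not any(seu_consistent(k / 1000 * 2 / 3) for k in range(1001))
```

This is precisely the kind of behavior the paper attributes to Ambiguity: the information about the black/yellow composition is too poor to support a single subjective probability.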

Relevance: 80.00%

Abstract:

The philosophical implications of the 1979 Prospect Theory, notably those concerning the introduction of a value function over outcomes and a weighting coefficient over probabilities, have never been explored to date. The aim of this work is to build a philosophical theory of the will from the results of Prospect Theory. To understand how this theory could be developed, one must study Expected Utility Theory, of which it is the major critical culmination, that is, the axiomatizations of decision by Ramsey (1926), von Neumann and Morgenstern (1947), and finally Savage (1954), which constitute the foundations of classical decision theory. It was, among other things, the critique, by economics and cognitive psychology, of the independence principle and of the ordering and transitivity axioms that allowed the subjective representational elements to emerge from which Prospect Theory could be constructed. These critiques were carried out by Allais (1953), Edwards (1954), Ellsberg (1961), and finally Slovic and Lichtenstein (1968); studying these articles makes it possible to understand how the transition from Expected Utility Theory to Prospect Theory took place. Following these analyses and that of Prospect Theory, the notion of a Decisional Reference System is introduced, which is the natural generalization of the concepts of value function and weighting coefficient from Prospect Theory. This system, whose operation is sometimes heuristic, serves to model decision making within the element of representation; it is organized around three phases: aiming, editing and evaluation.
From this structure, a new typology of decisions is proposed, along with a novel explanation of the phenomena of akrasia and procrastination based on the concepts of risk aversion and overvaluation of the present, both drawn from Prospect Theory.
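For the probability-weighting coefficient central to Prospect Theory, one standard parametric form is Tversky and Kahneman's 1992 inverse-S function, shown here only as an illustration and not necessarily the form discussed in this work:

```python
# Inverse-S probability weighting: small probabilities are overweighted,
# moderate-to-large probabilities are underweighted.

def weight(p: float, gamma: float = 0.61) -> float:
    """Prospect-theory probability weighting w(p)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# weight(0.01) > 0.01 (rare events loom large) while weight(0.5) < 0.5.
```

The distortion of probabilities by such a coefficient, together with the value function over outcomes, is what the Decisional Reference System generalizes.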

Relevance: 80.00%

Abstract:

The concept of Ambiguity designates those situations where the information available to the decision maker is insufficient to form a probabilistic view of the world. Thus, it has provided the motivation for departing from the Subjective Expected Utility (SEU) paradigm. Yet the formalization of the concept is missing. This is a grave omission, as it leaves non-expected utility models on shaky ground. In particular, it leaves unanswered basic questions such as: (1) Does Ambiguity exist? (2) If so, which situations should be labeled as "ambiguous"? (3) Why should one depart from Subjective Expected Utility (SEU) in the presence of Ambiguity? (4) If so, what kind of behavior should emerge in the presence of Ambiguity? The present paper fills these gaps. Specifically, it identifies those information structures that are incompatible with SEU theory, and shows that their mathematical properties are the formal counterpart of the intuitive idea of insufficient information. These are used to give a formal definition of Ambiguity and, consequently, to distinguish between ambiguous and unambiguous situations. Finally, the paper shows that behavior not conforming to SEU theory must emerge in the presence of insufficient information, and identifies the class of non-EU models that emerge in the face of Ambiguity. The paper also proposes a new comparative definition of Ambiguity, and discusses its relation with some of the existing literature.

Relevance: 80.00%

Abstract:

Empirical evidence suggests that ambiguity is prevalent in insurance pricing and underwriting, and that often insurers tend to exhibit more ambiguity than the insured individuals (e.g., [23]). Motivated by these findings, we consider a problem of demand for insurance indemnity schedules, where the insurer has ambiguous beliefs about the realizations of the insurable loss, whereas the insured is an expected-utility maximizer. We show that if the ambiguous beliefs of the insurer satisfy a property of compatibility with the non-ambiguous beliefs of the insured, then there exist optimal monotonic indemnity schedules. By virtue of monotonicity, no ex-post moral hazard issues arise at our solutions (e.g., [25]). In addition, in the case where the insurer is either ambiguity-seeking or ambiguity-averse, we show that the problem of determining the optimal indemnity schedule reduces to that of solving an auxiliary problem that is simpler than the original one in that it does not involve ambiguity. Finally, under additional assumptions, we give an explicit characterization of the optimal indemnity schedule for the insured, and we show how our results naturally extend the classical result of Arrow [5] on the optimality of the deductible indemnity schedule.
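Arrow's deductible schedule referenced at the end can be stated in one line. The sketch below is a generic illustration, not the paper's generalized construction: the insurer pays nothing below the deductible d and the full excess above it.

```python
# Arrow-style deductible indemnity schedule: I(x) = max(x - d, 0).

def deductible_indemnity(loss: float, d: float) -> float:
    """Reimbursement for a realized loss under deductible d."""
    return max(loss - d, 0.0)
```

The schedule is monotone in the loss, which is why, as noted above, no ex-post moral hazard arises: a larger loss never yields a smaller reimbursement, so the insured has no incentive to misreport the loss.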