867 results for Conditional expected utility


Relevance:

100.00%

Abstract:

Let ℰ be a class of events. Conditional Expected Utility decision makers are decision makers whose conditional preferences ≿E, E ∈ ℰ, satisfy the axioms of Subjective Expected Utility theory (SEU). We extend the notion of an unconditional preference that is conditionally EU to unconditional preferences that are not necessarily SEU. We give a representation theorem for a class of such preferences, and show that they are Invariant Bi-separable in the sense of Ghirardato et al. [7]. Then, we consider the special case where the unconditional preference is itself SEU, and compare our results with those of Fishburn [6].

Relevance:

100.00%

Abstract:

Standard tools for the analysis of economic problems involving uncertainty, including risk premiums, certainty equivalents and the notions of absolute and relative risk aversion, are developed without making specific assumptions on functional form beyond the basic requirements of monotonicity, transitivity, continuity, and the presumption that individuals prefer certainty to risk. Individuals are not required to display probabilistic sophistication. The approach relies on the distance and benefit functions to characterize preferences relative to a given state-contingent vector of outcomes. The distance and benefit functions are used to derive absolute and relative risk premiums and to characterize preferences exhibiting constant absolute risk aversion (CARA) and constant relative risk aversion (CRRA). A generalization of the notion of Schur-concavity is presented. If preferences are generalized Schur concave, the absolute and relative risk premiums are generalized Schur convex, and the certainty equivalents are generalized Schur concave.
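As a concrete reference point for the notions above, the following minimal Python sketch (function names are ours, not from the paper) computes the certainty equivalent and absolute risk premium for the standard CARA utility u(x) = -exp(-a*x); under CARA, the premium is unchanged when a constant is added to every outcome, which is exactly the constancy the abstract refers to.

```python
import math

def cara_certainty_equivalent(outcomes, probs, a):
    """Certainty equivalent under CARA utility u(x) = -exp(-a*x):
    CE = -(1/a) * ln(E[exp(-a*X)])."""
    expected_u = sum(p * math.exp(-a * x) for x, p in zip(outcomes, probs))
    return -math.log(expected_u) / a

def absolute_risk_premium(outcomes, probs, a):
    """Absolute risk premium = E[X] - CE; positive when a > 0 (risk aversion)."""
    mean = sum(p * x for x, p in zip(outcomes, probs))
    return mean - cara_certainty_equivalent(outcomes, probs, a)

# A 50/50 gamble over 0 and 100 with CARA coefficient a = 0.01:
rp = absolute_risk_premium([0.0, 100.0], [0.5, 0.5], a=0.01)
```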

Relevance:

100.00%

Abstract:

This paper presents a personal view of the interaction between the analysis of choice under uncertainty and the analysis of production under uncertainty. Interest in the foundations of the theory of choice under uncertainty was stimulated by applications of expected utility theory such as the Sandmo model of production under uncertainty. This interest led to the development of generalized models including rank-dependent expected utility theory. In turn, the development of generalized expected utility models raised the question of whether such models could be used in the analysis of applied problems such as those involving production under uncertainty. Finally, the revival of the state-contingent approach led to the recognition of a fundamental duality between choice problems and production problems.

Relevance:

100.00%

Abstract:

In this paper, it is shown that, for a wide range of risk-averse generalized expected utility preferences, independent risks are complementary, contrary to the results for expected utility preferences satisfying conditions such as proper and standard risk aversion.

Relevance:

100.00%

Abstract:

This paper explores biases in the elicitation of utilities under risk and the contribution that generalizations of expected utility can make to the resolution of these biases. We used five methods to measure utilities under risk and found clear violations of expected utility. Of the theories studied, prospect theory was most consistent with our data. The main improvement of prospect theory over expected utility was in comparisons between a riskless and a risky prospect (riskless-risk methods). We observed no improvement over expected utility in comparisons between two risky prospects (risk-risk methods). An explanation for why we found no improvement of prospect theory over expected utility in risk-risk methods may be that there was less overweighting of small probabilities in our study than has commonly been observed.

Relevance:

100.00%

Abstract:

We generalize the classical expected-utility criterion by weakening transitivity to Suzumura consistency. In the absence of full transitivity, reflexivity and completeness no longer follow as a consequence of the system of axioms employed, and a richer class of rankings of probability distributions results. This class is characterized by means of standard expected-utility axioms in addition to Suzumura consistency. An important feature of some members of our new class is that they allow us to soften the negative impact of well-known paradoxes without abandoning the expected-utility framework altogether.

Relevance:

100.00%

Abstract:

This investigation attempts to answer the question of why more and more parents have chosen the Gymnasium for their children's secondary school education in post‐war West Germany. Based on the theory of subjective expected utility, the crucial mechanisms of parental educational decisions are emphasized. From this perspective, it is assumed that increasing educational motivation, coupled with changes in the subjective evaluation of the cost–benefit of education, were important conditions for increasing participation in upper secondary schools. These were, however, in turn, the result of educational expansion. The empirical analyses for three time periods in the 1960s, 1970s, and 1980s confirm these assumptions to a large degree. Additionally, empirical evidence suggests that, in addition to the intentions of parents and the educational career of their children, structural moments of educational expansion and its own inertia played an important role in pupils' transitions from one educational level to the next. Finally, evidence was found that persistent class‐specific educational inequality stems from a constant balance in the relative cost–benefit advantages between social classes, as well as from an increasing difference in primary origin effects between social classes in the realization of their educational choices.

Relevance:

100.00%

Abstract:

Four variations on the Two Envelope Paradox are stated and compared. The variations are employed to provide a diagnosis and an explanation of what has gone awry in the paradoxical modeling of the decision problem that the paradox poses. The canonical formulation of the paradox underdescribes the ways in which one envelope can have twice the amount that is in the other. Some ways one envelope can have twice the amount that is in the other make it rational to prefer the envelope that was originally rejected. Some do not, and it is a mistake to treat them alike. The nature of the mistake is diagnosed by the different roles that rigid designators and definite descriptions play in unproblematic and in untoward formulations of the decision tables that are employed in setting out the decision problem that gives rise to the paradox. The decision maker's knowledge or ignorance of how one envelope came to have twice the amount that is in the other determines which of the different ways of modeling his decision problem is correct. Under this diagnosis, the paradoxical modeling of the Two Envelope problem is incoherent.
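The core of the switching argument is easy to check numerically. A minimal simulation (ours, not from the paper): when the pair of amounts (x, 2x) is fixed before the player picks an envelope at random, keeping and switching have the same expected payoff.

```python
import random

def two_envelope_averages(trials=100_000, seed=0):
    """One envelope holds x, the other 2x; the player picks one at random.
    Returns the average payoff of 'keep' and of 'always switch'."""
    rng = random.Random(seed)
    keep_total = switch_total = 0.0
    for _ in range(trials):
        x = rng.uniform(1.0, 100.0)      # smaller amount, fixed before the pick
        envelopes = (x, 2.0 * x)
        pick = rng.randrange(2)
        keep_total += envelopes[pick]
        switch_total += envelopes[1 - pick]
    return keep_total / trials, switch_total / trials

keep_avg, switch_avg = two_envelope_averages()
```

Both averages converge to 1.5 times the mean of the smaller amount. The naive computation (1/2)·2A + (1/2)·(A/2) = 1.25A goes wrong because "A" does not designate the same quantity in the two cases, which is the kind of diagnosis the paper develops.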

Relevance:

90.00%

Abstract:

This paper provides a characterization of QALYs, the most important outcome measure in medical decision making, in the context of a general rank dependent utility model. We show that both for chronic and for nonchronic health states the characterization of QALYs depends on intuitive conditions. This facilitates the assessment of the validity of QALYs in rank dependent non-expected utility theories and a comparison with other utility based measures of health.

Relevance:

90.00%

Abstract:

Expected utility theory (EUT) has been challenged as a descriptive theory in many contexts, and medical decision analysis is no exception. Several researchers have suggested that rank dependent utility theory (RDUT) may accurately describe how people evaluate alternative medical treatments. Recent research in this domain has addressed a relevant feature of RDU models, probability weighting, but to date no direct test of this theory has been made. This paper provides a test of the main axiomatic difference between EUT and RDUT when health profiles are used as outcomes of risky treatments. Overall, EU best described the data. However, evidence of the editing and cancellation operations hypothesized in Prospect Theory and Cumulative Prospect Theory was apparent in our study. We found that RDU outperformed EU in the presentation of the risky treatment pairs in which the common outcome was not obvious. The influence of framing effects on the performance of RDU, and their importance as a topic for future research, is discussed.
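For reference, the object being tested, the rank-dependent utility of a finite prospect, can be computed directly. A minimal sketch (the function name and the power weighting function are our illustrative choices): outcomes are ranked from best to worst, and each receives the increment of the weighting function over the cumulative probability of better outcomes.

```python
def rdu(outcomes, probs, u, w):
    """Rank-dependent utility of a finite prospect: rank outcomes from best
    to worst; outcome i gets decision weight w(cum + p_i) - w(cum), where
    cum is the total probability of strictly better-ranked outcomes."""
    ranked = sorted(zip(outcomes, probs), key=lambda pair: pair[0], reverse=True)
    value, cum = 0.0, 0.0
    for x, p in ranked:
        value += (w(cum + p) - w(cum)) * u(x)
        cum += p
    return value

# Illustrative weighting w(p) = p**0.7 overweights the small probability
# of the best outcome relative to expected utility:
rdu_value = rdu([0.0, 100.0], [0.95, 0.05], u=lambda x: x, w=lambda p: p ** 0.7)
```

With the identity weighting w(p) = p, the formula collapses to expected utility; that collapse marks the axiomatic boundary between EUT and RDUT that the paper tests.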

Relevance:

90.00%

Abstract:

This paper argues that any specific utility or disutility for gambling must be excluded from expected utility because such a theory is consequential, while a pleasure or displeasure for gambling is a matter of process, not of consequences. A (dis)utility for gambling is modeled as a process utility which monotonically combines with expected utility restricted to consequences. This allows a process (dis)utility for gambling to be revealed. As an illustration, the model shows how empirical observations in the Allais paradox can reveal a process disutility of gambling. A more general model of rational behavior combining processes and consequences is then proposed and discussed.

Relevance:

90.00%

Abstract:

A trade-off between return and risk plays a central role in financial economics. The intertemporal capital asset pricing model (ICAPM) proposed by Merton (1973) provides a neoclassical theory for expected returns on risky assets. The model assumes that risk-averse investors (seeking to maximize their expected utility of lifetime consumption) demand compensation for bearing systematic market risk and the risk of unfavorable shifts in the investment opportunity set. Although the ICAPM postulates a positive relation between the conditional expected market return and its conditional variance, the empirical evidence on the sign of the risk-return trade-off is conflicting. In contrast, autocorrelation in stock returns is one of the most consistent and robust findings in empirical finance. While autocorrelation is often interpreted as a violation of market efficiency, it can also reflect factors such as market microstructure or time-varying risk premia. This doctoral thesis investigates the relation between the mixed risk-return trade-off results and autocorrelation in stock returns. The results suggest that, in the case of the US stock market, the relative contribution of the risk-return trade-off and autocorrelation in explaining the aggregate return fluctuates with volatility. This effect is then shown to be even more pronounced in the case of emerging stock markets. During high-volatility periods, expected returns can be described using rational (intertemporal) investors acting to maximize their expected utility. During low-volatility periods, market-wide persistence in returns increases, leading to a failure of traditional equilibrium-model descriptions for expected returns. Consistent with this finding, traditional models yield conflicting evidence concerning the sign of the risk-return trade-off.
The changing relevance of the risk-return trade-off and autocorrelation can be explained by heterogeneous agents or, more generally, by the inadequacy of the neoclassical view of asset pricing with unboundedly rational investors and perfect market efficiency. In the latter case, the empirical results imply that the neoclassical view is valid only under certain market conditions. This offers an economic explanation as to why it has been so difficult to detect a positive trade-off between the conditional mean and variance of the aggregate stock return. The results highlight the importance, especially in the case of emerging stock markets, of accounting for both the risk-return trade-off and autocorrelation in applications that require estimates of expected returns.
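The autocorrelation statistic at the center of the thesis is straightforward to compute. A minimal sketch on simulated data (the AR(1) parameters are illustrative, not estimates from the thesis):

```python
import numpy as np

def first_order_autocorrelation(returns):
    """Sample first-order autocorrelation of a return series."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    return float(r[:-1] @ r[1:] / (r @ r))

# Illustrative persistent returns r_t = phi * r_{t-1} + eps_t with phi = 0.3:
rng = np.random.default_rng(42)
eps = rng.normal(0.0, 0.01, size=5_000)
r = np.empty_like(eps)
r[0] = eps[0]
for t in range(1, len(eps)):
    r[t] = 0.3 * r[t - 1] + eps[t]
rho1 = first_order_autocorrelation(r)
```

The sample estimate recovers a value close to the persistence parameter phi; market-wide persistence of this kind is what the thesis finds to strengthen in low-volatility periods.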

Relevance:

90.00%

Abstract:

This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium as a measure of pessimism and proposes a methodology for estimating the derived measures. The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their empirical estimates. The problem is amplified when the number of assets is large and the empirical covariance matrix is singular or nearly singular. In this chapter, we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e., so as to minimize the expected utility loss. Specifically, a cross-validation criterion that takes the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the rule that uses the data directly and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio.
These performances are measured in-sample and out-of-sample, for different sample sizes and numbers of assets. The simulations and empirical illustration show that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially in cases where the estimation-error problem is severe. In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk. To that end, we use monthly returns of 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM), in which currency risk is decomposed into two factors representing the currencies of industrialized countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in exposure to currency risk, while the contribution of the currency risk premium remains unchanged on average. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature showing that the risk premium is important at both the industry and the country level in most countries.
In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk-premium measure derived within expected utility theory, which is frequently violated in both experimental and real-world settings. Within the large family of preferences considered, particular attention is paid to CVaR (conditional value at risk). This risk measure is increasingly used for portfolio construction and is recommended as a complement to VaR (value at risk), which the Basel Committee has used since 1996. In addition, we provide the statistical framework needed to conduct inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
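The ridge variant of the covariance regularization described in the first chapter can be sketched as follows. This is a simplified illustration on simulated data with a fixed tuning parameter; the thesis instead selects the parameter by a data-driven cross-validation criterion, and also considers spectral cut-off, Landweber-Fridman, and LARS Lasso.

```python
import numpy as np

def ridge_markowitz_weights(returns, lam):
    """Mean-variance weights w proportional to (Sigma + lam*I)^{-1} mu,
    normalized to sum to one. The ridge term lam*I stabilizes the inverse
    when the sample covariance Sigma is nearly singular (many assets,
    few observations)."""
    mu = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    w = np.linalg.solve(sigma + lam * np.eye(sigma.shape[0]), mu)
    return w / w.sum()

# Illustrative data: 60 periods of 50 assets, so Sigma is badly conditioned.
rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(60, 50))
w_ridge = ridge_markowitz_weights(returns, lam=0.1)
```

Adding lam*I lowers the condition number of the matrix being inverted, which is why the regularized rule is less sensitive to estimation error than the plug-in Markowitz rule.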

Relevance:

90.00%

Abstract:

The aim of this study is to verify whether taking higher-order moments (skewness and kurtosis) into account in the allocation of a carry-trade portfolio yields gains over the traditional allocation, which prioritizes only the first two moments (mean and variance). The research hypothesis is that carry-trade currencies exhibit returns with a non-Normal distribution, whose higher-order moments have dynamics that can be modeled by a GARCH-family model, in this case IC-GARCHSK. This model consists of one equation for each conditional moment of the independent components, namely the return, the variance, the skewness, and the kurtosis. Another hypothesis is that an investor with a CARA (constant absolute risk aversion) utility function can have it approximated by a 4th-order Taylor expansion. The strategy of the study is to model the dynamics of the moments of the series of natural logarithms of the daily returns of several carry-trade currencies with the IC-GARCHSK model, and to estimate the optimal portfolio allocation dynamically so as to maximize the investor's utility function. The results show that there are indeed gains from taking higher-order moments into account, since the opportunity cost of this portfolio was lower than that of a portfolio constructed using only mean and variance as criteria.
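The 4th-order Taylor approximation of expected CARA utility mentioned above can be written out explicitly. A minimal sketch (ours, not the study's estimation code): for u(W) = -exp(-a*W), expanding around the mean gives E[u(W)] ≈ u(mu) + (1/2)u''(mu)var + (1/6)u'''(mu)m3 + (1/24)u''''(mu)m4, with m3 and m4 the third and fourth central moments.

```python
import math

def cara_taylor4(mu, var, m3, m4, a):
    """4th-order Taylor approximation of E[-exp(-a*W)] around the mean mu,
    using central moments var = E[(W-mu)^2], m3 = E[(W-mu)^3],
    m4 = E[(W-mu)^4]. Positive skewness (m3 > 0) raises approximate
    utility; heavy tails (large m4) lower it."""
    e = math.exp(-a * mu)
    return (-e                                # u(mu)
            - 0.5 * a ** 2 * e * var          # (1/2)  u''(mu)   * var
            + (a ** 3 * e / 6.0) * m3         # (1/6)  u'''(mu)  * m3
            - (a ** 4 * e / 24.0) * m4)       # (1/24) u''''(mu) * m4
```

For a Normal distribution (m3 = 0, m4 = 3*var**2) the approximation can be checked against the exact value E[-exp(-a*W)] = -exp(-a*mu + a**2*var/2).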

Relevance:

90.00%

Abstract:

This note shows that, under appropriate conditions, preferences may be locally approximated by the linear utility or risk-neutral preference functional associated with a local probability transformation.