987 results for Subjective expected utility


Relevance: 80.00%

Abstract:

This paper examines the role of higher-order moments in portfolio choice within an expected-utility framework. We consider two-, three-, four- and five-parameter density functions for portfolio returns and derive exact conditions under which all investors would optimally be plungers rather than diversifiers. Through comparative statics we show the importance of higher-order risk-preference properties, such as riskiness, prudence and temperance, in determining plunging behaviour. Empirical estimates for the S&P 500 provide evidence for the optimality of diversification.
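A fourth-order Taylor expansion of expected utility around mean wealth makes the role of these properties concrete: variance and kurtosis are penalized (risk aversion, temperance) while skewness is rewarded (prudence). The sketch below is a minimal illustration assuming CRRA utility and simulated heavy-tailed returns, not the paper's two- to five-parameter densities.

```python
import numpy as np

def crra(w, gamma):
    """CRRA utility; u'' < 0 (risk aversion), u''' > 0 (prudence),
    u'''' < 0 (temperance)."""
    return w ** (1 - gamma) / (1 - gamma)

def taylor_eu(returns, w0=1.0, gamma=3.0):
    """Fourth-order Taylor approximation of E[u(W)] around mean wealth:
    variance and kurtosis enter with negative weights, the third
    moment with a positive weight, for a CRRA investor."""
    w = w0 * (1 + returns)
    mu = w.mean()
    d = w - mu
    u2 = -gamma * mu ** (-gamma - 1)
    u3 = gamma * (gamma + 1) * mu ** (-gamma - 2)
    u4 = -gamma * (gamma + 1) * (gamma + 2) * mu ** (-gamma - 3)
    return (crra(mu, gamma) + u2 * (d ** 2).mean() / 2
            + u3 * (d ** 3).mean() / 6 + u4 * (d ** 4).mean() / 24)

rng = np.random.default_rng(0)
heavy_tailed = rng.standard_t(df=5, size=100_000) * 0.02 + 0.01
print(taylor_eu(heavy_tailed))
```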

Relevance: 80.00%

Abstract:

My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioural approach to representing investors' preferences over extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, an approach first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. Given the failure of traditional expected-utility models to reproduce the observed stylized features of financial markets, the prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected-utility paradigm by positing that people focus on gains and losses rather than on final positions. In this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build representative-agent optimization models whose solutions reproduce some of the observed risk premia and excess volatility. Research in behavioural finance is relatively new, and much of its potential remains to be explored. The three essays composing my thesis use and extend this setting to study investors' behaviour and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent, defined relative to time-varying positive and negative extremal thresholds, are modelled and estimated. A new utility function that reconciles expected-utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery-pursuit. Assuming an economy with a single risky asset, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset-allocation model that takes into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a trade-off among the distribution's four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation the strategy systematically outperforms the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside-loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss-aversion level. Dynamic strategies therefore reduce to a replicating portfolio of exchange-traded, suitably selected options and the risky stock.
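The four-moment trade-off in the second essay can be illustrated with a simple portfolio criterion that rewards odd moments and penalizes even ones. This is a rough sketch under assumed preference weights (a, b, c) and simulated Student-t returns, not the essay's skewed t specification.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
# Simulated heavy-tailed returns on three assets (a stand-in for the
# skewed t distribution used in the essay).
R = rng.standard_t(df=4, size=(5000, 3)) * 0.02 + np.array([0.008, 0.010, 0.012])

def neg_objective(w, a=2.0, b=5.0, c=10.0):
    """Four-moment criterion: preference for odd moments (mean, third
    moment), aversion to even moments (variance, fourth moment).
    The weights a, b, c are purely illustrative."""
    p = R @ w
    d = p - p.mean()
    return -(p.mean() - a * (d ** 2).mean()
             + b * (d ** 3).mean() - c * (d ** 4).mean())

n = R.shape[1]
res = minimize(neg_objective, np.ones(n) / n,
               bounds=[(0.0, 1.0)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print(res.x)  # optimal weights trading off the four moments
```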

Relevance: 80.00%

Abstract:

Context and objective. Tax evasion generated annual losses ranging from 2 to 44 billion in Canada between 1976 and 1995. As tax evasion grew during the 1980s and 1990s, several jurisdictions attacked the phenomenon with measures such as amnesties, tax reforms and new laws. These measures rest not only on distinct theoretical principles; their very effectiveness has been called into question. Although several authors claim that white-collar criminals are responsive to penal sanctions, this claim rests on little empirical evidence. The objective of this thesis is therefore to carry out a systematic review of evaluative studies in order to take stock of tax legislation and assess its effects on tax fraud. Methodology. The systematic review is considered the most rigorous methodology for drawing conclusions about the effect produced by a relatively homogeneous population of studies. Eighteen databases were searched, and eight studies were retained out of 23,723 references. These eight studies contain nine evaluations that estimated the impact of legislation on 17 indicators of tax fraud. All studies were coded according to the type of law and their methodological rigour. The vote-count method was used to assess the effectiveness of the laws. Results. Of the 17 indicators, seven show that the laws had no effect on tax evasion, while six show perverse effects. Only four results favour the laws, which suggests that they are not very effective. However, when the results are split by type of law, tax reforms appear to be an effective measure, unlike new laws and amnesties. Conclusion. The results show that measures based on Becker's economic model that make the system more equitable are promising. Amnesties that seek to reach fraudsters by offering them economic advantages and suspending penalties are not only ineffective but may threaten the self-assessment principle, which is based on equity.
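The vote-count tally underlying the results section is mechanical and easy to reproduce. The sketch below encodes only the aggregate counts reported above (seven null, six perverse, four favorable); the mapping of individual indicators to studies and law types is not given in the abstract.

```python
from collections import Counter

# The 17 indicator-level results reported above.
results = ["null"] * 7 + ["perverse"] * 6 + ["favorable"] * 4

tally = Counter(results)
print(tally)
# Vote-count rule: a measure is judged effective only when favorable
# results clearly outnumber null and perverse ones -- here 4 of 17,
# hence the overall verdict that the laws were largely ineffective.
```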

Relevance: 80.00%

Abstract:

This doctoral thesis consists of three chapters dealing with large-scale portfolio choice and risk measurement. The first chapter addresses the problem of estimation error in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium and a measure of pessimism, and proposes a methodology for estimating the derived measures. The first chapter improves optimal portfolio choice within Markowitz's (1952) mean-variance framework. This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates, a problem that is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e. so as to minimize the expected loss of utility. Specifically, we derive a cross-validation criterion that takes the same form for all four regularization methods. The resulting regularized rules are then compared with the plug-in rule and with the naive 1/N strategy in terms of expected utility loss and Sharpe ratio. These performances are measured in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially in cases where the estimation-error problem is very severe. In the second chapter we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk. To do so we use monthly returns on 48 US industries over the period 1976-2008. To address the instability inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of stable optimal portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM) in which currency risk is decomposed into two factors representing the currencies of industrialized countries on the one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios. Moreover, these strategies lead to a significant reduction in exposure to currency risk, while the contribution of the currency risk premium remains unchanged on average. Optimal portfolio weights are an alternative to market-capitalization weights, so this chapter complements the literature showing that the currency risk premium is important at the industry level and at the country level in most countries. In the last chapter we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk premium derived under expected-utility theory, which is frequently violated in experimental as well as real-world settings. Within the broad family of preferences considered, particular attention is paid to the CVaR (conditional value-at-risk). This risk measure is increasingly used for portfolio construction and is recommended as a complement to the VaR (value-at-risk) used since 1996 by the Basel Committee. In addition, we provide the statistical framework needed for inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock market returns over the period 2000-2011.
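As a minimal illustration of the first chapter's theme, the sketch below applies one of the four schemes (ridge) to the sample covariance of simulated returns and picks the tuning parameter by maximizing mean-variance utility on held-out data; the paper instead derives a unified cross-validation criterion covering all four methods.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 100                       # few observations, many assets:
R = rng.normal(0.008, 0.05, (T, N))   # the sample covariance is near-singular

mu_hat = R.mean(axis=0)
S = np.cov(R, rowvar=False)

def ridge_weights(tau, gamma=5.0):
    """Markowitz rule w = (1/gamma) * inv(S + tau*I) @ mu_hat with a
    ridge-regularized covariance; tau is the tuning parameter."""
    return np.linalg.solve(S + tau * np.eye(N), mu_hat) / gamma

# Select tau on a grid by mean-variance utility on held-out returns
# (a stand-in for the paper's cross-validation criterion).
R_out = rng.normal(0.008, 0.05, (T, N))
def oos_utility(w, gamma=5.0):
    p = R_out @ w
    return p.mean() - 0.5 * gamma * p.var()

taus = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
best_tau = max(taus, key=lambda t: oos_utility(ridge_weights(t)))
print("selected tau:", best_tau)
```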

Relevance: 80.00%

Abstract:

Empirical evidence suggests that ambiguity is prevalent in insurance pricing and underwriting, and that often insurers tend to exhibit more ambiguity than the insured individuals (e.g., [23]). Motivated by these findings, we consider a problem of demand for insurance indemnity schedules, where the insurer has ambiguous beliefs about the realizations of the insurable loss, whereas the insured is an expected-utility maximizer. We show that if the ambiguous beliefs of the insurer satisfy a property of compatibility with the non-ambiguous beliefs of the insured, then there exist optimal monotonic indemnity schedules. By virtue of monotonicity, no ex-post moral hazard issues arise at our solutions (e.g., [25]). In addition, in the case where the insurer is either ambiguity-seeking or ambiguity-averse, we show that the problem of determining the optimal indemnity schedule reduces to that of solving an auxiliary problem that is simpler than the original one in that it does not involve ambiguity. Finally, under additional assumptions, we give an explicit characterization of the optimal indemnity schedule for the insured, and we show how our results naturally extend the classical result of Arrow [5] on the optimality of the deductible indemnity schedule.
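Arrow's deductible schedule, which the final result generalizes, is easy to state in code. The sketch below assumes exponentially distributed losses, CRRA utility for the insured, and an expected-value premium with a 20% loading; the paper's ambiguity-adjusted pricing is not modelled.

```python
import numpy as np

rng = np.random.default_rng(3)
losses = rng.exponential(scale=10.0, size=100_000)   # insurable loss X

def deductible_indemnity(x, d):
    """Arrow-style schedule I(x) = max(x - d, 0): monotone in the loss,
    so no ex-post moral hazard issue arises."""
    return np.maximum(x - d, 0.0)

def insured_eu(d, wealth=100.0, gamma=2.0, loading=0.2):
    """Expected CRRA utility of the insured under deductible d, with an
    expected-value premium at the given loading (an assumption; the
    paper prices under the insurer's ambiguous beliefs)."""
    indemnity = deductible_indemnity(losses, d)
    premium = (1.0 + loading) * indemnity.mean()
    w = wealth - premium - losses + indemnity
    return (w ** (1 - gamma) / (1 - gamma)).mean()

for d in (0.0, 2.0, 5.0, 10.0):
    print(f"deductible {d:5.1f}: EU = {insured_eu(d):.6f}")
```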


Relevance: 80.00%

Abstract:

The paper reviews recent models that have applied the techniques of behavioural economics to the analysis of the tax compliance choice of an individual taxpayer. The construction of these models is motivated by the failure of the Yitzhaki version of the Allingham–Sandmo model to predict correctly the proportion of taxpayers who will evade and the effect of an increase in the tax rate upon the chosen level of evasion. Recent approaches have applied non-expected utility theory to the compliance decision and have addressed social interaction. The models we describe are able to match the observed extent of evasion and correctly predict the tax effect but do not have the parsimony or precision of the Yitzhaki model.
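The Yitzhaki benchmark that these behavioural models are measured against can be written down directly: the taxpayer chooses how much income to declare when the fine is levied on evaded tax. The sketch below, with illustrative CRRA preferences and enforcement parameters, reproduces the counterintuitive comparative static that declared income rises (evasion falls) as the tax rate increases.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_declared(W=100.0, t=0.3, p=0.3, f=2.0, gamma=2.0):
    """Yitzhaki variant of Allingham-Sandmo: the fine f applies to the
    evaded tax t*(W - X), not to undeclared income. All parameter
    values are illustrative."""
    u = lambda w: w ** (1 - gamma) / (1 - gamma)   # CRRA (DARA) utility
    def neg_eu(X):
        not_caught = u(W - t * X)
        caught = u(W - t * X - f * t * (W - X))
        return -((1 - p) * not_caught + p * caught)
    return minimize_scalar(neg_eu, bounds=(0.0, W), method="bounded").x

for t in (0.2, 0.3, 0.4):
    X = optimal_declared(t=t)
    print(f"tax rate {t}: declared {X:.1f}, evaded {100 - X:.1f}")
```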

Relevance: 80.00%

Abstract:

In this article we review the evolution of economic theory on decision making under uncertainty. After a brief reference to expected utility theory, we turn to the behavioural paradoxes that forced theorists to adopt less restrictive approaches capable of explaining a broader spectrum of phenomena. The complexity entailed in the new theories requires a multidimensional description of human attitudes towards risk. Nevertheless, the measurement of these attitudes has not followed the desired path, with most elicitation methods remaining uni-dimensional.
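The best-known of these behavioural paradoxes, Allais's common-consequence problem, can be verified numerically. In the sketch below the two pairwise comparisons differ by the same algebraic term, so any expected-utility maximizer must rank them consistently; the commonly observed A-and-D choice pattern is therefore incompatible with expected utility. The payoffs are the standard textbook ones and the CRRA family is an assumption for illustration.

```python
import numpy as np

# Allais common-consequence lotteries: (payoff in millions, probability).
A = [(1.0, 1.00)]
B = [(5.0, 0.10), (1.0, 0.89), (0.0, 0.01)]
C = [(1.0, 0.11), (0.0, 0.89)]
D = [(5.0, 0.10), (0.0, 0.90)]

def eu(lottery, u):
    return sum(p * u(x) for x, p in lottery)

def crra(x, g):
    x = x + 0.1                      # shift keeps u finite at zero payoff
    return np.log(x) if g == 1 else x ** (1 - g) / (1 - g)

# Algebraically EU(A) - EU(B) == EU(C) - EU(D) for ANY utility u, so an
# expected-utility maximizer cannot choose A over B and D over C -- yet
# that is the modal pattern observed in experiments.
for g in (0.5, 1.0, 2.0, 5.0):
    u = lambda x, g=g: crra(x, g)
    print(g, eu(A, u) > eu(B, u), eu(C, u) > eu(D, u))
```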

Relevance: 80.00%

Abstract:

One important aspect of the economic theory of criminal court delay is to understand how the prosecutor and the defendant make their decisions, and how these respond to changes in trial delay. If both parties jointly maximise expected utility, trial delay may increase or decrease the number of trials, depending upon the decision makers' attitudes towards risk. The main policy implication is that providing the criminal courts with more resources, in the form of additional judges and court capacity, may lengthen the trial queue rather than shorten it, a counterintuitive result contrary to popular belief.
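A stylized one-party illustration of the mechanism: delay discounts the trial outcome, and how strongly the certainty equivalent of going to trial falls with delay depends on the party's risk attitude. The parameters and utility functions below are invented for illustration and do not reproduce the paper's joint-maximization model.

```python
import numpy as np

def certainty_equivalent(u, u_inv, p_win, award, delay, r=0.08):
    """CE of a risky trial outcome received after `delay` years,
    discounted at rate r; u / u_inv encode the risk attitude."""
    expected_u = p_win * u(np.exp(-r * delay) * award)
    return u_inv(expected_u)

risk_averse = (np.sqrt, lambda v: v ** 2)       # concave utility
risk_seeking = (lambda w: w ** 2, np.sqrt)      # convex utility

for delay in (0.5, 1.0, 2.0):
    ce_a = certainty_equivalent(*risk_averse, 0.5, 100.0, delay)
    ce_s = certainty_equivalent(*risk_seeking, 0.5, 100.0, delay)
    print(f"delay {delay}: CE averse {ce_a:.1f}, CE seeking {ce_s:.1f}")
# Whether longer delay expands or shrinks the set of settlements both
# sides prefer to trial depends on these curvatures, which is why more
# court capacity need not shorten the queue.
```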

Relevance: 80.00%

Abstract:

In a multi-agent environment, an agent often needs to cooperate with others to ensure that a given task is completed on time and cost-effectively. Present agent systems maximize this through mechanisms such as trust and risk assessments. In this paper, we extend these mechanisms by introducing the concept of insurance, in which insurance agents act as a bridge between agents who require resources from others. Unlike in traditional systems, agents purchase insurance to guarantee that they have the requested resources during task execution, thereby minimizing the risk of task failure. The novelty of this proposal is that it ensures that agents continuously exchange resources while seeking maximum expected utility in a dynamic environment. Our experimental results confirm the feasibility of our approach.
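A minimal numerical sketch of the decision this mechanism supports: an agent compares the expected utility of requesting a resource uninsured against requesting it under an insurance contract (premium paid up front, payout on provider failure). All quantities are illustrative assumptions, not the paper's protocol.

```python
def expected_utility(task_value, cost, p_fail, premium=None, payout=None):
    """Expected utility of requesting a resource, uninsured vs insured.
    p_fail stands in for the trust/risk assessment of the provider;
    the numbers are illustrative."""
    if premium is None:                       # uninsured request
        return (1 - p_fail) * (task_value - cost) + p_fail * (-cost)
    return ((1 - p_fail) * (task_value - cost - premium)
            + p_fail * (payout - cost - premium))

p_fail = 0.2
print("uninsured:", expected_utility(10.0, 3.0, p_fail))        # 5.0
print("insured:  ", expected_utility(10.0, 3.0, p_fail,
                                     premium=1.0, payout=8.0))  # 5.6
```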

Relevance: 80.00%

Abstract:

In multi-agent systems, an agent often needs to cooperate with others to ensure that a given task is completed on time and cost-effectively. Current multi-agent systems maximize this through mechanisms such as coalition formation and trust and risk assessments. In this paper, we incorporate the concept of insurance with trust and risk mechanisms in multi-agent systems. The novelty of this proposal is that it ensures continuous sharing of resources while encouraging expected utility to be maximized in a dynamic environment. Our experimental results confirm the feasibility of our approach.

Relevance: 80.00%

Abstract:

This study is about store names as brand signals. It focuses on the effects of store name investments on store name credibility and perceived store quality. Using the theoretical framework of Erdem and Swait (1998), hypotheses are developed vis-à-vis the effects of store name investments on consumers’ perceived store quality. The proposed hypotheses are empirically tested on data collected from a sample of students. The study is part of a project that looks at how store name and brand name credibility affect consumers’ expected utility.

Relevance: 80.00%

Abstract:

In this paper, we propose buying and selling models for agents trading in an open multi-agent marketplace. Unlike in auctions, we take into account the fact that agents trading in such open environments have to maximize their profits and, at the same time, protect themselves from fraud and deception. We address this issue by incorporating trust and risk management into the proposed buying and selling models. When buying, agents learn to select their partners based on the trustworthiness of the potential partner as well as their own risk attitude. When selling, agents learn to increase their chances of winning a deal by adjusting their profit rate, a measure that considers both trust and risk. The novelty of this proposal is that it ensures that agents continue to seek maximum expected utility in a dynamic trading environment. Our experimental results confirm the feasibility of our approach.
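The selling side of the model can be caricatured in a few lines: each seller adjusts its profit rate after winning or losing a deal, while the buyer scores quotes by price and trustworthiness. The scoring rule, the update constants and the class names below are invented for illustration and are not the paper's learning scheme.

```python
class Seller:
    """Seller adjusts its profit rate after each deal: lower it after
    losing a bid, raise it cautiously after winning (invented rule)."""
    def __init__(self, cost, rate):
        self.cost, self.rate = cost, rate
    def quote(self):
        return self.cost * (1 + self.rate)
    def feedback(self, won):
        self.rate = max(0.0, self.rate + (0.02 if won else -0.05))

def buyer_score(price, trust, risk_attitude=1.0):
    """Buyer prefers cheap, trustworthy partners; risk_attitude scales
    how heavily trust is weighted (illustrative scoring rule)."""
    return trust ** risk_attitude / price

sellers = [Seller(10.0, 0.4), Seller(11.0, 0.2)]
trust = [0.9, 0.6]
for round_ in range(5):
    scores = [buyer_score(s.quote(), t) for s, t in zip(sellers, trust)]
    winner = scores.index(max(scores))
    for i, s in enumerate(sellers):
        s.feedback(i == winner)
    print(round_, [round(s.rate, 2) for s in sellers])
```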

Relevance: 80.00%

Abstract:

We use a human-subjects experiment to investigate how bargaining outcomes are affected by changes in bargainers’ disagreement payoffs. Subjects bargain against changing opponents, with randomly drawn asymmetric disagreement outcomes that vary over plays of the game, and with complete information about disagreement payoffs and the cake size. We find that subjects only respond about half as much as theoretically predicted to changes in their own disagreement payoff and to changes in their opponent’s disagreement payoff. This effect is observed in a standard Nash demand game and a related unstructured bargaining game, in both early and late rounds, and is robust to moderate changes in stake sizes. We show theoretically that standard models of expected utility maximisation are unable to account for this under-responsiveness, even when generalised to allow for risk aversion. We also show that quantal-response equilibrium has, at best, mixed success in characterising our results. However, a simple model of other-regarding preferences can explain our main results.
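The theoretical benchmark here is the split-the-surplus prediction of the symmetric Nash bargaining solution: a one-unit increase in a bargainer's own disagreement payoff should raise their share by half a unit and lower the opponent's share by the same amount. A minimal sketch:

```python
def nash_prediction(cake, d1, d2):
    """Symmetric Nash bargaining solution: each bargainer gets their
    disagreement payoff plus half the remaining surplus."""
    surplus = cake - d1 - d2
    return d1 + surplus / 2, d2 + surplus / 2

# A one-unit rise in player 1's disagreement payoff shifts half a unit
# from player 2 to player 1 in theory; subjects respond about half as
# much as this.
print(nash_prediction(10.0, 2.0, 3.0))  # (4.5, 5.5)
print(nash_prediction(10.0, 3.0, 3.0))  # (5.0, 5.0)
```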

Relevance: 80.00%

Abstract:

Dynamic treatment regimes are set rules for sequential decision making based on patient covariate history. Observational studies are well suited to investigating the effects of dynamic treatment regimes because of the variability in treatment decisions they contain: different physicians make different decisions in the face of similar patient histories. In this article we describe an approach to estimating the optimal dynamic treatment regime among a set of enforceable regimes. This set comprises regimes defined by simple rules based on a subset of past information, and the regimes in the set are indexed by a Euclidean vector. The optimal regime is the one that maximizes the expected counterfactual utility over all regimes in the set. We discuss assumptions under which it is possible to identify the optimal regime from observational longitudinal data. Murphy et al. (2001) developed efficient augmented inverse probability weighted estimators of the expected utility of one fixed regime. Our methods are based on an extension of the marginal structural mean model of Robins (1998, 1999) that incorporates the estimation ideas of Murphy et al. (2001). Our models, which we call dynamic regime marginal structural mean models, are especially suitable for estimating the optimal treatment regime in a moderately small class of enforceable regimes of interest. We consider both parametric and semiparametric dynamic regime marginal structural models. We discuss locally efficient, double-robust estimation of the model parameters and of the index of the optimal treatment regime in the set. In a companion paper in this issue of the journal we provide proofs of the main results.
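The building block of the approach, the inverse probability weighted estimate of one regime's expected utility (Murphy et al., 2001), can be sketched for a single decision point with simulated data; the regime class "treat iff the covariate exceeds a threshold theta" plays the role of the Euclidean index. The data-generating process below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
x = rng.normal(size=n)                    # patient covariate
p_treat = 1 / (1 + np.exp(-x))            # physicians vary: P(A=1 | X)
a = rng.binomial(1, p_treat)              # observed treatment decision
y = 2 * a * (x > 0) - a * (x <= 0) + rng.normal(size=n)  # utility outcome

def ipw_value(theta):
    """IPW estimate of E[Y] under the regime 'treat iff x > theta':
    keep subjects whose observed treatment follows the regime, weighted
    by the inverse probability of the treatment they received."""
    regime_a = (x > theta).astype(int)
    follows = (a == regime_a)
    prob_obs = np.where(a == 1, p_treat, 1 - p_treat)
    return np.mean(follows * y / prob_obs)

# Search the simple regime class indexed by theta; the truth favors 0.
thetas = np.linspace(-1.5, 1.5, 13)
best = max(thetas, key=ipw_value)
print("estimated optimal threshold:", round(float(best), 2))
```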