159 results for utility theory
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of a satisfactory way to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broader scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results on asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model that addresses some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility we use nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation undertaken to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that the realized returns feature better distributional characteristics than the realized returns of portfolio strategies that are optimal with respect to a single performance measure. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization with respect to only, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, i.e. the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
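To make the comparison concrete, here is a minimal sketch (not the thesis code) of the two checks described above: a two-sample Kolmogorov-Smirnov test for whether the return distributions differ, and a pointwise comparison of expected shortfalls (the absolute Lorenz curve) as a second-order stochastic dominance check. The return series are simulated placeholders.

```python
import numpy as np
from scipy import stats

def expected_shortfall_curve(returns, quantiles):
    """Average of the worst returns up to each quantile (absolute Lorenz curve, rescaled by q)."""
    sorted_r = np.sort(returns)
    n = len(sorted_r)
    return np.array([sorted_r[: max(1, int(np.ceil(q * n)))].mean() for q in quantiles])

# Hypothetical daily portfolio returns: aggregated measure vs. a single measure.
rng = np.random.default_rng(0)
r_aggregated = rng.normal(0.0006, 0.010, 1000)
r_single = rng.normal(0.0004, 0.012, 1000)

# First step: are the two return distributions different at all?
ks = stats.ks_2samp(r_aggregated, r_single)
print(f"KS statistic = {ks.statistic:.3f}, p-value = {ks.pvalue:.3f}")

# Second-order check: pointwise comparison of expected shortfalls.
qs = np.linspace(0.01, 1.0, 100)
es_agg = expected_shortfall_curve(r_aggregated, qs)
es_single = expected_shortfall_curve(r_single, qs)
if np.all(es_agg >= es_single):
    print("Aggregated strategy second-order stochastically dominates the single-measure one.")
```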
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome, and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
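As an illustration of the quantities involved, the following sketch computes the classical net benefit for the treated at a given threshold probability, together with an analogue for the untreated; the function name, the data, and the final combination of the two components into an "overall" figure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def net_benefits(y_true, risk_scores, p_t):
    """Net benefit for the treated and an analogue for the untreated at threshold p_t.

    Subjects are classified as 'treat' when the predicted risk is at least p_t,
    following the usual decision curve analysis convention."""
    y_true = np.asarray(y_true, dtype=bool)
    treat = np.asarray(risk_scores) >= p_t
    n = len(y_true)

    tp = np.sum(treat & y_true) / n
    fp = np.sum(treat & ~y_true) / n
    tn = np.sum(~treat & ~y_true) / n
    fn = np.sum(~treat & y_true) / n

    nb_treated = tp - fp * p_t / (1 - p_t)      # classical net benefit
    nb_untreated = tn - fn * (1 - p_t) / p_t    # assumed analogue for the untreated
    return nb_treated, nb_untreated

# Hypothetical outcomes and predicted risks from some model.
y = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
risk = [0.8, 0.2, 0.4, 0.7, 0.1, 0.9, 0.3, 0.2, 0.6, 0.5]
nb_t, nb_u = net_benefits(y, risk, p_t=0.3)
# One possible 'overall' figure (an assumption): combine the two components.
print(nb_t, nb_u, nb_t + nb_u)
```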
Abstract:
We study the interaction between nonprice public rationing and prices in the private market. Under a limited budget, the public supplier uses a rationing policy. A private firm may supply the good to those consumers who are rationed by the public system. Consumers have different amounts of wealth, and the costs of providing the good to them vary. We consider two regimes. In the first, the public supplier observes consumers' wealth information; in the second, the public supplier observes both wealth and cost information. The public supplier chooses a rationing policy, and, simultaneously, the private firm, observing only cost but not wealth information, chooses a pricing policy. In the first regime, there is a continuum of equilibria. The Pareto-dominant equilibrium is a means-test equilibrium: poor consumers are supplied while rich consumers are rationed. Prices in the private market increase with the budget. In the second regime, there is a unique equilibrium. This exhibits a cost-effectiveness rationing rule: consumers are supplied if and only if their cost-benefit ratios are low. Prices in the private market do not change with the budget. Equilibrium consumer utility is higher in the cost-effectiveness equilibrium than in the means-test equilibrium. [Authors]
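A minimal sketch of one way a cost-effectiveness rationing rule under a fixed budget could look: rank consumers by cost-benefit ratio and supply them, cheapest per unit of benefit first, until the budget is exhausted. This is an interpretation for illustration, not the authors' model; the data are made up.

```python
def cost_effectiveness_rationing(consumers, budget):
    """Supply consumers in increasing order of cost/benefit until the budget
    is exhausted; returns the set of supplied consumer ids.

    `consumers` is a list of (id, cost, benefit) tuples (illustrative data)."""
    supplied = set()
    remaining = budget
    for cid, cost, benefit in sorted(consumers, key=lambda c: c[1] / c[2]):
        if cost <= remaining:
            supplied.add(cid)
            remaining -= cost
    return supplied

# Hypothetical consumers: (id, provision cost, benefit).
consumers = [("a", 4.0, 10.0), ("b", 3.0, 3.0), ("c", 5.0, 20.0), ("d", 2.0, 8.0)]
print(cost_effectiveness_rationing(consumers, budget=9.0))  # {'c', 'd'}
```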
Abstract:
We survey the population genetic basis of social evolution, using a logically consistent set of arguments to cover a wide range of biological scenarios. We start by reconsidering Hamilton's (Hamilton 1964 J. Theoret. Biol. 7, 1-16 (doi:10.1016/0022-5193(64)90038-4)) results for selection on a social trait under the assumptions of additive gene action, weak selection and constant environment and demography. This yields a prediction for the direction of allele frequency change in terms of phenotypic costs and benefits and genealogical concepts of relatedness, which holds for any frequency of the trait in the population, and provides the foundation for further developments and extensions. We then allow for any type of gene interaction within and between individuals, strong selection and fluctuating environments and demography, which may depend on the evolving trait itself. We reach three conclusions pertaining to selection on social behaviours under broad conditions. (i) Selection can be understood by focusing on a one-generation change in mean allele frequency, a computation which underpins the utility of reproductive value weights; (ii) in large populations under the assumptions of additive gene action and weak selection, this change is of constant sign for any allele frequency and is predicted by a phenotypic selection gradient; (iii) under the assumptions of trait substitution sequences, such phenotypic selection gradients suffice to characterize long-term multi-dimensional stochastic evolution, with almost no knowledge about the genetic details underlying the coevolving traits. Having such simple results about the effect of selection regardless of population structure and type of social interactions can help to delineate the common features of distinct biological processes. Finally, we clarify some persistent divergences within social evolution theory, with respect to exactness, synergies, maximization, dynamic sufficiency and the role of genetic arguments.
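For reference, the classical condition alluded to above is Hamilton's rule; a standard statement (not in the survey's own notation) is:

```latex
% Hamilton's rule: a social trait is favoured by selection when the
% relatedness-weighted benefit exceeds the cost.
\[
  -c + r\,b > 0,
\]
% where $c$ is the fitness cost to the actor, $b$ the fitness benefit to the
% recipient, and $r$ the genealogical relatedness between actor and recipient.
```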
Abstract:
This article builds on the recent policy diffusion literature and attempts to overcome one of its major problems, namely the lack of a coherent theoretical framework. The literature defines policy diffusion as a process where policy choices are interdependent, and identifies several diffusion mechanisms that specify the link between the policy choices of the various actors. As these mechanisms are grounded in different theories, theoretical accounts of diffusion currently have little internal coherence. In this article we put forward an expected-utility model of policy change that is able to subsume all the diffusion mechanisms. We argue that the expected utility of a policy depends on both its effectiveness and the payoffs it yields, and we show that the various diffusion mechanisms operate by altering these two parameters. Each mechanism affects one of the two parameters, and does so in distinct ways. To account for aggregate patterns of diffusion, we embed our model in a simple threshold model of diffusion. Given the high complexity of the process that results, strong analytical conclusions on aggregate patterns cannot be drawn without more extensive analysis which is beyond the scope of this article. However, preliminary considerations indicate that a wide range of diffusion processes may exist and that convergence is only one possible outcome.
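As an illustration of how an expected-utility rule can be embedded in a threshold model of diffusion, here is a toy sketch; the functional form (expected utility as perceived effectiveness times payoff, with effectiveness rising in the share of prior adopters) and all parameter values are illustrative assumptions rather than the authors' specification.

```python
import random

def simulate_diffusion(n_actors=50, steps=20, seed=1):
    """Toy threshold model: an actor adopts the policy when its expected utility,
    here perceived effectiveness * payoff (an illustrative assumption), exceeds
    the actor's adoption threshold; prior adopters raise perceived effectiveness."""
    rng = random.Random(seed)
    thresholds = [rng.uniform(0.0, 0.8) for _ in range(n_actors)]
    payoff = 1.0                    # assumed payoff of the policy
    adopted = [False] * n_actors
    history = []
    for _ in range(steps):
        share = sum(adopted) / n_actors
        # Perceived effectiveness increases with the share of prior adopters,
        # a stand-in for learning/emulation-type diffusion mechanisms.
        effectiveness = 0.1 + 0.9 * share
        expected_utility = effectiveness * payoff
        for i in range(n_actors):
            if not adopted[i] and expected_utility > thresholds[i]:
                adopted[i] = True
        history.append(sum(adopted))
    return history

print(simulate_diffusion())  # cumulative number of adopters per period
```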
Abstract:
A 67-year-old woman was referred for staging of a mucosa-associated lymphoid tissue (MALT) lymphoma involving the left conjunctiva. CT scan had shown paravertebral and pelvic masses, and a breast nodule. FDG PET/CT demonstrated moderately increased uptake in the left ocular conjunctiva and confirmed the paravertebral and pelvic masses and the breast nodule. Moreover, abnormal FDG uptake was shown in 2 breast nodules, the flank, the gluteus maximus, and the gastric cardia. The patient received 6 cycles of rituximab-bendamustine chemotherapy with a complete clinical and metabolic response at the 6-month follow-up PET/CT and remained relapse-free without visual acuity problems after a 36-month follow-up.
Abstract:
Preface My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach in representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. Given the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the prospect theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by considering that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build representative agent optimization models whose solutions reproduce some of the observed risk premia and excess volatility. Research in behavioral finance is relatively new and much of its potential remains to be explored. The three essays composing my thesis propose to use and extend this setting to study investors' behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery pursuit. Assuming a single risky asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model that takes into account investors' crash aversion. Using the skewed t distribution, optimal allocations are characterized as a trade-off between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation a systematic outperformance relative to the aggregate stock market is obtained. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss aversion level. The dynamic strategies therefore reduce to a replicating portfolio using well-selected exchange-traded options and the risky stock.
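For reference, the gain-loss framing mentioned above is usually formalised with the Kahneman and Tversky (1979) value function defined over outcomes relative to a reference point; a standard parametrisation (which the thesis extends with preferences over extremal outcomes) is:

```latex
% Prospect-theory value function over gains and losses x relative to a
% reference point, with loss aversion parameter \lambda > 1 and
% curvature parameters 0 < \alpha, \beta \le 1.
\[
  v(x) =
  \begin{cases}
    x^{\alpha}, & x \ge 0, \\
    -\lambda\,(-x)^{\beta}, & x < 0.
  \end{cases}
\]
```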
Abstract:
Purpose: To evaluate the feasibility, determine the optimal b-value, and assess the utility of 3-T diffusion-weighted MR imaging (DWI) of the spine in differentiating benign from pathologic vertebral compression fractures. Methods and Materials: Twenty patients with 38 vertebral compression fractures (24 benign, 14 pathologic) and 20 controls (total: 23 men, 17 women, mean age 56.2 years) were included from December 2010 to May 2011 in this IRB-approved prospective study. MR imaging of the spine was performed on a 3-T unit with T1-w, fat-suppressed T2-w, gadolinium-enhanced fat-suppressed T1-w and zoomed-EPI (2D RF excitation pulse combined with reduced field-of-view single-shot echo-planar readout) diffusion-w (b-values: 0, 300, 500 and 700 s/mm^2) sequences. Two radiologists independently assessed zoomed-EPI image quality in random order using a 4-point scale: 1 = excellent to 4 = poor. They subsequently measured apparent diffusion coefficients (ADCs) in normal vertebral bodies and compression fractures, in consensus. Results: Lower b-values correlated with better image quality scores, with significant differences between b = 300 (mean ± SD = 2.6 ± 0.8), b = 500 (3.0 ± 0.7) and b = 700 (3.6 ± 0.6) (all p < 0.001). Mean ADCs of normal vertebral bodies (n = 162) were 0.23, 0.17 and 0.11 × 10^-3 mm^2/s with b = 300, 500 and 700 s/mm^2, respectively. In contrast, mean ADCs were 0.89, 0.70 and 0.59 × 10^-3 mm^2/s for benign vertebral compression fractures and 0.79, 0.66 and 0.51 × 10^-3 mm^2/s for pathologic fractures with b = 300, 500 and 700 s/mm^2, respectively. No significant difference was found between the ADCs of benign and pathologic fractures. Conclusion: 3-T DWI of the spine is feasible and lower b-values (300 s/mm^2) are recommended. However, our preliminary results show no advantage of DWI in differentiating benign from pathologic vertebral compression fractures.
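For context, the reported ADC values follow from the standard monoexponential diffusion model relating signal attenuation to the b-value; a minimal sketch with hypothetical signal intensities (not data from the study):

```python
import math

def adc(signal_b, signal_0, b):
    """Apparent diffusion coefficient from the monoexponential model
    S(b) = S(0) * exp(-b * ADC), with b in s/mm^2 and ADC in mm^2/s."""
    return -math.log(signal_b / signal_0) / b

# Hypothetical signal intensities at b = 0 and b = 300 s/mm^2.
s0, s300 = 1000.0, 790.0
print(f"ADC = {adc(s300, s0, 300):.2e} mm^2/s")  # ~7.9e-04 mm^2/s
```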
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices made so far. However, in the case of imperfect information some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures, which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept that is capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. It is precisely these epistemic mental states, on which players base their decisions, that are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness.
Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
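As a concrete illustration of the backward induction concept discussed above, here is a minimal sketch on a small perfect-information game tree; the tree, payoffs and class names are made up for illustration and do not come from the thesis.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    """A node in a finite perfect-information game tree."""
    player: Optional[int] = None                  # None at terminal nodes
    payoffs: tuple = ()                           # payoff profile at terminal nodes
    children: dict = field(default_factory=dict)  # action label -> Node

def backward_induction(node):
    """Payoff profile reached when every player best-responds at her own nodes."""
    if not node.children:
        return node.payoffs
    # The player moving at this node picks the continuation maximizing her own payoff.
    return max((backward_induction(child) for child in node.children.values()),
               key=lambda payoffs: payoffs[node.player])

# A two-stage game: player 0 moves first, then player 1 observes and moves.
game = Node(player=0, children={
    "L": Node(player=1, children={"l": Node(payoffs=(2, 1)), "r": Node(payoffs=(0, 0))}),
    "R": Node(player=1, children={"l": Node(payoffs=(3, 0)), "r": Node(payoffs=(1, 2))}),
})
print(backward_induction(game))  # backward-induction outcome: (2, 1)
```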
Abstract:
In principle, we should be glad that Eric Kmiec and his colleagues published in Science's STKE (1) a detailed experimental protocol of their gene repair method (2, 3). However, a careful reading of their contribution raises more doubts about the method. The research published in Science five years ago by Kmiec and his colleagues was said to demonstrate that chimeric RNA-DNA oligonucleotides could correct the mutation responsible for sickle cell anemia with 50% efficiency (4). Such a remarkable result prompted many laboratories to attempt to replicate the research or utilize the method on their own systems. However, if the method worked at all, which it rarely did, the achieved efficiency was usually lower by several orders of magnitude. Now, in the Science STKE protocol, we are given crucial information about the method and about why it is supposedly so important to use these expensive chimeric RNA-DNA constructs. In the introduction we are told that an RNA-DNA duplex is more stable than a DNA-DNA duplex and so extends the half-life of the complexes formed between the targeted DNA and the chimeric RNA-DNA oligonucleotides. This logical explanation, however, conflicts with the statement in the section entitled "Transfection with Oligonucleotides and Plasmid DNA" that Kmiec and colleagues have recently demonstrated that classical single-stranded DNA oligonucleotides with a few protective phosphorothioate linkages have a "gene repair conversion frequency rivaling that of the RNA/DNA chimera". Indeed, the research cited for that result actually states that single-stranded DNA oligonucleotides are several-fold (3.7-fold) more efficient than the RNA-DNA chimeric constructs (5). If that is the case, it raises the question of why Kmiec and colleagues emphasize the importance of the RNA in their original chimeric constructs, when their own new results show that modified single-stranded DNA oligonucleotides are more effective than the expensive RNA-DNA hybrids. Moreover, the current efficiency of gene repair by RNA-DNA hybrids, according to Kmiec and colleagues in their recent paper, is only 4 × 10^-4, even after several hours of pre-selection permitting multiplication of bacterial cells carrying the corrected plasmid (5). This efficiency is much lower than the 50% value reported five years ago, but is assuredly much closer to reality.