915 results for Conditional volatility
Abstract:
In this paper we address a problem arising in risk management, namely the study of price variations of different contingent claims in the Black-Scholes model due to anticipating future events. The method we propose to use is an extension of the classical Vega index, i.e. the price derivative with respect to the constant volatility, in the sense that we perturb the volatility in different directions. This directional derivative, which we denote the local Vega index, will serve as the main object in the paper, and one of our purposes is to relate it to the classical Vega index. We show that for all contingent claims studied in this paper the local Vega index can be expressed as a weighted average of the perturbation in volatility. In the particular case where the interest rate and the volatility are constant and the perturbation is deterministic, the local Vega index is an average of this perturbation multiplied by the classical Vega index. We also study the well-known goal problem of maximizing the probability of a perfect hedge and show that the speed of convergence is in fact dependent on the local Vega index.
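As a minimal numerical sketch (not the paper's method, which perturbs the volatility in general directions): when the perturbation is a constant bump of the Black-Scholes volatility, the local Vega index reduces to the classical Vega, which can be checked against a finite-difference approximation. All parameter values below are illustrative.

```python
from math import log, sqrt, exp, pi, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    # Black-Scholes price of a European call
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_vega(S, K, r, sigma, T):
    # Closed-form classical Vega: S * sqrt(T) * phi(d1)
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

# Illustrative parameters
S, K, r, sigma, T = 100.0, 100.0, 0.02, 0.2, 1.0
h = 1e-5  # size of the constant volatility bump

# Central finite difference in the constant volatility
fd_vega = (bs_call(S, K, r, sigma + h, T) - bs_call(S, K, r, sigma - h, T)) / (2 * h)
```

The finite-difference estimate agrees with the closed-form Vega to high accuracy, matching the special case in the abstract where a deterministic constant perturbation recovers the classical index.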
Abstract:
In this paper, generalizing results in Alòs, León and Vives (2007b), we see that the dependence of jumps in the volatility under a jump-diffusion stochastic volatility model has no effect on the short-time behaviour of the at-the-money implied volatility skew, although the corresponding Hull and White formula depends on the jumps. Towards this end, we use Malliavin calculus techniques for Lévy processes based on Løkka (2004), Petrou (2006), and Solé, Utzet and Vives (2007).
Abstract:
We lay out a small open economy version of the Calvo sticky price model, and show how the equilibrium dynamics can be reduced to a simple representation in domestic inflation and the output gap. We use the resulting framework to analyze the macroeconomic implications of three alternative rule-based policy regimes for the small open economy: domestic inflation and CPI-based Taylor rules, and an exchange rate peg. We show that a key difference among these regimes lies in the relative amount of exchange rate volatility that they entail. We also discuss a special case for which domestic inflation targeting constitutes the optimal policy, and where a simple second order approximation to the utility of the representative consumer can be derived and used to evaluate the welfare losses associated with the suboptimal rules.
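A toy illustration of the rule-based regimes mentioned above (the coefficients and inputs are hypothetical, not taken from the paper): a Taylor-type rule sets the nominal interest rate as a function of inflation and the output gap.

```python
def taylor_rate(pi, gap, r_star=0.02, phi_pi=1.5, phi_y=0.5):
    """Taylor-type rule: nominal rate responds to inflation (pi) and the
    output gap (gap) around a neutral rate r_star. Coefficients are
    illustrative, not calibrated to any model."""
    return r_star + phi_pi * pi + phi_y * gap

# Example: 3% inflation, output 1% below potential
rate = taylor_rate(pi=0.03, gap=-0.01)
```

A domestic-inflation rule would feed domestic inflation into `pi`, while a CPI-based rule would use CPI inflation; the abstract's point is that such choices imply different amounts of exchange rate volatility.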
Abstract:
The remarkable decline in macroeconomic volatility experienced by the U.S. economy since the mid-80s (the so-called Great Moderation) has been accompanied by large changes in the patterns of comovements among output, hours and labor productivity. Those changes are reflected in both conditional and unconditional second moments as well as in the impulse responses to identified shocks. That evidence points to structural change, as opposed to just good luck, as an explanation for the Great Moderation. We use a simple macro model to suggest some of the immediate sources which are likely to be behind the observed changes.
Abstract:
In this paper we use Malliavin calculus techniques to obtain an expression for the short-time behavior of the at-the-money implied volatility skew for a generalization of the Bates model, where the volatility need be neither a diffusion nor a Markov process, as the examples in Section 7 show. This expression depends on the derivative of the volatility in the sense of Malliavin calculus.
Abstract:
We show that the Heston volatility or equivalently the Cox-Ingersoll-Ross process is Malliavin differentiable and give an explicit expression for the derivative. This result assures the applicability of Malliavin calculus in the framework of the Heston stochastic volatility model and the Cox-Ingersoll-Ross model for interest rates.
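As an illustrative sketch of the process this differentiability result applies to (the Malliavin derivative itself is not computed here), the CIR dynamics dv = kappa*(theta - v) dt + xi*sqrt(v) dW can be simulated with a full-truncation Euler scheme; the scheme choice and all parameter values are assumptions for illustration.

```python
import random
import math

def simulate_cir(v0=0.04, kappa=1.5, theta=0.04, xi=0.3, T=1.0, n=252, seed=7):
    """Full-truncation Euler scheme for the CIR process
    dv = kappa*(theta - v) dt + xi*sqrt(v) dW.
    Negative excursions of the discretized path are truncated at zero
    inside the drift and diffusion terms."""
    random.seed(seed)
    dt = T / n
    v = v0
    path = [v]
    for _ in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))
        v_plus = max(v, 0.0)  # truncation keeps sqrt well-defined
        v = v + kappa * (theta - v_plus) * dt + xi * math.sqrt(v_plus) * dw
        path.append(v)
    return path

path = simulate_cir()
```

In the Heston model this same process plays the role of the squared volatility, which is why its Malliavin differentiability underpins the applicability of Malliavin calculus in that framework.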
Abstract:
This paper discusses the analysis of cases in which the inclusion or exclusion of a particular suspect, as a possible contributor to a DNA mixture, depends on the value of a variable (the number of contributors) that cannot be determined with certainty. It offers alternative ways to deal with such cases, including sensitivity analysis and object-oriented Bayesian networks, that separate uncertainty about the inclusion of the suspect from uncertainty about other variables. The paper presents a case study in which the value of DNA evidence varies radically depending on the number of contributors to a DNA mixture: if there are two contributors, the suspect is excluded; if there are three or more, the suspect is included; but the number of contributors cannot be determined with certainty. It shows how an object-oriented Bayesian network can accommodate and integrate varying perspectives on the unknown variable and how it can reduce the potential for bias by directing attention to relevant considerations and distinguishing different sources of uncertainty. It also discusses the challenge of presenting such evidence to lay audiences.
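The sensitivity-analysis idea can be sketched with hypothetical numbers (none taken from the case study): averaging the evidence probabilities over a prior on the unknown number of contributors N yields an overall likelihood ratio, while keeping the uncertainty about N explicit.

```python
# Hypothetical probabilities of observing the DNA mixture evidence,
# conditional on the number of contributors N -- illustrative only.
p_evidence_if_suspect = {2: 0.0, 3: 0.8, 4: 0.6}   # suspect excluded if N = 2
p_evidence_if_not     = {2: 0.05, 3: 0.10, 4: 0.15}
prior_n               = {2: 0.3, 3: 0.5, 4: 0.2}   # prior over contributor count

# Marginalize over N in numerator and denominator separately
num = sum(prior_n[n] * p_evidence_if_suspect[n] for n in prior_n)
den = sum(prior_n[n] * p_evidence_if_not[n] for n in prior_n)
lr = num / den  # overall likelihood ratio for the suspect's inclusion
```

Varying `prior_n` reproduces the sensitivity analysis described in the abstract: the value of the evidence swings radically as probability mass moves between N = 2 (exclusion) and N >= 3 (inclusion).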
Abstract:
Background Brain-Derived Neurotrophic Factor (BDNF) is the main candidate for neuroprotective therapy for Huntington's disease (HD), but its conditional administration is one of its most challenging problems. Results Here we used transgenic mice that over-express BDNF under the control of the Glial Fibrillary Acidic Protein (GFAP) promoter (pGFAP-BDNF mice) to test whether up-regulation and release of BDNF, dependent on astrogliosis, could be protective in HD. Thus, we cross-mated pGFAP-BDNF mice with R6/2 mice to generate a double-mutant mouse with mutant huntingtin protein and with a conditional over-expression of BDNF, only under pathological conditions. In these R6/2:pGFAP-BDNF animals, the decrease in striatal BDNF levels induced by mutant huntingtin was prevented in comparison to R6/2 animals at 12 weeks of age. The recovery of the neurotrophin levels in R6/2:pGFAP-BDNF mice correlated with an improvement in several motor coordination tasks and with a significant delay in anxiety and clasping alterations. Therefore, we next examined a possible improvement in cortico-striatal connectivity in R6/2:pGFAP-BDNF mice. Interestingly, we found that the over-expression of BDNF prevented the decrease of cortico-striatal presynaptic (VGLUT1) and postsynaptic (PSD-95) markers in the R6/2:pGFAP-BDNF striatum. Electrophysiological studies also showed that basal synaptic transmission and synaptic fatigue both improved in R6/2:pGFAP-BDNF mice. Conclusions These results indicate that the conditional administration of BDNF under the GFAP promoter could become a therapeutic strategy for HD due to its positive effects on synaptic plasticity.
Abstract:
The ability to express tightly controlled amounts of endogenous and recombinant proteins in plant cells is an essential tool for research and biotechnology. Here, the inducibility of the soybean heat-shock Gmhsp17.3B promoter was addressed in the moss Physcomitrella patens, using beta-glucuronidase (GUS) and an F-actin marker (GFP-talin) as reporter proteins. In stably transformed moss lines, Gmhsp17.3B-driven GUS expression was extremely low at 25 degrees C. In contrast, a short non-damaging heat treatment at 38 degrees C rapidly induced reporter expression over three orders of magnitude, enabling GUS accumulation and the labelling of the F-actin cytoskeleton in all cell types and tissues. Induction levels were tightly proportional to the temperature and duration of the heat treatment, allowing fine-tuning of protein expression. Repeated heating/cooling cycles led to massive GUS accumulation, up to 2.3% of the total soluble proteins. The anti-inflammatory drug acetyl salicylic acid (ASA) and the membrane-fluidiser benzyl alcohol (BA) also induced GUS expression at 25 degrees C, allowing the production of recombinant proteins without heat treatment. The Gmhsp17.3B promoter thus provides a reliable, versatile conditional promoter for the controlled expression of recombinant proteins in the moss P. patens.
Abstract:
Executive Summary The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics: first in the context of a general asset pricing model, then across models and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we implement an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measurement of financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-off, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model, based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests the well-known time-state separable utility as a special case, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings.
The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization, over different horizons, of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns we used two partial risk-reward orderings: first- and second-order stochastic dominance. We first used the Kolmogorov-Smirnov test to determine whether the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio and Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles.
Since the plot of the absolute Lorenz curve for the aggregated performance measures lay above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio returns distribution that second-order stochastically dominates those obtained under virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration, based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
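The first-order stochastic dominance check described above can be sketched with synthetic data (not the thesis's portfolio returns): returns A first-order stochastically dominate returns B if the empirical CDF of A lies at or below that of B at every point of a common grid.

```python
def ecdf(sample, x):
    """Empirical CDF of `sample` evaluated at x."""
    return sum(1 for s in sample if s <= x) / len(sample)

# Synthetic realized-return samples; A is B shifted upward by 2%
returns_a = [0.01, 0.02, 0.03, 0.05, 0.08]
returns_b = [-0.01, 0.00, 0.01, 0.03, 0.06]

# Check F_A(x) <= F_B(x) at all observed values (sufficient for step CDFs)
grid = sorted(returns_a + returns_b)
fosd = all(ecdf(returns_a, x) <= ecdf(returns_b, x) for x in grid)
```

A second-order check would instead compare cumulated areas under the CDFs (equivalently, the absolute Lorenz curves of expected shortfalls mentioned in the text).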
Abstract:
Extreme-times techniques, generally applied to nonequilibrium statistical mechanical processes, are also useful for a better understanding of financial markets. We present a detailed study of the mean first-passage time for the volatility of return time series. The empirical results extracted from daily data of major indices seem to follow the same law regardless of the kind of index, thus suggesting a universal pattern. The empirical mean first-passage time to a certain level L is fairly different from that of the Wiener process, showing a dissimilar behavior depending on whether L is higher or lower than the average volatility. All of this indicates a more complex dynamics in which a reverting force drives volatility toward its mean value. We thus present the mean first-passage time expressions of the most common stochastic volatility models whose approach is comparable to the random diffusion description. We discuss asymptotic approximations of these models and compare them with the empirical results, finding good agreement with the exponential Ornstein-Uhlenbeck model.
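The empirical mean first-passage time can be sketched on a toy series (illustrative values, not actual index data): from each starting point, count the number of steps until the series first crosses the level L, then average over all starts from which L is eventually reached.

```python
def mean_first_passage_time(series, level):
    """Average number of steps for `series` to first cross `level`,
    over all starting points from which the level is eventually reached."""
    times = []
    for i, start in enumerate(series):
        for j in range(i + 1, len(series)):
            # Crossing from below or from above the level
            crossed = (start < level <= series[j]) or (start > level >= series[j])
            if crossed:
                times.append(j - i)
                break
    return sum(times) / len(times) if times else float("inf")

vol = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.2]  # toy volatility series
mfpt = mean_first_passage_time(vol, 0.45)
```

The abstract's observation is that, computed on real volatility series, this quantity behaves asymmetrically around the mean volatility, unlike for a Wiener process.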
Abstract:
Variable queen mating frequencies provide a unique opportunity to study the resolution of worker-queen conflict over sex ratio in social Hymenoptera, because the conflict is maximal in colonies headed by a singly mated queen and is weak or nonexistent in colonies headed by a multiply mated queen. In the wood ant Formica exsecta, workers in colonies with a singly mated queen, but not those in colonies with a multiply mated queen, altered the sex ratio of queen-laid eggs by eliminating males to preferentially raise queens. By this conditional response to queen mating frequency, workers enhance their inclusive fitness.
Abstract:
Epithelial sodium channels (ENaC) are members of the degenerin/ENaC superfamily of non-voltage-gated, highly amiloride-sensitive cation channels that are composed of three subunits (alpha-, beta-, and gamma-ENaC). Since complete inactivation of the beta- and gamma-ENaC subunit genes (Scnn1b and Scnn1g) leads to early postnatal death, we generated conditional alleles and obtained mice harboring floxed and null alleles for both gene loci. Using quantitative RT-PCR analysis, we showed that the introduction of the loxP sites did not interfere with the mRNA transcript expression level of the Scnn1b and Scnn1g gene locus, respectively. On both a regular and a salt-deficient diet, beta- and gamma-ENaC floxed mice showed no differences in mRNA transcript expression levels, plasma electrolytes, and aldosterone concentrations, as well as weight changes, compared with control animals. These mice can now be utilized to dissect the role of ENaC function in classical and nonclassical target organs/tissues.
Abstract:
We propose new methods for evaluating predictive densities. The methods include Kolmogorov-Smirnov and Cramér-von Mises-type tests for the correct specification of predictive densities, robust to dynamic mis-specification. The novelty is that the tests can detect mis-specification in the predictive densities even if it appears only over a fraction of the sample, due to the presence of instabilities. Our results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities, even when it is time-varying. An application to density forecasts of the Survey of Professional Forecasters demonstrates the usefulness of the proposed methodologies.
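A basic version of such a specification check can be sketched as follows (this is not the paper's robust test): under a correctly specified predictive density, the probability integral transforms (PITs) of the realizations are i.i.d. uniform on [0, 1], so a Kolmogorov-Smirnov statistic against the U(0,1) CDF flags mis-specification. The PIT values below are illustrative.

```python
def ks_uniform(pits):
    """Kolmogorov-Smirnov statistic: sup-distance between the empirical
    CDF of `pits` and the U(0,1) CDF (which is F(u) = u on [0, 1])."""
    s = sorted(pits)
    n = len(s)
    # Standard one-sample KS formula over the order statistics
    return max(max((i + 1) / n - u, u - i / n) for i, u in enumerate(s))

# Illustrative PITs: evenly spread, so the statistic is small
pits = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, 0.95]
stat = ks_uniform(pits)
```

A Cramér-von Mises variant would integrate the squared CDF distance instead of taking the supremum; the paper's contribution is making such tests robust to dynamic mis-specification and instabilities over subsamples.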