989 results for risk-neutral densities
Abstract:
We develop an affine jump-diffusion (AJD) model in which the jump-risk premium is determined by both idiosyncratic and systematic sources of risk. While we maintain the classical affine setting of the model, we add a finite set of new state variables that affect the paths of the primitive, under both the actual and the risk-neutral measure, through their relation to the primitive's jump process. These new variables are assumed to be common to all the primitives. We present simulations showing that the model generates the volatility smile, and we compute the "discounted conditional characteristic function" transform that permits the pricing of a wide range of derivatives.
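For reference, the transform in question takes the standard form of the affine jump-diffusion literature (Duffie, Pan, and Singleton, 2000); the notation below is illustrative rather than the paper's own:

\[
\psi(u, X_t, t, T) = \mathbb{E}^{\mathbb{Q}}\!\left[\exp\!\left(-\int_t^T R(X_s)\,ds\right) e^{u \cdot X_T} \,\middle|\, \mathcal{F}_t\right] = e^{\alpha(t) + \beta(t) \cdot X_t},
\]

where the discount rate R is affine in the state X and the coefficients alpha and beta solve complex-valued Riccati ODEs; derivative prices then follow by Fourier inversion of psi.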
Abstract:
The main purpose of this paper is to propose a methodology for obtaining a hedge fund tail risk measure. Our measure builds on the methodologies proposed by Almeida and Garcia (2015) and Almeida, Ardison, Garcia, and Vicente (2016), which rely on solving dual minimization problems of Cressie-Read discrepancy functions in spaces of probability measures. Given the recently documented robustness of the Hellinger estimator (Kitamura et al., 2013), we adopt this specific discrepancy within the Cressie-Read family as our loss function. From this choice, we derive a minimum-Hellinger risk-neutral measure that correctly prices an observed panel of hedge fund returns. The estimated risk-neutral measure is used to construct our tail risk measure by pricing synthetic out-of-the-money put options on hedge fund returns for ten specific categories. We provide a detailed description of our methodology, extract the aggregate tail risk hedge fund factor for Brazilian funds, and, as a by-product, a set of individual tail risk factors for each specific hedge fund category.
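For reference, the Cressie-Read discrepancy family admits the standard parametrization used in this econometrics literature (the notation is generic, not necessarily the paper's): for a density ratio m between candidate and reference measures,

\[
\phi_{\gamma}(m) = \frac{m^{\gamma+1} - 1}{\gamma(\gamma+1)}, \qquad \gamma \in \mathbb{R},
\]

and the choice gamma = -1/2 recovers the Hellinger case referenced in the abstract.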
Abstract:
The three concepts named in the title occupy a central place in economic theory. Their relationship probes, above all, the limits of knowledge in economics. What do we know about economic decisions? On what information are decisions based? Can economic decisions be placed on a "scientific" footing? Everything has been said about the question of uncertainty since it first appeared in the 1920s. The question has been examined philosophically and mathematically, and its countless theoretical and practical aspects have been discussed. Why, then, address the topic yet again? The answer is simple: because the question is genuinely fundamental from every point of view and relevant at all times. It is said that in Roman triumphal processions a slave always stood on the victor's chariot, continually reminding the leader, intoxicated by triumph, that he too was only human and should not forget it. Economic decision makers must likewise be reminded, again and again, that economic decisions are made under uncertainty. There is a very strict limit to how far economic processes can be understood and controlled, and that limit is set by the inherent uncertainty of the processes themselves. It must be whispered continually into the ears of economic decision makers: they too are only human, and their knowledge is therefore very limited. In "bold" decisions the outcome is uncertain, but error can be taken as certain. / === / In the article the author presents some remarks on the application of probability theory to financial decision making. From a mathematical point of view, the risk-neutral measures used in finance are a version of the separating hyperplanes used in optimization theory and in general equilibrium theory. They are therefore probabilities only in a formal sense; interpreting them as probabilities is a misleading analogy that leads to wrong decisions.
Abstract:
The role, modeling, and management of financial risks have become increasingly prominent in recent decades, in both theory and practice. One of the triggers of the financial crisis that began in 2007 was the inadequate assessment of risks. One lesson of the crisis is that although mathematics and physics have contributed an exceptionally deep methodological apparatus for quantifying risks, the financial application of these results succeeds only if we understand precisely the assumptions and limits of the models. This article reviews the principles of pricing financial derivatives and the risks that arise in derivative transactions, and presents the sources of uncertainty that call the objectivity of the valuation into question. / === / The modeling and management of financial risks became one of the most important topics of the last decade, both in theory and in financial practice. The mismanagement of financial risks can be counted among the reasons contributing to the eruption of the recent crisis. To use the methodology of mathematics and physics successfully in the pricing of derivatives, we have to consider the assumptions and limits of the models. This paper introduces the main concepts of derivatives pricing, namely no-arbitrage pricing and risk-neutral valuation, then presents and quantifies the risk of some derivative products. I argue that the assumptions of the Black-Scholes and Merton model are violated at several points, so the pricing cannot be fully separated from risk preferences. In practice, all the risks arising from the difference between reality and the model are priced into the volatility parameter.
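As a minimal statement of the risk-neutral valuation principle the abstract refers to, in standard notation (not taken from the article): the arbitrage-free price at time t of a claim paying H at maturity T is

\[
V_t = \mathbb{E}^{\mathbb{Q}}\!\left[ e^{-r(T-t)} H \,\middle|\, \mathcal{F}_t \right],
\]

where Q denotes the risk-neutral measure and r the risk-free rate.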
Abstract:
We present a general multistage stochastic mixed 0-1 problem where uncertainty appears everywhere: in the objective function, the constraint matrix, and the right-hand side. The uncertainty is represented by a scenario tree, which can be symmetric or nonsymmetric. The stochastic model is converted into a mixed 0-1 Deterministic Equivalent Model in compact representation. Due to the difficulty of the problem, the solution offered by the stochastic model has traditionally been obtained by optimizing the expected value (i.e., the mean) of the objective function over the scenarios, usually along a time horizon. This approach (so-called risk neutral) has the drawback of providing a solution that ignores the variance of the objective value across scenarios and, hence, the occurrence of scenarios with an objective value below the expected one. Alternatively, we present several approaches for risk-averse management, namely: a scenario immunization strategy; optimization of the well-known Value-at-Risk (VaR) and several variants of the Conditional Value-at-Risk strategies; optimization of the expected mean minus the weighted probability that a "bad" scenario occurs for the solution provided by the model; optimization of the objective function's expected value subject to stochastic dominance constraints (SDC) for a set of profiles given by pairs of threshold objective values and either bounds on the probability of not reaching the thresholds or the expected shortfall over them; and optimization of a mixture of the VaR and SDC strategies.
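To make the contrast concrete, the risk-neutral objective and the Conditional Value-at-Risk alternative can be written in the standard scenario-based forms (notation illustrative, following Rockafellar and Uryasev rather than this paper), with scenario probabilities p and cost f:

\[
\min_{x}\; \sum_{\omega \in \Omega} p^{\omega} f(x,\omega)
\qquad \text{vs.} \qquad
\mathrm{CVaR}_{\alpha}\big(f(x,\cdot)\big) = \min_{\eta}\; \eta + \frac{1}{1-\alpha} \sum_{\omega \in \Omega} p^{\omega} \big[f(x,\omega) - \eta\big]_{+},
\]

where the risk-averse form penalizes the expected cost in the worst (1 - alpha) tail of scenarios rather than the plain mean.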
Abstract:
A "self-exciting" market is one in which the probability of observing a crash increases in response to the occurrence of a crash. It essentially describes cases where the initial crash serves to weaken the system to some extent, making subsequent crashes more likely. This thesis investigates if equity markets possess this property. A self-exciting extension of the well-known jump-based Bates (1996) model is used as the workhorse model for this thesis, and a particle-filtering algorithm is used to facilitate estimation by means of maximum likelihood. The estimation method is developed so that option prices are easily included in the dataset, leading to higher quality estimates. Equilibrium arguments are used to price the risks associated with the time-varying crash probability, and in turn to motivate a risk-neutral system for use in option pricing. The option pricing function for the model is obtained via the application of widely-used Fourier techniques. An application to S&P500 index returns and a panel of S&P500 index option prices reveals evidence of self excitation.
Abstract:
This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated in simulation. The computational load of this estimation method, which has previously been prohibitive, is managed by the effective use of parallel computing on graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
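As a rough illustration of the filtering step, here is a minimal bootstrap particle filter for the square-root stochastic volatility model that approximates the log-likelihood of a return series. The Euler discretization, parameter names, and settings are assumptions made for this sketch, not the article's implementation (which also incorporates option prices and GPU parallelism):

    # Minimal bootstrap particle filter for a discretized Heston model.
    # Illustrative sketch only; not the article's estimator.
    import numpy as np

    def particle_filter_loglik(returns, kappa, theta, sigma_v, mu, dt=1/252,
                               n_particles=5000, seed=0):
        """Approximate log-likelihood of returns under
           v_{t+1} = v_t + kappa*(theta - v_t)*dt + sigma_v*sqrt(v_t*dt)*eps
           r_{t+1} ~ Normal((mu - v_t/2)*dt, v_t*dt)."""
        rng = np.random.default_rng(seed)
        v = np.full(n_particles, theta)  # start particles at long-run variance
        loglik = 0.0
        for r in returns:
            # Measurement step: weight particles by the return density.
            var = np.maximum(v * dt, 1e-12)
            w = np.exp(-0.5 * (r - (mu - 0.5 * v) * dt) ** 2 / var) \
                / np.sqrt(2 * np.pi * var)
            loglik += np.log(w.mean() + 1e-300)
            # Resample particles in proportion to their weights.
            idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
            v = v[idx]
            # Propagate with a full-truncation Euler step.
            eps = rng.standard_normal(n_particles)
            v = np.maximum(v + kappa * (theta - v) * dt
                           + sigma_v * np.sqrt(np.maximum(v, 0) * dt) * eps, 0.0)
        return loglik

    # Example call on synthetic returns.
    fake_returns = np.random.default_rng(1).normal(0, 0.01, size=250)
    print(particle_filter_loglik(fake_returns, kappa=3.0, theta=0.04,
                                 sigma_v=0.4, mu=0.05))

Maximizing this simulated likelihood over the parameters is the basic idea; the article's contribution lies in doing so reliably and quickly on joint index-and-option data.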
Abstract:
In this article, we look at the political business cycle problem through the lens of uncertainty. The feedback control we use is the well-known New Keynesian Phillips Curve (NKPC) with stochasticity and wage rigidities: we extend the NKPC model to a continuous-time stochastic setup with an Ornstein-Uhlenbeck process. We minimize the relevant expected quadratic cost by solving the corresponding Hamilton-Jacobi-Bellman equation. The basic intuition of the classical model carries forward qualitatively in our setup, but uncertainty also plays an important role in determining the optimal trajectory of the voter support function. The internal variability of the system acts as a base shifter for the support function in the risk-neutral case. The role of uncertainty is even more prominent in the risk-averse case, where all the shape parameters depend directly on variability; in this case variability controls both the rates of change and the base-shift parameters. To gain more insight, we also study the model with time-invariant coefficients and examine numerical solutions. The close relationship between the unemployment rate and the support function for the incumbent party is highlighted. The role of uncertainty in creating sampling fluctuations in this setup, possibly leading to apparently anomalous results, is also explored.
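For reference, the Ornstein-Uhlenbeck dynamics underlying such a setup take the standard form (notation generic, not the article's):

\[
dx_t = \theta(\mu - x_t)\,dt + \sigma\, dW_t,
\]

where theta is the speed of mean reversion, mu the long-run level, sigma the volatility, and W_t a standard Brownian motion.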
Abstract:
Implied models constitute one of the alternative option-valuation approaches to the Black-Scholes model that has seen the greatest development in recent years. Within this approach there are several alternatives: implied trees, models with a deterministic volatility function, and models with an implied volatility function. All of them are built from an estimate of the risk-neutral probability distribution of the future price of the underlying asset that is consistent with the market prices of traded options. Consequently, implied models yield good results for in-sample option valuation. Their performance as a prediction instrument for out-of-sample options, however, is not satisfactory. This article analyzes the extent to which this approach contributes to improving option valuation, from both a theoretical and a practical point of view.
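The risk-neutral density estimate on which these models rest is typically recovered via the Breeden-Litzenberger (1978) relation, a standard result not specific to this article: given call prices C(K, T) observed across strikes K,

\[
q(K) = e^{rT} \frac{\partial^2 C(K,T)}{\partial K^2},
\]

where q is the risk-neutral density of the underlying price at maturity T and r the risk-free rate.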
Abstract:
Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R and D) before production are significant in many defense procurements. Thus, an optimal procurement policy should not only select the most efficient producer but also induce the contractors to design the best product and to develop the best technology. The current economic theory of optimal procurement and contracting, which has emphasized production but ignored R and D, is difficult to apply to many cases of procurement.
In this thesis, I provide basic models of both R and D and production in the procurement process, where a number of firms invest in private R and D and compete for a government contract. R and D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R and D matters in procurement contracting: when offering the optimal contract, the government is better off if it correctly takes costly private R and D investment into account. Second, competition matters: the optimal contract and the total equilibrium R and D expenditures vary with the number of firms, and the government usually does not prefer infinite competition among firms; instead, it prefers free entry of firms. Third, under an R and D technology with constant marginal returns to scale, it is socially optimal to have only one firm conduct all of the R and D and production. Fourth, in an independent-private-values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriately announced reserve price, acting as if it does not have any private information.
Abstract:
In this work we extend to the multistage case two recent risk-averse measures for two-stage stochastic programs based on first- and second-order stochastic dominance constraints induced by mixed-integer linear recourse. Additionally, we consider Time Stochastic Dominance (TSD) along a given horizon. Given the dimensions of medium-sized problems augmented by the new variables and constraints required by those risk measures, it is unrealistic to solve the problem to optimality by plain use of MIP solvers, at least in reasonable computing time. Instead, decomposition algorithms of some type should be used. We present an extension of our Branch-and-Fix Coordination algorithm, named BFC-TSD, where special treatment is given to cross-scenario-group constraints that link variables from different scenario groups. A broad computational experience is presented, comparing the risk-neutral approach with the tested risk-averse strategies. The performance of the new version of the BFC algorithm versus the plain use of a state-of-the-art MIP solver is also reported.
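For reference, a standard way to impose second-order stochastic dominance of a (profit-type) random outcome X over a benchmark Y, as used in this literature though not necessarily in the paper's exact notation, is

\[
\mathbb{E}\big[(\eta - X)_{+}\big] \le \mathbb{E}\big[(\eta - Y)_{+}\big] \quad \text{for all } \eta \in \mathbb{R},
\]

which, in the scenario-based setting, reduces to finitely many linear constraints, one per benchmark realization; these are the extra variables and constraints that inflate the model size.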
Abstract:
Comparisons of 2D fluid simulations with experimental measurements of Ar/Cl2 plasmas in a low-pressure inductively coupled reactor are reported. Simulations show that the wall recombination coefficient of the Cl atom (gamma) is a crucial parameter of the model and that neutral densities are very sensitive to its variations. The best agreement between model and experiment is obtained for gamma = 0.02, which is much lower than the value predicted for stainless steel walls (gamma = 0.6). This is consistent with the reactor-wall contamination classically observed in such discharges. The electron density, negative-ion fraction, and Cl atom density have been investigated under various conditions of chlorine and argon concentration, gas pressure, and applied rf input power. The plasma electronegativity decreases with rf power and increases with chlorine concentration. At high pressure, the power absorption and the distribution of charged particles become more localized below the quartz window. Although the experimental trends are well reproduced by the simulations, the calculated charged-particle densities are systematically overestimated by a factor of 3-5. The reasons for this discrepancy are discussed in the paper.
Abstract:
The interaction of a high-intensity laser pulse with a plasma density channel preformed in a gas-jet target has been studied. At neutral densities below 3.0 x 10^19 cm^-3, a strong interaction between the pulse and the channel walls was observed; there was clear evidence of pulse confinement, and the laser irradiance was significantly increased compared to an interaction with neutral gas. At higher gas densities, however, the radial uniformity and length of the channel were both found to be adversely affected by refractive defocusing of the prepulse used to generate the channel.
Abstract:
Master's thesis. Biology (Ecology and Environmental Management). Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters with Gaussian process regression. The calibration is done by maximizing the likelihood of zero-coupon bond log-prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. The maximization method used is conjugate gradients. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly under the arbitrage-free risk-neutral measure.
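As a minimal sketch of the pricing ingredient this calibration rests on, the following Python snippet implements the closed-form Vasicek zero-coupon bond log-price and a least-squares fit of (a, b, sigma) with a conjugate-gradient optimizer. This simplifies the article's approach: the Gaussian-process likelihood is replaced by a plain squared-error loss, and all data below are synthetic.

    # Closed-form Vasicek bond pricing and a toy calibration; illustrative
    # sketch under simplifying assumptions, not the article's method.
    import numpy as np
    from scipy.optimize import minimize

    def vasicek_log_price(r0, tau, a, b, sigma):
        """Log price of a zero-coupon bond maturing in tau years when the
        short rate follows dr = a*(b - r)dt + sigma*dW under Q."""
        B = (1.0 - np.exp(-a * tau)) / a
        logA = (b - sigma**2 / (2 * a**2)) * (B - tau) - sigma**2 * B**2 / (4 * a)
        return logA - B * r0

    # Synthetic "observed" log-prices generated from known parameters.
    maturities = np.array([0.5, 1, 2, 3, 5, 7, 10])
    r0 = 0.03
    obs = vasicek_log_price(r0, maturities, a=0.5, b=0.04, sigma=0.01) \
          + 1e-5 * np.random.default_rng(0).standard_normal(7)

    def loss(params):
        a, b, sigma = params
        if a <= 0 or sigma <= 0:
            return 1e10  # keep the optimizer in the admissible region
        return np.sum((vasicek_log_price(r0, maturities, a, b, sigma) - obs) ** 2)

    fit = minimize(loss, x0=[0.3, 0.05, 0.02], method="CG")
    print(dict(zip(["a", "b", "sigma"], fit.x)))

In the article the squared-error loss is replaced by the Gaussian-process log-likelihood of the bond log-prices, with analytic gradients supplied to the conjugate-gradient routine.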