Abstract:
Volatility plays a central role in many theoretical and practical applications in financial markets, including portfolio theory, derivatives pricing and financial risk management. Both theoretical and practical applications require good estimates and forecasts of asset return volatility. The goal of this study is to examine the forecast performance of one of the more recent volatility measures, model-free implied volatility. Model-free implied volatility is extracted from prices in the option markets, and it aims to provide an unbiased estimate of the market's expectation of the future level of volatility. Since it is extracted from option prices, model-free implied volatility should contain all the relevant information that market participants have. Moreover, model-free implied volatility requires less restrictive assumptions than the commonly used Black-Scholes implied volatility, which means that it should be a less biased estimate of the market's expectations and therefore also a better forecast of future volatility. The forecast performance of model-free implied volatility is evaluated by comparing it to the forecast performance of Black-Scholes implied volatility and a GARCH(1,1) forecast. Weekly forecasts over a six-year period were calculated for the forecast variable, the German stock market index DAX. The data consisted of price observations for DAX index options. Forecast performance was measured using econometric methods that aimed to capture the bias, accuracy and information content of the forecasts. The results of the study suggest that the forecast performance of model-free implied volatility is superior to that of the GARCH(1,1) forecast. However, the results also suggest that model-free implied volatility does not forecast as well as Black-Scholes implied volatility, which contradicts the theory-based hypotheses.
The results of this study are consistent with the majority of prior research on the subject.
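As an illustrative sketch (not the study's actual implementation), two of the compared forecasts can be computed as follows: Black-Scholes implied volatility is the sigma that equates the model price to an observed option price, and a GARCH(1,1) forecast follows the standard one-step variance recursion. All parameter values below are hypothetical.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call on a non-dividend-paying asset.
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0):
    # Invert bs_call for sigma by bisection; the call price is
    # strictly increasing in sigma, so bisection converges.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def garch11_forecast(sigma2_t, eps_t, omega, alpha, beta):
    # One-step-ahead GARCH(1,1) variance forecast:
    # sigma^2_{t+1} = omega + alpha * eps_t^2 + beta * sigma^2_t
    return omega + alpha * eps_t ** 2 + beta * sigma2_t
```

Round-tripping a Black-Scholes price through `implied_vol` recovers the input volatility, which is the basic consistency check for such an inverter.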
Abstract:
We estimate the shape of the distribution of stock prices using data from options on the underlying asset, and test whether this distribution is distorted in a systematic manner each time a particular news event occurs. In particular, we look at the response of the FTSE100 index to market-wide announcements of key macroeconomic indicators and policy variables. We show that the whole distribution of stock prices can be distorted on an event day. The shift in distributional shape occurs whether the event is characterized as an announcement occurrence or as a measured surprise. We find that larger surprises have a proportionately greater impact, and that higher moments are more sensitive to events, however the events are characterized.
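The paper's own estimator is not specified in the abstract, but a standard way to recover the risk-neutral distribution from option prices is the Breeden-Litzenberger relation, under which the density equals the discounted second derivative of the call price with respect to strike. A minimal finite-difference sketch, assuming an evenly spaced strike grid:

```python
import math

def risk_neutral_density(strikes, call_prices, r, T):
    # Breeden-Litzenberger: f(K) = e^{rT} * d^2 C / dK^2,
    # approximated by a central second difference on an even strike grid.
    dens = []
    for i in range(1, len(strikes) - 1):
        dk = strikes[i + 1] - strikes[i]
        d2 = (call_prices[i + 1] - 2.0 * call_prices[i]
              + call_prices[i - 1]) / dk ** 2
        dens.append(math.exp(r * T) * d2)
    return dens
```

As a sanity check, if the terminal price were known with certainty, call payoffs are piecewise linear in strike and the recovered density collapses to a single spike at that price.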
Abstract:
We use a novel pricing model to imply time series of diffusive volatility and jump intensity from S&P 500 index options. These two measures capture the ex ante risk assessed by investors. Using a simple general equilibrium model, we translate the implied measures of ex ante risk into an ex ante risk premium. The average premium that compensates the investor for the ex ante risks is 70% higher than the premium for realized volatility. The equity premium implied from option prices is shown to significantly predict subsequent stock market returns.
Abstract:
In this dissertation, I investigate three related topics in asset pricing: consumption-based asset pricing under long-run risks and fat tails, the pricing of VIX (CBOE Volatility Index) options, and the market price of risk embedded in stock returns and stock options. These three topics are explored in full in Chapters II through IV; Chapter V summarizes the main conclusions. In Chapter II, I explore the effects of fat tails on the equilibrium implications of the long-run risks model of asset pricing by introducing innovations with a dampened power law into the consumption and dividend growth processes. I estimate the structural parameters of the proposed model by maximum likelihood. I find that the stochastic volatility model with fat tails can, without resorting to high risk aversion, generate an implied risk premium, an expected risk-free rate, and volatilities comparable in magnitude to those observed in the data. In Chapter III, I examine the pricing performance of VIX option models. The contention that simpler is better is supported by empirical evidence from actual VIX option market data. I find that no model has small pricing errors over the entire range of strike prices and times to expiration. In general, Whaley's Black-like option model produces the best overall results, supporting the simpler-is-better contention. However, the Whaley model does underprice out-of-the-money call options and overprice out-of-the-money put options on the VIX, which is contrary to the behavior of stock index option pricing models. In Chapter IV, I explore risk pricing through a model of time-changed Lévy processes based on joint evidence from individual stock options and underlying stocks. I specify a pricing kernel that prices idiosyncratic and systematic risks. This approach to examining risk premia on stocks deviates from existing studies. The empirical results show that the market pays positive premia for idiosyncratic and market jump-diffusion risk, and for idiosyncratic volatility risk.
However, there is no consensus on the premium for market volatility risk. It can be positive or negative. The positive premium on idiosyncratic risk runs contrary to the implications of traditional capital asset pricing theory.
Abstract:
The aim of this thesis is to price options on equity index futures with an application to standard options on S&P 500 futures traded on the Chicago Mercantile Exchange. Our methodology is based on stochastic dynamic programming, which can accommodate European as well as American options. The model accommodates dividends from the underlying asset. It also captures the optimal exercise strategy and the fair value of the option. This approach is an alternative to available numerical pricing methods such as binomial trees, finite differences, and ad-hoc numerical approximation techniques. Our numerical and empirical investigations demonstrate convergence, robustness, and efficiency. We use this methodology to value exchange-listed options. The European option premiums thus obtained are compared to Black's closed-form formula. They are accurate to four digits. The American option premiums also have a similar level of accuracy compared to premiums obtained using finite differences and binomial trees with a large number of time steps. The proposed model accounts for deterministic, seasonally varying dividend yield. In pricing futures options, we discover that what matters is the sum of the dividend yields over the life of the futures contract and not their distribution.
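For reference, Black's closed-form formula, the benchmark against which the European premiums above are compared, can be sketched as follows. This is a textbook implementation of Black (1976), not the thesis's dynamic-programming model:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black76(F, K, T, r, sigma, call=True):
    # Black's 1976 price of a European option on a futures contract
    # with futures price F, strike K, maturity T, rate r, volatility sigma.
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if call:
        return math.exp(-r * T) * (F * norm_cdf(d1) - K * norm_cdf(d2))
    return math.exp(-r * T) * (K * norm_cdf(-d2) - F * norm_cdf(-d1))
```

A useful check is put-call parity for futures options, C - P = e^{-rT}(F - K), which the formula satisfies by construction.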
Abstract:
A Work Project, presented as part of the requirements for the award of a Master's degree in Finance from the NOVA – School of Business and Economics
Abstract:
Terminal heart failure can be the cause or the result of major dysfunctions of the organism. Although the outcome of the natural history is the same in both situations, it is of prime importance to differentiate the two, as only heart failure as the primary cause allows for successful mechanical circulatory support as a bridge to transplantation or recovery. Various objective parameters allow the diagnosis of terminal heart failure to be established despite optimal medical treatment. A cardiac index <2.0 l/min and a mixed venous oxygen saturation <60%, in combination with progressive renal failure, should trigger a diagnostic work-up in order to identify cardiac defects that can be corrected, or to list the patient for transplantation with or without mechanical circulatory support.
Abstract:
The goal of this research was to provide an overview of the VIX® and how it can be used as a stock market indicator. The volatility index, often referred to as the fear index, measures how much it costs an investor to protect an S&P 500 position against fluctuations with options. Over its relatively short history, the VIX has been a successful timing indicator, providing incremental information about the state of the market along with a psychological reading of the amount of fear and greed. Correctly utilized, VIX information gives a considerable advantage in timing market actions. In this paper we test how the VIX works as a leading indicator of a broad stock market index such as the S&P 500 (SPX). The purpose of this paper is to find a workable way to interpret the VIX. The various tests are made on time series data ranging from 1990 to 2010. The 10-day simple moving average strategy gave significant profits over the whole period for which VIX data are available. The strategy was able to capture increases of the SPX in the example portfolio value and to step aside when the SPX was declining. When the portfolio was out of the SPX, it was held in a safe asset such as Treasury bills, earning an annual yield of 3 percent. Static VIX threshold levels, on the other hand, did not work as profitable indicators.
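The abstract does not fully specify the trading rule, so the following is a hedged sketch of the kind of 10-day simple-moving-average timing signal tested: hold the index while the VIX sits below its 10-day average, and step aside into bills otherwise. The function names and the below/above convention are assumptions for illustration.

```python
def sma(series, n=10):
    # Simple moving average; None until n observations are available.
    return [sum(series[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(series))]

def risk_on(vix, n=10):
    # True  -> VIX below its n-day SMA: hold the index.
    # False -> VIX at/above its SMA (or warm-up period): hold bills.
    return [v < a if a is not None else False
            for v, a in zip(vix, sma(vix, n))]
```

With a toy series where the VIX drops sharply below its recent average, the signal flips to risk-on only after the drop.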
Abstract:
This study investigates futures market efficiency and optimal hedge ratio estimation. First, cointegration between spot and futures prices is studied using the Johansen method with two different model specifications. If prices are found to be cointegrated, restrictions on the cointegrating vector and adjustment coefficients are imposed to test for unbiasedness, weak exogeneity and the prediction hypothesis. Second, optimal hedge ratios are estimated using static OLS and time-varying DVEC and CCC models. In-sample and out-of-sample results for one, two and five periods ahead are reported. The futures used in the thesis are the RTS index, the EUR/RUB exchange rate and Brent oil, traded on Futures and Options on RTS (FORTS). For the in-sample period, data points were acquired from the start of trading of each futures contract (the RTS index from August 2005, the EUR/RUB exchange rate from March 2009 and Brent oil from October 2008) until the end of May 2011. The out-of-sample period covers the start of June 2011 until the end of December 2011. Our results indicate that all three asset pairs, spot and futures, are cointegrated. We found RTS index futures to be an unbiased predictor of the spot price, mixed evidence for the exchange rate, and no support for unbiasedness for Brent oil futures. Weak exogeneity results for all pairs indicated that the spot price leads the price discovery process. The prediction hypothesis, the joint restriction of unbiasedness and weak exogeneity of futures, was rejected for all asset pairs. Variance reduction results varied between assets, ranging in-sample from 40 to 85 percent and out-of-sample from 40 to 96 percent. Differences between models were small, except for Brent oil, for which OLS clearly dominated. Out-of-sample results indicated exceptionally high variance reduction for the RTS index, approximately 95 percent.
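The static OLS benchmark reduces to regressing spot returns on futures returns; the slope coefficient is the minimum-variance hedge ratio, Cov(s, f) / Var(f). A minimal sketch, illustrative rather than the thesis's actual code:

```python
def ols_hedge_ratio(spot_returns, futures_returns):
    # Minimum-variance hedge ratio: the OLS slope of spot returns
    # regressed on futures returns, h* = Cov(s, f) / Var(f).
    n = len(spot_returns)
    mean_s = sum(spot_returns) / n
    mean_f = sum(futures_returns) / n
    cov = sum((s - mean_s) * (f - mean_f)
              for s, f in zip(spot_returns, futures_returns)) / n
    var_f = sum((f - mean_f) ** 2 for f in futures_returns) / n
    return cov / var_f
```

If spot returns were exactly a constant multiple of futures returns, the estimator recovers that multiple, which is the expected degenerate case.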
Abstract:
Confounding bias is a major challenge in observational studies, especially when it is induced by characteristics that are difficult or even impossible to measure in administrative health care databases. One confounding bias often present in pharmacoepidemiologic studies is prescription channeling, which occurs when the choice of treatment depends on the patient's health status and/or prior experience with various therapeutic options. Among the methods for controlling this bias is the comorbidity score, which characterizes a patient's health status from dispensed medications or medical diagnoses reported in physician billing data. The performance of comorbidity scores is controversial, however, as it appears to vary substantially with the population of interest. The objectives of this thesis were to develop, validate, and compare the performance of two comorbidity scores (one predicting death and the other predicting institutionalization), developed from the pharmaceutical services databases of the Régie de l'assurance-maladie du Québec (RAMQ) for use in the elderly population. This thesis also aims to determine whether including characteristics that are unreported or poorly valid in administrative databases (socio-demographic characteristics, mental health or sleep disorders) improves the performance of comorbidity scores in the elderly population. A case-control study nested within a cohort was carried out. The source cohort consisted of a random sample of 87,389 community-dwelling elderly people, split into a development cohort (n=61,172; 70%) and a validation cohort (n=26,217; 30%). The data were obtained from the RAMQ databases.
To be included in the study, subjects had to be aged 66 years or older and members of Quebec's public drug insurance plan between January 1, 2000 and December 31, 2009. The scores were developed using the Framingham Heart Study method, and their performance was assessed with the c-statistic and the area under the receiver operating characteristic curves. For the last objective, documenting the impact of adding variables that are unmeasured or poorly valid in the databases to the developed comorbidity score, a prospective cohort study (2005-2008) was conducted. The study population and the data come from the Étude sur la Santé des Aînés (n=1,494). The variables of interest included marital status, social support, and the presence of mental health and sleep disorders. As described in Article 1, the Geriatric Comorbidity Score (GCS), based on death, was developed and showed good performance (c-statistic=0.75; 95% CI 0.73-0.78). This performance proved superior to that of the Chronic Disease Score (CDS) when applied to the study population (CDS c-statistic: 0.47; 95% CI 0.45-0.49). An exhaustive literature review showed that the factors associated with death differed markedly from those associated with institutionalization, justifying the development of a specific score to predict the risk of institutionalization. The latter's performance proved not statistically different from that of the death score (institutionalization c-statistic: 0.79; 95% CI 0.77-0.81). Including variables not reported in administrative databases improved the performance of the death score by only 11%, with marital status and social support contributing most to the observed improvement. In conclusion, this thesis makes three major contributions.
First, it was shown that the performance of death-based comorbidity scores depends on the target population, hence the value of the Geriatric Comorbidity Score, which was developed for the community-dwelling elderly population. Second, the medications associated with the risk of institutionalization differ from those associated with the risk of death in the elderly population, justifying the development of two distinct scores; the performances of the two scores are nevertheless similar. Finally, the results indicate that, in the elderly population, the absence of certain characteristics does not substantially compromise the performance of comorbidity scores derived from prescription claims databases. Comorbidity scores therefore remain an important research tool for observational studies.
Abstract:
The paper unfolds the paradox that exists in the tribal community with respect to development indicators and tries to draw out the differences in the standard of living of the tribes within a dichotomous framework of forward and backward tribes. Four variables have been considered for ascertaining the standard of living and socio-economic conditions of the tribes. The data for the study were obtained from a primary survey in the three tribal-predominant districts of Wayanad, Idukki and Palakkad. Wayanad was selected for studying six tribal communities (Paniya, Adiya, Kuruma, Kurichya, Urali and Kattunaika), Idukki for two communities (Malayarayan and Muthuvan) and Palakkad for one community (Irula). In total, 500 samples from 9 prominent tribal communities of Kerala were collected according to a multistage proportionate random sampling framework. The analysis highlights the disproportionate nature of socio-economic indicators within the tribes in Kerala, owing to the failure of governmental schemes and assistance meant for their empowerment. The socio-economic variables, such as education, health and livelihood, have been augmented with a standard of living index (SLI); correlation analysis yields interesting inferences for policy options, as highly educated tribal communities are positively correlated with high SLI and livelihood. Further, each SLI variable is decomposed using correlation and correspondence analysis to understand the relative standing of the nine tribal sub-communities within a three-level framework of high, medium and low SLI. Tribes with good education and employment (Malayarayan, Kuruma and Kurichya) have a better living standard and hence can generally be termed forward tribes, whereas those with poor education, employment and living standard indicators (Paniya, Adiya, Urali, Kattunaika, Muthuvan and Irula) are categorized as backward tribes.
Abstract:
The principal objective of this paper is to identify the relationship between the results of the Canadian policies implemented to protect female workers against the impact of globalization on the garment industry and the institutional setting in which this labour market is immersed in Winnipeg. This research paper begins with a brief summary of the institutional theory approach, which sheds light on the analysis of the effects of institutions on the policy options for protecting female workers of the Winnipeg garment industry. Next, the paper identifies the set of beliefs, formal procedures, routines, norms and conventions that characterize the institutional environment of the female workers of Winnipeg's garment industry. Subsequently, it describes the impact of free trade policies on the garment industry of Winnipeg. It then presents an analysis of the barriers that the institutional features of the garment sector in Winnipeg may pose to the successful achievement of policy options aimed at protecting the female workforce of this sector. Three policy options are considered: ethical purchasing; training/retraining programs and social engagement support for garment workers; and protection of migrant workers through promoting and facilitating bonds between Canada's trade unions and the trade unions of the labour-sending countries. Finally, this paper concludes that the formation of isolated cultural groups inside factories; the belief that there is gender and race discrimination on the part of garment industry management against workers; the powerless social conditions of immigrant women; the economic rationality of garment factory managers; and the lack of political will on the part of Canada and the labour-sending countries to set effective bilateral agreements to protect migrant workers are the principal barriers that divide the actors involved in the garment industry in Winnipeg.
This division among the principal actors of Winnipeg's garment industry impedes the change toward more efficient institutions and, hence, the successful achievement of policy options aimed at protecting women workers.
Abstract:
In recent decades, the increase in the levels of solar ultraviolet radiation (UVR) reaching the Earth (mainly due to the decrease in stratospheric ozone), together with the detected rise in diseases related to UVR exposure, has led to a large body of research on solar radiation in this band and its effects on humans. The ultraviolet index (UVI), which has been adopted internationally, was defined with the purpose of informing the general public about the risks of exposing bare skin to UVR and of conveying preventive messages. The UVI was initially defined as the daily maximum value. Its current use has broadened, however, and it makes sense to refer to an instantaneous value or to the daily evolution of the measured, modelled or predicted UVI. The specific UVI value is affected by Sun-Earth geometry, clouds, ozone, aerosols, altitude and surface albedo. High-quality UVI measurements are essential as a reference and for studying long-term trends; accurate modelling techniques are also needed to understand the factors that affect UVR, to predict the UVI, and to serve as quality control for the measurements. The most accurate UVI measurements can be expected from spectroradiometers. However, since these devices are costly, UVI data from erythemal radiometers are more common (in fact, most UVI networks are equipped with this type of sensor). The best modelling results are obtained with multiple-scattering radiative transfer models when the input information is well known. Often, however, input information such as the optical properties of aerosols is unknown, which can lead to significant modelling uncertainties.
Simpler models are often used for applications such as UVI forecasting or the production of UVI maps, since they are faster and require fewer input parameters. Within this framework, the general objective of this study is to analyse the agreement that can be reached between measured and modelled UVI under cloudless conditions. To this end, this study presents model-measurement comparisons for different modelling techniques, different input options, and UVI measurements from both erythemal radiometers and spectroradiometers. As a general conclusion, model-measurement comparison proves very useful for detecting limitations and estimating uncertainties in both models and measurements. Regarding modelling, the main limitation found is the lack of knowledge of the aerosol information used as model input. Important differences were also found between ozone measured from satellites and from the ground, which can lead to significant differences in the modelled UVI. PTUV, a new and simple parameterisation for fast UVI calculation under cloudless conditions, was developed on the basis of radiative transfer calculations. The parameterisation performs well both against the base model and in comparison with several UVI measurements. PTUV has proved useful for particular applications such as studying the annual evolution of the UVI at a given site (Girona) and composing high-resolution maps of typical UVI values for a specific territory (Catalunya). Regarding the measurements, knowing the spectral response of erythemal radiometers is very important in order to avoid large uncertainties in the measured UVI. These instruments, when well characterised, compare well with high-quality spectroradiometers in UVI measurement.
The most important measurement issues are calibration and long-term stability. A temperature effect was also observed in PTFE, a material used in the diffusers of some instruments, which could potentially have important implications in the experimental field. Finally, regarding model-measurement comparisons, the best agreement was found when UVI measurements from high-quality spectroradiometers are considered and radiative transfer models are used with the best available data on ozone and aerosol optical parameters and their changes over time. In this way, the agreement can be as close as 0.1% in UVI, and is typically below 3%. This agreement deteriorates considerably if aerosol information is ignored, and depends strongly on the aerosol single-scattering albedo. Other model inputs, such as surface albedo and ozone and temperature profiles, introduce smaller uncertainties into the modelling results.
Abstract:
One of the most common decisions we make is where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence supporting saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task became easier as the evidence supporting target choice increased; this was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target, reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation increased with the amount of sensory evidence supporting target choice. This shows that the decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade, and would seem to extend the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one that also reflects a competition between options.