15 results for Implied hazard rate
in Helda - Digital Repository of University of Helsinki
Abstract:
In my dissertation I examine the economics of information goods and copyrights from two different perspectives. The first of these belongs to the field of endogenous growth theory. In the dissertation I generalize a "pool of knowledge" type of endogenous growth model to a setting in which a patentable innovation has a minimum size, and in which a firm that has patented a new kind of product can lose its monopoly over the product through imitation. Within this model one can analyse the effects of imitation and of the required "minimum size" of innovations on welfare and economic growth. In the model the growth-maximizing amount of imitation is always zero, but the welfare-maximizing amount of imitation can be positive. The "minimum size" of a patentable innovation that maximizes economic growth and welfare can take any value below its theoretical maximum. In the two latter main chapters of the dissertation I study commercial piracy of information goods with a microeconomic model. The production costs of illegal copies of information goods are small, and almost nonexistent when they are distributed, for example, over the Internet. Because pirate copies have many different producers, microeconomic theory would suggest that their price should fall to almost zero, and if this happened, commercial piracy would be impossible. In my model I explain the existence of commercial piracy by assuming that the threat of punishment for piracy depends on how many consumers the pirate offers illegal goods to, and that it therefore affects the market for pirate copies in the same way as an advertising cost. In my model, increasing the fixed costs of commercial pirates is always in the interest of the copyright holder, but increasing the "advertising cost" is not necessarily so, and may also reduce the profits from the sale of legal copies. This result differs from corresponding earlier results in that it holds even if the information goods in question involve no network effects. Earlier models of non-commercial piracy have often yielded the result that illegal copies of an information good can increase the profits from the sale of legal copies if the value of legal copies to their users depends on how many other consumers use a similar good, and if the availability of pirate copies sufficiently increases the value of legal copies. In the final main chapter of the dissertation I generalize my model to network industries and use the generalized model to study in which cases the corresponding result also holds for commercial piracy.
Abstract:
Old trees growing in urban environments are often felled due to symptoms of mechanical defects that could be hazardous to people and property. The decisions concerning these removals are justified by risk assessments carried out by tree care professionals. The major motivation for this study was to determine the most common profiles of potential hazard characteristics for the three most common urban tree genera in Helsinki City: Tilia, Betula and Acer, and in this way improve management practices and the protection of old amenity trees. For this research, material from approximately 250 urban trees was collected in cooperation with the City of Helsinki Public Works Department during 2001-2004. Of the total number of trees sampled, approximately 70% were defined as hazardous. Each tree genus showed characteristic potential hazard profiles. For Tilia trees, hollowed heartwood with low fungal activity and advanced decay caused by Ganoderma lipsiense were the two most common profiles. In Betula spp., the primary reason for tree removal was usually lowered amenity value in terms of decline of the crown. Internal cracks, most often due to weak fork formation, were common causes of potential failure in Acer spp. Decay caused by Rigidoporus populinus often increased the risk of stem breakage in these Acer trees. Of the decay fungi observed, G. lipsiense was most often the reason for an increased risk of stem collapse. Other fungi that also caused extensive decay were R. populinus, Inonotus obliquus, Kretzschmaria deusta and Phellinus igniarius. The most common decay fungi in terms of incidence were Pholiota spp., but decay caused by these species had a low potential for causing stem breakage, because it rarely extended to the cambium. The various evaluations used in the study suggested contradictions in felling decisions for trees displaying different stages of decay. For the protection of old urban trees, it is crucial to develop monitoring methods so that tree care professionals can better analyse the rate of decay progression towards the sapwood and separate trees with decreasing amounts of sound wood from those with decay restricted to the heartwood area.
Abstract:
This thesis aims at establishing the role of the deposit insurance scheme and the central bank (CB) in keeping the banking system safe. The thesis also studies the factors associated with long-lasting banking crises. The first essay analyzes the effect of using an explicit deposit insurance scheme (EDIS), instead of an implicit deposit insurance scheme (IDIS), on banking crises. The panel data for the period 1980-2003 include all countries for which data on EDIS or IDIS exist. 70% of the countries in the sample are less developed countries (LDCs), and about 55% of the countries adopting EDIS also come from LDCs. The major finding is that the use of EDIS increases the crisis probability at a strong significance level. This probability is greater if the EDIS is inefficiently designed, allowing more scope for moral hazard; specifically, it is greater if the EDIS provides higher coverage of deposits and if it is less powerful from the legal point of view. The study also finds that the less developed a country is in its capacity to handle EDIS, the higher the probability of a banking crisis. Once the underdevelopment of the economy operating the EDIS is controlled for, the EDIS on its own is no longer a significant factor in banking crises. The second essay aims at determining whether a country's powerful CB can lessen the instability of the banking sector by minimizing the likelihood of a banking crisis. The data used include indicators of CB autonomy for a set of countries over the period 1980-89. The study finds that, in aggregate, a more powerful CB lessens the probability of banking crisis. When the CB's authority is disentangled with respect to its responsibilities, the study finds that a longer tenure for the CB's chief executive officer and greater CB power in setting the interest rate on government loans are necessary for reducing the probability of banking crisis. The study also finds that the probability of crisis falls further if an autonomous CB can perform its duties in a country with a stronger law and order tradition. The costs of long-lasting banking crises are high because both depositors and investors lose confidence in the banking system. For a rapid recovery from a crisis, the government very often undertakes one or more crisis resolution policy (CRP) measures. The third essay examines the CRP measures and other explanatory variables correlated with the duration of banking crises. The major finding is that the CRP measure allowing regulatory forbearance to keep insolvent banks operating and the public debt relief program are, respectively, strongly and weakly significant in increasing the duration of crises. Some other explanatory variables, which previous studies found to be related to the probability of a crisis occurring, are also correlated with the duration of crises.
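The abstract does not state the estimator, but crisis-probability studies on panel data of this kind are typically run as binary-response (e.g. logit) models; the following is only an assumed, illustrative form with hypothetical notation, not the thesis's own specification:

```latex
\Pr(\text{crisis}_{it} = 1)
  \;=\; \Lambda\!\left(\beta_0 + \beta_1\,\text{EDIS}_{it} + \gamma' X_{it}\right),
\qquad
\Lambda(z) = \frac{e^{z}}{1 + e^{z}},
```

where EDIS_{it} indicates an explicit scheme in country i and year t, and X_{it} collects macroeconomic and institutional controls. The reported finding corresponds to a positive and strongly significant beta_1, which loses significance once a control for the economy's level of development is included.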
Abstract:
The dissertation consists of four essays that study questions in empirical labour economics. The first essay examines the effect of the level of unemployment benefits on re-employment in Finland. In 2003 the earnings-related unemployment benefit was raised for workers with a long employment history. The increase was on average 15% and applied to the first 150 days of unemployment. The study estimates the effect of the increase by comparing re-employment probabilities between the group that received the increase and a comparison group, before and after the reform. The results indicate that the benefit increase lowered the probability of re-employment significantly, on average by about 16%. The effect of the increase is largest at the beginning of the unemployment spell and disappears when entitlement to the increased earnings-related benefit ends. The second essay studies the long-term costs of unemployment in Finland, focusing on the deep recession of 1991-1993. During the recession plant closures increased substantially and the unemployment rate rose by more than 13 percentage points. The study compares prime working-age men who became unemployed because of a plant closure during the recession with those who remained employed. The effect of unemployment is examined over a six-year follow-up period. In 1999 the annual earnings of the group that had experienced unemployment during the recession were on average 25% lower than in the comparison group. The earnings loss was due to both lower employment and a lower wage level. The third essay examines the unemployment problem caused by the Finnish recession of the early 1990s by studying the individual-level determinants of unemployment duration. The focus is on how changes in the composition of the unemployed and in labour demand affected the average duration. It is often assumed that those who become unemployed in a recession have below-average employment prospects, which would in itself lengthen the average duration of unemployment. The results indicate that the macro-level demand effect was central to the duration of unemployment, and compositional changes had only a small duration-increasing effect during the recession. The final essay studies the effect of business cycle fluctuations on the incidence of workplace accidents. The study uses Swedish individual-level hospitalisation data linked to a population database. The data make it possible to examine alternative explanations for the increase in accidents during booms, which has been suggested to result from, for example, stress or haste. The results indicate that workplace accidents are cyclical, but only for certain groups. Variation in the composition of the workforce may explain part of the cyclicality of women's accidents. For men, only less severe accidents are cyclical, which may be due to strategic behaviour.
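The before/after comparison between the treated and comparison groups in the first essay amounts to a difference-in-differences design; as a sketch with assumed notation (not taken from the thesis):

```latex
\hat{\delta}_{\mathrm{DiD}}
  \;=\; \left(\bar{y}^{\,\mathrm{treated}}_{\mathrm{after}} - \bar{y}^{\,\mathrm{treated}}_{\mathrm{before}}\right)
  \;-\; \left(\bar{y}^{\,\mathrm{comparison}}_{\mathrm{after}} - \bar{y}^{\,\mathrm{comparison}}_{\mathrm{before}}\right),
```

where each term is the average re-employment probability of a group in a period; the reported average effect of about -16% corresponds to this difference-in-differences estimate.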
Abstract:
This licentiate's thesis analyzes the macroeconomic effects of fiscal policy in a small open economy under a flexible exchange rate regime, assuming that the government spends exclusively on domestically produced goods. The motivation for this research comes from the observation that the literature on the new open economy macroeconomics (NOEM) has focused almost exclusively on two-country global models, while analyses of the effects of fiscal policy on small economies have been almost completely ignored. This thesis aims at filling that gap in the NOEM literature and illustrates how the macroeconomic effects of fiscal policy in a small open economy depend on the specification of preferences. The research method is to present two theoretical models that are extensions of the model contained in the Appendix to Obstfeld and Rogoff (1995). The first model analyzes the macroeconomic effects of fiscal policy using a framework in which private and government consumption enter private utility as substitutes. The model offers intuitive predictions on how the effects of fiscal policy depend on the marginal rate of substitution between private and government consumption. The findings illustrate that the higher the substitutability between private and government consumption, (i) the bigger the crowding-out effect on private consumption and (ii) the smaller the positive effect on output. The welfare analysis shows that the higher the marginal rate of substitution between private and government consumption, the less fiscal policy decreases welfare. The second model of the thesis studies how the macroeconomic effects of fiscal policy depend on the elasticity of substitution between traded and nontraded goods. This model reveals that this elasticity is a key variable in explaining the exchange rate, current account and output responses to a permanent rise in government spending. Finally, the model demonstrates that temporary changes in government spending are an effective stabilization tool when used wisely and in a timely manner in response to undesired fluctuations in output: such fluctuations can be perfectly offset by an opposite change in government spending without causing any side-effects.
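One common way to capture the substitutability discussed above (shown here only as an illustrative assumption; the thesis's exact preference specification may differ) is to let utility depend on an effective consumption bundle that aggregates private consumption C and government consumption G:

```latex
\tilde{C}_t \;=\; \Big[\, C_t^{\frac{\rho-1}{\rho}} \;+\; \lambda\, G_t^{\frac{\rho-1}{\rho}} \,\Big]^{\frac{\rho}{\rho-1}},
```

where rho governs the substitutability between private and government consumption and lambda is a weight. The closer substitutes the two are (higher rho), the more an increase in G displaces C, which is the comparative-statics pattern described in the abstract.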
Abstract:
Dispersal is a highly important life history trait. In fragmented landscapes the long-term persistence of populations depends on dispersal. The evolution of dispersal is shaped by its costs and benefits, and these may differ between landscapes, resulting in differences in the strength and direction of natural selection on dispersal in fragmented landscapes. Dispersal has been shown to be a nonrandom process that is associated with traits such as flight ability in insects. This thesis examines genetic and physiological traits affecting dispersal in the Glanville fritillary butterfly (Melitaea cinxia). Flight metabolic rate is a repeatable trait representing flight ability. Unlike in many vertebrates, resting metabolic rate cannot be used as a surrogate of maximum metabolic rate, as no strong correlation between the two was found in the Glanville fritillary. Resting and flight metabolic rates are affected by environmental variables, most notably temperature; however, only flight metabolic rate has a strong genetic component. Molecular variation in the much-studied candidate locus phosphoglucose isomerase (Pgi), which encodes the glycolytic enzyme PGI, affects carbohydrate metabolism in flight. This effect is temperature dependent: at low to moderate temperatures, individuals heterozygous at the single nucleotide polymorphism (SNP) AA111 have a higher flight metabolic rate than the common homozygous genotype, while at high temperatures the situation is reversed. This finding suggests that variation in enzyme properties is indeed translated into organismal performance. High-resolution data on individual female Glanville fritillaries moving freely in the field were recorded using harmonic radar. There was a strong positive correlation between flight metabolic rate and dispersal rate. Flight metabolic rate explained one third of the observed variation in the one-hour movement distance. A fine-scaled analysis of mobility showed that mobility peaked at intermediate ambient temperatures, but the two common Pgi genotypes differed in their reaction norms to temperature. As with flight metabolic rate, heterozygotes at SNP AA111 were the most active genotype at low to moderate temperatures. The results show that molecular variation is associated with variation in dispersal rate through the link of flight physiology, under the influence of environmental conditions. The evolutionary pressures for dispersal differ between males and females. The effect of flight metabolic rate on dispersal was examined in both sexes in field and laboratory conditions. The relationship between flight metabolic rate and dispersal rate in the field, and flight duration in the laboratory, differed between the two sexes. In females the relationship was positive, but in males the longest distances and flight durations were recorded for individuals with low flight metabolic rate. These findings may reflect male investment in mate locating. Instead of dispersing, males with high flight metabolic rate may establish territories and follow a perching strategy when locating females, and hence move less at the landscape level. Males with low metabolic rate may be forced to disperse due to low competitive success, or may show adaptations to an alternative strategy: patrolling. In the light of life-history trade-offs and the rate-of-living theory, a high metabolic rate might be expected to carry a cost in the form of a shortened lifespan.
Experiments relating flight metabolic rate to longevity showed a clear correlation in the opposite direction: high flight metabolic rate was associated with a long lifespan. This suggests that individuals with a high metabolic rate do not pay an extra physiological cost for their high flight capacity; rather, there are positive correlations between different measures of fitness. These results highlight the importance of individual condition.
Abstract:
Mutation and recombination are the fundamental processes leading to genetic variation in natural populations. This variation forms the raw material for evolution through natural selection and drift. Therefore, studying mutation rates may reveal information about evolutionary histories as well as the phylogenetic interrelationships of organisms. In this thesis two molecular tools, DNA barcoding and the molecular clock, were examined. In the first part, the efficiency of mutations in delineating closely related species was tested and the implications for conservation practices were assessed. The second part investigated the proposition that a constant mutation rate exists within invertebrates, in the form of a metabolic-rate dependent molecular clock, which can be applied to accurately date speciation events. DNA barcoding aspires to be an efficient technique not only to distinguish between species but also to reveal population-level variation, relying solely on mutations found on a short stretch of a single gene. In this thesis barcoding was applied to discriminate between Hylochares populations from Russian Karelia and new Hylochares findings from the greater Helsinki region in Finland. Although barcoding failed to delineate the two reproductively isolated groups, their distinct morphological features and differing life-history traits led to their classification as two closely related, although separate, species. The lack of genetic differentiation appears to be due to a recent divergence event not yet reflected in the beetles' molecular make-up. Thus, the Russian Hylochares was described as a new species. The Finnish species, previously considered locally extinct, was recognized as endangered. Even if, owing to their identical genetic make-up, the populations had been regarded as conspecific, conservation strategies based on prior knowledge from Russia would not have guaranteed the survival of the Finnish beetle. Therefore, new conservation actions based on detailed studies of the biology and life history of the Finnish Hylochares were conducted to protect this endemic rarity in Finland. The idea behind the strict molecular clock is that mutation rates are constant over evolutionary time and may thus be used to infer species divergence dates. However, one of the most recent theories argues that a strict clock does not tick per unit of time but rather has a constant substitution rate per unit of mass-specific metabolic energy. According to this hypothesis, molecular clocks have to be recalibrated taking body size and temperature into account. This thesis tested the temperature effect on mutation rates in equally sized invertebrates. For the first dataset (family Eucnemidae, Coleoptera) the phylogenetic interrelationships and evolutionary history of the genus Arrhipis had to be inferred before the influence of temperature on substitution rates could be studied. A second, larger invertebrate dataset (family Syrphidae, Diptera) was then employed. Several methodological approaches, a number of genes and multiple molecular clock models revealed no consistent relationship between temperature and mutation rate for the taxa under study. Thus, the body size effect, observed in vertebrates but controversial for invertebrates, rather than temperature, may be the underlying driving force behind the metabolic-rate dependent molecular clock. The metabolic-rate dependent molecular clock therefore does not hold for the invertebrate groups studied here.
This thesis emphasizes that molecular techniques relying on mutation rates have to be applied with caution. Whereas they may work satisfactorily under certain conditions for specific taxa, they may fail for others. The molecular clock as well as DNA barcoding should incorporate all the information and data available to obtain comprehensive estimates of existing biodiversity and its evolutionary history.
Abstract:
Atrial fibrillation is the most common arrhythmia requiring treatment. This thesis investigated atrial fibrillation (AF) with a specific emphasis on atrial remodeling, which was analysed from epidemiological, clinical and magnetocardiographic (MCG) perspectives. In the first study we evaluated, in real-life clinical practice, a population-based cohort of AF patients referred for their first elective cardioversion (CV). 183 consecutive patients were included, and sinus rhythm (SR) was restored in 153 (84%) of them. Only 39 (25%) of those maintained SR for one year. Shorter duration of AF and the use of sotalol were the only characteristics associated with better restoration and maintenance of SR. During the one-year follow-up 40% of the patients ended up in permanent AF. Female gender and older age were associated with the acceptance of permanent AF. The LIFE trial was a prospective, randomised, double-blinded study that evaluated losartan and atenolol in patients with hypertension and left ventricular hypertrophy (LVH). Of the 8,851 patients with SR at baseline and without a history of AF, 371 developed new-onset AF during the study. Patients with new-onset AF had an increased risk of cardiac events and stroke, and an increased rate of hospitalisation for heart failure. Younger age, female gender, lower systolic blood pressure, lesser LVH in the ECG and randomisation to losartan therapy were independently associated with a lower frequency of new-onset AF. The impact of AF on morbidity and mortality was evaluated in a post-hoc analysis of the OPTIMAAL trial, which compared losartan with captopril in patients with acute myocardial infarction (AMI) and evidence of LV dysfunction. Of the 5,477 randomised patients, 655 had AF at baseline, and 345 developed new AF during the follow-up period (median 3.0 years). Older patients and patients with signs of more serious heart disease more often had AF at baseline and more often developed it during follow-up. Patients with AF at baseline had an increased risk of mortality (hazard ratio (HR) 1.32) and stroke (HR 1.77). New-onset AF was associated with increased mortality (HR 1.82) and stroke (HR 2.29). In the fourth study we assessed the reproducibility of our MCG method. This method was used in the fifth study, in which 26 patients with persistent AF had, immediately after CV, longer P-wave duration and higher energy of the last portion of the atrial signal (RMS40) in MCG, increased P-wave dispersion in SAECG, and decreased atrial pump function as well as enlarged atrial diameter in echocardiography compared with age- and disease-matched controls. After one month in SR, P-wave duration in MCG remained longer and left atrial (LA) diameter greater than in the controls, while the other measurements had returned to the level of the control group. In conclusion, AF is not a rare condition in either the general population or in patients with hypertension or AMI, and it is associated with an increased risk of morbidity and mortality. Therefore, atrial remodeling, which increases the likelihood of AF and also seems to be relatively stable, has to be identified and prevented. MCG was found to be an encouraging new method for studying electrical atrial remodeling and reverse remodeling. RAAS-suppressing medications appear to be the most promising means of preventing atrial remodeling and AF.
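For reference, the abstract does not specify the survival model behind these figures, but hazard ratios of this kind are conventionally obtained from a proportional hazards specification; as an assumed illustration:

```latex
h(t \mid x) \;=\; h_0(t)\, e^{\beta x},
\qquad
\mathrm{HR} \;=\; e^{\beta},
```

so that, for example, HR = 1.32 for baseline AF would correspond to a 32% higher instantaneous risk of death relative to patients without AF, other covariates held equal.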
Abstract:
Genetic susceptibility to juvenile idiopathic arthritis (JIA) was studied in the genetically homogeneous Finnish population by collecting families with two or three patients affected by this disease from cases seen in the Rheumatism Foundation Hospital. The number of families ranged in different studies from 37 to 45, and the total number of patients with JIA from among whom these cases were derived was 2,000 to 2,300. Characteristics of the disease in affected siblings in Finland were compared with a population-based series and with a sibling series from the United States. A thorough clinical and ophthalmological examination was made of all affected patients belonging to the sibpair series. Information on the occurrence of chronic rheumatic diseases in the parents was collected by questionnaire, and diagnoses were confirmed from hospital records. All patients, their parents and most of the healthy siblings were typed for human leukocyte antigen (HLA) alleles at loci A, C, B, DR and DQ. The HLA allele distribution of the cases was compared with corresponding data from Finnish bone marrow donors. The genetic component in JIA was found to be more significant than previously believed. A concordance rate of 25% for a disease with a population prevalence of 1 per 1,000 implied a relative risk of 250 for a monozygotic (MZ) twin. The risk to a sibling of an affected individual was estimated to be about 15- to 20-fold. The disease was basically similar in familial and sporadic cases; the mean age at disease onset was, however, lower in familial cases (4.8 vs 7.4 years). Three sibpairs (3.4 expected) were concordant for the presence of asymptomatic uveitis. Uveitis would thus not appear to have any genetic component of its own, separate from the genetic basis of JIA. Four of the parents had JIA (0.2 cases expected), four had a type of rheumatoid factor-negative arthritis similar to that seen in juvenile patients but commencing in adulthood, and one had spondyloarthropathy (SPA). These findings provide additional support for the concept of a genetic predisposition to JIA and suggest the existence of a new disease entity, JIA of adult onset. Both the linkage analysis of the affected sibpairs and the association analysis of nuclear families provided overwhelming evidence of a major contribution of HLA to the genetic susceptibility to JIA. The association analysis in the Finnish population confirmed that the most significant associations were with DRB1*0801 and DQB1*0402, as expected from previous observations, and indicated an independent role for Cw*0401.
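The relative-risk figure quoted above follows directly from the reported numbers, taking the concordance rate as the risk to the MZ co-twin of an affected individual:

```latex
\lambda_{\mathrm{MZ}} \;=\; \frac{\text{risk to MZ co-twin}}{\text{population prevalence}}
\;=\; \frac{0.25}{0.001} \;=\; 250 .
```

The 15- to 20-fold sibling risk is the analogous ratio, computed with the observed sibling recurrence risk in place of the MZ concordance rate.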
Abstract:
Mikael Juselius' doctoral dissertation covers a range of significant issues in modern macroeconomics by empirically testing a number of important theoretical hypotheses. The first essay presents indirect evidence, within the framework of the cointegrated VAR model, on the elasticity of substitution between capital and labor, using Finnish manufacturing data. Instead of estimating the elasticity of substitution from the first-order conditions, he develops a new approach that utilizes a CES production function in a model with a three-stage decision process: investment in the long run, wage bargaining in the medium run, and price and employment decisions in the short run. He estimates the elasticity of substitution to be below one. The second essay tests the restrictions implied by the core equations of the New Keynesian Model (NKM) in a vector autoregressive (VAR) model, using both euro area and U.S. data. Both the New Keynesian Phillips curve and the aggregate demand curve are estimated and tested. The restrictions implied by the core equations of the NKM are rejected on both U.S. and euro area data. These results are important for further research. The third essay is methodologically similar to the second, but it concentrates on Finnish macro data, adopting the theoretical framework of an open economy. Juselius' results suggest that the open-economy NKM framework is too stylized to provide an adequate explanation for Finnish inflation. The final essay provides a macroeconometric model of Finnish inflation and its associated explanatory variables, and estimates the relative importance of different inflation theories. His main finding is that Finnish inflation is primarily determined by excess demand in the product market and by changes in the long-term interest rate. This study is part of the research agenda carried out by the Research Unit of Economic Structure and Growth (RUESG). The aim of RUESG is to conduct theoretical and empirical research on important issues in industrial economics, real option theory, game theory, organization theory and the theory of financial systems, as well as to study problems in labor markets, macroeconomics, natural resources, taxation and time series econometrics. RUESG was established at the beginning of 1995 and is one of the National Centers of Excellence in research selected by the Academy of Finland. It is financed jointly by the Academy of Finland, the University of Helsinki, the Yrjö Jahnsson Foundation, the Bank of Finland and the Nokia Group. This support is gratefully acknowledged.
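For context, the CES production function underlying the first essay has the familiar general form (the parameterization below is the textbook one, assumed here for illustration rather than taken from the dissertation):

```latex
Y \;=\; A\left[\alpha\, K^{\frac{\sigma-1}{\sigma}} + (1-\alpha)\, L^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
```

where sigma is the elasticity of substitution between capital K and labor L. The essay's estimate places sigma below one, i.e. capital and labor are gross complements rather than close substitutes.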
Abstract:
Modeling and forecasting implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require an accurate volatility estimate. The task has, however, become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options display two patterns, the volatility smirk (skew) and the volatility term structure, which, examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of options markets. The first essay models the dynamics of the IVS, extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness defined in terms of the implied forward price and OTM put and call options on the FTSE100 index, a nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. It is then found that three factors can explain about 69-88% of the variance in the IVS: on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, a generalization of the Hibbert et al. (2008) model. The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions; the relationship increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%), so OLS underestimates it at the upper quantiles. Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices are ranked in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index in forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; implied-volatility-based VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across the factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks, while, surprisingly, GBP does not affect them. Second, the calibration results for the string market model show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
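As background for the first essay, the DFW (1998) approach fits the IVS with a low-order polynomial in moneyness and maturity; a representative specification (illustrative notation, not necessarily the exact model estimated in the thesis) is

```latex
\sigma_{\mathrm{imp}}(M,\tau) \;=\; a_0 + a_1 M + a_2 M^2 + a_3 \tau + a_4 \tau^2 + a_5 M\tau ,
```

where M is moneyness (here measured against the implied forward price) and tau is time to maturity; the constant-volatility benchmark corresponds to restricting a_1 = ... = a_5 = 0.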
Abstract:
The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps: first, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. The results indicate that the information content of implied volatility is useful for predicting future large returns in the underlying asset. Further, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
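The two-step calculation can be sketched in a few lines of code. The fragment below is only an illustrative, assumption-laden example: it prices a single European call with Black-Scholes, draws normally distributed scenario returns (whereas the paper argues for a hyperbolic distribution), and uses hypothetical parameter values; it is not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call option."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def single_sum_risk(S0, K, T, r, implied_vol, horizon=1 / 252,
                    n_scenarios=100_000, alpha=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: generate market scenarios and revalue the option portfolio.
    # Normally distributed log-returns are an illustrative assumption only;
    # a heavy-tailed (e.g. hyperbolic) scenario generator could be swapped in.
    rets = rng.normal(0.0, implied_vol * np.sqrt(horizon), n_scenarios)
    scenario_prices = S0 * np.exp(rets)
    value_now = bs_call(S0, K, T, r, implied_vol)
    value_scen = bs_call(scenario_prices, K, T - horizon, r, implied_vol)
    pnl = value_scen - value_now
    # Step 2: summarize the scenario distribution into a single figure,
    # here the loss at the lower alpha-quantile (a VaR-like number).
    return -np.quantile(pnl, alpha)

# Hypothetical inputs purely for demonstration.
print(single_sum_risk(S0=100.0, K=100.0, T=0.25, r=0.02, implied_vol=0.20))
```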
Abstract:
The low predictive power of implied volatility in forecasting subsequently realized volatility is a well-documented empirical puzzle. As suggested by e.g. Feinstein (1989), Jackwerth and Rubinstein (1996), and Bates (1997), we test whether unrealized expectations of jumps in volatility could explain this phenomenon. Our findings show that expectations of infrequently occurring jumps in volatility are indeed priced in implied volatility. This has two important consequences. First, implied volatility is actually expected to exceed realized volatility over long periods of time, only to fall far below realized volatility during infrequently occurring periods of very high volatility. Second, the slope coefficient in the classic forecasting regression of realized volatility on implied volatility is very sensitive to the discrepancy between ex ante expected and ex post realized jump frequencies. If the in-sample frequency of positive volatility jumps is lower than assessed ex ante by the market, the classic regression test tends to reject the hypothesis of informational efficiency even if markets are informationally efficient.
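The classic forecasting regression referred to above is usually written (notation assumed here) as

```latex
RV_t \;=\; \alpha + \beta \, IV_t + \varepsilon_t ,
```

where RV_t is the volatility realized over the remaining life of the option and IV_t is the implied volatility observed at the start of the period. Informational efficiency is typically tested as the joint hypothesis alpha = 0 and beta = 1; the paper's point is that unrealized jump expectations push the estimated slope away from one even when markets are efficient.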
Abstract:
The majority of Internet traffic uses the Transmission Control Protocol (TCP) as the transport-level protocol. It provides a reliable, ordered byte stream for applications. However, applications such as live video streaming place an emphasis on timeliness over reliability, and a smooth sending rate can be preferable to sharp changes in the sending rate. For these applications TCP is not necessarily suitable. Rate control attempts to address the demands of such applications. An important design requirement for all rate control mechanisms is TCP friendliness: a new mechanism should not negatively impact TCP performance, since TCP is still the dominant protocol. Rate control mechanisms are classified into two groups: window-based mechanisms and rate-based mechanisms. Window-based mechanisms increase their sending rate after a successful transfer of a window of packets, similarly to TCP, and typically decrease their sending rate sharply after a packet loss. Rate-based solutions control their sending rate in some other way. A large subset of rate-based solutions is called equation-based: such solutions have a control equation that yields an allowed sending rate (a widely cited example is shown below). Typically these rate-based solutions react more slowly to both packet losses and increases in available bandwidth, making their sending rate smoother than that of window-based solutions. This report contains a survey of rate control mechanisms and a discussion of their relative strengths and weaknesses. A section is dedicated to enhancements for wireless environments. Another topic of the report is bandwidth estimation, which is divided into capacity estimation and available bandwidth estimation. We describe techniques that enable the calculation of a fair sending rate that can be used to create novel rate control mechanisms.
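As an example of a control equation of the kind described above (the report's own examples are not reproduced in this abstract), TCP-Friendly Rate Control (TFRC, RFC 3448) computes its allowed sending rate from the TCP throughput equation:

```latex
X \;=\; \frac{s}{\;R\sqrt{\dfrac{2bp}{3}} \;+\; t_{\mathrm{RTO}}\left(3\sqrt{\dfrac{3bp}{8}}\right) p\,\bigl(1 + 32p^{2}\bigr)\;},
```

where X is the allowed transmit rate, s the packet size, R the round-trip time, p the loss event rate, b the number of packets acknowledged by a single ACK, and t_RTO the retransmission timeout. The sender recomputes X as its measurements of R and p change, which is what gives equation-based mechanisms their comparatively smooth sending rate.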