990 results for Index options


Relevance:

30.00%

Publisher:

Abstract:

Breeding tools for improving the milk coagulation ability of dairy cows. The doctoral thesis investigated how the cheese-making quality of dairy cows' milk can be improved through selective breeding. The topic is important because an ever larger share of milk is used for cheese making. The study focused on the coagulation ability of milk, as it is one of the key factors affecting cheese yield. Milk coagulation ability varied considerably between cows, sires, herds, breeds, and stages of lactation. Although there were large between-herd differences in the coagulation ability of bulk tank milk, herd explained only a small part of the total variation in coagulation ability. Genetic differences between cows most likely explain most of the differences observed in the coagulation ability of herd bulk tank milk. Good management and feeding nevertheless reduced, to some extent, the proportion of poorly coagulating bulk tank milk within herds. Holstein-Friesian cows had better coagulation ability than Ayrshire cows. Poor coagulation and non-coagulation were only a minor problem in Holstein-Friesians (10%), whereas a third of Ayrshire cows produced poorly coagulating or non-coagulating milk. Milk is considered poorly coagulating when the curd is not firm enough to be cut half an hour after rennet addition. Milk classified as non-coagulating does not coagulate at all within half an hour and is therefore very poor raw material for cheese dairies. About 40% of the differences between cows in milk coagulation ability were explained by genetic factors, so coagulation ability can be regarded as a highly heritable trait. Three measurements per cow are quite sufficient for estimating a cow's average milk coagulation ability. At present, however, direct selection for coagulation ability is hampered by the lack of an automated measuring device suitable for large-scale use. For this reason, the thesis investigated the possibilities of improving milk coagulation ability indirectly, through some other trait. Such a trait must be sufficiently strongly genetically correlated with coagulation ability for indirect selection to be feasible. The traits examined were milk yield and udder health traits, which are already included in the total merit index of sires, as well as milk protein and casein content and milk pH, which are not included in the total merit index. The thesis also examined the possibilities of marker-assisted selection by studying the heritability of non-coagulation of milk and mapping the chromosomal regions associated with it. Based on the results, breeding for udder health also improves milk coagulation ability to some extent and reduces non-coagulation in Ayrshire cows. In contrast, milk yield and milk coagulation ability (or non-coagulation) are genetically independent traits. The genetic correlation of milk protein and casein content with coagulation ability was likewise close to zero. There was a fairly strong genetic correlation between milk pH and coagulation ability, so selection for milk pH would also improve coagulation ability; it would, however, probably not reduce the number of cows producing non-coagulating milk. Because non-coagulating milk is such a common problem in Finnish Ayrshire cows, the thesis examined the background of the phenomenon in more detail. In all three study data sets, about 10% of Ayrshire cows produced non-coagulating milk.
During two years of monthly monitoring, some cows produced non-coagulating milk at almost every measurement. Non-coagulation of milk was associated with the stage of lactation, but none of the environmental factors could fully explain it. Instead, the evidence for its heritability strengthened as the studies progressed. Finally, the research group succeeded in mapping the chromosomal regions causing non-coagulation to chromosomes 2 and 18, close to the DNA markers BMS1126 and BMS1355. Based on these results, non-coagulation of milk is not linked to the casein genes that play a central role in milk coagulation. It is possible, instead, that the problem is caused by errors occurring in the processing of the caseins after their synthesis (post-translational modification). This, however, requires thorough further study. Based on the results of the thesis, culling animals carrying the non-coagulation gene from the breeding stock would be the most effective way to improve milk coagulation ability in the Finnish dairy cattle population.
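For readers less familiar with the terminology, the heritability referred to above is the share of phenotypic variation attributable to additive genetic effects; in standard quantitative-genetic notation (background, not a formula quoted from the thesis):

\[ h^2 = \frac{\sigma^2_A}{\sigma^2_P} = \frac{\sigma^2_A}{\sigma^2_A + \sigma^2_E}, \]

so the roughly 40% of between-cow variation explained by genetic factors corresponds to a heritability of about 0.4.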

Relevance:

30.00%

Publisher:

Abstract:

Terminal heart failure can be the cause or the result of major dysfunctions of the organism. Although the outcome of the natural history is the same in both situations, it is of prime importance to differentiate the two, as only heart failure as the primary cause allows for successful mechanical circulatory support as a bridge to transplantation or to recovery. Various objective parameters allow the diagnosis of terminal heart failure to be established despite optimal medical treatment. A cardiac index <2.0 l/min/m² and a mixed venous oxygen saturation <60%, in combination with progressive renal failure, should trigger a diagnostic work-up in order to identify cardiac defects that can be corrected or to list the patient for transplantation with or without mechanical circulatory support.
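For reference, the cardiac index used as a threshold above is cardiac output normalized by body surface area (a standard definition, not specific to this article):

\[ \mathrm{CI} = \frac{\mathrm{CO}}{\mathrm{BSA}} \quad [\mathrm{L\,min^{-1}\,m^{-2}}], \]

which is why the 2.0 threshold is expressed per square metre of body surface area.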

Relevance:

30.00%

Publisher:

Abstract:

Confounding bias is a major challenge in observational studies, especially when it is induced by characteristics that are difficult, or even impossible, to measure in administrative health-care databases. One confounding bias frequently present in pharmacoepidemiologic studies is prescription channeling, which occurs when the choice of treatment depends on the patient's health status and/or prior experience with the various therapeutic options. Among the methods used to control this bias is the comorbidity score, which characterizes a patient's health status from the medications dispensed or the medical diagnoses reported in physician billing data. The performance of comorbidity scores is, however, controversial, as it appears to vary substantially with the population of interest. The objectives of this thesis were to develop, validate, and compare the performance of two comorbidity scores (one predicting death and the other predicting institutionalization), developed from the pharmaceutical services databases of the Régie de l'assurance-maladie du Québec (RAMQ) for use in the elderly population. The thesis also aimed to determine whether including characteristics that are unreported, or poorly recorded, in administrative databases (socio-demographic characteristics, mental health or sleep disorders) improves the performance of comorbidity scores in the elderly population. A nested case-control study was carried out. The source cohort consisted of a random sample of 87,389 community-dwelling elderly people, divided into a development cohort (n=61,172; 70%) and a validation cohort (n=26,217; 30%). The data were obtained from the RAMQ databases. To be included in the study, subjects had to be aged 66 years or older and be members of the Quebec public drug insurance plan between January 1, 2000 and December 31, 2009. The scores were developed using the Framingham Heart Study method, and their performance was evaluated with the c-statistic and the area under the receiver operating characteristic (ROC) curve. For the last objective, documenting the impact of adding variables that are unmeasured, or poorly recorded, in the databases to the comorbidity score, a prospective cohort study (2005-2008) was carried out. The study population and data came from the Étude sur la Santé des Aînés (n=1,494). The variables of interest included marital status, social support, and the presence of mental health and sleep disorders. As described in Article 1, the Geriatric Comorbidity Score (GCS), based on death, was developed and showed good performance (c-statistic = 0.75; 95% CI 0.73-0.78). This performance was superior to that of the Chronic Disease Score (CDS) when applied to the study population (CDS c-statistic: 0.47; 95% CI 0.45-0.49). An exhaustive literature review showed that the factors associated with death were very different from those associated with institutionalization, justifying the development of a specific score to predict the risk of institutionalization. The performance of the latter was not statistically different from that of the death score (institutionalization c-statistic: 0.79; 95% CI 0.77-0.81).
Including variables not reported in administrative databases improved the performance of the death score by only 11%, with marital status and social support contributing most to the observed improvement. In conclusion, this thesis makes three major contributions. First, it was shown that the performance of death-based comorbidity scores depends on the target population, hence the value of the Geriatric Comorbidity Score, which was developed for the community-dwelling elderly population. Second, the medications associated with the risk of institutionalization differ from those associated with the risk of death in the elderly population, justifying the development of two distinct scores; the performances of the two scores are, however, similar. Finally, the results indicate that, in the elderly population, the absence of certain characteristics does not substantially compromise the performance of comorbidity scores derived from prescription claims databases. Comorbidity scores therefore remain an important research tool for observational studies.
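As a small illustration of the validation metric reported above, the sketch below computes a c-statistic (area under the ROC curve) for a hypothetical comorbidity score against a simulated binary outcome. The data, the score, and the assumed link between them are invented placeholders; this is not the RAMQ data nor the actual GCS weighting.

```python
# Minimal sketch: c-statistic (ROC AUC) of a comorbidity score against an outcome.
# All values are simulated toy data, not RAMQ records.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
risk_score = rng.normal(size=n)                      # hypothetical comorbidity score
p_death = 1 / (1 + np.exp(-(risk_score - 1.0)))      # assumed link between score and outcome
died = rng.binomial(1, p_death)                      # simulated 1-year mortality

c_stat = roc_auc_score(died, risk_score)             # probability a case ranks above a non-case
print(f"c-statistic: {c_stat:.2f}")
```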

Relevance:

30.00%

Publisher:

Abstract:

The paper unfolds the paradox that exists in the tribal community with respect to development indicators and seeks to draw out the differences in the standard of living of the tribes within a dichotomous framework of forward and backward. Four variables have been considered for ascertaining the standard of living and socio-economic conditions of the tribes. The data for the study were obtained from a primary survey in the three tribal-predominant districts of Wayanad, Idukki and Palakkad. Wayanad was selected for studying six tribal communities (Paniya, Adiya, Kuruma, Kurichya, Urali and Kattunaika), Idukki for two communities (Malayarayan and Muthuvan) and Palakkad for one community (Irula). In total, 500 samples from 9 prominent tribal communities of Kerala were collected according to a multistage proportionate random sampling framework. The analysis highlights the disproportionate nature of socio-economic indicators within the tribes in Kerala, owing to the failure of governmental schemes and assistance meant for their empowerment. The socio-economic variables, such as education, health and livelihood, were examined alongside the standard of living index (SLI) using correlation analysis, which yields interesting inferences for policy options: highly educated tribal communities are positively correlated with high SLI and livelihood. Further, each of the SLI variables is decomposed using correlation and correspondence analysis to understand the relative standing of the nine tribal sub-communities in a three-dimensional framework of high, medium and low SLI levels. Tribes with good education and employment (Malayarayan, Kuruma and Kurichya) have a better living standard and hence can generally be termed forward tribes, whereas those with poor education, employment and living standard indicators (Paniya, Adiya, Urali, Kattunaika, Muthuvan and Irula) are categorized as backward tribes.

Relevance:

30.00%

Publisher:

Abstract:

The principal objective of this paper is to identify the relationship between the results of the Canadian policies implemented to protect female workers against the impact of globalization on the garment industry and the institutional setting in which this labour market is immersed in Winnipeg. This research paper begins with a brief summary of the institutional theory approach that sheds light on the analysis of the effects of institutions on the policy options to protect female workers of the Winnipeg garment industry. Next, this paper identifies the set of beliefs, formal procedures, routines, norms and conventions that characterize the institutional environment of the female workers of Winnipeg's garment industry. Subsequently, this paper describes the impact of free trade policies on the garment industry of Winnipeg. Afterward, this paper presents an analysis of the barriers that the institutional features of the garment sector in Winnipeg can pose to the successful achievement of policy options addressed to protect the female workforce of this sector. Three policy options are considered: ethical purchasing; training/retraining programs and social engagement support for garment workers; and protection of migrant workers through promoting and facilitating bonds between Canada's trade unions and the trade unions of the labour-sending countries. Finally, this paper concludes that the formation of isolated cultural groups inside factories; the belief that there is gender and race discrimination on the part of the garment industry management against workers; the powerless social conditions of immigrant women; the economic rationality of garment factories' managers; and the lack of political will on the part of Canada and the labour-sending countries to set effective bilateral agreements to protect migrant workers are the principal barriers that divide the actors involved in the garment industry in Winnipeg. This division among the principal actors of Winnipeg's garment industry impedes the change toward more efficient institutions and, hence, the successful achievement of policy options addressed to protect women workers.

Relevance:

30.00%

Publisher:

Abstract:

In recent decades, the increase in the levels of ultraviolet solar radiation (UVR) reaching the Earth (mainly due to the decrease in stratospheric ozone), together with the detected increase in diseases related to UVR exposure, has led to a large body of research on solar radiation in this band and its effects on humans. The ultraviolet index (UVI), which has been adopted internationally, was defined with the purpose of informing the general public about the risks of exposing bare skin to UVR and of conveying preventive messages. The UVI was initially defined as the daily maximum value. However, its use has since broadened, and it makes sense to refer to an instantaneous value or to the daily evolution of the measured, modelled, or forecast UVI. The specific UVI value is affected by the Sun-Earth geometry, clouds, ozone, aerosols, altitude, and surface albedo. High-quality UVI measurements are essential as a reference and for studying long-term trends; accurate modelling techniques are also needed in order to understand the factors that affect UVR, to forecast the UVI, and as quality control for the measurements. The most accurate UVI measurements are expected to come from spectroradiometers. However, since these devices are expensive, UVI data from erythemal radiometers are more common (in fact, most UVI networks are equipped with this type of sensor). The best modelling results are obtained with multiple-scattering radiative transfer models when the input information is well known. However, input information such as the aerosol optical properties is often unknown, which can lead to significant modelling uncertainties. Simpler models are often used for applications such as UVI forecasting or UVI mapping, since they are faster and require fewer input parameters. Within this framework, the general objective of this study is to analyse the agreement that can be reached between measured and modelled UVI under cloudless conditions. To this end, model-measurement comparisons are presented for different modelling techniques, different input options, and UVI measurements from both erythemal radiometers and spectroradiometers. As a general conclusion, model-measurement comparison proves very useful for detecting limitations and estimating uncertainties in both the models and the measurements. Regarding modelling, the main limitation found is the lack of knowledge of the aerosol information used as model input. Significant differences were also found between ozone measured from satellite and from the ground, which can lead to significant differences in the modelled UVI. PTUV, a new and simple parameterization for the fast calculation of UVI under cloudless conditions, has been developed on the basis of radiative transfer calculations. The parameterization performs well both with respect to the base model and in comparison with several UVI measurements. PTUV has proved useful for particular applications such as studying the annual evolution of the UVI at a specific site (Girona) and producing high-resolution maps of typical UVI values for a given territory (Catalonia).
Regarding the measurements, knowing the spectral response of erythemal radiometers is very important in order to avoid large uncertainties in the measured UVI. These instruments, when well characterized, compare well with high-quality spectroradiometers for UVI measurement. The most important measurement issues are calibration and long-term stability. A temperature effect was also observed in PTFE, a material used in the diffusers of some instruments, which could potentially have important implications in the experimental field. Finally, regarding the model-measurement comparisons, the best agreement was found when UVI measurements from high-quality spectroradiometers are used together with radiative transfer models that take into account the best available data on ozone and aerosol optical parameters and their changes over time. In this way, the agreement can be as close as 0.1% in UVI, and is typically within 3%. This agreement deteriorates markedly if aerosol information is ignored, and it depends strongly on the aerosol single-scattering albedo. Other model inputs, such as surface albedo and the ozone and temperature profiles, introduce smaller uncertainties into the modelling results.
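For reference, the UVI referred to throughout is defined internationally from the erythemally weighted solar irradiance (standard background, not a result of the thesis):

\[ \mathrm{UVI} = k_{er}\int_{250\,\mathrm{nm}}^{400\,\mathrm{nm}} E_\lambda(\lambda)\, s_{er}(\lambda)\,\mathrm{d}\lambda, \qquad k_{er} = 40\ \mathrm{m^2\,W^{-1}}, \]

where E_λ is the solar spectral irradiance at the surface and s_er is the CIE erythemal action spectrum. Broadband erythemal radiometers approximate the weighted integral directly, whereas spectroradiometers measure E_λ and the weighting is applied afterwards, which is one reason their spectral response must be well characterized.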

Relevance:

30.00%

Publisher:

Abstract:

One of the most common decisions we make is where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence supporting saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task became easier as the evidence supporting target choice increased; this was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades deviated away from the non-selected target, reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation increased with the amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade, and would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options.

Relevance:

30.00%

Publisher:

Abstract:

Permanent grassland makes up a greater proportion of the agricultural area in the UK and Ireland than in any other EU country, representing 60% and 72% of the utilised agricultural area (UAA) respectively (Eurostat 2007). Of the permanent grassland in the UK, approximately half (about 6 million hectares) comprises improved grassland on moist or free-draining neutral soils typical of lowland livestock farms. These swards tend to have low plant species richness and are typically dominated by perennial ryegrass (Lolium perenne). The aim of this paper is to review the ways in which the biodiversity of such farmland can be enhanced, focussing on the evidence behind management options in English agri-environment schemes (AES) at a range of scales and utilising a range of mechanisms.

Relevance:

30.00%

Publisher:

Abstract:

Using a water balance modelling framework, this paper analyses the effects of urban design on the water balance, with a focus on evapotranspiration and storm water. First, two quite different urban water balance models are compared: Aquacycle, which has been calibrated for a suburban catchment in Canberra, Australia, and the single-source urban evapotranspiration-interception scheme (SUES), an energy-based approach with a biophysically advanced representation of interception and evapotranspiration. A fair agreement between the two modelled estimates of evapotranspiration was significantly improved by allowing the vegetation cover (leaf area index, LAI) to vary seasonally, demonstrating the potential of SUES to quantify the links between water sensitive urban design and microclimates, and the advantage of comparing the two modelling approaches. The comparison also revealed where improvements to SUES are needed, chiefly through improved estimates of vegetation cover dynamics as input to SUES, and more rigorous parameterization of the surface resistance equations using local-scale suburban flux measurements. Second, Aquacycle is used to identify the impact of an array of water sensitive urban design features on the water balance terms. This analysis confirms the potential to passively control urban microclimate through suburban design features that maximize evapotranspiration, such as vegetated roofs. The subsequent effects on daily maximum air temperatures are estimated using an atmospheric boundary layer budget. Potential energy savings of about 2% in summer cooling are estimated from this analysis. This is a clear 'return on investment' of using water to maintain urban greenspace, whether as parks distributed throughout an urban area, individual gardens, or vegetated roofs.
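As background, water balance models of the Aquacycle type track terms of the general form below; this is a generic textbook statement of the urban water balance (an illustrative assumption, not an equation quoted from the paper):

\[ P + I = ET + R_s + R_w + \Delta S, \]

where P is precipitation, I is imported (mains) water, ET is evapotranspiration, R_s is stormwater runoff, R_w is wastewater discharge and ΔS is the change in catchment storage. Design features such as vegetated roofs shift the partitioning from R_s towards ET, which is the mechanism behind the microclimate benefit discussed above.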

Relevance:

30.00%

Publisher:

Abstract:

1. The rapid expansion of systematic monitoring schemes necessitates robust methods to reliably assess species' status and trends. Insect monitoring poses a challenge where there are strong seasonal patterns, requiring repeated counts to reliably assess abundance. Butterfly monitoring schemes (BMSs) operate in an increasing number of countries with broadly the same methodology, yet they differ in their observation frequency and in the methods used to compute annual abundance indices. 2. Using simulated and observed data, we performed an extensive comparison of two approaches used to derive abundance indices from count data collected via BMS, under a range of sampling frequencies. Linear interpolation is most commonly used to estimate abundance indices from seasonal count series. A second method, hereafter the regional generalized additive model (GAM), fits a GAM to repeated counts within sites across a climatic region. For the two methods, we estimated bias in abundance indices and the statistical power for detecting trends, given different proportions of missing counts. We also compared the accuracy of trend estimates using systematically degraded observed counts of the Gatekeeper Pyronia tithonus (Linnaeus 1767). 3. The regional GAM method generally outperforms the linear interpolation method. When the proportion of missing counts increased beyond 50%, indices derived via the linear interpolation method showed substantially higher estimation error as well as clear biases, in comparison to the regional GAM method. The regional GAM method also showed higher power to detect trends when the proportion of missing counts was substantial. 4. Synthesis and applications. Monitoring offers invaluable data to support conservation policy and management, but requires robust analysis approaches and guidance for new and expanding schemes. Based on our findings, we recommend the regional generalized additive model approach when conducting integrative analyses across schemes, or when analysing scheme data with reduced sampling efforts. This method enables existing schemes to be expanded or new schemes to be developed with reduced within-year sampling frequency, as well as affording options to adapt protocols to more efficiently assess species status and trends across large geographical scales.
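To make the two index-construction approaches concrete, the toy sketch below computes a seasonal abundance index from weekly counts with missing visits, once by linear interpolation and once with a spline-based Poisson GLM standing in for the regional GAM. The data are simulated and the spline smoother is only an analogue of the GAM used by the schemes; the parameter choices (df=6, roughly 40% missingness) are illustrative assumptions.

```python
# Toy sketch: annual abundance index from weekly butterfly counts with missing visits,
# via (a) linear interpolation and (b) a B-spline Poisson GLM as a GAM-like smoother.
# Simulated data only -- not BMS counts, and not the exact method of the paper.
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix, build_design_matrices

rng = np.random.default_rng(1)
weeks = np.arange(1, 27, dtype=float)                   # 26-week monitoring season
flight_curve = np.exp(-0.5 * ((weeks - 14) / 3.0) ** 2)
true_counts = rng.poisson(50 * flight_curve)

observed = true_counts.astype(float)
observed[rng.random(weeks.size) < 0.4] = np.nan         # ~40% of visits missed
seen = ~np.isnan(observed)

# (a) Linear interpolation across missing weeks, then sum over the season.
interp = np.interp(weeks, weeks[seen], observed[seen])
index_interp = interp.sum()

# (b) Poisson GLM with a B-spline basis on week (a simple GAM-like smoother).
X_train = dmatrix("bs(week, df=6, lower_bound=1, upper_bound=26)",
                  {"week": weeks[seen]})
fit = sm.GLM(observed[seen], X_train, family=sm.families.Poisson()).fit()
X_all = build_design_matrices([X_train.design_info], {"week": weeks})[0]
index_spline = fit.predict(np.asarray(X_all)).sum()

print(f"true seasonal total: {true_counts.sum()}")
print(f"interpolation index: {index_interp:.0f}, spline index: {index_spline:.0f}")
```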

Relevance:

30.00%

Publisher:

Abstract:

This paper performs a thorough statistical examination of the time-series properties of the daily market volatility index (VIX) from the Chicago Board Options Exchange (CBOE). The motivation lies not only in the widespread consensus that the VIX is a barometer of overall market sentiment as regards investors' risk appetite, but also in the fact that many trading strategies rely on the VIX index for hedging and speculative purposes. Preliminary analysis suggests that the VIX index displays long-range dependence. This is well in line with the strong empirical evidence in the literature supporting long memory in both options-implied and realized variances. We thus resort to both parametric and semiparametric heterogeneous autoregressive (HAR) processes for modeling and forecasting purposes. Our main findings are as follows. First, we confirm the evidence in the literature that there is a negative relationship between the VIX index and the S&P 500 index return, as well as a positive contemporaneous link with the volume of the S&P 500 index. Second, the term spread has a slightly negative long-run impact on the VIX index, when possible multicollinearity and endogeneity are controlled for. Finally, we cannot reject the linearity of the above relationships, neither in sample nor out of sample. As for the latter, we show that it is quite hard to beat the pure HAR process because of the very persistent nature of the VIX index.
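As an illustration of the HAR structure mentioned above, the sketch below regresses the VIX level on its lagged daily, weekly (5-day) and monthly (22-day) averages using OLS with HAC standard errors. The series is a simulated placeholder, not CBOE data, and the lag structure is the standard Corsi-style HAR specification rather than the exact parametric and semiparametric variants estimated in the paper.

```python
# Minimal HAR sketch: VIX_t = b0 + b1*VIX_{t-1} + b2*mean(VIX_{t-5..t-1}) + b3*mean(VIX_{t-22..t-1}) + e_t
# The series below is a simulated placeholder, not the CBOE VIX.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1500
vix = pd.Series(20 + np.cumsum(rng.normal(0, 0.3, n))).clip(lower=9)

df = pd.DataFrame({"vix": vix})
df["daily"] = df["vix"].shift(1)                        # one-day lag
df["weekly"] = df["vix"].shift(1).rolling(5).mean()     # lagged 5-day average
df["monthly"] = df["vix"].shift(1).rolling(22).mean()   # lagged 22-day average
df = df.dropna()

X = sm.add_constant(df[["daily", "weekly", "monthly"]])
har = sm.OLS(df["vix"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 22})
print(har.params)
```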

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a methodology for testing the term structure of volatility forecasts derived from stochastic volatility models, and implements it to analyze models of S&P 500 index volatility. Using measurements of the ability of volatility models to hedge and value term-structure-dependent option positions, we find that hedging tests support the Black-Scholes delta and gamma hedges, but not the simple vega hedge when there is no model of the term structure of volatility. With various models, it is difficult to improve on a simple gamma hedge assuming constant volatility. Of the volatility models, the GARCH components estimate of the term structure is preferred. Valuation tests indicate that all the models contain term structure information not incorporated in market prices.
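For context, the delta, gamma and vega hedge ratios referred to above are the standard Black-Scholes sensitivities; a minimal sketch for a European call (textbook formulas, with purely illustrative parameter values) follows.

```python
# Black-Scholes delta, gamma and vega for a European call (standard textbook formulas).
# Parameter values below are illustrative only.
from math import log, sqrt
from scipy.stats import norm

def bs_greeks(S, K, T, r, sigma):
    """Return (delta, gamma, vega) of a European call under Black-Scholes."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    delta = norm.cdf(d1)
    gamma = norm.pdf(d1) / (S * sigma * sqrt(T))
    vega = S * norm.pdf(d1) * sqrt(T)          # sensitivity per unit change in sigma
    return delta, gamma, vega

print(bs_greeks(S=100.0, K=100.0, T=0.5, r=0.03, sigma=0.2))
```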

Relevance:

30.00%

Publisher:

Abstract:

Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates, and interest rate changes, and the huge price risk borne by both metals producers and consumers. Thus, it is taken into account by all participants in metal markets, including metals producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both metals producers and consumers, so it increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years. Nowadays, they are widely used by financial institutions, corporations, professional investors, and individuals. This project is focused on the over-the-counter (OTC) market and its products such as exotic options, particularly Asian options. The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in the derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare the theoretical values of the options with their observed market values. Predicting future trends of copper prices is important and would be essential to manage market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous equation structural model (two-stage least squares regression) connecting supply and demand variables. A simultaneous econometric model for the copper industry is built:

\[
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, \mathit{GDP}_t^{1.7151}\, e^{0.0158\, \mathit{IP}_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{\mathit{OIL},t}^{-0.1559}\, \mathit{USDI}_t^{1.2432}\, \mathit{LIBOR}_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
\]

Equating demand and supply gives the reduced-form price equation

\[
P_{t-1}^{\mathit{CU}} = e^{-2.5165}\, \mathit{GDP}_t^{2.1910}\, e^{0.0202\, \mathit{IP}_t}\, T_t^{-0.1799}\, P_{\mathit{OIL},t}^{0.1991}\, \mathit{USDI}_t^{-1.5881}\, \mathit{LIBOR}_{t-6}^{0.0717},
\]

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity. In addition, industrial production should be considered here, so global industrial production growth, denoted IP_t, is included in the model.
T_t is the time variable, which is a useful proxy for technological change. A proxy variable for the cost of energy in producing copper is the price of oil at time t, denoted P_OIL,t. USDI_t is the U.S. dollar index variable at time t, which is an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 1-year London Interbank Offered Rate of interest, lagged six months. Although the model could be applied to other base metal industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model, and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will exceed specific option strike prices are determined. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated. Consequently, each risk management strategy in 2006 and 2007 is analyzed based on the data for the day and on the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
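A minimal sketch of the Monte Carlo step described above is given below: assumed distributions for the explanatory variables are propagated through the reduced-form price equation to estimate the probability that the monthly average copper price exceeds a strike. Every input distribution (and the strike) is an invented placeholder, not data from the study; only the exponents come from the equation above.

```python
# Sketch of the Monte Carlo step: propagate assumed input distributions through the
# reduced-form price equation and estimate P(price > strike).
# All input distributions and the strike are illustrative assumptions, not study data.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

gdp   = rng.normal(1.04, 0.01, n)              # world GDP index (hypothetical)
ip    = rng.normal(4.0, 1.0, n)                # industrial production growth, % (hypothetical)
t     = np.full(n, 50.0)                       # time trend (hypothetical)
p_oil = rng.lognormal(np.log(65.0), 0.15, n)   # oil price, USD/bbl (hypothetical)
usdi  = rng.normal(0.85, 0.03, n)              # U.S. dollar index (hypothetical)
libor = rng.normal(5.0, 0.5, n)                # lagged 1-year LIBOR, % (hypothetical)

price = (np.exp(-2.5165) * gdp**2.1910 * np.exp(0.0202 * ip)
         * t**(-0.1799) * p_oil**0.1991 * usdi**(-1.5881) * libor**0.0717)

strike = price.mean()                          # illustrative strike at the simulated mean
print(f"P(price > strike) = {(price > strike).mean():.2f}")
```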

Relevance:

30.00%

Publisher:

Abstract:

The use of biofuels in the aviation sector has economic and environmental benefits. Among the options for the production of renewable jet fuels, hydroprocessed esters and fatty acids (HEFA) have received predominant attention in comparison with fatty acid methyl esters (FAME), which are not approved as additives for jet fuels. However, the presence of oxygen in methyl esters tends to reduce soot emissions and therefore particulate matter emissions. This sooting tendency is quantified in this work with an oxygen-extended sooting index, based on smoke point measurements. Results have shown a considerable reduction in the sooting tendency for all biokerosenes (produced by transesterification and, eventually, distillation) with respect to fossil kerosenes. Among the tested biokerosenes, the one made from palm kernel oil was the most effective, and non-distilled methyl esters (from camelina and linseed oils) were less effective than distilled biokerosenes at reducing the sooting tendency. These results may constitute an additional argument for the use of FAMEs as blend components of jet fuels. Other arguments were pointed out in previous publications, but some controversy has arisen over the use of these components. Some of the criticism was based on the fact that the methods used in our previous work are not approved for jet fuels in the standard methods, concluding that the use of FAME in any amount is thus inappropriate. However, some of the standard methods are not updated to take oxygenated components into account (like the method for obtaining the lower heating value), and others are not precise enough (like the methods for measuring the freezing point), whereas some alternative methods may provide better reproducibility for oxygenated fuels.
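For background, oxygen-extended sooting indices of this kind normalize the measured smoke point by a measure of the fuel's stoichiometric oxygen demand; one formulation proposed in the literature for a fuel of formula C_nH_mO_p (stated as an assumption about the general form, not quoted from this paper) is

\[ \mathrm{OESI} = a\,\frac{n + m/4 - p/2}{SP} + b, \]

where SP is the smoke point and a and b are apparatus-dependent calibration constants obtained from reference fuels.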

Relevance:

30.00%

Publisher:

Abstract:

This paper examines execution costs and the impact of trade size for stock index futures using price-volume transaction data from the London International Financial Futures and Options Exchange. Consistent with Subrahmanyam [Rev. Financ. Stud. 4 (1991) 11] we find that effective half spreads in the stock index futures market are small compared to stock markets, and that trades in stock index futures have only a small permanent price impact. This result is important as it helps to better understand the success of equity index products such as index futures and Exchange Traded Funds. We also find that there is no asymmetry in the post-trade price reaction between purchases and sales for stock index futures across various trade sizes. This result is consistent with the conjecture in Chan and Lakonishok [J. Financ. Econ. 33 (1993) 173] that the asymmetry surrounding block trades in stock markets is due to the high cost of short selling and the general reluctance of traders to short sell on stock markets.
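As a small illustration of the execution-cost measure discussed above, the sketch below computes effective half spreads as the signed deviation of the trade price from the prevailing quote midpoint. The numbers are invented, not LIFFE transaction records, and the trade-direction flags are assumed to be known.

```python
# Toy sketch of the effective half-spread: signed deviation of the trade price from the
# prevailing quote midpoint, scaled by the midpoint. Data below are invented values.
import numpy as np

midpoint    = np.array([4500.0, 4500.5, 4501.0, 4500.5])   # prevailing quote midpoints
trade_price = np.array([4500.5, 4500.0, 4501.5, 4500.5])   # transaction prices
direction   = np.array([+1, -1, +1, -1])                   # +1 buyer-initiated, -1 seller-initiated

effective_half_spread = direction * (trade_price - midpoint) / midpoint
print(f"mean effective half-spread: {effective_half_spread.mean() * 1e4:.2f} bp")
```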