34 results for Modeling technique
Abstract:
Active mobilization after flexor tendon repair in the finger has been shown to lead to a better functional outcome than the dynamic mobilization in common use today. The problem with active mobilization is an increased risk of repair rupture, owing to the insufficient strength of current suture techniques. The strength of tendon repair has been improved by developing multistrand suture techniques, in which several parallel core sutures are placed in the tendon. Their clinical use is limited, however, by their complicated and time-consuming technical execution. Non-absorbable suture materials are generally used in flexor tendon repair of the hand, as the bioabsorbable sutures currently available lose their strength too quickly relative to tendon healing. The tensile-strength half-life and tissue properties of bioabsorbable lactide stereocopolymer (PLDLA) 96/4 suture have previously been found suitable for flexor tendon repair. The aim of this study was to develop a flexor tendon repair method for the hand that withstands immediate active mobilization and is simple to perform, using bioabsorbable PLDLA 96/4 material. The biomechanical properties of five commonly used flexor tendon sutures were analyzed in static tensile testing to determine the effect of three structural features of the core suture, namely (1) the number of strands, (2) suture caliber, and (3) suture configuration, on the failure and strength of the tendon repair. Visible gapping of the repairs was found to begin with failure of the peripheral suture at the yield point of the force-elongation curve. Increasing the number of core suture strands improved the holding capacity of the suture in the tendon and increased the yield force of the repair. In contrast, using a thicker (stronger) suture or changing the suture configuration did not affect the yield force.
Based on these results, the possibility of increasing the holding capacity of the suture in the tendon with a simple multistrand suture was studied, in which the core suture was made either with a three-strand polyester suture or with a three-strand polyester suture of tape-like structure. The tape-like structure significantly increased the holding capacity of the suture in the tendon, improving both the yield force and the maximum force. The strength of the repair exceeded the load imposed on a tendon repair by active mobilization. The suitability of PLDLA 96/4 suture for flexor tendon repair was assessed by studying the biomechanical and knot-holding properties of the suture in static tensile testing, compared with the braided polyester suture most commonly used in tendon repair (Ticron®). PLDLA suture was found well suited to flexor tendon repair, as it elongates less than polyester suture and has better knot security. In the final phase, the durability of a tendon repair made with a three-strand, tape-like tendon repair device manufactured from PLDLA 96/4 suture was studied in static tensile testing and in cyclic loading, which simulates the repetitive loading of mobilization better than static testing does. The strength of the PLDLA repair exceeded the strength required for active mobilization in both static and cyclic loading. Tape-like flat suture material has not previously been studied or used in flexor tendon repair of the hand. In this study, the tape-like structure of the suture material significantly improved the strength of the tendon repair, which is presumed to result from the increased contact area between the tendon and the suture material, preventing the suture from cutting through the tendon. The strength of a tendon repair made with a three-strand, tape-like suture manufactured from bioabsorbable PLDLA material reached the level required for active mobilization.
In addition, the new method is easy to use, and it avoids the problems associated with the technical execution of conventional multistrand sutures.
Abstract:
Solar UV radiation is harmful to life on planet Earth, but fortunately atmospheric oxygen and ozone absorb almost all of the most energetic UVC photons. However, part of the UVB radiation and much of the UVA radiation reach the surface of the Earth, where they affect human health, the environment, and materials, and drive atmospheric and aquatic photochemical processes. In order to quantify these effects and processes, ground-based UV measurements and radiative transfer modeling are needed to estimate the amounts of UV radiation reaching the biosphere. Satellite measurements, with their near-global spatial coverage and long-term data continuity, offer an attractive option for estimating the surface UV radiation. This work focuses on methods based on radiative transfer theory that are used for estimating the UV radiation reaching the surface of the Earth. The objectives of the thesis were to implement the surface UV algorithm, originally developed at NASA Goddard Space Flight Center, for estimating the surface UV irradiance from the measurements of the Dutch-Finnish-built Ozone Monitoring Instrument (OMI); to improve the original surface UV algorithm, especially in relation to snow cover; to validate the OMI-derived daily surface UV doses against ground-based measurements; and to demonstrate how satellite-derived surface UV data can be used to study the effects of UV radiation. The thesis consists of seven original papers and a summary. The summary includes an introduction to the OMI instrument, a review of the methods used for modeling the surface UV using satellite data, and the conclusions of the main results of the original papers. The first two papers describe the algorithm used for estimating surface UV amounts from the OMI measurements, as well as the unique Very Fast Delivery processing system developed for processing the OMI data received at the Sodankylä satellite data centre.
The third and fourth papers present algorithm improvements related to the surface UV albedo of snow-covered land. The fifth paper presents a comparison of the OMI-derived daily erythemal doses with those calculated from ground-based measurement data. It gives an estimate of the expected accuracy of the OMI-derived surface UV doses for various atmospheric and other conditions, and discusses the causes of the differences between the satellite-derived and ground-based data. The last two papers demonstrate the use of satellite-derived surface UV data. The sixth paper presents an assessment of photochemical decomposition rates in the aquatic environment. The seventh paper presents the use of satellite-derived daily surface UV doses for planning outdoor material weathering tests.
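The erythemal doses compared above are obtained by weighting the measured spectral irradiance with the erythema action spectrum before integrating over wavelength. A minimal sketch, assuming the piecewise CIE (1998) erythema reference action spectrum; the two-point spectrum in the test is invented, and the actual OMI processing is far more involved:

```python
def erythema_weight(wavelength_nm):
    """CIE (1998) erythema action spectrum weight for a wavelength in nm."""
    if wavelength_nm <= 298.0:
        return 1.0
    elif wavelength_nm <= 328.0:
        return 10.0 ** (0.094 * (298.0 - wavelength_nm))
    elif wavelength_nm <= 400.0:
        return 10.0 ** (0.015 * (140.0 - wavelength_nm))
    return 0.0

def erythemal_dose_rate(wavelengths_nm, irradiance_w_m2_nm):
    """Trapezoidal integral of the weighted spectral irradiance (W/m^2)."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        w0 = erythema_weight(wavelengths_nm[i]) * irradiance_w_m2_nm[i]
        w1 = erythema_weight(wavelengths_nm[i + 1]) * irradiance_w_m2_nm[i + 1]
        total += 0.5 * (w0 + w1) * (wavelengths_nm[i + 1] - wavelengths_nm[i])
    return total
```

For a real measurement the integral would run over the full 280-400 nm range at the instrument's wavelength resolution, and a daily dose would further integrate this rate over the day.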
Abstract:
The main method of modifying the properties of semiconductors is to introduce small amounts of impurities into the material. This is used to control the magnetic and optical properties of materials and to realize p- and n-type semiconductors from intrinsic material in order to manufacture fundamental components such as diodes. As diffusion can be described as random mixing of material due to the thermal movement of atoms, knowing the diffusion behavior of the impurities is essential for manufacturing working components. In the modified radiotracer technique, diffusion is studied using radioactive isotopes of elements as tracers. The technique is called modified because the atoms are introduced into the material by ion beam implantation. With ion implantation, a well-defined distribution of impurities can be placed beneath the sample surface with good control over the amount of implanted atoms. As the electromagnetic radiation and other nuclear decay products emitted by radioactive materials are easily detected, only very small amounts of impurities are needed. This makes it possible to study diffusion in pure materials without essentially modifying their initial properties by doping. In this thesis, the modified radiotracer technique is used to study the diffusion of beryllium in GaN, ZnO, SiGe and glassy carbon. GaN, ZnO and SiGe are of great interest to the semiconductor industry, and beryllium, as a small and possibly fast-diffusing dopant, has not previously been studied with this technique. Glassy carbon was added to demonstrate the feasibility of the technique. In addition, the diffusion of the magnetic impurities Mn and Co has been studied in GaAs and ZnO, respectively, with spintronic applications in mind.
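The depth-profile analysis underlying such radiotracer experiments can be sketched as follows: an ion-implanted tracer layer is approximately Gaussian in depth, and simple Fickian diffusion during an anneal of duration t with diffusion coefficient D broadens the profile variance by 2Dt, so D can be recovered from the measured widths before and after annealing. The functions and numbers below are an illustrative sketch under that simplest-case assumption, not the thesis's actual analysis (real profiles often require trapping and boundary effects):

```python
import math

def implanted_profile(depth_nm, range_nm, straggle_nm):
    """Gaussian as-implanted concentration profile (arbitrary units)."""
    return math.exp(-0.5 * ((depth_nm - range_nm) / straggle_nm) ** 2)

def broadened_straggle(straggle_nm, d_nm2_per_s, anneal_s):
    """Profile width after annealing: sigma'^2 = sigma^2 + 2*D*t."""
    return math.sqrt(straggle_nm ** 2 + 2.0 * d_nm2_per_s * anneal_s)

def extract_diffusivity(sigma_before_nm, sigma_after_nm, anneal_s):
    """Invert the broadening relation to recover D from measured widths."""
    return (sigma_after_nm ** 2 - sigma_before_nm ** 2) / (2.0 * anneal_s)
```

In practice the measured activity-versus-depth data would be fitted with the broadened Gaussian, and D extracted from the fitted widths.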
Abstract:
Hantaviruses, members of the genus Hantavirus in the family Bunyaviridae, are enveloped single-stranded RNA viruses with a tri-segmented genome of negative polarity. In humans, hantaviruses cause two diseases, hemorrhagic fever with renal syndrome (HFRS) and hantavirus pulmonary syndrome (HPS), which vary in severity depending on the causative agent. Each hantavirus is carried by a specific rodent host and is transmitted to humans through the excreta of infected rodents. The genome of hantaviruses encodes four structural proteins, the nucleocapsid protein (N), the glycoproteins (Gn and Gc), and the polymerase (L), as well as the nonstructural protein (NSs). This thesis deals with the functional characterization of the hantavirus N protein in relation to its structure. Structural studies of the N protein have progressed slowly, and the crystal structure of the whole protein is still not available; therefore, biochemical assays coupled with bioinformatic modeling proved essential for studying the structure and functions of the N protein. Presumably, during RNA encapsidation, the N protein first forms intermediate trimers and then oligomers. First, we investigated the role of the N-terminal domain in N protein oligomerization. The results suggested that the N-terminal region of the N protein forms a coiled-coil, in which two antiparallel alpha helices interact via their hydrophobic seams. Hydrophobic residues L4, I11, L18, L25 and V32 in the first helix and L44, V51, L58 and L65 in the second helix were crucial for stabilizing the structure. The results were consistent with the head-to-head, tail-to-tail model of hantavirus N protein trimerization. We demonstrated that an intact coiled-coil structure of the N terminus is crucial for the oligomerization capacity of the N protein. We also added new details to the head-to-head, tail-to-tail model of trimerization by suggesting that the initial step is based on interaction(s) between the intact intramolecular coiled-coils of the monomers.
We further analyzed the importance of the charged aa residues located within the coiled-coil for N protein oligomerization. To predict the interacting surfaces of the monomers, we used an upgraded in silico model of the coiled-coil domain that was docked into a trimer. Next, the predicted target residues were mutated. The results obtained using the mammalian two-hybrid assay suggested that conserved charged aa residues within the coiled-coil make a substantial contribution to N protein oligomerization. This contribution probably involves the formation of the interacting surfaces of the N monomers as well as stabilization of the coiled-coil via intramolecular ionic bridging. We proposed that the tips of the coiled-coils are the first to come into direct contact and thus initiate the tight packing of the three monomers into a compact structure. This was in agreement with previous results showing that an increase in ionic strength abolishes the interaction between N protein molecules. We also showed that the residues having the strongest effect on N protein oligomerization are not scattered randomly throughout the coiled-coil 3D model structure, but form clusters. Next, we found evidence for an interaction between the hantaviral N protein and the cytoplasmic tail of the glycoprotein Gn. To study this interaction, we used the GST pull-down assay in combination with mutagenesis. The results demonstrated that the intact, properly folded zinc fingers of the Gn protein cytoplasmic tail, as well as the middle domain of the N protein (which includes aa residues 80–248 and supposedly carries the RNA-binding domain), are essential for the interaction. Since hantaviruses lack the matrix protein that mediates packaging of the viral RNA in other negative-strand RNA viruses (NSRVs), hantaviral RNPs should interact directly with the intraviral domains of the envelope-embedded glycoproteins.
By showing the N-Gn interaction, we provided evidence for one of the crucial steps in virus replication, at which RNPs are directed to the site of virus assembly. Finally, we began the analysis of the RNA-binding region of the N protein, which is supposedly located in the middle domain of the N protein molecule. We developed a model for the initial step of RNA binding by the hantaviral N protein. We hypothesized that the hantaviral N protein possesses two secondary structure elements that initiate RNA encapsidation. The results suggest that amino acid residues 172–176 presumably act as a hook to catch vRNA and that the positively charged interaction surface (aa residues 144–160) enhances the initial N-RNA interaction. In conclusion, we elucidated new functions of the hantavirus N protein. Using in silico modeling, we predicted the domain structure of the protein and, using experimental techniques, showed that each domain is responsible for executing certain function(s). We showed that the intact N-terminal coiled-coil domain is crucial for oligomerization and that the charged residues located on its surface form an interaction surface for the N monomers. The middle domain is essential for the interaction with the cytoplasmic tail of the Gn protein and for RNA binding.
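The stabilizing hydrophobic residues quoted above (L4, I11, L18, L25, V32 and L44, V51, L58, L65) fall exactly seven residues apart, the heptad repeat characteristic of a coiled-coil hydrophobic seam; a quick check:

```python
# Residue numbers taken from the abstract; within each helix the seam
# residues sit seven positions apart, i.e. they occupy the same slot
# of the 7-residue heptad repeat of a coiled-coil.

helix1 = [4, 11, 18, 25, 32]   # L4, I11, L18, L25, V32
helix2 = [44, 51, 58, 65]      # L44, V51, L58, L65

def heptad_spacings(positions):
    """Residue-number differences between consecutive seam residues."""
    return [b - a for a, b in zip(positions, positions[1:])]

def on_same_heptad_position(positions):
    """True if all residues share one position in the 7-residue repeat."""
    return len({p % 7 for p in positions}) == 1
```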
Abstract:
We present a measurement of the $WW+WZ$ production cross section observed in a final state consisting of an identified electron or muon, two jets, and missing transverse energy. The measurement is carried out in a data sample corresponding to up to 4.6~fb$^{-1}$ of integrated luminosity at $\sqrt{s} = 1.96$ TeV collected by the CDF II detector. Matrix element calculations are used to separate the diboson signal from the large backgrounds. The $WW+WZ$ cross section is measured to be $17.4\pm3.3$~pb, in agreement with standard model predictions. A fit to the dijet invariant mass spectrum yields a compatible cross section measurement.
Abstract:
The 11β-hydroxysteroid dehydrogenase enzymes (11β-HSD) 1 and 2 regulate the amounts of cortisone and cortisol in tissues. An excess of 11β-HSD1, particularly in visceral adipose tissue, causes the classic symptoms of the metabolic syndrome, which offers the possibility of treating the metabolic syndrome through selective inhibition of 11β-HSD1. Inhibition of 11β-HSD2 causes cortisol-mediated activation of mineralocorticoid receptors, which in turn leads to hypertensive adverse effects. Despite these adverse effects, inhibition of 11β-HSD2 may be useful in situations where an increase in the body's cortisol level is desired. Numerous selective 11β-HSD1 inhibitors have been developed, but fewer 11β-HSD2 inhibitors have been reported. Moreover, the difference between the active sites of the two isozymes is unknown, which complicates the development of inhibitors selective for either enzyme. This work had two aims: (1) to find a difference between the 11β-HSD enzymes and (2) to develop a pharmacophore model that could be used for virtual screening of selective 11β-HSD2 inhibitors. The problem was approached computationally: through homology modeling, docking of small molecules into the protein, ligand-based pharmacophore modeling, and virtual screening. SwissModeler was used for the homology modeling, and the resulting model superimposed well both on its template (17β-HSD1) and on 11β-HSD1. No difference between the enzymes was found by inspecting the superimposed structures. Seven compounds, six of them 11β-HSD2-selective, were docked into both enzymes using GOLD. The compounds bound to 11β-HSD1 in the same way as most 11β-HSD1-selective and non-selective inhibitors, whereas all of the compounds docked into 11β-HSD2 in a reversed orientation.
Such a binding mode enables hydrogen bonds to Ser310 and Asn171, amino acids seen only in 11β-HSD2. LigandScout 3.0 was used for the pharmacophore modeling and also for running the virtual screens. The two pharmacophore models created, based on the six 11β-HSD2-selective compounds previously used in docking, consisted of six features (hydrogen bond acceptor, hydrogen bond donor, and hydrophobic) plus exclusion volumes. The features most important for 11β-HSD2 selectivity are a hydrogen bond acceptor that can form a bond with Ser310 and a hydrogen bond donor next to it. No interaction partner for this hydrogen bond donor was found in the 11β-HSD2 model; a suitably oriented water molecule within the protein could, however, provide the missing interaction partner. Because both pharmacophore models retrieved 11β-HSD2-selective compounds and excluded non-selective ones in a test screen, both models were used to screen a database of 2700 compounds held at the University of Innsbruck. Of the hits from the two screens, a total of ten compounds were selected and sent for biological testing. The results of the biological tests will ultimately confirm how well the created models represent true 11β-HSD2 selectivity.
Abstract:
We present a measurement of the top quark mass in the all-hadronic channel ($t\bar{t} \to b\bar{b}\,q_{1}\bar{q}_{2}q_{3}\bar{q}_{4}$) using 943 pb$^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV collected with the CDF II detector at Fermilab. We apply the standard model production and decay matrix element (ME) to $t\bar{t}$ candidate events. We calculate per-event probability densities according to the ME calculation and construct template models of signal and background. The scale of the jet energy is calibrated using additional templates formed with the invariant mass of pairs of jets. These templates form an overall likelihood function that depends on the top quark mass and on the jet energy scale (JES). We estimate both by maximizing this function. Given 72 observed events, we measure a top quark mass of 171.1 $\pm$ 3.7 (stat.+JES) $\pm$ 2.1 (syst.) GeV/$c^{2}$. The combined uncertainty on the top quark mass is 4.3 GeV/$c^{2}$.
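The template-likelihood idea can be illustrated with a deliberately simplified toy: Gaussian per-event densities whose mean tracks the hypothesized mass, and a one-dimensional grid scan for the maximum. The events, resolution, and grid below are invented; the real analysis uses full matrix-element probabilities and fits the JES simultaneously.

```python
import math

def log_likelihood(events, mass, resolution):
    """Sum of per-event Gaussian log-densities for a hypothesized mass."""
    norm = -0.5 * math.log(2.0 * math.pi * resolution ** 2)
    return sum(norm - 0.5 * ((e - mass) / resolution) ** 2 for e in events)

def fit_mass(events, masses, resolution=15.0):
    """Grid scan: return the hypothesized mass with the highest likelihood."""
    return max(masses, key=lambda m: log_likelihood(events, m, resolution))

# Made-up reconstructed masses (GeV), symmetric around 171 for this toy
events = [171.0 + d for d in (-20.0, -10.0, -5.0, 0.0, 5.0, 10.0, 20.0)]
grid = [160.0 + 0.5 * i for i in range(41)]   # hypotheses from 160 to 180 GeV
estimate = fit_mass(events, grid)             # 171.0 for this toy data
```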
Abstract:
One of the most fundamental and widely accepted ideas in finance is that investors are compensated through higher returns for taking on non-diversifiable risk. Hence the quantification, modeling and prediction of risk have been, and still are, among the most prolific research areas in financial economics. It was recognized early on that there are predictable patterns in the variance of speculative prices. Later research has shown that there may also be systematic variation in the skewness and kurtosis of financial returns. Lacking in the literature so far is an out-of-sample forecast evaluation of the potential benefits of these new, more complicated models with time-varying higher moments. Such an evaluation is the topic of this dissertation. Essay 1 investigates the forecast performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 Index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of variance from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. In Essay 2, using 20 years of daily Standard and Poor's 500 index returns, it is found that density forecasts are much improved by allowing for constant excess kurtosis but not by allowing for skewness. Allowing the kurtosis and skewness to be time-varying does not further improve the density forecasts but on the contrary makes them slightly worse. In Essay 3, a new model incorporating conditional variance, skewness and kurtosis based on the Normal Inverse Gaussian (NIG) distribution is proposed.
The new model and two previously used NIG models are evaluated by their Value at Risk (VaR) forecasts on a long series of daily Standard and Poor's 500 returns. The results show that only the new model produces satisfactory VaR forecasts at both the 1% and 5% levels. Taken together, the results of the thesis show that kurtosis appears not to exhibit predictable time variation, whereas some predictability is found in the skewness. However, the dynamic properties of the skewness are not completely captured by any of the models.
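The GARCH(1,1) recursion evaluated in Essay 1, and the mean-reverting multi-step variance forecast it implies, can be sketched in a few lines. The parameter values below are illustrative only, not estimates from the S&P 500 futures data:

```python
def garch_filter(returns, omega, alpha, beta):
    """Run the GARCH(1,1) recursion; return the conditional variances."""
    long_run = omega / (1.0 - alpha - beta)   # unconditional variance
    variances = [long_run]                    # initialize at the long run
    for r in returns:
        variances.append(omega + alpha * r * r + beta * variances[-1])
    return variances

def garch_forecast(last_variance, omega, alpha, beta, horizon):
    """E[sigma^2 at t+h]: the forecast decays toward the long-run variance
    at rate (alpha + beta) per step."""
    long_run = omega / (1.0 - alpha - beta)
    persistence = (alpha + beta) ** (horizon - 1)
    return long_run + persistence * (last_variance - long_run)
```

Essay 1's variations keep this variance recursion and change only the assumed error distribution (normal versus leptokurtic or skewed alternatives) used in estimation.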
Abstract:
Modeling and forecasting of implied volatility (IV) is important to both practitioners and academics, especially in trading, pricing, hedging, and risk management activities, all of which require an accurate volatility. However, this has become challenging since the 1987 stock market crash, as implied volatilities (IVs) recovered from stock index options exhibit two patterns, the volatility smirk (skew) and the volatility term structure, which, when examined together, form a rich implied volatility surface (IVS). This implies that the assumptions behind the Black-Scholes (1973) model do not hold empirically, as asset prices are influenced by many underlying risk factors. This thesis, consisting of four essays, models and forecasts implied volatility in the presence of these empirical regularities of the options markets. The first essay models the dynamics of the IVS, extending the Dumas, Fleming and Whaley (DFW) (1998) framework: using moneyness in the implied forward price and OTM put-call options on the FTSE 100 index, nonlinear optimization is used to estimate different models and thereby produce rich, smooth IVSs. Here, the constant-volatility model fails to explain the variation in the rich IVS. Next, it is found that three factors can explain about 69-88% of the variance in the IVS. Of this, on average, 56% is explained by the level factor, 15% by the term-structure factor, and a further 7% by the jump-fear factor. The second essay proposes a quantile regression model for the contemporaneous asymmetric return-volatility relationship, generalizing the model of Hibbert et al. (2008). The results show a strong negative asymmetric return-volatility relationship at various quantiles of the IV distributions, which increases monotonically when moving from the median quantile to the uppermost quantile (i.e., 95%); OLS therefore underestimates this relationship at the upper quantiles.
Additionally, the asymmetric relationship is more pronounced with the smirk (skew) adjusted volatility index measure than with the old volatility index measure. The volatility indices rank in terms of asymmetric volatility as follows: VIX, VSTOXX, VDAX, and VXN. The third essay examines the information content of the new VDAX volatility index for forecasting daily Value-at-Risk (VaR) estimates and compares its VaR forecasts with those of Filtered Historical Simulation and RiskMetrics. All daily VaR models are backtested over 1992-2009 using unconditional coverage, independence, conditional coverage, and quadratic-score tests. It is found that the VDAX subsumes almost all the information required for daily VaR forecasts for a portfolio of the DAX30 index; the implied-VaR models outperform all other VaR models. The fourth essay models the risk factors driving swaption IVs. It is found that three factors can explain 94-97% of the variation in each of the EUR, USD, and GBP swaption IVs. There are significant linkages across factors, and bi-directional causality is at work between the factors implied by EUR and USD swaption IVs. Furthermore, the factors implied by EUR and USD IVs respond to each other's shocks; surprisingly, however, GBP does not affect them. Finally, the string market model calibration results show that it can efficiently reproduce (or forecast) the volatility surface for each of the swaption markets.
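Everything in this abstract starts from implied volatilities recovered from option prices. Because the Black-Scholes call price is monotonically increasing in volatility, a simple bisection inverts it reliably; the inputs in the usage test are illustrative:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, rate, sigma, maturity):
    """Black-Scholes (1973) European call price."""
    sqrt_t = math.sqrt(maturity)
    d1 = (math.log(spot / strike) + (rate + 0.5 * sigma ** 2) * maturity) / (sigma * sqrt_t)
    d2 = d1 - sigma * sqrt_t
    return spot * norm_cdf(d1) - strike * math.exp(-rate * maturity) * norm_cdf(d2)

def implied_vol(price, spot, strike, rate, maturity, lo=1e-6, hi=5.0, tol=1e-8):
    """Bisection on sigma, valid because the call price is monotone in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, rate, mid, maturity) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Repeating this inversion across strikes and maturities yields the smirk, the term structure, and hence the IVS studied in the essays.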
Abstract:
Financial time series tend to behave in a manner that is not directly drawn from a normal distribution. Asymmetries and nonlinearities are usually observed, and these characteristics need to be taken into account. Making forecasts and predictions of future return and risk is rather complicated. The existing models for predicting risk help to a certain degree, but the complexity of financial time series data makes this difficult. The introduction of nonlinearities and asymmetries for the purpose of better models and forecasts regarding both the mean and the variance is supported by the essays in this dissertation, in which both linear and nonlinear models are introduced. The advantage of nonlinear models is that they can take asymmetries into account. Asymmetric patterns usually mean that large negative returns appear more often than positive returns of the same magnitude. This goes hand in hand with the fact that negative returns are associated with higher risk than positive returns of the same magnitude. These models are important because of their ability to produce the best possible estimates and predictions of future returns and risk.
Abstract:
The objective of this paper is to investigate and model the characteristics of the prevailing volatility smiles and surfaces on the DAX and ESX index options markets. Continuing the line of Implied Volatility Function research, the Standardized Log-Moneyness model is introduced and fitted to historical data. The model replaces the constant volatility parameter of the Black & Scholes pricing model with a matrix of volatilities with respect to moneyness and maturity, and is tested out-of-sample. Considering the dynamics, the results support the hypotheses put forward in this study, implying that the smile increases in magnitude when maturity and ATM volatility decrease, and that there is a negative/positive correlation between a change in the underlying asset/time to maturity and implied ATM volatility. Further, the Standardized Log-Moneyness model shows improved pricing accuracy compared to previous Implied Volatility Function models, although the parameters of the models need to be re-estimated continuously for the models to fully capture the changing dynamics of the volatility smiles.
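An Implied Volatility Function model in its simplest form fits implied volatility as a low-order polynomial of (log-)moneyness by ordinary least squares. The sketch below fits a quadratic smile, sigma(m) = a + b*m + c*m^2, via the normal equations; it is a toy version only, since the thesis model additionally standardizes moneyness and adds the maturity dimension, and the data in the test are invented:

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_smile(moneyness, vols):
    """OLS fit of sigma = a + b*m + c*m^2 via the normal equations."""
    s = [sum(m ** k for m in moneyness) for k in range(5)]   # moment sums
    t = [sum(v * m ** k for m, v in zip(moneyness, vols)) for k in range(3)]
    ata = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    return solve3(ata, t)                                    # [a, b, c]
```

In a full IVF implementation, the fitted coefficient surface would be re-estimated continuously, as the paper notes, and plugged back into the pricing model in place of a constant volatility.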