938 results for implied volatility function models


Relevance:

30.00%

Publisher:

Abstract:

This Ph.D. thesis contains four essays in mathematical finance, focusing on pricing Asian options (Chapter 4), pricing futures and futures options (Chapters 5 and 6), and time-dependent volatility in futures options (Chapter 7). In Chapter 4, the applicability of the comonotonicity approach of Albrecher et al. (2005) is investigated in the context of various benchmark models for equities and commodities. Instead of the classical Levy models of Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model, and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for the prices of Asian options in these models and, as a by-product, yields super-hedging strategies that can be easily implemented. In Chapter 5, two types of three-factor models that allow volatility to be stochastic are studied for valuing commodity futures contracts. Both models have closed-form solutions for futures prices. It is shown that Model 2 is theoretically superior to Model 1 and also performs very well empirically; moreover, Model 2 can easily be implemented in practice. In comparison with the Schwartz (1997) two-factor model, Model 2 has its own advantages, making it a good choice for pricing commodity futures contracts; furthermore, using the two models together yields a more accurate price for commodity futures contracts in most situations. In Chapter 6, the applicability of the asymptotic approach developed in Fouque et al. (2000b) is investigated for pricing commodity futures options in a Schwartz (1997) multi-factor model featuring both stochastic convenience yield and stochastic volatility. It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. With empirical data from the natural gas futures market, it is also demonstrated that using the correction term achieves a significantly better calibration than the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The formula can also be inverted to recover the time-dependent spot volatility from futures option prices observed in the market. Furthermore, the limitations of the method used to recover the time-dependent spot volatility are explained, and it is shown how to verify its accuracy.
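
As context for the upper bounds in Chapter 4, a plain Monte Carlo estimate is the usual benchmark against which such bounds are judged. The Python sketch below prices an arithmetic-average Asian call under the Heston model with a full-truncation Euler scheme; all parameter values are illustrative assumptions, not the thesis's calibration, and this is not the comonotonicity method itself.

```python
# Monte Carlo benchmark for an arithmetic-average Asian call under Heston.
# Illustrative parameters only; not the thesis's method or calibration.
import numpy as np

def asian_call_heston_mc(s0=100.0, v0=0.04, kappa=2.0, theta=0.04, xi=0.3,
                         rho=-0.7, r=0.05, strike=100.0, maturity=1.0,
                         n_steps=252, n_paths=50_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    running_sum = np.zeros(n_paths)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)  # full-truncation Euler scheme
        s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
        running_sum += s
    payoff = np.maximum(running_sum / n_steps - strike, 0.0)
    return np.exp(-r * maturity) * payoff.mean()

print(asian_call_heston_mc())
```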

Relevance:

30.00%

Publisher:

Abstract:

Water regimes in the Brazilian Cerrados are sensitive to climatological disturbances and human intervention. The risk that critical water-table levels are exceeded over long periods of time can be estimated by applying stochastic methods to model the dynamic relationship between water levels and driving forces such as precipitation and evapotranspiration. In this study, a transfer function-noise model, the so-called PIRFICT model, is applied to estimate the dynamic relationship between water-table depth and precipitation surplus/deficit in a watershed with a groundwater monitoring scheme in the Brazilian Cerrados. Critical limits were defined for a period in the Cerrados agricultural calendar, the end of the rainy season, when extremely shallow levels (< 0.5-m depth) can pose a risk to plant health and machinery before harvesting. By simulating from the time-series models, the risk of exceeding critical thresholds during a continuous period of time (e.g. 10 days) is expressed as a probability. These simulated probabilities were interpolated spatially using universal kriging, incorporating information about the drainage basin from a digital elevation model, which reduced the uncertainty of the resulting map. Three areas were identified as presenting potential risk at the end of the rainy season; these areas deserve attention in water management and land-use planning.
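
The risk calculation can be illustrated with a small simulation: a discrete impulse response maps daily precipitation surplus to water-table depth, an AR(1) term plays the role of the noise model, and Monte Carlo replicates estimate the probability of staying shallower than 0.5 m for at least 10 consecutive days. The exponential impulse response and every parameter below are illustrative assumptions, not the fitted PIRFICT model.

```python
# Toy transfer function-noise simulation of water-table exceedance risk.
import numpy as np

rng = np.random.default_rng(42)
n_days, n_sims = 120, 2000            # end-of-rainy-season window, replicates
t = np.arange(n_days)
irf = 0.005 * np.exp(-t / 30.0)       # assumed impulse response (m rise per mm)
critical_depth, run_length = 0.5, 10  # critical shallow level (m) and duration

hits = 0
for _ in range(n_sims):
    surplus = rng.normal(3.0, 8.0, n_days)             # daily surplus (mm), assumed
    depth = 1.0 - np.convolve(surplus, irf)[:n_days]   # depth below surface (m)
    noise = np.zeros(n_days)                           # AR(1) noise component
    for i in range(1, n_days):
        noise[i] = 0.95 * noise[i - 1] + rng.normal(0.0, 0.02)
    depth += noise
    shallow = depth < critical_depth
    longest = current = 0
    for flag in shallow:                # longest run of consecutive shallow days
        current = current + 1 if flag else 0
        longest = max(longest, current)
    hits += longest >= run_length

print(f"P(shallower than {critical_depth} m for >= {run_length} days) ~ {hits / n_sims:.3f}")
```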

Relevance:

30.00%

Publisher:

Abstract:

The presence of gap junction coupling among neurons of the central nervous system has been appreciated for some time now. In recent years there has been an upsurge of interest from the mathematical community in understanding the contribution of these direct electrical connections between cells to large-scale brain rhythms. Here we analyze a class of exactly soluble single-neuron models, capable of producing realistic action potential shapes, that can be used as the basis for understanding dynamics at the network level. This work focuses on planar piecewise linear models that can mimic the firing response of several different cell types. Under constant current injection, the periodic response and phase response curve (PRC) are calculated in closed form. A simple formula for the stability of a periodic orbit is found using Floquet theory. From the calculated PRC and the periodic orbit, a phase interaction function is constructed that allows the investigation of phase-locked network states using the theory of weakly coupled oscillators. For large networks with global gap junction connectivity we develop a theory of strong coupling instabilities of the homogeneous, synchronous, and splay states. For a piecewise linear caricature of the Morris-Lecar model, with oscillations arising from a homoclinic bifurcation, we show that large-amplitude oscillations in the mean membrane potential are organized around such unstable orbits.
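
The appeal of piecewise linear models is that the flow is exactly solvable within each linear regime, which is what makes closed-form periodic orbits and PRCs tractable. The sketch below integrates a McKean-style planar piecewise linear caricature under constant current injection and numerically estimates the period of its limit cycle; the three-branch nullcline and all parameter values are illustrative assumptions, not the paper's exact models.

```python
# McKean-style planar piecewise-linear oscillator under constant current.
import numpy as np

def f(v):
    """Piecewise-linear caricature of a cubic fast nullcline."""
    if v < 0.25:
        return -v
    elif v < 0.75:
        return v - 0.5
    else:
        return 1.0 - v

def simulate(i_ext=0.9, gamma=0.5, eps=0.1, dt=1e-3, t_max=200.0):
    """Euler integration of v' = f(v) - w + I, w' = eps*(v - gamma*w)."""
    n = int(t_max / dt)
    v, w = 0.0, 0.0
    vs = np.empty(n)
    for k in range(n):
        dv = f(v) - w + i_ext
        dw = eps * (v - gamma * w)
        v += dt * dv
        w += dt * dw
        vs[k] = v
    return vs

vs = simulate()
# Crude period estimate from upward crossings of the unstable fixed point
# v = 0.4, using the second half of the run to skip the initial transient.
crossings = np.where((vs[:-1] < 0.4) & (vs[1:] >= 0.4))[0]
late = crossings[crossings > len(vs) // 2]
if len(late) > 1:
    print("estimated period:", np.mean(np.diff(late)) * 1e-3)
```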

Relevance:

30.00%

Publisher:

Abstract:

This dissertation applies time-series methods to model the FTSE100 financial index. Based on the return series, stationarity was examined with the Phillips-Perron test, normality with the Jarque-Bera test, and independence with the autocorrelation function and the Ljung-Box test; GARCH models were then used to model and forecast the conditional variance (volatility) of the series under study. Financial time series have peculiar characteristics, exhibiting periods that are more volatile than others. These periods occur in clusters, suggesting a degree of temporal dependence. Given the presence of such volatility clusters (nonlinearity), conditionally heteroskedastic models are required, i.e., models in which the conditional variance of the series is not constant but time-dependent. Given the high variability of financial time series over time, the ARCH model (Engle, 1982) and its generalization, the GARCH model (Bollerslev, 1986), prove the most suitable for studying volatility. In particular, these nonlinear models have a random conditional variance, through which the future volatility of the series can be estimated and forecast. Finally, an empirical study is presented, modeling and forecasting a set of real data on the FTSE100 index.
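
To make the workflow concrete, a minimal Python sketch of the same pipeline is given below, using simulated heavy-tailed returns as a stand-in for the FTSE100 series. It assumes the third-party packages arch, statsmodels, and scipy; the data and parameter choices are illustrative only.

```python
# Stationarity, normality, and dependence diagnostics, then a GARCH(1,1) fit.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model
from arch.unitroot import PhillipsPerron

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)  # placeholder daily % returns

# stationarity (Phillips-Perron), normality (Jarque-Bera),
# and dependence in squared returns (Ljung-Box)
print("Phillips-Perron p =", PhillipsPerron(returns).pvalue)
jb_stat, jb_p = stats.jarque_bera(returns)
print("Jarque-Bera p =", jb_p)
lb = acorr_ljungbox(returns**2, lags=[10])
print("Ljung-Box (squared returns, lag 10) p =", lb["lb_pvalue"].iloc[0])

# GARCH(1,1) with Student-t innovations, then a 5-day variance forecast
model = arch_model(returns, vol="GARCH", p=1, q=1, dist="t")
res = model.fit(disp="off")
print(res.params)
print(res.forecast(horizon=5).variance.iloc[-1])
```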

Relevance:

30.00%

Publisher:

Abstract:

This thesis consists of three articles on optimal fiscal and monetary policy. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income taxes. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, despite the fact that price changes are costly and despite the presence of time-varying labor taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a clear trade-off between the optimal inflation rate and its volatility on the one hand and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and its volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility rise remarkably, by more than 58% and 10% respectively, and the optimal income tax rate and its volatility decline dramatically. These results matter because in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central objective of optimal monetary policy. Absent fiscal policy and money demand, the optimal inflation rate falls very close to zero, with roughly 97 percent less volatility, consistent with the literature. In the second article, I show that the quantitative results imply a negative relationship between workers' bargaining power and the welfare costs of monetary rules: the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in sharp contrast to the literature, rules that respond to output and to labor market tightness entail considerably lower welfare costs than an inflation-targeting rule; this is especially the case for the rule responding to labor market tightness. Welfare costs also fall markedly as the output coefficient in the monetary rules increases. My results indicate that raising workers' bargaining power to the Hosios level or beyond makes the welfare costs of all three monetary rules fall significantly, and responding to output or labor market tightness no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third article, I first show that the Friedman rule is not optimal in a monetary model with a cash-in-advance constraint on firms when the government has access to distortionary consumption taxes to finance its spending. I then argue that, in the presence of these distortionary taxes, the Friedman rule is optimal if we assume a model with raw and effective labor in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, and unlike the cash-credit goods model in which the two goods carry the same price, the Friedman rule is optimal even when the wage rates differ. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
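
For reference, the Friedman rule discussed in the third article sets the net nominal interest rate to zero, eliminating the opportunity cost of holding money; combined with the Fisher relation, this implies deflation at roughly the real interest rate. This is the standard textbook statement, not a formula taken from the thesis itself:

```latex
\[
  1 + i_t = (1 + r_t)(1 + \pi_t), \qquad
  i_t = 0 \;\Longrightarrow\; \pi_t \approx -\,r_t .
\]
```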

Relevance:

30.00%

Publisher:

Abstract:

Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is due to much of the research effort being focused on generality rather than specificity. Namely, current research tends to focus on constructing and improving protocols for the strongest notions of security or for an arbitrary number of parties. However, in real-world deployments, these security notions are often too strong, or the number of parties running a protocol would be smaller. In this thesis we make several steps towards bridging the efficiency gap of secure computation by focusing on constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:
- We show an efficient (when amortized over multiple runs) maliciously secure two-party computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if it gets caught then the honest party obtains a certificate proving that the given party cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.
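
As a point of reference for readers new to the area, the toy Python sketch below shows the simplest instance of secure computation: three parties learn the sum of their private inputs via additive secret sharing over a prime field. It illustrates only the semi-honest setting; the maliciously secure and covert protocols contributed by the thesis are substantially more involved.

```python
# Toy additive secret sharing: three parties jointly compute a sum
# without revealing individual inputs (semi-honest setting only).
import secrets

P = 2**61 - 1  # prime field modulus, an illustrative choice

def share(x, n=3):
    """Split x into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

inputs = [12, 30, 7]                      # each party's private input
all_shares = [share(x) for x in inputs]   # party i sends its j-th share to party j

# each party locally adds the shares it received ...
partial_sums = [sum(col) % P for col in zip(*all_shares)]
# ... and only the combined result is revealed
assert reconstruct(partial_sums) == sum(inputs) % P
print("joint sum:", reconstruct(partial_sums))
```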

Relevance:

30.00%

Publisher:

Abstract:

This project comprises five studies centered on the development of a new cardiovascular diagnostic approach to assess myocardial oxygenation and microvascular function. By combining an oxygenation-sensitive (OS) cardiovascular magnetic resonance (CMR) sequence with breathing maneuvers and arterial blood gas analysis, a non-invasive procedure can be used to induce a vasoactive response and assess the oxygenation reserve, a key measure of vascular function. The number of prescribed cardiac diagnostic tests and interventions is growing rapidly. Non-invasive imaging and tests are often performed before invasive procedures. Cardiac imaging can assess the presence or absence of coronary stenoses, an important economic factor in our health care system. Non-invasive imaging techniques provide accurate information for identifying the presence and location of perfusion deficits in patients with symptoms of myocardial ischemia. Nevertheless, many current techniques require radiation, contrast agents or tracers, as well as pharmacological or physical stress protocols. CMR imaging can identify a significant coronary stenosis without radiation, and new trends in CMR aim to develop diagnostic techniques that require no pharmacological stressor or contrast agent. The main objective of this project was to develop and test a new diagnostic technique for assessing coronary vascular function using OS-CMR combined with breathing maneuvers as a vasoactive stimulus. The secondary objectives were to use OS-CMR to assess myocardial oxygenation and the coronary response in the presence of altered arterial blood gases. The vascular response to breathing maneuvers was validated in an animal model, then in two healthy volunteers, and finally in a population of patients with cardiovascular disease. In the animal model, breathing maneuvers induced a significant change in invasively measured coronary blood flow, and it was shown that in the presence of a hemodynamically significant coronary stenosis, OS-CMR could detect a myocardial oxygenation deficit. In healthy humans, compared with adenosine (the standard agent for inducing coronary vasodilation), breathing maneuvers induced a stronger oxygenation response in healthy myocardium. Finally, we applied the breathing maneuvers in a group of patients with coronary artery disease, whose myocardial oxygenation response was altered by coronary stenosis. We then assessed the effects of arterial blood gases on myocardial oxygenation, showing that the coronary response to an apnea stimulus is attenuated during hyperoxia: in the animal model with a stenosis, supplemental oxygen caused a global reduction in coronary blood flow and an oxygenation deficit.
In conclusion, this work has improved our understanding of new diagnostic techniques in cardiovascular imaging. Moreover, we have shown that combining breathing maneuvers with OS-CMR imaging can provide a non-invasive, cost-effective method for assessing regional and global coronary vascular function.

Relevance:

30.00%

Publisher:

Abstract:

Friedreich ataxia (FRDA) is the most common form of autosomal-recessive ataxia. Common nonmotor features include cardiomyopathy and diabetes mellitus. At present, no effective treatments are available to prevent disease progression. Age of onset varies from infancy to adulthood. In the majority of patients, FRDA is caused by intronic GAA expansions in FXN, which encodes a highly conserved small mitochondrial matrix protein, frataxin. A mouse model of FRDA has been difficult to generate because complete loss of frataxin causes early embryonic lethality. Although there are some controversies about the function of frataxin, recent biochemical and structural studies have confirmed that it is a component of the multiprotein complex that assembles iron-sulfur clusters in the mitochondrial matrix. The main consequences of frataxin deficiency are energy deficit, altered iron metabolism, and oxidative damage.

Relevance:

30.00%

Publisher:

Abstract:

Spiking neural networks - networks that encode information in the timing of spikes - are emerging as a new approach within the artificial neural network paradigm, arising from cognitive science. One of these new models is the pulsed neural network with radial basis function, a network able to store information in the axonal propagation delays of neurons. Learning algorithms have been proposed for this model that aim to map input pulses to output pulses. Recently, a new method was proposed to encode constant data into a temporal sequence of spikes, stimulating deeper studies to establish the abilities and limits of this new approach. However, a well-known problem with this kind of network is the high number of free parameters - more than 15 - that must be properly configured or tuned to allow network convergence. This work presents, for the first time, a new learning function for training this network that allows the automatic configuration of one of the key network parameters: the synaptic weight decreasing factor.
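
To illustrate the flavor of temporal encoding, the sketch below implements one common scheme for turning a constant value into spike times: a population of Gaussian receptive fields in which neurons whose preferred value lies closer to the input fire earlier. The parameters are illustrative assumptions, and this is not necessarily the exact encoding method referenced above.

```python
# Gaussian receptive-field population coding of a constant value into spike times.
import numpy as np

def encode(value, n_neurons=8, v_min=0.0, v_max=1.0, t_max=10.0):
    centers = np.linspace(v_min, v_max, n_neurons)   # preferred values
    sigma = (v_max - v_min) / (n_neurons - 1)        # receptive-field width
    activation = np.exp(-0.5 * ((value - centers) / sigma) ** 2)  # in (0, 1]
    spike_times = t_max * (1.0 - activation)         # strong activation -> early spike
    return centers, spike_times

centers, times = encode(0.3)
for c, t in zip(centers, times):
    print(f"neuron centered at {c:.2f} fires at t = {t:.2f} ms")
```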

Relevance:

30.00%

Publisher:

Abstract:

This paper applies two measures to assess spillovers across markets: the Diebold-Yilmaz (2012) spillover index and the Hafner and Herwartz (2006) analysis of multivariate GARCH models using volatility impulse response analysis. We use two data sets: daily realized volatility estimates taken from the Oxford-Man RV library, running from the beginning of 2000 to October 2016, for the S&P500 and the FTSE, plus ten years of daily returns for the New York Stock Exchange Index and the FTSE 100 index, from 3 January 2005 to 31 January 2015. Both data sets cover the Global Financial Crisis (GFC) and the subsequent European Sovereign Debt Crisis (ESDC). The spillover index captures the transmission of volatility to and from markets, plus net spillovers. The key difference between the measures is that the spillover index captures an average of spillovers over a period, whilst volatility impulse response functions (VIRF) have to be calibrated to conditional volatility estimated at a particular point in time; the VIRF thus provide information about the impact of independent shocks on volatility. In the latter analysis, we explore the impact of three different shocks: the onset of the GFC, which we date as 9 August 2007 (GFC1); the point a year later at which the crisis came to a head, on 15 September 2008 (GFC2); and a third shock on 9 May 2010. Our modelling includes leverage and asymmetric effects in a multivariate GARCH framework, analysed using both BEKK and diagonal BEKK (DBEKK) models. A key result is that the impact of negative shocks is larger, in terms of the effects on variances and covariances, but shorter in duration, in this case a difference between three and six months.
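
For readers unfamiliar with the first measure, the sketch below computes the Diebold-Yilmaz (2012) total spillover index for a bivariate VAR(1) using the generalized forecast error variance decomposition of Pesaran and Shin; the VAR coefficients and shock covariance are illustrative assumptions, not estimates from the paper's data.

```python
# Diebold-Yilmaz total spillover index from a generalized FEVD of a VAR(1).
import numpy as np

A = np.array([[0.5, 0.2],      # assumed VAR(1) coefficient matrix
              [0.1, 0.6]])
Sigma = np.array([[1.0, 0.3],  # assumed shock covariance
                  [0.3, 1.0]])
H = 10                         # forecast horizon
n = A.shape[0]

# MA(infinity) coefficients for a VAR(1): A_h = A**h
ma = [np.linalg.matrix_power(A, h) for h in range(H)]

theta = np.zeros((n, n))
for i in range(n):
    denom = sum(ma[h][i] @ Sigma @ ma[h][i] for h in range(H))
    for j in range(n):
        num = sum((ma[h][i] @ Sigma[:, j]) ** 2 for h in range(H)) / Sigma[j, j]
        theta[i, j] = num / denom

theta /= theta.sum(axis=1, keepdims=True)   # row-normalize the decomposition
total_spillover = 100 * (theta.sum() - np.trace(theta)) / n
print(f"total spillover index: {total_spillover:.1f}%")
```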

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to contribute to the debate on corporate governance models in European transition economies. The paper consists of four parts. After a historical overview of the evolution of corporate governance, the introduction presents various understandings of the corporate governance function and describes current issues in corporate governance. Part two deals with governance systems in the (mainly domestically) privatized former state-owned companies in Central European transition countries, covering the main types of company ownership structures, relationships between governance and management functions, and deficiencies in existing governance systems. Part three analyses the factors that determine the efficiency of the relationship between the corporate governance and management functions in Central European transition economies, addressing why the German (continental European) governance model is usually the preferred choice and why the chosen models underperform. In the conclusion the author offers suggestions on how the Central European transition countries should improve their corporate governance in the future.

Relevance:

30.00%

Publisher:

Abstract:

Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations: the process whereby the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure, and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and are also dependent on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of the basis vectors, with coefficients obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in ConnectFlow, an existing finite element code developed and marketed by Serco Technical Services that models groundwater flow in fracture networks. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
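
In linear-algebra terms, the basis-vector construction can be sketched as follows: if J is a sensitivity matrix mapping changes in fracture log-transmissivity to changes in measured pressure, the minimum-norm basis vectors are the columns of the Moore-Penrose pseudoinverse of J. The toy numbers below are illustrative assumptions; the actual method derives sensitivities from the finite element flow model rather than from a random matrix.

```python
# Linearized sketch of conditioning fracture log-transmissivities on pressures.
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_frac = 3, 20
J = rng.normal(size=(n_meas, n_frac))    # assumed pressure sensitivities

B = np.linalg.pinv(J)                    # columns are the basis vectors
# column k raises pressure k by 1 while holding the other pressures fixed:
assert np.allclose(J @ B, np.eye(n_meas))

p_model = np.array([10.0, 12.0, 9.0])    # pressures from the current model run
p_obs = np.array([10.4, 11.7, 9.2])      # measured pressures

# choose coefficients so the linearized pressures match the observations
c = p_obs - p_model
delta_logT = B @ c                       # update to fracture log-transmissivities
print("max |update|:", np.abs(delta_logT).max())
```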

Relevance:

30.00%

Publisher:

Abstract:

Doctorate in Management (Doutoramento em Gestão)

Relevance:

30.00%

Publisher:

Abstract:

Stem cell transplantation holds great promise for the treatment of myocardial infarction injury. We recently described embryonic stem cell-derived cardiac progenitor cells (CPCs) capable of differentiating into cardiomyocytes, vascular endothelium, and smooth muscle. In this study, we hypothesized that transplanted CPCs would preserve function of the infarcted heart by participating in both muscle replacement and neovascularization. Differentiated CPCs formed functional electromechanical junctions with cardiomyocytes in vitro and conducted action potentials over cm-scale distances. When transplanted into infarcted mouse hearts, CPCs engrafted long-term in the infarct zone and surrounding myocardium without causing teratomas or arrhythmias. The grafted cells differentiated into cross-striated cardiomyocytes forming gap junctions with the host cells, while also contributing to neovascularization. Serial echocardiography and pressure-volume catheterization demonstrated attenuated ventricular dilatation and preserved left ventricular fractional shortening, systolic function, and diastolic function. Our results demonstrate that CPCs can engraft, differentiate, and preserve the functional output of the infarcted heart.

Relevance:

30.00%

Publisher:

Abstract:

The diaphragm is the primary inspiratory pump muscle of breathing. Notwithstanding its critical role in pulmonary ventilation, the diaphragm, like other striated muscles, is malleable in response to physiological and pathophysiological stressors, with potential implications for the maintenance of respiratory homeostasis. This review considers hypoxic adaptation of the diaphragm muscle, with a focus on functional, structural, and metabolic remodeling relevant to conditions such as high altitude and chronic respiratory disease. On the basis of emerging data in animal models, we posit that hypoxia is a significant driver of respiratory muscle plasticity, with evidence suggestive of both compensatory and deleterious adaptations in conditions of sustained exposure to low oxygen. Cellular strategies driving diaphragm remodeling during exposure to sustained hypoxia appear to confer hypoxic tolerance at the expense of peak force-generating capacity, a key functional parameter that correlates with patient morbidity and mortality. Changes include, but are not limited to: redox-dependent activation of hypoxia-inducible factor (HIF) and MAP kinases; time-dependent carbonylation of key metabolic and functional proteins; decreased mitochondrial respiration; activation of atrophic signaling and increased proteolysis; and altered functional performance. Diaphragm muscle weakness may be a signature effect of sustained hypoxic exposure. We discuss the putative role of reactive oxygen species as mediators of both advantageous and disadvantageous adaptations of diaphragm muscle to sustained hypoxia, and the role of antioxidants in mitigating adverse effects of chronic hypoxic stress on respiratory muscle function.