947 results for Stochastic ODEs


Relevance: 20.00%

Abstract:

This paper describes a parallel semi-Lagrangian finite difference approach to the pricing of early-exercise Asian options on assets with stochastic volatility. A multigrid procedure is described for the fast iterative solution of the discrete linear complementarity problems that result. The accuracy and performance of this approach are improved considerably by a strike-price-related analytic transformation of asset prices. Asian options are contingent claims with payoffs that depend on the average price of an asset over some time interval. The payoff may depend on this average and a fixed strike price (fixed-strike Asians) or on the average and the asset price (floating-strike Asians). The option may also permit early exercise (American contract) or confine the holder to a fixed exercise date (European contract). The fixed-strike Asian with early exercise is considered here, with continuous arithmetic averaging. Pricing such an option when the asset price has stochastic volatility requires the solution of a tri-variate partial differential inequality in the three state variables of asset price, average price and volatility (or, equivalently, variance). The similarity transformations [6] used with floating-strike Asian options to reduce the dimensionality of the problem are not applicable to fixed strikes, so the numerical solution of a tri-variate problem is necessary. The computational challenge is to provide accurate solutions quickly enough to support real-time trading at a reasonable cost in terms of hardware requirements.
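
The abstract does not spell out the smoother, but early-exercise discretizations of this kind lead to linear complementarity problems (LCPs) of the form $x \ge c$, $Ax \ge b$, $(x - c)^\top(Ax - b) = 0$. As a point of reference, here is a minimal sketch of projected SOR, a common iterative solver (and multigrid smoother) for such LCPs; the matrix, payoff and right-hand side below are illustrative placeholders, not the paper's discretization.

```python
import numpy as np

def projected_sor(A, b, c, x0, omega=1.2, tol=1e-10, max_iter=10_000):
    """Projected SOR for the LCP: x >= c, A x >= b, (x - c)^T (A x - b) = 0.

    A is assumed to have nonzero diagonal (e.g. an M-matrix arising from
    an implicit finite-difference discretization).
    """
    x = x0.copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel value using the latest iterates, then relax...
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            # ...and project onto the early-exercise constraint x_i >= c_i.
            x[i] = max(c[i], x[i] + omega * (gs - x[i]))
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Toy usage: a 1-D obstacle problem on a small grid (placeholder data).
n = 50
A = np.diag(np.full(n, 2.2)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)
b = np.zeros(n)
c = np.maximum(1.0 - np.linspace(0.0, 2.0, n), 0.0)   # placeholder payoff
x = projected_sor(A, b, c, x0=c.copy())
```

In a multigrid scheme of the kind the paper describes, a sweep like this would typically serve as the smoother on each grid level.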

Relevance: 20.00%

Abstract:

This paper introduces the stochastic version of the Geometric Machine Model for the modelling of sequential, alternative, parallel (synchronous) and nondeterministic computations with stochastic numbers stored in a (possibly infinite) shared memory. The programming language $L(D_\infty)$, induced by the coherence space of processes $D_\infty$, can be applied to sequential and parallel products in order to provide recursive definitions for such processes, together with a domain-theoretic semantics of the stochastic arithmetic. We analyze both the spatial (ordinal) recursion, related to the spatial modelling of the stochastic memory, and the temporal (structural) recursion, given by the inclusion relation modelling partial objects in the ordered structure of process construction.

Relevance: 20.00%

Abstract:

In this paper we establish, from extensive numerical experiments, that the two-dimensional stochastic fire-diffuse-fire model belongs to the directed percolation universality class. This model is an idealized model of intracellular calcium release that retains both the discrete nature of calcium stores and the stochastic nature of release. It is formed from an array of noisy threshold elements that are coupled only by a diffusing signal. The model supports spontaneous release events that can merge to form spreading circular and spiral waves of activity. The critical level of noise required for the system to exhibit a non-equilibrium phase transition between propagating and non-propagating waves is obtained by examining the local slope $\delta(t) = -\,\mathrm{d}\ln\Pi(t)/\mathrm{d}\ln t$ of the survival probability $\Pi(t)$ for a wave to propagate for a time $t$, which at criticality follows the directed-percolation power law $\Pi(t) \propto t^{-\delta}$.
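
For readers unfamiliar with the technique, the local slope can be estimated directly from simulated survival curves. A minimal sketch, using a synthetic power-law curve rather than the paper's data (the exponent value is a commonly quoted 2+1-dimensional directed-percolation figure, used here purely for illustration):

```python
import numpy as np

def local_slope(t, Pi):
    """Local slope delta(t) = -d ln(Pi) / d ln(t), estimated by
    finite differences on log-log axes."""
    return -np.gradient(np.log(Pi), np.log(t))

# Synthetic survival curve with a placeholder exponent delta ~ 0.451.
t = np.logspace(0, 4, 200)
Pi = t ** -0.451
delta_t = local_slope(t, Pi)   # should hover around 0.451
```

At criticality $\delta(t)$ settles to a constant, while off-critical curves bend away from it; scanning the noise level for the flattest $\delta(t)$ is the usual way the critical point is located in directed-percolation studies.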

Relevance: 20.00%

Abstract:

This Ph.D. thesis contains four essays in mathematical finance, focusing on the pricing of Asian options (Chapter 4), the pricing of futures and futures options (Chapters 5 and 6), and time-dependent volatility in futures options (Chapter 7). In Chapter 4, the applicability of the comonotonicity approach of Albrecher et al. (2005) is investigated in the context of various benchmark models for equities and commodities. Instead of the classical Lévy models of Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for the prices of Asian options in these models and, as a by-product, delivers super-hedging strategies that can be easily implemented. In Chapter 5, two types of three-factor models are studied for valuing commodity futures contracts while allowing volatility to be stochastic. Both models have closed-form solutions for futures prices. It is shown that Model 2 is theoretically superior to Model 1 and also performs very well empirically; moreover, Model 2 can easily be implemented in practice. In comparison with the Schwartz (1997) two-factor model, Model 2 has its own advantages, making it a good choice for pricing commodity futures contracts. Furthermore, if the two models are used together, a more accurate price for commodity futures contracts can be obtained in most situations. In Chapter 6, the applicability of the asymptotic approach developed in Fouque et al. (2000b) is investigated for pricing commodity futures options in a Schwartz (1997) multi-factor model featuring both stochastic convenience yield and stochastic volatility. It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. With empirical data from the natural gas futures market, it is also demonstrated that a significantly better calibration can be achieved by using the correction term, as compared with the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The pricing formula can also be used to back out the time-dependent spot volatility from futures option prices observed in the market. Furthermore, the limitations of the method used to find the time-dependent spot volatility are explained, and it is shown how to verify its accuracy.
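
The abstract reproduces no chapter-level formulas; as a point of reference, the kind of Monte Carlo benchmark against which the Chapter 4 comonotonic upper bounds would be compared can be sketched as follows, for a fixed-strike arithmetic Asian call under Heston. All parameter values are illustrative, and a simple full-truncation Euler scheme with a discrete average is assumed.

```python
import numpy as np

def asian_call_heston_mc(S0=100.0, K=100.0, T=1.0, r=0.03,
                         v0=0.04, kappa=1.5, theta=0.04, sigma=0.3,
                         rho=-0.6, n_steps=252, n_paths=100_000, seed=0):
    """Fixed-strike arithmetic Asian call under Heston, priced by
    Monte Carlo with a full-truncation Euler discretization."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    v = np.full(n_paths, v0)
    running_sum = np.zeros(n_paths)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)          # full truncation of the variance
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos * dt) * z2
        running_sum += S
    avg = running_sum / n_steps             # discrete arithmetic average
    payoff = np.maximum(avg - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

price = asian_call_heston_mc()
```

A comonotonic upper bound of the kind studied in Chapter 4 would then be checked against such an estimate plus its standard error.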

Relevance: 20.00%

Abstract:

Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can provide information about the dynamics of the aquifer domain in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physically based mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study in an outcrop area of the Guarani Aquifer System (GAS) located in southeastern Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
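
The abstract specifies the framework rather than a particular model; the following minimal sketch conveys the idea under strong simplifying assumptions: an AR(1)-with-input model fitted independently to each monitored well, with inverse-distance weighting standing in for the geostatistical interpolation of the fitted parameters (all names and data below are synthetic).

```python
import numpy as np

def fit_ar1_exog(h, p):
    """Least-squares fit of h[t] = a*h[t-1] + b*p[t] + c for one well,
    where h is the water-level series and p a recharge input."""
    X = np.column_stack([h[:-1], p[1:], np.ones(len(h) - 1)])
    coef, *_ = np.linalg.lstsq(X, h[1:], rcond=None)
    return coef                              # (a, b, c)

def idw(params, xy_obs, xy_new, power=2.0):
    """Inverse-distance interpolation of fitted parameters to a new site
    (a simple stand-in for the kriging step)."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w[:, None] * params).sum(axis=0) / w.sum()

# Synthetic demo: three wells sharing one recharge series.
rng = np.random.default_rng(1)
t = 120                                       # monthly observations
p = rng.gamma(2.0, 1.0, t)                    # synthetic recharge
wells = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
params = np.array([
    fit_ar1_exog(np.cumsum(rng.normal(0, 0.1, t)) + 0.3 * p, p)
    for _ in wells
])
params_new = idw(params, wells, np.array([0.5, 0.5]))
```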

Relevance: 20.00%

Abstract:

It is well known that most real-life problems involve uncertainty. In the first part of the dissertation, basic concepts and properties of stochastic programming, also known as optimization under uncertainty, are introduced. Moreover, since stochastic programs are computationally demanding, some simpler related models are presented, such as the wait-and-see model, the expected value model and the expected result of using the expected value solution. The expected value of perfect information and the value of the stochastic solution quantify the benefit of stochastic programming relative to these other models. In the second part, an application that optimizes the distribution of non-perishable products, guaranteeing certain nutritional requirements at minimum cost, has been designed and implemented with the modeller GAMS and the optimizer CPLEX. It has been developed within the Hazia project, managed by the Sortarazi association in collaboration with the Food Bank of Biscay and the Basic Social Services of several districts of Biscay.
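
To make these measures concrete, here is a minimal sketch on an invented newsvendor-style problem with three scenarios, using the minimization convention WS ≤ RP ≤ EEV, with EVPI = RP − WS and VSS = EEV − RP. All numbers are illustrative, and brute-force enumeration replaces the GAMS/CPLEX solve.

```python
import numpy as np

# Toy problem: order x units at unit cost c; unmet demand costs p per
# unit, leftovers cost h per unit (all numbers invented).
c, p, h = 1.0, 5.0, 0.5
demands = np.array([80.0, 100.0, 120.0])     # equiprobable scenarios
probs = np.full(3, 1 / 3)

def cost(x, d):
    return c * x + p * np.maximum(d - x, 0) + h * np.maximum(x - d, 0)

xs = np.arange(0, 201)                        # candidate order quantities

# WS: solve each scenario with perfect foresight, then average.
ws = probs @ np.array([cost(xs, d).min() for d in demands])

# RP: minimize the expected cost over all scenarios at once.
rp = np.array([probs @ cost(x, demands) for x in xs]).min()

# EEV: fix x at the optimum of the mean-demand problem, then evaluate.
x_ev = xs[np.argmin(cost(xs, probs @ demands))]
eev = probs @ cost(x_ev, demands)

evpi = rp - ws      # expected value of perfect information
vss = eev - rp      # value of the stochastic solution
```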

Relevance: 20.00%

Abstract:

Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations: the process by which the fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium, there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and depend on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log-transmissivities of the fractures in the network that produces a unit increase in the pressure at one measurement point while keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of the basis vectors, with coefficients obtained by minimizing an error function. A mathematical summary of the method is given. The algorithm is implemented in the existing finite element code ConnectFlow, developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large-scale test case.
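
The construction can be paraphrased in linear-algebra terms: if $J$ is the Jacobian of the measured pressures with respect to the fracture log-transmissivities, then a set of vectors with the stated "unit increase at one point, constant at the others" property is given by the columns of a right inverse of $J$ (the minimum-norm choice is used below). The following sketch of one conditioning iteration is only our reading of the abstract; the solver calls and all data are placeholders, not ConnectFlow's interface.

```python
import numpy as np

def conditioning_step(log_T, p_meas, solve_pressures, jacobian):
    """One Gauss-Newton-style update of fracture log-transmissivities.

    solve_pressures(log_T) -> model pressures at the measurement points;
    jacobian(log_T)        -> J[i, k] = d p_i / d log_T_k.
    Both are placeholders for calls into a groundwater flow simulator.
    """
    p_model = solve_pressures(log_T)
    J = jacobian(log_T)        # assumed to have full row rank
    # Basis vectors: column j raises measured pressure j by one unit
    # while holding the others fixed (minimum-norm right inverse of J).
    V = J.T @ np.linalg.inv(J @ J.T)
    # Coefficients that cancel the pressure misfit, to first order.
    k = p_meas - p_model
    return log_T + V @ k
```

Because the pressures depend nonlinearly on the transmissivities, such a step would be iterated, and in practice the coefficients would come from minimizing a (possibly regularized) error function rather than the plain first-order misfit used here.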

Relevance: 20.00%

Abstract:

This dissertation is devoted to the equations of motion governing the evolution of a fluid or gas at the macroscopic scale. The classical model is a PDE description known as the Navier-Stokes equations. The behavior of solutions is notoriously complex, leading many in the scientific community to describe fluid mechanics using a statistical language. In the physics literature, this is often done in an ad hoc manner with limited precision about the sense in which the randomness enters the evolution equation. The stochastic PDE community has begun proposing precise models, where a random perturbation appears explicitly in the evolution equation. Although this has been an active area of study in recent years, the existing literature is almost entirely devoted to incompressible fluids. The purpose of this thesis is to take a step forward in addressing this statistical perspective in the setting of compressible fluids. In particular, we study the well-posedness of the corresponding system of stochastic Navier-Stokes equations, satisfied by the density, velocity, and temperature. The evolution of the momentum involves a random forcing which is Brownian in time and colored in space. We allow for multiplicative noise, meaning that spatial correlations may depend locally on the fluid variables. Our main result is a proof of global existence of weak martingale solutions to the Cauchy problem set within a bounded domain, emanating from large initial data. The proof involves a mix of deterministic and stochastic analysis tools. Fundamentally, the approach is based on weak compactness techniques from the deterministic theory combined with martingale methods. Four layers of approximate stochastic PDEs are built and analyzed. A careful study of the probability laws of our approximating sequences is required. We prove appropriate tightness results and appeal to a recent generalization of the Skorohod theorem. This ultimately allows us to deduce analogues of the weak compactness tools of Lions and Feireisl, appropriately interpreted in the stochastic setting.

Relevance: 20.00%

Abstract:

Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and alternative project financing arrangements. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost, yet the most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches that improve on the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private building owners are under mandates to conserve energy; for example, Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities; the two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). A further model leverages a single congressional appropriation to implement ECM projects (Chapter 3), with returns from implemented ECM projects used to fund additional ones. In these cases, fluctuations in energy costs and uncertainty in the estimated savings strongly influence both the ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum number of projects completed in each stage; a comparative method using Conditional Value at Risk is analyzed, and time consistency is addressed. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to capture interactive effects between ECM projects. The approach uses McCormick inequalities (McCormick, 1976) to re-express constraints involving products of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
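
For the product of two binary variables, the McCormick linearization referred to above is exact. A standard statement (not the dissertation's full formulation): introduce $z$ to stand for $xy$ with $x, y \in \{0,1\}$ and impose

```latex
% Exact linearization of the product z = x*y for binary x, y:
\begin{align*}
  z &\le x, & z &\le y, & z &\ge x + y - 1, & z &\ge 0.
\end{align*}
% For binary x and y these four inequalities describe the convex hull
% of {(x, y, z) : z = xy}, so the product term itself introduces no
% integrality gap.
```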

Relevance: 20.00%

Abstract:

This paper is concerned with a stochastic SIR (susceptible-infective-removed) model for the spread of an epidemic amongst a population of individuals with a random network of social contacts that is also partitioned into households. The behaviour of the model as the population size tends to infinity in an appropriate fashion is investigated. A threshold parameter which determines whether or not an epidemic with few initial infectives can become established and lead to a major outbreak is obtained, as are the probability that a major outbreak occurs and the expected proportion of the population ultimately infected by such an outbreak, together with methods for calculating these quantities. Monte Carlo simulations demonstrate that these asymptotic quantities accurately reflect the behaviour of finite populations, even of only moderate size. The model is compared and contrasted with related models previously studied in the literature. The effects on the outcomes of the model of the amount of clustering present in the overall population structure and of the infectious period distribution are also explored.
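
The household-network construction is too involved to reproduce here, but the role of the Monte Carlo check can be conveyed by a much simpler sketch: estimating the probability of a major outbreak in a homogeneously mixing Markovian SIR model (a deliberate simplification, not the paper's two-level-mixing model; all parameter values are illustrative).

```python
import numpy as np

def sir_final_size(n, beta, gamma, i0, rng):
    """Final size of a Markovian SIR epidemic in a homogeneously mixing
    population of size n, via the embedded jump chain (event times are
    irrelevant for the final size)."""
    s, i = n - i0, i0
    while i > 0:
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        if rng.random() < rate_inf / (rate_inf + rate_rec):
            s -= 1; i += 1        # infection event
        else:
            i -= 1                # recovery event
    return n - s                  # total ever infected

rng = np.random.default_rng(0)
n, beta, gamma = 1000, 2.0, 1.0   # R0 = beta / gamma = 2
sizes = np.array([sir_final_size(n, beta, gamma, 1, rng)
                  for _ in range(2000)])
p_major = (sizes > 0.1 * n).mean()   # fraction of major outbreaks
```

With $R_0 = 2$ and one initial infective, the estimated fraction of major outbreaks should be close to the branching-process value $1 - 1/R_0 = 0.5$, illustrating the kind of asymptotic-versus-finite comparison the paper carries out for its richer model.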

Relevance: 20.00%

Abstract:

This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom, studied using techniques from stochastic and infinite-dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems with infinitely many degrees of freedom pose their own particular mathematical challenges, which have been addressed by different mathematical theories, namely the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit's 75th birthday and celebrate his pioneering and ongoing work in these fields.

Relevance: 20.00%

Abstract:

A classical approach to two-stage and multi-stage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, simpler versions (subproblems) of the original problem can be studied. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite its complete decomposition by scenario, the efficiency of progressive hedging is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on multi-stage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although this method remains to be tested, we expect it to alleviate some numerical and theoretical difficulties of progressive hedging.
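
As a concrete reference for the algorithm being tuned, here is a minimal sketch of the progressive hedging iteration on a toy two-stage quadratic problem whose scenario subproblems solve in closed form (the data, penalty value and stopping rule are all illustrative):

```python
import numpy as np

# Toy problem: min_x E[(x - xi)^2] over equiprobable scenarios xi.
# The optimum is the scenario mean; progressive hedging recovers it by
# driving the per-scenario copies x_s to a common value.
xi = np.array([2.0, 5.0, 11.0])      # scenario data (illustrative)
p = np.full(3, 1 / 3)                # scenario probabilities
rho = 1.0                            # penalty parameter

x_s = xi.copy()                      # scenario-wise first-stage copies
x_bar = p @ x_s
w = np.zeros(3)                      # multipliers on non-anticipativity
for _ in range(200):
    # Scenario subproblem: min (x - xi_s)^2 + w_s*x + (rho/2)(x - x_bar)^2,
    # solved in closed form from the first-order condition.
    x_s = (2 * xi - w + rho * x_bar) / (2 + rho)
    x_bar = p @ x_s                  # implementable (averaged) solution
    w += rho * (x_s - x_bar)         # multiplier update
    if np.max(np.abs(x_s - x_bar)) < 1e-10:
        break
# x_bar -> mean(xi) = 6.0
```

The sensitivity discussed above is visible even here: a very small or very large `rho` slows the rate at which the scenario copies `x_s` reach agreement.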

Relevance: 20.00%

Abstract:

Metaheuristics are widely used in discrete optimization. They can obtain a good-quality solution in reasonable time for problems that are large, complex and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By drawing on prior knowledge of the problem together with notions from machine learning and related fields, an adaptive metaheuristic yields a more general and automatic method for solving problems. Global optimization of mining complexes aims to schedule material movements in the mines and the processing streams so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and nonlinear constraints, solving these models with the optimizers available in industry is often prohibitive; metaheuristics are therefore commonly used for the optimization of mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The method developed by the authors requires many parameters, one of which governs how the simulated annealing method searches the local neighbourhood of solutions. This thesis implements an adaptive neighbourhood-search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
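
The abstract does not describe the adaptation rule; one common pattern, illustrated in the sketch below, is to maintain a weight per neighbourhood operator and update it according to the operator's recent success, as in adaptive large neighbourhood search. The operators, weights and objective here are placeholders, not the thesis's actual method.

```python
import math
import random

def adaptive_sa(x0, objective, operators, t0=1.0, cooling=0.999,
                n_iter=20_000, reaction=0.2, seed=0):
    """Simulated annealing (maximization) with adaptive operator weights:
    operators that recently produced accepted improving moves are
    selected more often."""
    rng = random.Random(seed)
    x, value = x0, objective(x0)
    best_x, best_value = x, value
    weights = [1.0] * len(operators)
    t = t0
    for _ in range(n_iter):
        # Roulette-wheel choice of a neighbourhood operator by weight.
        k = rng.choices(range(len(operators)), weights=weights)[0]
        cand = operators[k](x, rng)
        cand_value = objective(cand)
        delta = cand_value - value
        accepted = delta >= 0 or rng.random() < math.exp(delta / t)
        if accepted:
            x, value = cand, cand_value
            if value > best_value:
                best_x, best_value = x, value
        # Adaptive step: nudge the weight toward 1 on an improving
        # accepted move, toward 0 otherwise; floor keeps operators alive.
        score = 1.0 if accepted and delta > 0 else 0.0
        weights[k] = max((1 - reaction) * weights[k] + reaction * score, 0.05)
        t *= cooling
    return best_x, best_value

# Toy usage: maximize -(x - 3)^2 over integers with two move operators.
ops = [lambda x, r: x + r.choice([-1, 1]),      # small step
       lambda x, r: x + r.choice([-5, 5])]      # large step
best, val = adaptive_sa(0, lambda x: -(x - 3) ** 2, ops)
```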