979 results for Stochastic differential equation
Abstract:
We present a model for the mechanical activation of cardiac tissue depending on the evolution of the transmembrane electrical potential and certain gating/ionic variables that are available in most electrophysiological descriptions of the cardiac membrane. The basic idea consists of adding to the chosen ionic model one ordinary differential equation for the kinetics of the mechanical activation function. A relevant example illustrates the desired properties of the proposed model, such as delayed muscle contraction and the correct magnitude of the muscle fibers' shortening.
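The abstract does not specify the ionic model, so the following is only a minimal sketch of the idea: a FitzHugh-Nagumo stand-in membrane model is augmented with one extra ODE whose state T_a plays the role of the mechanical activation function, so that activation lags the voltage upstroke. All parameter values are invented for illustration.

```python
# A minimal sketch (not the authors' model): FitzHugh-Nagumo as a stand-in
# ionic model, plus one ODE for a mechanical activation variable T_a that
# relaxes toward the rectified potential -> delayed contraction.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, eps=0.08, a=0.7, b=0.8, k=1.0, tau_a=10.0):
    v, w, T_a = y
    dv = v - v**3 / 3.0 - w + 0.5           # transmembrane potential kinetics
    dw = eps * (v + a - b * w)              # gating/recovery variable
    dT = (k * max(v, 0.0) - T_a) / tau_a    # slow activation kinetics
    return [dv, dw, dT]

sol = solve_ivp(rhs, (0.0, 200.0), [-1.0, 1.0, 0.0], max_step=0.1)
print(sol.y[2].max())  # peak activation, delayed w.r.t. the voltage upstroke
```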
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This Master's thesis is concerned with the distributions of the unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently gained popularity, mainly owing to increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify the validity of the methods. Among recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are expressed as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and at Åbo Akademi University, Turku.
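As a sketch of the kind of computation described, the following runs a random-walk Metropolis sampler over the single rate constant of a hypothetical first-order reaction ODE; the data, noise level, prior, and proposal scale are all invented for illustration.

```python
# A minimal random-walk Metropolis sketch for the rate constant of a
# hypothetical first-order reaction A -> B (illustrative only).
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
t_obs = np.linspace(0.1, 5.0, 20)
y_obs = np.exp(-0.7 * t_obs) + 0.05 * rng.standard_normal(t_obs.size)  # fake data
sigma = 0.05                                     # assumed noise level

def model(k):
    sol = solve_ivp(lambda t, y: [-k * y[0]], (0.0, 5.0), [1.0], t_eval=t_obs)
    return sol.y[0]

def log_post(k):
    if k <= 0.0:
        return -np.inf                           # positivity prior on the rate
    r = y_obs - model(k)
    return -0.5 * np.sum(r * r) / sigma**2       # Gaussian likelihood, flat prior

k, chain = 1.0, []
lp = log_post(k)
for _ in range(3000):
    k_prop = k + 0.1 * rng.standard_normal()     # random-walk proposal
    lp_prop = log_post(k_prop)
    if np.log(rng.random()) < lp_prop - lp:
        k, lp = k_prop, lp_prop                  # accept
    chain.append(k)
print(np.mean(chain[500:]))                      # posterior mean, near 0.7
```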
Abstract:
Langevin equations of Ginzburg-Landau form, with multiplicative noise, are proposed to study the effects of fluctuations in domain growth. These equations are derived from a coarse-grained methodology. The Cahn-Hilliard-Cook linear stability analysis predicts some effects in the transitory regime. We also derive numerical algorithms for the computer simulation of these equations. The numerical results corroborate the analytical predictions of the linear analysis. We also present simulation results for spinodal decomposition at late times.
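A minimal sketch of one such numerical algorithm is given below, assuming an Itô interpretation and a zero-dimensional (single-variable) Ginzburg-Landau potential; the actual coarse-grained equations are field equations, and their discretization is correspondingly more involved.

```python
# Euler-Maruyama sketch (Ito interpretation assumed) for a zero-dimensional
# Ginzburg-Landau-type Langevin equation with multiplicative noise:
#   dphi = (phi - phi^3) dt + sqrt(eps) * phi dW
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, eps = 1e-3, 50_000, 0.1
phi = 0.1
for _ in range(n_steps):
    drift = phi - phi**3                  # -dV/dphi for V = phi^4/4 - phi^2/2
    noise = np.sqrt(eps) * phi            # multiplicative noise amplitude
    phi += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
print(phi)  # trajectory settles near a minimum of V, jittered by the noise
```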
Abstract:
The objective of this work was to develop a rapidly converging shell element for the analysis of nonlinearly flexible bodies. The shell element is based on the absolute nodal coordinate formulation, and it utilizes a description of curvature in the definition of the elastic forces. The developed element was compared with a shell element based on continuum mechanics and with a shell element of a commercial finite element code. For the simplest load case, the results were compared with the analytical solution according to engineering bending theory. The results of the static tests with the shell element developed in this work corresponded well with those obtained with the commercial finite element method. When the deformations were within the geometrically linear range, the results obtained with the developed shell element agreed better with both the analytical solution and the commercial finite element results than those of the earlier continuum mechanics based shell element. The drawback of the developed shell element, compared with the continuum mechanics based element, is its more complicated description of the kinematics, which results in a considerable increase in computation time. Future work should focus on the development of the numerical solution methods.
Abstract:
We treat some subtleties concerning the First Law of Thermodynamics and discuss the inherent difficulties, namely the interpretation of the heat and work differentials. By proposing a new differential equation for the First Law, written using both system and neighborhood variables, we overcome these difficulties and establish a criterion for the definition of heat and work.
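For reference, the standard differential statement whose terms are at issue is the following (the authors' new equation, involving both system and neighborhood variables, is not reproduced in the abstract); the δ symbols mark the inexact, path-dependent differentials whose interpretation is the stated difficulty:

```latex
% Standard form of the First Law, with W the work done by the system;
% \delta Q and \delta W are inexact (path-dependent), while dU is exact.
\mathrm{d}U = \delta Q - \delta W
```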
Abstract:
The known properties of diffusion on fractals are reviewed in order to give a general outlook on these dynamic processes. We then propose a description developed in the context of the intrinsic metric of fractals, which leads us to a differential equation able to describe diffusion on real fractals in the asymptotic regime. We show that our approach has a stronger physical justification than previous works in this field. The most important result we present is the introduction of a dependence on time and space for the conductivity in fractals, which is deduced by scaling arguments and supported by computer simulations. Finally, the diffusion equation is used to introduce the possibility of reaction-diffusion processes on fractals and to analyze their properties. Specifically, an analytic expression for the speed of the corresponding travelling fronts, which can be of great interest for application purposes, is derived.
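For context, the asymptotic regime referred to is conventionally characterized by the anomalous scaling of the mean-squared displacement; this is a standard result for diffusion on fractals, not a formula taken from the paper:

```latex
% Mean-squared displacement on a fractal; d_w > 2 is the walk dimension,
% and ordinary (Fickian) diffusion is recovered for d_w = 2.
\langle r^{2}(t) \rangle \sim t^{2/d_{w}}
```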
Abstract:
The main objective of this thesis is to show that plate strips subjected to transverse line loads can be analysed using the beam on elastic foundation (BEF) approach. It is shown that the elastic behaviour of both the centre-line section of a semi-infinite plate supported along two edges, and the free edge of a cantilever plate strip, can be accurately predicted by calculations based on the two-parameter BEF theory. The transverse bending stiffness of the plate strip forms the foundation. The foundation modulus is shown, mathematically and physically, to be the zero-order term of the fourth-order differential equation governing the behaviour of a BEF, whereas the torsion rigidity of the plate acts like pre-tension in the second-order term. Direct equivalence is obtained for harmonic line loading by comparing the differential equations of Levy's method (a simply supported plate) with those of the BEF method. By equating the second- and zero-order terms of the semi-infinite BEF model for each harmonic component, two parameters are obtained for a simply supported plate of width B: the characteristic length, 1/λ, and the normalized sum, n, of the effects of axial loading and of the stiffening resulting from the torsion stiffness, whose linear part is denoted nlin. This procedure gives the following result for the first mode when a uniaxial stress field is assumed (ν = 0): 1/λ = √2B/π and nlin = 1. For constant line loading, which is the superposition of harmonic components, slightly different foundation parameters are obtained when the maximum deflection and bending moment values of the theoretical plate, with ν = 0, and the BEF analysis solutions are equated: 1/λ = 1.47B/π and nlin = 0.59 for a simply supported plate; and 1/λ = 0.99B/π and nlin = 0.25 for a fixed plate. The BEF parameters of the plate strip with a free edge are determined based solely on finite element analysis (FEA) results: 1/λ = 1.29B/π and nlin = 0.65, where B is the double width of the cantilever plate strip. Stress biaxiality, ν > 0, is shown not to affect the values of the BEF parameters significantly. The effect of the geometric nonlinearity caused by in-plane axial and biaxial loading is studied theoretically by comparing the differential equations of Levy's method with those of the BEF approach. The BEF model is generalised to take into account the elastic rotation stiffness of the longitudinal edges. Finally, formulae are presented that take into account the effects of Poisson's ratio and of geometric nonlinearity on the bending behaviour resulting from axial and transverse in-plane loading. It is also shown that the BEF parameters of the semi-infinite model are valid for linear elastic analysis of a plate strip of finite length. The BEF model was verified by applying it to the analysis of bending stresses caused by misalignments in a laboratory test panel. In summary, it can be concluded that the advantages of the BEF theory are that it is a simple tool, and that it is accurate enough for specific stress analysis of semi-infinite and finite plate bending problems.
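The generic two-parameter BEF equation being referred to has the following form (the notation here is assumed, not taken from the thesis): the foundation modulus k is the zero-order term, and the pre-tension-like coefficient N, into which the torsion rigidity enters, sits in the second-order term:

```latex
% Generic two-parameter beam-on-elastic-foundation equation (notation assumed):
% EI - bending stiffness, N - pre-tension-like second-order coefficient,
% k - foundation modulus, q - transverse line load, w - deflection.
EI\,\frac{\mathrm{d}^{4} w}{\mathrm{d}x^{4}} - N\,\frac{\mathrm{d}^{2} w}{\mathrm{d}x^{2}} + k\,w = q(x)
```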
Abstract:
In the power market, electricity prices play an important role at the economic level. The behavior of a price trend may change over time in terms of its mean value or its volatility (usually known as a structural break), or it may change for a period of time before reverting back to its original behavior or switching to yet another style of behavior; the latter is typically termed a regime shift or regime switch. Our task in this thesis is to develop an electricity price time series model that captures fat-tailed distributions, can explain this behavior, and allows it to be analyzed for better understanding. For the NordPool data used, the obtained Markov regime-switching model operates on two regimes: regular and non-regular. Three criteria have been considered: a price difference criterion, a capacity/flow difference criterion, and a spikes-in-Finland criterion. The suitability of GARCH modeling for simulating multi-regime behavior is also studied.
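A two-regime Markov switching mechanism of the kind described can be sketched in a few lines; all parameter values below are hypothetical, not those fitted to the NordPool data.

```python
# A minimal two-regime Markov-switching price simulation (hypothetical
# parameters): a calm "regular" regime and a volatile "non-regular" regime.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.98, 0.02],                  # regular -> {regular, non-regular}
              [0.20, 0.80]])                 # non-regular -> {regular, non-regular}
mu, sigma = [30.0, 60.0], [2.0, 15.0]        # regime means and volatilities
state, prices = 0, []
for _ in range(1000):
    state = rng.choice(2, p=P[state])        # Markov chain over regimes
    prices.append(mu[state] + sigma[state] * rng.standard_normal())
print(np.mean(prices), np.std(prices))       # fat right tail from regime 1
```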
Abstract:
A model for predicting temperature evolution for automatic control systems in manufacturing processes requiring the coiling of bars in the transfer table is presented. Although the method is of a general nature, the presentation in this work refers to the manufacturing of steel plates in hot rolling mills. The prediction strategy is based on a mathematical model of the evolution of temperature in a coiling and uncoiling bar, posed as a parabolic partial differential equation on a domain of changing shape. The mathematical model is solved numerically, with the space discretization performed via geometrically adaptive finite elements that accommodate the changes in shape of the domain, using a computationally novel treatment of the thermal contact problem that results from coiling. Time is discretized according to a Crank-Nicolson scheme. Since the actual physical process takes less time than the process-controlling computer requires to solve the full mathematical model, a special predictive device was developed, in the form of a set of least-squares polynomials, based on the off-line numerical solution of the mathematical model.
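The Crank-Nicolson time discretization mentioned can be illustrated on a fixed 1-D domain; the shape-changing domain, adaptive meshing, and thermal contact treatment of the actual model are far beyond this sketch.

```python
# Crank-Nicolson sketch for the 1-D heat equation u_t = alpha * u_xx with
# zero Dirichlet boundaries (illustration of the time scheme only).
import numpy as np

n = 50                                        # interior grid points
alpha, dt = 1.0, 1e-4
dx = 1.0 / (n + 1)
r = alpha * dt / dx**2
# Tridiagonal second-difference operator on the interior nodes.
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
A = np.eye(n) - 0.5 * r * L                   # implicit half-step
B = np.eye(n) + 0.5 * r * L                   # explicit half-step
x = np.linspace(dx, 1.0 - dx, n)
u = np.sin(np.pi * x)                         # initial temperature profile
for _ in range(1000):
    u = np.linalg.solve(A, B @ u)             # one Crank-Nicolson step
print(u.max())                                # decays like exp(-pi^2 * alpha * t)
```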
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique, based on a combination of ensemble and variational Kalman filtering approaches, is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm for retrieving atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the formulation of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to the model error covariance matrix.
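The idea of likelihood calculations via Kalman filtering outputs can be sketched on a toy scalar linear-Gaussian model: the filter's innovations and their variances accumulate the log-likelihood, which can then be handed to an MCMC sampler over the parameters. Everything below is an illustrative assumption, not the dissertation's large-scale setting.

```python
# Filter likelihood sketch: the Kalman filter's innovations v and their
# variances s accumulate the log-likelihood of a scalar linear-Gaussian model.
import numpy as np

Q, R, H = 0.1, 0.5, 1.0                 # process/measurement noise, observation
rng = np.random.default_rng(3)
ys = rng.standard_normal(100)           # stand-in measurement sequence

def log_likelihood(a):
    m, p, ll = 0.0, 1.0, 0.0            # prior mean and variance
    for y in ys:
        m, p = a * m, a * a * p + Q     # predict
        s = H * H * p + R               # innovation variance
        v = y - H * m                   # innovation
        ll += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        k = p * H / s                   # Kalman gain
        m, p = m + k * v, (1 - k * H) * p   # update
    return ll

print(log_likelihood(0.9))  # feed this into an MCMC sampler over parameters
```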
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
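The ingredients of the ridge definition can be sketched as follows: a Gaussian kernel density, its gradient and Hessian, and the one-dimensional ridge condition (the gradient has no component along the Hessian's most negative eigendirections, and those eigenvalues are negative). This is illustrative only, not the thesis' trust region Newton projection method.

```python
# A sketch of the ridge ingredients: Gaussian kernel density, its gradient and
# Hessian, and the 1-D ridge condition (illustrative only).
import numpy as np

def kde_grad_hess(x, data, h):
    d = data - x                                   # (n, dim) differences
    w = np.exp(-0.5 * np.sum(d * d, axis=1) / h**2)
    grad = (w[:, None] * d).sum(axis=0) / h**2
    hess = (w[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / h**4
    hess -= w.sum() * np.eye(x.size) / h**2
    return grad, hess                              # common normalizer cancels

def ridge_residual(x, data, h):
    g, H = kde_grad_hess(x, data, h)
    lam, V = np.linalg.eigh(H)                     # eigenvalues, ascending
    g_perp = V[:, :-1].T @ g                       # gradient off the ridge span
    return lam[:-1], np.linalg.norm(g_perp) / (np.linalg.norm(g) + 1e-12)

# Elongated 2-D cloud whose density ridge runs along the first axis.
data = np.random.default_rng(4).standard_normal((200, 2)) * np.array([3.0, 0.3])
print(ridge_residual(np.array([0.0, 0.0]), data, h=0.5))
# Negative eigenvalues and a small residual indicate a point on a 1-D ridge.
```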
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in many different application areas, such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to the computation of the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form, so approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on the available measurements. Among these filters, particle filters are numerical methods that approximate the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; for instance, an inappropriate choice of importance distribution can lead to the failure of convergence of the particle filter algorithm. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, the estimation of parameters can be carried out by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods, in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is the Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used to tune it. In this thesis, we propose a new way of updating the covariance matrix, based on the variational Bayesian adaptive Kalman filter algorithm.
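A minimal bootstrap particle filter, in which the importance distribution is simply the transition prior, can be sketched as follows for a hypothetical scalar nonlinear model.

```python
# Bootstrap particle filter sketch (importance distribution = transition
# prior) for a toy scalar nonlinear model; illustrative only.
import numpy as np

rng = np.random.default_rng(5)
T, N = 50, 500
x_true, ys = 0.0, []
for _ in range(T):                                   # simulate data
    x_true = 0.5 * x_true + np.sin(x_true) + 0.3 * rng.standard_normal()
    ys.append(x_true + 0.5 * rng.standard_normal())

particles = rng.standard_normal(N)
for y in ys:
    # propagate through the nonlinear dynamics -- the importance distribution
    particles = 0.5 * particles + np.sin(particles) + 0.3 * rng.standard_normal(N)
    w = np.exp(-0.5 * (y - particles) ** 2 / 0.5**2) # Gaussian likelihood
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)] # multinomial resampling
print(particles.mean())                               # filtering mean at final step
```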
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering an Ebola outbreak in the Democratic Republic of Congo, former Zaïre, in 1995 as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs, in order to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other methods in this thesis is its ability to tackle complicated nonlinear problems with large numbers of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate the parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to account for the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function differ; (2) the model error covariance matrix is different from zero, i.e. considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances, and the model predictions consequently show larger uncertainties, in accordance with the assumption of an incomplete model.
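A minimal deterministic SEIR system of the kind referred to can be sketched as follows; the rate values are hypothetical placeholders, not the estimates obtained for the Ebola data.

```python
# Minimal deterministic SEIR sketch (hypothetical parameters, not the
# thesis' Ebola estimates): S -> E -> I -> R.
import numpy as np
from scipy.integrate import solve_ivp

beta, kappa, gamma, N = 0.35, 1 / 9.0, 1 / 7.0, 5_000_000.0

def seir(t, y):
    S, E, I, R = y
    new_inf = beta * S * I / N            # transmission
    return [-new_inf,                     # susceptible
            new_inf - kappa * E,          # exposed -> infectious at rate kappa
            kappa * E - gamma * I,        # infectious -> removed at rate gamma
            gamma * I]

sol = solve_ivp(seir, (0.0, 365.0), [N - 1.0, 0.0, 1.0, 0.0], max_step=1.0)
print(beta / gamma)  # basic reproduction number R0 for this SEIR variant
```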
Abstract:
This master's thesis presents a study of the requisite cooling of an activated sludge process in the paper and pulp industry. The energy consumption of the paper and pulp industry, and of its wastewater treatment plants in particular, is relatively high. It is therefore useful to understand the wastewater treatment process of such industries. The activated sludge process is a biological mechanism that degrades the carbonaceous compounds present in waste. The modified activated sludge model constructed here aims to imitate the bio-kinetics of an activated sludge process. However, due to the complicated non-linear behavior of the biological process, modelling this system is laborious. We first attempt to find a system solution using steady-state modelling of Activated Sludge Model No. 1 (ASM1), approached by Euler's method and an ordinary differential equation solver. Furthermore, an enthalpy study of the paper and pulp industry's vital pollutants was carried out and applied to estimate the temperature shift over a period of time, in order to plan the operation of the cooling water. This supports forecasting the plant's process execution in a cost-effective manner and managing effluent efficiency. The final stage of the thesis was achieved by optimizing the steady state of the ASM1.
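The Euler-method approach mentioned can be illustrated on a two-state Monod-kinetics reduction of the flavor underlying ASM1 (the full model has 13 components; all parameter values below are hypothetical).

```python
# Explicit-Euler sketch for a Monod-type biomass/substrate pair of the flavor
# underlying ASM1 (hypothetical parameters; the full ASM1 has 13 components).
mu_max, K_s, Y, b = 4.0, 10.0, 0.67, 0.3   # 1/d, g/m^3, yield, decay 1/d
X, S, dt = 100.0, 200.0, 0.001             # biomass, substrate, step in days
for _ in range(10_000):                    # integrate 10 days
    mu = mu_max * S / (K_s + S)            # Monod growth rate
    dX = (mu - b) * X                      # growth minus decay
    dS = -mu * X / Y                       # substrate consumed per unit growth
    X, S = X + dt * dX, max(S + dt * dS, 0.0)
print(X, S)                                # state after the growth transient
```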
Abstract:
We provide a theoretical framework to explain the empirical finding that estimated betas are sensitive to the sampling interval, even when continuously compounded returns are used. We suppose that stock prices have both permanent and transitory components. The permanent component is a standard geometric Brownian motion, while the transitory component is a stationary Ornstein-Uhlenbeck process. The discrete time representation of the beta depends on the sampling interval and on two components labelled the "permanent beta" and the "transitory beta". We show that if no transitory component is present in stock prices, then no sampling interval effect occurs. However, the presence of a transitory component implies that the beta is an increasing (decreasing) function of the sampling interval for more (less) risky assets. In our framework, assets are labelled risky if their "permanent beta" is greater than their "transitory beta", and vice versa for less risky assets. Simulations show that our theoretical results provide good approximations for the means and standard deviations of estimated betas in small samples. Our results can be perceived as indirect evidence for the presence of a transitory component in stock prices, as proposed by Fama and French (1988) and Poterba and Summers (1988).
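The mechanism can be sketched in a small simulation (all parameters hypothetical): the market log price carries permanent and transitory components, the stock loads on them with different betas, and the OLS beta then drifts with the sampling interval because the permanent component's return variance grows with the interval while the transitory component's saturates.

```python
# Sketch of the sampling-interval effect on beta (hypothetical parameters):
# permanent component = Brownian motion, transitory = Ornstein-Uhlenbeck.
import numpy as np

rng = np.random.default_rng(6)
dt, n = 1.0 / 252.0, 252 * 40                 # daily steps over 40 years
mP = np.cumsum(0.15 * np.sqrt(dt) * rng.standard_normal(n))  # permanent part
mT = np.zeros(n)                              # transitory OU part
for t in range(1, n):
    mT[t] = mT[t - 1] * (1.0 - 5.0 * dt) + 0.10 * np.sqrt(dt) * rng.standard_normal()
m = mP + mT                                   # market log price
p = 1.4 * mP + 0.6 * mT                       # "risky": permanent beta 1.4 > transitory 0.6

for k in (1, 5, 21, 63):                      # daily, weekly, monthly, quarterly
    rs, rm = np.diff(p[::k]), np.diff(m[::k])
    C = np.cov(rs, rm)
    print(k, round(C[0, 1] / C[1, 1], 3))     # OLS beta rises with the interval
```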