987 results for Stochastic process


Relevance: 30.00%

Publisher:

Abstract:

This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we guided the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analyses and introducing parametric methods of estimation. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems that deviates from the classical exponential Debye pattern. Then, we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations were obtained by using fractional integrals and derivatives of distributed order. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is a parametric class of H-sssi processes whose marginal probability density functions evolve in time according to a partial integro-differential equation of fractional type. The ggBm is of course non-Markovian. Throughout the work, we remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm was defined canonically in the so-called grey noise space. However, we were able to provide a characterization that is independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which was made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation was interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t). We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
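As a hedged illustration of the objects this abstract starts from, the sketch below simulates fractional Gaussian noise via a Cholesky factorization of its autocovariance and cumulates it into a fractional Brownian motion path; the function names and parameters are our own, not the thesis's code.

```python
import numpy as np

def fgn_autocovariance(n, H):
    """Autocovariance gamma(k) of fractional Gaussian noise with Hurst index H."""
    k = np.arange(n)
    return 0.5 * (np.abs(k + 1) ** (2 * H)
                  - 2 * np.abs(k) ** (2 * H)
                  + np.abs(k - 1) ** (2 * H))

def simulate_fbm(n, H, rng=None):
    """Sample an fBm path at t = 1..n by cumulating correlated fGn increments."""
    rng = np.random.default_rng(rng)
    gamma = fgn_autocovariance(n, H)
    # Covariance matrix of the stationary increment process (fGn).
    cov = np.array([[gamma[abs(i - j)] for j in range(n)] for i in range(n)])
    L = np.linalg.cholesky(cov)        # O(n^3); fine for short paths
    fgn = L @ rng.standard_normal(n)   # correlated Gaussian increments
    return np.cumsum(fgn)              # fBm = partial sums of fGn

path = simulate_fbm(512, H=0.8)        # H > 1/2: long-range dependent increments
```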

Relevance: 30.00%

Publisher:

Abstract:

The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article “A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve”. The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. The membrane potential behaves differently for various types of external input. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential spikes repetitively; (ii) Aihara, Matsumoto and Ikegaya [1] reported that, depending on the amplitude and frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are illustrated by examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model for the computer simulations in chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and the periodic behavior for inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to assess the influence of noise on the results of chapter 2.
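A minimal sketch of the chapter-3 idea of embedding an input in an Ornstein-Uhlenbeck process, discretized by Euler-Maruyama; the time constant, noise amplitude and function names are illustrative assumptions, not the thesis's values.

```python
import numpy as np

def ou_current(i_det, dt, tau=5.0, sigma=1.0, rng=None):
    """Ornstein-Uhlenbeck noise around a deterministic input current i_det(t):

    dI = -(I - i_det) / tau * dt + sigma * dW, discretized by Euler-Maruyama."""
    rng = np.random.default_rng(rng)
    I = np.empty_like(i_det)
    I[0] = i_det[0]
    for n in range(len(i_det) - 1):
        dW = rng.standard_normal() * np.sqrt(dt)
        I[n + 1] = I[n] - (I[n] - i_det[n]) / tau * dt + sigma * dW
    return I

dt = 0.01                                   # time step (ms, say)
t = np.arange(0.0, 100.0, dt)
constant_input = np.full_like(t, 10.0)      # case (i): constant applied current
noisy_input = ou_current(constant_input, dt)
```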

Relevance: 30.00%

Publisher:

Abstract:

We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation with interaction. The social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force. The repulsion range and strength are rescaled with the number of animals N. We show that, as N tends to infinity, the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly towards the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term remains non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force. Different laws of motion for the predator are considered.
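A hedged toy version of such an individual-based model: Euler-Maruyama for N interacting animals with global aggregation toward all others and an exponentially decaying short-range repulsion. The force kernels and constants below are our illustrative choices, not the paper's exact ones.

```python
import numpy as np

def simulate_swarm(N=200, steps=2000, dt=1e-3, sigma=0.1,
                   attract=1.0, repel=2.0, r0=0.05, rng=None):
    """Euler-Maruyama for dX_i = [aggregation + repulsion](X) dt + sigma dW_i."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(-1.0, 1.0, size=(N, 2))           # initial positions in 2D
    for _ in range(steps):
        diff = X[None, :, :] - X[:, None, :]          # diff[i, j] = X_j - X_i
        dist = np.linalg.norm(diff, axis=-1) + np.eye(N)  # avoid /0 on diagonal
        # Global aggregation: drift toward every other animal (mean-field).
        agg = attract * diff.sum(axis=1) / N
        # Short-range repulsion within range ~r0; in the paper this kernel is
        # rescaled with N, which makes the corresponding limit term local.
        w = repel * np.exp(-dist / r0) / dist
        rep = -(w[:, :, None] * diff).sum(axis=1) / N
        X += (agg + rep) * dt + sigma * np.sqrt(dt) * rng.standard_normal((N, 2))
    return X

positions = simulate_swarm()
```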

Relevance: 30.00%

Publisher:

Abstract:

The variables involved in the equations that describe realistic synaptic dynamics always vary in a limited range. Their boundedness makes the synapses forgetful, not through the mere passage of time, but because new experiences overwrite old memories. The forgetting rate depends on how many synapses are modified by each new experience: many changes mean fast learning and fast forgetting, whereas few changes mean slow learning and long memory retention. Reducing the average number of modified synapses can extend the memory span at the price of a reduced amount of information stored when a new experience is memorized. Any trick that slows down the learning process in a smart way can improve memory performance. We review some of the tricks that make it possible to elude fast forgetting (oblivion). They are based on the stochastic selection of the synapses whose modifications are actually consolidated following each new experience. In practice, only a randomly selected, small fraction of the synapses eligible for an update are actually modified. This makes it possible to acquire the amount of information necessary to retrieve the memory without compromising the retention of old experiences. The fraction of modified synapses can be further reduced in a smart way by changing synapses only when it is really necessary, i.e. when the post-synaptic neuron does not respond as desired. Finally, we show that such a stochastic selection emerges naturally from spike-driven synaptic dynamics which read noisy pre- and post-synaptic neural activities. These activities can actually be generated by a chaotic system.
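A minimal sketch of the stochastic-selection idea on a toy perceptron-like neuron: synapses are eligible for an update only on errors, and each eligible synapse is then consolidated with small probability q. All names and parameters are illustrative assumptions.

```python
import numpy as np

def train_stochastic_selection(patterns, labels, q=0.05, epochs=50, rng=None):
    """Perceptron-like rule: update only when the output neuron is wrong,
    and then consolidate each eligible synapse with small probability q."""
    rng = np.random.default_rng(rng)
    n = patterns.shape[1]
    w = np.zeros(n)
    for _ in range(epochs):
        for x, y in zip(patterns, labels):          # y in {-1, +1}
            if np.sign(w @ x) != y:                 # change only when needed ...
                mask = rng.random(n) < q            # ... and only a random
                w += mask * y * x                   # fraction q of the synapses
    return w

rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(100, 500))        # random binary patterns
y = rng.choice([-1.0, 1.0], size=100)
w = train_stochastic_selection(X, y, q=0.05)
```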

Relevance: 30.00%

Publisher:

Abstract:

The AEGISS (Ascertainment and Enhancement of Gastrointestinal Infection Surveillance and Statistics) project aims to use spatio-temporal statistical methods to identify anomalies in the space-time distribution of non-specific gastrointestinal infections in the UK, using the Southampton area in southern England as a test case. In this paper, we use the AEGISS project to illustrate how spatio-temporal point process methodology can be used in the development of a rapid-response spatial surveillance system. Current surveillance of gastroenteric disease in the UK relies on general practitioners reporting cases of suspected food poisoning through a statutory notification scheme, voluntary laboratory reports of the isolation of gastrointestinal pathogens, and standard reports of general outbreaks of infectious intestinal disease by public health and environmental health authorities. However, most statutory notifications are made only after a laboratory reports the isolation of a gastrointestinal pathogen. As a result, detection is delayed and the ability to react to an emerging outbreak is reduced. For more detailed discussion, see Diggle et al. (2003). A new and potentially valuable source of data on the incidence of non-specific gastroenteric infections in the UK is NHS Direct, a 24-hour phone-in clinical advice service. NHS Direct data are less likely than reports by general practitioners to suffer from spatially and temporally localized inconsistencies in reporting rates. Also, reporting delays by patients are likely to be reduced, as no appointments are needed. Against this, NHS Direct data sacrifice specificity: each call to NHS Direct is classified only according to the general pattern of reported symptoms (Cooper et al., 2003). The current paper focuses on the use of spatio-temporal statistical analysis for early detection of unexplained variation in the spatio-temporal incidence of non-specific gastroenteric symptoms, as reported to NHS Direct. Section 2 describes our statistical formulation of this problem, the nature of the available data and our approach to predictive inference. Section 3 describes the stochastic model. Section 4 gives the results of fitting the model to NHS Direct data. Section 5 shows how the model is used for spatio-temporal prediction. The paper concludes with a short discussion.
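The paper's full spatio-temporal model is not reproduced here, but a minimal, hedged sketch of the basic point-process building block, simulating an inhomogeneous Poisson process by Lewis-Shedler thinning, may help fix ideas; the intensity function below is a toy stand-in, not the fitted NHS Direct model.

```python
import numpy as np

def thinning_poisson(intensity, lam_max, t_max, rng=None):
    """Event times of an inhomogeneous Poisson process on [0, t_max] with
    rate intensity(t) <= lam_max, simulated by Lewis-Shedler thinning."""
    rng = np.random.default_rng(rng)
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)         # candidate at rate lam_max
        if t > t_max:
            return np.array(times)
        if rng.random() < intensity(t) / lam_max:   # keep with prob rate/max
            times.append(t)

# Toy daily-cycle intensity for call arrivals (purely illustrative).
rate = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t)
calls = thinning_poisson(rate, lam_max=9.0, t_max=30.0)
```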

Relevance: 30.00%

Publisher:

Abstract:

Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same tendency is projected for the upcoming years, given the aggressive governmental policies for the reduction of fossil fuel dependency. The so-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, the size of wind turbines has grown rapidly over time. However, safety and economic concerns have emerged as a result of the new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information on the actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven numerical schemes, capable of predicting the actual dynamic states (eigen-realizations) of traditional time-invariant dynamic systems. Consequently, we propose a modified data-driven SI metric based on the so-called Subspace Realization Theory, adapted for stochastic, non-stationary and time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components of which the wind turbine is made. In the long run, the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, based on the Adaptive Simulated Annealing (ASA) process. This iterative engine performs a set of function minimizations computed with a metric called the Modal Assurance Criterion (MAC). In summary, the thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
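As a small concrete piece of the model-updating loop described above, the sketch below computes the Modal Assurance Criterion between mode shapes; this is the standard MAC formula, with illustrative inputs rather than turbine data.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode shapes:
    MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b)); 1 = same shape."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

def mac_matrix(Phi_test, Phi_model):
    """Pairwise MAC between the columns of two mode-shape matrices."""
    return np.array([[mac(Phi_test[:, i], Phi_model[:, j])
                      for j in range(Phi_model.shape[1])]
                     for i in range(Phi_test.shape[1])])

rng = np.random.default_rng(0)
Phi = rng.standard_normal((12, 3))          # 12 sensors, 3 identified modes
print(mac_matrix(Phi, Phi).round(2))        # identity-like diagonal
```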

Relevance: 30.00%

Publisher:

Abstract:

Stochastic simulation is an important and practical technique for computing probabilities of rare events, such as the payoff probability of a financial option, the probability that a queue exceeds a certain level, or the probability of ruin of an insurer's risk process. Rare events occur so infrequently that they cannot be reasonably recorded during a standard simulation procedure: specific simulation algorithms that counteract the rarity of the event to be simulated are required. An important algorithm in this context is based on changing the sampling distribution and is called importance sampling. Optimal Monte Carlo algorithms for computing rare-event probabilities are either logarithmically efficient or possess bounded relative error.
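A minimal sketch of importance sampling for a canonical rare event, the Gaussian tail probability P(X > a): sample from a mean-shifted (exponentially tilted) proposal and reweight by the likelihood ratio. The choice of proposal is a textbook one, not specific to this work.

```python
import numpy as np

def tail_prob_is(a, n=100_000, rng=None):
    """Estimate P(X > a), X ~ N(0,1), by importance sampling.

    Proposal: N(a, 1). The likelihood ratio of N(0,1) against N(a,1) at y is
    exp(-a*y + a**2/2), so the weighted indicator below is unbiased."""
    rng = np.random.default_rng(rng)
    y = rng.standard_normal(n) + a                 # draws from the proposal
    weights = np.exp(-a * y + a**2 / 2.0)
    return np.mean((y > a) * weights)

print(tail_prob_is(5.0))       # approx 2.87e-7; naive MC would see ~0 hits
```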

Relevance: 30.00%

Publisher:

Abstract:

Markov processes are very useful in modeling health-care problems. The objective of this study is to provide a structured methodology for forecasting cost by combining a stochastic model of utilization (a Markov chain) with a deterministic cost function. The cost perspective in this study is the reimbursement for the services rendered. The data used are the OneCare database of claim records of its enrollees over the two-year period January 1, 1996–December 31, 1997. The model combines a Markov chain, which describes the utilization pattern and its variability, where the use of resources by risk groups (age, gender and diagnosis) is considered in the process, with a cost function determined from a fixed schedule based on real costs or charges for those in the OneCare claims database. The cost function is a secondary application of the model. Goodness of fit will be checked for the model against the traditional method of cost forecasting.
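A hedged sketch of the chain-plus-cost-function idea: expected reimbursement over a horizon from a utilization transition matrix and a per-state cost schedule. The states, matrix and costs below are invented placeholders, not OneCare figures.

```python
import numpy as np

def expected_cost(P, costs, pi0, periods):
    """Expected total cost over `periods` steps of a Markov chain with
    transition matrix P, initial distribution pi0 and per-state costs."""
    pi, total = np.asarray(pi0, float), 0.0
    for _ in range(periods):
        total += pi @ costs      # expected cost accrued this period
        pi = pi @ P              # utilization distribution next period
    return total

# Hypothetical utilization states: well, outpatient, inpatient.
P = np.array([[0.90, 0.08, 0.02],
              [0.60, 0.30, 0.10],
              [0.30, 0.40, 0.30]])
costs = np.array([0.0, 150.0, 2500.0])       # reimbursement per period
print(expected_cost(P, costs, pi0=[1.0, 0.0, 0.0], periods=24))
```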

Relevance: 30.00%

Publisher:

Abstract:

In this paper we introduce technical efficiency via an intercept that evolves over time as an AR(1) process in a stochastic frontier (SF) panel-data framework. The distinguishing features of the model are as follows. First, the model is dynamic in nature. Second, it can separate technical inefficiency from fixed firm-specific effects which are not part of inefficiency. Third, the model allows one to estimate technical change separately from change in technical efficiency. We propose the maximum likelihood (ML) method to estimate the parameters of the model. Finally, we derive expressions to calculate/predict technical inefficiency (efficiency).
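A hedged simulation of a data-generating process in this spirit (not the authors' estimator): a panel frontier with fixed firm effects and an inefficiency term driven by an AR(1). Taking u_it = exp(z_it) to keep inefficiency non-negative is our illustrative choice, as are all parameter values.

```python
import numpy as np

def simulate_sf_panel(n_firms=50, T=20, beta=0.6, rho=0.8, rng=None):
    """Simulate y_it = alpha_i + beta * x_it - u_it + v_it, where the
    log-inefficiency z_it follows a stationary AR(1) and u_it = exp(z_it).
    (Illustrative DGP only; keeps u_it >= 0 and firm effects separate.)"""
    rng = np.random.default_rng(rng)
    alpha = rng.normal(1.0, 0.3, n_firms)            # fixed firm effects
    x = rng.normal(0.0, 1.0, (n_firms, T))
    z = np.empty((n_firms, T))
    z[:, 0] = rng.normal(-1.0, 0.2, n_firms)
    for t in range(1, T):                            # AR(1) dynamics
        z[:, t] = -0.2 + rho * z[:, t - 1] + rng.normal(0, 0.2, n_firms)
    u = np.exp(z)                                    # non-negative inefficiency
    v = rng.normal(0.0, 0.1, (n_firms, T))           # symmetric noise
    y = alpha[:, None] + beta * x - u + v
    return y, x, u

y, x, u = simulate_sf_panel()
```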

Relevance: 30.00%

Publisher:

Abstract:

Stochastic model updating must be considered in order to quantify the uncertainties inherent in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly with respect to theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of the responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of the parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
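A toy, hedged walk-through of the proposed decomposition: fit a quadratic response surface to a stand-in "FE model", draw Monte Carlo samples of the responses, solve one deterministic inverse problem per sample, and read off parameter means and variances. Every model and number below is a placeholder.

```python
import numpy as np
from scipy.optimize import least_squares

def fe_model(theta):
    """Stand-in for an expensive FE run: two 'frequencies' from two parameters."""
    k, m = theta
    return np.array([np.sqrt(k / m), np.sqrt(3.0 * k / m)])

# 1) Fit a quadratic response surface on a small design (surrogate for FE).
rng = np.random.default_rng(0)
design = rng.uniform([0.8, 0.8], [1.2, 1.2], size=(60, 2))
Y = np.array([fe_model(t) for t in design])
basis = lambda t: np.array([1, t[0], t[1], t[0]**2, t[1]**2, t[0]*t[1]])
A = np.array([basis(t) for t in design])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)         # (6, 2) coefficients
surrogate = lambda t: basis(t) @ coef

# 2) Monte Carlo over measured-response scatter; one deterministic inverse each.
true_resp = fe_model(np.array([1.0, 1.0]))
samples = true_resp + rng.normal(0.0, 0.01, size=(500, 2))
est = np.array([least_squares(lambda t, r=r: surrogate(t) - r,
                              x0=[1.0, 1.0]).x for r in samples])

# 3) Statistics of the updated parameters.
print("means:", est.mean(axis=0), "variances:", est.var(axis=0))
```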

Relevance: 30.00%

Publisher:

Abstract:

This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a certain sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
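For orientation, a minimal sketch of the baseline setting the paper works in, linear value-function approximation updated by TD(0) under a fixed policy; this is the textbook semi-gradient rule, not the adaptive algorithm proposed in the paper, and the toy chain is our own.

```python
import numpy as np

def td0_linear(episodes, phi, alpha=0.05, gamma=0.95, dim=8):
    """TD(0) with linear approximation V(s) ~= phi(s) @ w.

    `episodes` yields (s, r, s_next, done) transitions under a fixed policy."""
    w = np.zeros(dim)
    for s, r, s_next, done in episodes:
        target = r + (0.0 if done else gamma * phi(s_next) @ w)
        w += alpha * (target - phi(s) @ w) * phi(s)   # semi-gradient step
    return w

# Toy chain: states 0..7, reward 1 on reaching the right end, one-hot features.
rng = np.random.default_rng(0)
phi = lambda s: np.eye(8)[s]
def random_walk(n_steps=20_000):
    s = 3
    for _ in range(n_steps):
        s2 = min(max(s + rng.choice([-1, 1]), 0), 7)
        r, done = (1.0, True) if s2 == 7 else (0.0, False)
        yield s, r, s2, done
        s = 3 if done else s2

print(td0_linear(random_walk(), phi).round(2))
```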

Relevance: 30.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Office of Driver and Pedestrian Programs, Washington, D.C.

Relevance: 30.00%

Publisher:

Abstract:

The birth, death and catastrophe process is an extension of the birth-death process that incorporates the possibility of reductions in population of arbitrary size. We will consider a general form of this model in which the transition rates are allowed to depend on the current population size in an arbitrary manner. The linear case, where the transition rates are proportional to current population size, has been studied extensively. In particular, extinction probabilities, the expected time to extinction, and the distribution of the population size conditional on nonextinction (the quasi-stationary distribution) have all been evaluated explicitly. However, whilst these characteristics are of interest in the modelling and management of populations, processes with linear rate coefficients represent only a very limited class of models. We address this limitation by allowing for a wider range of catastrophic events. Despite this generalisation, explicit expressions can still be found for the expected extinction times.
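A hedged sketch of simulating such a process with the Gillespie algorithm, using per-capita birth and death rates and a catastrophe that removes a uniformly random number of individuals; the particular rate forms are illustrative, not the paper's general state-dependent ones.

```python
import numpy as np

def gillespie_bdc(n0=50, t_max=100.0, b=0.20, d=0.15, c=0.01, rng=None):
    """Birth-death-catastrophe process: per-capita birth rate b, death rate d,
    catastrophe rate c; a catastrophe removes a random number of individuals."""
    rng = np.random.default_rng(rng)
    t, n, path = 0.0, n0, [(0.0, n0)]
    while t < t_max and n > 0:
        rates = np.array([b * n, d * n, c * n])      # all scale with n here
        total = rates.sum()
        t += rng.exponential(1.0 / total)            # time to next event
        event = rng.choice(3, p=rates / total)
        if event == 0:
            n += 1                                   # birth
        elif event == 1:
            n -= 1                                   # death
        else:
            n -= rng.integers(1, n + 1)              # catastrophe of random size
        path.append((t, n))
    return path                                      # n == 0 means extinction

trajectory = gillespie_bdc()
```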

Relevance: 30.00%

Publisher:

Abstract:

In this paper we construct implicit stochastic Runge-Kutta (SRK) methods for solving stochastic differential equations of Stratonovich type. Instead of using the increment of a Wiener process, modified random variables are used. We give convergence conditions for the SRK methods with these modified random variables. In particular, the truncated random variable is used. We present a two-stage stiffly accurate diagonally implicit SRK (SADISRK2) method of strong order 1.0 which has better numerical behaviour than extant methods. We also construct a five-stage diagonally implicit SRK method and a six-stage stiffly accurate diagonally implicit SRK method of strong order 1.5. The mean-square and asymptotic stability properties of the trapezoidal method and the SADISRK2 method are analysed and compared with those of an explicit method and a semi-implicit method. Numerical results are reported, confirming the convergence properties and comparing the numerical behaviour of these methods.
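As a hedged illustration of the ingredients mentioned (implicitness, fixed-point solves, truncated increments), the sketch below implements the stochastic implicit midpoint rule for a scalar Stratonovich SDE; it is a simple member of this family, not the SADISRK2 method itself, and the truncation level is a standard textbook choice.

```python
import numpy as np

def implicit_midpoint_stratonovich(f, g, x0, h, n_steps, k=4, rng=None):
    """Implicit midpoint rule for the Stratonovich SDE dX = f(X) dt + g(X) o dW,
    with the standard normal in the Wiener increment truncated at
    A_h = sqrt(2k|ln h|), a standard device that keeps the implicit
    equations well behaved."""
    rng = np.random.default_rng(rng)
    A_h = np.sqrt(2 * k * abs(np.log(h)))
    x = x0
    for _ in range(n_steps):
        dW = np.clip(rng.standard_normal(), -A_h, A_h) * np.sqrt(h)
        y = x                                     # fixed-point iteration for
        for _ in range(50):                       # the implicit midpoint value
            mid = 0.5 * (x + y)
            y_new = x + f(mid) * h + g(mid) * dW
            if abs(y_new - y) < 1e-12:
                break
            y = y_new
        x = y
    return x

# Linear test equation dX = a X dt + b X o dW, exact X(t) = exp(a t + b W(t)).
a, b = -1.0, 0.5
x_end = implicit_midpoint_stratonovich(lambda x: a * x, lambda x: b * x,
                                       x0=1.0, h=1e-3, n_steps=1000)
```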

Relevance: 30.00%

Publisher:

Abstract:

Proposed by M. Stutzer (1996), canonical valuation is a new method for valuing derivative securities under the risk-neutral framework. It is non-parametric, simple to apply, and, unlike many alternative approaches, does not require any option data. Although canonical valuation has great potential, its applicability in realistic scenarios has not yet been widely tested. This article documents the ability of canonical valuation to price derivatives in a number of settings. In a constant-volatility world, canonical estimates of option prices struggle to match a Black-Scholes estimate based on historical volatility. However, in a more realistic stochastic-volatility setting, canonical valuation outperforms the Black-Scholes model. As the volatility-generating process becomes further removed from the constant-volatility world, the relative performance edge of canonical valuation becomes more evident. In general, the results are encouraging: canonical valuation is a useful technique for valuing derivatives. (C) 2005 Wiley Periodicals, Inc.
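A hedged sketch in the spirit of Stutzer's canonical valuation: bootstrap historical gross returns to the option horizon, exponentially tilt the empirical distribution so the discounted underlying is a martingale, and price under the tilted weights. The data and parameters below are fabricated placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def canonical_call_price(s0, strike, rf_gross, hist_returns, T, n=20_000, rng=None):
    """Canonical (maximum-entropy) valuation of a European call: tilt the
    bootstrapped return distribution so that E_pi[R] equals the risk-free
    gross return, then take the discounted expected payoff under pi."""
    rng = np.random.default_rng(rng)
    # Bootstrap T-period gross returns from the historical sample.
    R = np.prod(rng.choice(hist_returns, size=(n, T), replace=True), axis=1)
    Rf = rf_gross ** T
    # Find the tilt gamma with E_pi[R] = Rf, pi_i proportional to exp(gamma R_i).
    def moment(gamma):
        w = np.exp(gamma * (R - R.mean()))        # centring for stability
        return (R * w).sum() / w.sum() - Rf
    gamma = brentq(moment, -50.0, 50.0)
    w = np.exp(gamma * (R - R.mean()))
    pi = w / w.sum()
    payoff = np.maximum(s0 * R - strike, 0.0)
    return (pi * payoff).sum() / Rf               # discounted risk-neutral mean

rng = np.random.default_rng(0)
hist = np.exp(rng.normal(0.0003, 0.01, 1000))     # fake daily gross returns
print(canonical_call_price(100.0, 100.0, 1.0002, hist, T=60))
```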