968 results for Stochastic Process
Abstract:
Response analysis of a linear structure with uncertainties in both structural parameters and external excitation is considered here. When such an analysis is carried out using the spectral stochastic finite element method (SSFEM), the computational cost often tends to be prohibitive due to the rapid growth of the number of spectral bases with the number of random variables and the order of expansion. For instance, if the excitation contains a random frequency, or if it is a general random process, then a good approximation of these excitations using polynomial chaos expansion (PCE) involves a large number of terms, which leads to very high cost. To address this issue of high computational cost, a hybrid method is proposed in this work. In this method, first the random eigenvalue problem is solved using the weak formulation of SSFEM, which involves solving a system of deterministic nonlinear algebraic equations to estimate the PCE coefficients of the random eigenvalues and eigenvectors. Then the response is estimated using a Monte Carlo (MC) simulation, where the modal bases are sampled from the PCE of the random eigenvectors estimated in the previous step, followed by numerical time integration. Numerical studies show that the proposed method successfully reduces the computational burden compared with either a pure SSFEM or a pure MC simulation, and is more accurate than a perturbation method. The computational gain improves as the problem size in terms of degrees of freedom grows, and also as the time span of interest shortens.
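As a hedged illustration of the PCE machinery this abstract relies on (not the authors' SSFEM implementation), here is a minimal one-dimensional Hermite polynomial chaos expansion of a lognormal random variable; the coefficient formula c_k = e^{sigma^2/2} sigma^k / k! is the standard closed form, and all parameter values are illustrative assumptions:

```python
import math
import random

# Probabilists' Hermite polynomials He_k (orthogonal under the standard normal)
def hermite(k, x):
    if k == 0:
        return 1.0
    if k == 1:
        return x
    h0, h1 = 1.0, x
    for n in range(1, k):
        h0, h1 = h1, x * h1 - n * h0  # He_{n+1} = x*He_n - n*He_{n-1}
    return h1

def pce_lognormal(sigma, order):
    # Closed-form PCE coefficients of Y = exp(sigma * xi), xi ~ N(0, 1):
    # c_k = exp(sigma^2 / 2) * sigma^k / k!
    return [math.exp(sigma**2 / 2) * sigma**k / math.factorial(k)
            for k in range(order + 1)]

def sample_pce(coeffs, xi):
    # Evaluate the truncated expansion at a standard-normal sample xi
    return sum(c * hermite(k, xi) for k, c in enumerate(coeffs))

if __name__ == "__main__":
    random.seed(0)
    sigma, order = 0.3, 4            # illustrative values
    coeffs = pce_lognormal(sigma, order)
    draws = [sample_pce(coeffs, random.gauss(0.0, 1.0)) for _ in range(50_000)]
    mc_mean = sum(draws) / len(draws)
    exact_mean = math.exp(sigma**2 / 2)  # E[Y] equals the zeroth coefficient
    print(mc_mean, exact_mean)
```

Sampling from a truncated PCE, as in the hybrid method's MC step, reduces to drawing standard normals and evaluating a polynomial, which is what makes the second stage cheap.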
Abstract:
In this paper we first derive a necessary and sufficient condition for a stationary strategy to be the Nash equilibrium of a discounted constrained stochastic game under certain assumptions. In the process we also develop a nonlinear (non-convex) optimization problem for a discounted constrained stochastic game. We use the linear best-response functions of every player and the complementary slackness theorem for linear programs to derive both the optimization problem and the equivalent condition. We then extend this result to average-reward constrained stochastic games. Finally, we present a heuristic algorithm motivated by our necessary and sufficient conditions for a discounted-cost constrained stochastic game. We numerically observe the convergence of this algorithm to a Nash equilibrium. (C) 2015 Elsevier B.V. All rights reserved.
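The best-response characterization this abstract builds on can be illustrated, in a much simpler unconstrained setting, by checking a candidate mixed-strategy pair of a two-player matrix game against all pure-strategy deviations; the game and strategies below are illustrative assumptions, not taken from the paper:

```python
def expected_payoff(payoff, p, q):
    # Expected payoff when the row player mixes with p and the column player with q
    return sum(p[i] * q[j] * payoff[i][j]
               for i in range(len(p)) for j in range(len(q)))

def is_nash(payoff_row, payoff_col, p, q, tol=1e-9):
    # (p, q) is a Nash equilibrium iff neither player gains by deviating
    # to any pure strategy (payoffs are linear, so pure deviations suffice).
    v_row = expected_payoff(payoff_row, p, q)
    v_col = expected_payoff(payoff_col, p, q)
    for i in range(len(p)):                      # row player's pure deviations
        e = [1.0 if k == i else 0.0 for k in range(len(p))]
        if expected_payoff(payoff_row, e, q) > v_row + tol:
            return False
    for j in range(len(q)):                      # column player's pure deviations
        e = [1.0 if k == j else 0.0 for k in range(len(q))]
        if expected_payoff(payoff_col, p, e) > v_col + tol:
            return False
    return True

if __name__ == "__main__":
    # Matching pennies: row wants to match, column wants to mismatch.
    A = [[1, -1], [-1, 1]]
    B = [[-1, 1], [1, -1]]
    print(is_nash(A, B, [0.5, 0.5], [0.5, 0.5]))  # → True (the unique equilibrium)
    print(is_nash(A, B, [1.0, 0.0], [0.5, 0.5]))  # → False
```

The paper's condition plays an analogous role for stationary strategies in constrained stochastic games, where the best responses come from linear programs rather than a single payoff matrix.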
Abstract:
In this article, we look at the political business cycle problem through the lens of uncertainty. The feedback control we use is the well-known New Keynesian Phillips Curve (NKPC) with stochasticity and wage rigidities: we extend the NKPC model to a continuous-time stochastic setting with an Ornstein-Uhlenbeck process. We minimize the relevant expected quadratic cost by solving the corresponding Hamilton-Jacobi-Bellman equation. The basic intuition of the classical model carries forward qualitatively in our setting, but uncertainty also plays an important role in determining the optimal trajectory of the voter support function. The internal variability of the system acts as a base shifter for the support function in the risk-neutral case. The role of uncertainty is even more prominent in the risk-averse case, where all the shape parameters depend directly on variability. Thus, in this case variability controls both the rates of change and the base-shift parameters. To gain more insight we have also studied the model with time-invariant coefficients and examined numerical solutions. The close relationship between the unemployment rate and the support function for the incumbent party is highlighted. The role of uncertainty in creating sampling fluctuation in this setting, possibly leading to apparently anomalous results, is also explored.
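An Ornstein-Uhlenbeck path of the kind this model rests on can be sketched with a simple Euler-Maruyama discretization; the parameter values below are illustrative assumptions, not the paper's calibration:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    # Euler-Maruyama scheme for dX = theta*(mu - X) dt + sigma dW
    x = x0
    path = [x]
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        x += theta * (mu - x) * dt + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

if __name__ == "__main__":
    rng = random.Random(42)
    # Illustrative parameters: mean reversion toward mu = 0.5
    path = simulate_ou(theta=1.0, mu=0.5, sigma=0.2, x0=0.0,
                       dt=0.01, n_steps=20_000, rng=rng)
    burn_in = 2_000                      # discard the transient
    time_avg = sum(path[burn_in:]) / len(path[burn_in:])
    print(time_avg)  # close to mu for a long path, by mean reversion
```

The mean-reverting drift is what makes the process a natural stand-in for variables, such as unemployment, that fluctuate around a long-run level.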
Abstract:
Stochastic characteristics prevail in the process of short fatigue crack progression. This paper presents a method that takes into account the balance of crack number density to describe the stochastic behaviour of collective short-crack evolution. Simulation results illustrate the stochastic development of short cracks. Experiments on two types of steel show the random distribution of collective short cracks, with the number of cracks and the maximum crack length given as functions of location on the specimen surface. The experiments also give the variation of the total number of short cracks with fatigue cycles. The test results are consistent with the numerical simulations.
Abstract:
The inhomogeneous Poisson process is a point process that has varying intensity across its domain (usually time or space). For nonparametric Bayesian modeling, the Gaussian process is a useful way to place a prior distribution on this intensity. The combination of a Poisson process and a GP is known as a Gaussian Cox process, or doubly-stochastic Poisson process. Likelihood-based inference in these models requires an intractable integral over an infinite-dimensional random function. In this paper we present the first approach to Gaussian Cox processes in which it is possible to perform inference without introducing approximations or finite-dimensional proxy distributions. We call our method the Sigmoidal Gaussian Cox Process, which uses a generative model for Poisson data to enable tractable inference via Markov chain Monte Carlo. We compare our method to competing methods on synthetic data and apply it to several real-world data sets. Copyright 2009.
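The inhomogeneous Poisson process itself can be sampled exactly by Lewis-Shedler thinning whenever the intensity is bounded, which is also the kind of augmentation the sigmoidal construction exploits. The fixed sigmoidal intensity below is only a stand-in assumption for a Gaussian-process draw, not the paper's model:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample_inhomogeneous_poisson(intensity, lam_max, t_end, rng):
    # Lewis-Shedler thinning: propose events from a homogeneous Poisson
    # process of rate lam_max, keep each with probability intensity(t)/lam_max.
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)   # next candidate arrival
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)

if __name__ == "__main__":
    rng = random.Random(7)
    lam_max = 20.0
    # Stand-in for a GP draw: a fixed smooth function squashed by a sigmoid
    intensity = lambda t: lam_max * sigmoid(2.0 * math.sin(t))
    counts = [len(sample_inhomogeneous_poisson(intensity, lam_max, 10.0, rng))
              for _ in range(200)]
    mean_count = sum(counts) / len(counts)
    # The expected count is the integral of the intensity over [0, 10]
    n = 10_000
    expected = sum(intensity((k + 0.5) * 10.0 / n) for k in range(n)) * 10.0 / n
    print(mean_count, expected)
```

Bounding the intensity by lam_max times a sigmoid is exactly what keeps the thinning probability well defined here.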
Abstract:
A newly developed numerical code, MFPA(2D) (Material Failure Process Analysis), is applied to study the influence of stochastic mesoscopic structure on the macroscopic mechanical behavior of rock-like materials. A set of uniaxial compression tests has been studied numerically with specimens containing pre-existing crack-like flaws. The numerical results reveal the influence of random mesoscopic structure on the failure process of brittle material, indicating that the variation of failure mode is strongly sensitive to the local disorder of the specimen. The patterns of crack evolution in the specimens differ markedly from each other due to the random mesoscopic structure of the material. The results give a good explanation for the various fracture modes and the variation in peak strength observed in laboratory studies with specimens made from the same rock block, which is statistically homogeneous on the macro scale. In addition, crack evolution is more complicated in heterogeneous cases than in homogeneous cases.
Abstract:
A brief review is presented of statistical approaches to microdamage evolution. An experimental study of statistical microdamage evolution in two ductile materials under dynamic loading is carried out. The observations indicate that there are large differences in the size and distribution of microvoids between these two materials. With this phenomenon in mind, kinetic equations governing the nucleation and growth of microvoids in nonlinear rate-dependent materials are combined with the balance law of void number to establish statistical differential equations that describe the evolution of the microvoids' number density. The theoretical solution provides a reasonable explanation of the experimentally observed phenomenon. The effects of stochastic fluctuation, which is influenced by the inhomogeneous microscopic structure of the materials, are subsequently examined (i.e. a stochastic growth model). Based on the stochastic differential equation, a Fokker-Planck equation which governs the evolution of the transition probability is derived. The analytical solution for the transition probability is then obtained and the effects of stochastic fluctuation are discussed. The statistical and stochastic analyses may provide effective approaches to reveal the physics of damage evolution and the dynamic failure process in ductile materials.
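For a scalar Itô diffusion, the Fokker-Planck equation referred to here takes the standard textbook form (not transcribed from the paper, which works with the materials-specific drift and diffusion):

```latex
dX_t = a(X_t,t)\,dt + b(X_t,t)\,dW_t
\quad\Longrightarrow\quad
\frac{\partial p}{\partial t}(x,t)
  = -\frac{\partial}{\partial x}\bigl[a(x,t)\,p(x,t)\bigr]
    + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[b^2(x,t)\,p(x,t)\bigr]
```

Here p(x,t) is the transition probability density of the process; the drift a and noise amplitude b would be supplied by the void-growth kinetics.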
Abstract:
The effects of stochastic extension on the statistical evolution of an ideal microcrack system are discussed. First, a general theoretical formulation and an expression for the transition probability of the extension process are presented; then the features of evolution in the stochastic model are demonstrated by several numerical results and compared with those of the deterministic model.
Abstract:
A theory of two-point boundary value problems analogous to the theory of initial value problems for stochastic ordinary differential equations whose solutions form Markov processes is developed. The theory of initial value problems consists of three main parts: the proof that the solution process is Markovian and diffusive; the construction of the Kolmogorov or Fokker-Planck equation of the process; and the proof that the transition probability density of the process is a unique solution of the Fokker-Planck equation.
It is assumed here that the stochastic differential equation under consideration has, as an initial value problem, a diffusive Markovian solution process. When a given boundary value problem for this stochastic equation almost surely has unique solutions, we show that the solution process of the boundary value problem is also a diffusive Markov process. Since a boundary value problem, unlike an initial value problem, has no preferred direction for the parameter set, we find that there are two Fokker-Planck equations, one for each direction. It is shown that the density of the solution process of the boundary value problem is the unique simultaneous solution of this pair of Fokker-Planck equations.
This theory is then applied to the problem of a vibrating string with stochastic density.
Abstract:
A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.
The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.
The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes are characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer’s work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.
First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples of these are presented; viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than solving the original partial differential equations.
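For the Ornstein-Uhlenbeck process named among the examples, the unconditional statistics in question (mean, variance, covariance) have the standard closed forms below (textbook results, stated here in modern notation rather than the thesis's):

```latex
dX_t = -\theta\,(X_t-\mu)\,dt + \sigma\,dW_t,\qquad X_0 = x_0:
\\[4pt]
\mathbb{E}[X_t] = \mu + (x_0-\mu)\,e^{-\theta t},\qquad
\operatorname{Var}[X_t] = \frac{\sigma^2}{2\theta}\bigl(1-e^{-2\theta t}\bigr),
\\[4pt]
\operatorname{Cov}(X_s,X_t) = \frac{\sigma^2}{2\theta}
  \bigl(e^{-\theta\lvert t-s\rvert} - e^{-\theta(t+s)}\bigr)
```

Each statistic solves an ordinary first-order differential equation in t, which is the reduction the text describes.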
In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from basic definition. This illustrates the fundamental properties, and an inherent paradox. Next the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process.
The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process is never temporally homogeneous when the underlying process is non-degenerate.
Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.
Abstract:
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra = 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
Abstract:
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets. © 2010 Springer-Verlag.
Abstract:
Based on the phase-conjugation polarization interference between two two-photon processes, we theoretically investigated the attosecond-scale asymmetry sum-frequency polarization beat in a four-level system (FASPB). The field correlation has weak influence on the FASPB signal when the laser has narrow bandwidth. Conversely, when the laser has broadband linewidth, the FASPB signal shows resonance-nonresonance cross correlation. The two-photon signal exhibits hybrid radiation-matter detuning terahertz damping oscillation, i.e., when the laser frequency is off resonance from the two-photon transition, the signal exhibits damping oscillation and the profile of the two-photon self-correlation signal also exhibits zero time-delay asymmetry of the maxima. We have also investigated the asymmetry of attosecond polarization beat caused by the shift of the two-photon self-correlation zero time-delay phenomenon, in which the maxima of the two two-photon signals are shifted from the zero time-delay point in opposite directions. As an attosecond ultrafast modulation process, FASPB can be intrinsically extended to any level-summation systems of two dipolar forbidden excited states.
Abstract:
A new approach is proposed to simulate splash erosion on local soil surfaces. Without the effect of wind and other raindrops, the impact of a free-falling raindrop was considered an independent event from the stochastic viewpoint. The erosivity of a single raindrop, depending on its kinetic energy, was computed by an empirical relationship in which the kinetic energy was expressed as a power function of the equivalent diameter of the raindrop. An empirical linear function combining the kinetic energy and soil shear strength was used to estimate the amount of soil particles impacted by a single raindrop. Considering an ideal local soil surface with a size of 1 m × 1 m, the expected number of received free-falling raindrops with different diameters per unit time was described by the combination of the raindrop size distribution function and the terminal velocity of raindrops. The total splash amount was seen as the sum of the impact amount of all raindrops in the rainfall event. The total splash amount per unit time was subdivided into three different components: net splash amount, single impact amount and re-detachment amount. The re-detachment amount was obtained by a spatial geometric probability derived using the Poisson function in which overlapped impacted areas were considered. The net splash amount was defined as the mass of soil particles collected outside the splash dish. It was estimated by another spatial geometric probability in which the average splashed distance, related to the median grain size of the soil, and the effects of other impacted soil particles and other free-falling raindrops were considered. Splash experiments in artificial rainfall were carried out to validate the applicability and accuracy of the model. Our simulated results suggested that the net splash amount and re-detachment amount were small parts of the total splash amount, with proportions of 0.15% and 2.6%, respectively.
The comparison of simulated data with measured data showed that this model can be applied to simulate the soil-splash process successfully, and that it needs information on the rainfall intensity and original soil properties, including initial bulk density, water content, median grain size and some empirical constants related to the soil surface shear strength, the raindrop size distribution function and the average splashed distance. Copyright (c) 2007 John Wiley & Sons, Ltd.
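The chain of empirical relationships described above (kinetic energy as a power of drop diameter, detachment proportional to energy over shear strength, drop counts from a size distribution) can be sketched as an expected-value computation. Every functional form and constant below is an illustrative assumption (a Marshall-Palmer style size distribution and rough terminal-velocity fit), not the paper's calibration:

```python
import math

def kinetic_energy(d_mm, a=2.0e-6, b=4.5):
    # Assumed power law: kinetic energy of one raindrop (J) as a function of
    # its equivalent diameter; a and b are illustrative empirical constants.
    return a * d_mm ** b

def detached_mass(d_mm, shear_strength_kpa, c=0.5):
    # Assumed linear relation: soil mass detached by one drop impact (g),
    # proportional to kinetic energy, inversely to soil shear strength.
    return c * kinetic_energy(d_mm) / shear_strength_kpa

def drop_number_density(d_mm, rain_intensity_mm_h, n0=8000.0):
    # Marshall-Palmer style exponential drop-size distribution
    # (drops per m^3 per mm of diameter), with Lambda set by rain intensity.
    lam = 4.1 * rain_intensity_mm_h ** -0.21
    return n0 * math.exp(-lam * d_mm)

def terminal_velocity(d_mm):
    # Rough empirical fit for terminal fall speed (m/s).
    return 3.78 * d_mm ** 0.67

def expected_splash_rate(rain_intensity_mm_h, shear_strength_kpa,
                         d_max_mm=6.0, n_bins=600):
    # Expected detached mass per m^2 per second: integrate, over drop
    # diameter, the flux of drops of each size times the mass one detaches.
    dd = d_max_mm / n_bins
    total = 0.0
    for k in range(n_bins):
        d = (k + 0.5) * dd
        flux = drop_number_density(d, rain_intensity_mm_h) * terminal_velocity(d)
        total += flux * detached_mass(d, shear_strength_kpa) * dd
    return total

if __name__ == "__main__":
    print(expected_splash_rate(30.0, 5.0))   # illustrative units: g m^-2 s^-1
    print(expected_splash_rate(60.0, 5.0))   # heavier rain -> more splash
```

Partitioning this total into net splash, single-impact and re-detachment components would require the paper's spatial geometric probabilities, which are not reproduced here.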