987 results for Stochastic process
Abstract:
Quite often, the construction of a pulp mill involves establishing the size of the tanks that will accommodate the material from the various processes, so estimating the right tank size a priori is vital. Simulation of the whole production process is therefore worthwhile, and there is a need for mathematical models that mimic the behavior of the output from the various production units of the pulp mill and can serve as simulators. Markov chain models, autoregressive moving average (ARMA) models, mean-reversion models with ensemble interaction, and Markov regime-switching models are proposed for this purpose.
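As a rough illustration of the kind of simulator proposed, the following is a minimal sketch (not the thesis code) of a regime-switching mean-reverting tank-level model; the transition matrix, regime means, and mean-reversion parameters are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical two-regime mean-reverting model for a tank level.
# Regime 0 = normal operation, regime 1 = disturbed operation.
rng = np.random.default_rng(0)
P = np.array([[0.95, 0.05],      # Markov regime transition matrix (assumed)
              [0.10, 0.90]])
mu = np.array([50.0, 80.0])      # long-run level per regime (assumed, m^3)
kappa, sigma, dt = 0.5, 4.0, 1.0 # reversion speed, noise scale, time step

n_steps = 500
level = np.empty(n_steps)
level[0], regime = mu[0], 0
for t in range(1, n_steps):
    regime = rng.choice(2, p=P[regime])               # regime switch
    drift = kappa * (mu[regime] - level[t - 1]) * dt  # pull toward regime mean
    level[t] = level[t - 1] + drift + sigma * np.sqrt(dt) * rng.standard_normal()

print(f"max simulated level: {level.max():.1f}")  # crude proxy for tank sizing
```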
Abstract:
A stochastic differential equation (SDE) is a differential equation in which some of the terms, and hence its solution, are stochastic processes. SDEs play a central role in modeling systems in finance, biology, engineering, and other fields. In the modeling process, the computation of the trajectories (sample paths) of solutions to SDEs is very important. However, the exact solution of an SDE is generally difficult to obtain because realizations of Brownian motion are nowhere differentiable, so approximation methods for solutions of SDEs are used. The solutions are continuous stochastic processes that represent diffusive dynamics, a common modeling assumption for financial, biological, physical, and environmental systems. This Master's thesis is an introduction to, and survey of, numerical solution methods for stochastic differential equations. Standard numerical methods, local linearization methods, and filtering methods are described in detail. We compute the root mean square error of each method, on the basis of which we propose the better numerical scheme. Stochastic differential equations can also be formulated from a given ordinary differential equation. In this thesis, we describe two kinds of formulation: parametric and non-parametric techniques. The formulation is based on the epidemiological SEIR model. These methods tend to increase the number of parameters in the constructed SDEs and hence require more data. We compare the two techniques numerically.
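As a concrete instance of the standard schemes surveyed, here is a minimal Euler–Maruyama sketch for geometric Brownian motion, whose exact solution is known, so the root mean square error at the terminal time can be estimated; the parameter values are illustrative only.

```python
import numpy as np

# Euler–Maruyama for dX = mu*X dt + sigma*X dW (geometric Brownian motion).
# GBM has the exact solution X_T = X0*exp((mu - sigma^2/2)*T + sigma*W_T),
# which lets us estimate the strong (RMS) error of the scheme.
rng = np.random.default_rng(42)
mu, sigma, X0, T, n = 0.1, 0.3, 1.0, 1.0, 2**8
dt = T / n

n_paths = 5000
errors = np.empty(n_paths)
for p in range(n_paths):
    dW = np.sqrt(dt) * rng.standard_normal(n)
    X = X0
    for k in range(n):
        X += mu * X * dt + sigma * X * dW[k]   # Euler–Maruyama step
    exact = X0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum())
    errors[p] = X - exact

rmse = np.sqrt(np.mean(errors**2))
print(f"RMSE at T={T}: {rmse:.5f}")  # shrinks roughly like sqrt(dt)
```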
Abstract:
The thesis deals with the analysis of some stochastic inventory models with pooling/retrial of customers. In the first model we analyze an (s,S) production inventory system with retrial of customers. Arrivals of customers from outside the system form a Poisson process. The inter-production times are exponentially distributed with parameter µ. When the inventory level reaches zero, further arriving demands are sent to the orbit, which has capacity M (< ∞). Customers who find the orbit full and the inventory level at zero are lost to the system. Demands from the orbital customers arise according to an exponential distribution with parameter γ. In Model II we extend these results to a perishable inventory system, assuming that the lifetime of each item is exponentially distributed with parameter θ. The study then deals with an (s,S) production inventory with service times and retrial of unsatisfied customers, where primary demands occur according to a Markovian arrival process (MAP). Next we consider an (s,S) retrial inventory with service time in which primary demands occur according to a batch Markovian arrival process (BMAP), the inventory being controlled by the (s,S) policy. We also study an (s,S) inventory system with service time in which primary demands occur according to a Poisson process with parameter λ. The study then concentrates on two models. In the first we analyze an (s,S) inventory system with postponed demands, where arrivals of demands form a Poisson process. In the second we extend our results to a perishable inventory system, assuming that the lifetime of each item follows an exponential distribution with parameter θ. It is also assumed that when the inventory level is zero, an arriving demand enters the pool with probability β and is lost forever with complementary probability (1 − β). Finally, we analyze an (s,S) production inventory system with switching time. Much of the reported work assumes that the switching time is negligible, but this is not the case in several real-life situations.
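For readers unfamiliar with the (s,S) policy, the following is a minimal simulation sketch (not from the thesis): unit Poisson demands deplete the stock, and whenever the level drops to the reorder point s, an order brings it back up to S after an exponential lead time; all rates are illustrative.

```python
import random

# Minimal (s,S) inventory simulation with Poisson demand (rate lam)
# and exponential replenishment lead time (rate mu). Illustrative only.
random.seed(1)
s, S = 2, 10
lam, mu = 1.0, 0.5
level, t, t_end = S, 0.0, 10_000.0
order_pending, lost, served = False, 0, 0

while t < t_end:
    total_rate = lam + (mu if order_pending else 0.0)
    t += random.expovariate(total_rate)
    if order_pending and random.random() < mu / total_rate:
        level, order_pending = S, False          # replenishment arrives, fill to S
    else:                                        # a demand arrives
        if level > 0:
            level -= 1
            served += 1
        else:
            lost += 1                            # stock-out: demand lost
        if level <= s and not order_pending:
            order_pending = True                 # place an order up to S

print(f"fraction of demands lost: {lost / (lost + served):.3f}")
```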
Abstract:
To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal-cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal-cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes.

In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed, and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turnmaster 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter; the optimum machining parameters were obtained from the experiments via the S/N analysis.

Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively evaluate new design solutions in the relevant search spaces in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated using published results from the literature. Optimization methodologies, namely simulated annealing (SA), particle swarm optimization (PSO), a conventional genetic algorithm (CGA), and an improved genetic algorithm (IGA), were applied to optimize the machining parameters for dry turning of SS420. All of these algorithms were tested for efficiency, robustness, and accuracy, and it was observed how they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA, and IGA codes were developed in MATLAB. For each evolutionary algorithm, optimum cutting conditions are provided to achieve a better surface finish.

The computational results using SA clearly demonstrated that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle swarm optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming, collaborative behavior of biological populations; the results show that PSO provides better results and is also more computationally efficient. Comparing the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA; the improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial-population scheme to provide a faster search mechanism.

Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420, arriving at the optimum machining parameters of feed, cutting speed, depth of cut, and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures through which nature optimizes its own systems.
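To give a flavor of the swarm approach, here is a minimal PSO sketch (not the thesis code) minimizing a hypothetical second-order surface-roughness response surface Ra(v, f, d); the model coefficients, parameter bounds, and PSO constants are illustrative placeholders.

```python
import numpy as np

# Minimal particle swarm optimization of a *hypothetical* response-surface
# model Ra(v, f, d): v = cutting speed, f = feed, d = depth of cut.
rng = np.random.default_rng(7)

def Ra(x):  # illustrative quadratic response surface, not fitted to real data
    v, f, d = x.T
    return 2.0 - 0.004 * v + 18.0 * f + 0.5 * d + 1e-5 * v**2 + 40.0 * f**2

lo = np.array([100.0, 0.05, 0.5])   # lower bounds: v (m/min), f (mm/rev), d (mm)
hi = np.array([300.0, 0.30, 2.0])   # upper bounds (assumed)
n, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5

x = rng.uniform(lo, hi, (n, 3))      # particle positions
vel = np.zeros_like(x)               # particle velocities
pbest, pbest_f = x.copy(), Ra(x)     # personal bests
g = pbest[np.argmin(pbest_f)]        # global best position

for _ in range(iters):
    r1, r2 = rng.random((2, n, 3))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, lo, hi)     # keep particles inside the bounds
    f = Ra(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[np.argmin(pbest_f)]

print("optimum (v, f, d):", np.round(g, 3), " Ra:", round(Ra(g[None])[0], 3))
```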
Abstract:
This thesis, entitled 'Stochastic Modelling and Analysis', is divided into six chapters, including this introductory chapter. In the second chapter, we consider an (s,S) inventory model with service, reneging of customers, and finite shortage of items. In the third chapter, we consider an (s,S) inventory system with retrial of customers; arrivals of customers form a Poisson process, and when the inventory level depletes to s due to demands, an order for replenishment is placed. In Chapter 4, we analyze and compare three (s,S) inventory systems with positive service time and retrial of customers; in all these systems, arrivals of customers form a Poisson process and service times are exponentially distributed. In Chapter 5, we analyze and compare three production inventory systems with positive service time and retrial of customers, under the same arrival and service assumptions. In Chapter 6, we consider a PH/PH/1 inventory model with reneging of customers and finite shortage of items.
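As a minimal illustration of the kind of Markovian analysis used throughout (a toy example, not any of the thesis models), the sketch below builds the generator of a simple (s,S) inventory CTMC with Poisson demand and exponential replenishment and solves for its stationary distribution.

```python
import numpy as np

# Toy CTMC for an (s,S) inventory: states are stock levels 0..S.
# Demands occur at rate lam (level -> level-1); an order is outstanding
# exactly when level <= s and matures at rate mu, refilling to S.
lam, mu, s, S = 1.0, 0.5, 2, 6
n = S + 1
Q = np.zeros((n, n))
for i in range(n):
    if i > 0:
        Q[i, i - 1] += lam        # a demand consumes one unit
    if i <= s:
        Q[i, S] += mu             # replenishment refills to S
    Q[i, i] = -Q[i].sum()         # diagonal makes rows sum to zero

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]
print("P(stock-out) =", round(pi[0], 4))
```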
Abstract:
In this thesis we have presented several inventory models of utility. Of these, inventory with retrial of unsatisfied demands and inventory with postponed work are quite recently introduced concepts, the latter being introduced here for the first time. Inventory with service time is relatively new, with only a handful of research works reported. The difficulty encountered in inventory with service, unlike the queueing process, is that even the simplest case needs a two-dimensional process for its description. Only in certain specific cases can we introduce generating functions to solve for the system state distribution. However, numerical procedures can be developed for solving these problems.
Abstract:
Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify the uncertainty in their predictions. Quantification of model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, for a limited number of reaches, are used to assess the uncertainty and sensitivity of the 44 model parameters identified as most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated via land-use-type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash–Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets that simultaneously describe all the observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there are insufficient data to support their many parameters.
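The GLUE idea used here is easy to sketch (an illustrative toy, not the INCA-P setup): sample parameter sets, score each with the Nash–Sutcliffe efficiency, and keep as "behavioural" those exceeding a threshold such as the 0.3 used in this study. The toy model, forcing, and prior below are placeholders.

```python
import numpy as np

# GLUE in miniature: Monte Carlo parameter sampling + Nash-Sutcliffe (NSE)
# behavioural screening. The "model" is a toy linear-store runoff model.
rng = np.random.default_rng(3)
rain = rng.gamma(2.0, 2.0, 365)            # synthetic daily rainfall forcing

def model(k):                              # toy model: outflow = k * storage
    s, q = 0.0, np.empty_like(rain)
    for t, r in enumerate(rain):
        s += r
        q[t] = k * s
        s -= q[t]
    return q

obs = model(0.3) + rng.normal(0, 0.5, 365) # pretend observations (k_true = 0.3)

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

ks = rng.uniform(0.05, 0.95, 2000)         # uniform prior over the parameter
scores = np.array([nse(model(k), obs) for k in ks])
behavioural = ks[scores > 0.3]             # GLUE threshold, as in the study
print(f"{behavioural.size} behavioural sets; k in "
      f"[{behavioural.min():.2f}, {behavioural.max():.2f}]")
```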
Abstract:
An information-processing paradigm in the brain is proposed, instantiated in an artificial neural network using biologically motivated temporal encoding. The network locates, within the external-world stimulus, the target memory, defined by a specific pattern of micro-features. The proposed network is robust and efficient. Akin in operation to stochastic diffusion search, a paradigm of swarm intelligence, it finds the best fit to the memory with linear time complexity. Information multiplexing enables neurons to process knowledge as 'tokens' rather than 'types'. The network illustrates the possible emergence of cognitive processing, such as memory retrieval based on partial matching, from low-level interactions.
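Stochastic diffusion search itself is easy to sketch. Below is a minimal generic SDS locating a model string within a larger search string (illustrative only, not the paper's neural implementation): agents hold hypothesised positions, each tests one random micro-feature (character) per iteration, and inactive agents copy hypotheses from randomly polled active agents or pick new random ones.

```python
import random

# Minimal stochastic diffusion search: find the best-fit position of
# `model` within `search` using partial (single-character) tests.
random.seed(0)
search = "xxxhelqoxxxxhelloxxxx"
model = "hello"
n_agents, iters = 50, 100
max_pos = len(search) - len(model)

hyp = [random.randint(0, max_pos) for _ in range(n_agents)]   # hypotheses
active = [False] * n_agents

for _ in range(iters):
    # Test phase: each agent checks one random micro-feature of its hypothesis.
    for i in range(n_agents):
        j = random.randrange(len(model))
        active[i] = search[hyp[i] + j] == model[j]
    # Diffusion phase: inactive agents poll a random agent; if it is active,
    # copy its hypothesis, otherwise pick a fresh random hypothesis.
    for i in range(n_agents):
        if not active[i]:
            k = random.randrange(n_agents)
            hyp[i] = hyp[k] if active[k] else random.randint(0, max_pos)

best = max(set(hyp), key=hyp.count)   # largest agent cluster = best fit
print("best-fit position:", best, "->", search[best:best + len(model)])
```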
Abstract:
Major research on equity index dynamics has investigated only US indices (usually the S&P 500) and has provided contradictory results. In this paper a clarification and extension of that previous research is given. We find that European equity indices have quite different dynamics from the S&P 500. Each of the European indices considered may be satisfactorily modelled using either an affine model with price and volatility jumps or a GARCH volatility process without jumps. The S&P 500 dynamics are much more difficult to capture in a jump-diffusion framework.
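To make the second model class concrete, here is a minimal GARCH(1,1) return simulation (an illustrative sketch; the parameters are typical textbook values, not estimates from the indices studied).

```python
import numpy as np

# GARCH(1,1): r_t = sigma_t * z_t,
#             sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
# Typical textbook parameters; alpha + beta < 1 gives a stationary variance.
rng = np.random.default_rng(5)
omega, alpha, beta = 1e-6, 0.08, 0.90
n = 2500                                   # roughly 10 years of daily returns

r = np.empty(n)
var = omega / (1 - alpha - beta)           # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(var) * rng.standard_normal()
    var = omega + alpha * r[t] ** 2 + beta * var

ann_vol = r.std() * np.sqrt(252)
print(f"annualised volatility: {ann_vol:.1%}")  # ~11% for these parameters
```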
Abstract:
We consider the relation between so-called continuous localization models (i.e. non-linear stochastic Schrödinger evolutions) and the discrete GRW model of wave-function collapse. The former can be understood as a scaling limit of the GRW process. The proof relies on a stochastic Trotter formula, which is of interest in its own right. Our Trotter formula also allows us to complement results by Holevo and Mora/Rebolledo on the existence theory of stochastic Schrödinger evolutions.
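For orientation, the usual textbook form of such a non-linear stochastic Schrödinger (continuous localization) evolution, for a self-adjoint localization operator A, collapse rate λ, and a standard Wiener process W_t, is the following; this is the generic form, not necessarily the exact equation treated in the paper.

```latex
% Generic continuous-localization (non-linear stochastic Schrodinger)
% equation; textbook form, stated here for orientation only.
d\psi_t = \Big[ -\tfrac{i}{\hbar} H \, dt
         + \sqrt{\lambda}\,\big(A - \langle A \rangle_t\big)\, dW_t
         - \tfrac{\lambda}{2}\,\big(A - \langle A \rangle_t\big)^2\, dt \Big]\,\psi_t ,
\qquad \langle A \rangle_t = \langle \psi_t, A \psi_t \rangle .
```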
Abstract:
We analyze the large-time behavior of a stochastic model for the lay-down of fibers on a moving conveyor belt in the production process of nonwovens. It is shown that under weak conditions this degenerate diffusion process has a unique invariant distribution and is even geometrically ergodic. This generalizes results from previous works [M. Grothaus and A. Klar, SIAM J. Math. Anal., 40 (2008), pp. 968–983; J. Dolbeault et al., arXiv:1201.2156], which concern the case of a stationary conveyor belt and leave the situation of a moving conveyor belt open.
Abstract:
Mathematical models, as instruments for understanding the workings of nature, are a traditional tool of physics, but they also play an ever-increasing role in biology, in the description of fundamental processes as well as of complex systems. In this review, the authors discuss two examples of the application to genetics of group-theoretical methods, the mathematical discipline for a quantitative description of the idea of symmetry. The first appears, in the form of a pseudo-orthogonal (Lorentz-like) symmetry, in the stochastic modelling of what may be regarded as the simplest possible example of a genetic network and, hopefully, a building block for more complicated ones: a single self-interacting or externally regulated gene with only two possible states, 'on' and 'off'. The second is the algebraic approach to the evolution of the genetic code, according to which the current code results from a dynamical symmetry-breaking process, starting out from an initial state of complete symmetry and ending in the presently observed final state of low symmetry. In both cases, symmetry plays a decisive role: in the first, it is a characteristic feature of the dynamics of the gene switch and of its decay to equilibrium, whereas in the second, it provides the guidelines for the evolution of the coding rules.
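As a pointer to what the first example involves, the master equation of a single two-state gene switch, in its generic textbook form with hypothetical switching rates k_on (off to on) and k_off (on to off), reads as follows; the pseudo-orthogonal symmetry discussed by the authors arises in the analysis of such systems.

```latex
% Master equation of a two-state ('on'/'off') gene switch; generic form.
\frac{d}{dt}
\begin{pmatrix} p_{\text{on}} \\ p_{\text{off}} \end{pmatrix}
=
\begin{pmatrix} -k_{\text{off}} & k_{\text{on}} \\
                 k_{\text{off}} & -k_{\text{on}} \end{pmatrix}
\begin{pmatrix} p_{\text{on}} \\ p_{\text{off}} \end{pmatrix},
\qquad
p_{\text{on}}(\infty) = \frac{k_{\text{on}}}{k_{\text{on}} + k_{\text{off}}} ,
```

with relaxation to equilibrium at rate k_on + k_off.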
Abstract:
The regime of environmental flows (EF) must be included as a term of environmental demand in the management of water resources. Even though there are numerous methods for the computation of EF, the criteria applied at different steps of the calculation process are quite subjective, whereas the results are fixed values that must be met by water planners. This study presents a user-friendly tool for assessing the probability that a certain EF scenario complies with the natural regime in a semiarid area in southern Spain. 250 replications of a 25-yr period of different hydrological variables (rainfall, minimum and maximum flows, ...) were obtained at the study site by combining the Monte Carlo technique with local hydrological relationships. Several assumptions are made, such as the independence of annual rainfall from year to year and the variability of occurrence of the meteorological agents, with precipitation as the main source of uncertainty. Inputs to the tool are easily selected from a first menu and comprise measured rainfall data, EF values, and the hydrological relationships for at least a 20-yr period. The outputs are the probabilities of compliance of the different components of the EF over the study period. From these, local optimization can be applied to establish EF components with a certain level of compliance over the study period. Different options for graphical output and analysis of the results are included, in terms of graphs and tables in several formats. This methodology turned out to be a useful tool for implementing an uncertainty analysis within the scope of environmental flows in water management, and it allowed the simulation of the impacts of several water-resource development scenarios at the study site.
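In outline, the probability computation can be sketched as follows (an illustrative Monte Carlo sketch under stated assumptions, not the tool's actual code): annual rainfall is drawn independently year to year, a hypothetical local rainfall-to-minimum-flow relationship converts it to flows, and compliance with an assumed EF value is counted across replications.

```python
import numpy as np

# Monte Carlo sketch of EF compliance: 250 replications of a 25-yr period,
# i.i.d. annual rainfall, and a *hypothetical* power-law relationship between
# annual rainfall and minimum annual flow. All numbers are illustrative.
rng = np.random.default_rng(11)
n_reps, n_years = 250, 25
ef_min_flow = 0.8                                  # assumed EF component (m^3/s)

rain = rng.gamma(4.0, 100.0, (n_reps, n_years))    # annual rainfall (mm), assumed
min_flow = 0.002 * rain ** 1.1                     # hypothetical local relationship
min_flow *= rng.lognormal(0.0, 0.2, rain.shape)    # residual hydrological noise

# Output: probability that the minimum-flow EF component is met in a year,
# estimated over all replications of the study period.
p_comply = (min_flow >= ef_min_flow).mean()
print(f"P(compliance with the minimum-flow EF): {p_comply:.2f}")
```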
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)