901 results for stochastic simulation method
Abstract:
We give reasons why demographic parameters such as survival and reproduction rates are often modelled well in stochastic population simulation using beta distributions. In practice, it is frequently expected that these parameters will be correlated, for example with survival rates for all age classes tending to be high or low in the same year. We therefore discuss a method for producing correlated beta random variables by transforming correlated normal random variables, and show how it can be applied in practice by means of a simple example. We also note how the same approach can be used to produce correlated uniform, triangular, and exponential random variables.
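As a concrete illustration of the transformation described, here is a minimal sketch: draw correlated standard normals, map them to uniforms with the normal CDF, then to betas with the beta inverse CDF. The correlation matrix, beta shape parameters, and sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm, beta

rng = np.random.default_rng(0)
corr = np.array([[1.0, 0.8], [0.8, 1.0]])   # target normal-scale correlation (assumed)
a, b = [2.0, 5.0], [5.0, 2.0]               # beta shape parameters per variable (assumed)

z = rng.multivariate_normal(mean=np.zeros(2), cov=corr, size=10_000)
u = norm.cdf(z)                              # correlated uniforms
x = beta.ppf(u, a, b)                        # correlated beta variates

print(np.corrcoef(x, rowvar=False))          # induced correlation on the beta scale
```

Note that the correlation induced on the beta scale is slightly attenuated relative to the normal-scale correlation, which is why in practice the input correlation may need adjusting upward to hit a target.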
Abstract:
The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small, and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte Carlo sampling procedure which simulates 'intra-modal' non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly non-equilibrium conditions. We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
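The core of MCM as summarised here is to compute a reaction rate from local macroscopic cell properties and enforce the corresponding number of reaction events among the cell's simulator particles. A schematic sketch follows; the rate function, cell quantities, and particle weighting F_num are illustrative assumptions, and the paper's actual estimator draws on the sampled non-equilibrium distribution functions rather than this simple form.

```python
import math
import random

def rate_coefficient(T):
    """Any tabulated or fitted k(T); MCM is not restricted to Arrhenius form."""
    return 1.0e-18 * math.sqrt(T) * math.exp(-59500.0 / T)  # m^3/s, illustrative

def reactions_in_cell(n_A, n_B, T, volume, dt, F_num):
    """Expected number of reaction events among simulator particles this step."""
    expected = rate_coefficient(T) * n_A * n_B * volume * dt / F_num
    events = int(expected)
    if random.random() < expected - events:   # stochastic rounding of the remainder
        events += 1
    return events
```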
Abstract:
Numerical experiments using a finite difference method were carried out to determine the motion of axisymmetric Taylor vortices for narrow-gap Taylor vortex flow. When a pressure gradient is imposed on the flow the vortices are observed to move with an axial speed of 1.16 ± 0.005 times the mean axial flow velocity. The method of Brenner was used to calculate the long-time axial spread of material in the flow. For flows where there is no pressure gradient, the axial dispersion scales with the square root of the molecular diffusion, in agreement with the results of Rosenbluth et al. for high Peclet number dispersion in spatially periodic flows with a roll structure. When a pressure gradient is imposed the dispersion increases by an amount approximately equal to 6.5 × 10⁻⁴ W̄²d²/D_m, where W̄ is the average axial velocity in the annulus and D_m is the molecular diffusivity, analogous to Taylor dispersion for laminar flow in an empty tube.
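As a quick illustration of the quoted scaling, the enhancement can be evaluated directly; all numbers below are assumed round values, not data from the paper.

```python
# Illustrative evaluation of the dispersion enhancement quoted above,
# Delta_D ≈ 6.5e-4 * Wbar**2 * d**2 / D_m. All values are assumptions.
Wbar = 0.01    # mean axial velocity, m/s (assumed)
d = 1.0e-3     # gap width, m (assumed)
D_m = 1.0e-9   # molecular diffusivity, m^2/s (assumed)

delta_D = 6.5e-4 * Wbar**2 * d**2 / D_m
print(f"dispersion enhancement ≈ {delta_D:.3e} m^2/s")   # ≈ 6.5e-5 m^2/s
```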
Abstract:
In quantum measurement theory it is necessary to show how a quantum source conditions a classical stochastic record of measured results. We discuss mesoscopic conductance using quantum stochastic calculus to elucidate the quantum nature of the measurement taking place in these systems. To illustrate the method we derive the current fluctuations in a two-terminal mesoscopic circuit with two tunnel barriers containing a single quasi-bound state in the well. The method enables us to focus on either the incoming/outgoing Fermi fields in the leads, or on the irreversible dynamics of the well state itself. We show an equivalence between the approach of Büttiker and the Fermi quantum stochastic calculus for mesoscopic systems.
Abstract:
Realistic time frames in which management decisions are made often preclude the completion of the detailed analyses necessary for conservation planning. Under these circumstances, efficient alternatives may assist in approximating the results of more thorough studies that require extensive resources and time. We outline a set of concepts and formulas that may be used in lieu of detailed population viability analyses and habitat modeling exercises to estimate the protected areas required to provide desirable conservation outcomes for a suite of threatened plant species. We used expert judgment of parameters and assessment of a population size that results in a specified quasi-extinction risk based on simple dynamic models. The area required to support a population of this size is adjusted to take into account deterministic and stochastic human influences, including small-scale disturbance, deterministic trends such as habitat loss, and changes in population density through processes such as predation and competition. We set targets for different disturbance regimes and geographic regions. We applied our methods to Banksia cuneata, Boronia keysii, and Parsonsia dorrigoensis, resulting in target areas for conservation of 1102, 733, and 1084 ha, respectively. These results provide guidance on target areas and priorities for conservation strategies.
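A minimal sketch of the quasi-extinction idea underlying such targets: project a population with stochastic growth and report the fraction of trajectories falling below a quasi-extinction threshold. The growth parameters, threshold, and horizon here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def quasi_extinction_risk(n0, mean_r=0.0, sd_r=0.3, threshold=50,
                          years=50, reps=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = np.full(reps, float(n0))
    hit = np.zeros(reps, dtype=bool)
    for _ in range(years):
        n *= np.exp(rng.normal(mean_r, sd_r, size=reps))   # stochastic growth
        hit |= n < threshold                               # record any dip below threshold
    return hit.mean()

# Scan for the smallest initial population meeting, say, a 5% risk ceiling:
for n0 in (100, 500, 1000, 5000):
    print(n0, quasi_extinction_risk(n0))
```

The population size found this way is then converted to an area requirement via density, which the paper adjusts further for disturbance and deterministic trends.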
Abstract:
1. A model of the population dynamics of Banksia ornata was developed, using stochastic dynamic programming (a state-dependent decision-making tool), to determine optimal fire management strategies that incorporate trade-offs between biodiversity conservation and fuel reduction. 2. The modelled population of B. ornata was described by its age and density, and was exposed to the risk of unplanned fires and stochastic variation in germination success. 3. For a given population in each year, three management strategies were considered: (i) lighting a prescribed fire; (ii) controlling the incidence of unplanned fire; (iii) doing nothing. 4. The optimal management strategy depended on the state of the B. ornata population, with the time since the last fire (age of the population) being the most important variable. Lighting a prescribed fire at an age of less than 30 years was only optimal when the density of seedlings after a fire was low (<100 plants ha⁻¹) or when there were benefits of maintaining a low fuel load by using more frequent fire. 5. Because the cost of management was assumed to be negligible (relative to the value of the persistence of the population), the do-nothing option was never the optimal strategy, although lighting prescribed fires had only marginal benefits when the mean interval between unplanned fires was less than 20-30 years.
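A toy backward-induction sketch of the kind of stochastic dynamic program described above: state = years since last fire, actions = do nothing versus prescribed burn, and unplanned fires reset the state with some probability. The persistence reward, fire probability, and horizon are illustrative assumptions, not the paper's parameterisation.

```python
import numpy as np

MAX_AGE, HORIZON, P_WILDFIRE = 60, 100, 1 / 25.0

def persistence_reward(age):
    # Toy shape: very young and very old stands carry seed-bank risk (assumed).
    return 1.0 - np.exp(-age / 10.0) * 0.8 - max(0, age - 40) * 0.01

V = np.zeros(MAX_AGE + 1)                          # terminal value
policy = np.empty((HORIZON, MAX_AGE + 1), dtype=object)
for t in reversed(range(HORIZON)):
    V_new = np.empty_like(V)
    for age in range(MAX_AGE + 1):
        nxt = min(age + 1, MAX_AGE)
        # Do nothing: wildfire resets age with prob P_WILDFIRE, else stand ages.
        q_nothing = persistence_reward(age) + (
            P_WILDFIRE * V[0] + (1 - P_WILDFIRE) * V[nxt])
        # Prescribed burn: deterministic reset (management cost assumed negligible).
        q_burn = persistence_reward(age) + V[0]
        V_new[age], policy[t, age] = max(
            (q_nothing, "nothing"), (q_burn, "burn"))
    V = V_new

print(policy[0, :35:5])   # optimal action by stand age at the start of the horizon
```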
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporating stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed.
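A minimal sketch of one structured-stochasticity idea from the review: temporally autocorrelated (AR(1)) errors around a deterministic growth prediction, instead of independent noise. The autocorrelation phi, noise scale, and growth function are illustrative assumptions.

```python
import numpy as np

def simulate_growth(years=30, phi=0.6, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    eps = 0.0
    diameter, path = 10.0, []
    for t in range(years):
        increment = 0.5 * np.exp(-t / 40.0)        # deterministic mean growth (assumed)
        eps = phi * eps + rng.normal(0.0, sigma)   # AR(1) structured error
        diameter += max(0.0, increment + eps)
        path.append(diameter)
    return path

print(simulate_growth()[-1])
```

With phi = 0, this collapses to the unstructured random component the review criticises; phi > 0 makes a tree that grew unexpectedly fast this year likely to do so again next year.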
Abstract:
Numerical methods are used to simulate double-diffusion driven convective pore-fluid flow and rock alteration in three-dimensional fluid-saturated geological fault zones. The double diffusion is caused by the combination of a positive upward temperature gradient and a positive downward salinity concentration gradient within the fault zone, which is assumed to be more permeable than its surrounding rocks. To ensure the physical meaningfulness of the numerical solutions, the numerical method used in this study is validated against a benchmark problem for which the analytical solution for the critical Rayleigh number of the system is available. The theoretical critical Rayleigh number of a three-dimensional fluid-saturated geological fault zone system can be used to judge whether double-diffusion driven convective pore-fluid flow can take place within the system. Once the possibility of triggering this convection has been validated theoretically for the numerical model, the corresponding numerical solutions for the convective flow and temperature are coupled directly with a geochemical system. Through numerical simulation of the coupled convective fluid flow, heat transfer, mass transport and chemical reactions, we investigate the effect of the double-diffusion driven convective pore-fluid flow on rock alteration, which is the direct consequence of mineral redistribution through dissolution, transport and precipitation, within the fault zone system.
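An illustrative onset check of the kind described: compute a porous-medium thermal Rayleigh number and compare it against a critical value. The property values below are assumed round numbers, and 4π² is the classical Horton-Rogers-Lapwood critical value for a horizontal porous layer; the fault-zone geometry in the paper has its own critical Rayleigh number, and the full double-diffusive criterion also involves the salinity gradient.

```python
import math

rho0, g, beta_T = 1000.0, 9.81, 2.1e-4   # fluid density, gravity, thermal expansivity (assumed)
dT, K, H = 100.0, 1e-14, 5000.0          # temperature difference, permeability, layer height (assumed)
mu, kappa = 1e-3, 1e-6                   # dynamic viscosity, thermal diffusivity (assumed)

# Darcy (porous-medium) thermal Rayleigh number.
Ra = rho0 * g * beta_T * dT * K * H / (mu * kappa)
Ra_crit = 4 * math.pi ** 2               # classical horizontal-layer value
print(f"Ra = {Ra:.1f}, critical = {Ra_crit:.1f}, convection possible: {Ra > Ra_crit}")
```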
Abstract:
In this paper we discuss implicit Taylor methods for stiff Ito stochastic differential equations. Based on the relationship between Ito stochastic integrals and backward stochastic integrals, we introduce three implicit Taylor methods: the implicit Euler-Taylor method with strong order 0.5, the implicit Milstein-Taylor method with strong order 1.0 and the implicit Taylor method with strong order 1.5. The mean-square stability properties of the implicit Euler-Taylor and Milstein-Taylor methods are much better than those of the corresponding semi-implicit Euler and Milstein methods and these two implicit methods can be used to solve stochastic differential equations which are stiff in both the deterministic and the stochastic components. Numerical results are reported to show the convergence properties and the stability properties of these three implicit Taylor methods. The stability analysis and numerical results show that the implicit Euler-Taylor and Milstein-Taylor methods are very promising methods for stiff stochastic differential equations.
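The abstract contrasts its fully implicit Taylor methods with semi-implicit schemes. As a point of reference, here is a minimal sketch of the standard drift-implicit (semi-implicit) Euler-Maruyama step, which the paper compares against, applied to a stiff linear test SDE; the paper's implicit Euler-Taylor method adds correction terms derived from backward stochastic integrals and is not reproduced here. Parameters are illustrative.

```python
import numpy as np

def semi_implicit_euler(a=-50.0, b=1.0, x0=1.0, T=1.0, n=200, seed=0):
    """Drift-implicit Euler-Maruyama for dX = a*X dt + b*X dW."""
    rng = np.random.default_rng(seed)
    h = T / n
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        # Implicit in the drift, explicit in the diffusion:
        #   x_new = x + a*x_new*h + b*x*dW  =>  solve linearly for x_new.
        x = (x + b * x * dW) / (1.0 - a * h)
    return x

print(semi_implicit_euler())
```

For stiff drift (a strongly negative here), the implicit treatment removes the step-size restriction an explicit Euler step would face; the paper's point is that stiffness in the stochastic component additionally requires implicitness in the diffusion terms.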
Abstract:
In this paper we discuss implicit methods based on stiffly accurate Runge-Kutta methods and splitting techniques for solving Stratonovich stochastic differential equations (SDEs). Two splitting techniques, the balanced splitting technique and the deterministic splitting technique, are used in this paper. We construct a two-stage implicit Runge-Kutta method with strong order 1.0 which is corrected twice and requires no update. The stability properties and numerical results show that this approach is suitable for solving stiff SDEs.
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates are driven by a pre-calculated table, step size cannot be controlled through the usual integration-formula error estimates. A step size control scheme suited to the table-driven calculation instead uses the difference between the result of one large step and that of two half steps. This variable time step method automatically chooses a suitable step size for each particle at each step according to local conditions. Simulations using fixed and variable time steps are compared. The difference in computation time for the same accuracy depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size achieves the required accuracy on the first run, whereas a fixed step size may require several runs to verify the simulation accuracy, or a conservative step size that results in longer run times.
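A minimal sketch of the step-doubling control described above: advance a particle once with step dt and again with two steps of dt/2, then use the difference as the error estimate for choosing the next step size. The force law, tolerance, and update rule are illustrative stand-ins for the paper's table-driven position/velocity calculation.

```python
def accel(x, v):
    return -100.0 * x - 0.5 * v              # damped spring force (assumed)

def step(x, v, dt):
    a = accel(x, v)
    return x + v * dt + 0.5 * a * dt**2, v + a * dt

def adaptive_step(x, v, dt, tol=1e-8, safety=0.9):
    x_big, v_big = step(x, v, dt)             # one big step
    x_h, v_h = step(x, v, dt / 2)             # two half steps
    x_two, v_two = step(x_h, v_h, dt / 2)
    err = max(abs(x_big - x_two), abs(v_big - v_two))
    dt_next = dt * safety * (tol / max(err, 1e-15)) ** 0.5
    dt_next = min(max(dt_next, 1e-6), 2 * dt) # clamp growth and shrinkage
    return x_two, v_two, dt_next              # keep the more accurate result

x, v, dt = 1.0, 0.0, 1e-3
for _ in range(1000):
    x, v, dt = adaptive_step(x, v, dt)
print(x, v, dt)
```

In a discrete element code this logic would run per particle, so each particle carries its own dt, as the abstract describes.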
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, there are many people who need to code models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors. Identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
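A minimal sketch of the subspace-based isolation step described above: project the observer residual onto each error class's feature subspace and attribute the error to the class whose subspace best explains it. The feature matrices and residual here are random illustrative stand-ins, not quantities from the ASM1 case study.

```python
import numpy as np

def isolate(residual, feature_matrices):
    """Return the index of the error class whose feature subspace fits best."""
    best, best_frac = None, -1.0
    for i, F in enumerate(feature_matrices):
        Q, _ = np.linalg.qr(F)                    # orthonormal basis of span(F)
        proj = Q @ (Q.T @ residual)               # projection onto the subspace
        frac = np.linalg.norm(proj) / np.linalg.norm(residual)
        if frac > best_frac:
            best, best_frac = i, frac
    return best, best_frac

rng = np.random.default_rng(0)
F1, F2 = rng.normal(size=(6, 2)), rng.normal(size=(6, 2))
r = F2 @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=6)  # error from class 2
print(isolate(r, [F1, F2]))   # -> (1, ~1.0): residual lies almost entirely in span(F2)
```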
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
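A minimal sketch of power estimation by simulation in the spirit of this study: generate phenotypes under an assumed one-major-gene-plus-polygenes model, test for the major gene, and report the rejection rate as power. The effect size, sample size, and the simple correlation-test stand-in are illustrative assumptions, not the JSA likelihood machinery itself.

```python
import numpy as np
from scipy import stats

def power_one_major_gene(n=200, effect=0.5, h2_poly=0.3, reps=1000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        geno = rng.binomial(2, 0.5, size=n)              # F2 major-gene dosages
        poly = rng.normal(0, np.sqrt(h2_poly), size=n)   # polygenic component
        env = rng.normal(0, np.sqrt(1 - h2_poly), size=n)
        pheno = effect * geno + poly + env
        # Stand-in test: does phenotype associate with major-gene dosage?
        r, p = stats.pearsonr(geno, pheno)
        rejections += p < 0.05
    return rejections / reps

print(power_one_major_gene())
```

Varying n, effect, and h2_poly over a grid reproduces the qualitative pattern the abstract reports: power collapses for complex models with low heritability and small populations.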
Abstract:
A new method to extract the MOSFET threshold voltage V_T by measuring the gate-to-substrate capacitance C_gb of the transistor is presented. Unlike existing extraction methods based on I-V data, the measurement of C_gb does not require the drain current to flow between drain and source, thus eliminating the effects of the source and drain series resistance R_S/D while retaining a symmetrical potential profile across the channel. Experimental and simulation results on devices of different sizes are presented to validate the proposed method.