188 results for Metals - Formability - Simulation methods
Abstract:
In this paper we discuss implicit Taylor methods for stiff Itô stochastic differential equations. Based on the relationship between Itô stochastic integrals and backward stochastic integrals, we introduce three implicit Taylor methods: the implicit Euler-Taylor method with strong order 0.5, the implicit Milstein-Taylor method with strong order 1.0, and the implicit Taylor method with strong order 1.5. The mean-square stability properties of the implicit Euler-Taylor and Milstein-Taylor methods are much better than those of the corresponding semi-implicit Euler and Milstein methods, and these two implicit methods can be used to solve stochastic differential equations which are stiff in both the deterministic and the stochastic components. Numerical results are reported to show the convergence and stability properties of these three implicit Taylor methods. The stability analysis and numerical results show that the implicit Euler-Taylor and Milstein-Taylor methods are very promising methods for stiff stochastic differential equations.
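The paper's Euler-Taylor method is built from backward stochastic integrals, which the abstract does not reproduce. As a rough illustration of why implicitness helps with a stiff drift, here is a minimal drift-implicit Euler-Maruyama sketch on an assumed scalar linear test equation dX = aX dt + bX dW; the coefficients and the test problem are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Assumed stiff linear test SDE (Ito): dX = a*X dt + b*X dW -- not the paper's example.
a, b = -50.0, 1.0

def implicit_euler_maruyama(x0, T, n, rng):
    """Drift-implicit Euler step: X_{k+1} = X_k + a*X_{k+1}*h + b*X_k*dW."""
    h = T / n
    x = x0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        rhs = x + b * x * dW
        # The linear implicit equation solves in closed form; a nonlinear drift
        # would need a Newton or fixed-point solve at this line.
        x = rhs / (1.0 - a * h)
    return x

print(implicit_euler_maruyama(1.0, 1.0, 1000, np.random.default_rng(0)))
```

Unlike the paper's fully implicit Euler-Taylor scheme, this variant leaves the noise term explicit, so it only addresses stiffness in the deterministic component.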
Abstract:
We establish existence of solutions for a finite difference approximation to y'' = f(x, y, y') on [0, 1], subject to nonlinear two-point Sturm-Liouville boundary conditions of the form g_i(y(i), y'(i)) = 0, i = 0, 1, assuming f satisfies one-sided growth bounds with respect to y'.
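The abstract gives no concrete f or g_i, so the sketch below only shows one standard way to assemble such a finite difference system, with an invented right-hand side and invented boundary functions, and solve it with a Newton-type iteration (scipy's fsolve). Every function here is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import fsolve

# Invented instance of y'' = f(x, y, y') with nonlinear two-point BCs.
def f(x, y, yp):  return y**3 + np.sin(x) * yp     # assumed right-hand side
def g0(y, yp):    return y + yp**3 - 1.0           # assumed g0(y(0), y'(0)) = 0
def g1(y, yp):    return y - 0.5 * yp              # assumed g1(y(1), y'(1)) = 0

def residual(Y, N):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    r = np.empty(N + 1)
    # interior nodes: centred second differences
    for i in range(1, N):
        yp = (Y[i + 1] - Y[i - 1]) / (2 * h)
        r[i] = (Y[i + 1] - 2 * Y[i] + Y[i - 1]) / h**2 - f(x[i], Y[i], yp)
    # boundary rows: one-sided difference approximations to y'
    r[0] = g0(Y[0], (Y[1] - Y[0]) / h)
    r[N] = g1(Y[N], (Y[N] - Y[N - 1]) / h)
    return r

N = 50
Y = fsolve(residual, np.zeros(N + 1), args=(N,))
print(Y[:5])
```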
Abstract:
In this paper we discuss implicit methods based on stiffly accurate Runge-Kutta methods and splitting techniques for solving Stratonovich stochastic differential equations (SDEs). Two splitting techniques are used in this paper: the balanced splitting technique and the deterministic splitting technique. We construct a two-stage implicit Runge-Kutta method with strong order 1.0 which is corrected twice and requires no update. The stability properties and numerical results show that this approach is suitable for solving stiff SDEs.
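The paper's two-stage stiffly accurate scheme and its splitting corrections are not spelled out in the abstract. As a generic stand-in, here is a fully implicit midpoint rule, which is consistent with the Stratonovich interpretation, applied to an assumed stiff linear test equation; the coefficients and solver choice are illustrative, not the paper's construction.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed stiff linear Stratonovich test SDE: dX = a*X dt + b*X o dW.
a, b = -20.0, 0.5

def midpoint_step(x, h, dW):
    """Implicit midpoint rule (Stratonovich-consistent):
    X_{n+1} = X_n + h*a*Xm + b*Xm*dW, with Xm = (X_n + X_{n+1}) / 2."""
    res = lambda z: z - x - (h * a + b * dW) * 0.5 * (x + z)
    return fsolve(res, x)[0]

rng = np.random.default_rng(1)
h, x = 1e-2, 1.0
for _ in range(100):
    x = midpoint_step(x, h, rng.normal(0.0, np.sqrt(h)))
print(x)
```

Because both drift and diffusion appear implicitly, this kind of rule can damp stiffness in both components, which is the behaviour the abstract claims for its corrected Runge-Kutta approach.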
Abstract:
A simulation of competitively primed allele-specific DNA amplification has been constructed and its behavior examined. This has shown that when the ratio of the amount of homoduplex misprime product to the total amount of amplimer is low, it increases by approximately one-fourth of the mispriming frequency with each doubling of the total amount of amplimer. When the ratio is high and reverse mispriming becomes significant, it asymptotes toward a value
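To make the bookkeeping concrete, here is a deliberately crude per-cycle toy model of correct versus misprimed amplimer. The mispriming parameter, the update rule, and the cycle count are all invented, and this toy is not expected to reproduce the paper's quantitative one-fourth result; it only shows the kind of state a simulation like this tracks.

```python
# Invented per-cycle bookkeeping: C = correct amplimer, M = homoduplex misprime product.
mu = 1e-3        # assumed mispriming frequency per correct template per cycle
C, M = 1.0, 0.0

for cycle in range(30):
    # each cycle the products roughly double; a fraction mu of the copies made
    # from correct template misprimes and joins the misprime pool
    C, M = C + C * (1.0 - mu), M + (C * mu + M)
    print(f"cycle {cycle + 1:2d}: misprime fraction = {M / (C + M):.3e}")
```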
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data. Their accuracy is therefore questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
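The abstract names the model and scale-up criteria without stating them, so no faithful implementation is possible from this text alone. The sketch below is only a generic steady-state population balance for a perfectly mixed mill, with invented breakage rates standing in for the lab-fitted parameters and a single invented scale-up factor, to show where fitted parameters and scale-up criteria would enter such a simulation.

```python
import numpy as np

# Generic steady-state population balance for a perfectly mixed mill,
#   (I + tau*(S - B*S)) p = f,
# where S holds first-order breakage rates, B distributes broken mass to finer
# size classes, tau is residence time, f is the feed. All numbers are assumed.
n = 5
S_lab = np.array([2.0, 1.2, 0.7, 0.3, 0.0])        # 1/min, assumed lab-fitted rates
B = np.zeros((n, n))
for j in range(n - 1):
    B[j + 1:, j] = 1.0 / (n - 1 - j)               # broken mass spread over finer classes

S_full = S_lab * 1.8                               # hypothetical scale-up criterion
tau = 4.0                                          # min, assumed residence time
f = np.array([0.5, 0.3, 0.1, 0.1, 0.0])            # assumed feed size distribution

D = np.diag(S_full)
p = np.linalg.solve(np.eye(n) + tau * (D - B @ D), f)
print(p, p.sum())                                  # product size distribution; mass conserved
```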
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updating calculation uses a pre-calculated table, step size control cannot use the usual integration-formula error estimates. A step size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the conditions. Simulation using the fixed time step method is compared with simulation using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared to the fixed step) depends on the particular problem. For a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times.
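The table-driven update itself is not available here, so the sketch below substitutes a plain semi-implicit Euler update for a single particle and shows only the step-doubling control: advance once with h, twice with h/2, and use the difference as the error estimate. The tolerance, safety factor, and test force are assumptions.

```python
# Placeholder explicit update standing in for the table-driven calculation.
def step(x, v, h, accel):
    v = v + h * accel(x)        # semi-implicit (symplectic) Euler
    x = x + h * v
    return x, v

def adaptive_step(x, v, h, accel, tol=1e-6, safety=0.9):
    """Advance once with h and twice with h/2; the difference estimates the error."""
    while True:
        x1, v1 = step(x, v, h, accel)                    # one big step
        xm, vm = step(x, v, 0.5 * h, accel)              # two small steps
        x2, v2 = step(xm, vm, 0.5 * h, accel)
        err = max(abs(x1 - x2), abs(v1 - v2))
        if err <= tol:
            # accept; grow h for a first-order update (local error ~ h^2)
            return x2, v2, safety * h * (tol / max(err, 1e-300)) ** 0.5
        h *= 0.5                                         # reject and retry

x, v, h = 1.0, 0.0, 1e-2
for _ in range(5):
    x, v, h = adaptive_step(x, v, h, lambda q: -1e4 * q)  # assumed stiff spring force
print(x, v, h)
```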
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code models themselves using the simulation packages available to them, and quality assurance of such models is difficult. While benchmarking problems have been developed and are available, the comparison of simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model. In this paper a newly coded model was verified against a known implementation. The method is also applicable to simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
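The observer design and feature matrices depend on the model structure and are not given in the abstract. The snippet below illustrates only the final isolation step with invented feature matrices: a residual vector is attributed to the error class whose subspace it lies closest to, measured by the orthogonal projection residual.

```python
import numpy as np

def isolate(r, feature_matrices):
    """Attribute residual r to the class whose column space leaves the
    smallest projection residual (all feature matrices here are invented)."""
    best, best_err = None, np.inf
    for name, F in feature_matrices.items():
        P = F @ np.linalg.pinv(F)          # orthogonal projector onto span(F)
        err = np.linalg.norm(r - P @ r)    # distance from r to the subspace
        if err < best_err:
            best, best_err = name, err
    return best

rng = np.random.default_rng(0)
classes = {"kinetics": rng.normal(size=(6, 2)),
           "stoichiometry": rng.normal(size=(6, 2))}
# residual generated from the "kinetics" subspace plus small noise
r = classes["kinetics"] @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=6)
print(isolate(r, classes))                 # expected: "kinetics"
```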
Abstract:
Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K) the timescale for subsequent unimolecular decay of thermalized intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at the elevated temperatures the thermalized intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretized master equation matrix, and is implemented in a high precision environment which makes hitherto unachievable low-temperature modelling a reality.
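The paper's discretized master equation matrix is not reproduced in the abstract, but the symmetrisation-plus-eigendecomposition structure it names is generic. The sketch below shows that trick on a tiny invented three-state rate matrix obeying detailed balance; the sizes, rates, and equilibrium populations are all assumed.

```python
import numpy as np

# Invented 3-state rate matrix obeying detailed balance w.r.t. pi.
pi = np.array([0.5, 0.3, 0.2])             # assumed equilibrium populations
K = np.zeros((3, 3))
for i, j, kf in [(0, 1, 1.0), (0, 2, 0.2), (1, 2, 0.5)]:
    K[j, i] = kf                           # assumed forward rate i -> j
    K[i, j] = kf * pi[i] / pi[j]           # reverse rate from detailed balance
K -= np.diag(K.sum(axis=0))                # columns sum to zero (probability conserved)

s = np.sqrt(pi)
Ks = K * s[None, :] / s[:, None]           # symmetrised matrix D^{-1/2} K D^{1/2}
w, U = np.linalg.eigh(Ks)                  # real eigenpairs from a symmetric solver

def propagate(p0, t):
    """p(t) = D^{1/2} U exp(w t) U^T D^{-1/2} p0."""
    return s * (U @ (np.exp(w * t) * (U.T @ (p0 / s))))

print(propagate(np.array([1.0, 0.0, 0.0]), 50.0))   # relaxes toward pi
```

Working with the symmetric Ks keeps the eigenproblem well conditioned, which is one reason spectral propagation pairs naturally with the high-precision environment the abstract mentions.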
Abstract:
1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are increasingly used as efficient ways of designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early-phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpin both the data-dependent and -independent methods, with illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
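As a toy illustration of the contrast in point 2 (the trial, effect size, and sample size here are all invented), the snippet below estimates the power of a two-arm comparison both by simulation (data-dependent) and from the closed-form normal approximation (data-independent); the two numbers should roughly agree.

```python
import numpy as np
from scipy import stats

delta, sigma, n, alpha = 0.5, 1.0, 64, 0.05   # assumed design parameters
rng = np.random.default_rng(42)

# data-dependent: simulate many trials and count significant results
hits = 0
for _ in range(2000):
    a = rng.normal(0.0, sigma, n)
    b = rng.normal(delta, sigma, n)
    hits += stats.ttest_ind(a, b).pvalue < alpha
print("simulated power:", hits / 2000)

# data-independent: analytic power of the two-sample z-test
se = sigma * np.sqrt(2.0 / n)
z_alpha = stats.norm.ppf(1.0 - alpha / 2.0)
power = stats.norm.cdf(delta / se - z_alpha) + stats.norm.cdf(-delta / se - z_alpha)
print("analytic power:", power)
```

The simulation generalises to designs with no closed form, while the analytic route is essentially free to evaluate, which is the complementary roles the abstract argues for.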