950 results for Simulation methods.
Abstract:
Bond's method for ball mill scale-up gives only the mill power draw for a given duty. It is incompatible with computer modelling and simulation techniques, and it may not be applicable to the design of fine grinding ball mills or of ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated against a wide range of full-scale circuit data, so their accuracy is questionable, and some of them also require expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. The procedure uses data from two laboratory tests to determine the parameters of a ball mill model; a set of scale-up criteria then scales up these parameters, and the scaled-up parameters are used to simulate the steady-state performance of full-scale mill circuits. At the end of the simulation, the procedure gives the size distribution, volumetric flowrate and mass flowrate of every stream in the circuit, together with the mill power draw.
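The workflow above (lab test, fitted model parameters, scale-up criterion, steady-state simulation) can be caricatured with a deliberately simplified one-size-class model. Every functional form, name and exponent below is an illustrative assumption, not the published procedure:

```python
import math

# Deliberately simplified one-size-class sketch (all forms and the
# diameter exponent are illustrative assumptions, not the published
# procedure): fit a first-order breakage rate S from a lab batch test,
# scale it with mill diameter, and run a perfectly mixed steady-state
# mill model.

def fit_breakage_rate(t_min, fraction_remaining):
    """Fit S in m(t) = m0 * exp(-S * t) from one lab grind point."""
    return -math.log(fraction_remaining) / t_min

def scale_up(s_lab, d_lab_m, d_full_m, exponent=0.5):
    """Hypothetical scale-up criterion: S grows with mill diameter."""
    return s_lab * (d_full_m / d_lab_m) ** exponent

def steady_state_coarse(feed_coarse, s_full, tau_min):
    """Perfectly mixed mill: first-order disappearance of coarse material."""
    return feed_coarse / (1.0 + s_full * tau_min)

s_lab = fit_breakage_rate(10.0, 0.5)    # 50 % of coarse left after 10 min
s_full = scale_up(s_lab, 0.3, 4.8)      # 0.3 m lab mill -> 4.8 m full mill
coarse_out = steady_state_coarse(0.8, s_full, 8.0)  # 8 min residence time
```

A real implementation would track a full size distribution and all circuit streams; the sketch only shows how fitted lab parameters flow through a scale-up rule into a steady-state mass balance.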
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates are driven by a pre-calculated table, step size control cannot rely on the usual integration-formula error estimates. A step size control scheme suited to the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulations using the fixed time step method are compared with those using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared with a fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size delivers the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
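The step-doubling control described above can be sketched as follows, here applied to a single particle with linear drag under explicit Euler updates. The dynamics, tolerance, and halve/double rule are illustrative assumptions, not the paper's table-driven scheme:

```python
# Minimal sketch of step-doubling step size control (assumed dynamics:
# one particle with gravity and linear drag; explicit Euler updates).
# The error estimate is |one big step - two half steps|.

def accel(v, g=-9.81, c=0.1):
    return g - c * v            # gravity plus linear drag

def euler_step(x, v, dt):
    return x + v * dt, v + accel(v) * dt

def adaptive_step(x, v, dt, tol=1e-4):
    """Advance one step; adjust dt from the step-doubling error."""
    x1, v1 = euler_step(x, v, dt)          # one big step
    xh, vh = euler_step(x, v, dt / 2)      # two small steps
    x2, v2 = euler_step(xh, vh, dt / 2)
    err = abs(x1 - x2) + abs(v1 - v2)
    if err > tol:
        dt /= 2                            # too coarse: shrink
    elif err < tol / 4:
        dt *= 2                            # very accurate: grow
    return x2, v2, dt                      # keep the finer result

x, v, dt = 0.0, 0.0, 0.1                   # particle released from rest
for _ in range(50):
    x, v, dt = adaptive_step(x, v, dt)
```

The step size settles automatically at a value matching the tolerance, which is the behaviour the abstract describes: the required accuracy is obtained on the first run, without hand-tuning a conservative fixed step.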
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult: benchmarking problems have been developed and are available, but comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
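The residual-subspace idea can be sketched with one-dimensional feature subspaces: each error class contributes a known direction to the residual vector, and the class whose subspace best explains the residual is flagged. The feature vectors below are invented for illustration, not derived from ASM 1:

```python
# Hedged sketch of the isolation step: classify a residual vector by
# the fraction of its energy explained by each class's feature
# direction. Feature vectors are illustrative only.

def project_fraction(r, f):
    """Fraction of ||r||^2 explained by the 1-D subspace spanned by f."""
    rf = sum(a * b for a, b in zip(r, f))
    ff = sum(a * a for a in f)
    rr = sum(a * a for a in r)
    return (rf * rf / ff) / rr

features = {                         # hypothetical error classes
    "wrong_stoichiometry": [1.0, 0.0, -1.0],
    "wrong_rate_constant": [0.0, 1.0, 1.0],
}

def isolate(residual):
    return max(features, key=lambda k: project_fraction(residual, features[k]))

print(isolate([0.9, 0.05, -0.85]))   # -> wrong_stoichiometry
```

In the full method the observer generates the residuals and the feature matrices can span multi-dimensional subspaces, but the classification step reduces to the same projection comparison.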
Abstract:
Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K) the timescale for subsequent unimolecular decay of thermalized intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at the elevated temperatures the thermalized intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretized master equation matrix, and is implemented in a high precision environment which makes hitherto unachievable low-temperature modelling a reality.
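The symmetrisation step mentioned above can be illustrated on a toy two-state master equation (rate constants invented for the sketch): detailed balance makes S = P^(-1/2) K P^(1/2) symmetric, which is what allows the discretized matrix to be propagated spectrally with a real spectrum.

```python
import math

# Toy two-well master equation illustrating the symmetrisation idea
# (rate constants are made up). Detailed balance guarantees that
# S = P^(-1/2) K P^(1/2) is symmetric, so K shares its real spectrum.

k12, k21 = 2.0, 0.5                        # well 1 -> 2 and 2 -> 1 rates
peq = (k21 / (k12 + k21), k12 / (k12 + k21))   # equilibrium populations

K = [[-k12, k21],
     [k12, -k21]]
S = [[K[i][j] * math.sqrt(peq[j] / peq[i]) for j in range(2)]
     for i in range(2)]
assert abs(S[0][1] - S[1][0]) < 1e-12      # off-diagonals now equal

def propagate(p0, t):
    """Exact 2-state solution: single relaxation eigenvalue k12 + k21."""
    lam = k12 + k21
    return tuple(pe + (p - pe) * math.exp(-lam * t)
                 for p, pe in zip(p0, peq))

print(propagate((1.0, 0.0), 10.0))         # -> close to equilibrium (0.2, 0.8)
```

For a realistically discretized master equation the matrix is large and the eigendecomposition of S is done numerically (in high precision, per the abstract); the two-state case just makes the symmetry and the relaxation-to-equilibrium behaviour explicit.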
Abstract:
1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are becoming increasingly used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpins both the data-dependent and -independent methods with the use of illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
Abstract:
Petrov-Galerkin methods are known to be versatile techniques for the solution of a wide variety of convection-dispersion transport problems, including those involving steep gradients, but have hitherto received little attention from chemical engineers. We illustrate the technique by means of the well-known problem of simultaneous diffusion and adsorption in a spherical sorbent pellet composed of spherical, non-overlapping microparticles of uniform size, and investigate the uptake dynamics. Solutions to adsorption problems exhibit steep gradients when either macropore diffusion or micropore diffusion controls, and the application of classical numerical methods to such problems can present difficulties. In this paper, a semi-discrete Petrov-Galerkin finite element method for numerically solving adsorption problems with steep gradients in bidisperse solids is presented. The numerical solution was found to match the analytical solution when the adsorption isotherm is linear and the diffusivities are constant. Computed results for the Langmuir isotherm and non-constant diffusivity in the microparticle are evaluated numerically for comparison with results of the fitted-mesh collocation method proposed by Liu and Bhatia (Comput. Chem. Engng. 23 (1999) 933-943). The new method is simple, highly efficient, and well-suited to a variety of adsorption and desorption problems involving steep gradients. (C) 2001 Elsevier Science Ltd. All rights reserved.
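The essence of an upwinded Petrov-Galerkin discretization can be shown on the standard 1-D model problem a*u' = eps*u'' on [0,1] with u(0)=0, u(1)=1, rather than the bidisperse adsorption model (the problem data below are illustrative). With linear elements on a uniform mesh and the classical weighting alpha = coth(Pe) - 1/Pe, the nodal values are exact even when the boundary layer is steep:

```python
import math

# Sketch of an upwinded Petrov-Galerkin scheme for a*u' = eps*u'' on
# [0,1], u(0)=0, u(1)=1 (linear elements, uniform mesh). The optimal
# upwind weight makes the nodal values exact for this model problem.

def petrov_galerkin(a, eps, n):
    h = 1.0 / n
    pe = a * h / (2 * eps)                      # element Peclet number
    alpha = 1.0 / math.tanh(pe) - 1.0 / pe      # optimal upwind weight
    eps_eff = eps + a * alpha * h / 2           # added streamline diffusion
    # Tridiagonal system for interior nodes 1..n-1 (Thomas algorithm).
    lo = [-a / (2 * h) - eps_eff / h**2] * (n - 1)   # sub-diagonal
    di = [2 * eps_eff / h**2] * (n - 1)              # diagonal
    up = [a / (2 * h) - eps_eff / h**2] * (n - 1)    # super-diagonal
    rhs = [0.0] * (n - 1)
    rhs[-1] -= up[-1] * 1.0                     # boundary value u(1) = 1
    for i in range(1, n - 1):                   # forward elimination
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / di[-1]
    for i in range(n - 3, -1, -1):              # back substitution
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return [0.0] + u + [1.0]

def exact(a, eps, x):
    return math.expm1(a * x / eps) / math.expm1(a / eps)

u = petrov_galerkin(a=1.0, eps=0.01, n=10)      # Pe = 5: steep layer
err = max(abs(ui - exact(1.0, 0.01, i / 10))
          for i, ui in enumerate(u))
```

A plain Galerkin (central) scheme oscillates badly on this mesh at Pe = 5; the modified test functions, here equivalent to the extra streamline diffusion in `eps_eff`, are what lets coarse meshes resolve steep gradients without oscillation.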
Abstract:
Surrogate methods for detecting lateral gene transfer are those that do not require inference of phylogenetic trees. Herein I apply four such methods to identify open reading frames (ORFs) in the genome of Escherichia coli K12 that may have arisen by lateral gene transfer. Only two of these methods detect the same ORFs more frequently than expected by chance, whereas several intersections contain many fewer ORFs than expected. Each of the four methods detects a different non-random set of ORFs. The methods may detect lateral ORFs of different relative ages; testing this hypothesis will require rigorous inference of trees. (C) 2001 Federation of European Microbiological Societies. Published by Elsevier Science B.V. All rights reserved.
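One natural way to formalise "detect the same ORFs more frequently than expected by chance" is a hypergeometric overlap test: if two methods independently flag m and n ORFs out of N, the size of their intersection follows a hypergeometric distribution. The counts below are invented for illustration, not the E. coli K12 figures:

```python
import math

# Hypergeometric sketch of a chance-overlap test between two ORF sets.
# All counts are illustrative, not the E. coli K12 data.

def expected_overlap(n_total, m, n):
    return m * n / n_total

def overlap_pmf(n_total, m, n, k):
    """P(exactly k ORFs shared) under independence (hypergeometric)."""
    return (math.comb(m, k) * math.comb(n_total - m, n - k)
            / math.comb(n_total, n))

def p_value_at_least(n_total, m, n, k):
    """P(overlap >= k): small means more overlap than chance predicts."""
    return sum(overlap_pmf(n_total, m, n, j)
               for j in range(k, min(m, n) + 1))

# Say 4288 ORFs, two methods flagging 300 and 250, observed overlap 60.
print(expected_overlap(4288, 300, 250))        # ~17.5 expected by chance
print(p_value_at_least(4288, 300, 250, 60))    # vanishingly small
```

The same one-sided test applied to the lower tail identifies intersections with many fewer ORFs than expected, the other pattern reported in the abstract.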
Abstract:
We developed a general model to assess patient activity within the primary and secondary health-care sectors following a dermatology outpatient consultation. Based on observed variables from the UK teledermatology trial, the model showed that up to 11 doctor-patient interactions occurred before a patient was ultimately discharged from care. In a cohort of 1000 patients, the average number of health-care visits was 2.4 (range 1-11). Simulation analysis suggested that the most important parameter affecting the total number of doctor-patient interactions is patient discharge from care following the initial consultation. This implies that resources should be concentrated in this area. The introduction of teledermatology (either real-time or store-and-forward) changes the values of the model parameters. The model provides a quantitative tool for planning the future provision of dermatology health-care.
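A minimal cohort simulation consistent with the description above can be sketched as follows. The discharge process is assumed geometric (which the paper does not state), and the discharge probability is chosen to reproduce the reported mean of about 2.4 visits rather than taken from the trial data:

```python
import random

# Hedged cohort sketch: after each consultation a patient is discharged
# with probability p_discharge, otherwise returns for a further visit,
# capped at 11 (the maximum observed). p_discharge = 0.42 is chosen to
# give a mean near the reported 2.4 visits; it is not a trial estimate.

def simulate_patient(p_discharge, max_visits=11, rng=random):
    visits = 0
    while visits < max_visits:
        visits += 1
        if rng.random() < p_discharge:
            break                      # discharged from care
    return visits

def mean_visits(n_patients, p_discharge, seed=1):
    rng = random.Random(seed)          # seeded for reproducibility
    return sum(simulate_patient(p_discharge, rng=rng)
               for _ in range(n_patients)) / n_patients

print(mean_visits(1000, 0.42))         # close to the reported average of 2.4
```

Raising `p_discharge`, i.e. discharging more patients after the initial consultation, sharply reduces the mean, which mirrors the sensitivity result the abstract reports.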
Abstract:
Recent progress in the production, purification, and experimental and theoretical investigation of carbon nanotubes for hydrogen storage is reviewed. From the industrial point of view, the chemical vapor deposition process has shown advantages over laser ablation and electric-arc-discharge methods. The ultimate goal in nanotube synthesis should be to gain control over geometrical aspects of nanotubes, such as location and orientation, and the atomic structure of nanotubes, including helicity and diameter. There is currently no effective and simple purification procedure that fulfills all requirements for processing carbon nanotubes. Purification is still the bottleneck for technical applications, especially where large amounts of material are required. Although alkali-metal-doped carbon nanotubes showed high H-2 weight uptake, further investigations indicated that some of this uptake was due to water rather than hydrogen. This discovery indicates a potential source of error in evaluation of the storage capacity of doped carbon nanotubes. Nevertheless, currently available single-wall nanotubes yield a hydrogen uptake value near 4 wt% under moderate pressure and room temperature. A further 50% increase is needed to meet U.S. Department of Energy targets for commercial exploitation. Meeting this target will require combining experimental and theoretical efforts to achieve a full understanding of the adsorption process, so that the uptake can be rationally optimized to commercially attractive levels. Large-scale production and purification of carbon nanotubes and remarkable improvement of H-2 storage capacity in carbon nanotubes represent significant technological and theoretical challenges in the years to come.
Abstract:
Problems associated with the stickiness of food in processing and storage practices, along with its causative factors, are outlined. Fundamental mechanisms that explain why and how food products become sticky are discussed. Methods currently in use for characterizing and overcoming stickiness problems in food processing and storage operations are described. The use of a glass transition temperature-based model, which provides a rational basis for understanding and characterizing the stickiness of many food products, is highlighted.
Abstract:
Dispersal, or the amount of dispersion between an individual's birthplace and that of its offspring, is of great importance in population biology, behavioural ecology and conservation; however, obtaining direct estimates from field data on natural populations can be problematic. The prickly forest skink, Gnypetoscincus queenslandiae, is a rainforest endemic skink from the wet tropics of Australia. Because of its log-dwelling habits and lack of definite nesting sites, a demographic estimate of dispersal distance is difficult to obtain. Neighbourhood size, defined as 4πDσ² (where D is the population density and σ² the mean axial squared parent-offspring dispersal rate), dispersal and density were estimated directly and indirectly for this species using mark-recapture and microsatellite data, respectively, on lizards captured at a local geographical scale of 3 ha. Mark-recapture data gave a dispersal rate of 843 m²/generation (assuming a generation time of 6.5 years), a time-scaled density of 13 635 individuals·generation/km² and, hence, a neighbourhood size of 144 individuals. A genetic method based on the multilocus (10 loci) microsatellite genotypes of individuals and their geographical locations indicated a significant isolation-by-distance pattern, and gave a neighbourhood size of 69 individuals, with a 95% confidence interval between 48 and 184. This translates into a dispersal rate of 404 m²/generation when using the mark-recapture density estimate, or a time-scaled population density of 6520 individuals·generation/km² when using the mark-recapture dispersal rate estimate. The relationship between the two categories of neighbourhood size, dispersal and density estimates, and the reasons for any disparities, are discussed.
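The neighbourhood-size arithmetic above can be checked directly from Nb = 4πDσ², with the density converted from per-km² to per-m² so the units cancel (the function name is illustrative; the figures are those quoted in the abstract):

```python
import math

# Neighbourhood size Nb = 4 * pi * D * sigma^2, with D given in
# individuals*generation per km^2 and sigma^2 in m^2 per generation
# (1 km^2 = 1e6 m^2, so D must be rescaled before multiplying).

def neighbourhood_size(density_per_km2, sigma2_m2):
    d_per_m2 = density_per_km2 / 1e6
    return 4 * math.pi * d_per_m2 * sigma2_m2

print(round(neighbourhood_size(13635, 843)))   # mark-recapture: 144
print(round(neighbourhood_size(13635, 404)))   # genetic sigma^2: 69
```

Running the formula both ways reproduces the two reported neighbourhood sizes, which is also how the abstract back-calculates the genetic dispersal rate of 404 m²/generation from Nb = 69 and the mark-recapture density.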