907 results for Simulation Design


Relevance:

20.00%

Publisher:

Abstract:

There are many methods for the analysis and design of embedded cantilever retaining walls. They involve various simplifications of the pressure distribution to allow calculation of the limiting equilibrium retained height and of the bending moment when the retained height is less than the limiting equilibrium value, i.e. the serviceability case. Recently, a new method for determining the serviceability earth pressure and bending moment has been proposed. This method makes an assumption defining the point of zero net pressure, which implies that the passive pressure is not fully mobilised immediately below the excavation level. The finite element analyses presented in this paper examine the net pressure distribution on walls in which the retained height is less than the limiting equilibrium value. The study shows that for all practical walls, the earth pressure distributions on the front and back of the wall are at their limiting values, K_p and K_a respectively, when the lumped factor of safety F_r is less than or equal to 2.0. A rectilinear net pressure distribution is proposed that is intuitively logical. It produces good predictions of the complete bending moment diagram for walls in the service configuration, and the proposed method gives results that are in excellent agreement with centrifuge model tests. The study shows that the method for determining the serviceability bending moment suggested by Padfield and Mair (1) in CIRIA Report 104 gives excellent predictions of the maximum bending moment in practical cantilever walls. It provides the missing data that have been needed to verify and justify the CIRIA 104 method.
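As an illustration only (the paper's results come from finite element analyses, not this calculation), the shape of a net pressure distribution with fully mobilised limiting pressures can be sketched using Rankine earth pressure coefficients; the soil properties, retained height and embedment depth below are assumed values:

```python
import math

# Illustrative soil and wall parameters (assumed, not from the paper)
phi = 30.0      # soil friction angle, degrees
gamma = 18.0    # soil unit weight, kN/m^3
H = 4.0         # retained height, m
d = 6.0         # embedment depth below excavation level, m

# Rankine earth pressure coefficients
Ka = math.tan(math.radians(45 - phi / 2)) ** 2
Kp = math.tan(math.radians(45 + phi / 2)) ** 2

def net_pressure(z):
    """Net pressure (kPa) at depth z below excavation level, assuming the
    limiting values K_a (behind the wall) and K_p (in front) are fully
    mobilised, as the study indicates for F_r <= 2."""
    active = Ka * gamma * (H + z)   # drives the wall forward
    passive = Kp * gamma * z        # resists in front of the wall
    return active - passive

for z in [0.0, 0.25 * d, 0.5 * d, 0.75 * d, d]:
    print(f"z = {z:4.1f} m  net pressure = {net_pressure(z):8.1f} kPa")
```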

Relevance:

20.00%

Publisher:

Abstract:

Geographical information systems (GIS) coupled to 3D visualisation technology are an emerging tool for urban planning and landscape design applications. The utility of 3D GIS for realistically visualising the built environment and proposed development scenarios is much advocated in the literature. Planners assess the merits of proposed changes using visual impact assessment (VIA). We have used ArcView GIS and the visualisation software PolyTRIM from the University of Toronto Centre for Landscape Research (CLR) to create a 3D scene for the entrance to a university campus. The paper investigates the thesis that facilitating VIA in planning and design requires not only visualisation, but also a structured evaluation technique (Delphi) to arbitrate the decision-making process. (C) 2001 Elsevier Science B.V. All rights reserved.
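The abstract does not give an algorithm for the Delphi step; purely as a hypothetical illustration, panel scores from one evaluation round could be aggregated and checked for consensus as follows (the scenario names, scores and consensus rule are all invented):

```python
from statistics import median

# Hypothetical panel scores (1-9 scale) for two development scenarios after one Delphi round
scores = {
    "scenario_A": [7, 8, 6, 7, 9, 7],
    "scenario_B": [3, 5, 4, 6, 2, 4],
}

def summarise(round_scores, consensus_range=2):
    """Return the median score and a crude consensus flag for each scenario."""
    results = {}
    for name, vals in round_scores.items():
        vals = sorted(vals)
        spread = vals[-2] - vals[1]   # spread ignoring the two most extreme panellists
        results[name] = (median(vals), spread <= consensus_range)
    return results

for name, (med, consensus) in summarise(scores).items():
    print(f"{name}: median = {med}, consensus reached = {consensus}")
```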

Relevance:

20.00%

Publisher:

Abstract:

This study investigates the hierarchy of cytotoxic T lymphocyte (CTL) responses to twelve HLA A2-restricted epitopes from the latent, lytic and structural proteins of Epstein–Barr virus (EBV) in acute infectious mononucleosis and in healthy seropositive donors, and the relative immunogenicity of these epitopes in transgenic mice. Responses to the lytic epitope were uniformly strong in all healthy seropositive individuals and acute infectious mononucleosis donors, while moderate or low responses were observed to the latent and structural epitopes, respectively, in both groups studied. In contrast, when HLA A2/Kb transgenic mice were immunised with these peptide epitopes, CTL responses were observed to all epitopes, with a maximal response to the epitopes within the structural proteins and low to moderate responses to the latent epitopes. This hierarchy of CTL responses in mice was also reflected in an MHC stabilisation analysis. These contrasting CTL responses in humans following natural infection, compared with the immunogenicity of these epitopes and their ability to stabilise MHC, may need to be considered when designing an EBV vaccine.

Relevance:

20.00%

Publisher:

Abstract:

Modelling and simulation studies were carried out at 26 cement clinker grinding circuits, including tube mills, air separators and high pressure grinding rolls (HPGR), in 8 plants. The results reported earlier have shown that tube mills can be modelled as several mills in series, and that the internal partition in tube mills can be modelled as a screen which must retain coarse particles in the first compartment but not impede the flow of drying air. In this work the modelling has been extended to show that the Tromp curve which describes separator (classifier) performance can be modelled in terms of d50(corr), the by-pass, the fish hook and the sharpness of the curve. Also, the high pressure grinding rolls model developed at the Julius Kruttschnitt Mineral Research Centre gives satisfactory predictions using a breakage function derived from impact and compressed bed tests. Simulation studies of a full plant incorporating a tube mill, HPGR and separators showed that the models could successfully predict the performance of another mill working under different conditions. The simulation capability can therefore be used for process optimization and design. (C) 2001 Elsevier Science Ltd. All rights reserved.
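A minimal sketch of a Tromp (partition) curve parameterised by corrected cut size, sharpness and by-pass, using a Whiten-style efficiency curve as a stand-in for the fitted separator models of the paper; the fish-hook term is omitted and all parameter values are assumed:

```python
import math

def tromp(d, d50corr, alpha, bypass):
    """Fraction of feed of size d reporting to the coarse (rejects) stream.

    d50corr : corrected cut size (same units as d)
    alpha   : sharpness of the curve
    bypass  : fraction of fines short-circuiting to the coarse stream
    (A fish-hook could be added with an extra term; it is omitted here.)
    """
    x = d / d50corr
    e_corr = (math.exp(alpha * x) - 1.0) / (math.exp(alpha * x) + math.exp(alpha) - 2.0)
    return bypass + (1.0 - bypass) * e_corr

# Illustrative parameters (assumed): cut size 30 um, sharpness 2.5, by-pass 20%
for d in [5, 10, 20, 30, 50, 80]:   # particle size, micrometres
    print(f"d = {d:3d} um  Tromp value = {tromp(d, d50corr=30.0, alpha=2.5, bypass=0.2):.3f}")
```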

Relevance:

20.00%

Publisher:

Abstract:

Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques, and it may not be applicable to the design of fine grinding ball mills or of ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data, so their accuracy is questionable, and some of these methods also require expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model, and a set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
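The two laboratory tests and the scale-up criteria are not reproduced here; as a generic illustration of the kind of size-reduction model such parameters feed, a first-order batch population-balance grinding simulation is sketched below, with the selection function, breakage distribution, size classes and time step all assumed:

```python
import numpy as np

# Five size classes, coarsest first (illustrative)
n = 5
S = np.array([0.60, 0.45, 0.30, 0.18, 0.0])   # selection (breakage rate) per minute, assumed

# Breakage distribution b[i, j]: fraction of broken size-j material reporting to size i (i > j)
b = np.zeros((n, n))
for j in range(n - 1):
    frac = np.ones(n - 1 - j)
    b[j + 1:, j] = frac / frac.sum()           # crude uniform split into the finer classes

m = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # initial mass fractions: all in the top size
dt, t_end = 0.05, 5.0                          # minutes

# dm_i/dt = -S_i m_i + sum_{j<i} b_ij S_j m_j  (first-order batch grinding)
for _ in range(int(t_end / dt)):
    m = m + dt * (-S * m + b @ (S * m))

print("mass fractions after", t_end, "min:", np.round(m, 3))
```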

Relevance:

20.00%

Publisher:

Abstract:

A new ball mill scale-up procedure is developed. This procedure has been validated using seven sets of full-scale ball mill data. The largest ball mills in these data have diameters (inside liners) of 6.58 m. The procedure can predict the 80% passing size of the circuit product to within +/-6% of the measured value, with a precision of +/-11% (one standard deviation); the re-circulating load to within +/-33% of the mass-balanced value (this error margin is within the uncertainty associated with the determination of the re-circulating load); and the mill power to within +/-5% of the measured value. This procedure is applicable to the design of ball mills which are preceded by autogenous (AG) mills, semi-autogenous (SAG) mills, crushers and flotation circuits. The new procedure is more precise and more accurate than Bond's method for ball mill scale-up. This procedure contains no efficiency correction related to the mill diameter, which suggests that, within the range of mill diameters studied, milling efficiency does not vary with mill diameter. This is in contrast with Bond's equation: Bond claimed that milling efficiency increases with mill diameter. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

The step size determines the accuracy of a discrete element simulation. Because the position and velocity updating calculation uses a pre-calculated table, step size control cannot use the standard integration formulas. A step size control scheme for use with the table-driven velocity and position calculation uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with those using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared to a fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
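A minimal sketch of the step-doubling idea described (compare one big step with two half steps, and halve the step if they disagree); the explicit update, the spring-like force law and the tolerance are stand-ins for the table-driven DEM calculation, not the authors' implementation:

```python
def accel(x):
    """Illustrative force law (linear spring); a DEM code would use contact forces."""
    return -100.0 * x

def step(x, v, dt):
    """One explicit position/velocity update (stand-in for the table-driven update)."""
    a = accel(x)
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + a * dt
    return x_new, v_new

def adaptive_step(x, v, dt, tol=1e-8, dt_max=1e-2):
    """Advance one step, choosing dt by comparing one big step with two half steps."""
    while True:
        x_big, _ = step(x, v, dt)
        x_half, v_half = step(x, v, dt / 2)
        x_two, v_two = step(x_half, v_half, dt / 2)
        if abs(x_big - x_two) <= tol:
            # accept the more accurate two-half-step result and let dt grow a little
            return x_two, v_two, min(dt * 1.5, dt_max)
        dt *= 0.5   # error too large: retry this step with a smaller dt

x, v, dt = 1.0, 0.0, 1e-3
for _ in range(2000):
    x, v, dt = adaptive_step(x, v, dt)
print(f"x = {x:.4f}, v = {v:.4f}, final dt = {dt:.2e}")
```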

Relevance:

20.00%

Publisher:

Abstract:

Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult: although benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors, and identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals, such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
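The observer design itself is not reproduced here; the final isolation step can, however, be illustrated: given a residual vector and a feature matrix per error class, assign the error to the class whose column space leaves the smallest orthogonal remainder. The matrices, class names and residual below are made-up examples, not ASM 1 quantities:

```python
import numpy as np

def distance_to_subspace(r, F):
    """Norm of the part of residual r orthogonal to the column space of F."""
    coeffs, *_ = np.linalg.lstsq(F, r, rcond=None)
    return np.linalg.norm(r - F @ coeffs)

# Made-up feature matrices for three classes of coding errors (columns span each subspace)
feature_matrices = {
    "stoichiometry_error": np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]]),
    "kinetic_rate_error":  np.array([[0.0], [1.0], [1.0], [0.0]]),
    "parameter_typo":      np.array([[0.0], [0.0], [1.0], [1.0]]),
}

r = np.array([0.1, 1.2, 1.1, 0.05])   # residual generated by the observer (made up)

scores = {name: distance_to_subspace(r, F) for name, F in feature_matrices.items()}
print(scores)
print("isolated error class:", min(scores, key=scores.get))
```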

Relevance:

20.00%

Publisher:

Abstract:

The cystine knot structural motif is present in peptides and proteins from a variety of species, including fungi, plants, marine molluscs, insects and spiders. It comprises an embedded ring, formed by two disulfide bonds and their connecting backbone segments, which is threaded by a third disulfide bond. It is invariably associated with nearby beta-sheet structure and appears to be a highly efficient motif for structure stabilisation. Because of this stability it makes an ideal framework for molecular engineering applications. In this review we summarise the main structural features of the cystine knot motif, focussing on toxin molecules containing either the inhibitor cystine knot or the cyclic cystine knot. Peptides containing these motifs are 26-48 residues long and include ion channel blockers and haemolytic agents, as well as molecules having antiviral and antibacterial activities. The stability of peptide toxins containing the cystine knot motif, their range of bioactivities and their unique structural scaffold can be harnessed for molecular engineering applications and in drug design. Applications of cystine knot molecules for the treatment of pain, and their potential use in antiviral and antibacterial applications, are described. (C) 2000 Elsevier Science Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K) the timescale for subsequent unimolecular decay of thermalised intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at the elevated temperatures the thermalised intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretised master equation matrix, and is implemented in a high-precision environment which makes hitherto unachievable low-temperature modelling a reality.
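A minimal sketch of spectral propagation of a symmetrised rate matrix, using a toy 3-state system satisfying detailed balance rather than the CH2 + C2H2 master equation; the symmetrisation with the square root of the equilibrium distribution and the analytic time evolution follow the standard approach, while all numerical values are invented:

```python
import numpy as np

# Toy equilibrium populations and a symmetric "flux" matrix A (detailed balance),
# standing in for the real multi-well master equation of the paper.
p_eq = np.array([0.5, 0.3, 0.2])
A = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])           # A[i, j] = K[i, j] * p_eq[j] (symmetric)

K = A / p_eq                              # off-diagonal rate coefficients K[i, j], j -> i
np.fill_diagonal(K, -K.sum(axis=0))       # columns sum to zero (population conserving)

# Symmetrise: M = S^-1 K S with S = diag(sqrt(p_eq)); M is symmetric by detailed balance
S = np.diag(np.sqrt(p_eq))
S_inv = np.diag(1.0 / np.sqrt(p_eq))
M = S_inv @ K @ S

w, V = np.linalg.eigh(M)                  # real eigenvalues, orthonormal eigenvectors

def propagate(p0, t):
    """Population vector at time t: p(t) = S V exp(w t) V^T S^-1 p(0)."""
    return S @ V @ (np.exp(w * t) * (V.T @ (S_inv @ p0)))

p0 = np.array([1.0, 0.0, 0.0])            # start entirely in state 0
for t in [0.0, 0.5, 2.0, 10.0]:
    print(t, np.round(propagate(p0, t), 4))
```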

Relevance:

20.00%

Publisher:

Abstract:

The QU-GENE Computing Cluster (QCC) is a hardware and software solution to the automation and speedup of large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC automates the management of the distribution of components of the simulation experiments among the networked single-processor computers to achieve the speedup.
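QCC's own scheduler is not described in the abstract; as a generic illustration, a factorial set of treatment combinations can be farmed out to worker processes like this, with Python's multiprocessing pool standing in for the networked single-processor computers and the treatment factors invented for the example:

```python
from itertools import product
from multiprocessing import Pool

def run_experiment(treatment):
    """Hypothetical stand-in for one QU-GENE simulation run."""
    heritability, pop_size, selection = treatment
    # ... a real driver would launch the simulation engine here ...
    return (treatment, heritability * pop_size * selection)  # dummy "result"

if __name__ == "__main__":
    # Factorial combination of treatment levels (illustrative values)
    treatments = list(product([0.2, 0.5, 0.8],      # heritability
                              [100, 500],            # population size
                              [0.05, 0.10, 0.20]))   # selection proportion
    with Pool(processes=4) as pool:                  # stand-in for cluster nodes
        results = pool.map(run_experiment, treatments)
    print(f"completed {len(results)} of {len(treatments)} runs")
```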

Relevance:

20.00%

Publisher:

Abstract:

The vacancy solution theory of adsorption is re-formulated here through the mass-action law, and placed in a convenient framework permitting the development of thermodynamically consistent isotherms. It is shown that both the multisite Langmuir model and the classical vacancy solution theory expression are special cases of the more general approach when the Flory-Huggins activity coefficient model is used, with the former being the thermodynamically consistent result. The improved vacancy solution theory approach is further extended here to heterogeneous adsorbents by considering the pore-width dependent potential along with a pore size distribution. However, application of the model to numerous hydrocarbons as well as other adsorptives on microporous activated carbons shows that the multisite model has difficulty in the presence of a pore size distribution, because pores of different sizes can have different numbers of adsorbed layers and therefore different site occupancies. On the other hand, use of the classical vacancy solution theory expression for the local isotherm leads to a good simultaneous fit of the data, while yielding a site diameter of about 0.257 nm, consistent with that expected for the potential well in aromatic rings on carbon pore surfaces. It is argued that the classical approach is successful because the Flory-Huggins term effectively represents adsorbate interactions in disguise. When used together with the ideal adsorbed solution theory, the heterogeneous vacancy solution theory successfully predicts binary adsorption equilibria, and is found to perform better than the multisite Langmuir as well as the heterogeneous Langmuir model. (C) 2001 Elsevier Science Ltd. All rights reserved.
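For illustration, the multisite Langmuir model mentioned above (the thermodynamically consistent special case) can be evaluated by solving its implicit form b*p = theta / (1 - theta)**a for the fractional loading theta; the affinity, site number and saturation capacity below are assumed values, not parameters fitted in the paper:

```python
def multisite_langmuir_loading(p, b, a, tol=1e-10):
    """Solve b*p = theta / (1 - theta)**a for theta in (0, 1) by bisection."""
    lo, hi = 0.0, 1.0 - 1e-12
    f = lambda th: th / (1.0 - th) ** a - b * p   # monotonically increasing in theta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: affinity b (1/kPa), a sites per molecule, saturation capacity q_sat
b, a, q_sat = 0.05, 2.0, 4.0   # q_sat in mol/kg, assumed
for p in [1, 10, 50, 200, 1000]:   # pressure, kPa
    theta = multisite_langmuir_loading(p, b, a)
    print(f"p = {p:5d} kPa  theta = {theta:.3f}  loading = {q_sat * theta:.3f} mol/kg")
```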