899 results for Particle Trajectory Computation
Abstract:
Numerical methods related to Krylov subspaces are widely used in large sparse numerical linear algebra. Vectors in these subspaces are manipulated via their representation on orthonormal bases. On serial computers, the Arnoldi method is nowadays considered a reliable technique for constructing such bases. However, although easily parallelizable, its communication requirements make it less scalable than expected. In this work we examine alternative methods aimed at overcoming this drawback. Since they retrieve upon completion the same information as Arnoldi's algorithm, they enable us to design a wide family of stable and scalable Krylov approximation methods for various parallel environments. We present timing results obtained from their implementation on two distributed-memory multiprocessor supercomputers: the Intel Paragon and the IBM Scalable POWERparallel SP2. (C) 1997 by John Wiley & Sons, Ltd.
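The Arnoldi construction that these alternatives are measured against can be sketched in a few lines. The following minimal NumPy version (illustrative only, not the paper's parallel implementation) builds an orthonormal Krylov basis V and the associated Hessenberg matrix H using modified Gram-Schmidt:

```python
import numpy as np

def arnoldi(A, v0, m):
    """Build an orthonormal basis V of the Krylov subspace
    span{v0, A v0, ..., A^(m-1) v0} and the (m+1) x m Hessenberg
    matrix H satisfying A V[:, :m] = V H."""
    n = len(v0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:         # breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

The inner Gram-Schmidt loop is the communication bottleneck the abstract refers to: each dot product is a global reduction on a distributed-memory machine.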
Abstract:
A straightforward method is proposed for computing the magnetic field produced by a circular coil that contains a large number of turns wound onto a solenoid of rectangular cross section. The coil is thus approximated by a circular ring carrying a continuous, constant current density, which is very close to the real situation when wire of rectangular cross section is used. All that is required is to evaluate two functions, which are defined as integrals of periodic quantities; this is done accurately and efficiently using trapezoidal-rule quadrature. The solution can be obtained so rapidly that this procedure is ideally suited for use in stochastic optimization. An example is given in which this approach is combined with a simulated annealing routine to optimize shielded profile coils for NMR.
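As an illustration of why trapezoidal quadrature is so effective here, the sketch below evaluates the Biot-Savart angular integral for a single circular current loop (a simpler stand-in for the paper's two coil functions, which the abstract does not specify). Because the integrand is smooth and 2*pi-periodic, equally spaced trapezoidal sampling converges spectrally fast:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def bz_loop(a, I, rho, z, n=64):
    """Axial magnetic-field component of a circular loop of radius `a`
    carrying current `I`, at cylindrical point (rho, z), via the
    Biot-Savart angular integral sampled at n equally spaced angles."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    num = a * (a - rho * np.cos(phi))
    den = (a * a + rho * rho + z * z - 2.0 * a * rho * np.cos(phi)) ** 1.5
    # Trapezoidal rule on a full period reduces to mean value times 2*pi.
    return MU0 * I / (4.0 * np.pi) * (2.0 * np.pi / n) * np.sum(num / den)
```

On axis (rho = 0) this reproduces the analytic result mu0*I*a^2 / (2*(a^2 + z^2)^(3/2)), and each evaluation costs only n trigonometric samples, which is what makes embedding it in a simulated annealing loop practical.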
Abstract:
Objective. To investigate the effect of processing-induced particle alignment on the fracture behavior of four multiphase dental ceramics (one porcelain, two glass-ceramics and a glass-infiltrated-alumina composite). Methods. Disks (12 mm diameter x 1.1 mm thick) and bars (3 mm x 4 mm x 20 mm) of each material were processed according to the manufacturers' instructions, machined and polished. Fracture toughness (K(IC)) was determined by the indentation strength method, using 3-point bending and biaxial flexure fixtures for the fracture of bars and disks, respectively. Microstructural and fractographic analyses were performed with scanning electron microscopy, energy dispersive spectroscopy and X-ray diffraction. Results. The isotropic microstructure of the porcelain and the leucite-based glass-ceramic resulted in similar fracture toughness values regardless of the specimen geometry. On the other hand, materials containing second-phase particles with high aspect ratio (lithium disilicate glass-ceramic and glass-infiltrated-alumina composite) showed lower fracture toughness for disk specimens compared to bars. For the lithium disilicate glass-ceramic disks, it was demonstrated that particle alignment during the heat-pressing procedure resulted in an unfavorable pattern that created weak microstructural paths during the biaxial test. For the glass-infiltrated-alumina composite, the microstructural analysis showed that the large alumina platelets tended to align their large surfaces perpendicular to the direction of particle deposition during slip casting of green preforms. Significance. The fracture toughness of dental ceramics with anisotropic microstructure should be determined by means of biaxial testing, since it results in lower values. (C) 2009 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
Abstract:
Seven hundred and nineteen samples from throughout the Cainozoic section in CRP-3 were analysed by a Malvern Mastersizer laser particle analyser, in order to derive a stratigraphic distribution of grain-size parameters downhole. Entropy analysis of these data (using the method of Woolfe and Michibayashi, 1995) allowed recognition of four groups of samples, each group characterised by a distinctive grain-size distribution. Group 1, which shows a multi-modal distribution, corresponds to mudrocks, interbedded mudrock/sandstone facies, muddy sandstones and diamictites. Group 2, with a sand-grade mode but showing wide dispersion of particle size, corresponds to muddy sandstones, a few cleaner sandstones and some conglomerates. Group 3 and Group 4 are also sand-dominated, with better grain-size sorting, and correspond to clean, well-washed sandstones of varying mean grain-size (medium and fine modes, respectively). The downhole disappearance of Group 1, and dominance of Groups 3 and 4, reflect a concomitant change from mudrock- and diamictite-rich lithology to a section dominated by clean, well-washed sandstones with minor conglomerates. Progressive downhole increases in percentage sand and principal mode also reflect these changes. Significant shifts in grain-size parameters and entropy group membership were noted across sequence boundaries and seismic reflectors, as recognised in other studies.
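The per-sample quantity underlying this kind of entropy analysis can be illustrated with a minimal sketch: each sample's grain-size distribution is reduced to a Shannon entropy, low for well-sorted (single-mode) sandstones and high for multi-modal diamictite-like mixtures. The grouping step of the Woolfe and Michibayashi method is omitted here; only the per-sample entropy is shown:

```python
import numpy as np

def grain_size_entropy(fractions):
    """Shannon entropy of one sample's grain-size distribution.
    `fractions` are weight percentages in each size class: a single
    sharp mode gives low entropy (well-sorted sand), while a
    multi-modal spread gives high entropy (diamictite-like)."""
    p = np.asarray(fractions, dtype=float)
    p = p / p.sum()                   # normalize to a probability vector
    p = p[p > 0]                      # convention: 0 * log(0) = 0
    return -np.sum(p * np.log(p))
```

Samples can then be clustered on this scalar (together with mode and sorting statistics) to recover groups like the four described above.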
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step-size control cannot rely on the usual integration formulas. A step-size control scheme for use with the table-driven velocity and position calculation instead uses the difference between the result of one big step and that of two small steps. This variable-time-step method automatically chooses a suitable step size for each particle at each step according to the local conditions. Simulations using a fixed time step are compared with simulations using the variable time step. The difference in computation time for the same accuracy using a variable step size (compared to the fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
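The step-doubling error estimate described above can be sketched as follows. The leapfrog-style update stands in for the paper's table-driven update, whose details the abstract does not give, and the tolerance and step bounds are illustrative:

```python
def step_with_error_control(x, v, a_func, dt, tol, dt_min=1e-6, dt_max=1e-2):
    """One adaptive position/velocity update for a single 1-D particle.
    The local error is estimated as in the abstract: the result of one
    big step of size dt is compared with that of two half steps, and dt
    is shrunk or grown to keep the difference below `tol`."""
    def update(x, v, h):
        # semi-implicit (symplectic) Euler update; a stand-in for the
        # paper's table-driven calculation
        v_new = v + h * a_func(x)
        x_new = x + h * v_new
        return x_new, v_new

    while True:
        x_big, _ = update(x, v, dt)                 # one big step
        x_h, v_h = update(x, v, dt / 2)             # two half steps
        x_small, v_small = update(x_h, v_h, dt / 2)
        err = abs(x_big - x_small)                  # local error estimate
        if err <= tol or dt <= dt_min:
            break
        dt = max(dt / 2, dt_min)                    # too inaccurate: retry
    if err < tol / 4 and dt < dt_max:
        dt = min(2 * dt, dt_max)                    # very accurate: grow dt
    return x_small, v_small, dt
```

Each particle carries its own dt, so stiff contacts are resolved finely while free-flying particles take large steps, which is the mechanism behind the "required accuracy on the first run" claim.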
Abstract:
A generalised model for the prediction of single char particle gasification dynamics, accounting for multi-component mass transfer with chemical reaction, heat transfer, as well as structure evolution and peripheral fragmentation, is developed in this paper. Maxwell-Stefan analysis is uniquely applied to both micro- and macropores within the framework of the dusty-gas model to account for the bidisperse nature of the char, which differs significantly from conventional models based on a single pore type. The peripheral fragmentation and random-pore correlation incorporated into the model enable prediction of structure/reactivity relationships. The occurrence of chemical reaction within the boundary layer reported by Biggs and Agarwal (Chem. Eng. Sci. 52 (1997) 941) has been confirmed through an analysis of the CO/CO2 product ratio obtained from model simulations. However, it is also quantitatively observed that the significance of boundary layer reaction reduces notably with the reduction of oxygen concentration in the flue gas, operational pressure and film thickness. Computations have also shown that in the presence of diffusional gradients peripheral fragmentation occurs in the early stages on the surface, after which conversion quickens significantly due to small particle size. Results of the early commencement of peripheral fragmentation at relatively low overall conversion, obtained from a large number of simulations, agree well with experimental observations reported by Feng and Bhatia (Energy & Fuels 14 (2000) 297). Comprehensive analysis of simulation results is carried out based on well-accepted physical principles to rationalise model prediction. (C) 2001 Elsevier Science Ltd. All rights reserved.
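The random-pore correlation mentioned above is, in its usual Bhatia-Perlmutter form, a rate law that can be integrated directly. The sketch below uses forward Euler with illustrative values of the rate constant k and structural parameter psi, not values fitted by the paper:

```python
import numpy as np

def rpm_conversion(k, psi, t_end, dt=1e-3):
    """Integrate the random-pore-model rate law
        dX/dt = k * (1 - X) * sqrt(1 - psi * ln(1 - X))
    with forward Euler, returning (time, conversion) pairs.  The
    sqrt factor models pore growth and coalescence: for psi > 2 the
    reactive surface area, and hence the rate, peaks at intermediate
    conversion rather than at X = 0."""
    X, t, history = 0.0, 0.0, []
    while t < t_end and X < 0.999:
        rate = k * (1.0 - X) * np.sqrt(1.0 - psi * np.log(1.0 - X))
        X += dt * rate
        t += dt
        history.append((t, X))
    return history
```

A full char model like the one in the abstract couples this intrinsic kinetics to the dusty-gas transport equations; the rate law alone already reproduces the qualitative structure/reactivity trend.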
Abstract:
Quantum feedback can stabilize a two-level atom against decoherence (spontaneous emission), putting it into an arbitrary (specified) pure state. This requires perfect homodyne detection of the atomic emission, and instantaneous feedback. Inefficient detection was considered previously by two of us. Here we allow for a non-zero delay time tau in the feedback circuit. Because a two-level atom is a non-linear optical system, an analytical solution is not possible. However, quantum trajectories allow a simple numerical simulation of the resulting non-Markovian process. We find the effect of the time delay to be qualitatively similar to that of inefficient detection. The solution of the non-Markovian quantum trajectory will not remain fixed, so that the time-averaged state will be mixed, not pure. In the case where one tries to stabilize the atom in the excited state, an approximate analytical solution to the quantum trajectory is possible. The result, that the purity (P = 2Tr[rho^2] - 1) of the average state is given by P = 1 - 4*gamma*tau (where gamma is the spontaneous emission rate), is found to agree very well with the numerical results. (C) 2001 Elsevier Science B.V. All rights reserved.
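The purity measure quoted in the abstract is straightforward to evaluate. The sketch below computes P = 2Tr[rho^2] - 1 for a qubit, using a hypothetical time-averaged state with a small ground-state admixture eps as a stand-in for the gamma*tau-dependent mixing:

```python
import numpy as np

def purity(rho):
    """P = 2 Tr[rho^2] - 1 for a qubit density matrix rho:
    1 for any pure state, 0 for the maximally mixed state."""
    return 2.0 * np.real(np.trace(rho @ rho)) - 1.0

# Pure excited state |e><e| (the feedback target in the abstract).
excited = np.array([[1.0, 0.0], [0.0, 0.0]])
ground = np.array([[0.0, 0.0], [0.0, 1.0]])

# Hypothetical time-averaged state of an imperfectly stabilized atom:
# mostly excited, with a small ground-state admixture eps (illustrative).
eps = 0.1
avg = (1.0 - eps) * excited + eps * ground
```

For this diagonal mixture P = (1 - 2*eps)^2, so any residual mixing from the feedback delay shows up directly as a purity deficit, mirroring the P = 1 - 4*gamma*tau result.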
Abstract:
Colorimetric analysis of roadway dust is currently a method for monitoring the incombustible content of mine roadways within Australian underground coal mines. To test the accuracy of this method, and to eliminate errors of judgement introduced by human operators in the analysis procedure, a number of samples were tested using scanning software to determine absolute greyscale values. High variability and unpredictability of results were noted during this testing, indicating that colorimetric testing is sensitive to parameters within the mine that are not currently reproduced in the preparation of reference samples. This was linked to the dependence of colour on particle surface area, and hence also to the size distribution of coal particles within the mine environment. (C) 2001 Elsevier Science Ltd. All rights reserved.