24 results for Large-Eddy Simulation
in University of Queensland eSpace - Australia
Abstract:
Large-eddy simulation is used to predict heat transfer in the separated and reattached flow regions downstream of a backward-facing step. Simulations were carried out at a Reynolds number of 28 000 (based on the step height and the upstream centreline velocity) with a channel expansion ratio of 1.25. The Prandtl number was 0.71. Two subgrid-scale models were tested, namely the dynamic eddy-viscosity, eddy-diffusivity model and the dynamic mixed model. Both models showed good overall agreement with available experimental data. The simulations indicated that the peak in heat-transfer coefficient occurs slightly upstream of the mean reattachment location, in agreement with experimental data. The results of these simulations have been analysed to discover the mechanisms that cause this phenomenon. The peak in heat-transfer coefficient shows a direct correlation with the peak in wall shear-stress fluctuations. It is conjectured that the peak in these fluctuations is caused by an impingement mechanism, in which large eddies, originating in the shear layer, impact the wall just upstream of the mean reattachment location. These eddies cause a 'downwash', which increases the local heat-transfer coefficient by bringing cold fluid from above the shear layer towards the wall.
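For reference, the two dimensionless groups quoted above have their standard definitions, with h the step height, U_c the upstream centreline velocity, nu the kinematic viscosity and alpha the thermal diffusivity (standard relations, not taken from the paper):

    \mathrm{Re} = \frac{U_c\, h}{\nu} = 28\,000, \qquad \mathrm{Pr} = \frac{\nu}{\alpha} = 0.71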
Abstract:
CFD simulations of the 75 mm hydrocyclone of Hsieh (1988) have been conducted using Fluent™. The simulations used three-dimensional body-fitted grids and were two-phase simulations in which the air core was resolved using the mixture (Manninen et al., 1996) and VOF (Hirt and Nichols, 1981) models. Velocity predictions from large eddy simulations (LES), using the Smagorinsky-Lilly subgrid-scale model (Smagorinsky, 1963; Lilly, 1966), and from RANS simulations using the differential Reynolds stress turbulence model (DRSM; Launder et al., 1975), were compared with Hsieh's experimental velocity data. The LES simulations gave very good agreement with Hsieh's data but required very fine grids to predict the velocities correctly at the bottom of the apex. The DRSM/RANS simulations under-predicted tangential velocities, and there was little difference between the velocity predictions using the linear (Launder, 1989) and quadratic (Speziale et al., 1991) pressure-strain models. Velocity predictions using the DRSM turbulence model with the linear pressure-strain model could be improved by adjusting the pressure-strain model constants.
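For orientation, the Smagorinsky-Lilly closure referred to above models the subgrid-scale eddy viscosity from the resolved strain rate; this is the standard form of the model, quoted here rather than any detail from the paper:

    \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},

where Delta is the filter width, S-bar_ij the resolved strain-rate tensor and C_s the Smagorinsky constant.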
Abstract:
Numerical simulations of turbulence-driven flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air-core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. It is possible that the large eddy simulation (LES) turbulence model combined with the mixture multi-phase model can be used to predict the air/slurry interface accurately, although the LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared with that measured by gamma-ray tomography, in particular over-predicting medium concentrations near the wall. Further work investigated the accurate prediction of axial segregation of magnetite using the LES turbulence model together with the multi-phase mixture model and viscosity corrections according to the feed particle loading factor. The addition of lift forces and the viscosity correction improved the predictions, especially near the wall. Predicted density profiles are very close to the gamma-ray tomography data, showing a clear density drop near the wall. The effect of the size distribution of the magnetite has been studied in detail. It is interesting to note that the ultra-fine magnetite sizes (i.e. 2 and 7 µm) are distributed uniformly throughout the cyclone; as the size of the magnetite increases, more segregation occurs close to the wall. The cut size (d50) of the magnetite segregation is 32 µm, which is expected with a superfine magnetite feed size distribution. At higher feed densities the agreement between the [Dungilson, 1999; Wood, J.C., 1990. A performance model for coal-washing dense medium cyclones, Ph.D. Thesis, JKMRC, University of Queensland] correlations and the CFD is reasonably good, but the overflow density is lower than the model predictions. It is believed that the excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density.
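The strong size dependence of the segregation is consistent with simple settling scaling. As a rough indicator only (the mixture model used in the paper computes a slip velocity rather than free settling), the Stokes terminal velocity of a particle of diameter d grows with d squared:

    v_s = \frac{(\rho_p - \rho_f)\, g\, d^2}{18\,\mu},

so 2-7 µm magnetite settles one to two orders of magnitude more slowly than ~30 µm magnetite and remains nearly uniformly distributed.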
Abstract:
A recently developed whole-of-surface electroplating technique was used to obtain mass-transfer rates in the separated flow region of a stepped rotating cylinder electrode. These data are compared with previously reported mass-transfer rates obtained with a patch electrode. It was found that the two methods yield different results: at lower Reynolds numbers, the mass-transfer rate enhancement was noticeably higher for the whole-of-surface electrode than for the patch electrode. The location of the peak mass transfer behind the step, as measured with a patch electrode, was reported to be independent of the Reynolds number in previous studies, whereas the whole-of-surface electrode shows a definite Reynolds number dependence. Large eddy simulation results for the recirculating region behind a step are used in this work to show that this difference in behavior is related to the existence of a much thinner fluid layer at the wall for which the velocity is a linear function of distance from the wall. Consequently, the diffusion layer no longer lies well within a laminar sublayer. It is concluded that the patch electrode responds to the wall shear stress for smooth-wall flow as well as for the disturbed flow region behind the step. When the whole of the surface is electro-active, the response is to mass transfer even when this is not a sole function of wall shear stress. The results demonstrate that the choice of the mass-transfer measurement technique in corrosion studies can have a significant effect on the empirical results obtained.
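The asserted link between patch-electrode response and wall shear stress follows from the classical Lévêque analysis, which applies when the concentration boundary layer sits inside a region of linear velocity; this is textbook scaling quoted for orientation, not a result of the paper:

    k \;\approx\; C \left( \frac{s\, D^2}{x} \right)^{1/3},

where s = tau_w/mu is the wall shear rate, D the diffusivity, x the distance along the electrode and C an O(1) constant (about 0.81 for the length-averaged classical solution). When the linear-velocity layer becomes thinner than the diffusion layer, as behind the step, this proportionality between k and tau_w^(1/3) breaks down.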
Abstract:
The rate of generation of fluctuations with respect to the scalar values conditioned on the mixture fraction, which significantly affects turbulent nonpremixed combustion processes, is examined. Simulation of this rate by a major mixing model is investigated, and the derived equations can assist in selecting the model parameters so that the level of conditional fluctuations is better reproduced by the models. A more general formulation of the multiple mapping conditioning (MMC) model that distinguishes the reference and conditioning variables is suggested. This formulation can be viewed as a methodology for enforcing certain desired conditional properties onto conventional mixing models. Examples of constructing consistent MMC models with dissipation and velocity conditioning, as well as of combining MMC with large eddy simulations (LES), are also provided.
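For readers unfamiliar with the terminology, the conditional fluctuations in question are deviations of a reactive scalar Y from its mean conditioned on the mixture fraction xi; these are the standard conditional-moment definitions, not notation specific to this paper:

    Q(\eta) = \langle Y \mid \xi = \eta \rangle, \qquad g(\eta) = \langle (Y - Q(\eta))^2 \mid \xi = \eta \rangle,

and the paper's concern is the rate at which a mixing model generates the conditional variance g(eta).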
Abstract:
Experimental and theoretical studies have shown the importance of stochastic processes in genetic regulatory networks and cellular processes. Cellular networks and genetic circuits often involve small numbers of key proteins such as transcription factors and signaling proteins. In recent years stochastic models have been used successfully for studying noise in biological pathways, and stochastic modelling of biological systems has become a very important research field in computational biology. One of the challenging problems in this field is the reduction of the huge computing time in stochastic simulations. Based on the system of the mitogen-activated protein kinase cascade that is activated by epidermal growth factor, this work gives a parallel implementation using OpenMP and parallelism across the simulation. Special attention is paid to the independence of the random numbers generated in parallel computing, which is a key criterion for the success of stochastic simulations. Numerical results indicate that parallel computers can be used as an efficient tool for simulating the dynamics of large-scale genetic regulatory networks and cellular processes.
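As a minimal sketch of "parallelism across the simulation" with independent per-replicate random streams — the structure the abstract describes, not the authors' code; a birth-death reaction pair stands in for the MAPK cascade:

    // Compile with: g++ -std=c++17 -O2 -fopenmp ssa_parallel.cpp
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int replicates = 1000;                 // independent SSA runs
        const double k_birth = 10.0, k_death = 0.1, t_end = 50.0;
        std::vector<int> final_count(replicates);

        // Parallelism across the simulation: each replicate is independent,
        // so the loop over replicates is trivially parallel.
        #pragma omp parallel for schedule(dynamic)
        for (int r = 0; r < replicates; ++r) {
            // Each replicate gets its own distinctly seeded generator so the
            // streams used by different threads stay independent.
            std::seed_seq seq{12345, r};
            std::mt19937_64 rng(seq);
            std::uniform_real_distribution<double> u(0.0, 1.0);

            double t = 0.0;
            int x = 0;                               // molecule copy number
            while (t < t_end) {
                const double a1 = k_birth;           // propensity: birth
                const double a2 = k_death * x;       // propensity: death
                const double a0 = a1 + a2;
                t += -std::log(1.0 - u(rng)) / a0;   // exponential waiting time
                if (u(rng) * a0 < a1) ++x; else --x; // choose which reaction fires
            }
            final_count[r] = x;
        }

        double mean = 0.0;
        for (int v : final_count) mean += v;
        std::printf("mean copy number at t_end: %f\n", mean / replicates);
        return 0;
    }

seed_seq is a simple way to decorrelate the streams; production codes typically use counter-based or jump-ahead generators for stronger guarantees.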
Abstract:
Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking into account these aspects was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample sizes (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.
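One common way to set the statistical thresholds described above (a generic formulation, since the abstract does not state the authors' exact statistic): for individual i with multilocus genotype g_i, compute the genotype likelihood

    \Lambda_i = \log_{10} L(g_i \mid \text{home population}),

and flag i as a candidate F0 immigrant when Lambda_i falls below the alpha-quantile of Lambda values computed for genotypes Monte Carlo resampled from the home population. The paper's contribution is a resampling scheme that preserves immigrant-derived linkage disequilibrium and the sampling variance of the data when building that null distribution.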
Abstract:
In most magnetic resonance imaging (MRI) systems, pulsed magnetic gradient fields induce eddy currents in the conducting structures of the superconducting magnet. The eddy currents induced in structures within the cryostat are particularly problematic as they are characterized by long time constants by virtue of the low resistivity of the conductors. This paper presents a three-dimensional (3-D) finite-difference time-domain (FDTD) scheme in cylindrical coordinates for eddy-current calculation in conductors. This model is intended to be part of a complete FDTD model of an MRI system including all RF and low-frequency field generating units and electrical models of the patient. The singularity apparent in the governing equations is removed by using a series expansion method and the conductor-air boundary condition is handled using a variant of the surface impedance concept. The numerical difficulty due to the asymmetry of Maxwell equations for low-frequency eddy-current problems is circumvented by taking advantage of the known penetration behavior of the eddy-current fields. A perfectly matched layer absorbing boundary condition in 3-D cylindrical coordinates is also incorporated. The numerical method has been verified against analytical solutions for simple cases. Finally, the algorithm is illustrated by modeling a pulsed field gradient coil system within an MRI magnet system. The results demonstrate that the proposed FDTD scheme can be used to calculate large-scale eddy-current problems in materials with high conductivity at low frequencies.
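The "known penetration behavior" used to sidestep the low-frequency asymmetry is the textbook skin effect (quoted here for orientation, not from the paper): a time-harmonic field entering a conductor of conductivity sigma and permeability mu at angular frequency omega decays over the skin depth

    \delta = \sqrt{\frac{2}{\mu \sigma \omega}},

and a surface-impedance treatment of the conductor-air boundary is natural when delta is small compared with the size of the conducting structure.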
Abstract:
The Lattice Solid Model has been used successfully as a virtual laboratory to simulate the fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that in order to make the next step towards more realistic experiments it will be necessary to use models containing a significantly larger number of particles than current models, and those simulations will therefore require a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
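For reference, the quoted parallel efficiency is the usual speedup-per-processor ratio (standard definitions, not specific to this paper):

    S(p) = \frac{T(1)}{T(p)}, \qquad E(p) = \frac{S(p)}{p},

so E of about 0.8 on p processors means the run completes roughly 0.8p times faster than on a single processor.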
Abstract:
This paper describes recent advances made in computational modelling of the sugar cane liquid extraction process. The saturated fibro-porous material is rolled between circumferentially grooved rolls, which enhance frictional grip and provide a low-resistance path for liquid flow during the extraction process. Previously reported two-dimensional (2D) computational models account for the large deformation of the porous material by solving the fully coupled governing fibre-stress and fluid-flow equations using finite element techniques. While the 2D simulations provide much insight into the overarching cause-effect relationships, predictions of mechanical quantities such as roll separating force, and particularly torque, as a function of roll speed and degree of compression are not satisfactory for industrial use. It is considered that the unsatisfactory roll-torque prediction may be due to the stress levels that exist between the groove tips and roots, which have been largely neglected in the geometrically simplified 2D model. This paper gives results for both two- and three-dimensional finite element models and highlights their strengths and weaknesses in predicting key milling parameters.
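For orientation only, fully coupled fibre-stress/fluid-flow models of this kind are generically of Biot/poromechanics type, sketched below in small-strain form; this is an assumption about the family of equations, not the paper's exact large-deformation statement:

    \nabla \cdot \left( \boldsymbol{\sigma}' - p\,\mathbf{I} \right) = \mathbf{0}, \qquad
    \frac{\partial \varepsilon_v}{\partial t} + \nabla \cdot \left( -\frac{k}{\mu} \nabla p \right) = 0,

where sigma' is the effective (fibre) stress, p the pore pressure, epsilon_v the volumetric strain, k the permeability and mu the liquid viscosity.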
Abstract:
The adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length was studied with Canonical Ensemble (NVT) and Gibbs Ensemble Monte Carlo (GEMC) simulations. The Canonical Ensemble was a collection of cubic simulation boxes in which a finite pore resides, while the Gibbs Ensemble was that of the pore space of the finite pore. Argon was used as a model Lennard-Jones fluid, while the adsorbent was modelled as a finite carbon slit pore whose two walls were composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. The Lennard-Jones (LJ) 12-6 potential model was used to compute the interaction energy between two fluid particles, and also between a fluid particle and a carbon atom. Argon adsorption isotherms were obtained at 87.3 K for pore widths of 1.0, 1.5 and 2.0 nm using both Canonical and Gibbs Ensembles, and these results were compared with isotherms obtained for corresponding infinite pores using Grand Canonical Ensembles. The effects of the number of cycles necessary to reach equilibrium, the initial allocation of particles, the displacement step and the simulation box size were investigated in particular for the Canonical Ensemble simulations. Of these parameters, the displacement step had the most significant effect on the performance of the Monte Carlo simulation. The simulation box size was also important, especially at low pressures, at which the size must be sufficiently large to have a statistically acceptable number of particles in the bulk phase. Finally, it was found that the Canonical Ensemble and the Gibbs Ensemble both yielded the same isotherm (within statistical error); the computation time for GEMC was shorter than that for the Canonical Ensemble simulation, but the latter method described the proper interface between the reservoir and the adsorbed phase (and hence the meniscus).
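The LJ 12-6 pair potential named above has the standard form (the epsilon and sigma values given here are commonly used argon parameters, not values taken from the abstract):

    U(r) = 4\varepsilon \left[ \left( \frac{\sigma}{r} \right)^{12} - \left( \frac{\sigma}{r} \right)^{6} \right],

with epsilon/k_B of about 119.8 K and sigma of about 0.3405 nm frequently used for argon; fluid-solid parameters are typically obtained from the same form via Lorentz-Berthelot combining rules.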
Abstract:
The buffer allocation problem (BAP) is a well-known difficult problem in the design of production lines. We present a stochastic algorithm for solving the BAP, based on the cross-entropy method, a new paradigm for stochastic optimization. The algorithm involves the following iterative steps: (a) the generation of buffer allocations according to a certain random mechanism, followed by (b) the modification of this mechanism on the basis of cross-entropy minimization. Through various numerical experiments we demonstrate the efficiency of the proposed algorithm and show that the method can quickly generate (near-)optimal buffer allocations for fairly large production lines.
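A minimal sketch of steps (a) and (b) above — not the authors' implementation. throughput() is a hypothetical placeholder for a production-line evaluator (simulation or analytic), and the fixed total-buffer constraint of the usual BAP is omitted for brevity:

    // Compile with: g++ -std=c++17 -O2 ce_bap.cpp
    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <utility>
    #include <vector>

    // Hypothetical stand-in for a line-throughput evaluator. It is monotone in
    // each buffer, so CE will converge to the per-site ceiling; a real
    // evaluator trades throughput off against the buffer budget.
    static double throughput(const std::vector<int>& alloc) {
        double s = 0.0;
        for (int b : alloc) s += 1.0 - 1.0 / (1.0 + b);
        return s;
    }

    int main() {
        const int slots = 5, max_buf = 10;   // buffer sites, per-site ceiling
        const int samples = 200, elite = 20, iters = 50;
        const double smooth = 0.7;           // smoothing for the CE update
        std::mt19937 rng(42);

        // P[i][b]: probability that site i receives b buffers (uniform start).
        std::vector<std::vector<double>> P(
            slots, std::vector<double>(max_buf + 1, 1.0 / (max_buf + 1)));

        double best = -1.0;
        std::vector<int> best_alloc;
        for (int it = 0; it < iters; ++it) {
            // Step (a): generate allocations from the current random mechanism.
            std::vector<std::pair<double, std::vector<int>>> pop(samples);
            for (auto& [score, alloc] : pop) {
                alloc.resize(slots);
                for (int i = 0; i < slots; ++i) {
                    std::discrete_distribution<int> d(P[i].begin(), P[i].end());
                    alloc[i] = d(rng);
                }
                score = throughput(alloc);
            }
            // Step (b): modify the mechanism using the elite samples
            // (empirical cross-entropy minimisation for categorical sampling).
            std::partial_sort(pop.begin(), pop.begin() + elite, pop.end(),
                              [](const auto& a, const auto& b) { return a.first > b.first; });
            if (pop[0].first > best) { best = pop[0].first; best_alloc = pop[0].second; }
            for (int i = 0; i < slots; ++i) {
                std::vector<double> freq(max_buf + 1, 0.0);
                for (int e = 0; e < elite; ++e) freq[pop[e].second[i]] += 1.0 / elite;
                for (int b = 0; b <= max_buf; ++b)
                    P[i][b] = smooth * freq[b] + (1.0 - smooth) * P[i][b];
            }
        }
        std::printf("best score %.4f, allocation:", best);
        for (int b : best_alloc) std::printf(" %d", b);
        std::printf("\n");
        return 0;
    }

As the iterations proceed the sampling probabilities concentrate on high-scoring allocations, which is the "quickly generates (near-)optimal buffer allocations" behaviour the abstract reports.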