170 results for Large-Eddy Simulation
in University of Queensland eSpace - Australia
Abstract:
Large-eddy simulation is used to predict heat transfer in the separated and reattached flow regions downstream of a backward-facing step. Simulations were carried out at a Reynolds number of 28 000 (based on the step height and the upstream centreline velocity) with a channel expansion ratio of 1.25. The Prandtl number was 0.71. Two subgrid-scale models were tested, namely the dynamic eddy-viscosity, eddy-diffusivity model and the dynamic mixed model. Both models showed good overall agreement with available experimental data. The simulations indicated that the peak in heat-transfer coefficient occurs slightly upstream of the mean reattachment location, in agreement with experimental data. The results of these simulations have been analysed to discover the mechanisms that cause this phenomenon. The peak in heat-transfer coefficient shows a direct correlation with the peak in wall shear-stress fluctuations. It is conjectured that the peak in these fluctuations is caused by an impingement mechanism, in which large eddies, originating in the shear layer, impact the wall just upstream of the mean reattachment location. These eddies cause a 'downwash', which increases the local heat-transfer coefficient by bringing cold fluid from above the shear layer towards the wall.
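The reported coincidence of the heat-transfer peak with the peak in wall shear-stress fluctuations is the kind of diagnostic that can be extracted directly from LES output. A minimal NumPy sketch, using synthetic time series as stand-ins for the actual wall data (array names, grid, and signals are all hypothetical):

```python
import numpy as np

# Hypothetical stand-ins for LES wall data: time series of wall shear
# stress tau_w(x, t) and heat-transfer coefficient h(x, t) at
# streamwise stations x downstream of the step.
rng = np.random.default_rng(0)
nx, nt = 64, 2000
x = np.linspace(0.0, 10.0, nx)          # streamwise distance in step heights
tau_w = rng.standard_normal((nx, nt))    # placeholder fluctuating fields
h = rng.standard_normal((nx, nt))

# RMS of the wall shear-stress fluctuations and the time-mean
# heat-transfer coefficient at each station.
tau_rms = (tau_w - tau_w.mean(axis=1, keepdims=True)).std(axis=1)
h_mean = h.mean(axis=1)

# The paper reports that these two peaks nearly coincide, slightly
# upstream of the mean reattachment location.
print("peak tau'_rms at x/H =", x[np.argmax(tau_rms)])
print("peak h        at x/H =", x[np.argmax(h_mean)])
```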
Abstract:
CFD simulations of the 75 mm hydrocyclone of Hsieh (1988) have been conducted using Fluent. The simulations used three-dimensional, body-fitted grids. The simulations were two-phase simulations in which the air core was resolved using the mixture (Manninen et al., 1996) and VOF (Hirt and Nichols, 1981) models. Velocity predictions from large eddy simulations (LES), using the Smagorinsky-Lilly subgrid-scale model (Smagorinsky, 1963; Lilly, 1966), and RANS simulations using the differential Reynolds stress turbulence model (Launder et al., 1975) were compared with Hsieh's experimental velocity data. The LES simulations gave very good agreement with Hsieh's data but required very fine grids to predict the velocities correctly in the bottom of the apex. The DRSM/RANS simulations under-predicted tangential velocities, and there was little difference between the velocity predictions using the linear (Launder, 1989) and quadratic (Speziale et al., 1991) pressure-strain models. Velocity predictions using the DRSM turbulence model and the linear pressure-strain model could be improved by adjusting the pressure-strain model constants.
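For reference, the Smagorinsky-Lilly subgrid-scale model named above closes the filtered momentum equations with an eddy viscosity

\nu_t = (C_s \Delta)^2 |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \tfrac{1}{2}\left(\partial \bar{u}_i/\partial x_j + \partial \bar{u}_j/\partial x_i\right),

where \Delta is the filter width and C_s the Smagorinsky constant.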
Abstract:
Numerical simulations of turbulence-driven flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air-core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. It is possible that the large eddy simulation (LES) turbulence model with the mixture multi-phase model can be used to predict the air/slurry interface accurately, although the LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared with that measured by gamma-ray tomography, in particular over-predicting medium concentrations near the wall. Further work investigated the accurate prediction of axial segregation of magnetite using the LES turbulence model together with the multi-phase mixture model and viscosity corrections according to the feed particle loading factor. Addition of lift forces and the viscosity correction improved the predictions, especially near the wall. Predicted density profiles are very close to gamma-ray tomography data, showing a clear density drop near the wall. The effect of the size distribution of the magnetite has been fully studied. It is interesting to note that the ultra-fine magnetite sizes (i.e. 2 and 7 μm) are distributed uniformly throughout the cyclone. As the size of magnetite increases, more segregation of magnetite occurs close to the wall. The d(50) of the magnetite segregation is 32 μm, which is expected with a superfine magnetite feed size distribution. At higher feed densities the agreement between the [Dungilson, 1999; Wood, J.C., 1990. A performance model for coal-washing dense medium cyclones, Ph.D. Thesis, JKMRC, University of Queensland] correlations and the CFD is reasonably good, but the overflow density is lower than the model predictions. It is believed that the excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density. (c) 2006 Elsevier Ltd. All rights reserved.
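The abstract does not state which viscosity correction was applied for the particle loading; a common choice for fine-particle suspensions such as magnetite medium is a Thomas-type correlation, sketched below purely as an illustration (the correlation and the numbers are assumptions, not necessarily the paper's):

```python
import numpy as np

def slurry_viscosity(mu_liquid, phi):
    """Mixture viscosity from the Thomas (1965) correlation for a
    suspension with solids volume fraction phi. An assumed form;
    the paper's actual loading correction may differ."""
    return mu_liquid * (1.0 + 2.5 * phi
                        + 10.05 * phi ** 2
                        + 0.00273 * np.exp(16.6 * phi))

# Example: water (0.001 Pa.s) carrying 20% magnetite by volume
# gives roughly double the carrier viscosity.
print(slurry_viscosity(1.0e-3, 0.20))
```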
Abstract:
A recently developed whole-of-surface electroplating technique was used to obtain mass-transfer rates in the separated flow region of a stepped rotating cylinder electrode. These data are compared with previously reported mass-transfer rates obtained with a patch electrode. It was found that the two methods yield different results: at lower Reynolds numbers, the mass-transfer rate enhancement was noticeably higher for the whole-of-surface electrode than for the patch electrode. The location of the peak mass transfer behind the step, as measured with a patch electrode, was reported to be independent of the Reynolds number in previous studies, whereas the whole-of-surface electrode shows a definite Reynolds number dependence. Large eddy simulation results for the recirculating region behind a step are used in this work to show that this difference in behavior is related to the existence of a much thinner fluid layer at the wall for which the velocity is a linear function of distance from the wall. Consequently, the diffusion layer no longer lies well within a laminar sublayer. It is concluded that the patch electrode responds to the wall shear stress for smooth-wall flow as well as for the disturbed flow region behind the step. When the whole of the surface is electro-active, the response is to mass transfer even when this is not a sole function of wall shear stress. The results demonstrate that the choice of mass-transfer measurement technique in corrosion studies can have a significant effect on the conclusions drawn from empirical data.
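The argument turns on where the diffusion layer sits relative to the region in which the velocity is linear in wall distance. Under that linearity assumption, the classical Lévêque analysis gives, for a small patch of streamwise extent L_p (a hedged reference point, not the paper's derivation),

u(y) \approx \frac{\tau_w}{\mu}\, y, \qquad k_m \propto \left(\frac{\tau_w}{\mu}\right)^{1/3} D^{2/3} L_p^{-1/3},

so a patch electrode reads the wall shear stress only while its diffusion layer remains inside the linear layer; the LES results indicate that behind the step this layer is too thin for the assumption to hold.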
Abstract:
The rate of generation of fluctuations with respect to the scalar values conditioned on the mixture fraction, which significantly affects turbulent nonpremixed combustion processes, is examined. Simulation of this rate in a major mixing model is investigated, and the derived equations can assist in selecting model parameters so that the level of conditional fluctuations is better reproduced by the models. A more general formulation of the multiple mapping conditioning (MMC) model that distinguishes the reference and conditioning variables is suggested. This formulation can be viewed as a methodology for enforcing certain desired conditional properties onto conventional mixing models. Examples of constructing consistent MMC models with dissipation and velocity conditioning, as well as of combining MMC with large eddy simulation (LES), are also provided. (c) 2005 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
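In the notation standard to this literature, the conditional fluctuations in question are deviations of a reactive scalar \phi from its mean conditioned on the mixture fraction Z:

Q(\eta) = \langle \phi \mid Z = \eta \rangle, \qquad \phi'' = \phi - Q(Z), \qquad \sigma^2(\eta) = \langle (\phi'')^2 \mid Z = \eta \rangle,

and the rate examined in the paper is how fast a mixing model generates \sigma^2(\eta) relative to the physical level.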
Abstract:
The QU-GENE Computing Cluster (QCC) is a hardware and software solution to the automation and speedup of large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC automates the management of the distribution of components of the simulation experiments among the networked single-processor computers to achieve the speedup.
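QCC itself manages networked single-processor machines; purely to illustrate the scheduling idea (factorial combinations of treatment levels farmed out to workers), here is a hypothetical Python sketch (function names and treatment factors are invented):

```python
import itertools
from multiprocessing import Pool

def run_treatment(treatment):
    """Stand-in for one QU-GENE simulation run; the real QCC would
    dispatch the simulation engine to a networked computer."""
    heritability, pop_size, n_loci = treatment
    return treatment, heritability * pop_size / n_loci   # dummy statistic

if __name__ == "__main__":
    # Factorial combination of treatment levels, the experiment
    # structure QCC was built to manage.
    treatments = list(itertools.product((0.2, 0.5, 0.8),   # heritability
                                        (100, 500),        # population size
                                        (10, 50)))         # number of loci
    with Pool(processes=4) as pool:
        for treatment, result in pool.map(run_treatment, treatments):
            print(treatment, result)
```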
Abstract:
Experimental and theoretical studies have shown the importance of stochastic processes in genetic regulatory networks and cellular processes. Cellular networks and genetic circuits often involve small numbers of key proteins such as transcription factors and signaling proteins. In recent years stochastic models have been used successfully for studying noise in biological pathways, and stochastic modelling of biological systems has become a very important research field in computational biology. One of the challenges in this field is the reduction of the huge computing time in stochastic simulations. Based on the system of the mitogen-activated protein kinase cascade that is activated by epidermal growth factor, this work gives a parallel implementation using OpenMP and parallelism across the simulation. Special attention is paid to the independence of the random numbers generated in parallel computing, which is a key criterion for the success of stochastic simulations. Numerical results indicate that parallel computers can be used as an efficient tool for simulating the dynamics of large-scale genetic regulatory networks and cellular processes.
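The point about independent random-number streams carries over to any parallel stochastic simulator. A sketch of the same idea in Python (the paper used OpenMP), with a toy birth-death process standing in for the MAPK cascade and NumPy's SeedSequence spawning providing the independent streams:

```python
import numpy as np
from multiprocessing import Pool

def ssa_final_state(seed_seq, k_birth=1.0, k_death=0.1, x0=10, t_end=10.0):
    """Gillespie SSA for a toy birth-death process (X -> X+1, X -> X-1),
    a stand-in for the MAPK cascade model."""
    rng = np.random.default_rng(seed_seq)
    t, x = 0.0, x0
    while t < t_end:
        a1, a2 = k_birth, k_death * x      # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)     # time to next reaction
        x += 1 if rng.random() < a1 / a0 else -1
    return x

if __name__ == "__main__":
    # Spawned SeedSequences give statistically independent streams,
    # the key correctness criterion highlighted in the abstract.
    seeds = np.random.SeedSequence(2024).spawn(100)
    with Pool(processes=4) as pool:
        finals = pool.map(ssa_final_state, seeds)
    print("mean copy number:", np.mean(finals))   # ~ k_birth/k_death = 10
```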
Abstract:
We analyze folding phenomena in finely layered viscoelastic rock. 'Fine' is meant in the sense that the thickness of each layer is considerably smaller than characteristic structural dimensions. For this purpose we derive constitutive relations and apply a computational simulation scheme (a finite-element based particle advection scheme; see MORESI et al., 2001) suitable for problems involving very large deformations of layered viscous and viscoelastic rocks. An algorithm for the time integration of the governing equations as well as details of the finite-element implementation are also given. We then consider buckling instabilities in a finite, rectangular domain. Embedded within this domain, parallel to the longer dimension, we consider a stiff, layered plate. The domain is compressed along the layer axis by prescribing velocities along the sides. First, for the viscous limit we consider the response to a series of harmonic perturbations of the director orientation. The Fourier spectra of the initial folding velocity are compared for different viscosity ratios. Turning to the nonlinear regime, we analyze viscoelastic folding histories up to 40% shortening. The effect of layering manifests itself in that appreciable buckling instabilities are obtained at much lower viscosity ratios (1:10) than is required for the buckling of isotropic plates (1:500). The wavelength induced by the initial harmonic perturbation of the director orientation seems to be persistent. In the section of the parameter space considered here, elasticity seems to delay or inhibit the occurrence of a second, larger wavelength. Finally, in a linear instability analysis we undertake a brief excursion into the potential role of couple stresses in the folding process. The linear instability analysis also provides insight into the expected modes of deformation at the onset of instability, and the different regimes of behavior one might expect to observe.
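As a point of comparison for the viscosity ratios quoted, Biot's classical result for a single isotropic viscous plate of thickness h and viscosity \eta_l embedded in a matrix of viscosity \eta_m gives the dominant folding wavelength

\lambda_d = 2\pi h \left( \frac{\eta_l}{6\,\eta_m} \right)^{1/3},

which only produces appreciable folds at large ratios, of the order of the 1:500 quoted; the layered rheology studied here brings that threshold down to about 1:10.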
Abstract:
The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation are also discussed. We suggest that a tandem approach is necessary, whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure and hence in their requirements for successful model applications. Specifically, we give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independent of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite-buffer case. In this case, the relative error is shown to be bounded (independent of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
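The paper's change of measure is state-dependent (a function of the first buffer's content); as a simpler illustration of the underlying idea, the single-buffer analogue below uses the classical state-independent exponential change of measure that swaps arrival and service rates, for which the likelihood ratio on the overflow event is constant (hence bounded relative error in this toy case):

```python
import numpy as np

def overflow_prob_is(lam, mu, L, n_runs=20_000, seed=1):
    """Importance-sampling estimate of P(buffer reaches L before 0,
    starting from 1) for the M/M/1 embedded random walk. A single-buffer
    analogue, not the paper's state-dependent measure."""
    p = lam / (lam + mu)         # up-step probability, original measure
    p_is = mu / (lam + mu)       # swapped rates: up-step under IS measure
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        x, n_up, n_down = 1, 0, 0
        while 0 < x < L:
            if rng.random() < p_is:
                x, n_up = x + 1, n_up + 1
            else:
                x, n_down = x - 1, n_down + 1
        if x == L:   # overflow happened before emptying
            # Likelihood ratio of the sampled path under the two measures.
            total += (p / p_is) ** n_up * ((1 - p) / (1 - p_is)) ** n_down
    return total / n_runs

lam, mu, L = 1.0, 2.0, 15            # lam < mu makes overflow rare
r = mu / lam
print(overflow_prob_is(lam, mu, L))  # compare with exact gambler's ruin:
print((r - 1.0) / (r ** L - 1.0))
```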
Abstract:
This paper presents a new approach to the LU decomposition method for the simulation of stationary and ergodic random fields. The approach overcomes the size limitations of LU decomposition and is suitable for simulations of any size. The proposed approach can facilitate fast updating of generated realizations with new data, when appropriate, without repeating the full simulation process. Based on a novel column partitioning of the L matrix, expressed in terms of successive conditional covariance matrices, the approach presented here demonstrates that LU simulation is equivalent to the successive solution of kriging residual estimates plus random terms. Consequently, it can be used for the LU decomposition of matrices of any size. The simulation approach is termed 'conditional simulation by successive residuals' since, at each step, a small set (group) of random variables is simulated with an LU decomposition of a matrix of updated conditional covariances of residuals. The simulated group is then used to estimate residuals without the need to solve large systems of equations.
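The LU simulation step that the method generalizes fits in a few lines; the grouped successive-residuals bookkeeping that removes the size limit is the paper's contribution and is only indicated in the comments (the covariance model below is a placeholder):

```python
import numpy as np

# Placeholder covariance: exponential model on a 1-D grid of n points.
n, corr_range = 200, 30.0
xg = np.arange(n, dtype=float)
C = np.exp(-np.abs(xg[:, None] - xg[None, :]) / corr_range)

# Classical LU (Cholesky) simulation: y = L z has covariance L L^T = C.
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for stability
z = np.random.default_rng(0).standard_normal(n)
y = L @ z

# The paper's scheme instead partitions the columns of L into groups:
# each group is simulated from a small Cholesky factor of the residual
# (conditional) covariance given all previously simulated groups --
# kriging residual estimates plus random terms -- so the full n x n
# matrix never has to be factored at once.
print(y[:5])
```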
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses at relatively low input frequencies. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies; this is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent, and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
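A minimal 1-D Yee-grid sketch of the source-injection point being made: the soft source injects the time derivative of the input waveform (a Gaussian here) rather than the waveform itself. Grid, units, and waveform are illustrative only, not the article's body model:

```python
import numpy as np

# 1-D FDTD (Yee scheme) in normalized units, Courant number 1.
nz, nt, src = 400, 800, 100
Ex = np.zeros(nz)
Hy = np.zeros(nz - 1)

t0, spread = 60.0, 15.0
def waveform(q):
    return np.exp(-0.5 * ((q - t0) / spread) ** 2)

for q in range(nt):
    Hy += np.diff(Ex)           # update H from the curl of E
    Ex[1:-1] += np.diff(Hy)     # update E from the curl of H
    # Soft source: inject a finite-difference *derivative* of the
    # waveform, the alternative the article argues is more accurate
    # at low frequencies than direct injection of the waveform.
    Ex[src] += waveform(q + 0.5) - waveform(q - 0.5)

print(Ex[src - 50:src + 50:10])
```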