50 results for Large-Eddy Simulation
Abstract:
Large eddy simulation (LES) is an emerging technique for obtaining an approximation to turbulent flow fields. It is an improvement over the widely prevalent practice of obtaining means of turbulent flows when the flow has large-scale, low-frequency unsteadiness. An introduction to the method, its general formulation, and the more common modelling approaches for flows without reaction are discussed. Some attempts at extension to flows with combustion have been made. Examples from present work for flows with and without combustion are given. The final example, the LES of the combustor of a helicopter engine, illustrates the state of the art in application of the technique.
Abstract:
An implicit sub-grid scale model for large eddy simulation is presented by utilising the concept of a relaxation system for the one-dimensional Burgers' equation in a novel way. The Burgers' equation is solved for three different unsteady flow situations by varying the ratio of the relaxation parameter (epsilon) to the time step. The coarse-mesh results obtained with the relaxation scheme are compared with the filtered DNS solution of the same problem on a fine mesh using a fourth-order CWENO discretisation in space and a third-order TVD Runge-Kutta discretisation in time. The numerical solutions obtained through the relaxation system have the same order of accuracy in space and time and closely match the filtered DNS solutions.
Abstract:
Approximate deconvolution modeling is a very recent approach to large eddy simulation of turbulent flows. It has been applied to compressible flows with success. Here, a premixed flame which forms in the wake of a flameholder has been selected to examine the subgrid-scale modeling of reaction rate by this new method because a previous plane two-dimensional simulation of this wake flame, using a wrinkling function and artificial flame thickening, had revealed discrepancies when compared with experiment. The present simulation is of the temporal evolution of a round wakelike flow at two Reynolds numbers, Re = 2000 and 10,000, based on wake defect velocity and wake diameter. A Fourier-spectral code has been used. The reaction is single-step and irreversible, and the rate follows an Arrhenius law. The reference simulation at the lower Reynolds number is fully resolved. At Re = 10,000, subgrid-scale contributions are significant. It was found that subgrid-scale modeling in the present simulation agrees more closely with unresolved subgrid-scale effects observed in experiment. Specifically, the highest contributions appeared in thin folded regions created by vortex convection. The wrinkling function approach had not selected subgrid-scale effects in these regions.
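Approximate deconvolution itself is compact enough to sketch. Assuming a three-point top-hat filter G and the truncated van Cittert iteration (both toy choices, not the paper's compressible setup), an estimate of the unfiltered field is recovered from the filtered one, which can then be used to evaluate nonlinear terms such as a reaction rate:

```python
import numpy as np

def box_filter(u):
    """Simple three-point top-hat filter G on a periodic grid."""
    return 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)

def approx_deconvolve(u_bar, n_iter=5):
    """Van Cittert iteration u* <- u* + (u_bar - G u*), truncated at n_iter.
    This approximately inverts the filter: u* -> G^{-1} u_bar as n_iter grows."""
    u_star = u_bar.copy()
    for _ in range(n_iter):
        u_star = u_star + (u_bar - box_filter(u_star))
    return u_star

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8 * x)   # resolved + shorter-wavelength content
u_bar = box_filter(u)
u_rec = approx_deconvolve(u_bar)
# the deconvolved field is far closer to u than the filtered field is
print(np.abs(u_bar - u).max(), np.abs(u_rec - u).max())
```

The design point is that no explicit subgrid model appears: the closure comes entirely from evaluating the nonlinearity on the deconvolved field.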
Abstract:
A methodology termed the “filtered density function” (FDF) is developed and implemented for large eddy simulation (LES) of chemically reacting turbulent flows. In this methodology, the effects of the unresolved scalar fluctuations are taken into account by considering the probability density function (PDF) of subgrid scale (SGS) scalar quantities. A transport equation is derived for the FDF in which the effect of chemical reactions appears in a closed form. The influences of scalar mixing and convection within the subgrid are modeled. The FDF transport equation is solved numerically via a Lagrangian Monte Carlo scheme in which the solutions of the equivalent stochastic differential equations (SDEs) are obtained. These solutions preserve the Itô-Gikhman nature of the SDEs. The consistency of the FDF approach, the convergence of its Monte Carlo solution and the performance of the closures employed in the FDF transport equation are assessed by comparisons with results obtained by direct numerical simulation (DNS) and by conventional LES procedures in which the first two SGS scalar moments are obtained by a finite difference method (LES-FD). These comparative assessments are conducted by implementations of all three schemes (FDF, DNS and LES-FD) in a temporally developing mixing layer and a spatially developing planar jet under both non-reacting and reacting conditions. In non-reacting flows, the Monte Carlo solution of the FDF yields results similar to those via LES-FD. The advantage of the FDF is demonstrated by its use in reacting flows. In the absence of a closure for the SGS scalar fluctuations, the LES-FD results are significantly different from those based on DNS. The FDF results show a much closer agreement with filtered DNS results. © 1998 American Institute of Physics.
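The Lagrangian Monte Carlo idea can be sketched with notional particles whose positions diffuse (the Wiener term of the stochastic differential equation) and whose scalar relaxes toward the ensemble mean, an IEM-style stand-in for the subgrid mixing closure. The mixing frequency, diffusivity, and double-delta initial PDF below are assumptions for illustration, not the paper's closures:

```python
import numpy as np

rng = np.random.default_rng(0)

n_particles = 20000
phi = rng.choice([0.0, 1.0], size=n_particles)   # double-delta initial scalar PDF
pos = rng.random(n_particles)                    # notional particle positions
omega, D, dt = 2.0, 0.01, 0.01                   # mixing frequency, diffusivity (assumed)
for _ in range(100):
    # Wiener increment: positions diffuse (the stochastic part of the SDE)
    pos += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_particles)
    # IEM-type mixing: each particle's scalar relaxes toward the ensemble mean
    phi += -omega * (phi - phi.mean()) * dt
var = phi.var()
# IEM predicts scalar-variance decay var(t) = var(0) * exp(-2*omega*t); here t = 1
print(var)
```

A chemical source term would enter this particle system in closed form, as a deterministic increment of phi per particle, which is the advantage the abstract demonstrates for reacting flows.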
Abstract:
Two models for large eddy simulation of turbulent reacting flow in homogeneous turbulence were studied. The sub-grid stresses arising from the non-linearities of the Navier-Stokes equations were modeled using an explicit filtering approach. A filtered mass density function (FMDF) approach was used for closure of the sub-grid scalar fluctuations. A posteriori calculations, when compared with the results from the direct numerical simulation, indicate that the explicit filtering is adequate in representing the effect of the sub-grid stress on the filtered velocity field in the absence of reaction. Discrepancies arise when reactions occur, but the FMDF approach suffices to account for the sub-grid scale fluctuations of the reacting scalars accurately.
Abstract:
Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO, by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet which is more power efficient.
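The GSM mapping of information bits to an antenna-activation pattern plus modulation symbols can be sketched directly. The integer "constellation" below is a placeholder for a real QAM alphabet, and the combination ordering is an arbitrary choice, not the paper's:

```python
from itertools import combinations
from math import comb, floor, log2

def gsm_map(bits, n_t=4, n_rf=2, M=4):
    """Map a bit string to a GSM transmit vector: floor(log2(C(n_t, n_rf)))
    bits pick which n_rf of the n_t antennas are active; log2(M) bits per
    active antenna pick its symbol (here the integers 1..M stand in for QAM)."""
    n_index_bits = floor(log2(comb(n_t, n_rf)))  # antenna-index bits
    sym_b = int(log2(M))                         # bits per modulation symbol
    assert len(bits) == n_index_bits + n_rf * sym_b
    combos = list(combinations(range(n_t), n_rf))
    active = combos[int(bits[:n_index_bits], 2)]
    x = [0] * n_t                                # inactive antennas transmit 0
    for k, ant in enumerate(active):
        s = bits[n_index_bits + k * sym_b : n_index_bits + (k + 1) * sym_b]
        x[ant] = int(s, 2) + 1
    return x

print(gsm_map("100111"))  # -> [2, 0, 0, 4]: antennas 0 and 3 active
```

With n_t = 4 and n_rf = 2 there are six activation patterns, of which four are addressable, so two extra bits ride on the antenna indices alone; this is the source of the spectral-efficiency gain the abstract cites.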
Abstract:
A primary motivation for this work arises from the contradictory results obtained in some recent measurements of the zero-crossing frequency of turbulent fluctuations in shear flows. A systematic study of the various factors involved in zero-crossing measurements shows that the dynamic range of the signal, the discriminator characteristics, filter frequency and noise contamination have a strong bearing on the results obtained. These effects are analysed, and explicit corrections for noise contamination have been worked out. New measurements of the zero-crossing frequency N0 have been made for the longitudinal velocity fluctuation in boundary layers and a wake, for wall shear stress in a channel, and for temperature derivatives in a heated boundary layer. All these measurements show that a zero-crossing microscale, defined as Λ = (2πN0)^-1, is always nearly equal to the well-known Taylor microscale λ (in time). These measurements, as well as a brief analysis, show that even strong departures from Gaussianity do not necessarily yield values appreciably different from unity for the ratio Λ/λ. Further, the variation of N0/N0,max across the boundary layer is found to correlate with the familiar wall and outer coordinates; the outer scaling for N0,max is totally inappropriate, and the inner scaling shows only a weak Reynolds-number dependence. It is also found that the distribution of the interval between successive zero-crossings can be approximated by a combination of a lognormal and an exponential, or (if the shortest intervals are ignored) even of two exponentials, one of which characterizes crossings whose duration is of the order of the wall-variable timescale ν/U*^2, while the other characterizes crossings whose duration is of the order of the large-eddy timescale δ/U∞.
The significance of these results is discussed, and it is particularly argued that the pulse frequency of Rao, Narasimha & Badri Narayanan (1971) is appreciably less than the zero-crossing rate.
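The central comparison, zero-crossing microscale versus Taylor microscale, is easy to reproduce for a synthetic Gaussian signal, for which Rice's formula makes the two equal. The band-limited noise below is an assumption for illustration, not the paper's measured data:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, T, fc = 10000.0, 10.0, 100.0     # sample rate, record length, cutoff (assumed)
n = int(fs * T)

# band-limited Gaussian signal: low-pass filter white noise in Fourier space
W = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, d=1.0 / fs)
W[f > fc] = 0.0
u = np.fft.irfft(W, n)
u -= u.mean()

n_up = np.count_nonzero((u[:-1] < 0) & (u[1:] >= 0))  # up-crossings of zero
N0 = n_up / T                                          # up-crossing frequency
Lambda = 1.0 / (2.0 * np.pi * N0)                      # zero-crossing microscale
dudt = np.gradient(u, 1.0 / fs)
lam = np.sqrt(np.mean(u**2) / np.mean(dudt**2))        # Taylor microscale (in time)
print(Lambda / lam)   # close to 1 for a Gaussian signal (Rice's formula)
```

The abstract's point is that this ratio stays near unity even for the strongly non-Gaussian signals found in real shear flows.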
Abstract:
A zonally averaged version of the Goddard Laboratory for Atmospheric Sciences (GLAS) climate model is used to study the sensitivity of the northern hemisphere (NH) summer mean meridional circulation to changes in the large-scale eddy forcing. A standard solution is obtained by prescribing the latent heating field and climatological horizontal transports of heat and momentum by the eddies. The radiative heating and surface fluxes are calculated by model parameterizations. This standard solution is compared with the results of several sensitivity studies. When the eddy forcing is reduced to 0.5 times or increased to 1.5 times the climatological values, the strength of the Ferrel cells decreases or increases proportionally. It is also seen that such changes in the eddy forcing can significantly influence the strength of the NH Hadley cell. The possible impact of such changes in the large-scale eddy forcing on the monsoon circulation, via changes in the Hadley circulation, is discussed. Sensitivity experiments including only one component of eddy forcing at a time show that the eddy momentum fluxes seem to be more important in maintaining the Ferrel cells than the eddy heat fluxes. In the absence of the eddy heat fluxes, the observed eddy momentum fluxes alone produce subtropical westerly jets which are weaker than those in the standard solution. On the other hand, the observed eddy heat fluxes alone produce subtropical westerly jets which are stronger than those in the standard solution.
Abstract:
The results are presented of applying multi-time scale analysis using the singular perturbation technique for long time simulation of power system problems. A linear system represented in state-space form can be decoupled into slow and fast subsystems. These subsystems can be simulated with different time steps and then recombined to obtain the system response. Simulation results with a two-time scale analysis of a power system show a large saving in computational costs.
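The pay-off of the two-time-scale split can be sketched for a linear system whose slow and fast modes are already decoupled; in the paper a singular-perturbation similarity transform produces this block form from the full state matrix. The eigenvalues and step sizes below are illustrative:

```python
import numpy as np

# Slow and fast scalar subsystems xdot = a*x, integrated with explicit Euler
# at step sizes chosen per subsystem, then compared with the exact solution.
a_slow, a_fast = -0.1, -50.0
x_slow, x_fast = 1.0, 1.0
T = 1.0
n_slow, n_fast = 20, 2000              # 20 large steps vs 2000 small steps
h_slow, h_fast = T / n_slow, T / n_fast

for _ in range(n_slow):
    x_slow *= (1.0 + a_slow * h_slow)  # large step is stable for the slow mode
for _ in range(n_fast):
    x_fast *= (1.0 + a_fast * h_fast)  # fast mode needs the small step

exact = np.exp(np.array([a_slow, a_fast]) * T)
print([x_slow, x_fast], exact)
```

Integrating the whole coupled system at the fast step would cost 2000 steps per state; here the slow state needs only 20, which is the computational saving the abstract reports.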
Abstract:
Monte Carlo simulations of a binary alloy with impurity concentrations between 20 and 45 at.% have been carried out. The proportion of large clusters relative to that of small clusters increases with the number of MC diffusion steps as well as impurity concentration. Magnetic susceptibility peaks become more prominent and occur at higher temperatures with increasing impurity concentration. The different peaks in the susceptibility and specific heat curves seem to correspond to different sized clusters. A freezing model would explain the observed behaviour with the large clusters freezing first and the small clusters contributing to susceptibility (specific heat) peaks at lower temperatures.
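The link between impurity concentration and cluster sizes can be illustrated even for a quenched random alloy on a 2D lattice, with no MC diffusion steps and toy parameters; per the abstract, the diffusion steps amplify the same trend toward larger clusters:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

def cluster_sizes(occ):
    """Sizes of nearest-neighbour clusters of occupied sites (2D, no wrap),
    found by breadth-first flood fill."""
    seen = np.zeros_like(occ, dtype=bool)
    rows, cols = occ.shape
    sizes = []
    for i in range(rows):
        for j in range(cols):
            if occ[i, j] and not seen[i, j]:
                q, size = deque([(i, j)]), 0
                seen[i, j] = True
                while q:
                    a, b = q.popleft()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < rows and 0 <= nb < cols and occ[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            q.append((na, nb))
                sizes.append(size)
    return sizes

results = {}
for c in (0.20, 0.45):   # impurity concentrations spanning the abstract's range
    occ = rng.random((60, 60)) < c
    results[c] = cluster_sizes(occ)
    print(c, max(results[c]), sum(results[c]) / len(results[c]))
```

At the higher concentration the largest cluster grows sharply, consistent with the shift of weight toward large clusters that drives the susceptibility peaks to higher temperatures.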
Abstract:
This paper presents a simple hybrid computer technique to study the transient behaviour of queueing systems. This method is superior to stand-alone analog or digital solution because the hardware requirement is excessive for analog technique whereas computation time is appreciable in the latter case. By using a hybrid computer one can share the analog hardware thus requiring fewer integrators. The digital processor can store the values, play them back at required time instants and change the coefficients of differential equations. By speeding up the integration on the analog computer it is feasible to solve a large number of these equations very fast. Hybrid simulation is even superior to the analytic technique because in the latter case it is difficult to solve time-varying differential equations.
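The differential equations in question are the Chapman-Kolmogorov (birth-death) equations for the state probabilities of the queue. A purely digital sketch for a finite-capacity M/M/1 queue, with assumed rates, shows the kind of system the analog integrators would carry:

```python
import numpy as np

# Transient M/M/1/K queue: state probabilities p_n(t) obey a linear ODE
# system, integrated here with explicit Euler. Rates and capacity are
# illustrative assumptions, not values from the paper.
lam, mu, K = 0.8, 1.0, 10          # arrival rate, service rate, capacity
p = np.zeros(K + 1)
p[0] = 1.0                          # start with an empty queue
dt = 0.001
for _ in range(50000):              # integrate to t = 50
    dp = np.zeros_like(p)
    dp[0] = -lam * p[0] + mu * p[1]
    for n in range(1, K):
        dp[n] = lam * p[n - 1] - (lam + mu) * p[n] + mu * p[n + 1]
    dp[K] = lam * p[K - 1] - mu * p[K]
    p += dt * dp
print(p.sum(), p @ np.arange(K + 1))   # total probability and mean queue length
```

Each state probability corresponds to one integrator, which is why the analog hardware requirement grows with queue capacity and why sharing integrators through a hybrid setup was attractive.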