Abstract:
Numerical optimization is a technique in which a computer explores design-parameter combinations to find extremes in performance factors. In multi-objective optimization, several performance factors can be optimized simultaneously. The solution to multi-objective optimization problems is not a single design, but a family of optimized designs referred to as the Pareto frontier. The Pareto frontier is a trade-off curve in the objective function space composed of solutions where performance in one objective function is traded for performance in others. A Multi-Objective Hybridized Optimizer (MOHO) was created for the purpose of solving multi-objective optimization problems by utilizing a set of constituent optimization algorithms. MOHO tracks the development of the Pareto frontier approximation and automatically switches amongst its constituent evolutionary optimization algorithms to speed the formation of an accurate Pareto frontier approximation. Aerodynamic shape optimization is one of the oldest applications of numerical optimization. MOHO was used to perform shape optimization on a 0.5-inch ballistic penetrator traveling at Mach number 2.5. Two objectives were simultaneously optimized: minimize aerodynamic drag and maximize penetrator volume. This problem was solved twice. The first time, the problem was solved by using Modified Newton Impact Theory (MNIT) to determine the pressure drag on the penetrator. In the second solution, a Parabolized Navier-Stokes (PNS) solver that includes viscosity was used to evaluate the drag on the penetrator. The studies show the difference in the optimized penetrator shapes when viscosity is absent and present in the optimization. In modern optimization problems, a single objective function evaluation may require many hours on a computer cluster. One solution is to create a response surface that models the behavior of the objective function.
Once enough data about the behavior of the objective function has been collected, a response surface can be used to represent the actual objective function in the optimization process. The Hybrid Self-Organizing Response Surface Method (HYBSORSM) algorithm was developed and used to make response surfaces of objective functions. HYBSORSM was evaluated using a suite of 295 non-linear functions. These functions involve from 2 to 100 variables, demonstrating the robustness and accuracy of HYBSORSM.
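Since the Pareto frontier is central to this abstract, here is a minimal sketch of non-dominated filtering over a set of candidate designs. This is not MOHO itself, just the textbook dominance test; the drag/volume numbers are invented for illustration, with volume negated so both objectives are minimized.

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective tuples.

    Assumes every objective is to be minimized; a point q dominates p
    when q is no worse than p in all objectives and differs from p.
    """
    front = []
    for p in points:
        dominated = any(
            all(o <= v for o, v in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical designs: (drag to minimize, -volume so that
# minimizing the second entry maximizes penetrator volume).
designs = [(1.0, -3.0), (0.8, -2.0), (1.2, -3.5), (0.8, -3.0), (0.9, -1.0)]
print(pareto_front(designs))
```

Everything the filter keeps is a trade-off point: improving one objective from there necessarily worsens the other, which is exactly the Pareto frontier described above.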
Abstract:
The drag on a nacelle model was investigated experimentally and computationally to provide guidance and insight into the capabilities of RANS-based CFD. The research goal was to determine whether industry-constrained CFD could participate in the aerodynamic design of nacelle bodies. Key deliverables were the grid refinement level, turbulence model, and near-wall treatment settings needed to predict drag with the highest accuracy. Cold-flow low-speed wind tunnel experiments were conducted at a Reynolds number of 6×10^5, 293 K and a Mach number of 0.1. Total drag force was measured by a six-component force balance. Detailed wake analysis, using a seven-hole pressure probe traverse, allowed for drag decomposition via the far-field method. Drag decomposition was performed through a range of angles of attack between 0° and 45°. Both methods agreed on total drag within their respective uncertainties. Reversed flow at the measurement plane and saturation of the load cell caused discrepancies at high angles of attack. A parallel CFD study was conducted using commercial software, ICEM 15.0 and FLUENT 15.0. Simulating a similar nacelle geometry operating under inlet boundary conditions obtained through wind tunnel characterization allowed for direct comparisons with experiment. It was determined that the Realizable k-ε model was best suited for drag prediction of this geometry. This model predicted the axial momentum loss and secondary flow in the wake, as well as the integrated surface forces, within experimental error up to a 20° angle of attack. SST k-ω required additional surface grid resolution on the nacelle suction side, resulting in 15% more elements, due to separation-point prediction sensitivity. It was further recommended to apply enhanced wall treatment to more accurately capture the viscous drag and separated flow structures. Overall, total drag was predicted within 5% at 0° angle of attack and 10% at 20°, each within experimental uncertainty.
What is more, the form and induced drag predicted by CFD agreed well with the wake-traverse measurements, indicating that CFD captured the key flow features accurately despite simplification of the nacelle interior geometry.
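The far-field method mentioned above recovers profile drag from the momentum deficit in the wake. A minimal sketch of that integral follows; the Gaussian deficit profile, density, and free-stream speed are all assumed for illustration and are not the experiment's measured data.

```python
# Far-field (wake-survey) profile drag from the momentum-deficit
# integral D' = rho * ∫ u (U_inf - u) dy, per unit span.
import math

rho = 1.2          # air density, kg/m^3 (assumed)
U_inf = 34.0       # free-stream velocity, m/s (roughly Mach 0.1 at 293 K)

def wake_velocity(y):
    """Hypothetical axial velocity across the wake: 20% Gaussian deficit."""
    return U_inf * (1.0 - 0.2 * math.exp(-(y / 0.05) ** 2))

def profile_drag_per_span(n=2000, y_max=0.5):
    """Midpoint-rule evaluation of the momentum-deficit integral, N/m."""
    dy = 2.0 * y_max / n
    total = 0.0
    for i in range(n):
        y = -y_max + (i + 0.5) * dy
        u = wake_velocity(y)
        total += rho * u * (U_inf - u) * dy
    return total

print(f"profile drag per unit span: {profile_drag_per_span():.2f} N/m")
```

In the actual study the integrand comes from the seven-hole probe traverse rather than an analytic profile, but the bookkeeping is the same.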
Abstract:
Since core-collapse supernova simulations still struggle to produce robust neutrino-driven explosions in 3D, it has been proposed that asphericities caused by convection in the progenitor might facilitate shock revival by boosting the activity of non-radial hydrodynamic instabilities in the post-shock region. We investigate this scenario in depth using 42 relativistic 2D simulations with multigroup neutrino transport to examine the effects of velocity and density perturbations in the progenitor for different perturbation geometries that obey fundamental physical constraints (like the anelastic condition). As a framework for analysing our results, we introduce semi-empirical scaling laws relating neutrino heating, average turbulent velocities in the gain region, and the shock deformation in the saturation limit of non-radial instabilities. The squared turbulent Mach number, ⟨Ma²⟩, reflects the violence of aspherical motions in the gain layer, and explosive runaway occurs for ⟨Ma²⟩ ≳ 0.3, corresponding to a reduction of the critical neutrino luminosity by ∼25 per cent compared to 1D. In the light of this theory, progenitor asphericities aid shock revival mainly by creating anisotropic mass flux onto the shock: differential infall efficiently converts velocity perturbations in the progenitor into density perturbations δρ/ρ at the shock of the order of the initial convective Mach number Ma_prog. The anisotropic mass flux and ram pressure deform the shock and thereby amplify post-shock turbulence. Large-scale (ℓ = 2, ℓ = 1) modes prove most conducive to shock revival, whereas small-scale perturbations require unrealistically high convective Mach numbers. Initial density perturbations in the progenitor are only of the order of Ma_prog² and therefore play a subdominant role.
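The scaling relations quoted in this abstract can be collected in one place; this is simply a restatement of the abstract's own statements, with Ma_prog the initial convective Mach number:

```latex
\langle \mathrm{Ma}^{2} \rangle \gtrsim 0.3
\;\Rightarrow\; \text{explosive runaway}
\quad (L_{\mathrm{crit}} \text{ reduced by } \sim 25\% \text{ vs. 1D}),
\qquad
\left.\frac{\delta\rho}{\rho}\right|_{\mathrm{shock}} \sim \mathrm{Ma}_{\mathrm{prog}},
\qquad
\left.\frac{\delta\rho}{\rho}\right|_{\mathrm{prog}} \sim \mathrm{Ma}_{\mathrm{prog}}^{2}.
```

The middle relation is the key conversion step: differential infall turns velocity perturbations of Mach number Ma_prog into density perturbations of the same order at the shock, which is why the initial density perturbations (of order Ma_prog²) are subdominant.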
Abstract:
Combustion noise is becoming increasingly important as a major noise source in aeroengines and ground-based gas turbines. This is partially because advances in design have reduced the other noise sources, and partially because next-generation combustion modes burn more unsteadily, resulting in increased external noise from the combustion. This review reports recent progress made in understanding combustion noise through theoretical, numerical and experimental investigations. We first discuss the fundamentals of sound emission from a combustion region. Then the noise of open turbulent flames is summarized. We subsequently address the effects of confinement on combustion noise. In this case, not only is the sound generated by the combustion influenced by its transmission through the boundaries of the combustion chamber, but there is also the possibility of a significant additional source, the so-called ‘indirect’ combustion noise. This involves hot spots (entropy fluctuations) or vorticity perturbations produced by temporal variations in combustion, which generate pressure waves (sound) as they accelerate through any restriction at the exit of the combustor. We describe the general characteristics of direct and indirect noise. To gain further insight into the physical phenomena of direct and indirect sound, we investigate a simple configuration consisting of a cylindrical or annular combustor with a low Mach number flow in which a flame zone burns unsteadily. Using a low Mach number approximation, exact algebraic solutions are developed so that the parameters controlling the generation of acoustic, entropic and vortical waves can be investigated. The validity of the low Mach number approximation is then verified by solving the linearized Euler equations numerically for a wide range of inlet Mach numbers, stagnation temperature ratios, and frequencies and mode numbers of heat release fluctuations.
The effects of these parameters on the magnitude of the waves produced by the unsteady combustion are investigated. In particular, the magnitude of the indirect and direct noise generated in a model combustor with a choked outlet is analyzed for a wide range of frequencies, inlet Mach numbers and stagnation temperature ratios. Finally, we summarize some of the unsolved questions that need to be the focus of future research.
Abstract:
The present work numerically examines the asymmetric behavior of a hydrogen/air flame in a micro-channel subjected to a non-uniform wall temperature distribution. A high-resolution (cell size of 25 μm × 25 μm) two-dimensional transient Navier–Stokes simulation is conducted in the low-Mach-number formulation using detailed chemistry involving 9 chemical species and 21 elementary reactions. First, the effects of hydrodynamic and diffusive-thermal instabilities are studied by performing the computations for different Lewis numbers. Then, the effects of preferential diffusion of heat and mass on the asymmetric behavior of the hydrogen flame are analyzed for different inlet velocities and equivalence ratios. Results show that for flames in micro-channels, interactions between thermal diffusion and molecular diffusion play a major role in the evolution of a symmetric flame into an asymmetric one. Furthermore, the role of the Darrieus–Landau instability is found to be minor. It is also found that in symmetric flames, the Lewis number decreases behind the flame front. This is related to the curvature of the flame, which leads to the inclination of the thermal and mass fluxes: the mass diffusion vectors point toward the walls and the thermal diffusion vectors point toward the centerline. An asymmetric flame is observed when the length of the flame front is about 1.1–1.15 times the channel width.
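For reference, the Lewis number controlling the diffusive-thermal effects discussed above is the ratio of thermal to mass diffusivity (standard notation, not symbols taken from the paper):

```latex
\mathrm{Le} \;=\; \frac{\alpha}{D} \;=\; \frac{\lambda}{\rho\, c_{p}\, D},
```

with λ the thermal conductivity, ρ the density, c_p the specific heat at constant pressure, and D the mass diffusivity of the deficient reactant. Lean hydrogen flames have Le < 1, the regime in which diffusive-thermal instability is active.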
Abstract:
We present the first 3D simulation of the last minutes of oxygen shell burning in an 18 solar mass supernova progenitor up to the onset of core collapse. A moving inner boundary is used to accurately model the contraction of the silicon and iron core according to a 1D stellar evolution model with a self-consistent treatment of core deleptonization and nuclear quasi-equilibrium. The simulation covers the full solid angle to allow the emergence of large-scale convective modes. Due to core contraction and the concomitant acceleration of nuclear burning, the convective Mach number increases to ~0.1 at collapse, and an l=2 mode emerges shortly before the end of the simulation. Aside from a growth of the oxygen shell from 0.51 to 0.56 solar masses due to entrainment from the carbon shell, the convective flow is reasonably well described by mixing length theory, and the dominant scales are compatible with estimates from linear stability analysis. We deduce that artificial changes in the physics, such as accelerated core contraction, can have precarious consequences for the state of convection at collapse. We argue that scaling laws for the convective velocities and eddy sizes furnish good estimates for the state of shell convection at collapse and develop a simple analytic theory for the impact of convective seed perturbations on shock revival in the ensuing supernova. We predict a reduction of the critical luminosity for explosion by 12–24% due to seed asphericities for our 3D progenitor model relative to the case without large seed perturbations.
Abstract:
The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave/boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device called micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs have the ability to alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high-speed fluid, which improves the boundary layer health when subjected to flow disturbances. Due to their small size, µVGs are embedded in the boundary layer, which provides reduced drag compared to traditional vortex generators, while they are cost-effective, physically robust and do not require a power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat-plate boundary layer with an impinging oblique shock and downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use Large Eddy Simulation without a subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and the primary vortices generated from the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly-resolved RANS computations.
The experiments and the LES results indicate that the micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated including micro-ramps and micro-vanes. The results showed that vortices generated from µVGs can partially eliminate shock induced flow separation and can continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in thinner downstream displacement thickness in comparison to the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation especially after the shock interaction. In addition, the close spanwise distance between each vortex for the ramp geometry causes the vortex cores to move upwards from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects. Two hybrid concepts, named “thick-vane” and “split-ramp”, were also studied where the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. 
These geometries behaved similarly to the micro-vanes in terms of the streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the effect of Mach number on flow past the micro-ramps (h ≈ 0.5δ) is examined in a supersonic boundary layer at M = 1.4, 2.2 and 3.0, but with no shock waves present. The LES results indicate that micro-ramps have a greater impact at lower Mach number near the device, but their influence decays faster than in the higher Mach number cases. This may be due to the additional dissipation caused by the primary vortices with smaller effective diameter at the lower Mach number, such that their coherency is easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall had similar growth, indicating weak correlation with the Mach number; however, the spanwise distance between the two counter-rotating cores increases further at lower Mach number. Finally, various µVGs, including the micro-ramp, the split-ramp and a new hybrid concept, the “ramped-vane”, are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, where a larger ramped-vane with an increased trailing-edge gap yielded a fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices.
Additional benefits include negligible device drag, along with reductions in displacement thickness and shape factor compared to the other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also showed decreased amplitudes of the pressure fluctuations downstream of the shock.
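The incompressible shape factor H = δ*/θ used above as a measure of boundary layer health can be sketched numerically. The 1/7th-power profile below is a generic turbulent profile chosen for illustration, not the measured profiles from this study; for that profile the analytic result is H = 9/7.

```python
# Shape factor H = delta* / theta from a normalized velocity profile
# u/U = profile(eta), with eta = y/delta in [0, 1].

def shape_factor(profile, n=100_000):
    """Midpoint-rule integrals of displacement and momentum thickness."""
    d_star = theta = 0.0
    dy = 1.0 / n
    for i in range(n):
        eta = (i + 0.5) * dy
        u = profile(eta)
        d_star += (1.0 - u) * dy          # displacement thickness / delta
        theta += u * (1.0 - u) * dy       # momentum thickness / delta
    return d_star / theta

# Generic 1/7th-power turbulent profile; analytic H is 9/7 ~ 1.286.
H = shape_factor(lambda eta: eta ** (1.0 / 7.0))
print(f"H = {H:.3f}")
```

Lower H indicates a fuller, healthier profile; separation-prone turbulent layers show H rising well above this value, which is why the abstract uses H to quantify the benefit of the micro-ramps.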
Abstract:
A methodology has been developed and presented to enable the use of small to medium scale acoustic hover facilities for the quantitative measurement of rotor impulsive noise. The methodology was applied to the University of Maryland Acoustic Chamber, resulting in accurate measurements of High Speed Impulsive (HSI) noise for rotors running at tip Mach numbers between 0.65 and 0.85 – with accuracy increasing as the tip Mach number was increased. Several factors contributed to the success of this methodology, including:
• High Speed Impulsive (HSI) noise is characterized by very distinct pulses radiated from the rotor. The pulses radiate high-frequency energy – but the energy is contained in short-duration time pulses.
• The first reflections from these pulses can be tracked (using ray theory) and, through adjustment of the microphone position and suitably applied acoustic treatment at the reflecting surface, reduced to small levels. A computer code was developed that automates this process. The code also tracks first-bounce reflection timing, making it possible to position the first-bounce reflections outside of a measurement window.
• Using a rotor with a small number of blades (preferably one) reduces the number of interfering first-bounce reflections and generally improves the measured signal fidelity.
The methodology will help the gathering of quantitative hovering rotor noise data in less than optimal acoustic facilities and thus enable basic rotorcraft research and rotor blade acoustic design.
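The ray-theory bookkeeping behind first-bounce tracking reduces, in the simplest case, to an image-source construction. The sketch below is an illustration of that geometry only (a single rigid floor, made-up heights and distances), not the chamber code described in the abstract.

```python
# First-bounce timing via an image source: a rigid floor at z = 0,
# a source at height h_s, and a microphone at height h_m a horizontal
# distance r away. All geometry is illustrative.
import math

C_SOUND = 343.0  # speed of sound, m/s (assumed room temperature)

def direct_and_reflected_delay(r, h_s, h_m):
    """Return (direct arrival time, first-bounce arrival time) in seconds."""
    direct = math.hypot(r, h_s - h_m)
    # Mirroring the source below the floor gives the reflected path length.
    reflected = math.hypot(r, h_s + h_m)
    return direct / C_SOUND, reflected / C_SOUND

t_d, t_r = direct_and_reflected_delay(r=3.0, h_s=1.5, h_m=1.2)
print(f"first bounce arrives {1e3 * (t_r - t_d):.2f} ms after the direct pulse")
```

Because HSI noise arrives as short pulses, a gap of a few milliseconds like this is enough to place the first bounce outside the measurement window, which is the strategy the abstract describes.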
Abstract:
In this research the recovery of a DQPSK signal will be demonstrated using a single Mach-Zehnder Interferometer (MZI). By changing the phase delay in one of the arms, it will be shown that different delays produce different output levels. It will also be shown that, with a certain level of phase shift, the DQPSK signal can be converted into four different, equally spaced optical power levels, with each decoded level representing one of the four possible bit permutations. By using this additional phase shift in one of the arms, the number of MZIs required for decoding can be reduced from two to one.
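A small sketch of how a single bias phase can map the four DQPSK phase differences onto four equally spaced power levels. The idealized single-output MZI response P = (1 + cos(Δφ + bias))/2 and the analytically chosen bias below are illustrative assumptions, not the thesis's reported operating point.

```python
# Idealized single-output MZI response to the four DQPSK phase
# differences {0, pi/2, pi, 3pi/2}. Choosing bias = atan(1/3) makes
# cos(bias) = 3*sin(bias), which spaces the four levels equally.
import math

BIAS = math.atan(1.0 / 3.0)

def mzi_power(dphi):
    """Normalized output power for differential phase dphi."""
    return 0.5 * (1.0 + math.cos(dphi + BIAS))

levels = sorted(mzi_power(k * math.pi / 2) for k in range(4))
spacings = [b - a for a, b in zip(levels, levels[1:])]
print([round(p, 3) for p in levels])   # four distinct, equally spaced levels
```

With this bias the four levels are (1 ± s)/2 and (1 ± 3s)/2 with s = 1/√10, so each decoded level corresponds to one of the four bit permutations, as the abstract describes.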
Abstract:
A new method for estimating the time to colonization with methicillin-resistant Staphylococcus aureus (MRSA) in patients is developed in this paper. The time to MRSA colonization is modelled using a Bayesian smoothing approach for the hazard function. Two prior models are discussed in this paper: the first-difference prior and the second-difference prior. The second-difference prior model gives smoother estimates of the hazard function and, when applied to data from an intensive care unit (ICU), clearly shows an increasing hazard up to day 13, then a decreasing hazard. The results demonstrate that the hazard is not constant and provide a useful quantification of the effect of length of stay on the risk of MRSA colonization.
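The second-difference prior penalizes curvature in the (log-)hazard over consecutive days, which is why it yields smoother estimates than the first-difference prior. A minimal sketch of that roughness penalty follows; the hazard values are invented for illustration and this is not the paper's fitting code.

```python
# Second-difference (random-walk-of-order-2) roughness penalty: the prior
# favours log-hazard sequences whose second differences are small.

def second_difference_penalty(log_hazard):
    """Sum of squared second differences, penalizing curvature."""
    return sum(
        (log_hazard[i + 1] - 2.0 * log_hazard[i] + log_hazard[i - 1]) ** 2
        for i in range(1, len(log_hazard) - 1)
    )

smooth = [0.0, 0.1, 0.2, 0.3, 0.4]   # linear trend: essentially zero penalty
rough = [0.0, 0.4, 0.0, 0.4, 0.0]    # day-to-day oscillation: heavily penalized
print(second_difference_penalty(smooth), second_difference_penalty(rough))
```

In the Bayesian formulation this penalty appears (scaled by a smoothing precision) in the log-prior, so posterior hazard estimates trade data fit against day-to-day smoothness.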
Abstract:
Knowledge of particle emission characteristics associated with forest fires and, in general, biomass burning is becoming increasingly important due to the impact of these emissions on human health. Of particular importance is developing a better understanding of the size distribution of particles generated from forest combustion under different environmental conditions, as well as the provision of emission factors for different particle size ranges. This study was aimed at quantifying particle emission factors, under controlled laboratory conditions, for four types of wood found in South East Queensland forests: Spotted Gum (Corymbia citriodora), Red Gum (Eucalyptus tereticornis), Blood Gum (Eucalyptus intermedia), and Iron Bark (Eucalyptus decorticans). The experimental set-up included a modified commercial stove connected to a dilution system designed for the conditions of the study. Measurements of particle number size distribution and concentration resulting from the burning of woods with a relatively homogeneous moisture content (in the range of 15 to 26%) and for different rates of burning were performed using a TSI Scanning Mobility Particle Sizer (SMPS) in the size range from 10 to 600 nm and a TSI DustTrak for PM2.5. The results, in terms of the relationship between particle number size distribution and burning conditions for the different species, show that particle number emission factors and PM2.5 mass emission factors depend on the type of wood and the burning rate (fast or slow). The average particle number emission factors for fast-burning conditions are in the range of 3.3 × 10^15 to 5.7 × 10^15 particles/kg, and for PM2.5 are in the range of 139 to 217 mg/kg.
Abstract:
The measurement of submicrometre (< 1.0 µm) and ultrafine (diameter < 0.1 µm) particle number concentrations has attracted attention since the last decade because the potential health impacts associated with exposure to these particles can be more significant than those due to exposure to larger particles. At present, ultrafine particles are not regularly monitored and they are yet to be incorporated into air quality monitoring programs. As a result, very few studies have analysed long-term and spatial variations in ultrafine particle concentration, and none have been conducted in Australia. To address this gap in scientific knowledge, the aim of this research was to investigate the long-term trends and seasonal variations in particle number concentrations in Brisbane, Australia. Data collected over a five-year period were analysed using weighted regression models. Monthly mean concentrations in the morning (6:00-10:00) and the afternoon (16:00-19:00) were plotted against time in months, using the monthly variance as the weights. During the five-year period, submicrometre and ultrafine particle concentrations increased in the morning by 105.7% and 81.5%, respectively, whereas in the afternoon there was no significant trend. The morning concentrations were associated with fresh traffic emissions and the afternoon concentrations with the background. The statistical tests applied to the seasonal models, on the other hand, indicated that there was no seasonal component. The spatial variation in size distribution in a large urban area was investigated using particle number size distribution data collected at nine different locations during different campaigns. The size distributions were represented by the modal structures and cumulative size distributions. Particle number peaked at around 30 nm, except at an isolated site dominated by diesel trucks, where the particle number peaked at around 60 nm.
It was found that ultrafine particles contributed 82%-90% of the total particle number. At the sites dominated by petrol vehicles, nanoparticles (< 50 nm) contributed 60%-70% of the total particle number, and at the site dominated by diesel trucks they contributed 50%. Although the sampling campaigns took place during different seasons and were of varying duration, these variations did not have an effect on the particle size distributions. The results suggested that the distributions were instead affected by differences in traffic composition and distance to the road. To investigate the occurrence of nucleation events, that is, secondary particle formation from gaseous precursors, particle size distribution data collected over a 13-month period during 5 different campaigns were analysed. The study area was a complex urban environment influenced by anthropogenic and natural sources. The study introduced a new application of time series differencing for the identification of nucleation events. To evaluate the conditions favourable to nucleation, the meteorological conditions and gaseous concentrations prior to and during nucleation events were recorded. Gaseous concentrations did not exhibit a clear pattern of change in concentration. It was also found that nucleation was associated with sea breeze and long-range transport. The implications of this finding are that, whilst vehicles are the most important source of ultrafine particles, sea breeze and aged gaseous emissions play a more important role in secondary particle formation in the study area.
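The weighted regression described above fits monthly mean concentrations against time with inverse-variance weights. A minimal closed-form sketch follows; the monthly values are synthetic stand-ins, not the Brisbane data.

```python
# Weighted least squares slope for y = a + b*t with weights w = 1/variance,
# mirroring the use of monthly variances as regression weights.

def weighted_slope(t, y, var):
    """Closed-form WLS slope estimate."""
    w = [1.0 / v for v in var]
    sw = sum(w)
    t_bar = sum(wi * ti for wi, ti in zip(w, t)) / sw
    y_bar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (ti - t_bar) * (yi - y_bar) for wi, ti, yi in zip(w, t, y))
    den = sum(wi * (ti - t_bar) ** 2 for wi, ti in zip(w, t))
    return num / den

months = [0, 1, 2, 3, 4, 5]
means = [10.0, 10.8, 12.1, 12.9, 14.2, 15.1]   # synthetic upward trend
variances = [1.0, 1.0, 2.0, 2.0, 1.0, 1.0]     # noisier months downweighted
print(f"trend: {weighted_slope(months, means, variances):.2f} units/month")
```

Downweighting months with high variance keeps noisy periods from dominating the trend estimate, which is the rationale for using the monthly variance as the weight.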
Abstract:
Having flexible notions of the unit (e.g., 26 ones can be thought of as 2.6 tens, 1 ten 16 ones, 260 tenths, etc.) should be a major focus of elementary mathematics education. However, these powerful notions are often relegated to computations where the major emphasis is on "getting the right answer", so that procedural knowledge rather than conceptual knowledge becomes the primary focus. This paper reports on 22 high-performing students' reunitising processes, ascertained from individual interviews on tasks requiring unitising, reunitising and regrouping; errors were categorised to depict particular thinking strategies. The results show that, even for high-performing students, regrouping is a cognitively complex task. This paper analyses this complexity and draws inferences for teaching.
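The abstract's flexible-unit idea can be made concrete with a tiny sketch: the same count re-expressed in different place-value units. The function name and examples are illustrative, not taken from the paper's tasks.

```python
# Re-expressing a count of ones in a different place-value unit,
# e.g. tens (unit_size = 10) or tenths (unit_size = 0.1).

def reunitise(ones, unit_size):
    """Return how many of the given unit make up the same quantity."""
    return ones / unit_size

print(reunitise(26, 10))    # 26 ones seen as tens
print(reunitise(26, 0.1))   # 26 ones seen as tenths
```

Regrouping such as "26 ones = 1 ten 16 ones" is the harder, non-canonical case the paper focuses on, since it splits the quantity across two units rather than rescaling it into one.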