997 results for Turbulent Modeling
Abstract:
Despite their astrophysical significance, as a major contributor to cosmic nucleosynthesis and as distance indicators in observational cosmology, Type Ia supernovae lack a theoretical explanation. Not only is the explosion mechanism complex due to the interaction of (potentially turbulent) hydrodynamics and nuclear reactions, but even the initial conditions for the explosion are unknown, and various progenitor scenarios have been proposed. After summarizing some general aspects of Type Ia supernova modeling, recent simulations of our group are discussed. With a modeling sequence that starts (in some cases) from the progenitor evolution and follows the explosion hydrodynamics and nucleosynthesis, we connect to the formation of the observables through radiation transport in the ejecta cloud. This allows us to analyze several models and to compare their outcomes with observations. While pure deflagrations of Chandrasekhar-mass white dwarfs and violent mergers of two white dwarfs lead to peculiar events (which may, however, find their correspondence in the observed sample of SNe Ia), only delayed detonations in Chandrasekhar-mass white dwarfs or sub-Chandrasekhar-mass explosions remain promising candidates for explaining normal Type Ia supernovae. © 2011 Elsevier B.V. All rights reserved.
Abstract:
Using a novel numerical method at unprecedented resolution, we demonstrate that structures of small to intermediate scale in rotating, stratified flows are intrinsically three-dimensional. Such flows are characterized by vortices (spinning volumes of fluid), regions of large vorticity gradients, and filamentary structures at all scales. It is found that such structures have predominantly three-dimensional dynamics below a horizontal scale L ≲ L_R, where L_R is the so-called Rossby radius of deformation, equal to the characteristic vertical scale of the fluid H divided by the ratio f/N of the rotational and buoyancy frequencies. The breakdown of two-dimensional dynamics at these scales is attributed to the so-called "tall-column instability" [D. G. Dritschel and M. de la Torre Juárez, J. Fluid Mech. 328, 129 (1996)], which is active on columnar vortices that are tall after scaling by f/N, or, equivalently, that are narrow compared with L_R. Moreover, this instability eventually leads to a simple relationship between typical vertical and horizontal scales: for each vertical wave number (apart from the vertically averaged, barotropic component of the flow) the average horizontal wave number is equal to f/N times the vertical wave number. The practical implication is that three-dimensional modeling is essential to capture the behavior of rotating, stratified fluids. Two-dimensional models are not valid for scales below L_R. ©1999 American Institute of Physics.
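The scale relations stated in this abstract reduce to two one-line formulas: the Rossby radius L_R = H / (f/N) = N H / f, and an average horizontal wavenumber k_h = (f/N) k_v for each vertical wavenumber k_v. A minimal sketch; the midlatitude-like numbers below are illustrative assumptions, not values from the paper:

```python
def rossby_radius(H, f, N):
    """Rossby radius of deformation L_R = H / (f/N) = N * H / f."""
    return N * H / f

def horizontal_wavenumber(k_v, f, N):
    """Average horizontal wavenumber implied by the scaling k_h = (f/N) * k_v."""
    return (f / N) * k_v

# Illustrative midlatitude values (assumptions, not the paper's data):
f = 1.0e-4   # Coriolis parameter [1/s]
N = 1.0e-2   # buoyancy frequency [1/s]
H = 1.0e4    # characteristic vertical scale [m]

print(rossby_radius(H, f, N))            # L_R = 1e6 m, i.e. 1000 km
print(horizontal_wavenumber(2.0, f, N))  # k_h for k_v = 2 [1/m]
```

With f/N = 0.01, structures narrower than about 1000 km in this toy setting would fall in the intrinsically three-dimensional regime.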
Abstract:
The formulation and implementation of LEAF-2, the Land Ecosystem–Atmosphere Feedback model, which comprises the representation of land–surface processes in the Regional Atmospheric Modeling System (RAMS), is described. LEAF-2 is a prognostic model for the temperature and water content of soil, snow cover, vegetation, and canopy air, and includes turbulent and radiative exchanges between these components and with the atmosphere. Subdivision of a RAMS surface grid cell into multiple areas of distinct land-use types is allowed, with each subgrid area, or patch, containing its own LEAF-2 model, and each patch interacts with the overlying atmospheric column with a weight proportional to its fractional area in the grid cell. A description is also given of TOPMODEL, a land hydrology model that represents surface and subsurface downslope lateral transport of groundwater. Details of the incorporation of a modified form of TOPMODEL into LEAF-2 are presented. Sensitivity tests of the coupled system are presented that demonstrate the potential importance of the patch representation and of lateral water transport in idealized model simulations. Independent studies that have applied LEAF-2 and verified its performance against observational data are cited. Linkage of RAMS and TOPMODEL through LEAF-2 creates a modeling system that can be used to explore the coupled atmosphere–biophysical–hydrologic response to altered climate forcing at local watershed and regional basin scales.
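The patch aggregation described above amounts to an area-weighted average of the per-patch surface fluxes within a grid cell. A minimal sketch of that weighting; the flux values and land-use mix are hypothetical, not LEAF-2 output:

```python
def grid_cell_flux(patch_fluxes, patch_fractions):
    """Aggregate per-patch surface fluxes to the grid cell, weighting each
    patch by its fractional area (the patch approach described for LEAF-2)."""
    if abs(sum(patch_fractions) - 1.0) > 1e-9:
        raise ValueError("patch fractions must sum to 1")
    return sum(f * w for f, w in zip(patch_fluxes, patch_fractions))

# Hypothetical cell: 60% forest, 30% cropland, 10% urban,
# with per-patch sensible heat fluxes in W/m^2.
print(grid_cell_flux([120.0, 80.0, 200.0], [0.6, 0.3, 0.1]))
```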
Abstract:
The Weather Research and Forecasting model was applied to analyze variations in the planetary boundary layer (PBL) structure over Southeast England, including central and suburban London. The parameterizations and predictive skill of two nonlocal-mixing PBL schemes, YSU and ACM2, and two local-mixing PBL schemes, MYJ and MYNN2, were evaluated over a variety of stability conditions, with model predictions at a 3 km grid spacing. The PBL height predictions, which are critical for scaling turbulence and diffusion in meteorological and air quality models, show significant inter-scheme variance (> 20%), and the reasons are presented. ACM2 diagnoses the PBL height thermodynamically using the bulk Richardson number method, which leads to good agreement with the lidar data for both unstable and stable conditions. The modeled vertical profiles in the PBL, such as wind speed, turbulent kinetic energy (TKE), and heat flux, exhibit large spread across the PBL schemes. The TKE predicted by MYJ was found to be too small and to show much less diurnal variation than observed over London. MYNN2 produces better TKE predictions at low levels than MYJ, but its turbulent length scale increases with height in the upper part of the strongly convective PBL, where it should decrease. The local PBL schemes considerably underestimate the entrainment heat fluxes for convective cases. The nonlocal PBL schemes exhibit stronger mixing in the mean wind fields under convective conditions than the local PBL schemes and agree better with large-eddy simulation (LES) studies.
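The bulk Richardson number diagnosis mentioned for ACM2 can be sketched in a few lines: compute Rib between the surface and each level, and take the first height where it crosses a critical value. The critical value of 0.25 and the sounding below are common textbook assumptions, not details of the ACM2 implementation:

```python
def bulk_richardson(z, thetav, thetav_sfc, u, v, g=9.81):
    """Bulk Richardson number between the surface and height z,
    using virtual potential temperature thetav and wind components u, v."""
    wind2 = u * u + v * v
    if wind2 == 0.0:
        return float("inf")
    return (g / thetav_sfc) * (thetav - thetav_sfc) * z / wind2

def pbl_height(z, thetav, u, v, ric=0.25):
    """First height where Rib exceeds the critical value ric (0.25 assumed),
    with linear interpolation between the bracketing levels."""
    thetav_sfc = thetav[0]
    prev_z, prev_rib = z[0], 0.0
    for k in range(1, len(z)):
        rib = bulk_richardson(z[k], thetav[k], thetav_sfc, u[k], v[k])
        if rib >= ric:
            frac = (ric - prev_rib) / (rib - prev_rib)
            return prev_z + frac * (z[k] - prev_z)
        prev_z, prev_rib = z[k], rib
    return z[-1]

# Hypothetical sounding (heights in m, theta_v in K, winds in m/s):
z      = [10.0, 100.0, 300.0, 600.0, 1000.0]
thetav = [300.0, 300.1, 300.5, 302.0, 305.0]
u      = [2.0, 4.0, 6.0, 7.0, 8.0]
v      = [0.0, 0.0, 0.0, 0.0, 0.0]
print(pbl_height(z, thetav, u, v))
```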
Abstract:
Model intercomparisons have identified important deficits in the representation of the stable boundary layer by the turbulence parametrizations used in current weather and climate models. However, detrimental impacts of more realistic schemes on the large-scale flow have hindered progress in this area. Here we implement a total turbulent energy (TTE) scheme in the climate model ECHAM6. The TTE scheme considers the effects of Earth's rotation and static stability on the turbulence length scale. In contrast to the previously used turbulence scheme, it also implicitly represents the entrainment flux in a dry convective boundary layer. Reducing the previously exaggerated surface drag in stable boundary layers indeed causes an increase in southern-hemisphere zonal winds and large-scale pressure gradients beyond observed values. These biases can be largely removed by increasing the parametrized orographic drag. Reducing the neutral-limit turbulent Prandtl number warms and moistens low-latitude boundary layers and acts to reduce longstanding radiation biases in the stratocumulus regions, the Southern Ocean, and the equatorial cold tongue that are common to many climate models.
Abstract:
Although the estimation of turbulent transport parameters using inverse methods is not new, there is little evaluation of the method in the literature. Here, it is shown that extended observation of the broad-scale hydrography by Argo provides a path to improved estimates of regional turbulent transport rates. Results from a 20-year ocean state estimate produced with the ECCO v4 non-linear inverse modeling framework provide supporting evidence. Turbulent transport parameter maps are estimated under the constraint of fitting the extensive collection of Argo profiles collected through 2011. The adjusted parameters dramatically reduce misfits to in situ profiles as compared with earlier ECCO solutions. They also yield a clear reduction in the model drift away from observations over multi-century simulations, both for assimilated variables (temperature and salinity) and independent variables (biogeochemical tracers). Despite the minimal constraints imposed specifically on the estimated parameters, their geography is physically plausible and exhibits close connections with the upper-ocean stratification as observed by Argo. The estimated parameter adjustments furthermore have first-order impacts on upper-ocean stratification and mixed layer depths over 20 years. These results identify the constraint of fitting Argo profiles as an effective observational basis for estimating regional turbulent transport rates. Uncertainties and further improvements of the method are discussed.
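At its core, "fitting the Argo profiles" means minimizing a weighted least-squares misfit between modeled and observed profiles as the transport parameters are adjusted. A toy sketch of such a cost function; the profile values and uncertainties below are hypothetical, not ECCO or Argo data:

```python
def misfit(model_vals, obs_vals, obs_sigma):
    """Weighted least-squares model-data misfit, J = sum(((m - o)/sigma)^2),
    of the kind minimized when adjusting turbulent transport parameters."""
    return sum(((m - o) / s) ** 2
               for m, o, s in zip(model_vals, obs_vals, obs_sigma))

obs         = [18.0, 12.0, 6.0]   # hypothetical observed profile [deg C]
sigma       = [0.5, 0.5, 0.5]     # observational uncertainties
first_guess = [19.5, 13.0, 5.0]   # model before parameter adjustment
adjusted    = [18.2, 12.3, 5.8]   # model after parameter adjustment

print(misfit(first_guess, obs, sigma))
print(misfit(adjusted, obs, sigma))
```

A successful parameter adjustment is one that drives this cost down, as the abstract reports for the ECCO v4 solution relative to earlier ones.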
Abstract:
This paper presents numerical modeling of a turbulent natural gas flow through a non-premixed industrial burner of a slab reheating furnace. The furnace is equipped with diffusion side-swirl burners capable of using either natural gas or coke oven gas through the same nozzles. The study focuses on one of the burners of the preheating zone. Computational Fluid Dynamics simulation has been used to predict the turbulent flow through the burner orifice. Flow rate and pressure upstream of the burner were validated against experimental measurements. The outcomes of the numerical modeling are analyzed for the different turbulence models in terms of pressure drop, velocity profiles, and orifice discharge coefficient. The standard, RNG, and realizable k-epsilon models and the Reynolds Stress Model (RSM) have been used. The main purpose of the numerical investigation is to determine which turbulence model most consistently reproduces the experimental results for the flow through an industrial non-premixed burner orifice. The comparisons indicate that all the models tested satisfactorily represent the experimental conditions. However, the realizable k-epsilon model seems to be the most appropriate choice, since it provides results quite similar to those of the RSM and RNG k-epsilon models while requiring only slightly more computational power than the standard k-epsilon model. (C) 2014 Elsevier Ltd. All rights reserved.
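The orifice discharge coefficient compared across these turbulence models is the ratio of the actual mass flow to the ideal, loss-free rate through the orifice. A minimal sketch for incompressible flow; the orifice size, gas density, pressure drop, and flow rate below are illustrative assumptions, not the paper's data:

```python
import math

def discharge_coefficient(m_dot, area, rho, dp):
    """Cd = actual mass flow rate / ideal rate A*sqrt(2*rho*dp)
    (incompressible orifice flow)."""
    return m_dot / (area * math.sqrt(2.0 * rho * dp))

area = math.pi * 0.010 ** 2   # hypothetical 20 mm diameter orifice [m^2]
cd = discharge_coefficient(m_dot=0.0202,   # measured mass flow [kg/s]
                           area=area,
                           rho=0.68,       # natural gas density [kg/m^3]
                           dp=5000.0)      # pressure drop [Pa]
print(round(cd, 3))
```

Each simulated velocity field yields a predicted mass flow, so each turbulence model implies its own Cd that can be checked against the measured one.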
Abstract:
The classic conservative approach to thermal process design can lead to over-processing, especially in laminar flow, where significant distributions of temperature and residence time occur. In order to optimize quality retention, a more comprehensive model is required. A model comprising differential equations for mass and heat transfer is proposed for simulating the continuous thermal processing of a non-Newtonian food in a tubular system. The model takes into account the contributions of the heating and cooling sections, the heat exchange with the ambient air, and the effective diffusion associated with non-ideal laminar flow. The case study of soursop juice processing was used to test the model. Various simulations were performed to evaluate the effect of the model assumptions. A substantial difference in the predicted lethality was observed between the classic approach and the proposed model. The main advantage of the model is its flexibility to represent different aspects with a small computational time, making it suitable for process evaluation and design. (C) 2012 Elsevier Ltd. All rights reserved.
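The lethality compared between the two approaches is the standard time integral F = ∫ 10^((T − Tref)/z) dt over the product's time-temperature history. A minimal sketch using the trapezoidal rule; the default Tref = 121.1 °C and z = 10 °C are generic sterilization reference values used as placeholders, not the paper's process conditions:

```python
def lethality(times, temps, t_ref=121.1, z_val=10.0):
    """Process lethality F = integral of 10^((T - t_ref)/z_val) dt,
    integrated with the trapezoidal rule over a time-temperature
    history (times in minutes, temps in deg C)."""
    F = 0.0
    for i in range(1, len(times)):
        l0 = 10.0 ** ((temps[i - 1] - t_ref) / z_val)
        l1 = 10.0 ** ((temps[i] - t_ref) / z_val)
        F += 0.5 * (l0 + l1) * (times[i] - times[i - 1])
    return F

# Hypothetical heat-hold-cool history:
t = [0.0, 2.0, 6.0, 8.0]
T = [90.0, 121.1, 121.1, 95.0]
print(lethality(t, T))
```

Because the integrand is exponential in temperature, a distributed temperature field (as in laminar flow) can give a very different F than the single "coldest point" history assumed by the classic approach.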
Abstract:
A detailed numerical simulation of ethanol turbulent spray combustion in a round jet flame is presented in this article. The focus is to propose a robust mathematical model with relatively low-complexity sub-models to reproduce the main characteristics of the coupling between both phases, such as turbulence modulation, turbulent droplet dissipation, and the evaporative cooling effect. A RANS turbulence model is implemented. Special features of the model include an Eulerian–Lagrangian procedure under fully two-way coupling and a modified flame sheet model with a joint mixture fraction–enthalpy β-PDF. Reasonable agreement between measured and computed mean profiles of gas-phase temperature and droplet size distributions is achieved. Deviations found between measured and predicted mean velocity profiles are attributed to the turbulent combustion modeling adopted.
Abstract:
This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines different sources of information in a statistically optimal way. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer backscattered signal. The BSM then identifies the discontinuity edge most likely to mark the BL height. Information from contemporaneous physical boundary layer model simulations and from a climatological dataset of BL height evolution is combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary layer structure in stable conditions. Functions from Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. Surface fluxes of heat and momentum are the best-fit parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, most evident in cases where the bulk Richardson number is quite large. This analysis supports earlier results indicating that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One practical consequence is that boundary layer height diagnostic formulations that rely mainly on surface fluxes disagree with heights obtained by inspecting co-located radiosounding profiles.
Abstract:
Ozone (O3) is an important oxidizing and greenhouse gas in the Earth's atmosphere. It influences climate, air quality, human health, and vegetation. Ecosystems such as forests are sinks for tropospheric ozone and will become more heterogeneous in the future as a result of storms, plant pests, and changes in land use. These heterogeneities can be expected to reduce the uptake of greenhouse gases and to cause significant feedbacks on the climate system. Atmosphere-biosphere exchange of ozone is controlled by stomatal uptake, deposition on plant surfaces and soils, and chemical transformations. Understanding these processes and quantifying the ozone exchange for different ecosystems are prerequisites for scaling up from local measurements to regional ozone fluxes.
Vertical turbulent ozone fluxes are measured with the eddy covariance method. Using closed-path eddy covariance systems based on fast chemiluminescence ozone sensors can introduce errors into the flux measurement. A direct comparison of ozone sensors mounted side by side provided insight into the factors that influence the accuracy of the measurements. Systematic differences between individual sensors and the influence of different inlet tube lengths were investigated by analyzing frequency spectra and determining correction factors for the ozone fluxes.
The experimentally determined correction factors showed no significant difference from correction factors derived from theoretical transfer functions, confirming the applicability of the theoretically derived factors for correcting ozone fluxes.
In summer 2011, measurements were carried out within the EGER (ExchanGE processes in mountainous Regions) project to contribute to a better understanding of atmosphere-biosphere ozone exchange in disturbed ecosystems. Ozone fluxes were measured on both sides of a forest edge separating a spruce forest from a windthrow. On the road-like clearing created by the storm "Kyrill" (2007), a secondary vegetation developed that differed in phenology and leaf physiology from the originally dominant spruce forest. The mean nighttime flux above the spruce forest was -6 to -7 nmol m-2 s-1 and decreased to -13 nmol m-2 s-1 around noon. The ozone fluxes showed a clear relationship to plant transpiration and CO2 uptake, indicating that during the day most of the ozone was taken up by plant stomata. The relatively high nighttime deposition was caused by non-stomatal processes. Throughout the diurnal cycle, deposition above the forest was roughly twice as high as above the clearing, a ratio that matched the ratio of the plant area index (PAI). The disturbance of the ecosystem thus reduced the capacity of the vegetation to act as a sink for tropospheric ozone. The marked difference between the ozone fluxes of the two vegetation types highlights the challenge of regionalizing ozone fluxes over heterogeneously forested areas.
The measured fluxes were furthermore compared with simulations performed with the chemistry model MLC-CHEM. To evaluate the model with respect to the calculation of ozone fluxes, measured and modeled fluxes from two positions in the EGER area were used. Although the magnitudes of the fluxes agreed, the results showed a significant difference between measured and modeled fluxes. Moreover, the difference depended clearly on relative humidity, decreasing with increasing humidity, which showed that the model requires further improvement before it can be used for comprehensive ozone flux studies.
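The eddy covariance method used for these flux measurements obtains the turbulent vertical flux as the covariance of fast measurements of vertical wind w and scalar concentration c over an averaging interval (Reynolds decomposition). A minimal sketch; the four-sample series is a toy illustration (real intervals contain many thousands of samples, and the inlet-tube attenuation corrections discussed above are omitted):

```python
def eddy_covariance_flux(w, c):
    """Turbulent vertical flux F = mean(w' * c'), the covariance of vertical
    wind w and scalar concentration c about their interval means."""
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean) for wi, ci in zip(w, c)) / n

# Toy series: updrafts carry low-ozone air, downdrafts carry high-ozone air,
# so the covariance (flux) is negative, i.e. deposition toward the surface.
w = [1.0, -1.0, 1.0, -1.0]   # vertical wind [m/s]
c = [9.0, 11.0, 9.0, 11.0]   # ozone concentration [nmol/mol]
print(eddy_covariance_flux(w, c))
```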
Abstract:
Sub-grid scale (SGS) models are required in large-eddy simulations (LES) to represent the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In the following work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs with different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics, and their adaptation to LES. Furthermore, two important SGS models are presented: one is the dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), while the other is the explicit algebraic SGS model (EASSM) by Marstorp et al. (2009). In addition, some details are given about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (SIMSON; Chevalier et al., 2007). The performance of the two aforementioned models is investigated by means of LES of a channel flow, with friction Reynolds numbers from Re_τ = 590 up to Re_τ = 5200, at relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. Results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved using the EASSM compared to the DEVM. Therefore, the EASSM combines accuracy and computational efficiency, implying a clear potential for industrial CFD usage.
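"Relatively coarse" can be made concrete in viscous (wall) units: with Re_tau = u_tau * delta / nu, the viscous length is nu/u_tau = delta/Re_tau, so a grid spacing dx maps to dx+ = dx * Re_tau / delta. A small sketch; the chosen spacing is illustrative, not the thesis's actual grids:

```python
def to_wall_units(dx, delta, re_tau):
    """Convert a grid spacing dx to wall units:
    dx+ = dx / (nu/u_tau) = dx * re_tau / delta."""
    return dx * re_tau / delta

# The same spacing, 1% of the channel half-height delta, at the two
# friction Reynolds numbers mentioned in the abstract: the higher
# Re_tau makes the identical grid far coarser in wall units.
print(to_wall_units(0.01, 1.0, 590.0))    # about 5.9 wall units
print(to_wall_units(0.01, 1.0, 5200.0))   # about 52 wall units
```

This is why the SGS model is stressed harder as Re_τ grows at fixed resolution, the regime where the abstract reports the EASSM outperforming the DEVM.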
Abstract:
The exhaustion, absolute absence, or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprises strong public concerns about pollution and greenhouse gas emissions. Given its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is given. It is concluded that the restrictions on resolution make modeling of the turbulence and combustion processes necessary. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out the strengths, deficiencies, and suitability of each. As a conclusion of this review, it appears that the only viable methodology for combustion modeling under the existing limitations is the use of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame speed kind. It is also concluded that, depending on the particular resolution restrictions of each problem and on its geometry, the use of different simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology that is able to overcome the deficiencies of the available models for problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure, and steam dilution on the laminar burning velocity. The formulation obtained is valid over a wider range of temperature, pressure, and steam dilution than any of the previously available formulations. On the other hand, a number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they were compared with experiments and ranked, with the outcome that the formulation due to Schmidt is the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance proves to be important for lean mixtures in which the turbulence intensity remains moderate; these conditions are relevant because they are typical of accidents in nuclear power plants. Therefore, a model is created to account for the instabilities, and specifically for the acoustic-parametric instability. This comprises the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The next task in this research was to apply the developed model to several problems of significance for industrial safety, followed by the analysis of the results and their comparison with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that occurred in reactor one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage; this is an indication of the importance of this type of investigation. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport sector and on nuclear safety, for both fission and fusion technologies.
Abstract:
The phase transition for turbulent diffusion, reported by Avellaneda and Majda [Avellaneda, M. & Majda, A. J. (1994) Philos. Trans. R. Soc. London A 346, 205-233, and several earlier papers], is traced to a modeling assumption in which the energy spectrum of the turbulent fluid is singularly dependent on the viscosity in the inertial range. Phenomenological models of turbulence and intermittency, by contrast, require that the energy spectrum be independent of the viscosity in the inertial range. When the energy spectrum is assumed to be consistent with the phenomenological models, there is no phase transition for turbulent diffusion.
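The viscosity-independent inertial-range spectrum required by the phenomenological models is the Kolmogorov form E(k) = C ε^(2/3) k^(-5/3). A minimal sketch; the Kolmogorov constant C ≈ 1.5 is a standard textbook value, not a quantity from this paper:

```python
def kolmogorov_spectrum(k, epsilon, c_k=1.5):
    """Inertial-range energy spectrum E(k) = C_K * epsilon^(2/3) * k^(-5/3).
    Note the absence of any viscosity dependence: only the dissipation
    rate epsilon and the wavenumber k enter."""
    return c_k * epsilon ** (2.0 / 3.0) * k ** (-5.0 / 3.0)

# Increasing the wavenumber by a factor of 8 reduces E(k) by
# 8^(5/3) = 32, for any epsilon and any viscosity:
e1 = kolmogorov_spectrum(1.0, epsilon=0.1)
e8 = kolmogorov_spectrum(8.0, epsilon=0.1)
print(e1 / e8)
```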
Abstract:
This paper describes the implementation of different non-local planetary boundary layer (PBL) schemes within the Regional Atmospheric Modeling System (RAMS) model. The two selected PBL parameterizations are the Medium-Range Forecast (MRF) PBL and its updated version, known as the Yonsei University (YSU) PBL. YSU is a first-order scheme that uses non-local eddy diffusivity coefficients to compute turbulent fluxes; it is based on the MRF scheme and improves it with an explicit treatment of entrainment. To evaluate the RAMS results for these PBL parameterizations, a series of numerical simulations have been performed and contrasted with results obtained using the Mellor and Yamada (MY) scheme, which is also widely used and is the standard PBL scheme in the RAMS model. The numerical study focuses on mesoscale circulation events during the summer, as these meteorological situations dominate this season of the year on the Western Mediterranean coast. In addition, the sensitivity of these PBL parameterizations to the initial soil moisture content is evaluated. The results show a warmer and moister PBL for the YSU scheme compared to both MRF and MY. The model also shows a tendency to overestimate the observed temperature and to underestimate the observed humidity for all PBL schemes when a low initial soil moisture content is used. The bias between the model and the observations is significantly reduced by moistening the initial soil of the corresponding run; varying this parameter thus has a positive effect and improves the simulated results relative to the observations. However, the wind speed over flatter terrain is still significantly overestimated, independently of the PBL scheme and the initial soil moisture used, although the degree of accuracy reproduced by RAMS varies across the sensitivity tests.