976 results for Numerical Models
Abstract:
Since the advent of automobiles, alcohol has been considered a possible engine fuel [1, 2]. With recent increased concern about the high price of crude oil, driven by fluctuating supply and demand, and about environmental issues, interest in alcohol-based fuels has grown [2, 3]. However, using pure alcohols, or blends with conventional fuels in high percentages, requires changes to engine and fuel-system design [2]. This creates a need for simple and accurate combustion models for conventional fuel-alcohol blends that can be used to develop parametric burn-rate and knock models for designing more efficient Spark-Ignited (SI) engines. To contribute to this understanding, numerical simulations were performed to obtain detailed characteristics of gasoline-ethanol blends with respect to Laminar Flame Speed (LFS), autoignition, and flame-wall interactions. The one-dimensional premixed flame code CHEMKIN® was applied to simulate burning velocity and autoignition characteristics using the freely propagating model and the closed homogeneous reactor model, respectively. Computational Fluid Dynamics (CFD) was used to obtain detailed flow, temperature, and species fields for flame-wall interactions. A semi-detailed, validated chemical kinetic model for a gasoline surrogate fuel developed by Andrae and Head [4] was used for the LFS and autoignition studies. For the quenching study, a skeletal chemical kinetic mechanism for the gasoline surrogate, with 50 species and 174 reactions, was used. The surrogate fuel was defined as a mixture of pure n-heptane, isooctane, and toluene. For the LFS study, the ethanol volume fraction was varied from 0 to 85%, initial pressure from 4 to 8 bar, initial temperature from 300 to 900 K, and dilution from 0 to 32%.
For the autoignition study, the ethanol volume fraction was varied from 0 to 85%, initial pressure from 20 to 60 bar, initial temperature from 800 to 1200 K, and dilution from 0 to 32%, at equivalence ratios of 0.5, 1.0, and 1.5, to represent the in-cylinder conditions of an SI engine. For the quenching study, three ethanol blends, namely E0, E25, and E85, were examined in detail at initial pressures of 8 atm and 17 atm; the initial wall temperature was taken to be 400 K. Quenching thicknesses and heat fluxes to the wall were computed. The laminar flame speed was found to increase with ethanol concentration and temperature but to decrease with pressure and dilution. The autoignition time increased with ethanol concentration at lower temperatures but decreased marginally at higher temperatures; it also decreased with pressure and equivalence ratio but increased with dilution. The average quenching thickness decreased with increasing ethanol concentration in the blend. Heat flux to the wall increased with ethanol percentage in the blend and with higher initial pressure, whereas it decreased with increasing dilution. Unburned hydrocarbon (UHC) and CO percentages were also found to decrease with ethanol concentration in the blend.
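The closed-homogeneous-reactor autoignition calculation described above can be illustrated, in highly simplified form, with a single-step Arrhenius thermal-runaway model. All parameter values below (A, Ea, q, the 400 K ignition criterion, and the pressure-like scaling p_scale) are hypothetical placeholders, not the Andrae and Head chemistry:

```python
import math

def ignition_delay(T0, p_scale=1.0, A=1.0e9, Ea=1.6e5, R=8.314,
                   q=2.0e3, dt=1.0e-6, T_rise=400.0, t_max=1.0):
    """Zero-dimensional single-step thermal-explosion model (explicit Euler).
    Returns the time [s] at which temperature has risen T_rise above T0,
    taken here as the autoignition delay; math.inf if no ignition by t_max."""
    T, t, y = T0, 0.0, 1.0          # y: unreacted fuel fraction
    while t < t_max:
        rate = p_scale * y * A * math.exp(-Ea / (R * T))  # reaction rate [1/s]
        y -= rate * dt               # fuel consumption
        T += q * rate * dt           # heat release raises temperature
        t += dt
        if T >= T0 + T_rise:
            return t
    return math.inf
```

With these placeholder values the delay shortens as the initial temperature or p_scale increases, mirroring the reported pressure trend; the low-temperature lengthening with ethanol content requires real blend chemistry and is not captured by a single-step model.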
Abstract:
The maintenance of genetic variation in a spatially heterogeneous environment has been one of the main research themes in theoretical population genetics. Despite considerable progress in understanding the consequences of spatially structured environments for genetic variation, many problems remain unsolved. One of them concerns the relationship between the number of demes, the degree of dominance, and the maximum number of alleles that can be maintained by selection in a subdivided population. In this work, we study the potential for maintaining genetic variation in a two-deme model with a deme-independent degree of intermediate dominance, which includes absence of G x E interaction as a special case. We present a thorough numerical analysis of a two-deme three-allele model, which allows us to identify dominance and selection patterns that harbor the potential for stable triallelic equilibria. The information gained by this approach is then used to construct an example in which existence and asymptotic stability of a fully polymorphic equilibrium can be proved analytically. Notably, in this example the parameter range in which three alleles can coexist is maximized for intermediate migration rates. Our results can be interpreted in a specialist-generalist context and (among others) show when two specialists can coexist with a generalist in two demes if the degree of dominance is deme-independent and intermediate. The dominance relation between the generalist allele and the specialist alleles plays a decisive role. We also discuss linear selection on a quantitative trait and show that G x E interaction is not necessary for the maintenance of more than two alleles in two demes.
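As a toy illustration of the kind of recursion analyzed above — a diallelic two-deme sketch, not the authors' three-allele model — consider deme-specific viability selection with deme-independent dominance h followed by symmetric migration. With opposing selection in the two demes and weak migration, both alleles are protected; all parameter values are hypothetical:

```python
def step(p, s, h, m):
    """One generation: viability selection within each deme, then symmetric
    migration. p: allele-A frequency in each deme; s: deme-specific selection
    coefficients; h: deme-independent dominance; m: migration rate."""
    post = []
    for pk, sk in zip(p, s):
        qk = 1.0 - pk
        w_AA, w_Aa, w_aa = 1.0 + sk, 1.0 + h * sk, 1.0
        w_bar = pk*pk*w_AA + 2.0*pk*qk*w_Aa + qk*qk*w_aa
        post.append(pk * (pk*w_AA + qk*w_Aa) / w_bar)  # selection update
    # symmetric migration mixes the two demes
    return [(1.0 - m)*post[0] + m*post[1],
            (1.0 - m)*post[1] + m*post[0]]

# opposing selection in the two demes, intermediate dominance, weak migration
p = [0.5, 0.5]
for _ in range(2000):
    p = step(p, s=(0.1, -0.1), h=0.5, m=0.05)
# both alleles persist at an interior (polymorphic) equilibrium
```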
Abstract:
We present a comprehensive analytical study of radiative transfer using the method of moments and include the effects of non-isotropic scattering in the coherent limit. Within this unified formalism, we derive the governing equations and solutions describing two-stream radiative transfer (which approximates the passage of radiation as a pair of outgoing and incoming fluxes), flux-limited diffusion (which describes radiative transfer in the deep interior) and solutions for the temperature-pressure profiles. Generally, the problem is mathematically under-determined unless a set of closures (Eddington coefficients) is specified. We demonstrate that the hemispheric (or hemi-isotropic) closure naturally derives from the radiative transfer equation if energy conservation is obeyed, while the Eddington closure produces spurious enhancements of both reflected light and thermal emission. We concoct recipes for implementing two-stream radiative transfer in stand-alone numerical calculations and general circulation models. We use our two-stream solutions to construct toy models of the runaway greenhouse effect. We present a new solution for temperature-pressure profiles with a non-constant optical opacity and elucidate the effects of non-isotropic scattering in the optical and infrared. We derive generalized expressions for the spherical and Bond albedos and the photon deposition depth. We demonstrate that the value of the optical depth corresponding to the photosphere is not always 2/3 (Milne's solution) and depends on a combination of stellar irradiation, internal heat and the properties of scattering in both the optical and the infrared. Finally, we derive generalized expressions for the total, net, outgoing and incoming fluxes in the convective regime.
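In schematic form (signs and notation vary with the convention for the optical depth, here increasing toward the interior), the coupled two-stream equations for the thermal problem read, with the closure entering only through the coupling coefficients:

```latex
\frac{dF_{\uparrow}}{d\tilde{\tau}} = \gamma_1 F_{\uparrow} - \gamma_2 F_{\downarrow} - 2\pi\,(1-\omega_0)\,B(T),
\qquad
\frac{dF_{\downarrow}}{d\tilde{\tau}} = \gamma_2 F_{\uparrow} - \gamma_1 F_{\downarrow} + 2\pi\,(1-\omega_0)\,B(T),
```

where $\omega_0$ is the single-scattering albedo and $B(T)$ the Planck function. In the isothermal, purely absorbing limit ($F_{\uparrow}=F_{\downarrow}=\pi B$) the right-hand sides vanish, as energy conservation requires, provided $\gamma_1-\gamma_2 = 2(1-\omega_0)$ — a condition satisfied, e.g., by hemispheric-type couplings of the form $\gamma_1 = 2-\omega_0(1+g_0)$ and $\gamma_2 = \omega_0(1-g_0)$ with scattering asymmetry $g_0$. This is a generic sketch of the structure, not the paper's exact notation.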
Abstract:
Previous studies of the sediments of Lake Lucerne have shown that massive subaqueous mass movements affecting unconsolidated sediments on lateral slopes are a common process in this lake, and, in view of historical reports describing damaging waves on the lake, it was suggested that tsunamis generated by mass movements represent a considerable natural hazard on the lakeshores. Newly performed numerical simulations combining two-dimensional, depth-averaged models for mass-movement propagation and for tsunami generation, propagation and inundation reproduce a number of reported tsunami effects. Four analysed mass-movement scenarios—three based on documented slope failures involving volumes of 5.5 to 20.8 × 10⁶ m³—show peak wave heights of several metres and maximum runup of 6 to more than 10 m in the directly affected basins, while effects in neighbouring basins are less drastic. The tsunamis cause large-scale inundation over distances of several hundred metres on flat alluvial plains close to the mass-movement source areas. Basins at the ends of the lake experience regular water-level oscillations with characteristic periods of several minutes. The vulnerability of potentially affected areas has increased dramatically since the times of the damaging historical events, recommending a thorough evaluation of the hazard.
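The two-dimensional, depth-averaged models referred to above are typically of shallow-water type; schematically (h: flow depth, u: depth-averaged velocity, b: basal topography, S_f: basal friction terms):

```latex
\frac{\partial h}{\partial t} + \nabla\cdot(h\mathbf{u}) = 0,
\qquad
\frac{\partial (h\mathbf{u})}{\partial t} + \nabla\cdot(h\,\mathbf{u}\otimes\mathbf{u})
  + \nabla\!\left(\tfrac{1}{2}\,g\,h^{2}\right) = -\,g\,h\,\nabla b - \mathbf{S}_f .
```

This is the generic form of such systems, not the specific closures of the codes used in the study; the mass-movement and tsunami layers differ mainly in their rheology (friction) terms.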
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency mission Rosetta. This report presents the first full 3D simulation results of 67P’s neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, a hydrodynamic code, and a purely geometric calculation which computes the total illuminated surface area on the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that these illumination conditions on the nucleus are the main driver for the gas activity of the comet. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the model and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. We are able to reproduce the overall features of these local neutral number density measurements of ROSINA COPS for the time period between early August 2014 and January 1 2015 with all three models. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results we find the total production rate of 67P to be constant between August and November 2014 with a value of about 1 × 10²⁶ molecules s⁻¹.
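The purely geometric calculation mentioned above — total illuminated surface area of a triangulated shape model — can be sketched minimally as follows. A facet contributes its full area when its outward normal faces the Sun; the actual models additionally account for self-shadowing by the concave nucleus, which this toy version ignores:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def illuminated_area(triangles, sun_dir):
    """Sum the areas of triangular facets whose outward normal has a
    positive component toward the Sun (no self-shadowing)."""
    total = 0.0
    for v0, v1, v2 in triangles:
        n = cross(sub(v1, v0), sub(v2, v0))   # normal; |n| = 2 * facet area
        if dot(n, sun_dir) > 0.0:             # facet faces the Sun
            total += 0.5 * dot(n, n) ** 0.5
    return total
```

For a full insolation-driven production model one would weight each facet by the cosine of its local solar zenith angle rather than counting its full area.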
Abstract:
This paper provides new sufficient conditions for the existence, computation via successive approximations, and stability of Markovian equilibrium decision processes for a large class of OLG models with stochastic nonclassical production. Our notion of stability is existence of stationary Markovian equilibrium. With nonclassical production, our economies encompass a large class of OLG models with public policy, valued fiat money, production externalities, and Markov shocks to production. Our approach combines aspects of both topological and order-theoretic fixed point theory, and provides the basis of globally stable numerical iteration procedures for computing extremal Markovian equilibrium objects. In addition to new theoretical results on existence and computation, we provide some monotone comparative statics results on the space of economies.
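Successive approximation itself can be sketched generically. The operator T below is a hypothetical monotone map on [0, 1] standing in for the (much richer) equilibrium operator of the paper; iterating from the extremal starting points 0 and 1 produces monotone sequences that bracket and converge to the fixed point:

```python
def successive_approx(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{k+1} = T(x_k) until the step size falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# a monotone (and here contractive) toy operator; fixed point x* = 5/6
T = lambda x: 0.5 + 0.4 * x
lo = successive_approx(T, 0.0)   # iterates increase toward x*
hi = successive_approx(T, 1.0)   # iterates decrease toward x*
```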
Abstract:
The importance of renewable energies for the European electricity market is growing rapidly. This presents transmission grids and the power market in general with new challenges which stem from the higher spatiotemporal variability of power generation. This uncertainty is due to the fact that renewable power production results from weather phenomena, thus making it difficult to plan and control. We present a sensitivity study of a total solar eclipse in central Europe in March. The weather in Germany and Europe was modeled using the German Weather Service's local area models COSMO-DE and COSMO-EU, respectively (http://www.cosmo-model.org/). The simulations were performed with and without considering a solar eclipse for the following 3 situations:
1. An idealized, clear-sky situation for the entire model area (Europe, COSMO-EU)
2. A real weather situation with mostly cloudy skies (Germany, COSMO-DE)
3. A real weather situation with mostly clear skies (Germany, COSMO-DE)
The data should help to evaluate the effects of a total solar eclipse on the weather in the planetary boundary layer. The results show that a total solar eclipse has significant effects particularly on the main variables for renewable energy production, such as solar irradiation and temperature near the ground.
Abstract:
Appropriate field data are required to check the reliability of hydrodynamic models simulating the dispersion of soluble substances in the marine environment. This study deals with the collection of physical measurements and soluble tracer data intended specifically for this kind of validation. The intensity of currents as well as the complexity of topography and tides around the Cap de La Hague in the center of the English Channel makes it one of the most difficult areas to represent in terms of hydrodynamics and dispersion. Controlled releases of tritium - in the form of HTO - are carried out in this area by the AREVA-NC plant, providing an excellent soluble tracer. A total of 14,493 measurements were acquired to track dispersion in the hours and days following a release. These data, supplementing previously gathered data and physical measurements (bathymetry, water-surface levels, Eulerian and Lagrangian current studies), allow us to test dispersion models from the hour following release to periods of several years, which are not accessible with dye experiments. The dispersion characteristics are described and methods are proposed for comparing models against measurements. An application is proposed for a two-dimensional high-resolution numerical model. It shows how an extensive dataset can be used to build, calibrate and validate several aspects of the model in a highly dynamic and macrotidal area: tidal cycle timing, tidal amplitude, fixed-point current data, hodographs. This study presents results concerning the model's ability to reproduce residual Lagrangian currents, along with a comparison between simulation and high-frequency measurements of tracer dispersion. Physical and tracer data are available from the SISMER database of IFREMER (www.ifremer.fr/sismer/catal). This tool for validation of models in macro-tidal seas is intended to be an open and evolving resource, which could provide a benchmark for dispersion model validation.
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to get stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, rafted ice, and ice compression. Two datasets were utilized to develop the models. First, data from the Automatic Identification System on the performance of a selected ship were used. Second, a numerical ice model, HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions, which varied in time and space. The obtained results show good predictive power: on average, 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may get stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters, where ship performance is reflected in the objective function.
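The paper's Bayesian models are not specified in the abstract; as a hypothetical sketch of the general idea — learning P(speed bin | ice conditions) from paired AIS and ice-model data — a minimal naive Bayes classifier over pre-discretized ice features (feature and class names below are invented) could look like this:

```python
from collections import defaultdict

class NaiveBayesBins:
    """Minimal naive Bayes over discrete features, with Laplace smoothing."""

    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(int)   # (class, slot, value) -> count
        self.values = defaultdict(set)        # slot -> observed values
        self.n = 0

    def fit(self, X, y):
        for feats, c in zip(X, y):
            self.class_counts[c] += 1
            self.n += 1
            for i, v in enumerate(feats):
                self.feat_counts[(c, i, v)] += 1
                self.values[i].add(v)
        return self

    def predict(self, feats):
        best, best_p = None, -1.0
        for c, cc in self.class_counts.items():
            p = cc / self.n                   # class prior
            for i, v in enumerate(feats):     # smoothed likelihoods
                p *= (self.feat_counts[(c, i, v)] + 1) / (cc + len(self.values[i]))
            if p > best_p:
                best, best_p = c, p
        return best
```

A real model of this kind would also need the temporal dependence and compression features described in the paper; this sketch only shows the conditional-probability bookkeeping.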
Abstract:
Sea surface temperatures and sea-ice extent are the most critical variables to evaluate the Southern Ocean paleoceanographic evolution in relation to the development of the global carbon cycle, atmospheric CO2 variability and ocean-atmosphere circulation. In contrast to the Atlantic and the Indian sectors, the Pacific sector of the Southern Ocean has been insufficiently investigated so far. To fill this gap, we present diatom-based estimates of summer sea surface temperature (SSST) and winter sea-ice concentration (WSI) from 17 sites in the polar South Pacific to study the Last Glacial Maximum (LGM) at the EPILOG time slice (19,000-23,000 cal. years BP). The applied statistical methods are the Imbrie and Kipp Method (IKM) and the Modern Analog Technique (MAT), used to estimate temperature and sea-ice concentration, respectively. Our data display a distinct LGM east-west differentiation in SSST and WSI, with steeper latitudinal temperature gradients and a winter sea-ice edge located consistently north of the Pacific-Antarctic Ridge in the Ross Sea sector. In the eastern sector of our study area, which is governed by the Amundsen Abyssal Plain, the estimates yield weaker latitudinal SSST gradients together with a variably extended winter sea-ice field. In this sector, sea ice may sporadically have reached the area of the present Subantarctic Front at its maximum LGM expansion. This pattern points to topographic forcing as the major control on frontal system location and sea-ice extent in the western Pacific sector, whereas atmospheric conditions like the Southern Annular Mode and ENSO affected the oceanographic conditions in the eastern Pacific sector. Although it is difficult to depict the location and the physical nature of the frontal systems separating the glacial Southern Ocean water masses into different zones, we found a distinct temperature gradient in latitudes straddled by the modern Southern Subtropical Front.
Considering that the glacial temperatures north of this zone are similar to modern values, we suggest that this gradient represents the Glacial Southern Subtropical Front (GSSTF), which delimits the zone of strongest glacial SSST cooling (>4 K) to its north. The southern boundary of the zone of maximum cooling is close to the glacial 4°C isotherm. This isotherm, which is in the range of SSST at the modern Antarctic Polar Front (APF), represents a circum-Antarctic feature and marks the northern edge of the glacial Antarctic Circumpolar Current (ACC). We also assume that a glacial front was established at the northern average winter sea-ice edge, comparable with the modern Southern Antarctic Circumpolar Current Front (SACCF). During the glacial, this front would have been located in the area of the modern APF. The northward deflection of colder-than-modern surface waters along the South American continent leads to a significant cooling of the glacial Humboldt Current surface waters (4-8 K), which affects the temperature regimes as far north as tropical latitudes. The glacial reduction of ACC temperatures may also have resulted in significant cooling in the Atlantic and Indian sectors of the Southern Ocean, and thus may have enhanced the thermal differentiation of the Southern Ocean and Antarctic continental cooling. Comparison with temperature and sea-ice simulations for the last glacial shows that the majority of modern models overestimate summer and winter sea-ice cover and that only a few models reproduce our temperature data reasonably well.
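Of the two statistical methods named above, the Modern Analog Technique is simple to sketch: a dissimilarity measure (squared-chord distance is one common choice for relative-abundance data) ranks a set of modern reference assemblages against a fossil sample, and the environmental values of the k closest analogs are averaged. The data below are invented placeholders:

```python
def squared_chord(a, b):
    """Squared-chord dissimilarity between two relative-abundance vectors."""
    return sum((x ** 0.5 - y ** 0.5) ** 2 for x, y in zip(a, b))

def mat_estimate(fossil, modern_assemblages, modern_values, k=3):
    """Modern Analog Technique, schematically: average the environmental
    values of the k modern samples most similar to the fossil assemblage."""
    order = sorted(range(len(modern_assemblages)),
                   key=lambda i: squared_chord(fossil, modern_assemblages[i]))
    best = order[:k]
    return sum(modern_values[i] for i in best) / k
```

In practice a dissimilarity threshold is also applied so that poor analogs are rejected rather than averaged in.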
Abstract:
A new method is presented to generate reduced order models (ROMs) in Fluid Dynamics problems of industrial interest. The method is based on the expansion of the flow variables in a Proper Orthogonal Decomposition (POD) basis, calculated from a limited number of snapshots, which are obtained via Computational Fluid Dynamics (CFD). Then, the POD-mode amplitudes are calculated as minimizers of a properly defined overall residual of the equations and boundary conditions. The method includes various ingredients that are new in this field. The residual can be calculated using only a limited number of points in the flow field, which can be scattered either all over the whole computational domain or over a smaller projection window. The resulting ROM is both computationally efficient (reconstructed flow fields require, in cases that do not present shock waves, less than 1% of the time needed to compute a full CFD solution) and flexible (the projection window can avoid regions of large localized CFD errors). Also, for problems related to aerodynamics, POD modes are obtained from a set of snapshots calculated by a CFD method based on the compressible Navier-Stokes equations and a turbulence model (which furthermore includes some unphysical stabilizing terms added for purely numerical reasons), but projection onto the POD manifold is made using the inviscid Euler equations, which makes the method independent of the CFD scheme. In addition, shock waves are treated specifically in the POD description, to avoid the need of using too large a number of snapshots. Various definitions of the residual are also discussed, along with the number and distribution of snapshots, the number of retained modes, and the effect of CFD errors.
The method is checked and discussed on several test problems that describe (i) heat transfer in the recirculation region downstream of a backward-facing step, (ii) the flow past a two-dimensional airfoil in both the subsonic and transonic regimes, and (iii) the flow past a three-dimensional horizontal tail plane. The method is both efficient and numerically robust in the sense that the computational effort is quite small compared to CFD and results are both reasonably accurate and largely insensitive to the definition of the residual, to CFD errors, and to the CFD method itself, which may contain artificial stabilizing terms. Thus, the method is amenable for practical engineering applications.
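The snapshot-POD step of the method (though not the residual-minimization ROM itself) can be sketched via the singular value decomposition of the snapshot matrix; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD by SVD of the snapshot matrix (columns = flow-field snapshots).
    Returns the leading modes capturing the requested energy fraction,
    together with their singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)      # cumulative "energy"
    r = int(np.searchsorted(cum, energy)) + 1     # number of retained modes
    return U[:, :r], s[:r]

# usage sketch: amplitudes a of a field q in the POD basis, and its
# reconstruction q_rom (the paper instead finds a by residual minimization):
#   Phi, s = pod_basis(S);  a = Phi.T @ q;  q_rom = Phi @ a
```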
Abstract:
Many of the material models most frequently used for the numerical simulation of the behavior of concrete subjected to high strain rates were originally developed for the simulation of ballistic impact. Therefore, they are plasticity-based models in which the compressive behavior is modeled in a complex way, while their tensile failure criterion is of a rather simpler nature. As concrete elements usually fail in tension when subjected to blast loading, available concrete material models for high strain rates may not represent their real behavior accurately. In this research work an experimental program of reinforced concrete flat elements subjected to blast load is presented. Altogether four detonation tests were conducted, in which 12 slabs of two different concrete types were subjected to the same blast load. The results of the experimental program are then used for the development and adjustment of numerical tools needed in the modeling of concrete elements subjected to blast.
Abstract:
In this work, robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, leading to a reduction of the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this performance degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arclength-type continuation methods [Carrera, 1994], which allow capturing the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need to use small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables in the damage model, is presented. Finally, numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
Abstract:
In this work we propose a method to accelerate time-dependent numerical solvers of systems of PDEs that are costly in computational time and memory. The method is based on the combined use of such a numerical solver with a proper orthogonal decomposition, from which we identify modes, a Galerkin projection (that provides a reduced system of equations) and the integration of the reduced system, studying the evolution of the modal amplitudes. We integrate the reduced model until our a priori error estimator indicates that our approximation is no longer accurate. At this point, we again use our original numerical code over a short time interval to adapt the POD manifold, and then continue with the integration of the reduced model. Application is made to two model problems: the Ginzburg-Landau equation in transient chaos conditions and the two-dimensional pulsating cavity problem, which describes the motion of liquid in a box whose upper wall is moving back and forth in a quasi-periodic fashion. Finally, we discuss a way of improving the performance of the method using experimental data or information from numerical simulations.
Abstract:
An experimental and numerical study of ballistic impacts on steel plates at various temperatures (700 °C, 400 °C and room temperature) has been carried out. The motivation for this work is the blade‐off event that may occur inside a jet engine turbine. However, as a first attempt to understand this complex loading process, a somewhat simpler approach is taken in the present work. The material used in this study is the FV535 martensitic stainless steel, which is one of the most commonly used materials for turbine casings. Based on material test data, a Modified Johnson‐Cook (MJC) model was calibrated for numerical simulations using the LS‐DYNA explicit finite element code (see Figure 1). To check the mesh size sensitivity, 2D axisymmetric finite element models with three different mesh sizes and configurations were used for the various temperatures. Two fixed meshes with 64 and 128 elements over the 2 mm thick plate and one mesh with 32 elements over the thickness with adaptive remeshing were used in the simulations. The formation of adiabatic shear bands in the perforation process has been found critical in order to achieve good results. Adiabatic shear bands are formed by the temperature rise due to the accumulation of plastic strain during impact (see Figure 2). The influence of thermal softening in the plasticity model has hence been analyzed for the room-temperature impact tests, where the temperature gradient is highest.
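The thermal-softening effect discussed above enters a Johnson-Cook-type flow-stress law through the homologous-temperature term. A minimal sketch follows; all parameter values are illustrative placeholders, not the calibrated FV535 constants:

```python
def jc_flow_stress(eps_p, eps_rate, T, A=400e6, B=500e6, n=0.4,
                   C=0.01, eps0=1e-3, T_room=293.0, T_melt=1700.0, m=1.0):
    """Johnson-Cook-type flow stress [Pa]: strain hardening (A + B*eps^n),
    modified rate hardening (1 + eps_rate/eps0)**C, and thermal softening
    (1 - T*^m) with homologous temperature T*. Parameters are hypothetical."""
    T_star = (T - T_room) / (T_melt - T_room)   # 0 at room temp, 1 at melting
    return ((A + B * eps_p ** n)
            * (1.0 + eps_rate / eps0) ** C
            * (1.0 - T_star ** m))
```

In an adiabatic shear band, plastic work raises T locally, T_star grows, and the softening term drives the flow stress down faster than hardening can compensate, which is why capturing this coupling proved critical in the simulations.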