907 results for Numerical calculations
Abstract:
This study examines criteria for the existence of two stable states of the Atlantic Meridional Overturning Circulation (AMOC) using a combination of theory and simulations from a numerical coupled atmosphere–ocean climate model. By formulating a simple collection of state parameters and their relationships, the authors reconstruct the North Atlantic Deep Water (NADW) OFF state behavior under a varying external salt-flux forcing. This part (Part I) of the paper examines the steady-state solution, which gives insight into the mechanisms that sustain the NADW OFF state in this coupled model; Part II deals with the transient behavior predicted by the evolution equation. The nonlinear behavior of the Antarctic Intermediate Water (AAIW) reverse cell is critical to the OFF state. Higher Atlantic salinity leads both to a reduced AAIW reverse cell and to a greater vertical salinity gradient in the South Atlantic. The former tends to reduce Atlantic salt export to the Southern Ocean, while the latter tends to increase it. These competing effects produce a nonlinear response of Atlantic salinity and salt export to salt forcing, and the existence of maxima in these quantities. Thus the authors obtain a natural and accurate analytical saddle-node condition for the maximal surface salt flux for which a NADW OFF state exists. By contrast, the bistability indicator proposed by De Vries and Weber does not generally work in this model. It is applicable only when the effect of the AAIW reverse cell on the Atlantic salt budget is weak.
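As a generic illustration of what a saddle-node condition of this kind looks like (placeholder symbols, not the authors' box-model variables), one can reduce the salinity dynamics to a single evolution equation and require the equilibrium and fold conditions to hold simultaneously:

```latex
\frac{dS}{dt} = F(S; E), \qquad
F(S^{*}; E_{\max}) = 0, \qquad
\left.\frac{\partial F}{\partial S}\right|_{(S^{*},\, E_{\max})} = 0 ,
```

so that E_max is the largest surface salt-flux forcing for which the NADW OFF equilibrium branch still exists.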
Abstract:
A high-resolution general circulation model has been used to study intense tropical storms. A five-year-long global integration with a spatial resolution of 125 km has been analysed. The geographical and seasonal distribution of tropical storms agrees remarkably well with observations. The structure of individual storms also agrees with observations, but the storms are generally more extensive in coverage and less extreme than the observed ones. A few additional calculations have also been done with a very high resolution limited-area version of the same model, for which the boundary conditions have been successively interpolated from the global model. These results are very realistic in many details of the storm structure, including simulated rain bands and an eye structure. The global model has also been used in another five-year integration to study the influence of greenhouse warming. The sea surface temperatures have been taken from a transient climate change experiment carried out with a low-resolution coupled ocean-atmosphere model. The result is a significant reduction in the number of hurricanes, particularly in the Southern Hemisphere. The main reasons for this can be found in changes in the large-scale circulation, i.e. a weakening of the Hadley circulation and a more intense warming of the upper tropical troposphere. A similar effect can be seen during warm ENSO events, when fewer North Atlantic hurricanes have been reported.
Abstract:
The parameterisation of diabatic processes in numerical models is critical for the accuracy of weather forecasts and for climate projections. A novel approach to the evaluation of these processes in models is introduced in this contribution. The approach combines a suite of on-line tracer diagnostics with off-line trajectory calculations. Each tracer tracks the accumulated changes in potential temperature associated with a particular parameterised diabatic process in the model. A comparison of tracers therefore allows the identification of the most active diabatic processes and their downstream impacts. The tracers are combined with trajectories computed using model-resolved winds, allowing the various diabatic contributions to be tracked back to their time and location of occurrence. We have used this approach to investigate diabatic processes within a simulated extratropical cyclone. We focus on the warm conveyor belt, in which the dominant diabatic contributions come from large-scale latent heating and parameterised convection. By contrasting two simulations, one with standard convection parameterisation settings and another with reduced parameterised convection, the effects of parameterised convection on the structure of the cyclone have been determined. Under reduced parameterised convection conditions, the large-scale latent heating is forced to release convective instability that would otherwise have been released by the convection parameterisation. Although the spatial distribution of precipitation depends on the details of the split between parameterised convection and large-scale latent heating, the total precipitation amount associated with the cyclone remains largely unchanged. For reduced parameterised convection, a more rapid and stronger latent heating episode takes place as air ascends within the warm conveyor belt.
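A minimal sketch of this tracer bookkeeping, assuming a generic single-column setup with synthetic tendencies (none of the names below come from the paper's model), is:

```python
# Each parameterised process adds its potential-temperature increment to a
# dedicated accumulating tracer every timestep, so the total theta change can
# later be attributed process by process (e.g. along a trajectory).
import numpy as np

nz, nsteps = 50, 100
theta = np.full(nz, 300.0)                      # column potential temperature [K]
processes = ["large_scale_latent_heating", "convection", "radiation"]
tracers = {p: np.zeros(nz) for p in processes}  # accumulated d(theta) per process

rng = np.random.default_rng(0)
for _ in range(nsteps):
    # placeholder tendencies [K per step]; a real model supplies these
    tendencies = {p: 0.01 * rng.standard_normal(nz) for p in processes}
    for p, dtheta in tendencies.items():
        tracers[p] += dtheta        # per-process accumulation
        theta += dtheta             # the total field evolves with the sum

# attribution check: the summed tracers reproduce the total theta change
print(np.allclose(theta - 300.0, sum(tracers.values())))  # True
```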
Abstract:
Calculations using a numerical model of the convection-dominated high-latitude ionosphere are compared with observations made by EISCAT as part of the UK-POLAR Special Programme. The data used were for 24–25 October 1984, which was characterized by an unusually steady IMF, with Bz < 0 and By > 0; in the calculations it was assumed that a steady IMF implies steady convection conditions. Using the electric field models of Heppner and Maynard (1983) appropriate to By > 0 and precipitation data taken from Spiro et al. (1982), we calculated the velocities and electron densities appropriate to the EISCAT observations. Many of the general features of the velocity data were reproduced by the model. In particular, the phasing of the change from eastward to westward flow in the vicinity of the Harang discontinuity, flows near the dayside throat and a region of slow flow at higher latitudes near dusk were well reproduced. In the afternoon sector, modelled velocity values were significantly lower than those observed. Electron density calculations showed good agreement with EISCAT observations near the F-peak, but compared poorly with observations near 211 km. In both cases, the greatest disagreement occurred in the early part of the observations, where the convection pattern was poorly known and showed some evidence of long-term temporal change. Possible causes for the disagreement between observations and calculations are discussed and shown to raise interesting and, as yet, unresolved questions concerning the interpretation of the data. For the data set used, the late-afternoon dip in electron density observed near the F-peak and interpreted as the signature of the mid-latitude trough is well reproduced by the calculations. The calculations indicate that it does not arise from long residence times of plasma on the nightside, but is the signature of a gap between two major ionization sources, viz. photoionization and particle precipitation.
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that these calculations be made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the resulting cloud-radiation feedback. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme performs a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
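A conceptual sketch of the incremental idea, using toy heating-rate formulas and stand-in functions (not the Met Office radiation code), is:

```python
# A full radiation call is made only every few steps; on the other steps a
# cheap optically-thin calculation supplies a heating-rate increment that
# tracks the evolving cloud field and is added to the last full calculation.
import numpy as np

def full_radiation(cloud):             # stand-in for the full correlated-k call
    return -2.0 + 0.5 * cloud          # heating rate [K/day], toy formula

def thin_band_radiation(cloud):        # stand-in for 1-2 optically thin terms
    return 0.5 * cloud                 # only the cloud-sensitive part

n_steps, full_every = 12, 4
cloud = 0.2
heating = np.zeros(n_steps)
for step in range(n_steps):
    cloud = min(1.0, cloud + 0.05)             # cloud field evolves every step
    if step % full_every == 0:
        base = full_radiation(cloud)           # expensive, infrequent
        thin_ref = thin_band_radiation(cloud)  # thin part at the full call
    increment = thin_band_radiation(cloud) - thin_ref   # cheap, every step
    heating[step] = base + increment
print(heating)
```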
Abstract:
MCNP has stood so far as one of the main Monte Carlo radiation transport codes. Its use, as with any other Monte Carlo based code, has increased as computers have become faster and more affordable over time. However, using the Monte Carlo method to tally events in volumes that represent a small fraction of the whole system may prove unfeasible if a straight analogue transport procedure (no use of variance reduction techniques) is employed and precise results are demanded. The calculation of reaction rates in activation foils placed in critical systems is one such case. The present work takes advantage of the fixed-source representation in MCNP to perform this task with more effective sampling (characterizing the neutron population in the vicinity of the tallying region and using it in a geometrically reduced coupled simulation). An extended analysis of source-dependent parameters is carried out in order to understand their influence on simulation performance and on the validity of the results. Although discrepant results have been observed for small enveloping regions, the procedure proves very efficient, giving adequate and precise results in shorter times than the standard analogue procedure.
Abstract:
In this article, we present an analytical direct method, based on a Numerov three-point scheme, which is sixth-order accurate and whose execution time is linear in the grid dimension, for solving the discrete one-dimensional Poisson equation with Dirichlet boundary conditions. Our results should improve numerical codes used mainly in self-consistent calculations in solid state physics.
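For orientation, a minimal sketch of a Numerov-type direct solve of u''(x) = f(x) with Dirichlet boundary conditions is given below. It uses the classical Numerov three-point discretisation and an O(N) tridiagonal (Thomas) solve to illustrate the linear execution time; it does not reproduce the sixth-order variant of the article.

```python
import numpy as np

def numerov_poisson(f, a, b, ua, ub, n):
    """Solve u'' = f on [a, b] with u(a) = ua, u(b) = ub on n+1 grid points."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    fx = f(x)
    # Numerov: u_{i-1} - 2 u_i + u_{i+1} = (h^2/12)(f_{i-1} + 10 f_i + f_{i+1})
    rhs = (h**2 / 12.0) * (fx[:-2] + 10.0 * fx[1:-1] + fx[2:])
    rhs[0] -= ua
    rhs[-1] -= ub
    m = n - 1                                   # number of interior unknowns
    lower, diag, upper = np.ones(m - 1), -2.0 * np.ones(m), np.ones(m - 1)
    # Thomas algorithm: O(N) forward elimination and back substitution
    for i in range(1, m):
        w = lower[i - 1] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = np.empty(m)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - upper[i] * u[i + 1]) / diag[i]
    return x, np.concatenate(([ua], u, [ub]))

# usage: u'' = -pi^2 sin(pi x) on [0, 1], exact solution u = sin(pi x)
x, u = numerov_poisson(lambda s: -np.pi**2 * np.sin(np.pi * s), 0.0, 1.0, 0.0, 0.0, 64)
print(np.max(np.abs(u - np.sin(np.pi * x))))    # small discretisation error
```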
Abstract:
This paper is concerned with the numerical solution of time-dependent two-dimensional incompressible flows. Using the primitive variables of velocity and pressure, the Navier-Stokes and mass conservation equations are solved by a semi-implicit finite difference projection method. A new bounded higher-order upwind convection scheme is employed to deal with the non-linear (advective) terms. The procedure is an adaptation of the GENSMAC (J. Comput. Phys. 1994; 110: 171-186) methodology for calculating confined and free surface fluid flows at both low and high Reynolds numbers. The calculations were performed using the 2D version of the Freeflow simulation system (J. Comp. Visual. Science 2000; 2:199-210). In order to demonstrate the capabilities of the numerical method, various test cases are presented. These are the fully developed flow in a channel, the flow over a backward-facing step, the die-swell problem, the broken-dam flow, and an impinging jet onto a flat plate. The numerical results compare favourably with the experimental data and the analytical solutions.
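The general structure of a projection step of this kind, written generically with an explicit predictor for brevity (the paper's scheme is semi-implicit, and the GENSMAC discretisation details are not reproduced here), is

```latex
\tilde{\mathbf{u}} = \mathbf{u}^{n}
  + \Delta t \left[ -(\mathbf{u}\cdot\nabla)\mathbf{u} + \nu \nabla^{2}\mathbf{u} \right]^{n},
\qquad
\nabla^{2} p^{\,n+1} = \frac{\nabla\cdot\tilde{\mathbf{u}}}{\Delta t},
\qquad
\mathbf{u}^{n+1} = \tilde{\mathbf{u}} - \Delta t\, \nabla p^{\,n+1},
```

so that the corrected velocity field satisfies the mass conservation constraint, with the discrete divergence of the velocity driven to zero at the end of every time step.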
Abstract:
This paper presents viscous three-dimensional simulations coupling Euler and boundary-layer codes for calculating flows over arbitrary surfaces. The governing equations are written in a general non-orthogonal coordinate system. The Levy-Lees transformation, generalized to three-dimensional flows, is utilized. The inviscid properties are obtained from the Euler equations using the Beam and Warming implicit approximate factorization scheme. The resulting equations are discretized and approximated by a two-point finite-difference numerical scheme. The code developed is validated and applied to the simulation of the flow field over aerospace vehicle configurations. The results show good correlation with the available data.
Abstract:
This paper presents a comparative analysis between the experimental characterization and the numerical simulation results for a three-dimensional FCC photonic crystal (PhC) based on a self-assembly synthesis of monodisperse latex spheres. Specifically, experimental optical characterization, by means of reflectance measurements at variable angles over the [1,1,1] lattice plane family, is compared with theoretical calculations based on the Finite Difference Time Domain (FDTD) method, in order to investigate the correlation between theoretical predictions and experimental data. The goal is to highlight the influence of crystal defects on the achieved performance.
Abstract:
The flow around a smooth fixed circular cylinder over a large range of Reynolds numbers is considered in this paper. In order to investigate this canonical case, we perform CFD calculations and apply verification & validation (V&V) procedures to draw conclusions regarding the numerical error and, afterwards, assess the modelling errors and the capabilities of this (U)RANS method to solve the problem. Eight Reynolds numbers between Re = 10 and Re = 5 x 10^5 are presented with at least four geometrically similar grids and five discretizations in time for each case (when unsteady), together with strict control of iterative and round-off errors, allowing a consistent verification analysis with uncertainty estimation. Two-dimensional RANS calculations, steady or unsteady, laminar or turbulent, are performed. The original 1994 k-omega SST turbulence model by Menter is used to model turbulence. The validation procedure is performed by comparing the numerical results with an extensive set of experimental results compiled from the literature. [DOI: 10.1115/1.4007571]
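A standard ingredient of such verification studies is the estimation of an observed order of accuracy and a Richardson-extrapolated solution from geometrically similar grids. The sketch below uses illustrative numbers only, not the paper's data:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from fine (f1), medium (f2) and coarse (f3)
    solutions on grids with constant refinement ratio r."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)

def richardson_extrapolate(f1, f2, r, p):
    """Estimate of the grid-independent solution from the two finest grids."""
    return f1 + (f1 - f2) / (r**p - 1.0)

f1, f2, f3, r = 1.190, 1.205, 1.260, 2.0   # e.g. drag coefficients, fine to coarse
p = observed_order(f1, f2, f3, r)
print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {richardson_extrapolate(f1, f2, r, p):.4f}")
```

The difference between the fine-grid solution and the extrapolated value is then turned into a numerical uncertainty estimate before the comparison with experiments (validation) is made.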
Abstract:
This paper presents an interpretation of a classic optical flow method by Nagel and Enkelmann as a tensor-driven anisotropic diffusion approach in digital image analysis. We introduce an improvement into the model formulation, and we establish well-posedness results for the resulting system of parabolic partial differential equations. Our method avoids linearizations in the optical flow constraint, and it can recover displacement fields which are far beyond the typical one-pixel limits that are characteristic of many differential methods for optical flow recovery. A robust numerical scheme is presented in detail. We avoid convergence to irrelevant local minima by embedding our method into a linear scale-space framework and using a focusing strategy from coarse to fine scales. The high accuracy of the proposed method is demonstrated by means of a synthetic and a real-world image sequence.
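A minimal sketch of the coarse-to-fine focusing strategy is shown below. For brevity, the inner solver is a plain Horn-Schunck iteration standing in for the Nagel-Enkelmann anisotropic-diffusion model, so only the linear scale-space embedding is illustrated:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def horn_schunck(I1, I2, u, v, alpha=1.0, n_iter=500):
    """Simple stand-in flow solver (not the Nagel-Enkelmann model)."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    for _ in range(n_iter):
        u_bar = u + 0.25 * laplace(u)          # neighbourhood average of u
        v_bar = v + 0.25 * laplace(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

def coarse_to_fine_flow(I1, I2, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Focusing strategy: solve at coarse scale-space levels first, then refine."""
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for sigma in sigmas:                        # coarse scales first
        S1 = gaussian_filter(I1, sigma)         # linear scale-space representation
        S2 = gaussian_filter(I2, sigma)
        u, v = horn_schunck(S1, S2, u, v)       # previous scale initialises this one
    return u, v

# usage: a sinusoidal pattern shifted by one pixel in x
xs = np.arange(64)
I1 = np.tile(np.sin(2 * np.pi * xs / 16), (64, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = coarse_to_fine_flow(I1, I2)
print(round(float(u.mean()), 2), round(float(v.mean()), 2))   # close to 1.0 and 0.0
```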
Abstract:
This thesis is concerned with calculations in manifestly Lorentz-invariant baryon chiral perturbation theory beyond order D=4. We investigate two different methods. The first approach consists of the inclusion of additional particles besides pions and nucleons as explicit degrees of freedom. This results in the resummation of an infinite number of higher-order terms which contribute to higher-order low-energy constants in the standard formulation. In this thesis the nucleon axial, induced pseudoscalar, and pion-nucleon form factors are investigated. They are first calculated in the standard approach up to order D=4. Next, the inclusion of the axial-vector meson a_1(1260) is considered. We find three diagrams with an axial-vector meson which are relevant to the form factors. Due to the applied renormalization scheme, however, the contributions of the two loop diagrams vanish and only a tree diagram contributes explicitly. The coupling constant that appears is fitted to experimental data for the axial form factor. The inclusion of the axial-vector meson results in an improved description of the axial form factor for higher values of momentum transfer. The contributions to the induced pseudoscalar form factor, however, are negligible for the considered momentum transfer, and the axial-vector meson does not contribute to the pion-nucleon form factor. The second method consists of the explicit calculation of higher-order diagrams. This thesis describes the applied renormalization scheme and shows that all symmetries and the power counting are preserved. As an application we determine the nucleon mass up to order D=6, which includes the evaluation of two-loop diagrams. This is the first complete calculation in manifestly Lorentz-invariant baryon chiral perturbation theory at the two-loop level. The numerical contributions of the terms of order D=5 and D=6 are estimated, and we investigate their pion-mass dependence. Furthermore, the higher-order terms of the nucleon sigma term are determined with the help of the Feynman-Hellmann theorem.
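The Feynman-Hellmann relation referred to here can be written, in one common convention (normalisations of the light-quark mass vary between references), as

```latex
\sigma_{\pi N} \;=\; \hat{m}\,\frac{\partial m_N}{\partial \hat{m}}
\;\approx\; M_\pi^{2}\,\frac{\partial m_N}{\partial M_\pi^{2}},
```

where \hat{m} is the average light-quark mass and the second form uses the leading-order relation M_\pi^2 \propto \hat{m}.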
Abstract:
DI Diesel engines are widely used for both industrial and automotive applications due to their durability and fuel economy. Nonetheless, increasing environmental concerns force this type of engine to comply with increasingly demanding emission limits, so it has become mandatory to develop a robust design methodology for the DI Diesel combustion system focused on reducing soot and NOx simultaneously while maintaining reasonable fuel economy. In recent years, genetic algorithms and three-dimensional CFD combustion simulations have been successfully applied to this kind of problem. However, combining GA optimization with actual three-dimensional CFD combustion simulations can be too onerous, since a large number of calculations is usually needed for the genetic algorithm to converge, resulting in a high computational cost and thus limiting the suitability of this method for industrial processes. In order to make the optimization process less time-consuming, CFD simulations can instead be used to generate a training set for the learning process of an artificial neural network which, once correctly trained, can be used to forecast the engine outputs as a function of the design parameters during a GA optimization, performing a so-called virtual optimization. In the current work, a numerical methodology for the multi-objective virtual optimization of the combustion of an automotive DI Diesel engine, relying on artificial neural networks and genetic algorithms, was developed.
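A schematic sketch of such a surrogate-assisted ('virtual') optimisation is shown below. All data are synthetic placeholders, the two design parameters are arbitrary stand-ins, and the multi-objective soot/NOx problem is scalarised into a single weighted cost for brevity:

```python
# Fit a neural-network surrogate to (design parameters -> engine outputs)
# samples that would come from CFD runs, then let a simple evolutionary search
# explore the surrogate instead of calling the CFD code directly.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# synthetic "CFD" training set: 2 design parameters -> (soot, NOx) proxies
X = rng.uniform(0.0, 1.0, size=(200, 2))              # e.g. injection timing, EGR rate
soot = (X[:, 0] - 0.7) ** 2 + 0.1 * X[:, 1]
nox = (X[:, 1] - 0.3) ** 2 + 0.1 * (1.0 - X[:, 0])
Y = np.column_stack([soot, nox])

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X, Y)

def cost(pop):                                         # weighted-sum objective
    pred = surrogate.predict(pop)
    return 0.5 * pred[:, 0] + 0.5 * pred[:, 1]

# minimal evolutionary loop (elitist selection + Gaussian mutation),
# standing in for a full genetic algorithm, operating only on the surrogate
pop = rng.uniform(0.0, 1.0, size=(40, 2))
for _ in range(60):
    order = np.argsort(cost(pop))
    parents = pop[order[:20]]                          # keep the best half
    children = parents[rng.integers(0, 20, 20)].copy()
    children += rng.normal(0.0, 0.05, children.shape)  # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)

best = pop[np.argmin(cost(pop))]
print("best design parameters:", best)   # near the synthetic optimum (0.75, 0.25)
```

In a genuinely multi-objective setting the scalar cost would be replaced by Pareto ranking, and the surrogate would periodically be re-trained with new CFD evaluations of promising designs.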
Abstract:
In many areas of mathematics it is desirable to understand the monodromy group of a homogeneous linear differential equation. Only a few analytical methods for computing this group are known, so in the first part of this thesis we develop a numerical method for approximating its generators. In the second part we summarize the foundations of the theory of uniformization of Riemann surfaces and of arithmetic Fuchsian groups. We also explain how our numerical method can help in the determination of uniformizing differential equations. For arithmetic Fuchsian groups with two generators we obtain local data and free parameters of Lamé equations that uniformize the associated Riemann surfaces. In the third part we give a brief outline of homological mirror symmetry and introduce the $\widehat{\Gamma}$-class. We explain how it can be used to prove a Hodge-theoretic version of mirror symmetry for toric varieties. From this we derive conjectures about the monodromy group $M$ of Picard-Fuchs equations of certain families $f:\mathcal{X}\rightarrow\mathbb{P}^1$ of $n$-dimensional Calabi-Yau varieties. These state, first, that with respect to a natural basis the monodromy matrices in $M$ have entries in the field $\mathbb{Q}(\zeta(2j+1)/(2\pi i)^{2j+1},\ j=1,\ldots,\lfloor (n-1)/2 \rfloor)$, and second, that topological invariants of the mirror partner of a generic fibre of $f:\mathcal{X}\rightarrow\mathbb{P}^1$ can be reconstructed from a special element of $M$. Finally, we use the methods developed in the first part to verify these conjectures, primarily with respect to dimension three. In addition, we compile a list of candidate topological invariants of conjecturally existing three-dimensional Calabi-Yau varieties with $h^{1,1}=1$.
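As a toy illustration of the numerical idea in the first part, a monodromy factor can be approximated by integrating a differential equation along a closed loop around a singular point. The scalar example below (the exponent a is arbitrary) only shows the mechanics of continuation along a loop, not the thesis's treatment of systems such as Picard-Fuchs equations:

```python
# Integrate y'(z) = (a/z) y(z) once around the unit circle enclosing z = 0 and
# compare the result with the exact monodromy factor exp(2*pi*i*a).
import numpy as np
from scipy.integrate import solve_ivp

a = 0.25 + 0.5j                          # exponent of the regular singular point

def rhs(t, y):
    z = np.exp(2j * np.pi * t)           # loop z(t), t in [0, 1]
    dz_dt = 2j * np.pi * z
    return (a / z) * y * dz_dt           # chain rule: dy/dt = y'(z) * dz/dt

sol = solve_ivp(rhs, (0.0, 1.0), np.array([1.0 + 0.0j]), rtol=1e-10, atol=1e-12)
numerical_monodromy = sol.y[0, -1]       # value of y after one full loop
exact_monodromy = np.exp(2j * np.pi * a)
print(abs(numerical_monodromy - exact_monodromy))   # should be tiny
```

For a higher-order equation the same continuation is applied to a full fundamental system of solutions, and the monodromy matrix is read off from how the basis transforms after the loop.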