42 results for mesh: Models, Theoretical
in CentAUR: Central Archive, University of Reading - UK
Abstract:
A stochastic parameterization scheme for deep convection is described, suitable for use in both climate and NWP models. Theoretical arguments and the results of cloud-resolving models are discussed in order to motivate the form of the scheme. In the deterministic limit, it tends to a spectrum of entraining/detraining plumes and is similar to other current parameterizations. The stochastic variability describes the local fluctuations about a large-scale equilibrium state. Plumes are drawn at random from a probability distribution function (pdf) that defines the chance of finding a plume of given cloud-base mass flux within each model grid box. The normalization of the pdf is given by the ensemble-mean mass flux, and this is computed with a CAPE closure method. The characteristics of each plume produced are determined using an adaptation of the plume model from the Kain-Fritsch parameterization. Initial tests in the single-column version of the Unified Model verify that the scheme is effective in producing the desired distributions of convective variability without adversely affecting the mean state.
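The random plume drawing can be sketched in a few lines. The abstract does not specify the pdf's functional form or the plume-count statistics, so the exponential pdf and Poisson plume number below are illustrative assumptions; only the normalization by the ensemble-mean mass flux follows the text.

```python
import numpy as np

def draw_plumes(M_ens, m_mean, rng):
    """Draw a random convective plume ensemble for one grid box.

    M_ens  : ensemble-mean total mass flux from the CAPE closure
    m_mean : assumed mean cloud-base mass flux of a single plume
    """
    # The plume count fluctuates about M_ens / m_mean; a Poisson count
    # is an assumption, giving larger relative variability in small boxes.
    n = rng.poisson(M_ens / m_mean)
    # Individual cloud-base mass fluxes drawn from an assumed exponential
    # pdf with mean m_mean; the pdf's normalization is fixed by the
    # ensemble-mean mass flux, as in the text.
    return rng.exponential(m_mean, size=n)

fluxes = draw_plumes(M_ens=0.05, m_mean=0.01, rng=np.random.default_rng(0))
```

Summing `fluxes` recovers a total grid-box mass flux that fluctuates about `M_ens`, which is the stochastic behaviour the scheme is designed to produce.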
Abstract:
There is increasing interest in modelling electromagnetic methods of NDT, particularly eddy currents. A collaboration within the International Institute of Welding led to a survey intended to explain to non-mathematicians the present scope of modelling. The present review commences with this survey and then points out some of the developments, and some of the outstanding problems, in transferring modelling into industry.
Abstract:
A theoretical framework for the joint conservation of energy and momentum in the parameterization of subgrid-scale processes in climate models is presented. The framework couples a hydrostatic resolved (planetary-scale) flow to a nonhydrostatic subgrid-scale (mesoscale) flow. The temporal and horizontal spatial scale separation between the planetary scale and the mesoscale is imposed using multiple-scale asymptotics. Energy and momentum are exchanged through subgrid-scale flux convergences of heat, pressure, and momentum. The generation and dissipation of subgrid-scale energy and momentum are understood using wave-activity conservation laws that are derived by exploiting the (mesoscale) temporal and horizontal spatial homogeneities in the planetary-scale flow. The relations between these conservation laws and the planetary-scale dynamics represent generalized nonacceleration theorems. A derived relationship between the wave-activity fluxes, which represents a generalization of the second Eliassen-Palm theorem, is key to ensuring consistency between energy and momentum conservation. The framework includes a consistent formulation of heating and entropy production due to kinetic energy dissipation.
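Wave-activity conservation laws of the kind invoked here take, schematically, the standard form used throughout geophysical fluid dynamics; the notation below is generic, not the paper's own:

```latex
\frac{\partial \mathcal{A}}{\partial t} + \nabla \cdot \mathbf{F} = \mathcal{D}
```

where \(\mathcal{A}\) is a wave-activity density, \(\mathbf{F}\) its flux, and \(\mathcal{D}\) its generation/dissipation. A nonacceleration theorem then follows in the usual way: for steady, conservative (\(\mathcal{D}=0\)) subgrid-scale waves, \(\nabla \cdot \mathbf{F} = 0\) and no net momentum is deposited in the resolved flow.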
Abstract:
The electronic structure and oxidation state of atomic Au adsorbed on a perfect CeO2(111) surface have been investigated in detail by means of periodic density functional theory-based calculations, using the LDA+U and GGA+U potentials for a broad range of U values, complemented with calculations employing the HSE06 hybrid functional. In addition, the effects of the lattice parameter a0 and of the starting point for the geometry optimization have also been analyzed. From the present results we suggest that the oxidation state of single Au atoms on CeO2(111) cannot be conclusively predicted by LDA+U, GGA+U, or HSE06 density functional calculations, and that the final picture strongly depends on the method chosen and on the construction of the surface model. In some cases we have been able to locate two well-defined states which are close in energy but have very different electronic structures and local geometries, one with Au fully oxidized and one with neutral Au. The energy difference between the two states is typically within the limits of the accuracy of the present exchange-correlation potentials, and therefore a clear lowest-energy state cannot be identified. These results suggest the possibility of a dynamic distribution of Au0 and Au+ atomic species at the regular sites of the CeO2(111) surface.
Abstract:
Most parameterizations for precipitating convection in use today are bulk schemes, in which an ensemble of cumulus elements with different properties is modelled as a single, representative entraining-detraining plume. We review the underpinning mathematical model for such parameterizations, in particular by comparing it with spectral models in which elements are not combined into the representative plume. The chief merit of a bulk model is that the representative plume can be described by an equation set with the same structure as that which describes each element in a spectral model. The equivalence relies on an ansatz for detrained condensate introduced by Yanai et al. (1973) and on a simplified microphysics. There are also conceptual differences in the closure of bulk and spectral parameterizations. In particular, we show that the convective quasi-equilibrium closure of Arakawa and Schubert (1974) for spectral parameterizations cannot be carried over to a bulk parameterization in a straightforward way. Quasi-equilibrium of the cloud work function assumes a timescale separation between a slow forcing process and a rapid convective response. But, for the natural bulk analogue to the cloud work function (the dilute CAPE), the relevant forcing is characterised by a different timescale, and so its quasi-equilibrium entails a different physical constraint. Closures of bulk parameterizations that use the non-entraining parcel value of CAPE do not suffer from this timescale issue. However, the Yanai et al. (1973) ansatz must be invoked as a necessary ingredient of those closures.
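The Arakawa-Schubert quasi-equilibrium closure discussed above is often written as a budget for the cloud work function of each spectral element; the schematic form below follows common textbook presentations rather than this paper's own notation:

```latex
\frac{dA_i}{dt} = F_i + \sum_j K_{ij}\, m_B^{(j)} \approx 0
```

where \(A_i\) is the cloud work function of spectral element \(i\), \(F_i\) the large-scale forcing, \(m_B^{(j)}\) the cloud-base mass flux of element \(j\), and \(K_{ij}\) the interaction kernel. Quasi-equilibrium asserts that the rapid convective response (the sum) nearly cancels the slow forcing \(F_i\), which is exactly the timescale-separation assumption the paper shows does not carry over directly to the bulk analogue.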
Abstract:
Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes become less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and to represent the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, one which takes advantage of computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.
Abstract:
Accuracy and mesh generation are key issues for the high-resolution hydrodynamic modelling of the whole Great Barrier Reef. Our objective is to generate suitable unstructured grids that can resolve topological and dynamical features like tidal jets and recirculation eddies in the wake of islands. A new strategy is suggested to refine the mesh in areas of interest taking into account the bathymetric field and an approximated distance to islands and reefs. Such a distance is obtained by solving an elliptic differential operator, with specific boundary conditions. Meshes produced illustrate both the validity and the efficiency of the adaptive strategy. Selection of refinement and geometrical parameters is discussed. (c) 2006 Elsevier Ltd. All rights reserved.
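The idea of approximating distance to islands by solving an elliptic operator can be illustrated in one dimension with a screened-Poisson-type equation: solve phi - L^2 phi'' = 0 with phi = 1 on the island boundary and phi = 0 far away, then recover d ~ -L ln(phi). The length scale, grid, and boundary placement below are illustrative assumptions, not the paper's actual operator or parameters.

```python
import numpy as np

L = 5.0           # assumed smoothing length scale
n, dx = 200, 1.0  # illustrative 1-D transect away from an island
c = L**2 / dx**2

# Assemble the tridiagonal system (I - L^2 D2) phi = 0 with Dirichlet BCs
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    A[i, i] = 1.0 + 2.0 * c
    if i > 0:
        A[i, i - 1] = -c
    if i < n - 1:
        A[i, i + 1] = -c
A[0, :] = 0.0;  A[0, 0] = 1.0;   b[0] = 1.0    # island boundary at x = 0
A[-1, :] = 0.0; A[-1, -1] = 1.0; b[-1] = 0.0   # open ocean far away

phi = np.linalg.solve(A, b)
# phi decays roughly as exp(-x/L), so -L ln(phi) approximates distance
dist = -L * np.log(np.clip(phi, 1e-300, None))
```

The recovered `dist` grows monotonically away from the boundary, and because the field comes from an elliptic solve it is smooth, which is exactly what makes it usable as a mesh-size indicator around islands and reefs.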
Abstract:
Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. In particular, regions of intense flow are often highly localised, for example western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically adaptive meshes has many potential advantages, but needs to be guided by an error measure reflecting the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilising an adjoint or goal-based method, is described here. This method is based upon a functional encompassing important features of the flow structure. The sensitivity of this functional with respect to the solution variables is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind-driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which will ensure that areas of fine mesh resolution are used only where and when they are required. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation, less than about 1 m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically, most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it merits further testing. The LiDAR DTM is usually determined by looking for local minima in the raw data and then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data.
We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem, how best to merge historic river cross-section data with a LiDAR DTM, will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model. A further problem discussed will be how best to exploit the data redundancy arising from the high resolution of the LiDAR compared with that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m wide embankment within a raster grid model with a 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5 m wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
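The vegetation-height-to-friction mapping described above can be sketched as a simple lookup per floodplain cell. The class boundaries and Manning's n values below are illustrative placeholders, not calibrated coefficients from the text.

```python
import numpy as np

def manning_n_from_veg(height_m):
    """Map LiDAR-derived local vegetation height to a Manning friction
    coefficient per floodplain cell, giving spatially varying friction.
    The thresholds and n values here are illustrative assumptions."""
    h = np.asarray(height_m, dtype=float)
    n = np.full(h.shape, 0.03)   # bare ground / very short grass
    n[h >= 0.1] = 0.05           # short vegetation (grass, crops)
    n[h >= 1.0] = 0.10           # hedges and shrubs
    n[h >= 5.0] = 0.15           # trees
    return n

# One coefficient per cell: bare ground, short vegetation, hedge, tree
coeffs = manning_n_from_veg(np.array([0.02, 0.5, 2.0, 12.0]))
```

Because the coefficient field follows the vegetation map cell by cell, no single global floodplain friction value needs to be calibrated, which is the point made in the text.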
Abstract:
A simple theoretical model for the intensification of tropical cyclones and polar lows is developed using a minimal set of physical assumptions. These disturbances are assumed to be balanced systems intensifying through the WISHE (Wind-Induced Surface Heat Exchange) intensification mechanism, driven by surface fluxes of heat and moisture into an atmosphere which is neutral to moist convection. The equation set is linearized about a resting basic state and solved as an initial-value problem. A system is predicted to intensify with an exponential perturbation growth rate scaled by the radial gradient of an efficiency parameter which crudely represents the effects of unsaturated processes. The form of this efficiency parameter is assumed to be defined by initial conditions, dependent on the nature of a pre-existing vortex required to precondition the atmosphere to a state in which the vortex can intensify. Evaluation of the simple model using a primitive-equation, nonlinear numerical model provides support for the prediction of exponential perturbation growth. Good agreement is found between the simple and numerical models for the sensitivities of the measured growth rate to various parameters, including surface roughness, the rate of transfer of heat and moisture from the ocean surface, and the scale for the growing vortex.
Abstract:
A comparison of the models of Vitti et al. (2000, J. Anim. Sci. 78, 2706-2712) and Fernández (1995c, Livest. Prod. Sci. 41, 255-261) was carried out using two data sets on growing pigs as input. The two models compared were based on similar basic principles, although their aims and calculations differed. The Vitti model employs the rate:state formalism and describes phosphorus (P) flow between four pools representing P content in gut, blood, bone and soft tissue in growing goats. The Fernández model describes flow and fractional recirculation between P pools in gut, blood and bone in growing pigs. The results from both models showed similar trends for P absorption from gut to blood and net retention in bone with increasing P intake, with the exception of the 65 kg results from Data Set 2 calculated using the Fernández model. Endogenous loss from blood back to gut increased faster with increasing P intake in the Fernández model than in the Vitti model for Data Set 1. However, for Data Set 2, endogenous loss increased with increasing P intake using the Vitti model, but decreased when calculated using the Fernández model. Incorporation of P into bone was not influenced by intake in the Fernández model, while in the Vitti model there was an increasing trend. The Fernández model produced a pattern of decreasing resorption in bone with increasing P intake with one of the data sets, which was not observed when using the Vitti model. The pigs maintained their P homeostasis in blood by regulation of P excretion in urine. (c) 2005 Elsevier Ltd. All rights reserved.
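The rate:state formalism mentioned above can be sketched as a small compartmental ODE system: each flux is a rate constant times its source pool. The pool structure (gut, blood, bone, soft tissue) follows the text's description of the Vitti model, but every rate constant, the intake value, and the initial pool sizes below are illustrative assumptions, not the published parameters.

```python
import numpy as np

intake = 5.0              # dietary P entering the gut (illustrative)
k_ab, k_ba = 0.6, 0.1     # gut->blood (absorption), blood->gut (endogenous loss)
k_bb, k_bo = 0.2, 0.05    # blood->bone (incorporation), bone->blood (resorption)
k_bs, k_sb = 0.1, 0.08    # blood->soft tissue and back
k_f, k_u = 0.3, 0.02      # gut->faeces, blood->urine (excretion)

Q = np.array([2.0, 1.0, 10.0, 3.0])   # pools: gut, blood, bone, soft tissue
dt = 0.01
for _ in range(20000):                # simple forward-Euler integration
    gut, blood, bone, soft = Q
    dQ = np.array([
        intake + k_ba * blood - (k_ab + k_f) * gut,
        k_ab * gut + k_bo * bone + k_sb * soft
            - (k_ba + k_bb + k_bs + k_u) * blood,
        k_bb * blood - k_bo * bone,
        k_bs * blood - k_sb * soft,
    ])
    Q = Q + dt * dQ
```

At steady state the faecal plus urinary losses balance the intake, mirroring the homeostasis by urinary excretion noted in the abstract.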
Abstract:
1. Estimates of seed bank depletion rates are essential for modelling and management of plant populations. The seed bag burial method is often used to measure seed mortality in the soil. However, the density of seeds within seed bags is higher than densities in natural seed banks, which may elevate levels of pathogens and influence seed mortality. The aim of this study was to quantify the effects of fungi and seed density within buried mesh bags on the mortality of seeds. Striga hermonthica was chosen as the study species because it has been widely studied, yet different methods for measuring seed mortality in the soil have yielded contradictory estimates. 2. Seed bags were buried in soil and exhumed at regular time intervals to monitor mortality of the seeds in three field experiments during two rainy seasons. The effect of fungal activity on seed mortality was evaluated in a fungal-exclusion experiment. Differences in seed-to-seed interaction were created by using two and four seed densities within the bags in consecutive years. Densities were created by mixing 1000 seeds with 0, 10, 100 or 1000 g of coarse sand. 3. The mortality rate was significantly lower when fungi were excluded, indicating the possible role of pathogenic fungi. 4. Decreasing the density of seeds in bags significantly reduced seed mortality, most probably because of decreased seed-to-seed contamination by pathogenic fungi. 5. Synthesis and applications. Models of plant populations in general, and annual weeds in particular, often use values from the literature for seed bank depletion rates. These depletion rates have often been estimated by the seed bag burial method, yet seed density within seed bags may be unrealistically high. Consequently, estimates of seed mortality rates may be too high because of an overestimation of the effects of soil or seed-borne pathogens.
Species that have been classified from such studies as having short-lived seed banks may need to be re-assessed using realistic densities either within seed bags or otherwise. Similarly, models of seed bank dynamics based on such overestimated depletion rates may lead to incorrect conclusions regarding the seed banks and, perhaps, the management of weeds and rare species.
Abstract:
An example of the evolution of the interacting behaviours of parents and progeny is studied using iterative equations linking the frequencies of the gametes produced by the progeny to the frequencies of the gametes in the parental generation. This population genetics approach shows that a model in which both behaviours are determined by a single locus can lead to a stable equilibrium in which the two behaviours continue to segregate. A model in which the behaviours are determined by genes at two separate loci leads eventually to fixation of the alleles at both loci but this can take many generations of selection. Models of the type described in this paper will be needed to understand the evolution of complex behaviour when genomic or experimental information is available about the genetic determinants of behaviour and the selective values of different genomes. (c) 2007 Elsevier Inc. All rights reserved.
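The single-locus iterative approach described above can be sketched with the standard one-locus, two-allele selection recursion. Heterozygote advantage is used here because it yields a stable internal equilibrium at which both alleles keep segregating, analogous to the stable polymorphism described in the abstract; the fitness values are illustrative, not taken from the paper.

```python
# Genotype fitnesses (illustrative): heterozygote advantage
w_AA, w_Aa, w_aa = 0.9, 1.0, 0.8

def next_freq(p):
    """Frequency of the A gamete in the next generation."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa  # mean fitness
    return (p * p * w_AA + p * q * w_Aa) / w_bar

p = 0.01                      # rare A allele in the initial generation
for _ in range(500):          # iterate the recursion across generations
    p = next_freq(p)
# Analytic equilibrium: p* = (w_Aa - w_aa) / ((w_Aa - w_AA) + (w_Aa - w_aa))
```

The iteration converges to the internal equilibrium p* = 0.2/0.3 rather than fixing either allele, illustrating how such recursions can maintain segregating behaviours indefinitely.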
Abstract:
The structures of trimethylchlorogermane ((CH3)3GeCl) and trimethylbromogermane ((CH3)3GeBr) have been determined by gas-phase electron diffraction (GED), augmented by the results of ab initio calculations at the second-order Møller-Plesset (MP2) level of theory with the 6-311+G(d) basis set. All electrons were included in the correlation calculation. The ab initio calculations indicated that these molecules have C3v symmetry, and models with this symmetry were used in the electron diffraction analysis. The principal distances (rg) and angles (∠α) from the combined GED/ab initio study of trimethylchlorogermane (with estimated 2σ uncertainties) are: r(Ge-C) = 1.950(4) Å, r(Ge-Cl) = 2.173(4) Å, r(C-H) = 1.090(9) Å, ∠CGeC = 112.7(7)°, ∠CGeCl = 106.0(8)°, ∠GeCH = 107.8(12)°. The corresponding results for trimethylbromogermane are: r(Ge-C) = 1.952(7) Å, r(Ge-Br) = 2.325(4) Å, r(C-H) = 1.140(28) Å, ∠CGeC = 114.2(11)°, ∠CGeBr = 104.2(13)°, ∠GeCH = 106.9(43)°. Local C3v symmetry and a staggered conformation were assumed for the methyl groups.