907 results for Simulation Time-Step


Relevance: 80.00%

Abstract:

The infinitesimal differential quantum Monte Carlo (QMC) technique is used to estimate the electrostatic polarizabilities of the H and He atoms up to sixth order in the electric field perturbation. All 542 different QMC estimators of the nonzero atomic polarizabilities are derived and used in order to decrease the statistical error and obtain the maximum efficiency of the simulations. We are confident that the estimates are "exact" (free of systematic error): the two atoms are nodeless systems, hence no fixed-node error is introduced. Furthermore, we develop and use techniques which eliminate the systematic errors inherent in extrapolating our results to zero time step and large stack size. The QMC results are consistent with published accurate values obtained using perturbation methods. The precision is found to be related to the number of perturbations, varying from 2 to 4 significant digits.
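The extrapolation to zero time step mentioned above can be illustrated with a short numerical sketch; the time steps, observable values and error bars below are hypothetical, and a weighted low-order polynomial fit is only one common way to carry out such an extrapolation.

```python
# Minimal sketch (not the authors' code): extrapolating a QMC observable to zero
# time step by fitting a low-order polynomial in the time step and evaluating at 0.
import numpy as np

# Hypothetical estimates of an observable (e.g. a polarizability) at several
# finite time steps, with one-sigma statistical errors from the QMC runs.
dt  = np.array([0.08, 0.04, 0.02, 0.01])   # imaginary-time steps
obs = np.array([4.62, 4.55, 4.52, 4.51])   # raw QMC estimates
err = np.array([0.02, 0.02, 0.03, 0.04])   # statistical errors

# Weighted quadratic fit obs(dt) ~ a0 + a1*dt + a2*dt^2; a0 is the dt -> 0 limit.
coeffs = np.polyfit(dt, obs, deg=2, w=1.0 / err)
obs_zero_dt = np.polyval(coeffs, 0.0)
print(f"extrapolated (zero time step) value: {obs_zero_dt:.3f}")
```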

Relevance: 80.00%

Abstract:

The design of heterogeneous systems requires two important steps, namely modelling and simulation. Usually, simulators are linked and synchronised through a co-simulation bus. Current approaches have many drawbacks: they are not always suited to distributed environments, the simulation run time can be very disappointing, and each simulator has its own simulation kernel. We propose a new approach consisting of the development of a compiled multi-language simulator in which each model can be described using different modelling languages such as SystemC, ESyS.Net or others. Each model generally contains modules and the means of communication between them. The modules describe the functionality of the desired system. They are written using object-oriented programming and can be expressed in a syntax chosen by the user. We thus propose a separation between the modelling language and the simulation. The models are transformed into a common internal representation, which can be viewed as a set of objects. Our environment compiles the internal objects to produce unified code, instead of using several modelling languages that add many communication mechanisms and extra bookkeeping. Optimisations can include different mechanisms, such as grouping processes into a single sequential process while respecting the semantics of the models. We use two levels of abstraction: the register transfer level (RTL) and transaction level modeling (TLM). RTL allows modelling at a low level of abstraction, with communication between modules carried out through signals and signalling. TLM models transactional communication at a higher level of abstraction. Our objective is to support both types of simulation while leaving the choice of modelling language to the user. We also propose using a single kernel instead of several and removing the co-simulation bus in order to speed up simulation.
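The single-kernel idea can be illustrated with a toy sketch (hypothetical, not the thesis implementation): processes originating from different modelling front ends are reduced to plain callables scheduled by one unified kernel, with no co-simulation bus between separate simulators.

```python
# Minimal sketch (hypothetical): a single unified simulation kernel that executes
# processes translated from different modelling front ends (e.g. SystemC, ESyS.Net)
# as plain callables on one event queue, instead of synchronising separate
# simulators over a co-simulation bus.
import heapq
from typing import Callable, List, Tuple

class UnifiedKernel:
    def __init__(self) -> None:
        self._queue: List[Tuple[float, int, Callable[["UnifiedKernel"], None]]] = []
        self._seq = 0
        self.now = 0.0

    def schedule(self, delay: float, proc: Callable[["UnifiedKernel"], None]) -> None:
        heapq.heappush(self._queue, (self.now + delay, self._seq, proc))
        self._seq += 1

    def run(self, until: float) -> None:
        while self._queue and self._queue[0][0] <= until:
            self.now, _, proc = heapq.heappop(self._queue)
            proc(self)  # each process may schedule further events

# Two "modules" that, conceptually, could have come from different modelling languages.
def producer(k: UnifiedKernel) -> None:
    print(f"[{k.now:4.1f}] producer fires")
    k.schedule(2.0, consumer)

def consumer(k: UnifiedKernel) -> None:
    print(f"[{k.now:4.1f}] consumer fires")

kernel = UnifiedKernel()
kernel.schedule(0.0, producer)
kernel.run(until=10.0)
```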

Relevance: 80.00%

Abstract:

The modelling work was carried out with EGSnrc, a software package developed by the National Research Council of Canada.

Relevance: 80.00%

Abstract:

Worldwide, water managers are increasingly challenged to allocate sufficient and affordable water supplies to different water use sectors without further degrading river ecosystems and their valuable services to mankind. Since 1950 the human population has almost tripled, water abstractions have increased by a factor of four, and the number of large dams is about eight times higher today. From a hydrological perspective, the alteration of river flows (temporally and spatially) is one of the main consequences of global change, and further impairments can be expected given growing population pressure and projected climate change. The implications have been addressed in numerous hydrological studies, but with a clear focus on human water demands. Ecological water requirements have often been neglected or addressed in a very simplistic manner, particularly from the large-scale perspective. With his PhD thesis, Christof Schneider took up the challenge to assess direct (dam operation and water abstraction) and indirect (climate change) impacts of human activities on river flow regimes and to evaluate the consequences for river ecosystems by using a modeling approach. The global hydrology model WaterGAP3 (developed at CESR) was applied and further developed within this thesis to carry out several model experiments and assess anthropogenic river flow regime modifications and their effects on river ecosystems. To address the complexity of ecological water requirements, the assessment is based on three main ideas: (i) the natural flow paradigm, (ii) the perception that different flows have different ecological functions, and (iii) the flood pulse concept. The thesis shows that WaterGAP3 performs well in representing ecologically relevant flow characteristics on a daily time step, which justifies its application within this research field. For the first time, a methodology was established to estimate bankfull flow globally on a 5 by 5 arc-minute grid cell raster; bankfull flow is a key parameter in eFlow assessments as it marks the point where rivers hydraulically connect to adjacent floodplains. Management of dams and water consumption pose a risk to floodplains and riparian wetlands as flood volumes are significantly reduced. The thesis highlights that almost one-third of 93 selected Ramsar sites are seriously affected by modified inundation patterns today, and in the future inundation patterns are very likely to be further impaired as a result of new major dam initiatives and climate change. Global warming has been identified as a major threat to river flow regimes, as rising temperatures, declining snow cover, changing precipitation patterns and increasing climate variability are expected to seriously modify river flow regimes in the future. Flow regimes in all climate zones will be affected, in particular the polar zone (Northern Scandinavia), with higher river flows during the year and higher flood peaks in spring. On the other hand, river flows in the Mediterranean are likely to become even more intermittent in the future because of strong reductions in mean summer precipitation as well as a decrease in winter precipitation, leading to an increasing number of zero-flow events that create isolated pools along the river and transitions from lotic to lentic waters. As a result, strong impacts on river ecosystem integrity can be expected. Already today, large amounts of water are withdrawn in this region for agricultural irrigation, and climate change is likely to exacerbate the current situation of water shortages.
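As one illustration of how a bankfull flow could be estimated per grid cell, the sketch below approximates it as the discharge with a roughly 1.5-year return period computed from simulated annual maxima; this particular choice and the numbers are assumptions for illustration, not necessarily the method used in the thesis.

```python
# Minimal sketch (an assumption, not necessarily the thesis' method): approximate
# bankfull flow for one grid cell as the discharge with a ~1.5-year return period,
# estimated from simulated annual-maximum daily flows with Weibull plotting
# positions and linear interpolation.
import numpy as np

annual_max_flow = np.array([310., 270., 455., 390., 520., 295., 340., 410., 480., 365.])  # m3/s, hypothetical

sorted_flows = np.sort(annual_max_flow)[::-1]        # descending
n = len(sorted_flows)
return_period = (n + 1) / np.arange(1, n + 1)        # Weibull: T = (n+1)/rank

target_T = 1.5                                       # years
bankfull = np.interp(target_T, return_period[::-1], sorted_flows[::-1])
print(f"estimated bankfull flow: {bankfull:.0f} m3/s")
```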

Relevance: 80.00%

Abstract:

We present an immersed interface method for the incompressible Navier-Stokes equations capable of handling rigid immersed boundaries. The immersed boundary is represented by a set of Lagrangian control points. In order to guarantee that the no-slip condition on the boundary is satisfied, singular forces are applied to the fluid at the immersed boundary. The forces are related to the jumps in pressure and the jumps in the derivatives of both pressure and velocity, and are interpolated using cubic splines. The strengths of the singular forces are determined by solving a small system of equations at each time step. The Navier-Stokes equations are discretized on a staggered Cartesian grid by a second-order accurate projection method for pressure and velocity.
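The per-time-step solve for the singular force strengths can be sketched as a small dense linear system; the influence matrix below is a hypothetical smooth kernel used only to illustrate the structure of the computation, not the paper's actual discretisation.

```python
# Minimal sketch (schematic, not the paper's discretisation): at each time step the
# strengths of the singular forces at the Lagrangian control points are obtained by
# solving a small dense linear system so that the induced velocity at the control
# points cancels the slip left by the fluid solver. The influence matrix here is a
# hypothetical smooth kernel used only to show the structure of the solve.
import numpy as np

def influence_matrix(points: np.ndarray) -> np.ndarray:
    """Hypothetical force-to-velocity coupling between control points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.exp(-(d / 0.1) ** 2) + 1e-3 * np.eye(len(points))

# Control points on a rigid circular boundary.
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
pts = 0.25 * np.column_stack([np.cos(theta), np.sin(theta)])

A = influence_matrix(pts)
u_slip = 0.01 * np.sin(3 * theta)        # residual slip velocity at the control points
forces = np.linalg.solve(A, -u_slip)     # singular force strengths for this time step
print("max |force| this step:", np.abs(forces).max())
```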

Relevance: 80.00%

Abstract:

The characteristics of service independence and flexibility of ATM networks make the control problems of such networks very critical. One of the main challenges in ATM networks is to design traffic control mechanisms that enable both economically efficient use of the network resources and the desired quality of service for higher-layer applications. Window flow control mechanisms of traditional packet-switched networks are not well suited to real-time services at the speeds envisaged for future networks. In this work, the use of the Probability of Congestion (PC) as a bandwidth decision parameter is presented. The validity of using PC is compared with QoS parameters in bufferless environments, where only the cell loss ratio (CLR) parameter is relevant. The convolution algorithm is a good solution for connection admission control (CAC) in ATM networks with small buffers. If the source characteristics are known, the actual CLR can be estimated very well. Furthermore, this estimation is always conservative, allowing the network performance guarantees to be retained. Several experiments have been carried out and investigated to explain the deviation between the proposed method and simulation. Time parameters for burst length and different buffer sizes have been considered. Experiments to establish the limits of the burst length with respect to the buffer size conclude that a minimum buffer size is necessary to achieve adequate cell contention. Note that propagation delay is a limit that cannot be ignored for long-distance and interactive communications, so small buffers must be used in order to minimise delay. Under these premises, the convolution approach is the most accurate method for bandwidth allocation. This method gives sufficient accuracy in both homogeneous and heterogeneous networks. However, the convolution approach has a considerable computational cost and a high number of accumulated calculations. To overcome these drawbacks, a new evaluation method is analysed: the Enhanced Convolution Approach (ECA). In ECA, traffic is grouped into classes of identical parameters. By using the multinomial distribution function instead of the formula-based convolution, a partial state corresponding to each class of traffic is obtained. Finally, the global state probabilities are evaluated by multi-convolution of the partial results. This method avoids accumulated calculations and saves storage requirements, especially in complex scenarios. Sorting is the dominant factor for the formula-based convolution, whereas cost evaluation is the dominant factor for the enhanced convolution. A set of cut-off mechanisms is introduced to reduce the complexity of the ECA evaluation. The ECA also computes the CLR for each class j of traffic (CLRj), and an expression for evaluating CLRj is presented. We conclude that, by combining the ECA method with cut-off mechanisms, the use of ECA in real-time CAC environments as a single-level scheme is always possible.
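The class-wise convolution idea behind the ECA can be sketched as follows; the traffic classes, link capacity and the binomial per-class load model are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch (illustrative, not the paper's exact formulation): group on/off
# sources into classes of identical parameters, build each class's load distribution
# from a binomial law, and convolve the per-class distributions to get the
# aggregate-rate distribution on a bufferless link, from which an overflow-based
# congestion probability is read off.
import numpy as np
from math import comb

def class_load_pmf(n_sources: int, p_on: float, peak_cells: int, size: int) -> np.ndarray:
    """PMF of the load (in cell units) generated by one class of identical sources."""
    pmf = np.zeros(size)
    for k in range(n_sources + 1):
        prob = comb(n_sources, k) * p_on**k * (1 - p_on) ** (n_sources - k)
        pmf[k * peak_cells] += prob
    return pmf

# Two hypothetical traffic classes and a link capacity (all in cell units per interval).
size = 512
classes = [dict(n_sources=20, p_on=0.3, peak_cells=4),
           dict(n_sources=10, p_on=0.1, peak_cells=12)]
link_capacity = 100

agg = np.zeros(size)
agg[0] = 1.0
for c in classes:
    agg = np.convolve(agg, class_load_pmf(size=size, **c))[:size]

prob_congestion = agg[link_capacity + 1:].sum()
print(f"probability of congestion: {prob_congestion:.3e}")
```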

Relevance: 80.00%

Abstract:

A parametrization of ice supersaturation is introduced into the ECMWF Integrated Forecast System (IFS), compatible with the cloud scheme that allows partial cloud coverage. It is based on the simple, but often justifiable, diagnostic assumption that the ice nucleation and subsequent depositional growth time-scales are short compared to the model time step; thus supersaturation is only permitted in the clear-sky portion of the grid cell. Results from model integrations using the new scheme are presented; the scheme is shown to increase upper-tropospheric humidity and to decrease high-level cloud cover and, to a much lesser extent, cloud ice amounts, all as expected from simple arguments. Evaluation of the relative distribution of supersaturated humidity amounts shows good agreement with the observed climatology derived from in situ aircraft observations. With the new scheme, the global distribution of the frequency of occurrence of supersaturated regions compares well with remotely sensed Microwave Limb Sounder (MLS) data, with the most marked underprediction errors occurring in regions where the model is known to underpredict deep convection. Finally, it is also demonstrated that the new scheme leads to improved predictions of permanent contrail cloud over southern England, which indirectly implies that the upper-tropospheric humidity fields are better represented in this region.
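The diagnostic clear-sky assumption can be written down in a few lines: if the cloudy fraction of the grid box is held at ice saturation, the clear-sky humidity (and hence any supersaturation) follows from the grid-box mean. The sketch below is illustrative and not the IFS code; the numbers are hypothetical.

```python
# Minimal sketch (illustrative of the diagnostic assumption, not the IFS code): with
# the cloudy fraction of the grid box assumed to sit exactly at ice saturation, any
# supersaturation is carried by the clear-sky part only, whose humidity follows from
# the grid-box mean by removing the saturated cloudy contribution.
def clear_sky_rhi(q_mean: float, q_sat_ice: float, cloud_fraction: float) -> float:
    """Relative humidity over ice (fraction) in the clear-sky part of the grid box."""
    if cloud_fraction >= 1.0:
        return 1.0                       # fully cloudy: no clear-sky supersaturation
    q_clear = (q_mean - cloud_fraction * q_sat_ice) / (1.0 - cloud_fraction)
    return max(q_clear, 0.0) / q_sat_ice

# Example: grid-mean vapour just above ice saturation, 30% cloud cover.
print(f"clear-sky RHi: {clear_sky_rhi(q_mean=1.05e-4, q_sat_ice=1.0e-4, cloud_fraction=0.3):.2f}")
```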

Relevance: 80.00%

Abstract:

The Integrated Catchment Model of Nitrogen (INCA-N) was applied to the River Lambourn, a Chalk river system in southern England. The model's ability to simulate the long-term trend and seasonal patterns in observed stream water nitrate concentrations from 1920 to 2003 was tested. This is the first time a semi-distributed, daily time-step model has been applied to simulate such a long time period and then used to calculate detailed catchment nutrient budgets spanning the conversion of pasture to arable land during the late 1930s and 1940s. Thus, this work goes beyond source apportionment and looks to demonstrate how such simulations can be used to assess the state of the catchment and develop an understanding of system behaviour. The mass-balance results from 1921, 1922, 1991, 2001 and 2002 are presented, and those for 1991 are compared to other modelled and literature values of loads associated with nitrogen soil processes and export. The variations highlighted the problem of comparing modelled fluxes with point measurements but proved useful for identifying the most poorly understood inputs and processes, thereby providing an assessment of input data and model structural uncertainty. The modelled terrestrial and instream mass balances also highlight the importance of hydrological conditions in pollutant transport. Between 1922 and 2002, increased inputs of nitrogen from fertiliser, livestock and deposition altered the nitrogen balance, with a shift from a possible reduction in soil fertility but little environmental impact in 1922 to a situation of nitrogen accumulation in the soil, groundwater and instream biota in 2002. In 1922 and 2002 it was estimated that approximately 2 and 18 kg N ha⁻¹ yr⁻¹, respectively, were exported from the land to the stream. The utility of the approach and further considerations for the best use of models are discussed.
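Schematically, the catchment budgets behind these figures take the form of a simple mass balance; the term names below are illustrative placeholders rather than the exact INCA-N process list, with the export values quoted from the abstract.

```latex
% Schematic catchment nitrogen balance (term names illustrative, not the exact
% INCA-N process list):
\Delta N_{\mathrm{store}}
  = \underbrace{N_{\mathrm{fert}} + N_{\mathrm{livestock}} + N_{\mathrm{dep}}}_{\text{inputs}}
  - \underbrace{N_{\mathrm{uptake}} + N_{\mathrm{denit}} + N_{\mathrm{export}}}_{\text{outputs}},
\qquad
N_{\mathrm{export}} \approx 2 \ (1922) \ \text{and} \ 18 \ (2002)\ \mathrm{kg\,N\,ha^{-1}\,yr^{-1}}.
```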


Relevance: 80.00%

Abstract:

A new surface-crossing algorithm suitable for describing bond-breaking and bond-forming processes in molecular dynamics simulations is presented. The method is formulated for two intersecting potential energy manifolds which dissociate to different adiabatic states. During simulations, crossings are detected by monitoring an energy criterion. If it is fulfilled, the two manifolds are mixed over a finite number of time steps, after which the system is propagated on the second adiabat, so the crossing is carried out with probability one. The algorithm is extensively tested (almost 0.5 μs of total simulation time) for the rebinding of NO to myoglobin. The unbound surface (Fe···NO) is represented using a standard force field, whereas the bound surface (Fe-NO) is described by an ab initio potential energy surface. The rebinding is found to be nonexponential in time, in agreement with experimental studies, and can be described using two time constants. Depending on the asymptotic energy separation between the manifolds, the short rebinding timescale is between 1 and 9 ps, whereas the longer timescale is about an order of magnitude larger. NO molecules which do not rebind within 1 ns are typically found in the Xenon-4 pocket, indicating the high affinity of NO for this region of the protein.
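A one-dimensional toy version of the crossing scheme (not the myoglobin-NO setup; the potentials, threshold and mixing length are hypothetical) shows the three ingredients: an energy-gap criterion, mixing of the two surfaces over a finite number of time steps, and a switch that then happens with probability one.

```python
# Minimal sketch (1D toy, not the myoglobin-NO setup): propagate on one potential
# energy surface, detect a crossing when the energy gap drops below a threshold,
# then mix the two surfaces linearly over a fixed number of time steps before
# continuing on the second surface, so the switch happens with probability one.
import numpy as np

def v_bound(x):    return 0.5 * (x - 1.0) ** 2            # toy "bound" surface
def v_unbound(x):  return 0.3 * (x + 1.0) ** 2 + 0.1      # toy "unbound" surface

def force(v, x, h=1e-5):                                   # numerical gradient (sketch)
    return -(v(x + h) - v(x - h)) / (2 * h)

dt, n_mix, gap_threshold = 0.01, 50, 0.05
x, p, mass = -2.0, 2.5, 1.0
surface, mix_step = v_unbound, None

for step in range(5000):
    if mix_step is None and abs(v_bound(x) - v_unbound(x)) < gap_threshold:
        mix_step = 0                                        # crossing detected
    if mix_step is not None and mix_step <= n_mix:
        lam = mix_step / n_mix                              # ramp 0 -> 1 over n_mix steps
        pot = lambda y, l=lam: (1 - l) * v_unbound(y) + l * v_bound(y)
        mix_step += 1
        if mix_step > n_mix:
            surface = v_bound                               # committed to the second adiabat
    else:
        pot = surface
    p += force(pot, x) * dt                                 # simple symplectic Euler step
    x += p / mass * dt

print(f"final position {x:.2f} on {'bound' if surface is v_bound else 'unbound'} surface")
```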

Relevance: 80.00%

Abstract:

In this study we quantify the relationship between the aerosol optical depth increase from a volcanic eruption and the severity of the subsequent surface temperature decrease. This investigation is made by simulating 10 different sizes of eruption in a global circulation model (GCM) by changing the stratospheric sulfate aerosol optical depth at each time step. The sizes of the simulated eruptions range from Pinatubo-sized up to the magnitude of supervolcanic eruptions, around 100 times the size of Pinatubo. From these simulations we find that there is a smooth monotonic relationship between the global mean maximum aerosol optical depth anomaly and the global mean temperature anomaly, and we derive a simple mathematical expression which fits this relationship well. We also construct similar relationships between global mean aerosol optical depth and the temperature anomaly at every individual model grid box to produce global maps of best-fit coefficients and fit residuals. These maps are used with caution to find the eruption size at which a local temperature anomaly is clearly distinct from the local natural variability, and to approximate the temperature anomalies which the model may simulate following a Tambora-sized eruption. To our knowledge, this is the first study which quantifies the relationship between aerosol optical depth and the resulting temperature anomalies in a simple way, using the wealth of data that is available from GCM simulations.
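A sketch of the kind of simple fit described is shown below; both the aerosol optical depth/temperature pairs and the logarithmic functional form are hypothetical placeholders, not the values or the expression derived in the study.

```python
# Minimal sketch (hypothetical numbers and functional form, not the paper's fit):
# relating the global-mean maximum aerosol optical depth anomaly to the global-mean
# peak temperature anomaly with a smooth two-parameter curve dT = -a * ln(1 + AOD/b).
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical GCM results for eruptions from ~Pinatubo-sized up to ~100x Pinatubo.
aod = np.array([0.15, 0.3, 0.6, 1.5, 3.0, 6.0, 15.0])        # max AOD anomaly
dT  = np.array([-0.4, -0.7, -1.1, -2.0, -2.9, -3.9, -5.5])   # peak temperature anomaly (K)

model = lambda x, a, b: -a * np.log1p(x / b)
(a, b), _ = curve_fit(model, aod, dT, p0=(1.0, 0.5))
print(f"dT ~ -{a:.2f} * ln(1 + AOD/{b:.2f})")
print("residuals:", np.round(dT - model(aod, a, b), 2))
```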

Relevance: 80.00%

Abstract:

A unique cell-based histogram architecture is proposed which processes k data items in parallel to compute 2q histogram bins per time step. An array of m/2q cells computes an m-bin histogram with a speed-up factor of k; k ⩾ 2 makes it faster than current dual-ported memory implementations. Furthermore, simple mechanisms for conflict-free storing of the histogram bins into an external memory array are discussed.
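A behavioural software model of the cell array (an assumption about its organisation, not the hardware design itself) can make the bin partitioning concrete: each cell owns a block of 2q bins and absorbs the k samples routed to it each time step.

```python
# Minimal behavioural sketch (software model, not the hardware design): an array of
# m/(2q) cells, each owning a contiguous block of 2q bins, accepts k input samples
# per simulated time step and accumulates them into its local bins; at the end the
# per-cell blocks are concatenated into the m-bin histogram.
import numpy as np

m, q, k = 64, 4, 8                          # m bins, 2q bins per cell, k items per step
bins_per_cell = 2 * q
n_cells = m // bins_per_cell
cells = np.zeros((n_cells, bins_per_cell), dtype=np.int64)

rng = np.random.default_rng(0)
data = rng.integers(0, m, size=(1000, k))   # 1000 time steps of k samples each

for step_samples in data:                   # one "time step" of the architecture
    cell_idx, local_bin = divmod(step_samples, bins_per_cell)
    np.add.at(cells, (cell_idx, local_bin), 1)   # each cell accumulates its own block

histogram = cells.reshape(-1)
assert histogram.sum() == data.size
print("first cell's bins:", histogram[:bins_per_cell])
```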

Relevance: 80.00%

Abstract:

In this paper we consider boundary integral methods applied to boundary value problems for the positive definite Helmholtz-type problem −ΔU + α²U = 0 in a bounded or unbounded domain, with the parameter α real and possibly large. Applications arise in the implementation of space-time boundary integral methods for the heat equation, where α is proportional to 1/√(δt) and δt is the time step. The corresponding layer potentials arising from this problem depend nonlinearly on the parameter α and have kernels which become highly peaked as α → ∞, causing standard discretization schemes to fail. We propose a new collocation method with a robust convergence rate as α → ∞. Numerical experiments on a model problem verify the theoretical results.
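The peaking of the kernels can be seen from the free-space Green's functions of the operator, which decay on the length scale 1/α; these are standard results quoted for illustration rather than taken from the paper itself.

```latex
% Free-space Green's functions of -\Delta U + \alpha^2 U = 0 (standard results):
G_{2D}(x,y) = \frac{1}{2\pi}\, K_0\!\bigl(\alpha\,|x-y|\bigr),
\qquad
G_{3D}(x,y) = \frac{e^{-\alpha |x-y|}}{4\pi\,|x-y|},
\qquad
\alpha \propto \frac{1}{\sqrt{\delta t}}
\;\Rightarrow\; \text{the kernels sharpen as } \delta t \to 0 .
```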

Relevance: 80.00%

Abstract:

In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error on both the observations and the model estimates, with more weight given to data that can be trusted more. Any DA method requires an estimate of the initial forecast error covariance matrix. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective-column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency of the observations and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
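For reference, the sketch below shows a generic serial EnSRF update for a single scalar observation, using the Whitaker and Hamill (2002) reduced gain for the perturbations; it is a minimal illustration, not the idealised convective-column code used in the study.

```python
# Minimal sketch (generic serial EnSRF update for one scalar observation): the
# ensemble mean is updated with the usual Kalman gain, while the perturbations use
# the reduced gain of Whitaker & Hamill (2002), so no perturbed observations are needed.
import numpy as np

def ensrf_update(ens: np.ndarray, H: np.ndarray, y: float, r: float) -> np.ndarray:
    """ens: (n_state, n_members) prior ensemble; H: (n_state,) obs operator; r: obs error variance."""
    mean = ens.mean(axis=1)
    pert = ens - mean[:, None]
    hx = H @ pert                                   # obs-space perturbations, shape (n_members,)
    hpht = hx @ hx / (ens.shape[1] - 1)             # H P H^T
    pht = pert @ hx / (ens.shape[1] - 1)            # P H^T, shape (n_state,)
    gain = pht / (hpht + r)                         # Kalman gain for the mean
    alpha = 1.0 / (1.0 + np.sqrt(r / (hpht + r)))   # reduction factor for perturbations
    new_mean = mean + gain * (y - H @ mean)
    new_pert = pert - alpha * np.outer(gain, hx)
    return new_mean[:, None] + new_pert

# Tiny example: 3-variable state, 20 members, observe the first variable.
rng = np.random.default_rng(1)
ensemble = rng.normal(0.0, 1.0, size=(3, 20))
updated = ensrf_update(ensemble, H=np.array([1.0, 0.0, 0.0]), y=0.8, r=0.5**2)
print("posterior mean:", updated.mean(axis=1).round(3))
```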

Relevance: 80.00%

Abstract:

A recent paper published in this journal considers the numerical integration of the shallow-water equations using the leapfrog time-stepping scheme [Sun Wen-Yih, Sun Oliver MT. A modified leapfrog scheme for shallow water equations. Comput Fluids 2011;52:69–72]. The authors of that paper propose using the time-averaged height in the numerical calculation of the pressure-gradient force, instead of the instantaneous height at the middle time step. The authors show that this modification doubles the maximum Courant number (and hence the maximum time step) at which the integrations are stable, doubling the computational efficiency. Unfortunately, the pressure-averaging technique proposed by the authors is not original. It was devised and published by Shuman [5] and has been widely used in the atmosphere and ocean modelling community for over 40 years.
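For reference, the pressure-averaged leapfrog step for the linearised one-dimensional shallow-water equations is commonly written as below (a sketch for illustration, not copied from either paper): the continuity equation is stepped first, and the momentum equation then uses the time-averaged height in the pressure-gradient term.

```latex
% Pressure-averaged (Shuman) leapfrog step for the linearised 1D shallow-water
% equations, with mean depth H and gravity g (illustrative form):
h^{\,n+1} = h^{\,n-1} - 2H\,\Delta t\,\frac{\partial u^{\,n}}{\partial x},
\qquad
u^{\,n+1} = u^{\,n-1} - 2g\,\Delta t\,
  \frac{\partial}{\partial x}\!\left[\frac{h^{\,n+1} + 2h^{\,n} + h^{\,n-1}}{4}\right].
```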