979 results for approximate calculation of sums
Abstract:
A new electronic software distribution (ESD) life cycle analysis (LCA) methodology and model structure were constructed to calculate energy consumption and greenhouse gas (GHG) emissions. To improve on high-level, top-down modeling efforts and to increase result accuracy, the model focuses on device details and data routes. To compare ESD with a relevant physical distribution alternative, physical model boundaries and variables were also described. The methodology was compiled from the analysis and operational data of a major online store which provides both ESD and physical distribution options. The ESD method included the calculation of the power consumption of data center server and networking devices. An in-depth method to calculate server efficiency and utilization was also included, to account for virtualization and server efficiency features. Internet transfer power consumption was analyzed taking into account the number of data hops and the networking devices used. The power consumed by online browsing and downloading was also factored into the model. The embedded CO2e of server and networking devices was apportioned to each ESD process. Three U.K.-based ESD scenarios were analyzed using the model, which revealed potential CO2e savings of 83% when ESD was used over physical distribution. Results also highlighted the importance of server efficiency and utilization methods.
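The per-download accounting described above (server power, per-hop network power, client power, then a comparison against a physical alternative) can be sketched as simple arithmetic. All figures, function names and emission factors below are illustrative assumptions, not values from the study:

```python
# Toy per-download energy / CO2e model in the spirit of an ESD life-cycle
# analysis. Every constant here is an assumed placeholder value.

KWH_TO_KG_CO2E = 0.5          # assumed grid emission factor, kg CO2e per kWh

def esd_energy_kwh(file_gb, hops, server_kwh_per_gb=0.01,
                   router_kwh_per_gb_per_hop=0.002, client_kwh_per_gb=0.02):
    """Energy to deliver one download: server + network (per hop) + client."""
    network = hops * router_kwh_per_gb_per_hop * file_gb
    return server_kwh_per_gb * file_gb + network + client_kwh_per_gb * file_gb

def co2e_saving_fraction(esd_kwh, physical_kg_co2e):
    """Fractional CO2e saving of ESD relative to a physical alternative."""
    esd_kg = esd_kwh * KWH_TO_KG_CO2E
    return 1.0 - esd_kg / physical_kg_co2e

e = esd_energy_kwh(file_gb=1.0, hops=12)          # one 1 GB download, 12 hops
saving = co2e_saving_fraction(e, physical_kg_co2e=0.4)
print(f"ESD energy: {e:.3f} kWh, CO2e saving vs physical: {saving:.0%}")
```

Note how the network term scales with the hop count, which is why the abstract emphasises counting data hops and networking devices along the route.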
Abstract:
In this paper the authors exploit two equivalent formulations of the average rate of material entropy production in the climate system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. This approach is based only on 2D radiative fields at the surface and at the top of atmosphere. Using 2D fields at the top of atmosphere alone, lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle are derived. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamic processes, it is possible to substantiate a previous intuition that a baroclinic heat engine can be defined which extracts work from the meridional heat flux. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the Program for Climate Model Diagnosis and Intercomparison (PCMDI)/phase 3 of the Coupled Model Intercomparison Project (CMIP3) dataset in preindustrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large-scale meridional heat transport contributes only about 10% of the total. This suggests that the traditional two-box models used for providing a minimal representation of entropy production in planetary systems are not appropriate, whereas a basic—but conceptually correct—description can be framed in terms of a four-box model. The total material entropy production is typically 55 mW m⁻² K⁻¹, with discrepancies on the order of 5%, and CMs’ baroclinic efficiencies are clustered around 0.055. The lower bounds on the intensity of the Lorenz energy cycle featured by CMs are found to be around 1.0–1.5 W m⁻², which implies that the derived inequality is rather stringent.
When looking at the variability and covariability of the considered thermodynamic quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain. The contributions to material entropy production from vertical and horizontal processes are positively correlated, so that no compensation mechanism seems in place. Quite consistently among CMs, the variability of the efficiency of the system is a better proxy for variability of the entropy production due to horizontal processes than that of the large-scale heat flux. The possibility of providing constraints on the 3D dynamics of the fluid envelope based only on 2D observations of radiative fluxes seems promising for the observational study of planets and for testing numerical models.
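The box-model quantities discussed above can be illustrated with a minimal two-box calculation: heat transported from a warm to a cold region produces entropy at a rate set by the inverse-temperature difference, and a Carnot-like efficiency follows from the temperature contrast. The temperatures and the meridional heat flux below are rough illustrative assumptions, not the paper's diagnostics:

```python
# Toy two-box estimate of the entropy production associated with the
# large-scale meridional heat transport, plus a Carnot-like efficiency.
# All input numbers are illustrative assumptions.

EARTH_AREA = 5.1e14           # planetary surface area, m^2

def horizontal_entropy_production(q_meridional_w, t_warm, t_cold):
    """Entropy produced when heat Q flows from a warm box to a cold box,
    expressed per unit planetary area (W m^-2 K^-1)."""
    return (q_meridional_w / EARTH_AREA) * (1.0 / t_cold - 1.0 / t_warm)

def carnot_like_efficiency(t_warm, t_cold):
    """Efficiency of an ideal engine operating between the two boxes."""
    return (t_warm - t_cold) / t_warm

s_horiz = horizontal_entropy_production(5.0e15, t_warm=290.0, t_cold=275.0)
eta = carnot_like_efficiency(290.0, 275.0)
print(f"S_horiz ~ {s_horiz * 1e3:.2f} mW m^-2 K^-1, efficiency ~ {eta:.3f}")
```

Even with these crude inputs, the horizontal contribution comes out at a few mW m⁻² K⁻¹ and the efficiency at a few percent, consistent in order of magnitude with the ~10% horizontal share and ~0.055 efficiency clustering quoted in the abstract.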
Abstract:
A new parameterisation is described that predicts the temperature perturbations due to sub-grid scale orographic gravity waves in the atmosphere of the 19-level HadAM3 version of the United Kingdom Met Office Unified Model. The explicit calculation of the wave phase allows the sign of the temperature perturbation to be predicted. The scheme is used to create orographic clouds, including cirrus, that were previously absent from model simulations. A novel approach to the validation of this parameterisation makes use both of satellite observations of a case study and of a simulation in which the Unified Model is nudged towards ERA-40 assimilated winds, temperatures and humidities. It is demonstrated that this approach offers a feasible way of introducing large-scale orographic cirrus clouds into GCMs.
Abstract:
Experiments assimilating the RAPID dataset of deep temperature and salinity profiles at 26.5°N on the western and eastern Atlantic boundaries into a 1° global NEMO ocean model have been performed. The meridional overturning circulation (MOC) is then assessed against the transports calculated directly from observations. The best initialization found for this short period was obtained by assimilating the EN3 upper-ocean hydrography database prior to 2004, after which different methods of assimilating 5-day average RAPID profiles at the western boundary were tested. The model MOC is strengthened by ∼2 Sv, giving closer agreement with the RAPID array transports, when the western boundary profiles are assimilated only below 900 m (the approximate depth of the Florida Straits, which are not well resolved) and when the T, S observations are spread meridionally from 10 to 35°N along the deep western boundary. The use of boundary-focused covariances has the largest impact on the assimilation results; using more conventional Gaussian covariances instead has a very local impact on the MOC at 26°N, with strong adverse impacts on the MOC stream function at higher and lower latitudes. Even using boundary-focused covariances only enables the MOC to be strengthened for ∼2 years, after which the increased transport of warm waters leads to a negative feedback on water formation in the subpolar gyre which then reduces the MOC. This negative feedback can be mitigated if EN3 hydrography data continue to be assimilated along with the RAPID array boundary data. Copyright © 2012 Royal Meteorological Society and Crown in the right of Canada.
Abstract:
In view of the increasing interest in home-grown legumes as components of diets for non-ruminant livestock, and in an attempt to reduce the reliance on imported soya bean meal (SBM), two experiments were conducted to evaluate samples of peas and faba beans for their standardised ileal digestibility (SID) of amino acids determined with young broiler chicks. Experiment 1 evaluated six faba bean and seven pea cultivars, and Experiment 2 evaluated two faba bean and three pea cultivars as well as a sample of soya bean meal provided as a reference material. Peas and beans were added at 750 g/kg as the only source of protein/amino acids in a semi-synthetic diet containing the inert marker titanium dioxide; SBM was added, in a control diet, at 500 g/kg. Each diet was fed to six replicates of a cage containing two Ross-type broilers for 96 h, at which point birds were culled, allowing removal of ileal digesta. Chemical analyses allowed the calculation of the coefficient of SID of amino acids. There were no differences between samples of the same pulse species (P > 0.05), but peas had higher values than beans (P < 0.05), similar to those of SBM. Trypsin inhibitor content (expressed as g trypsin inhibitor units/mg sample) of all pea samples was low, in the range 0.83–1.77 mg/kg. There was relatively little variation in bean tannin content and composition amongst the coloured-flowered varieties; however, the white-flowered cultivar had no tannins. There was no correlation between tannin content and coefficient of SID. The content of SID amino acids (g/kg legume) was higher in SBM than in peas and beans, by virtue of its higher total concentrations.
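The marker-based arithmetic behind these coefficients follows a standard pattern: the ratio of amino acid to indigestible marker (here TiO2) in digesta versus diet gives apparent ileal digestibility (AID), and correcting AID for basal endogenous amino acid losses gives SID. A minimal sketch using these textbook formulas, with purely illustrative concentrations (not data from the experiments):

```python
# Marker-based digestibility arithmetic. Concentrations are in g/kg dry
# matter; all numeric inputs below are illustrative assumptions.

def apparent_ileal_digestibility(aa_diet, aa_digesta, ti_diet, ti_digesta):
    """AID coefficient from amino acid and TiO2 marker concentrations."""
    return 1.0 - (aa_digesta / aa_diet) * (ti_diet / ti_digesta)

def standardised_ileal_digestibility(aid, basal_endogenous_aa, aa_diet):
    """SID corrects AID for basal endogenous amino acid losses
    (expressed per kg of dry matter intake)."""
    return aid + basal_endogenous_aa / aa_diet

aid = apparent_ileal_digestibility(aa_diet=10.0, aa_digesta=8.0,
                                   ti_diet=5.0, ti_digesta=20.0)
sid = standardised_ileal_digestibility(aid, basal_endogenous_aa=1.0,
                                       aa_diet=10.0)
print(f"AID = {aid:.2f}, SID = {sid:.2f}")
```

Because the marker is indigestible, its enrichment in the digesta (here 5 → 20 g/kg) measures how much of the diet disappeared, which is what lets a single digesta sample yield a digestibility coefficient.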
Abstract:
Undirected graphical models are widely used in statistics, physics and machine vision. However, Bayesian parameter estimation for undirected models is extremely challenging, since evaluation of the posterior typically involves the calculation of an intractable normalising constant. This problem has received much attention, but very little of it has focussed on the important practical case where the data consist of noisy or incomplete observations of the underlying hidden structure. This paper specifically addresses this problem, comparing two alternative methodologies. In the first of these approaches, particle Markov chain Monte Carlo (Andrieu et al., 2010) is used to explore the parameter space efficiently, combined with the exchange algorithm (Murray et al., 2006) to avoid the calculation of the intractable normalising constant (a proof showing that this combination targets the correct distribution is found in a supplementary appendix online). This approach is compared with approximate Bayesian computation (Pritchard et al., 1999). Applications to estimating the parameters of Ising models and exponential random graphs from noisy data are presented. Each algorithm used in the paper targets an approximation to the true posterior, due to the use of MCMC to simulate from the latent graphical model in lieu of being able to do this exactly in general. The supplementary appendix also describes the nature of the resulting approximation.
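The key trick of the exchange algorithm is that simulating auxiliary data at the proposed parameter makes the intractable normalising constants cancel in the Metropolis-Hastings ratio. A minimal sketch for a tiny 1D Ising chain (noise-free, single-parameter, not the paper's particle MCMC setup; the auxiliary draw uses a short Gibbs run, so, as the abstract notes, the targeted posterior is only approximate; all settings are illustrative):

```python
import math
import random

random.seed(1)
N = 20                                     # length of the Ising chain

def suff_stat(y):
    """s(y): sum of products of neighbouring spins."""
    return sum(y[i] * y[i + 1] for i in range(len(y) - 1))

def gibbs_sample(theta, sweeps=30):
    """Approximate auxiliary draw from f(y|theta) ∝ exp(theta * s(y))."""
    y = [random.choice([-1, 1]) for _ in range(N)]
    for _ in range(sweeps):
        for i in range(N):
            nb = (y[i - 1] if i > 0 else 0) + (y[i + 1] if i < N - 1 else 0)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * theta * nb))
            y[i] = 1 if random.random() < p_up else -1
    return y

# "Observed" data simulated at a known coupling.
theta_true = 0.6
y_obs = gibbs_sample(theta_true, sweeps=200)
s_obs = suff_stat(y_obs)

# Exchange algorithm: flat prior on [0, 2], symmetric random-walk proposal.
# Normalising constants cancel, leaving
#   log alpha = (theta' - theta) * (s(y_obs) - s(y_aux)).
theta, chain = 1.0, []
for _ in range(500):
    prop = theta + random.gauss(0.0, 0.2)
    if 0.0 <= prop <= 2.0:                 # reject outside the flat prior
        y_aux = gibbs_sample(prop)
        log_alpha = (prop - theta) * (s_obs - suff_stat(y_aux))
        if math.log(random.random()) < log_alpha:
            theta = prop
    chain.append(theta)

print(f"posterior mean of theta ~ {sum(chain) / len(chain):.2f}")
```

The acceptance ratio never evaluates the partition function, which is exactly why the method applies to undirected models where that constant is intractable.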
Abstract:
A method is suggested for the calculation of the friction velocity for stable turbulent boundary-layer flow over hills. The method is tested using a continuous upstream mean velocity profile compatible with the propagation of gravity waves, and is incorporated into the linear model of Hunt, Leibovich and Richards with the modification proposed by Hunt, Richards and Brighton to include the effects of stability, and the reformulated solution of Weng for the near-surface region. Those theoretical results are compared with results from simulations using a non-hydrostatic microscale-mesoscale two-dimensional numerical model, and with field observations for different values of stability. These comparisons show a considerable improvement in the behaviour of the theoretical model when the friction velocity is calculated using the method proposed here, leading to a consistent variation of the boundary-layer structure with stability, and better agreement with observational and numerical data.
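For context, the baseline against which such a method improves is the standard similarity-theory estimate of the friction velocity in a stable surface layer: the log-law with a linear Monin-Obukhov stability correction, solved iteratively because the Obukhov length itself depends on u*. The sketch below is that generic textbook procedure, not the specific method proposed in the paper, and its inputs are assumed values:

```python
import math

# Iterative friction-velocity estimate for a stable surface layer.
# Standard Monin-Obukhov similarity, shown only for illustration.
KAPPA, G, BETA = 0.4, 9.81, 5.0        # von Karman const., gravity, MO coeff.

def friction_velocity(u, z, z0, heat_flux, t0, iterations=20):
    """Iterate u* = kappa*U / (ln(z/z0) + beta*z/L),
    with L = -u*^3 * T0 / (kappa * g * <w'theta'>)."""
    ustar = KAPPA * u / math.log(z / z0)          # neutral first guess
    for _ in range(iterations):
        L = -ustar**3 * t0 / (KAPPA * G * heat_flux)   # Obukhov length (> 0 here)
        ustar = KAPPA * u / (math.log(z / z0) + BETA * z / L)
    return ustar

# Assumed inputs: 5 m/s wind at 10 m, z0 = 0.1 m, weak downward heat flux.
u_star = friction_velocity(u=5.0, z=10.0, z0=0.1, heat_flux=-0.02, t0=285.0)
print(f"u* = {u_star:.3f} m/s")
```

The stability correction reduces u* below its neutral value, and the fixed-point iteration converges in a few steps for weakly stable conditions such as these.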
Abstract:
We present the first calculation of the complete NLO QCD corrections to the production of heavy flavors with longitudinally polarized hadrons. This reaction can be used at RHIC to extract the gluon helicity density and may shed light on the "heavy quark enigma". The theoretical uncertainties are briefly discussed.
Abstract:
We present all relevant details of our calculation of the complete next-to-leading order O(αₛ²α) QCD corrections to heavy flavor photoproduction with longitudinally polarized point-like photons and hadrons. In particular we provide analytical results for the virtual plus soft gluon cross section. We carefully address the relevance of remaining theoretical uncertainties by varying, for instance, the factorization and renormalization scales independently. Such studies are of importance for a meaningful first direct determination of the polarized gluon density Δg from the total charm production spin asymmetry by the upcoming COMPASS experiment. It is shown that the scale uncertainty is considerably reduced in next-to-leading order, but the dependence on the charm quark mass is sizable at fixed target energies. Finally, we study several differential single-inclusive heavy quark distributions and, for the polarized HERA option, the total bottom spin asymmetry.
Abstract:
We present a calculation of the next-to-leading order ... QCD corrections to heavy flavor photoproduction with longitudinally polarized beams. We apply our results to study the longitudinal spin asymmetry for the total charm quark production cross section which will be utilized by the forthcoming COMPASS experiment at CERN to obtain first direct information on the polarized gluon density Δg. We also briefly discuss the main theoretical uncertainties inherent in this calculation. In particular we demonstrate that the factorization scale dependence is considerably reduced in next-to-leading order.
Abstract:
The complete details of our calculation of the NLO QCD corrections to heavy flavor photo- and hadroproduction with longitudinally polarized initial states are presented. The main motivation for investigating these processes is the determination of the polarized gluon density at the COMPASS and RHIC experiments, respectively, in the near future. All methods used in the computation are extensively documented, providing a self-contained introduction to this type of calculation. Some of the tools employed may also be of general interest, e.g., the series expansion of hypergeometric functions. The relevant parton level results are collected and plotted in the form of scaling functions. However, the simplification of the obtained gluon-gluon virtual contributions has not been completed yet; thus NLO phenomenological predictions are only given in the case of photoproduction. The theoretical uncertainties of these predictions, in particular with respect to the heavy quark mass, are carefully considered. It is also shown that transverse momentum cuts can considerably enhance the measured production asymmetries. Finally, unpolarized heavy quark production is reviewed in order to derive conditions for a successful interpretation of future spin-dependent experimental data.
Abstract:
We establish Maximum Principles which apply to vectorial approximate minimizers of general integral functionals of the Calculus of Variations. Our main result is a version of the Convex Hull Property. The primary advance compared to results already existing in the literature is that we have dropped the quasiconvexity assumption on the integrand in the gradient term. The lack of weak lower semicontinuity is compensated by introducing a nonlinear convergence technique, based on the approximation of the projection onto a convex set by reflections and on the invariance of the integrand in the gradient term under the Orthogonal Group. Maximum Principles are also obtained for the relaxed solution in the case of non-existence of minimizers, and for minimizing solutions of the Euler–Lagrange system of PDEs.
Abstract:
The sensitivity of the biological parameters in a nutrient-phytoplankton-zooplankton-detritus (NPZD) model in the calculation of the air-sea CO2 flux, primary production and detrital export is analysed. We explore the effect on these outputs of variation in the values of the twenty parameters that control ocean ecosystem growth in a 1-D formulation of the UK Met Office HadOCC NPZD model used in GCMs. We use and compare the results from one-at-a-time and all-at-a-time perturbations performed at three sites in the EuroSITES European Ocean Observatory Network: the Central Irminger Sea (60° N 40° W), the Porcupine Abyssal Plain (49° N 16° W) and the European Station for Time series in the Ocean Canary Islands (29° N 15° W). Reasonable changes to the values of key parameters are shown to have a large effect on the calculation of the air-sea CO2 flux, primary production, and export of biological detritus to the deep ocean. Changes in the values of key parameters have a greater effect in more productive regions than in less productive areas. The most sensitive parameters are generally found to be those controlling well-established ocean ecosystem parameterisations widely used in many NPZD-type models. The air-sea CO2 flux is most influenced by variation in the parameters that control phytoplankton growth, detrital sinking and carbonate production by phytoplankton (the rain ratio). Primary production is most sensitive to the parameters that define the shape of the photosynthesis-irradiance curve. Export production is most sensitive to the parameters that control the rate of detrital sinking and the remineralisation of detritus.
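The one-at-a-time (OAT) perturbation strategy described above has a simple generic structure: run a baseline, vary each parameter in turn by a fixed fraction, and rank the relative output changes. A minimal sketch with a placeholder output function (the function and parameter names below are illustrative stand-ins, not HadOCC parameterisations):

```python
# One-at-a-time (OAT) sensitivity sketch: perturb each parameter by +/-10%
# and rank the resulting relative output changes. The toy "model" and its
# parameter names are placeholders, not the HadOCC NPZD model.

def toy_export(params):
    """Stand-in scalar output, e.g. detrital export (arbitrary units)."""
    return params["growth"] * params["light"] / (1.0 + params["remin"])

base = {"growth": 1.0, "light": 0.5, "remin": 0.1}
base_out = toy_export(base)

sensitivity = {}
for name in base:
    deltas = []
    for factor in (0.9, 1.1):                       # -10% and +10%
        p = dict(base)
        p[name] = base[name] * factor
        deltas.append(abs(toy_export(p) - base_out) / base_out)
    sensitivity[name] = max(deltas)                 # worst-case response

for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.1%}")
```

An all-at-a-time design, as also used in the study, additionally captures parameter interactions that OAT misses, which is why the abstract compares the two.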
Abstract:
In this contribution, we continue our exploration of the factors defining the Mesozoic climatic history. We improve the Earth system model GEOCLIM, designed for long-term climate and geochemical reconstructions, by adding the explicit calculation of biome dynamics using the LPJ model. The coupled GEOCLIM-LPJ model thus allows the simultaneous calculation of the climate with a 2-D spatial resolution, the coeval atmospheric CO2, and the continental biome distribution. We found that accounting for the climatic role of continental vegetation dynamics (albedo change, water cycle and surface roughness modulations) strongly affects the reconstructed geological climate. Indeed, the calculated partial pressure of atmospheric CO2 over the Mesozoic is twice the value calculated when assuming a uniform constant vegetation. This increase in CO2 is triggered by a global cooling of the continents, itself driven by a general increase in continental albedo owing to the development of desert surfaces. This cooling reduces the CO2 consumption through silicate weathering, and hence results in a compensating increase in the atmospheric CO2 pressure. This study demonstrates that the impact of land plants on climate, and hence on atmospheric CO2, is as important as their geochemical effect through the enhancement of chemical weathering of the continental surface. Our GEOCLIM-LPJ simulations also define a climatic baseline for the Mesozoic, around which exceptionally cool and warm events can be identified.
Abstract:
The solar and longwave environmental irradiance geometry (SOLWEIG) model simulates spatial variations of 3-D radiation fluxes and mean radiant temperature (Tmrt) as well as shadow patterns in complex urban settings. In this paper, a new vegetation scheme is included in SOLWEIG and evaluated. The new shadow casting algorithm for complex vegetation structures makes it possible to obtain continuous images of shadow patterns and sky view factors taking both buildings and vegetation into account. For the calculation of 3-D radiation fluxes and Tmrt, SOLWEIG only requires a limited number of inputs, such as global shortwave radiation, air temperature, relative humidity, geographical information (latitude, longitude and elevation) and urban geometry represented by high-resolution ground and building digital elevation models (DEMs). Trees and bushes are represented by separate DEMs. The model is evaluated using 5 days of integral radiation measurements at two sites within a square surrounded by low-rise buildings and vegetation in Göteborg, Sweden (57°N). There is good agreement between modelled and observed values of Tmrt, with an overall correspondence of R² = 0.91 (p < 0.01, RMSE = 3.1 K). A small overestimation of Tmrt is found at locations shadowed by vegetation. Given this good performance, a number of suggestions for future development are identified for applications including human comfort, building design, planning, and evaluation of instrument exposure.
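The two evaluation statistics quoted for modelled versus observed Tmrt follow standard definitions: RMSE as the root of the mean squared error, and R² as the squared Pearson correlation between the two series. A self-contained sketch with illustrative sample values (not the Göteborg measurements):

```python
import math

# Standard model-evaluation statistics for paired observed/modelled series.
# The sample numbers are illustrative, not the measurements from the study.

def rmse(obs, mod):
    """Root-mean-square error between observed and modelled values."""
    return math.sqrt(sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs))

def r_squared(obs, mod):
    """Squared Pearson correlation between observed and modelled series."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    vo = sum((o - mo) ** 2 for o in obs)
    vm = sum((m - mm) ** 2 for m in mod)
    return cov * cov / (vo * vm)

obs = [20.0, 25.0, 30.0, 35.0]       # e.g. observed Tmrt in deg C
mod = [21.0, 24.0, 31.0, 36.0]       # e.g. modelled Tmrt in deg C
print(f"R^2 = {r_squared(obs, mod):.3f}, RMSE = {rmse(obs, mod):.2f} K")
```

Note that a high R² with a nonzero RMSE, as reported in the abstract, indicates strong covariation with a residual scatter or bias of a few kelvin.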