56 results for Energy level splitting

in CentAUR: Central Archive University of Reading - UK


Relevance: 100.00%

Abstract:

Ruminant production is a vital part of the food industry, but it raises environmental concerns, partly due to the associated methane outputs. Efficient methane mitigation and estimation of emissions from ruminants require accurate prediction tools. Equations recommended by international organizations or scientific studies have been developed with animals fed conserved forages and concentrates, and may be used only with caution for grazing cattle. The aim of the current study was to develop prediction equations with animals fed fresh grass, in order to be more suitable for pasture-based systems and for animals at lower feeding levels. A study with 25 nonpregnant, nonlactating cows fed solely fresh-cut grass at maintenance energy level was performed over two consecutive grazing seasons. Grass of broad feeding quality, due to contrasting harvest dates, maturity, fertilisation and grass varieties, from eight swards was offered. Cows were offered the experimental diets for at least 2 weeks before being housed in calorimetric chambers for 3 consecutive days, with feed intake measurements and total urine and faeces collections performed daily. Methane emissions were measured over the last 2 days. Prediction models were developed from 100 3-day averaged records. Internal validation of these equations, and of those recommended in the literature, was performed. The existing models used in greenhouse gas inventories underestimated methane emissions from animals fed fresh-cut grass at maintenance, while the new models, using the same predictors, improved prediction accuracy. Prediction error for methane outputs decreased when grass nutrient, metabolisable energy and digestible organic matter concentrations were added as predictors to equations already containing dry matter or energy intakes, possibly because they describe feed digestibility and the type of energy-supplying nutrients more efficiently.
Predictions based on readily available farm-level data, such as liveweight and grass nutrient concentrations, were also generated and performed satisfactorily. The new models may be recommended for predictions of methane emissions from grazing cattle at maintenance or low feeding levels.
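The prediction equations described above are, at heart, linear regressions of methane output on intake and composition variables. As a minimal illustration (all numbers and the single-predictor choice below are hypothetical, not data or coefficients from the study), an equation of the form CH₄ = b₀ + b₁ × DMI can be fitted by ordinary least squares:

```python
# Hypothetical paired records: dry matter intake (kg DM/day) and measured
# methane output (g/day). Values are illustrative only, not study data.
dmi = [6.0, 7.5, 8.0, 9.2, 10.1, 11.0]
ch4 = [120.0, 150.0, 158.0, 180.0, 196.0, 212.0]

n = len(dmi)
mean_x = sum(dmi) / n
mean_y = sum(ch4) / n

# Ordinary least squares fit of CH4 = b0 + b1 * DMI
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(dmi, ch4))
sxx = sum((x - mean_x) ** 2 for x in dmi)
b1 = sxy / sxx
b0 = mean_y - b1 * mean_x

predicted = [b0 + b1 * x for x in dmi]
rmse = (sum((p - y) ** 2 for p, y in zip(predicted, ch4)) / n) ** 0.5
```

Adding further predictors such as grass metabolisable energy or digestible organic matter concentration, as the study does, amounts to extra columns in the regression design.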

Relevance: 90.00%

Abstract:

Pasture-based ruminant production systems are common in certain areas of the world, but energy evaluation in grazing cattle is performed with equations developed mostly with sheep, or with cattle fed total mixed rations. The aim of the current study was to develop predictions of metabolisable energy (ME) concentrations in fresh-cut grass offered to non-pregnant, non-lactating cows at maintenance energy level, which may be more suitable for grazing cattle. Data were collected from three digestibility trials performed over consecutive grazing seasons. In order to cover a range of commercial conditions and data availability in pasture-based systems, thirty-eight equations for the prediction of energy concentrations and ratios were developed. An internal validation was performed for all equations, as well as for existing predictions of grass ME. Prediction error for ME using nutrient digestibility was lowest when gross energy (GE) or organic matter digestibilities were used as sole predictors, while the addition of grass nutrient contents reduced the difference between predicted and actual values and explained more variation. Addition of N, GE and diethyl ether extract (EE) contents improved accuracy when digestible organic matter in DM was the primary predictor. When digestible energy was the primary explanatory variable, prediction error was relatively low, but addition of water-soluble carbohydrate, EE and acid-detergent fibre contents of grass decreased prediction error further. Equations developed in the current study showed lower prediction errors than existing equations, and may thus allow for an improved prediction of ME in practice, which is critical for the sustainability of pasture-based systems.

Relevance: 90.00%

Abstract:

Improved nutrient utilization efficiency is strongly related to enhanced economic performance and reduced environmental footprint of dairy farms. Pasture-based systems are widely used for dairy production in certain areas of the world, but prediction equations of fresh grass nutritive value (nutrient digestibility and energy concentrations) are limited. Equations to predict digestible energy (DE) and metabolizable energy (ME) used for grazing cattle have been either developed with cattle fed conserved forage and concentrate diets or sheep fed previously frozen grass, and the majority of them require measurements less commonly available to producers, such as nutrient digestibility. The aim of the present study was therefore to develop prediction equations more suitable to grazing cattle for nutrient digestibility and energy concentrations, which are routinely available at farm level by using grass nutrient contents as predictors. A study with 33 nonpregnant, nonlactating cows fed solely fresh-cut grass at maintenance energy level for 50 wk was carried out over 3 consecutive grazing seasons. Freshly harvested grass of 3 cuts (primary growth and first and second regrowth), 9 fertilizer input levels, and contrasting stage of maturity (3 to 9 wk after harvest) was used, thus ensuring a wide representation of nutritional quality. As a result, a large variation existed in digestibility of dry matter (0.642-0.900) and digestible organic matter in dry matter (0.636-0.851) and in concentrations of DE (11.8-16.7 MJ/kg of dry matter) and ME (9.0-14.1 MJ/kg of dry matter). Nutrient digestibilities and DE and ME concentrations were negatively related to grass neutral detergent fiber (NDF) and acid detergent fiber (ADF) contents but positively related to nitrogen (N), gross energy, and ether extract (EE) contents. 
For each predicted variable (nutrient digestibilities or energy concentrations), different combinations of predictors (grass chemical composition) were found to be significant and to increase the explained variation. For example, relatively high R² values were found for prediction of N digestibility using N and EE as predictors; gross-energy digestibility using EE, NDF, ADF, and ash; NDF, ADF, and organic matter digestibilities using N, water-soluble carbohydrates, EE, and NDF; digestible organic matter in dry matter using water-soluble carbohydrates, EE, NDF, and ADF; DE concentration using gross energy, EE, NDF, ADF, and ash; and ME concentration using N, EE, ADF, and ash. The equations presented may allow a relatively quick and easy prediction of grass quality and, hence, better grazing utilization on commercial and research farms where nutrient composition falls within the range assessed in the current study.

Relevance: 80.00%

Abstract:

Currently, microporous oxidic materials, including zeolites, are attracting interest as potential hydrogen storage materials. Understanding how molecular hydrogen interacts with these materials is important in the rational development of hydrogen storage materials and is also challenging theoretically. In this paper, we present an incoherent inelastic neutron scattering (INS) study of the adsorption of molecular hydrogen and hydrogen deuteride (HD) in a copper-substituted ZSM5 zeolite, varying the hydrogen dosage and temperature. We have demonstrated how inelastic neutron scattering can help us understand the interaction of H₂ molecules with a binding site in a particular microporous material, Cu ZSM5, and, by implication, with other similar materials. The H₂ molecule is bound as a single species lying parallel with the surface. As H₂ dosing increases, lateral interactions between the adsorbed H₂ molecules become apparent. With rising measurement temperature up to 70 K (the limit of our experiments), H₂ molecules remain bound to the surface, unlike an equivalent liquid or solid H₂ phase. The implication is that hydrogen is bound rather strongly in Cu ZSM5. Using a simple model of the anisotropic interaction to calculate the energy level splitting, we found that the measured rotational constant of the hydrogen molecule is reduced as a consequence of adsorption by the Cu ZSM5. From the decrease in total signal intensity with increasing temperature, we were able to observe the conversion of para-hydrogen into ortho-hydrogen at paramagnetic centres and so determine the fraction of paramagnetic sites occupied by hydrogen molecules, ca. 60%. (c) 2006 Elsevier B.V. All rights reserved.
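The rotational analysis referred to above can be stated compactly. For free H₂ the rotational levels follow the rigid-rotor formula, so INS observes the para→ortho (J = 0 → 1) transition at 2B ≈ 14.7 meV; a simple anisotropic hindering potential (a common model form, shown here for illustration, not necessarily the exact expression used in the paper) lifts the degeneracy of the J = 1 level:

```latex
% Free-rotor levels and the INS rotational line of H2
E_J = B\,J(J+1), \qquad \Delta E_{0 \to 1} = 2B \approx 14.7\ \text{meV}
% Simple anisotropic hindering potential (illustrative model form)
V(\theta) = \tfrac{1}{2} V_2 \left( 3\cos^2\theta - 1 \right)
```

Such a potential splits the threefold-degenerate J = 1 level into m_J = 0 and m_J = ±1 sublevels, and a rotational line shifted below 2B corresponds to an effectively reduced rotational constant, as reported above.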

Relevance: 80.00%

Abstract:

This paper introduces a new neurofuzzy model construction and parameter estimation algorithm for observed finite data sets, based on a Takagi and Sugeno (T-S) inference mechanism and a new extended Gram-Schmidt orthogonal decomposition algorithm, for the modeling of a priori unknown dynamical systems in the form of a set of fuzzy rules. The first contribution of the paper is the introduction of a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace using the T-S inference mechanism. This link enables the numerical properties associated with a rule-based matrix subspace, the relationships amongst these matrix subspaces, and the correlation between the output vector and a rule-base matrix subspace to be investigated and extracted as rule-based knowledge to enhance model transparency. The matrix subspace spanned by a fuzzy rule is initially derived as the input regression matrix multiplied by a weighting matrix that consists of the corresponding fuzzy membership functions over the training data set. Model transparency is explored by deriving an equivalence between an A-optimality experimental design criterion of the weighting matrix and the average model output sensitivity to the fuzzy rule, so that rule-bases can be effectively measured by their identifiability via the A-optimality experimental design criterion. The A-optimality experimental design criterion of the weighting matrices of fuzzy rules is used to construct an initial model rule-base. An extended Gram-Schmidt algorithm is then developed to estimate the parameter vector for each rule. This new algorithm decomposes the model rule-bases via an orthogonal subspace decomposition approach, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level.
This new approach is computationally simpler than the conventional Gram-Schmidt algorithm for resolving high-dimensional regression problems, where it is computationally desirable to decompose complex models into a few submodels rather than a single model with a large number of input variables and the associated curse of dimensionality. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
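The orthogonal decomposition at the core of the method is the classical Gram-Schmidt procedure. The sketch below shows only that textbook orthogonalisation step; the paper's extension to rule-base subspaces and parameter estimation is not reproduced:

```python
# Classical Gram-Schmidt orthogonalisation of a set of vectors.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            # Subtract the projection of the vector onto each basis vector.
            coeff = dot(w, q) / dot(q, q)
            w = [wi - coeff * qi for wi, qi in zip(w, q)]
        if any(abs(wi) > 1e-12 for wi in w):  # drop (near-)dependent vectors
            basis.append(w)
    return basis

basis = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

In the paper's setting, the "vectors" being orthogonalised are whole matrix subspaces spanned by fuzzy rules rather than single columns, but the projection-and-subtract structure is the same.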

Relevance: 80.00%

Abstract:

A new robust neurofuzzy model construction algorithm has been introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule-base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximized model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule-bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy level. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion used for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.

Relevance: 80.00%

Abstract:

The question of linear sheared-disturbance evolution in constant-shear parallel flow is here reexamined with regard to the temporary-amplification phenomenon first noted by Orr in 1907. The results apply directly to Rossby waves on a beta-plane, and are also relevant to the Eady model of baroclinic instability. It is shown that an isotropic initial distribution of standing waves maintains a constant energy level throughout the shearing process, the amplification of some waves being precisely balanced by the decay of the others. An expression is obtained for the energy of a distribution of disturbances whose wavevectors lie within a given angular wedge, and an upper bound is derived. It is concluded that the case for ubiquitous amplification made in recent studies may have been somewhat overstated: while carefully chosen individual Fourier components can amplify considerably before they decay, a general distribution will tend to exhibit little or no amplification.
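Orr's temporary amplification can be sketched for a single Fourier mode in two-dimensional flow with constant shear Λ (a standard textbook calculation, consistent with but not taken from the paper). With the disturbance vorticity amplitude conserved on the sheared wavevector:

```latex
% Wavevector of a single Fourier mode advected by U = \Lambda y
l(t) = l_0 - \Lambda k t
% With vorticity amplitude conserved, the mode's energy varies as
\frac{E(t)}{E(0)} = \frac{k^2 + l_0^2}{k^2 + l(t)^2}
```

A mode therefore amplifies while |l(t)| < |l₀|, i.e. while the shear is untilting a disturbance initially leaning against it, and decays thereafter; summed over an isotropic distribution the gains and losses cancel, consistent with the constant-energy result stated above.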

Relevance: 40.00%

Abstract:

Quantum calculations of the ground vibrational state tunneling splitting of H-atom and D-atom transfer in malonaldehyde are performed on a full-dimensional ab initio potential energy surface (PES). The PES is a fit to 11 147 near basis-set-limit frozen-core CCSD(T) electronic energies. This surface properly describes the invariance of the potential with respect to all permutations of identical atoms. The saddle-point barrier for the H-atom transfer on the PES is 4.1 kcal/mol, in excellent agreement with the reported ab initio value. Model one-dimensional and "exact" full-dimensional calculations of the splitting for H- and D-atom transfer are done using this PES. The tunneling splittings in full dimensionality are calculated using the unbiased "fixed-node" diffusion Monte Carlo (DMC) method in Cartesian and saddle-point normal coordinates. The ground-state tunneling splitting is found to be 21.6 cm⁻¹ in Cartesian coordinates and 22.6 cm⁻¹ in normal coordinates, with an uncertainty of 2-3 cm⁻¹. The splitting is also calculated with a model that makes use of the exact single-well zero-point energy (ZPE) obtained with the MULTIMODE code and the DMC ZPE; this calculation gives a tunneling splitting of 21-22 cm⁻¹. The corresponding computed splittings for the D-atom transfer are 3.0, 3.1, and 2-3 cm⁻¹. These calculated tunneling splittings agree with each other to within the standard uncertainties of the DMC method, which are between 2 and 3 cm⁻¹, and agree well with the experimental values of 21.6 and 2.9 cm⁻¹ for the H and D transfer, respectively. (C) 2008 American Institute of Physics.
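For reference, the quantity being computed is the gap between the lowest symmetric and antisymmetric states of the double-well proton-transfer potential (standard definitions, not notation from the paper):

```latex
% Tunneling splitting: gap between the lowest symmetric/antisymmetric pair
\Delta = E_1 - E_0, \qquad
\psi_{L,R} = \tfrac{1}{\sqrt{2}} \left( \psi_0 \pm \psi_1 \right)
% A proton prepared in one well shuttles between wells with period
\tau = \frac{2\pi\hbar}{\Delta} = \frac{1}{c\,\tilde{\nu}}
```

For the computed splitting of Δ ≈ 21.6 cm⁻¹ this corresponds to a well-to-well shuttling period of τ ≈ 1.5 ps.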

Relevance: 40.00%

Abstract:

We review the sea-level and energy budgets together from 1961, using recent and updated estimates of all terms. From 1972 to 2008, the observed sea-level rise (1.8 ± 0.2 mm yr−1 from tide gauges alone and 2.1 ± 0.2 mm yr−1 from a combination of tide gauges and altimeter observations) agrees well with the sum of contributions (1.8 ± 0.4 mm yr−1) in magnitude, with both showing similar increases in the rate of rise during the period. The largest contributions come from ocean thermal expansion (0.8 mm yr−1) and the melting of glaciers and ice caps (0.7 mm yr−1), with Greenland and Antarctica contributing about 0.4 mm yr−1. The cryospheric contributions increase through the period (particularly in the 1990s) but the thermosteric contribution increases less rapidly. We include an improved estimate of aquifer depletion (0.3 mm yr−1), partially offsetting the retention of water in dams and giving a total terrestrial storage contribution of −0.1 mm yr−1. Ocean warming (90% of the total of the Earth's energy increase) continues through to the end of the record, in agreement with continued greenhouse gas forcing. The aerosol forcing, inferred as a residual in the atmospheric energy balance, is estimated as −0.8 ± 0.4 W m−2 for the 1980s and early 1990s. It increases in the late 1990s, as is required for consistency with little surface warming over the last decade. This increase is likely at least partially related to substantial increases in aerosol emissions from developing nations and moderate volcanic activity.
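The budget closure quoted above can be checked arithmetically from the central estimates in the abstract (rounded values, uncertainties omitted):

```python
# Sea-level budget closure using the rounded central estimates quoted
# in the abstract (mm/yr, 1972-2008).
contributions = {
    "ocean thermal expansion": 0.8,
    "glaciers and ice caps": 0.7,
    "Greenland and Antarctica": 0.4,
    "terrestrial storage (dams minus aquifer depletion)": -0.1,
}
total = sum(contributions.values())  # sum of contributions, mm/yr
observed_tide_gauges = 1.8           # observed rise from tide gauges, mm/yr
residual = observed_tide_gauges - total
```

The contributions sum to 1.8 mm/yr, matching the tide-gauge estimate exactly at this rounding; the quoted ±0.4 mm/yr uncertainty on the sum is what makes the agreement meaningful rather than fortuitous.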

Relevance: 40.00%

Abstract:

The UK Government's Department for Energy and Climate Change has been investigating the feasibility of developing a national energy efficiency data framework covering both domestic and non-domestic buildings. Working closely with the Energy Saving Trust and energy suppliers, the aim is to develop a data framework to monitor changes in energy efficiency, develop and evaluate programmes, and improve information available to consumers. Key applications of the framework are to understand trends in built-stock energy use, identify drivers and evaluate the success of different policies. For energy suppliers, it could identify which energy uses are growing, in which sectors, and why. This would help with market segmentation and the design of products. For building professionals, it could supplement energy audits and modelling of end-use consumption with real data and support the generation of accurate and comprehensive benchmarks. This paper critically examines the results of the first phase of work to construct a national energy efficiency data framework for the domestic sector, focusing on two specific issues: (a) drivers of domestic energy consumption in terms of the physical nature of the dwellings and the socio-economic characteristics of occupants, and (b) the impact of energy efficiency measures on energy consumption.

Relevance: 40.00%

Abstract:

We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that, in cases of practical interest, our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.

Relevance: 40.00%

Abstract:

As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error, according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
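As an illustration of the idea (a brute-force sketch for tiny series, not the paper's algorithm or notation), the measure can be computed by minimising the point-wise error over all permutations that displace each forecast value by at most w time steps:

```python
from itertools import permutations

def adjusted_mae(actual, forecast, w):
    """Minimum MAE over permutations of the forecast that move each
    value at most w time steps. Brute force, illustrative only:
    feasible just for very short series."""
    n = len(actual)
    best = float("inf")
    for perm in permutations(range(n)):
        # Keep only permutations whose displacements stay within the window.
        if any(abs(p - i) > w for i, p in enumerate(perm)):
            continue
        err = sum(abs(actual[i] - forecast[p]) for i, p in enumerate(perm)) / n
        best = min(best, err)
    return best

actual   = [0.0, 5.0, 0.0, 0.0]
forecast = [5.0, 0.0, 0.0, 0.0]   # peak forecast one step early
```

With w = 0 the measure reduces to the ordinary MAE and the displaced peak is penalised twice (once as a miss, once as a false alarm); with w = 1 the permutation absorbs the one-step displacement and the error vanishes, which is exactly the "double penalty" reduction described above.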

Relevance: 30.00%

Abstract:

Street-level mean flow and turbulence govern the dispersion of gases away from their sources in urban areas. A suitable reference measurement in the driving flow above the urban canopy is needed to both understand and model complex street-level flow for pollutant dispersion or emergency response purposes. In vegetation canopies, a reference at mean canopy height is often used, but it is unclear whether this is suitable for urban canopies. This paper presents an evaluation of the quality of reference measurements at both roof-top level (height H) and at height z = 9H = 190 m, and of their ability to explain mean and turbulent variations of street-level flow. Fast-response wind data were measured at street canyon and reference sites during the six-week DAPPLE project field campaign in spring 2004 in central London, UK, and an averaging time of 10 min was used to distinguish recirculation-type mean flow patterns from turbulence. Flow distortion at each reference site was assessed by considering turbulence intensity and streamline deflection. Each reference was then used as the dependent variable in the model of Dobre et al. (2005), which decomposes street-level flow into channelling and recirculating components. The high reference explained more of the variability of the mean flow. Coupling of turbulent kinetic energy was also stronger between street level and the high reference flow than with the roof-top. This coupling was weaker when overnight flow was stratified and turbulence was suppressed at the high reference site; however, such events were rare (<1% of data) over the six-week period. The potential usefulness of a centralised, high reference site in London was thus demonstrated, with application to emergency response and air quality modelling.
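The flow-distortion assessment mentioned above relies on turbulence intensity; assuming the standard definition for the streamwise component (the abstract does not spell it out):

```latex
% Streamwise turbulence intensity over the 10-min averaging window
I_u = \frac{\sigma_u}{\overline{U}}
```

where σ_u is the standard deviation of the streamwise velocity over the averaging window and Ū the corresponding mean wind speed; anomalously high I_u at a reference site is a symptom of local flow distortion.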

Relevance: 30.00%

Abstract:

This paper presents the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes), a vertical (1-D) integrated radiative transfer and energy balance model. The model links visible to thermal infrared radiance spectra (0.4 to 50 μm), as observed above the canopy, to the fluxes of water, heat and carbon dioxide, as a function of vegetation structure and the vertical profiles of temperature. Output of the model is the spectrum of outgoing radiation in the viewing direction together with the turbulent heat fluxes, photosynthesis and chlorophyll fluorescence. A special routine is dedicated to the calculation of photosynthesis rate and chlorophyll fluorescence at the leaf level as a function of net radiation and leaf temperature. The fluorescence contributions from individual leaves are integrated over the canopy layer to calculate top-of-canopy fluorescence. The calculation of radiative transfer and the energy balance is fully integrated, allowing for feedback between leaf temperatures, leaf chlorophyll fluorescence and radiative fluxes. Leaf temperatures are calculated on the basis of energy balance closure. Model simulations were evaluated against observations reported in the literature and against data collected during field campaigns. These evaluations showed that SCOPE is able to reproduce realistic radiance spectra, directional radiance and energy balance fluxes. The model may be applied to the design of algorithms for the retrieval of evapotranspiration from optical and thermal earth observation data, to the validation of existing methods for monitoring vegetation functioning, to the interpretation of canopy fluorescence measurements, and to the study of the relationships between synoptic observations and diurnally integrated quantities. The model has been implemented in Matlab and has a modular design, allowing for great flexibility and scalability.
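The energy balance closure used to obtain leaf temperature is presumably the standard surface energy balance (the abstract does not give the exact notation); in the usual form:

```latex
% Leaf-level energy balance closed for leaf temperature T_l
R_n(T_l) = H(T_l) + \lambda E(T_l) + G
```

Net radiation R_n is partitioned into sensible heat H, latent heat λE and a ground/storage term G; since each term depends on leaf temperature, T_l is the value that closes the balance, which is what couples the radiative transfer, flux and fluorescence calculations described above.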