907 results for Cross-system comparison
Abstract:
Desertification is a major soil degradation problem in arid, semi-arid and sub-humid regions, with serious environmental, social and economic consequences resulting from the impact of human activities in combination with unfavourable physical and environmental conditions (UNEP, 1994). The main objective of this thesis was to develop a simple methodology for accurately assessing the state and evolution of desertification at the local scale, through the creation of a model called the desertification indicator system (DIS). In this context, one of the two specific objectives of this research focused on the study of the most important soil degradation factors at the plot scale, involving extensive fieldwork, laboratory analysis and the corresponding interpretation and discussion of the results obtained. The second specific objective was the development and application of the DIS. The selected study area was the Serra de Rodes catchment, a typical Mediterranean environment within the Cap de Creus Natural Park, NE Spain, which has been progressively abandoned by farmers over the last century. At present, forest fires, land-use change and especially land abandonment are considered the most important environmental problems in the study area (Dunjó et al., 2003). First, the processes and causes of soil degradation in the area of interest were studied. Based on this knowledge, the most relevant desertification indicators were identified and selected. Finally, the desertification indicators selected at the catchment scale, including soil erosion and surface runoff, were integrated into a spatial process model.
Since soil is considered the main indicator of erosion processes, according to FAO/UNEP/UNESCO (1979), both the original landscape and the two land-use scenarios developed, one based on the hypothetical case of a forest fire and the other on a fully cultivated landscape, can be classified as environments under low to moderate degradation. Compared with the original scenario, the two scenarios created revealed higher erosion and surface runoff values, particularly the cultivated scenario. Therefore, these two hypothetical scenarios do not appear to be a valid sustainable alternative to the degradation processes occurring in the study area. Nevertheless, a wide range of alternative scenarios can be developed with the DIS, taking into account policies of special interest for the region, so as to help determine the potential desertification consequences of such policies applied in this spatially complex setting. In conclusion, the model developed appears to be a fairly accurate system for identifying present and future risks, as well as for effectively planning measures to combat desertification at the catchment scale. However, this first version of the model has several limitations, and further research is needed if a future, improved version of the DIS is to be developed.
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation and hence the diabatic heating field over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind shear based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic-to-continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation, the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
Abstract:
Integrations of a fully-coupled climate model with and without flux adjustments in the equatorial oceans are performed under 2×CO2 conditions to explore in more detail the impact of increased greenhouse gas forcing on the monsoon-ENSO system. When flux adjustments are used to correct some systematic model biases, ENSO behaviour in the modelled future climate features distinct irregular and periodic (biennial) regimes. Comparison with the observed record yields some consistency with ENSO modes primarily based on air-sea interaction and those dependent on basinwide ocean wave dynamics. Simple theory is also used to draw analogies between the regimes and irregular (stochastically forced) and self-excited oscillations respectively. Periodic behaviour is also found in the Asian-Australian monsoon system, part of an overall biennial tendency of the model under these conditions related to strong monsoon forcing and increased coupling between the Indian and Pacific Oceans. The tropospheric biennial oscillation (TBO) thus serves as a useful descriptor for the coupled monsoon-ENSO system in this case. The presence of obvious regime changes in the monsoon-ENSO system on interdecadal timescales, when using flux adjustments, suggests there may be greater uncertainty in projections of future climate, although further modelling studies are required to confirm the realism and cause of such changes.
Abstract:
Different systems, different purposes – but how do they compare as learning environments? We undertook a survey of students at the University, asking whether they learned from their use of the systems, whether they made contact with other students through them, and how often they used them. Although it was a small-scale survey, the results are quite enlightening and quite surprising. Blackboard is populated with learning material, has all the students on a module signed up to it, offers a safe environment (in terms of Acceptable Use and some degree of staff monitoring) and provides privacy within the learning group (plus lecturer and relevant support staff). Facebook, on the other hand, has no learning material, only some of the students using the system, and on the face of it, it has the opportunity for slips in privacy and potential bullying because the Acceptable Use policy is more lax than an institutional one, and breaches must be dealt with on an exception basis, when reported. So why do more students find people on their courses through Facebook than Blackboard? And why are up to 50% of students reporting that they have learned from using Facebook? Interviews indicate that students in subjects which use seminars are using Facebook to facilitate working groups – they can set up private groups which give them privacy to discuss ideas in an environment which is perceived as safer than Blackboard can provide. No staff interference, unless they choose to invite them in, and the opportunity to select who in the class can engage. The other striking finding is the difference in use between the genders. Males are using Blackboard more frequently than females, whilst the reverse is true for Facebook. Interviews suggest that this may have something to do with needing to access lecture notes… Overall, though, it appears that there is little relationship between the time spent engaging with Blackboard and reports that students have learned from it.
Because Blackboard is our central repository for notes, any contact is likely to result in some learning. Facebook, however, shows a clear relationship between frequency of use and perception of learning – and our students post frequently to Facebook. Whilst much of this is probably trivia and social chit-chat, the educational elements of it are, de facto, constructivist in nature. Further questions need to be answered: is the reason the students learn from Facebook that they are creating content which others will see and comment on? Is it that they can engage in a dialogue, without the risk of interruption by others?
Abstract:
We discuss and test the potential usefulness of single-column models (SCMs) for the testing of stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial condition uncertainty and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.
Abstract:
A collection of 24 seawaters from various worldwide locations and differing depths was assembled to measure their chlorine isotopic composition (δ37Cl). These samples cover all the oceans and large seas: the Atlantic, Pacific, Indian and Antarctic oceans, and the Mediterranean and Red seas. This collection includes nine seawaters from three depth profiles down to 4560 mbsl. The standard deviation (2σ) of the δ37Cl of this collection is ±0.08‰, which is in fact as large as our precision of measurement (±0.10‰). Thus, within error, oceanic waters seem to be a homogeneous reservoir. According to our results, any seawater could be representative of Standard Mean Ocean Chloride (SMOC) and could be used as a reference standard. An extended international cross-calibration over a large range of δ37Cl has been completed. For this purpose, geological fluid samples of various chemical compositions and a manufactured CH3Cl gas sample, with δ37Cl from about -6‰ to +6‰, have been compared. Data were collected by gas source isotope ratio mass spectrometry (IRMS) at the Paris, Reading and Utrecht laboratories and by thermal ionization mass spectrometry (TIMS) at the Leeds laboratory. Comparison of IRMS values over the range -5.3‰ to +1.4‰ plots on the y = x line, showing very good agreement between the three laboratories. On 11 samples, the trend line between the Paris and Reading laboratories is: δ37Cl(Reading) = (1.007 ± 0.009)·δ37Cl(Paris) - (0.040 ± 0.025), with a correlation coefficient R² = 0.999. TIMS values from Leeds University have been compared to IRMS values from Paris University over the range -3.0‰ to +6.0‰.
On six samples, the agreement between these two laboratories, using different techniques, is good: δ37Cl(Leeds) = (1.052 ± 0.038)·δ37Cl(Paris) + (0.058 ± 0.099), with a correlation coefficient R² = 0.995. The present study completes a previous cross-calibration between the Leeds and Reading laboratories to compare TIMS and IRMS results (Anal. Chem. 72 (2000) 2261). Both studies allow a comparison of the IRMS and TIMS techniques for δ37Cl values from -4.4‰ to +6.0‰ and show good agreement: δ37Cl(TIMS) = (1.039 ± 0.023)·δ37Cl(IRMS) + (0.059 ± 0.056), with a correlation coefficient R² = 0.996. Our study shows that, for fluid samples, if chlorine isotopic compositions are near 0‰, their measurement either by IRMS or TIMS will give comparable results within less than ±0.10‰, while for δ37Cl values as far as 10‰ (either positive or negative) from SMOC, both techniques will agree within less than ±0.30‰. (C) 2004 Elsevier B.V. All rights reserved.
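The inter-laboratory trend lines above are ordinary least-squares fits of one laboratory's δ37Cl values against another's. As a minimal sketch (not the authors' code, and with invented paired values), such a calibration line and its R² can be computed as follows:

```python
# Illustrative cross-calibration fit: delta37Cl(lab B) = a * delta37Cl(lab A) + b.
# The paired per-mil values below are hypothetical, not from the study.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (a, b, r_squared)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical paired delta37Cl measurements (per mil) from two labs:
lab_a = [-5.3, -3.0, -1.2, 0.0, 1.4, 3.1, 6.0]
lab_b = [-5.25, -2.95, -1.18, 0.04, 1.45, 3.18, 6.08]
slope, intercept, r2 = linear_fit(lab_a, lab_b)
```

A slope near 1 and intercept near 0 with high R², as in the fits quoted above, indicates that the two laboratories agree across the measured range.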
Abstract:
The impacts of afforestation at Plynlimon in the Severn catchment, mid-Wales, and in the Bedford Ouse catchment in south-east England are evaluated using the INCA model to simulate nitrogen (N) fluxes and concentrations. The INCA model represents the key hydrological and N processes operating in catchments and simulates the daily dynamic behaviour as well as the annual fluxes. INCA has been applied to five years of data from the Hafren and Hore headwater sub-catchments (6.8 km² in total) of the River Severn at Plynlimon, and the model was calibrated and validated against field data. Simulation of afforestation is achieved by altering the uptake rate parameters in the model. INCA simulates the daily N behaviour in the catchments with good accuracy as well as reconstructing the annual budgets for N release following clearfelling: a four-fold increase in N fluxes was followed by a slow recovery after re-afforestation. For comparison, INCA has been applied to the large (8380 km²) Bedford Ouse catchment to investigate the impact of replacing 20% of arable land with forestry. The reduction in fertiliser inputs from arable farming and the N uptake by the forest are predicted to reduce the N flux reaching the main river system, leading to a 33% reduction in N-nitrate concentrations in the river water.
Abstract:
In developing techniques for monitoring the costs associated with different procurement routes, the central task is disentangling the various project costs incurred by organizations taking part in construction projects. While all firms are familiar with the need to analyse their own costs, it is unusual to apply the same kind of analysis to projects. The purpose of this research is to examine the claims that new ways of working such as strategic alliancing and partnering bring positive business benefits. This requires that costs associated with marketing, estimating, pricing, negotiation of terms, monitoring of performance and enforcement of contract are collected for a cross-section of projects under differing arrangements, and from those in the supply chain from clients to consultants, contractors, sub-contractors and suppliers. Collaboration with industrial partners forms the basis for developing a research instrument, based on time sheets, which will be relevant for all those taking part in the work. The signs are that costs associated with tendering are highly variable, 1-15%, depending upon what precisely is taken into account. The research to date reveals that there are mechanisms for measuring the costs of transactions and these will generate useful data for subsequent analysis.
Abstract:
During the twentieth century sea surface temperatures in the Atlantic Ocean exhibited prominent multidecadal variations. The source of such variations has yet to be rigorously established—but the question of their impact on climate can be investigated. Here we report on a set of multimodel experiments to examine the impact of patterns of warming in the North Atlantic, and cooling in the South Atlantic, derived from observations, that is characteristic of the positive phase of the Atlantic Multidecadal Oscillation (AMO). The experiments were carried out with six atmospheric General Circulation Models (including two versions of one model), and a major goal was to assess the extent to which key climate impacts are consistent between the different models. The major climate impacts are found over North and South America, with the strongest impacts over land found over the United States and northern parts of South America. These responses appear to be driven by a combination of an off-equatorial Gill response to diabatic heating over the Caribbean due to increased rainfall within the region and a northward shift in the Inter-Tropical Convergence Zone (ITCZ) due to the anomalous cross-equatorial SST gradient. The majority of the models show warmer US land temperatures and reduced mean sea level pressure during summer (JJA) in response to a warmer North Atlantic and a cooler South Atlantic, in line with observations. However, the majority of models show no significant impact on US rainfall during summer. Over northern South America, all models show reduced rainfall in southern-hemisphere winter (JJA), whilst in summer (DJF) there is generally an increase in rainfall. However, there is a large spread amongst the models in the magnitude of the rainfall anomalies over land. Away from the Americas, there are no consistent significant modelled responses.
In particular, there are no significant changes in the North Atlantic Oscillation (NAO) over the North Atlantic and Europe in winter (DJF). Additionally, the observed Sahel drying signal in African rainfall is not seen in the modelled responses, suggesting that, in contrast to some studies, the Atlantic Multidecadal Oscillation was not the primary driver of recent reductions in Sahel rainfall.
Abstract:
The European Centre for Medium-Range Weather Forecasts (ECMWF) 40-year Reanalysis (ERA-40) ozone and water vapor reanalysis fields during the 1990s have been compared with independent satellite data from the Halogen Occultation Experiment (HALOE) and Microwave Limb Sounder (MLS) instruments on board the Upper Atmosphere Research Satellite (UARS). In addition, ERA-40 has been compared with aircraft data from the Measurements of Ozone and Water Vapour by Airbus In-Service Aircraft (MOZAIC) program. Overall, in comparison with the values derived from the independent observations, the upper stratosphere in ERA-40 has about 5-10% more ozone and 15-20% less water vapor. This dry bias in the reanalysis appears to be global and extends into the middle stratosphere down to 40 hPa. Most of the discrepancies and seasonal variations between ERA-40 and the independent observations occur within the upper troposphere over the tropics and the lower stratosphere over the high latitudes. ERA-40 reproduces a weaker Antarctic ozone hole, of lesser vertical extent, than the independent observations; values in the ozone maximum in the tropical stratosphere are lower for the reanalysis. ERA-40 mixing ratios of water vapor are considerably larger than those for MOZAIC, typically by 20% in the tropical upper troposphere, and they may exceed 60% in the lower stratosphere over high latitudes. The results imply that the Brewer-Dobson circulation in the ECMWF reanalysis system is too fast, as is also evidenced by deficiencies in the way ERA-40 reproduces the water vapor "tape recorder" signal in the tropical stratosphere. Finally, the paper examines the biases and their temporal variation during the 1990s in the way ERA-40 compares to the independent observations. We also discuss how the evaluation results depend on the instrument used, as well as on the version of the data.
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM) which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square-error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominately the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
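The point-by-point verification described above rests on two standard statistics: mean-square error and the Pearson correlation coefficient between modelled and observed solar wind speed. A minimal sketch (with invented values, not CISM data) of how these metrics are computed:

```python
# Point-by-point forecast verification metrics: mean-square error and
# Pearson correlation. The speed series below are hypothetical examples.
import math

def mse(obs, mod):
    """Mean-square error between observed and modelled series."""
    return sum((o - m) ** 2 for o, m in zip(obs, mod)) / len(obs)

def pearson(obs, mod):
    """Pearson correlation coefficient between two series."""
    n = len(obs)
    mo = sum(obs) / n
    mm = sum(mod) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in mod))
    return cov / (so * sm)

# Hypothetical radial solar wind speeds (km/s) at L1:
observed = [380.0, 420.0, 510.0, 600.0, 550.0, 470.0, 430.0]
modelled = [390.0, 410.0, 480.0, 590.0, 570.0, 480.0, 420.0]
```

As the abstract notes, such point-by-point scores can penalize a physics-based model heavily for small timing offsets even when the large-scale solar wind structure is well captured.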
Abstract:
The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) is a World Weather Research Programme project. One of its main objectives is to enhance collaboration on the development of ensemble prediction between operational centers and universities by increasing the availability of ensemble prediction system (EPS) data for research. This study analyzes the prediction of Northern Hemisphere extratropical cyclones by nine different EPSs archived as part of the TIGGE project for the 6-month time period of 1 February 2008–31 July 2008, which included a sample of 774 cyclones. An objective feature tracking method has been used to identify and track the cyclones along the forecast trajectories. Forecast verification statistics have then been produced [using the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analysis as the truth] for cyclone position, intensity, and propagation speed, showing large differences between the different EPSs. The results show that the ECMWF ensemble mean and control have the highest level of skill for all cyclone properties. The Japan Meteorological Agency (JMA), the National Centers for Environmental Prediction (NCEP), the Met Office (UKMO), and the Canadian Meteorological Centre (CMC) have 1 day less skill for the position of cyclones throughout the forecast range. The relative performance of the different EPSs remains the same for cyclone intensity except for NCEP, which has larger errors than for position. NCEP, the Centro de Previsão de Tempo e Estudos Climáticos (CPTEC), and the Australian Bureau of Meteorology (BoM) all have faster intensity error growth in the earlier part of the forecast. They are also very underdispersive and significantly underpredict intensities, perhaps due to the comparatively low spatial resolutions of these EPSs not being able to accurately model the tilted structure essential to cyclone growth and decay.
There is very little difference between the levels of skill of the ensemble mean and control for cyclone position, but the ensemble mean provides an advantage over the control for all EPSs except CPTEC in cyclone intensity and there is an advantage for propagation speed for all EPSs. ECMWF and JMA have an excellent spread–skill relationship for cyclone position. The EPSs are all much more underdispersive for cyclone intensity and propagation speed than for position, with ECMWF and CMC performing best for intensity and CMC performing best for propagation speed. ECMWF is the only EPS to consistently overpredict cyclone intensity, although the bias is small. BoM, NCEP, UKMO, and CPTEC significantly underpredict intensity and, interestingly, all the EPSs underpredict the propagation speed, that is, the cyclones move too slowly on average in all EPSs.
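The ensemble-mean versus control comparison above can be illustrated with a toy computation: averaging the members at each lead time tends to cancel uncorrelated member errors, so the ensemble mean often verifies better than the control. A hedged sketch with invented numbers (not TIGGE data):

```python
# Toy comparison of ensemble-mean error vs. control-member error for a
# scalar cyclone property (central pressure, hPa). All values are invented.

def ensemble_mean(members):
    """Average the member forecasts at each lead time."""
    n = len(members)
    return [sum(m[t] for m in members) / n for t in range(len(members[0]))]

def mean_abs_error(forecast, truth):
    """Mean absolute error of a forecast series against the analysis."""
    return sum(abs(f - t) for f, t in zip(forecast, truth)) / len(truth)

# Analysis "truth", a control run, and four perturbed members over
# three lead times (hypothetical central pressures in hPa):
truth   = [980.0, 975.0, 985.0]
control = [984.0, 970.0, 991.0]
members = [
    [983.0, 972.0, 989.0],
    [978.0, 977.0, 982.0],
    [985.0, 971.0, 990.0],
    [979.0, 976.0, 984.0],
]
em = ensemble_mean(members)
```

In this constructed example the member errors partly offset one another, so `mean_abs_error(em, truth)` comes out smaller than `mean_abs_error(control, truth)`, mirroring the advantage of the ensemble mean for intensity and propagation speed reported above.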
Abstract:
The LINK Integrated Farming Systems (LINK-IFS) Project (1992-1997) was set up to compare conventional and integrated arable farming systems (IAFS), concentrating on practical feasibility and economic viability, but also taking into account the level of inputs used and environmental impact. As part of this, an examination of energy use within the two systems was also undertaken. This paper presents the results from that analysis. The data used are from the six sites within the LINK-IFS Project, spread through the arable production areas of England, and from the one site in Scotland, covering the 5 years of the project. The comparison of the energy used is based on the equipment and inputs used to produce 1 kg of each crop within the conventional and integrated rotations, and thereby the overall energy used for each system. The results suggest that, in terms of total energy used, the integrated system appears to be the most efficient. However, in terms of energy efficiency, energy use per kilogram of output, the results are less conclusive. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
Reducing carbon conversion of ruminally degraded feed into methane increases feed efficiency and reduces emission of this potent greenhouse gas into the environment. Accurate, yet simple, predictions of methane production of ruminants on any feeding regime are important in the nutrition of ruminants, and in modeling methane produced by them. The current work investigated feed intake, digestibility and methane production by open-circuit respiration measurements in sheep fed 15 untreated, sodium hydroxide (NaOH) treated and anhydrous ammonia (NH3) treated wheat, barley and oat straws. In vitro fermentation characteristics of straws were obtained from incubations using the Hohenheim gas production system that measured gas production, true substrate degradability, short-chain fatty acid production and efficiency of microbial production from the ratio of truly degraded substrate to gas volume. In the 15 straws, organic matter (OM) intake and in vivo OM digestibility ranged from 563 to 1201 g and from 0.464 to 0.643, respectively. Total daily methane production ranged from 13.0 to 34.4 l, whereas methane produced/kg OM apparently digested in vivo varied from 35.0 to 61.8 l. The OM intake was positively related to total methane production (R2 = 0.81, P<0.0001), and in vivo OM digestibility was also positively associated with methane production (R2 = 0.67, P<0.001), but negatively associated with methane production/kg digestible OM intake (R2 = 0.61, P<0.001). In the in vitro incubations of the 15 straws, the ratio of acetate to propionate ranged from 2.3 to 2.8 (P<0.05) and efficiencies of microbial production ranged from 0.21 to 0.37 (P<0.05) at half asymptotic gas production.
Total daily methane production, calculated from in vitro fermentation characteristics (i.e., true degradability, SCFA ratio and efficiency of microbial production) and OM intake, compared well with methane measured in the open-circuit respiration chamber (y = 2.5 + 0.86x, R2 = 0.89, P<0.0001, Sy.x = 2.3). Methane production from forage fed ruminants can be predicted accurately by simple in vitro incubations combining true substrate degradability and gas volume measurements, if feed intake is known.
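The prediction scheme above ultimately scales a methane yield per kilogram of digested organic matter by the digestible OM intake. A small worked sketch of that arithmetic (the coefficient and intakes are invented, chosen only to fall within the ranges reported above):

```python
# Toy intake-digestibility arithmetic for daily methane production.
# Coefficients are hypothetical, chosen within the reported ranges.

def daily_methane_l(om_intake_g, om_digestibility, ch4_per_kg_dom_l):
    """Methane (l/day) = digestible OM intake (kg/day) * yield (l/kg)."""
    dom_kg = om_intake_g * om_digestibility / 1000.0
    return dom_kg * ch4_per_kg_dom_l

# e.g. 1000 g OM intake at 0.55 digestibility and 45 l CH4 per kg
# digested OM gives 0.55 kg * 45 l/kg = 24.75 l/day.
ch4 = daily_methane_l(1000.0, 0.55, 45.0)
```

This simple scaling is why OM intake correlates so strongly with total methane production in the regressions quoted above; the in vitro incubations refine the yield term via degradability, SCFA ratio and microbial efficiency.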
Abstract:
Grass-based diets are of increasing social-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of supply and requirement of nutrients. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the agricultural and food research council (AFRC) ME system, and the feed into milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-60] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-97] to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on DM basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with predicted requirement based on observed performance. Assessment of the error of energy or nutrient supply relative to requirement is made by calculation of mean square prediction error (MSPE) and by concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply. 
The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), the FIM ME system (0.097) and the AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of the MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with the energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was also positively related to grass DMI level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing glucose maintenance and utilization requirement parameters to be variable. (C) 2007 Elsevier B.V. All rights reserved.
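The two evaluation statistics used above, mean square prediction error (MSPE) and the concordance correlation coefficient (CCC, which jointly penalizes imprecision and bias), can be sketched as follows. The predicted/observed pairs are invented, not taken from the 41-treatment dataset:

```python
# Minimal sketch of MSPE and Lin's concordance correlation coefficient
# (CCC) for predicted vs. observed values. Data below are hypothetical.

def mspe(pred, obs):
    """Mean square prediction error."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def ccc(pred, obs):
    """Concordance correlation coefficient: penalizes both scatter
    (imprecision) and systematic offset (bias) from the 1:1 line."""
    n = len(obs)
    mp = sum(pred) / n
    mo = sum(obs) / n
    vp = sum((p - mp) ** 2 for p in pred) / n
    vo = sum((o - mo) ** 2 for o in obs) / n
    cov = sum((p - mp) * (o - mo) for p, o in zip(pred, obs)) / n
    return 2 * cov / (vp + vo + (mp - mo) ** 2)

# Hypothetical predicted vs. observed energy supply (arbitrary units),
# with a small systematic overprediction as described in the abstract:
predicted = [105.0, 118.0, 96.0, 110.0, 124.0]
observed = [100.0, 112.0, 90.0, 105.0, 118.0]
```

A consistent overprediction, like the 6-11% overestimate of energy supply reported above, inflates the bias term in both statistics even when predictions track observations closely.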