889 results for The Lattice Solid Model
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were poorly correlated with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation. (c) 2007 Elsevier B.V. All rights reserved.
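A hedged sketch of the kind of linear mixed model described (the notation is illustrative, not the authors' exact formulation): the observations z_o(s) and predictions z_p(s) at location s can be modelled jointly as

\[ z_i(s) = m_i(s) + u_i(s) + e_i(s), \qquad i \in \{o, p\}, \]

where m_i(s) is a fixed-effect spatial trend, u_i(s) is spatially correlated random variation at the coarse scale with a cross-correlation between u_o and u_p, and e_i(s) is independent fine-scale (nugget) variation. The variance and correlation parameters are estimated by residual maximum likelihood (REML), so scale-dependent model skill can be read off from the fitted components.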
Abstract:
White clover (Trifolium repens) is an important pasture legume but is often difficult to sustain in a mixed sward because, among other things, of the damage to roots caused by the soil-dwelling larval stages of Sitona lepidus. Locating the root nodules on the white clover roots is crucial for the survival of the newly hatched larvae. This paper presents a numerical model to simulate the movement of newly hatched S. lepidus larvae towards the root nodules, guided by a chemical signal released by the nodules. The model is based on the diffusion-chemotaxis equation. Experimental observations showed that the average speed of the larvae remained approximately constant, so the diffusion-chemotaxis model was modified so that the larvae respond only to the gradient direction of the chemical signal and not to its magnitude. An individual-based lattice Boltzmann method was used to simulate the movement of individual larvae, and the parameters required for the model were estimated from measurements of larval movement towards nodules in soil scanned using X-ray microtomography. The model was used to investigate the effects of nodule density, the rate of release of the chemical signal, the sensitivity of the larvae to the signal, and the random foraging of the larvae on the movement and subsequent survival of the larvae. The simulations showed that the most significant factors for larval survival were nodule density and the sensitivity of the larvae to the signal. The dependence of larval survival rate on nodule density was well fitted by Michaelis-Menten kinetics. (c) 2005 Elsevier B.V. All rights reserved.
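A hedged sketch of the modified equation described (symbols are illustrative): writing n for larval density and c for signal concentration, the usual Keller-Segel chemotactic flux n\chi\nabla c is replaced by a flux along the unit gradient vector, so the response depends on the direction of the gradient but not its magnitude:

\[ \frac{\partial n}{\partial t} = \nabla\cdot(D_n \nabla n) - \nabla\cdot\left( n\, v\, \frac{\nabla c}{|\nabla c|} \right), \qquad \frac{\partial c}{\partial t} = \nabla\cdot(D_c \nabla c) + q \sum_j \delta(x - x_j), \]

with v the (approximately constant) mean larval speed, q the release rate of the signal, and x_j the nodule positions. This form is consistent with the observation above that average larval speed stays roughly constant.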
Abstract:
Observations show the oceans have warmed over the past 40 yr, with appreciable regional variation and more warming at the surface than at depth. Comparing the observations with results from two coupled ocean-atmosphere climate models [the Parallel Climate Model version 1 (PCM) and the Hadley Centre Coupled Climate Model version 3 (HadCM3)] that include anthropogenic forcing shows remarkable agreement between the observed and model-estimated warming. In this comparison the models were sampled at the same locations as the gridded yearly observed data. In the top 100 m of the water column the warming is well separated from natural variability, including both variability arising from internal instabilities of the coupled ocean-atmosphere climate system and that arising from volcanism and solar fluctuations. Between 125 and 200 m the agreement is not significant, but it increases again below this level and remains significant down to 600 m. Analysis of PCM's heat budget indicates that the warming is driven by an increase in net surface heat flux that reaches 0.7 W m(-2) by the 1990s; the downward longwave flux increases by 3.7 W m(-2), which is not fully compensated by an increase in the upward longwave flux of 2.2 W m(-2). Latent and net solar heat fluxes each decrease by about 0.6 W m(-2). The changes in the individual longwave components are distinguishable from the preindustrial mean by the 1920s, but due to cancellation of components, changes in the net surface heat flux do not become well separated from zero until the 1960s. Changes in advection can also play an important role in local ocean warming due to anthropogenic forcing, depending on the location. The observed sampling of ocean temperature is highly variable in space and time, but sufficient to detect the anthropogenic warming signal in all basins, at least in the surface layers, by the 1980s.
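To make the budget arithmetic explicit (a sketch; the sign convention and the sensible-heat term are assumptions, since the abstract does not quote the latter): taking fluxes positive into the ocean,

\[ \Delta Q_{net} = (\Delta LW_{\downarrow} - \Delta LW_{\uparrow}) - \Delta LH - \Delta SH + \Delta SW = (3.7 - 2.2) + 0.6 - \Delta SH - 0.6 \ \mathrm{W\,m^{-2}}, \]

so the quoted net change of 0.7 W m(-2) would imply an increase in upward sensible heat flux of roughly 0.8 W m(-2). This closing figure is an inference from the numbers above, not a value reported in the abstract.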
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
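The standard correctness test alluded to is typically a Taylor-series check (a generic sketch, not necessarily the paper's exact formulation): for a nonlinear model M and its tangent linear model L, correct coding requires

\[ R(\varepsilon) = \frac{\| M(x + \varepsilon\,\delta x) - M(x) \|}{\| \varepsilon\, L\, \delta x \|} \to 1 \quad \text{as } \varepsilon \to 0, \]

with the departure from 1 shrinking linearly in \varepsilon. The difficulty described above is that the two linearizations have different asymptotic behaviour as the perturbation is reduced, so R(\varepsilon) need not converge cleanly even for correctly coded models, motivating the alternative test the authors propose.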
Abstract:
Ozone and temperature profiles from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) have been assimilated, using three-dimensional variational assimilation, into a stratosphere-troposphere version of the Met Office numerical weather-prediction system. Analyses are made for the month of September 2002, when there was an unprecedented split in the southern hemisphere polar vortex. The analyses are validated against independent ozone observations from sondes, limb-occultation and total column ozone satellite instruments. Through most of the stratosphere, precision varies from 5 to 15%, and biases are 15% or less of the analysed field. Problems remain in the vortex and below the 60 hPa level, especially at the tropopause, where the analyses have too much ozone and poor agreement with independent data. Analysis problems are largely a result of the model rather than the data, giving confidence in the MIPAS ozone retrievals, though there may be a small high bias in MIPAS ozone in the lower stratosphere. Model issues include an excessive Brewer-Dobson circulation, which results both from known problems with the tracer transport scheme and from the data assimilation of dynamical variables. The extreme conditions of the vortex split reveal large differences between existing linear ozone photochemistry schemes. Despite these issues, the ozone analyses successfully describe the ozone hole split and compare well to other studies of this event. Recommendations are made for the further development of the ozone assimilation system.
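For reference, three-dimensional variational assimilation minimises the standard cost function (textbook form; the Met Office implementation details are not given in the abstract):

\[ J(x) = \tfrac{1}{2} (x - x_b)^{\mathrm T} B^{-1} (x - x_b) + \tfrac{1}{2} \big(y - H(x)\big)^{\mathrm T} R^{-1} \big(y - H(x)\big), \]

where x_b is the background state, B and R are the background- and observation-error covariance matrices, y the MIPAS ozone and temperature retrievals, and H the observation operator mapping model state to observation space.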
Abstract:
In this paper we focus on the one-year-ahead prediction of the electricity peak-demand daily trajectory during the winter season in Central England and Wales. We define a Bayesian hierarchical model for predicting the winter trajectories and present results based on the past observed weather. Thanks to the flexibility of the Bayesian approach, we are able to produce the marginal posterior distributions of all the predictands of interest. This is fundamental progress with respect to the classical methods. The results are encouraging in both skill and representation of uncertainty. Further extensions are straightforward, at least in principle. The two main ones consist in conditioning the weather generator model on additional information, such as knowledge of the first part of the winter and/or the seasonal weather forecast. Copyright (C) 2006 John Wiley & Sons, Ltd.
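As a minimal illustration of producing marginal posterior predictive distributions for a winter trajectory (a toy conjugate-normal sketch, not the authors' model; the data, priors and variable names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data standing in for past observed winters: daily peak demand (GW),
# 10 winters x 90 winter days.
past = 50.0 + 5.0 * rng.standard_normal((10, 90))

# Conjugate-normal hierarchy: each day-of-winter mean mu_d has a
# Normal(m0, tau0^2) prior; observation variance sigma2 treated as known.
sigma2 = past.var(axis=0).mean()
m0, tau02 = 50.0, 25.0
n = past.shape[0]
ybar = past.mean(axis=0)

# Standard conjugate posterior update for each day's mean.
post_var = 1.0 / (1.0 / tau02 + n / sigma2)
post_mean = post_var * (m0 / tau02 + n * ybar / sigma2)

# Posterior predictive draws of next winter's 90-day trajectory.
draws = rng.normal(post_mean, np.sqrt(post_var + sigma2), size=(4000, 90))

# Marginal posterior predictive distribution of any predictand of interest,
# e.g. the winter maximum of daily peak demand.
winter_max = draws.max(axis=1)
print(np.percentile(winter_max, [5, 50, 95]))
```

Conditioning on additional information, such as the first part of the winter, would amount to replacing the prior for the remaining days with a posterior updated on the observed early-winter days.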
Abstract:
One of the primary goals of the Center for Integrated Space Weather Modeling (CISM) effort is to assess and improve prediction of the solar wind conditions in near-Earth space, arising from both quasi-steady and transient structures. We compare 8 years of L1 in situ observations to predictions of the solar wind speed made by the Wang-Sheeley-Arge (WSA) empirical model. The mean-square error (MSE) between the observed and model predictions is used to reach a number of useful conclusions: there is no systematic lag in the WSA predictions, the MSE is found to be highest at solar minimum and lowest during the rise to solar maximum, and the optimal lead time for 1 AU solar wind speed predictions is found to be 3 days. However, MSE is shown frequently to be an inadequate "figure of merit" for assessing solar wind speed predictions. A complementary, event-based analysis technique is developed in which high-speed enhancements (HSEs) are systematically selected and associated from observed and model time series. The WSA model is validated using comparisons of the number of hit, missed, and false HSEs, along with the timing and speed magnitude errors between the forecasted and observed events. Morphological differences between the different HSE populations are investigated to aid interpretation of the results and improvements to the model. Finally, by defining discrete events in the time series, model predictions from above and below the ecliptic plane can be used to estimate an uncertainty in the predicted HSE arrival times.
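A hedged sketch of the event-association step (the window length, greedy pairing rule, and names are assumptions, not the paper's exact procedure): pair each observed HSE with the nearest unmatched forecast event inside a time window, then count hits, misses and false alarms.

```python
from datetime import timedelta

def match_hses(observed, forecast, window=timedelta(days=2)):
    """Greedy one-to-one association of observed and forecast HSE times.

    observed, forecast: lists of datetime objects (event arrival times).
    Returns hit pairs, miss count, false-alarm count, timing errors (hours).
    """
    hits, timing_errors = [], []
    unmatched = set(range(len(forecast)))
    for t_obs in sorted(observed):
        # Nearest still-unmatched forecast event to this observed event.
        best = min(unmatched, key=lambda j: abs(forecast[j] - t_obs),
                   default=None)
        if best is not None and abs(forecast[best] - t_obs) <= window:
            hits.append((t_obs, forecast[best]))
            timing_errors.append(
                (forecast[best] - t_obs).total_seconds() / 3600.0)
            unmatched.remove(best)
    misses = len(observed) - len(hits)
    false_alarms = len(unmatched)
    return hits, misses, false_alarms, timing_errors
```

Speed-magnitude errors follow the same pattern once each event carries a peak speed, and the hit/miss/false-alarm counts feed standard categorical skill scores.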
Abstract:
We introduce a technique for assessing the diurnal development of convective storm systems based on outgoing longwave radiation fields. Using the size distribution of the storms measured from a series of images, we generate an array in the lengthscale-time domain based on the standard score statistic. It demonstrates succinctly the size evolution of storms as well as the dissipation kinematics. It also provides evidence related to the temperature evolution of the cloud tops. We apply this approach to a test case comparing observations made by the Geostationary Earth Radiation Budget instrument to output from the Met Office Unified Model run at two resolutions. The 12 km resolution model produces peak convective activity on all lengthscales significantly earlier in the day than shown by the observations and no evidence for storms growing in size. The 4 km resolution model shows realistic timing and growth evolution, although the dissipation mechanism still differs from the observed data.
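The standard score statistic named above presumably takes the familiar form (a sketch; the exact normalisation is not given in the abstract): for a storm-size measure N(\ell, t) at lengthscale \ell and image time t,

\[ z(\ell, t) = \frac{N(\ell, t) - \mu_\ell}{\sigma_\ell}, \]

where \mu_\ell and \sigma_\ell are the mean and standard deviation over the image series at that lengthscale, so each row of the lengthscale-time array is expressed in comparable, dimensionless units and peak convective activity stands out at every scale.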
Abstract:
The objective of this work was to construct a dynamic model of hepatic amino acid metabolism in the lactating dairy cow that could be parameterized using net flow data from in vivo experiments. The model considers 22 amino acids, ammonia, urea, and 13 energetic metabolites, and was parameterized using a steady-state balance model and two in vivo net flow experiments conducted with mid-lactation dairy cows. Extracellular flows were derived directly from the observed data. An optimization routine was used to derive nine intracellular flows. The resulting dynamic model was found to be stable across a range of inputs, suggesting that it can be perturbed and applied to other physiological states. Although nitrogen was generally in balance, leucine was in slight deficit compared to predicted needs for export protein synthesis, suggesting that an alternative source of leucine (e.g. peptides) was utilized. Simulations of varying glucagon concentrations indicated that an additional 5 mol/d of glucose could be synthesized at the reference substrate concentrations and blood flows. The increased glucose production was supported by increased removal from blood of lactate, glutamate, aspartate, alanine, asparagine, and glutamine. As glucose output increased, ketone body and acetate release increased while CO2 release declined. The pattern of amino acids appearing in hepatic vein blood was affected by changes in amino acid concentration in portal vein blood, portal blood flow rate and glucagon concentration, with methionine and phenylalanine being the most affected of the essential amino acids. Experimental evidence is insufficient to determine whether essential amino acids are affected by varying gluconeogenic demands. (C) 2004 Published by Elsevier Ltd.
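A hedged sketch of the flow-derivation step (the stoichiometric matrix, dimensions, and bounds here are placeholders, not the paper's metabolite network): with the extracellular flows fixed at their observed values, the unknown intracellular flows can be chosen to minimise the steady-state imbalance.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Steady state: S_ex @ v_ex + S_in @ v_in = 0, with v_ex measured.
# Placeholder dimensions: m metabolite pools, 9 unknown intracellular flows.
m, n_in = 12, 9
rng = np.random.default_rng(0)
S_ex = rng.integers(-1, 2, size=(m, 5)).astype(float)    # stoichiometry, observed flows
S_in = rng.integers(-1, 2, size=(m, n_in)).astype(float) # stoichiometry, unknown flows
v_ex = rng.uniform(0.5, 2.0, size=5)                     # measured net extracellular flows

# Solve S_in @ v_in ~ -S_ex @ v_ex, with non-negativity standing in for
# irreversibility constraints on the intracellular reactions.
res = lsq_linear(S_in, -S_ex @ v_ex, bounds=(0, np.inf))
print(res.x)     # estimated intracellular flows
print(res.cost)  # residual steady-state imbalance
```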
Abstract:
Development research has responded to a number of charges over the past few decades. For example, when traditional research was accused of being 'top-down', the response was participatory research, linking the 'receptors' to the generators of research. As participatory processes were recognised as producing limited outcomes, the demand-led agenda was born. In response to the alleged failure of research to deliver its products, the 'joined-up' model, which links research with the private sector, has become popular. However, using examples from animal-health research, this article demonstrates that all the aforementioned approaches are seriously limited in their attempts to generate outputs to address the multi-faceted problems facing the poor. The article outlines a new approach to research: the Mosaic Model. By combining different knowledge forms, and focusing on existing gaps, the model aims to bridge basic and applied findings to enhance the efficiency and value of research, past, present, and future.
A hierarchical Bayesian model for predicting the functional consequences of amino-acid polymorphisms
Abstract:
Genetic polymorphisms in deoxyribonucleic acid coding regions may have a phenotypic effect on the carrier, e.g. by influencing susceptibility to disease. Detection of deleterious mutations via association studies is hampered by the large number of candidate sites; therefore methods are needed to narrow down the search to the most promising sites. For this, a possible approach is to use structural and sequence-based information of the encoded protein to predict whether a mutation at a particular site is likely to disrupt the functionality of the protein itself. We propose a hierarchical Bayesian multivariate adaptive regression spline (BMARS) model for supervised learning in this context and assess its predictive performance by using data from mutagenesis experiments on lac repressor and lysozyme proteins. In these experiments, about 12 amino-acid substitutions were performed at each native amino-acid position and the effect on protein functionality was assessed. The training data thus consist of repeated observations at each position, which the hierarchical framework is needed to account for. The model is trained on the lac repressor data and tested on the lysozyme mutations and vice versa. In particular, we show that the hierarchical BMARS model, by allowing for the clustered nature of the data, yields lower out-of-sample misclassification rates compared with both a BMARS and a frequentist MARS model, a support vector machine classifier and an optimally pruned classification tree.
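For context, the regression function underlying (B)MARS has the standard form (generic notation; the paper's priors and hierarchical terms are not reproduced here):

\[ f(x) = \beta_0 + \sum_{m=1}^{M} \beta_m B_m(x), \qquad B_m(x) = \prod_{k=1}^{K_m} \big[ s_{km}\,( x_{v(k,m)} - t_{km} ) \big]_+, \]

where each basis function B_m is a product of hinge functions [\,\cdot\,]_+ with knots t_{km} and signs s_{km} = \pm 1. In the Bayesian version the number of basis functions M, the knots and the coefficients are treated as random and typically explored by reversible-jump MCMC; the hierarchical extension adds position-level effects so that the roughly 12 repeated substitutions at each native position are not treated as independent observations.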
Abstract:
A model in the existing literature for comparing the inventory costs of purchasing under the economic order quantity (EOQ) system and the just-in-time (JIT) purchasing system concluded that JIT purchasing was virtually always the preferable inventory ordering system, especially at high levels of annual demand. By expanding the classical EOQ model, this paper shows that it is possible for the EOQ system to be more cost-effective than the JIT system once the inventory demand approaches the EOQ-JIT cost indifference point. A case study conducted in the ready-mixed concrete industry in Singapore supported this proposition.
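For reference, the classical quantities behind the comparison (standard textbook forms; the paper's expanded model adds further cost terms not shown here): with annual demand D, ordering cost S per order, holding cost H per unit per year, and unit prices P_E (EOQ) and P_J (JIT),

\[ Q^* = \sqrt{\frac{2DS}{H}}, \qquad TC_{EOQ}(D) = \sqrt{2DSH} + P_E D, \qquad TC_{JIT}(D) = P_J D, \]

and equating the two total costs gives the cost indifference point

\[ D^* = \frac{2SH}{(P_J - P_E)^2}, \]

beyond which annual demand is large enough for the EOQ system to be the cheaper option, as the paper argues.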
Abstract:
Our objective in this study was to develop and implement an effective intervention strategy to manipulate the amount and composition of dietary fat and carbohydrate (CHO) in free-living individuals in the RISCK study. The study was a randomized, controlled dietary intervention study conducted in 720 participants identified as being at higher risk for, or with, metabolic syndrome. All followed a 4-wk run-in reference diet [high saturated fatty acids (SF)/high glycemic index (GI)]. Volunteers were randomized to continue this diet for a further 24 wk or to 1 of 4 isoenergetic prescriptions [high monounsaturated fatty acids (MUFA)/high GI; high MUFA/low GI; low fat (LF)/high GI; and LF/low GI]. We developed a food exchange model to implement each diet. Dietary records and plasma phospholipid fatty acids were used to assess the effectiveness of the intervention strategy. Reported fat intake from the LF diets was significantly reduced to 28% of energy (%E), compared with 38%E from the HM and reference diets. SF intake was successfully decreased to approximately 10%E in the HM and LF diets, compared with 17%E in the reference diet (P = 0.001). Dietary MUFA in the HM diets was approximately 17%E, significantly higher than in the reference (12%E) and LF diets (10%E) (P = 0.001). Changes in plasma phospholipid fatty acids provided further evidence for the successful manipulation of fat intake. The GI of the HGI and LGI arms differed by approximately 9 points (P = 0.001). The food exchange model provided an effective dietary strategy for the design and implementation across multiple sites of 5 experimental diets with specific targets for the proportion of fat and CHO. J. Nutr. 139: 1534-1540, 2009.
Abstract:
In addition to projected increases in global mean sea level over the 21st century, model simulations suggest there will also be changes in the regional distribution of sea level relative to the global mean. There is a considerable spread in the projected patterns of these changes by current models, as shown by the recent Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment (AR4). This spread has not reduced from that given by the Third Assessment models. Comparison with projections by ensembles of models based on a single structure supports an earlier suggestion that models of similar formulation give more similar patterns of sea level change. Analysing an AR4 ensemble of model projections under a business-as-usual scenario shows that steric changes (associated with subsurface ocean density changes) largely dominate the sea level pattern changes. The relative importance of subsurface temperature or salinity changes in contributing to this differs from region to region and, to an extent, from model to model. In general, thermosteric changes give the spatial variations in the Southern Ocean, halosteric changes dominate in the Arctic, and strong compensation between thermosteric and halosteric changes characterises the Atlantic. The magnitude of sea level and component changes in the Atlantic appears to be linked to the amount of Atlantic meridional overturning circulation (MOC) weakening. When the MOC weakening is substantial, the Atlantic thermosteric patterns of change arise from a dominant role of ocean advective heat flux changes.
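The steric decomposition referred to is the standard one (textbook form): the local steric sea level change over the water column is

\[ \Delta h = -\frac{1}{\rho_0} \int_{-H}^{0} \Delta\rho \, dz, \qquad \frac{\Delta\rho}{\rho_0} \approx -\alpha\,\Delta T + \beta\,\Delta S, \]

with \alpha the thermal expansion coefficient and \beta the haline contraction coefficient, so the thermosteric component integrates \alpha\,\Delta T and the halosteric component integrates -\beta\,\Delta S. Compensation of the kind described for the Atlantic occurs where warm, salty (or cool, fresh) anomalies coincide, leaving a small net density change despite large individual components.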