60 results for INDIVIDUAL-BASED MODEL
Abstract:
Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
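The discrete model described above can be sketched in a few lines: cell boundaries joined by linear springs and moving with overdamped dynamics relax towards the springs' natural length. This is an illustrative sketch only; the parameter names (`k`, `eta`, `a`) and values are assumptions, not taken from the paper.

```python
import numpy as np

def relax_cells(x, k=1.0, eta=1.0, a=1.0, dt=0.01, steps=20000):
    """Overdamped dynamics of cell boundaries joined by linear springs
    of rest length a: eta * dx_i/dt = net spring force on boundary i."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        f = np.zeros_like(x)
        ext = np.diff(x) - a          # extension of each spring
        f[:-1] += k * ext             # right-hand spring pulls boundary right
        f[1:] -= k * ext              # left-hand spring pulls boundary left
        x += dt * f / eta
    return x

# ten cells initially compressed to half their natural length
x0 = np.arange(11) * 0.5
xf = relax_cells(x0)
spacings = np.diff(xf)
```

In the stiff-spring limit discussed in the abstract, the relaxation becomes effectively instantaneous relative to other processes, which is what allows cells to be treated as incompressible.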
Abstract:
Government targets for CO2 reductions are being progressively tightened; the Climate Change Act sets the UK target at an 80% reduction by 2050 relative to 1990 levels. The residential sector accounts for about 30% of emissions. This paper discusses current modelling techniques in the residential sector, principally top-down and bottom-up. Top-down models work on a macro-economic basis and can be used to consider large-scale economic changes; bottom-up models are detail-rich and can model technological changes. Bottom-up models demonstrate what is technically possible. However, there are differences between the technical potential and what is likely given the limited economic rationality of the typical householder. This paper recommends research to better understand individuals' behaviour. Such research needs to include actual choices, stated preferences and opinion research to allow a detailed understanding of the individual end user. This increased understanding can then be used in an agent-based model (ABM). In an ABM, agents are used to model real-world actors and can be given a rule set intended to emulate the actions and behaviours of real people. This can help in understanding how new technologies diffuse. In this way a degree of micro-economic realism can be added to domestic carbon modelling. Such a model should then be of use both for forward projections of CO2 and for analysing the cost-effectiveness of various policy measures.
Abstract:
The UK has a target for an 80% reduction in CO2 emissions by 2050 from a 1990 base. Domestic energy use accounts for around 30% of total emissions. This paper presents a comprehensive review of existing models and modelling techniques and indicates how they might be improved by considering individual buying behaviour. Macro (top-down) and micro (bottom-up) models have been reviewed and analysed. It is found that bottom-up models can project technology diffusion due to their higher resolution. The weakness of existing bottom-up models at capturing individual green technology buying behaviour has been identified. Consequently, Markov chains, neural networks and agent-based modelling are proposed as possible methods to incorporate buying behaviour within a domestic energy forecast model. Among the three methods, agent-based models are found to be the most promising, although a successful agent approach requires large amounts of input data. A prototype agent-based model has been developed and tested, which demonstrates the feasibility of an agent approach. This model shows that an agent-based approach is promising as a means to predict the effectiveness of various policy measures.
Abstract:
Photoelectron spectroscopy and scanning tunneling microscopy have been used to investigate how the oxidation state of Ce in CeO2-x(111) ultrathin films is influenced by the presence of Pd nanoparticles. Pd induces an increase in the concentration of Ce3+ cations, which is interpreted as charge transfer from Pd to CeO2-x(111) on the basis of DFT+U calculations. Charge transfer from Pd to Ce4+ is found to be energetically favorable even for individual Pd adatoms. These results have implications for our understanding of the redox behavior of ceria-based model catalyst systems.
Abstract:
This paper introduces a new agent-based model, which incorporates the actions of individual homeowners in a long-term domestic stock model, and details how it was applied in energy policy analysis. The results indicate that current policies are likely to fall significantly short of the 80% target and suggest that current subsidy levels need re-examining. In the model, current subsidy levels appear to offer too much support to some technologies, which in turn leads to the suppression of other technologies that have a greater energy saving potential. The model can be used by policy makers to develop further scenarios to find alternative, more effective, sets of policy measures. The model is currently limited to the owner-occupied stock in England, although it can be expanded, subject to the availability of data.
Abstract:
More and more households are purchasing electric vehicles (EVs), and this will continue as we move towards a low carbon future. There are various projections as to the rate of EV uptake, but all predict an increase over the next ten years. Charging these EVs will produce one of the biggest loads on the low voltage network. To manage the network, we must not only take into account the number of EVs taken up, but where on the network they are charging, and at what time. To simulate the impact on the network from high, medium and low EV uptake (as outlined by the UK government), we present an agent-based model. We initialise the model to assign an EV to a household based on either random distribution or social influences - that is, a neighbour of an EV owner is more likely to also purchase an EV. Additionally, we examine the effect of peak behaviour on the network when charging is at day-time, night-time, or a mix of both. The model is implemented on a neighbourhood in south-east England using smart meter data (half hourly electricity readings) and real life charging patterns from an EV trial. Our results indicate that social influence can increase the peak demand on a local level (street or feeder), meaning that medium EV uptake can create higher peak demand than currently expected.
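The social-influence mechanism described above (a neighbour of an EV owner is more likely to also purchase an EV) can be sketched as a toy agent-based model. The house layout, probabilities, and parameter names below are illustrative assumptions, not values from the paper or the EV trial.

```python
import random

def simulate_uptake(n_houses=100, steps=10, p_base=0.02, p_social=0.05, seed=1):
    """Toy ABM: houses on a street adopt an EV with a base probability,
    raised by p_social for each immediate neighbour that already owns one.
    All parameters are illustrative, not calibrated to the paper's data."""
    rng = random.Random(seed)
    owns = [False] * n_houses
    for _ in range(steps):
        current = owns[:]                      # update all houses synchronously
        for i in range(n_houses):
            if current[i]:
                continue
            neighbours = sum(current[j] for j in (i - 1, i + 1)
                             if 0 <= j < n_houses)
            if rng.random() < p_base + p_social * neighbours:
                owns[i] = True
    return owns

owners = simulate_uptake()
n_owners = sum(owners)
```

Because adoption probability rises next to existing owners, uptake clusters along the street, which is exactly why social influence concentrates charging load on individual feeders rather than spreading it evenly.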
Abstract:
White clover (Trifolium repens) is an important pasture legume but is often difficult to sustain in a mixed sward because of, among other things, the damage to roots caused by the soil-dwelling larval stages of S. lepidus. Locating the root nodules on the white clover roots is crucial for the survival of the newly hatched larvae. This paper presents a numerical model to simulate the movement of newly hatched S. lepidus larvae towards the root nodules, guided by a chemical signal released by the nodules. The model is based on the diffusion-chemotaxis equation. Experimental observations showed that the average speed of the larvae remained approximately constant, so the diffusion-chemotaxis model was modified so that the larvae respond only to the gradient direction of the chemical signal and not its magnitude. An individual-based lattice Boltzmann method was used to simulate the movement of individual larvae, and the parameters required for the model were estimated from measurements of larval movement towards nodules in soil scanned using X-ray microtomography. The model was used to investigate the effects of nodule density, the rate of release of the chemical signal, the sensitivity of the larvae to the signal, and the random foraging of the larvae on the movement and subsequent survival of the larvae. The simulations showed that the most significant factors for larval survival were nodule density and the sensitivity of the larvae to the signal. The dependence of larval survival rate on nodule density was well fitted by Michaelis-Menten kinetics.
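The key modelling choice above (larvae respond only to the gradient direction, not its magnitude, and move at constant speed) can be sketched with a simple random walker. The 1/r signal profile, the noise term, and all parameter values are illustrative assumptions, not the paper's lattice Boltzmann formulation.

```python
import math
import random

def gradient_direction(pos, nodules):
    """Unit vector of increasing signal, assuming each nodule is a point
    source whose steady-state concentration falls off as 1/r (illustrative)."""
    gx = gy = 0.0
    for nx, ny in nodules:
        dx, dy = nx - pos[0], ny - pos[1]
        r = math.hypot(dx, dy) + 1e-9
        gx += dx / r**3                  # gradient of 1/r points at the source
        gy += dy / r**3
    norm = math.hypot(gx, gy)
    return gx / norm, gy / norm

def walk_to_nodule(start, nodules, speed=0.1, noise=0.3, steps=500, seed=0):
    """Larva moves at constant speed along the gradient *direction* only,
    with random re-orientation; returns True if it reaches a nodule
    (comes within one step length)."""
    rng = random.Random(seed)
    x, y = start
    for _ in range(steps):
        ux, uy = gradient_direction((x, y), nodules)
        ang = math.atan2(uy, ux) + rng.gauss(0.0, noise)
        x += speed * math.cos(ang)
        y += speed * math.sin(ang)
        if any(math.hypot(nx - x, ny - y) < speed for nx, ny in nodules):
            return True
    return False

reached = walk_to_nodule((0.0, 0.0), [(3.0, 2.0)])
```

Running many such walkers against varying nodule densities would reproduce the kind of survival-versus-density curve that the paper fits with Michaelis-Menten kinetics.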
Abstract:
The aim of this three year project funded by the Countryside Council for Wales (CCW) is to develop techniques firstly, to refine and update existing targets for habitat restoration and re-creation at the landscape scale and secondly, to develop a GIS-based model for the implementation of those targets at the local scale. Landscape Character Assessment (LCA) is being used to map Landscape Types across the whole of Wales as the first stage towards setting strategic habitat targets. The GIS habitat model uses data from the digital Phase I Habitat Survey for Wales to determine the suitability of individual sites for restoration to specific habitat types, including broadleaf woodland. The long-term aim is to develop a system that strengthens the character of Welsh landscapes and provides real biodiversity benefits based upon realistic targets given limited resources for habitat restoration and re-creation.
Abstract:
MOTIVATION: The accurate prediction of the quality of 3D models is a key component of successful protein tertiary structure prediction methods. Currently, clustering- or consensus-based Model Quality Assessment Programs (MQAPs) are the most accurate methods for predicting 3D model quality; however, they are often CPU-intensive as they carry out multiple structural alignments in order to compare numerous models. In this study, we describe ModFOLDclustQ - a novel MQAP that compares 3D models of proteins without the need for CPU-intensive structural alignments by utilising the Q measure for model comparisons. The ModFOLDclustQ method is benchmarked against the top established methods in terms of both accuracy and speed. In addition, the ModFOLDclustQ scores are combined with those from our older ModFOLDclust method to form a new method, ModFOLDclust2, that aims to provide increased prediction accuracy with negligible computational overhead. RESULTS: The ModFOLDclustQ method is competitive with the leading clustering-based MQAPs for the prediction of global model quality, yet it is up to 150 times faster than the previous version of the ModFOLDclust method at comparing models of small proteins (<60 residues) and over 5 times faster at comparing models of large proteins (>800 residues). Furthermore, a significant improvement in accuracy can be gained over previous clustering-based MQAPs by combining the scores from ModFOLDclustQ and ModFOLDclust to form the new ModFOLDclust2 method, with little impact on the overall time taken for each prediction. AVAILABILITY: The ModFOLDclustQ and ModFOLDclust2 methods are available to download from: http://www.reading.ac.uk/bioinf/downloads/ CONTACT: l.j.mcguffin@reading.ac.uk.
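The reason a Q-type measure avoids structural alignment is worth making concrete: two models of the same sequence already share a residue correspondence, so their pairwise C-alpha distance matrices can be compared directly. The Gaussian form and `sigma` below are one common way to write such a score; they are assumptions for illustration, not necessarily the exact formula used by ModFOLDclustQ.

```python
import math

def q_score(coords_a, coords_b, sigma=2.0):
    """Alignment-free similarity of two models of the SAME sequence:
    compare all pairwise C-alpha distances directly (residues already
    correspond, so no superposition is needed). The Gaussian weighting
    is one common Q-type form, assumed here for illustration."""
    n = len(coords_a)
    assert n == len(coords_b)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 2, n):     # skip trivially constrained neighbours
            d1 = math.dist(coords_a[i], coords_a[j])
            d2 = math.dist(coords_b[i], coords_b[j])
            total += math.exp(-((d1 - d2) / sigma) ** 2)
            pairs += 1
    return total / pairs

# identical models score 1.0; any geometric difference lowers the score
model = [(i * 3.8, 0.0, 0.0) for i in range(10)]
identical = q_score(model, model)
```

Since the score needs only one pass over the distance matrices, its cost is quadratic in sequence length per model pair, with no expensive superposition search, which is where the reported speed-up over alignment-based clustering comes from.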
Abstract:
It is generally acknowledged that population-level assessments provide a better measure of response to toxicants than assessments of individual-level effects. Population-level assessments generally require the use of models to integrate potentially complex data about the effects of toxicants on life-history traits, and to provide a relevant measure of ecological impact. Building on excellent earlier reviews, we here briefly outline the modelling options in population-level risk assessment. Modelling is used to calculate population endpoints from available data, which is often about individual life histories, the ways that individuals interact with each other, the environment and other species, and the ways individuals are affected by pesticides. As population endpoints, we recommend the use of population abundance, population growth rate, and the chance of population persistence. We recommend two types of model: simple life-history models distinguishing two life-history stages, juveniles and adults; and spatially explicit individual-based landscape models. Life-history models are very quick to set up and run, and they provide a great deal of insight. At the other extreme, individual-based landscape models provide the greatest verisimilitude, albeit at the cost of greatly increased complexity. We conclude with a discussion of the implications of the severe problems of parameterising models.
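A two-stage (juvenile/adult) life-history model of the kind recommended above can be written as a 2x2 projection matrix whose dominant eigenvalue is the population growth rate endpoint. The structure of the matrix and all parameter values are illustrative assumptions, not taken from the review.

```python
import numpy as np

def growth_rate(f, s_j, s_a):
    """Asymptotic population growth rate of a two-stage model:
       J(t+1) = f * A(t)                  (fecundity of adults)
       A(t+1) = s_j * J(t) + s_a * A(t)   (maturation + adult survival)
    The growth rate is the dominant eigenvalue of the projection matrix.
    Parameter values used below are illustrative only."""
    L = np.array([[0.0, f],
                  [s_j, s_a]])
    return max(abs(np.linalg.eigvals(L)))

# a hypothetical toxicant that halves juvenile survival lowers growth rate
lam_control = growth_rate(f=2.0, s_j=0.5, s_a=0.8)
lam_exposed = growth_rate(f=2.0, s_j=0.25, s_a=0.8)
```

This illustrates why such models are "very quick to set up and run": a toxicant's effect on any life-history trait maps to one matrix entry, and the population-level endpoint follows from a single eigenvalue computation.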
Abstract:
The effect of different sugars and glyoxal on the formation of acrylamide in low-moisture starch-based model systems was studied, and kinetic data were obtained. Glucose was more effective than fructose, tagatose, or maltose in acrylamide formation, whereas the importance of glyoxal as a key sugar fragmentation intermediate was confirmed. Glyoxal formation was greater in model systems containing asparagine and glucose rather than fructose. A solid phase microextraction GC-MS method was employed to determine quantitatively the formation of pyrazines in model reaction systems. Substituted pyrazine formation was more evident in model systems containing fructose; however, the unsubstituted homologue, which was the only pyrazine identified in the headspace of glyoxal-asparagine systems, was formed at higher yields when aldoses were used as the reducing sugar. Highly significant correlations were obtained for the relationship between pyrazine and acrylamide formation. The importance of the tautomerization of the asparagine-carbonyl decarboxylated Schiff base in the relative yields of pyrazines and acrylamide is discussed.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameters models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameters models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best generalisation performance from observational data only. The important concepts used in various non-linear system-identification algorithms for achieving good model generalisation are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
Abstract:
Remote sensing is the only practicable means to observe snow at large scales. Measurements from passive microwave instruments have been used to derive snow climatology since the late 1970s, but the algorithms used were limited by the computational power of the era. Simplifications such as the assumption of constant snow properties enabled snow mass to be retrieved from the microwave measurements, but large errors arise from those assumptions, which are still used today. A better approach is to perform retrievals within a data assimilation framework, where a physically based model of the snow properties can be used to produce the best estimate of the snow cover, in conjunction with multi-sensor observations such as grain size, surface temperature, and microwave radiation. We have extended an existing snow model, SNOBAL, to incorporate mass and energy transfer of the soil, and to simulate the growth of the snow grains. An evaluation of this model is presented and techniques for the development of new retrieval systems are discussed.
Abstract:
Summary
1. Agent-based models (ABMs) are widely used to predict how populations respond to changing environments. As the availability of food varies in space and time, individuals should have their own energy budgets, but there is no consensus as to how these should be modelled. Here, we use knowledge of physiological ecology to identify major issues confronting the modeller and to make recommendations about how energy budgets for use in ABMs should be constructed.
2. Our proposal is that modelled animals forage as necessary to supply their energy needs for maintenance, growth and reproduction. If there is sufficient energy intake, an animal allocates the energy obtained in the order: maintenance, growth, reproduction, energy storage, until its energy stores reach an optimal level. If there is a shortfall, the priorities for maintenance and growth/reproduction remain the same until reserves fall to a critical threshold, below which all energy is allocated to maintenance. Rates of ingestion and allocation depend on body mass and temperature. We make suggestions for how each of these processes should be modelled mathematically.
3. Mortality rates vary with body mass and temperature according to known relationships, and these can be used to obtain estimates of background mortality rate.
4. If parameter values cannot be obtained directly, then values may provisionally be obtained by parameter borrowing, pattern-oriented modelling, artificial evolution or from allometric equations.
5. The development of ABMs incorporating individual energy budgets is essential for realistic modelling of populations affected by food availability. Such ABMs are already being used to guide conservation planning of nature reserves and shellfisheries, to assess environmental impacts of building proposals including wind farms and highways, and to assess the effects on non-target organisms of chemicals used for the control of agricultural pests.
Keywords: bioenergetics; energy budget; individual-based models; population dynamics.
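The allocation priority scheme proposed above (maintenance, then growth, then reproduction, then storage, with everything diverted to maintenance below a critical reserve threshold) can be sketched as a simple budget function. Function and parameter names, and all numeric values, are illustrative assumptions, not the paper's mathematical suggestions.

```python
def allocate(intake, reserves, maintenance, growth, reproduction,
             store_target, critical=1.0):
    """Sketch of the priority scheme: energy goes to maintenance, then
    growth, then reproduction, then storage up to an optimal reserve
    level; once reserves fall below a critical threshold, everything is
    diverted to maintenance. All quantities are illustrative energy units."""
    budget = intake + reserves          # reserves can cover a shortfall
    spent = {}
    if reserves < critical:
        demands = [("maintenance", maintenance)]
    else:
        demands = [("maintenance", maintenance), ("growth", growth),
                   ("reproduction", reproduction)]
    for name, need in demands:
        spent[name] = min(need, budget)  # pay each demand in priority order
        budget -= spent[name]
    reserves = min(budget, store_target)  # rebuild stores toward the optimum
    return spent, reserves

spent, new_reserves = allocate(intake=10.0, reserves=5.0, maintenance=4.0,
                               growth=3.0, reproduction=2.0, store_target=6.0)
```

In a full ABM, `intake` would come from the foraging submodel, and the demand terms would scale with body mass and temperature as the summary recommends.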