892 results for agent based modelling


Relevance: 80.00%

Abstract:

Many modelling studies examine the impacts of climate change on crop yield, but few explore either the underlying bio-physical processes, or the uncertainty inherent in the parameterisation of crop growth and development. We used a perturbed-parameter crop modelling method together with a regional climate model (PRECIS) driven by the 2071-2100 SRES A2 emissions scenario in order to examine processes and uncertainties in yield simulation. Crop simulations used the groundnut (i.e. peanut; Arachis hypogaea L.) version of the General Large-Area Model for annual crops (GLAM). Two sets of GLAM simulations were carried out: control simulations and fixed-duration simulations, where the impact of mean temperature on crop development rate was removed. Model results were compared to sensitivity tests using two other crop models of differing levels of complexity: CROPGRO, and the groundnut model of Hammer et al. [Hammer, G.L., Sinclair, T.R., Boote, K.J., Wright, G.C., Meinke, H., and Bell, M.J., 1995, A peanut simulation model: I. Model development and testing. Agron. J. 87, 1085-1093]. GLAM simulations were particularly sensitive to two processes. First, elevated vapour pressure deficit (VPD) consistently reduced yield. The same result was seen in some simulations using both other crop models. Second, GLAM crop duration was longer, and yield greater, when the optimal temperature for the rate of development was exceeded. Yield increases were also seen in one other crop model. Overall, the models differed in their response to super-optimal temperatures, and that difference increased with mean temperature; percentage changes in yield between current and future climates were as diverse as -50% and over +30% for the same input data. The first process has been observed in many crop experiments, whilst the second has not. 
Thus, we conclude that there is a need for (i) more process-based modelling studies of the impact of VPD on assimilation, and (ii) more experimental studies at super-optimal temperatures. Using the GLAM results, central values and uncertainty ranges were projected for mean 2071-2100 crop yields in India. In the fixed-duration simulations, ensemble mean yields mostly rose by 10-30%. The full ensemble range was greater than this mean change (20-60% over most of India). In the control simulations, yield stimulation by elevated CO2 was more than offset by other processes, principally accelerated crop development rates at elevated, but sub-optimal, mean temperatures. Hence, the quantification of uncertainty can facilitate relatively robust indications of the likely sign of crop yield changes in future climates.
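The perturbed-parameter idea can be sketched in a few lines. The toy yield model below is entirely hypothetical (its functional form, parameter names, and uncertainty ranges are invented for illustration and are not GLAM's equations); it only shows how an ensemble propagates parameter uncertainty into a range of projected yield changes:

```python
import random

def toy_yield(temp_c, dev_rate_coeff, vpd_sensitivity):
    """Hypothetical yield model: warmer climates shorten crop duration
    (via dev_rate_coeff) and raise a VPD-related penalty. Not GLAM."""
    duration = max(60.0, 140.0 - dev_rate_coeff * temp_c)     # days to maturity
    vpd_penalty = vpd_sensitivity * max(0.0, temp_c - 25.0)
    return 1000.0 * (duration / 140.0) * (1.0 - vpd_penalty)  # kg/ha

def perturbed_ensemble(temp_c, n=100, seed=42):
    """Draw n parameter sets from assumed uncertainty ranges and run the model."""
    rng = random.Random(seed)
    return [toy_yield(temp_c,
                      rng.uniform(1.0, 3.0),    # assumed development-rate range
                      rng.uniform(0.0, 0.05))   # assumed VPD-sensitivity range
            for _ in range(n)]

def mean(xs):
    return sum(xs) / len(xs)

current, future = perturbed_ensemble(25.0), perturbed_ensemble(29.0)
print(f"ensemble mean yield change: "
      f"{100.0 * (mean(future) - mean(current)) / mean(current):+.1f}%")
```

Reporting the full spread of the ensemble, rather than only its mean, is what allows the sign of the projected change to be assessed robustly.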

Relevance: 80.00%

Abstract:

Many ecosystem services are delivered by organisms that depend on habitats that are segregated spatially or temporally from the location where services are provided. Management of mobile organisms contributing to ecosystem services requires consideration not only of the local scale where services are delivered, but also the distribution of resources at the landscape scale, and the foraging ranges and dispersal movements of the mobile agents. We develop a conceptual model for exploring how one such mobile-agent-based ecosystem service (MABES), pollination, is affected by land-use change, and then generalize the model to other MABES. The model includes interactions and feedbacks among policies affecting land use, market forces and the biology of the organisms involved. Animal-mediated pollination contributes to the production of goods of value to humans such as crops; it also bolsters reproduction of wild plants on which other services or service-providing organisms depend. About one-third of crop production depends on animal pollinators, while 60-90% of plant species require an animal pollinator. The sensitivity of mobile organisms to ecological factors that operate across spatial scales makes the services provided by a given community of mobile agents highly contextual. Services vary, depending on the spatial and temporal distribution of resources surrounding the site, and on biotic interactions occurring locally, such as competition among pollinators for resources, and among plants for pollinators. The value of the resulting goods or services may feed back via market-based forces to influence land-use policies, which in turn influence land management practices that alter local habitat conditions and landscape structure. Developing conceptual models for MABES aids in identifying knowledge gaps, determining research priorities, and targeting interventions that can be applied in an adaptive management context.

Relevance: 80.00%

Abstract:

Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm robotic approaches, leading to swarm-array computing, a novel technique for achieving self-managing distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.


Relevance: 80.00%

Abstract:

The deployment of Quality of Service (QoS) techniques involves careful analysis of areas including business requirements, corporate strategy, and the technical implementation process, which can lead to conflict or contradiction between the goals of the various user groups involved in policy definition. In addition, long-term change management presents a challenge, as these implementations typically require a high skill set and experience level, which exposes organisations to effects such as “hyperthymestria” [1] and “The Seven Sins of Memory”, defined by Schacter and discussed further within this paper. It is proposed that, given the information embedded within the packets of IP traffic, an opportunity exists to augment traffic management with a machine-learning, agent-based mechanism. This paper describes the process by which current policies are defined and the research required to support the development of an application that enables adaptive, intelligent Quality of Service controls to augment or replace the policy-based mechanisms currently in use.

Relevance: 80.00%

Abstract:

Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm robotic approaches, leading to swarm-array computing, a novel technique for achieving autonomy in distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.

Relevance: 80.00%

Abstract:

Brand competition is modelled using an agent-based approach in order to examine the long run dynamics of market structure and brand characteristics. A repeated game is designed in which myopic firms choose strategies based on beliefs about their rivals and consumers. Consumers are heterogeneous and can observe neighbour behaviour through social networks. Although firms do not observe them, the social networks have a significant impact on the emerging market structure. The presence of networks tends to polarize market share and leads to higher volatility in brands. Yet convergence in brand characteristics usually happens whenever the market reaches a steady state. Scale-free networks accentuate the polarization and volatility more than small world or random networks. Unilateral innovations are less frequent under social networks.
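A minimal sketch of the consumer side of such a model (the ring network, the majority-imitation update rule, and all parameters below are invented for illustration; they are not the paper's repeated game between firms):

```python
import random

def market_share(n_consumers=200, steps=5000, social=True, seed=1):
    """Toy brand-choice dynamics: consumers hold brand 0 or 1; with social
    networks on, a randomly picked consumer imitates the majority of its
    four ring neighbours (ties broken at random); otherwise it chooses
    at random. Returns brand 1's final market share."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_consumers)]
    for _ in range(steps):
        i = rng.randrange(n_consumers)
        if social:
            ones = sum(state[(i + d) % n_consumers] for d in (-2, -1, 1, 2))
            state[i] = 1 if ones > 2 else 0 if ones < 2 else rng.randint(0, 1)
        else:
            state[i] = rng.randint(0, 1)
    return sum(state) / n_consumers
```

Comparing runs with `social=True` against `social=False` gives a crude feel for how local imitation can shift market shares away from an even split.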

Relevance: 80.00%

Abstract:

The work reported in this paper is motivated by the problem of handling single node failures for parallel summation algorithms in computer clusters. An agent-based approach is proposed in which a task to be executed is decomposed into sub-tasks and mapped onto agents that traverse computing nodes. The agents intercommunicate across computing nodes to share information in the event of a predicted node failure. Two single node failure scenarios are considered. The Message Passing Interface is employed for implementing the proposed approach. Quantitative results obtained from experiments reveal that the agent-based approach can handle failures more efficiently than traditional failure handling approaches.
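The migration idea can be sketched without MPI. Everything below (the dictionary-based agents, the single predicted failure, the node count) is a simplified stand-in for the paper's implementation:

```python
def fault_tolerant_sum(data, n_nodes=4, failing_node=2):
    """Each agent carries a sub-task (a slice of the data) on a logical
    node; the agent on a node predicted to fail migrates to a healthy
    neighbour before computing, so no partial result is lost."""
    chunk = (len(data) + n_nodes - 1) // n_nodes
    agents = [{"node": i, "task": data[i * chunk:(i + 1) * chunk]}
              for i in range(n_nodes)]
    for agent in agents:                      # react to the predicted failure
        if agent["node"] == failing_node:
            agent["node"] = (failing_node + 1) % n_nodes
    # every agent now sits on a healthy node; reduce the partial sums
    return sum(sum(agent["task"]) for agent in agents)

print(fault_tolerant_sum(list(range(100))))  # 4950, despite the failed node
```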

Relevance: 80.00%

Abstract:

A connection between a fuzzy neural network model and the mixture of experts network (MEN) modelling approach is established. Based on this linkage, two new neuro-fuzzy MEN construction algorithms are proposed to overcome the curse of dimensionality that is inherent in the majority of associative memory networks and other rule-based systems. The first construction algorithm employs a function selection manager module in an MEN system. The second construction algorithm is based on a new parallel learning algorithm in which each model rule is trained independently, and for which the parameter convergence property of the new learning method is established. As with the first approach, an expert selection criterion is utilised in this algorithm. The two construction methods are equally effective in overcoming the curse of dimensionality by reducing the dimensionality of the regression vector, but the latter has the additional computational advantage of parallel processing. The proposed algorithms are analysed for effectiveness, followed by numerical examples that illustrate their efficacy for some difficult data-based modelling problems.
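The underlying mixture-of-experts combination can be sketched generically. The two toy experts and the linear gate below are invented for illustration; the paper's experts are neuro-fuzzy rule models, not these closed-form functions:

```python
import math

def softmax(zs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_predict(x, experts, gate_weights):
    """Mixture-of-experts output: a softmax gate (linear in x here)
    weights each expert's prediction."""
    gates = softmax([w0 + w1 * x for (w0, w1) in gate_weights])
    return sum(g * f(x) for g, f in zip(gates, experts))

# two toy experts, each assumed to specialise in part of the input range
experts = [lambda x: 2.0 * x,   # intended for small x
           lambda x: x * x]     # intended for large x
gate_weights = [(2.0, -1.0), (-2.0, 1.0)]  # gate favours expert 0 for small x
```

Because the gate soft-partitions the input space, each expert only has to model a lower-dimensional slice of the problem, which is how the MEN construction attacks the curse of dimensionality.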

Relevance: 80.00%

Abstract:

A common problem in many data-based modelling algorithms, such as associative memory networks, is the curse of dimensionality. In this paper, a new two-stage neurofuzzy system design and construction algorithm (NeuDeC) for nonlinear dynamical processes is introduced to tackle this problem effectively. A new, simple preprocessing method is initially derived and applied to reduce the rule base, followed by a fine model detection process based on the reduced rule set, using forward orthogonal least squares model structure detection. In both stages, new A-optimality experimental-design-based criteria are used. In the preprocessing stage, a lower bound of the A-optimality design criterion is derived and applied as a subset selection metric; in the later stage, the A-optimality design criterion is incorporated into a new composite cost function that minimises model prediction error as well as penalising the model parameter variance. The utilisation of NeuDeC leads to unbiased model parameters with low parameter variance and the additional benefit of a parsimonious model structure. Numerical examples are included to demonstrate the effectiveness of this new modelling approach for high dimensional inputs.
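The A-optimality criterion itself, trace((XᵀX)⁻¹), is easy to sketch for a two-column design. The example designs below are invented; NeuDeC applies the criterion to candidate rule subsets rather than to raw row sets like these:

```python
def a_optimality(rows):
    """A-optimality score for a two-column design matrix:
    trace((X^T X)^{-1}). Smaller is better (lower average
    parameter variance)."""
    # accumulate the entries of the 2x2 matrix X^T X
    a = b = c = 0.0
    for x1, x2 in rows:
        a += x1 * x1
        b += x1 * x2
        c += x2 * x2
    det = a * c - b * b
    if det <= 1e-12:
        return float("inf")  # singular design: unbounded variance
    return (a + c) / det     # trace of the 2x2 inverse

# a spread-out design beats a nearly collinear one under A-optimality
good = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 1.0)]
bad = [(1.0, 1.0), (1.0, 1.01), (0.99, 1.0), (1.0, 0.99)]
```

Selecting the subset that minimises this score is what keeps the resulting model's parameter variance low.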

Relevance: 80.00%

Abstract:

Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification by using a fuzzy membership function usually based on B-splines and algebraic operators for inference, etc. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions, and unity of support but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed followed by numerical examples to demonstrate the effectiveness of this new data based modelling approach.
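The barycentric mapping and the Bernstein basis can be sketched for a single Delaunay triangle. The triangle and query point below are arbitrary; the full network tiles the input space with many such triangles and recovers the co-ordinates with an inverse de Casteljau procedure rather than this direct formula:

```python
def barycentric(p, a, b, c):
    """Barycentric co-ordinates (l1, l2, l3) of point p with respect to
    the triangle with vertices a, b, c; they are non-negative inside
    the triangle and always sum to one."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def bernstein2(l1, l2, l3):
    """Degree-2 Bernstein basis over barycentric co-ordinates: the six
    terms of (l1 + l2 + l3)^2, which form a partition of unity."""
    return [l1 * l1, l2 * l2, l3 * l3,
            2 * l1 * l2, 2 * l1 * l3, 2 * l2 * l3]
```

The network output is then a weighted sum of such basis values, so non-negativity and the unit sum of the basis carry over from the B-spline case.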

Relevance: 80.00%

Abstract:

Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists having access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter f, which measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters (the allele frequencies) as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computer environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
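A stripped-down version of the idea shows how the boundary constraint on f enters. The sketch below holds the allele frequency p fixed rather than integrating over it, uses a flat prior, and replaces the paper's machinery with a simple grid approximation; the genotype counts are invented:

```python
import math

def loglik(f, p, n_aa, n_ab, n_bb):
    """Log-likelihood of genotype counts under the inbreeding model:
    P(AA) = p^2 + f*p*q,  P(Aa) = 2*p*q*(1-f),  P(aa) = q^2 + f*p*q."""
    q = 1.0 - p
    probs = (p * p + f * p * q, 2 * p * q * (1 - f), q * q + f * p * q)
    if min(probs) <= 0.0:
        return float("-inf")  # f outside the valid region for this p
    return (n_aa * math.log(probs[0]) + n_ab * math.log(probs[1])
            + n_bb * math.log(probs[2]))

def posterior_f(p, n_aa, n_ab, n_bb, grid=200):
    """Grid approximation to the posterior of f with a flat prior on its
    valid range [max(-p/q, -q/p), 1], which is the boundary constraint
    imposed by the nuisance parameter p."""
    q = 1.0 - p
    lo = max(-p / q, -q / p)
    fs = [lo + (1.0 - lo) * i / (grid - 1) for i in range(grid)]
    lls = [loglik(f, p, n_aa, n_ab, n_bb) for f in fs]
    m = max(lls)
    ws = [math.exp(ll - m) for ll in lls]
    z = sum(ws)
    return fs, [w / z for w in ws]
```

With an excess of homozygotes (say 40/20/40 counts at p = 0.5, versus the 25/50/25 expected under HW), the posterior mass concentrates at positive f, and the returned grid can be plotted directly for the visual presentation the abstract describes.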

Relevance: 80.00%

Abstract:

The National Grid Company plc owns and operates the electricity transmission network in England and Wales, the day-to-day running of the network being carried out by teams of engineers within the national control room. The task of monitoring and operating the transmission network involves the transfer of large amounts of data and a high degree of cooperation between these engineers. The purpose of the research detailed in this paper is to investigate the use of interfacing techniques within the control room scenario, in particular the development of an agent-based architecture for the support of cooperative tasks. The proposed architecture revolves around the use of interface and user supervisor agents. Primarily, these agents are responsible for the flow of information to and from individual users and user groups. The agents are also responsible for tackling the synchronisation and control issues arising during the completion of cooperative tasks. In this paper, a novel approach to human-computer interaction (HCI) for power systems incorporating an embedded agent infrastructure is presented. The agent architectures used to form the base of the cooperative task support system are discussed, as is the nature of the support system and the tasks it is intended to support.

Relevance: 80.00%

Abstract:

The IntFOLD-TS method was developed according to the guiding principle that model quality assessment would be the most critical stage of our template-based modelling pipeline. The IntFOLD-TS method therefore first generates numerous alternative models, using in-house versions of several different sequence-structure alignment methods, which are then ranked in terms of global quality using our top-performing quality assessment method, ModFOLDclust2. In addition to the predicted global quality scores, predictions of local errors are also provided in the resulting coordinate files, using scores that represent the predicted deviation of each residue in the model from the equivalent residue in the native structure. The IntFOLD-TS method was found to generate high quality 3D models for many of the CASP9 targets, whilst also providing highly accurate predictions of their per-residue errors. This important information may help to make the 3D models produced by the IntFOLD-TS method more useful for guiding future experimental work.

Relevance: 80.00%

Abstract:

In this contribution we aim to anchor Agent-Based Modeling (ABM) simulations in actual models of human psychology. More specifically, we apply unidirectional ABM to social psychological models using low-level (i.e., intra-individual) agents to examine whether they generate better predictions, in comparison to standard statistical approaches, concerning intentions to perform a behavior and the behavior itself. Moreover, this contribution tests to what extent the predictive validity of models of attitude such as the Theory of Planned Behavior (TPB) or the Model of Goal-directed Behavior (MGB) depends on the assumption that people's decisions and actions are purely rational. Simulations were therefore run by considering different deviations from rationality of the agents, using a trembling hand method. Two data sets, concerning respectively the consumption of soft drinks and physical activity, were used. Three key findings emerged from the simulations. First, compared to standard statistical approaches, the agent-based simulation generally improves the prediction of behavior from intention. Second, the improvement in prediction is inversely proportional to the complexity of the underlying theoretical model. Finally, the introduction of varying degrees of deviation from rationality in agents' behavior can lead to an improvement in the goodness of fit of the simulations. By demonstrating the potential of ABM as a complementary perspective for evaluating social psychological models, this contribution underlines the necessity of better defining agents in terms of psychological processes before examining higher levels such as the interactions between individuals.
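The trembling hand mechanism can be sketched as follows. The deviation rule, the tremble rate, and the homogeneous intentions below are invented for illustration; the paper calibrates its agents on the TPB/MGB survey data rather than on fixed values like these:

```python
import random

def simulate_behaviour(intentions, tremble=0.1, seed=0):
    """Trembling-hand sketch: each agent holds an intention (a probability
    of performing the behaviour); with probability `tremble` it deviates
    from rationality and acts at random instead of acting on the intention."""
    rng = random.Random(seed)
    acts = []
    for intent in intentions:
        if rng.random() < tremble:
            acts.append(rng.random() < 0.5)     # deviation from rationality
        else:
            acts.append(rng.random() < intent)  # rational: act on intention
    return acts

agents = [0.9] * 1000  # homogeneous strong intentions, assumed for the demo
rate = sum(simulate_behaviour(agents, tremble=0.2)) / 1000
# expected behaviour rate is near 0.8 * 0.9 + 0.2 * 0.5 = 0.82
```

Sweeping `tremble` from 0 upward and comparing the simulated behaviour rates against observed data is the kind of rationality-deviation analysis the abstract describes.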