48 results for Agent-Based Modeling
Abstract:
There has been much interest in the belief–desire–intention (BDI) agent-based model for developing scalable intelligent systems, e.g. using the AgentSpeak framework. However, reasoning from sensor information in these large-scale systems remains a significant challenge. For example, agents may be faced with information from heterogeneous sources which is uncertain and incomplete, while the sources themselves may be unreliable or conflicting. In order to derive meaningful conclusions, it is important that such information be correctly modelled and combined. In this paper, we choose to model uncertain sensor information in Dempster–Shafer (DS) theory. Unfortunately, as in other uncertainty theories, simple combination strategies in DS theory are often too restrictive (losing valuable information) or too permissive (resulting in ignorance). For this reason, we investigate how a context-dependent strategy originally defined for possibility theory can be adapted to DS theory. In particular, we use the notion of largely partially maximal consistent subsets (LPMCSes) to characterise the context for when to use Dempster’s original rule of combination and for when to resort to an alternative. To guide this process, we identify existing measures of similarity and conflict for finding LPMCSes along with quality of information heuristics to ensure that LPMCSes are formed around high-quality information. We then propose an intelligent sensor model for integrating this information into the AgentSpeak framework which is responsible for applying evidence propagation to construct compatible information, for performing context-dependent combination and for deriving beliefs for revising an agent’s belief base. Finally, we present a power grid scenario inspired by a real-world case study to demonstrate our work.
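To make the combination step concrete, the following is a minimal Python sketch of Dempster's rule of combination, the classical rule whose context-dependent use the paper investigates. The two sensor mass functions at the end are hypothetical illustrations, not data from the paper.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses that sum to 1.
    Returns the normalised combined mass function; raises an error when
    the sources are totally conflicting (K == 1) and the rule is undefined.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        overlap = a & b
        if overlap:
            combined[overlap] = combined.get(overlap, 0.0) + ma * mb
        else:
            conflict += ma * mb  # K: mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources: rule undefined")
    return {focal: mass / (1.0 - conflict) for focal, mass in combined.items()}

# Hypothetical readings from two sensors monitoring a power line:
m1 = {frozenset({"faulty"}): 0.8, frozenset({"faulty", "ok"}): 0.2}
m2 = {frozenset({"ok"}): 0.6, frozenset({"faulty", "ok"}): 0.4}
print(dempster_combine(m1, m2))

As the normalisation constant 1 - K approaches zero, the rule's output becomes increasingly unstable; that is exactly the high-conflict context in which the paper resorts to an alternative combination strategy.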
Abstract:
The TELL ME agent-based model simulates the connections between health agency communication, personal decisions to adopt protective behaviour during an influenza epidemic, and the effect of those decisions on epidemic progress. The behaviour decisions are modelled as a combination of personal attitude, behaviour adoption by neighbours, and the recent local incidence of influenza. This paper sets out and justifies the model design, including how these decision factors have been operationalised. By exploring the effects of different communication strategies, the model is intended to assist health authorities with their influenza epidemic communication plans. It can both help users understand the complex interactions between communication, personal behaviour and epidemic progress, and guide future data collection to improve communication planning.
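As a purely illustrative operationalisation of these decision factors, the sketch below uses a weighted-sum threshold rule in Python; the weights, threshold and functional form are placeholder assumptions for exposition, not the operationalisation defined in the TELL ME model.

def adopts_protective_behaviour(attitude, neighbour_adoption, local_incidence,
                                w_attitude=0.5, w_social=0.3, w_risk=0.2,
                                threshold=0.5):
    """Hypothetical rule: an agent adopts protective behaviour when a
    weighted sum of its personal attitude (0-1), the fraction of
    neighbours already adopting (0-1), and recent local influenza
    incidence (scaled to 0-1) crosses a threshold. All parameter
    values are illustrative, not TELL ME defaults."""
    score = (w_attitude * attitude
             + w_social * neighbour_adoption
             + w_risk * local_incidence)
    return score >= threshold

print(adopts_protective_behaviour(attitude=0.7, neighbour_adoption=0.4,
                                  local_incidence=0.1))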
Abstract:
This Integration Insight provides a brief overview of the most popular modelling techniques used to analyse complex real-world problems, as well as some less popular but highly relevant techniques. The modelling methods are divided into three categories, each encompassing a number of methods, as follows: 1) Qualitative Aggregate Models (Soft Systems Methodology, Concept Maps and Mind Mapping, Scenario Planning, Causal (Loop) Diagrams), 2) Quantitative Aggregate Models (Function Fitting and Regression, Bayesian Nets, Systems of Differential Equations / Dynamical Systems, System Dynamics, Evolutionary Algorithms) and 3) Individual-Oriented Models (Cellular Automata, Microsimulation, Agent-Based Models, Discrete Event Simulation, Social Network Analysis). Each technique is broadly described with example uses, key attributes and reference material.
Abstract:
The agent-based social simulation component of the TELL ME project (WP4) developed prototype software to assist communications planners to understand the complex relationships between communication, personal protective behaviour and epidemic spread. Using the simulation, planners can enter different potential communications plans, and see their simulated effect on attitudes, behaviour and the consequent effect on an influenza epidemic.
The model and the software to run the model are both freely available (see Section 2.2.1 for instructions on how to obtain the relevant files). This report provides the documentation for the prototype software. The major component is the user guide (Section 2), which provides instructions on how to set up the software, training scenarios for becoming familiar with the model's operation and use, and details about the model controls and output.
The model contains many parameters. Default values and their sources are described in Section 3. These are unlikely to be suitable for all countries and may also need to be changed as new research is conducted. Instructions for customising these values are also included (see Section 3.5).
The final technical reference contains two parts. The first is a guide for advanced users who wish to run multiple simulations and analyse the results (Section 4.1). The second orients programmers who wish to adapt or extend the simulation model (Section 4.2). This material is not suitable for general users.
Abstract:
The development of artificial neural network (ANN) models to predict the rheological behavior of grouts is described in this paper, and the sensitivity of the rheological parameters to variation in mixture ingredients is also evaluated. The input parameters of the neural network were the mixture ingredients influencing the rheological behavior of grouts, namely the cement content, fly ash, ground-granulated blast-furnace slag, limestone powder, silica fume, water-binder ratio (w/b), high-range water-reducing admixture, and viscosity-modifying agent (welan gum). The six outputs of the ANN models were the mini-slump, the apparent viscosity at low shear, and the yield stress and plastic viscosity values of the Bingham and modified Bingham models, respectively. The model is based on a multi-layer feed-forward neural network. The details of the proposed ANN, with its architecture, training, and validation, are presented in this paper. A database of 186 mixtures from eight different studies was developed to train and test the ANN model. The effectiveness of the trained ANN model is evaluated by comparing its responses with the experimental data used in the training process. The results show that the ANN model can accurately predict the mini-slump, the apparent viscosity at low shear, and the yield stress and plastic viscosity values of the Bingham and modified Bingham models of the pseudo-plastic grouts used in the training process. The model can also predict these properties for new mixtures within the practical range of the input variables used in training, with absolute errors of 2%, 0.5%, 8%, 4%, 2%, and 1.6%, respectively. The sensitivity analysis of the ANN model showed that the trends obtained by the models were in good agreement with the actual experimental results, demonstrating the effect of mixture ingredients on fluidity and on the rheological parameters with both the Bingham and modified Bingham models.
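For readers unfamiliar with the architecture, the following is a minimal Python sketch of the same kind of multi-layer feed-forward network: eight mixture-ingredient inputs mapped to six rheological outputs. The hidden-layer size, solver settings and synthetic data are illustrative placeholders, not the architecture or the 186-mixture database reported in the paper.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((186, 8))  # stand-ins for cement, fly ash, slag, limestone
                          # powder, silica fume, w/b, HRWRA, welan gum
y = rng.random((186, 6))  # stand-ins for mini-slump, apparent viscosity,
                          # Bingham and modified-Bingham yield stress and
                          # plastic viscosity

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(12,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))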
Abstract:
In the IEEE 802.11 MAC layer protocol, there are different trade-off points between the number of nodes competing for the medium and the network capacity provided to them. There is also a trade-off between the wireless channel condition during the transmission period and the energy consumption of the nodes. Current approaches to modeling energy consumption in 802.11-based networks do not consider the influence of the channel condition on all types of frames (control and data) in the WLAN, nor do they consider the effects across the different MAC and PHY schemes that can occur in 802.11 networks. In this paper, we investigate energy consumption as a function of the number of competing nodes in IEEE 802.11's MAC and PHY layers under error-prone wireless channel conditions, and present a new energy consumption model. Analysis of the power consumed by each type of MAC and PHY over different bit error rates shows that the parameters in these layers play a critical role in determining the overall energy consumption of the ad-hoc network. The goal of this research is not only to compare energy consumption using exact formulae in saturated IEEE 802.11-based DCF networks under varying numbers of competing nodes, but also, as the results show, to demonstrate that channel errors have a significant impact on energy consumption.
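The paper's exact formulae are not reproduced in the abstract, so the sketch below is only a simplified illustration of the underlying effect: an L-bit frame survives an error-prone channel with probability (1 - BER)^L, so the expected number of transmission attempts, and with it the transmit energy, grows with the bit error rate. The frame size, transmit power and data rate are assumed example values.

def expected_tx_energy(frame_bits, bit_error_rate, tx_power_w, data_rate_bps):
    """Expected transmit energy (joules) for one frame, assuming
    independent bit errors and retransmission until success."""
    p_frame_ok = (1.0 - bit_error_rate) ** frame_bits
    expected_attempts = 1.0 / p_frame_ok       # geometric number of tries
    t_frame = frame_bits / data_rate_bps       # airtime per attempt (s)
    return expected_attempts * tx_power_w * t_frame

for ber in (1e-6, 1e-5, 1e-4):
    print(ber, expected_tx_energy(12000, ber, tx_power_w=1.0,
                                  data_rate_bps=11e6))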
Abstract:
Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the currently available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base, and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool.
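The abstract does not spell out the underlying formalism, so the following is only a hypothetical sketch of reasoning with imprecise probabilistic knowledge: each structural feature elicited from past screens carries a probability interval for "is a substrate", and a query intersects the intervals of the features a compound exhibits to obtain conservative lower and upper bounds. The feature names and intervals are invented for illustration.

knowledge_base = {            # hypothetical elicited probability intervals
    "hydroxyl_at_R1": (0.6, 0.9),
    "methyl_at_R2":   (0.4, 0.7),
}

def query_substrate_bounds(features):
    """Intersect interval constraints on P(substrate) from each feature."""
    lo, hi = 0.0, 1.0
    for f in features:
        f_lo, f_hi = knowledge_base[f]
        lo, hi = max(lo, f_lo), min(hi, f_hi)
    if lo > hi:
        raise ValueError("inconsistent knowledge for these features")
    return lo, hi

print(query_substrate_bounds(["hydroxyl_at_R1", "methyl_at_R2"]))  # (0.6, 0.7)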
Abstract:
Scalability and efficiency of on-chip communication in emerging Multiprocessor System-on-Chip (MPSoC) designs are critical design considerations. Conventional bus-based interconnection schemes no longer fit MPSoCs with a large number of cores. The Network-on-Chip (NoC) is widely accepted as the next-generation interconnection scheme for large-scale MPSoCs. The increase in MPSoC complexity requires fast and accurate system-level modeling techniques for rapid modeling and verification of emerging MPSoCs. However, existing modeling methods are limited in delivering the essentials of timing accuracy and simulation speed. This paper proposes a novel system-level NoC modeling method, based on SystemC and TLM2.0, that is capable of delivering timing accuracy close to cycle-accurate modeling techniques at a significantly lower simulation cost. Experimental results are presented to demonstrate the proposed method. ©2010 IEEE.
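The paper's model is written in SystemC/TLM2.0; as a language-neutral illustration of the core idea, the Python/simpy toy below annotates each transaction with a per-hop latency instead of simulating every cycle, which is what buys transaction-level models their simulation speed at a small cost in timing accuracy. The topology and delay values are placeholders.

import simpy

HOP_DELAY_NS = 5  # assumed per-router latency annotation

def transaction(env, name, hops):
    """Model a packet as one timed event per hop, not per cycle."""
    start = env.now
    for _ in range(hops):
        yield env.timeout(HOP_DELAY_NS)
    print(f"{name}: {hops} hops, latency {env.now - start} ns")

env = simpy.Environment()
env.process(transaction(env, "core0->mem", hops=4))
env.process(transaction(env, "core3->core7", hops=2))
env.run()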