776 results for agent-based modelling
Abstract:
In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components depends on factors such as the flexibility of their combination or the ease of their selection in centralised or distributed environments such as the Internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.
Abstract:
The presented work proposes a methodology for the simulation of offshore wind conditions using CFD. The main objective is the development of a numerical model for the characterization of atmospheric boundary layers of different stability levels, stability being the most important issue in offshore wind resource assessment. Based on Monin-Obukhov theory, the steady k-ε standard turbulence model is modified to take thermal stratification in the surface layer into account. The validity of Monin-Obukhov theory in offshore conditions is discussed through an analysis of a three-day episode at the FINO-1 platform.
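Monin-Obukhov theory enters such models through stability corrections to the logarithmic wind profile. As an illustration of the idea (not the paper's CFD implementation), a minimal sketch using the common Businger-Dyer correction forms; all numeric values below are assumptions:

```python
import math

KAPPA = 0.4  # von Karman constant

def psi_m(zeta):
    """Stability correction for momentum (Businger-Dyer forms), zeta = z/L."""
    if zeta >= 0:                                  # stable or neutral
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25                # unstable
    return (2.0 * math.log((1.0 + x) / 2.0)
            + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def wind_speed(z, u_star, z0, L):
    """Mean wind speed at height z: log law plus stability correction."""
    return (u_star / KAPPA) * (math.log(z / z0) - psi_m(z / L) + psi_m(z0 / L))

# Illustrative offshore values: 80 m hub height, sea roughness z0 ~ 2e-4 m
u_neutral = wind_speed(80.0, 0.3, 2e-4, 1e9)    # L -> infinity: near-neutral
u_stable  = wind_speed(80.0, 0.3, 2e-4, 200.0)  # stably stratified
```

For the same friction velocity, the stable profile yields a higher hub-height speed than the neutral one, which is why stratification matters for resource assessment.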
Abstract:
In this paper, we introduce the B2DI model, which extends the BDI model to perform Bayesian inference under uncertainty. For scalability and flexibility purposes, Multiply Sectioned Bayesian Network (MSBN) technology has been selected and adapted to BDI agent reasoning. A belief update mechanism has been defined for agents whose belief models are connected by public shared beliefs, and the certainty of these beliefs is updated based on the MSBN. The classical BDI agent architecture has been extended to manage uncertainty using Bayesian reasoning. The resulting extended model, called B2DI, proposes a new control loop. The proposed B2DI model has been evaluated in a network fault diagnosis scenario, comparing it with two previously developed agent models on a real testbed diagnosis scenario using JADEX. As a result, the proposed model exhibits significant improvements in the cost and time required to carry out a reliable diagnosis.
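Setting the MSBN machinery aside, the core of any Bayesian belief update is Bayes' rule over a set of hypotheses. A toy sketch for a fault-diagnosis setting; the hypothesis names and all probabilities are invented for illustration, not taken from the paper:

```python
def bayes_update(prior, likelihood):
    """Posterior over hypotheses given P(evidence | hypothesis) for each one."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# hypothetical network-fault beliefs and the likelihood of an observed alarm
prior = {"link_down": 0.1, "congestion": 0.3, "ok": 0.6}
likelihood = {"link_down": 0.9, "congestion": 0.5, "ok": 0.05}
posterior = bayes_update(prior, likelihood)
```

An agent holding `posterior` as its updated belief would now rank congestion as the most plausible fault, even though "ok" had the highest prior.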
Abstract:
This paper presents the development of the robotic multi-agent system SMART. In this system, the agent concept is applied to both hardware and software entities. Hardware agents are robots, with three and four legs, and an IP camera that takes images of the scene where the cooperative task is carried out. Hardware agents cooperate closely with software agents. The latter can be classified into image processing, communications, task management and decision making, and planning and trajectory generation agents. To model, control and evaluate the performance of cooperative tasks among agents, a kind of Petri net called a Work-Flow Petri Net is used. Experimental results show the good performance of the system.
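A Petri net coordinates such tasks through tokens in places and transitions that fire when all their input places are marked. A minimal sketch of the firing rule, with a hypothetical two-agent synchronisation (place and transition names are invented, and this is plain Petri net semantics rather than the paper's Work-Flow extension):

```python
def enabled(marking, pre):
    """A transition may fire when each input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume tokens from input places, produce tokens in output places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# the task starts only when both the robot and the camera agent signal readiness
marking = {"robot_ready": 1, "image_done": 1}
pre, post = {"robot_ready": 1, "image_done": 1}, {"task_running": 1}
if enabled(marking, pre):
    marking = fire(marking, pre, post)
```

The transition acts as the synchronisation point between the hardware and software agents: it cannot fire until both tokens are present.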
Abstract:
Cooperative systems are suitable for many types of applications, and nowadays such systems are widely used to improve a previously defined system or to coordinate multiple devices working together. This paper provides an alternative to improve the reliability of a previous intelligent identification system. The proposed approach implements a cooperative model based on a multi-agent architecture. The new system is composed of several radar-based systems, each of which identifies a detected object and transmits its partial result through several agents over a wireless network. The proposed topology is a centralized architecture in which a coordinator device is in charge of providing the final identification result depending on the group behavior. To find the final outcome, three different mechanisms are introduced. The simplest one is based on majority voting, whereas the others use two different weighted voting procedures, both providing the system with learning capabilities. Using an appropriate network configuration, the success rate can be improved from the initial 80% to more than 90%.
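The two simplest combination mechanisms can be sketched directly; the labels and weights below are invented examples, and the learned per-agent weights stand in for whichever weighting procedure the system actually uses:

```python
from collections import Counter

def majority_vote(labels):
    """Identification reported by the largest number of radar agents."""
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, weights):
    """Each agent's vote counts in proportion to its (learned) weight."""
    scores = {}
    for label, w in zip(labels, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

simple   = majority_vote(["car", "truck", "car", "bus"])
weighted = weighted_vote(["car", "truck", "truck"], [0.9, 0.4, 0.4])
```

The weighted example shows the point of learning the weights: a single reliable agent (weight 0.9) outvotes two weaker ones that agree with each other.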
Abstract:
We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
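The evolutionary loop of selection, crossover and mutation can be sketched in a few lines. This is a generic real-coded genetic algorithm with an invented toy misfit function, not the paper's inversion code (which adds a smoothing operator and operates on subsurface partitions):

```python
import random

random.seed(0)  # reproducible run

def evolve(pop, fitness, n_gen=100, p_mut=0.1):
    """Minimise `fitness` via selection, one-point crossover and Gaussian mutation."""
    for _ in range(n_gen):
        pop = sorted(pop, key=fitness)             # selection: keep the fittest half
        parents = pop[: len(pop) // 2]
        children = []
        while len(parents) + len(children) < len(pop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0.0, 0.1) if random.random() < p_mut else g
                     for g in child]               # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# toy misfit: squared distance of model parameters to a known target
target = [1.0, -2.0, 0.5]
misfit = lambda m: sum((a - b) ** 2 for a, b in zip(m, target))
population = [[random.uniform(-3.0, 3.0) for _ in range(3)] for _ in range(40)]
best = evolve(population, misfit)
```

In the geophysical setting, `misfit` would instead measure the difference between observed and predicted magnetic anomaly data.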
Abstract:
Robotics is a field that presents many problems because it depends on a large number of disciplines, devices, technologies and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments presents many new challenges, such as household or professional service robots. To enable the rapid, low-cost development of robotic systems, with reusable code that remains maintainable and robust in the medium and long term, novel approaches are required that provide generic models and software paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems and inspired by the human nervous system, able to transfer the control characteristics of the biological system while taking advantage of the best properties of distributed software systems.
Abstract:
Society today is completely dependent on computer networks, the Internet and distributed systems, which place at our disposal the services necessary to perform our daily tasks. Subconsciously, we rely increasingly on network management systems. These systems allow us, in general, to maintain, manage, configure, scale, adapt, modify, edit, protect, and enhance the main distributed systems. Their role is secondary, unknown and transparent to the users, yet they provide the support necessary to maintain the distributed systems whose services we use every day. If we do not consider network management during the development stage of a distributed system, there can be serious consequences, or even the total failure of its development. It is necessary, therefore, to consider management within the design of distributed systems and to systematise that design to minimise the impact of network management on distributed systems projects. In this paper, we present a framework that allows network management systems to be designed systematically. To accomplish this goal, formal modelling tools are used to model, sequentially, different views of the same problem. These views cover all the aspects involved in the system: process definitions identify the responsible parties and define the agents involved, in order to propose a deployment on a distributed architecture that is both feasible and appropriate.
Abstract:
Proper management of supply chains is fundamental to the overall performance of forest-based activities. Usually, efficient management techniques rely on decision support software, which needs to be able to generate fast and effective outputs from the set of possibilities. To do this, it is necessary to provide accurate models representative of the dynamic interactions of the systems. Due to the nature of forest-based supply chains, event-based models are better suited to describe their behaviours. This work proposes the modelling and simulation of a forest-based supply chain, in particular the biomass supply chain, through the SimPy framework. This Python-based tool allows the modelling of discrete-event systems using constructs such as events, processes and resources. The developed model was used to assess the impact of changes in the daily working plan in three situations. First, as a control case, the deterministic behaviour was simulated. Second, a machine delay was introduced and its implications for plan accomplishment were analysed. Finally, to better address real operating conditions, stochastic processing and driving times were simulated. The obtained results validate the SimPy simulation environment as a framework for modelling supply chains in general and the biomass problem in particular.
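SimPy provides `Environment`, process and timeout primitives for this; the underlying event logic of the control case and the machine-delay case can be sketched with the standard library alone. All job times below are invented for illustration:

```python
def simulate(plan, delay=0.0):
    """Completion times for jobs (arrival, processing) on a single machine
    that becomes available at `delay`; delay=0.0 is the deterministic control case."""
    t_free = delay
    done = []
    for arrival, processing in sorted(plan):
        start = max(arrival, t_free)   # wait for the machine or for the job to arrive
        t_free = start + processing
        done.append(t_free)
    return done

plan = [(0.0, 2.0), (1.0, 2.0), (5.0, 1.0)]        # hypothetical daily working plan
baseline = simulate(plan)                          # control case
delayed  = simulate(plan, delay=3.0)               # machine unavailable for 3 time units
```

Comparing `baseline` and `delayed` shows how a single machine delay propagates through the rest of the plan, which is exactly the kind of question the event-based model answers.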
Abstract:
Queensland fruit fly, Bactrocera (Dacus) tryoni (QFF), is arguably the most costly horticultural insect pest in Australia. Despite this, no model is available to describe its population dynamics and aid in its management. This paper describes a cohort-based model of the population dynamics of the Queensland fruit fly. The model is primarily driven by weather variables, and so can be used at any location where appropriate meteorological data are available. In the model, the life cycle is divided into a number of discrete stages to allow physiological processes to be defined as accurately as possible. Eggs develop and hatch into larvae, which develop into pupae, which emerge as either teneral females or males. Both females and males can enter reproductive and over-wintering life stages, and there is a trapped male life stage to allow model predictions to be compared with trap catch data. All development rates are temperature-dependent. Daily mortality rates are temperature-dependent, but may also be influenced by moisture, density of larvae in fruit, fruit suitability, and age. Eggs, larvae and pupae all have constant establishment mortalities, causing a defined proportion of individuals to die upon entering that life stage. Transfer from one immature stage to the next is based on physiological age. In the adult life stages, transfer between stages may require additional and/or alternative functions. Maximum fecundity is 1400 eggs per female, and the maximum daily oviposition rate is 80 eggs per female per day. The actual number of eggs laid by a female on any given day is restricted by temperature, density of larvae in fruit, suitability of fruit for oviposition, and female activity. Activity of reproductive females and males, which affects reproduction and trapping, decreases with rainfall. Trapping of reproductive males is determined by activity, temperature and the proportion of males in the active population. Limitations of the model are discussed.
Despite these, the model provides a useful agreement with trap catch data, and allows key areas for future research to be identified. These critical gaps in the current state of knowledge exist despite over 50 years of research on this key pest. By explicitly attempting to model the population dynamics of this pest we have clearly identified the research areas that must be addressed before progress can be made in developing the model into an operational tool for the management of Queensland fruit fly. (C) 2003 Published by Elsevier B.V.
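Temperature-dependent development driven by physiological age can be sketched with a linear degree-day model. The base temperature and degree-day requirement below are placeholders, not calibrated QFF parameters:

```python
def daily_development(temp, t_base=11.0, dd_required=180.0):
    """Fraction of a life stage completed in one day at mean temperature `temp`
    (linear degree-day model; both thresholds are illustrative, not QFF values)."""
    return max(0.0, temp - t_base) / dd_required

def day_of_transfer(temps):
    """First day on which accumulated physiological age reaches 1.0."""
    age = 0.0
    for day, t in enumerate(temps, start=1):
        age += daily_development(t)
        if age >= 1.0:
            return day
    return None  # stage not completed within the temperature record
```

A cohort entering a stage accumulates physiological age each day and transfers to the next stage once that age reaches 1.0, which is the mechanism the abstract describes for the immature stages.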
Abstract:
Background. The present paper describes a component of a large population cost-effectiveness study that aimed to identify the averted burden and economic efficiency of current and optimal treatment for the major mental disorders. This paper reports on the findings for the anxiety disorders (panic disorder/agoraphobia, social phobia, generalized anxiety disorder, post-traumatic stress disorder and obsessive-compulsive disorder). Method. Outcome was calculated as averted 'years lived with disability' (YLD), a population summary measure of disability burden. Costs were the direct health care costs in 1997-8 Australian dollars. The cost per YLD averted (efficiency) was calculated for those already in contact with the health system for a mental health problem (current care) and for a hypothetical optimal care package of evidence-based treatment for this same group. Data sources included the Australian National Survey of Mental Health and Well-being and published treatment effects and unit costs. Results. Current coverage was around 40% for most disorders with the exception of social phobia at 21%. Receipt of interventions consistent with evidence-based care ranged from 32% of those in contact with services for social phobia to 64% for post-traumatic stress disorder. The cost of this care was estimated at $400 million, resulting in a cost per YLD averted ranging from $7761 for generalized anxiety disorder to $34 389 for panic/agoraphobia. Under optimal care, costs remained similar but health gains were increased substantially, reducing the cost per YLD to < $20 000 for all disorders. Conclusions. Evidence-based care for anxiety disorders would produce greater population health gain at a similar cost to that of current care, resulting in a substantial increase in the cost-effectiveness of treatment.
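The efficiency metric itself is a simple ratio, which makes the conclusion easy to see: at a fixed budget, doubling the health gain halves the cost per YLD. The figures below are invented round numbers, not the study's estimates:

```python
def cost_per_yld(total_cost, yld_averted):
    """Efficiency: direct health-care cost per year lived with disability averted."""
    return total_cost / yld_averted

BUDGET = 400e6                             # same spend under both care packages
current = cost_per_yld(BUDGET, 20_000)     # hypothetical YLD averted, current care
optimal = cost_per_yld(BUDGET, 40_000)     # hypothetical YLD averted, optimal care
```

Since the cost is similar under both packages, the entire efficiency gain comes from the denominator: more burden averted per dollar.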
Abstract:
Mineralogical analysis is often used to assess the liberation properties of particles. A direct method of estimating liberation is to actually break particles and then directly obtain liberation information by applying mineralogical analysis to each size-class of the product. Another technique is to artificially apply random breakage to the feed particle sections to estimate the resultant distribution of product particle sections. This technique provides a useful alternative estimation method. Because this technique is applied to particle sections, the actual liberation properties for particles can only be estimated by applying stereological correction. A recent stereological technique has been developed that allows the discrepancy between the linear intercept composition distribution and the particle section composition distribution to be used as a guide for estimating the particle composition distribution. The paper will show results validating this new technique using numerical simulation. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important due to the large power which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, modelling of the mill power draw has been done using empirical models. Although these models are reliable, they cannot model mills and operating conditions outside the model database boundaries. Also, due to their static nature, the impact of changing conditions within the mill on the power draw cannot be determined using such models. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is determined by the size of the smallest particle present in the model and by the material properties (stiffness). For small particles the computational time step will be short, whilst for large particles it will be larger. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of the mill whose charge is characterised by different size distributions, while preserving the constant mass of the charge and the mill speed. (C) 2004 Elsevier Ltd. All rights reserved.
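The dependence of the time step on the smallest particle follows from the contact oscillation period, which scales as 2√(m/k) for particle mass m and contact stiffness k. A sketch with illustrative parameter values (density, stiffness and safety factor are assumptions, not from the paper):

```python
import math

def dem_time_step(radius, density=2700.0, stiffness=1.0e6, safety=0.2):
    """Fraction of the critical step 2*sqrt(m/k) for a spherical particle
    (all default parameter values are illustrative, not from the paper)."""
    mass = density * (4.0 / 3.0) * math.pi * radius ** 3
    return safety * 2.0 * math.sqrt(mass / stiffness)

dt_small = dem_time_step(0.005)   # 5 mm particle
dt_large = dem_time_step(0.010)   # 10 mm particle
```

Since mass scales with r³, halving the smallest radius cuts the permissible time step by a factor of 2^1.5 ≈ 2.8, which is why trimming unnecessarily fine particles from the charge model pays off so strongly in run time.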
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related and compared with previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
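The update rule can be illustrated in one dimension with a Gaussian search model: sample a population, weight each sample by the objective treated as an unnormalised density, and move the model mean toward the weighted sample mean. This is a simplified sketch of the idea (fixed variance, invented objective and step size), not the paper's full algorithm:

```python
import math
import random

random.seed(1)  # reproducible run

def kl_step(mu, sigma, objective, n=50, lr=0.5):
    """One update of a Gaussian search model: the mean moves toward the
    objective-weighted sample mean, a stochastic step reducing the KL
    divergence between the model density and the objective-as-density."""
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    ws = [objective(x) for x in xs]
    weighted_mean = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
    return mu + lr * (weighted_mean - mu)

# toy objective: an unnormalised Gaussian bump centred at 2.0
f = lambda x: math.exp(-(x - 2.0) ** 2)
mu = -1.0
for _ in range(40):
    mu = kl_step(mu, 0.5, f)
```

Each iteration pulls the model density toward regions where the objective is large, so the mean drifts from its starting point at -1.0 to the optimum near 2.0, mirroring the population-based incremental learning dynamics the paper analyses.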