832 results for Multi-particle systems
Abstract:
A power describes the ability of an agent to act in some way. While this notion of power is critical in the context of organisational dynamics, and has been studied by others in this light, it must be constrained so as to be useful in any practical application. In particular, we are concerned with how power may be used by agents to govern the imposition and management of norms, and how agents may dynamically assign norms to other agents within a multi-agent system. We approach the problem by defining a syntax and semantics for powers governing the creation, deletion, or modification of norms within a system, which we refer to as normative powers. We then extend this basic model to accommodate more general powers that can modify other powers within the system, and describe how agents playing certain roles are able to apply powers, changing the system’s norms, and also the powers themselves. We examine how the powers found within a system may change as the status of norms change, and show how standard norm modification operations, such as the derogation, annulment, and modification of norms, may be represented within our system.
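The paper's formal syntax and semantics are not reproduced in this abstract, but the core idea, powers as role-indexed operations over a store of norms, can be illustrated with a minimal sketch. All names below (Norm, NormativePower, NormativeSystem) are hypothetical stand-ins, not the authors' notation.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Norm:
    """A norm imposed on a role (hypothetical structure)."""
    name: str
    target_role: str
    condition: str          # e.g. "speed > limit"
    status: str = "active"  # active, derogated, annulled

@dataclass
class NormativePower:
    """A power held by a role to create, delete, or modify norms."""
    holder_role: str
    action: Literal["create", "delete", "modify"]

class NormativeSystem:
    def __init__(self):
        self.norms: dict[str, Norm] = {}
        self.powers: list[NormativePower] = []

    def exercise(self, agent_role: str, power: NormativePower, norm: Norm) -> bool:
        # An agent may apply a power only if it plays the holding role
        # and the power currently exists in the system.
        if agent_role != power.holder_role or power not in self.powers:
            return False
        if power.action == "create":
            self.norms[norm.name] = norm
        elif power.action == "delete":
            self.norms.pop(norm.name, None)   # annulment
        elif power.action == "modify":
            self.norms[norm.name] = norm      # replace with the updated norm
        return True
```

Extending the sketch toward the paper's more general powers would mean letting `exercise` also add, delete, or modify entries of `self.powers` itself.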
Abstract:
Several agent platforms that implement the belief-desire-intention (BDI) architecture have been proposed. Even though most of them are built on existing general-purpose programming languages, e.g. Java, agents are programmed either in a new programming language or in a domain-specific language expressed in XML. This prevents the use of advanced features of the underlying programming language and the integration with existing libraries and frameworks, which are essential for the development of enterprise applications. Motivated by these limitations of BDI agent platforms, we have implemented BDI4JADE, which is presented in this paper. It is implemented as a BDI layer on top of JADE, a well-accepted agent platform.
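BDI4JADE itself is a Java library layered on JADE, so plain Java objects and existing frameworks remain usable. As a language-neutral illustration of what a BDI layer provides, here is a minimal sketch of the belief-desire-intention cycle; all names are hypothetical and do not reflect the BDI4JADE API.

```python
class BDIAgent:
    """Minimal BDI reasoning cycle: beliefs and goals select plans to run."""
    def __init__(self):
        self.beliefs: dict[str, object] = {}   # what the agent holds true
        self.goals: list[str] = []             # desires it is committed to
        self.plans: dict[str, callable] = {}   # goal -> plan body

    def add_plan(self, goal: str, body):
        self.plans[goal] = body

    def step(self):
        # Deliberate: pick the first goal that has an applicable plan.
        for goal in list(self.goals):
            plan = self.plans.get(goal)
            if plan is not None:
                plan(self.beliefs)        # intention: execute the plan
                self.goals.remove(goal)   # goal achieved (simplified)
                return

agent = BDIAgent()
agent.beliefs["door_open"] = False
agent.goals.append("open_door")
agent.add_plan("open_door", lambda beliefs: beliefs.update(door_open=True))
agent.step()
assert agent.beliefs["door_open"]
```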
Abstract:
The rapid growth of urban areas has a significant impact on traffic and transportation systems. New management policies and planning strategies are clearly necessary to cope with the increasingly limited capacity of existing road networks. The concept of Intelligent Transportation Systems (ITS) arises in this scenario; rather than attempting to increase road capacity by means of physical modifications to the infrastructure, the premise of ITS relies on the use of advanced communication and computer technologies to handle today’s traffic and transportation facilities. Influencing users’ behaviour patterns is a challenge that has stimulated much research in the ITS field, where human factors gain great importance in modelling, simulating, and assessing such an innovative approach. This work aims at using Multi-agent Systems (MAS) to represent traffic and transportation systems in the light of the new performance measures brought about by ITS technologies. Agents are well suited to represent components of a system that are geographically and functionally distributed, as most components in traffic and transportation are. A BDI (beliefs, desires, and intentions) architecture is presented as an alternative to the traditional models used to represent driver behaviour within microscopic simulation, allowing for an explicit representation of users’ mental states. Basic concepts of ITS and MAS are presented, as well as some application examples related to the subject. This has motivated the extension of an existing microscopic simulation framework to incorporate MAS features and enhance the representation of drivers. In this way, demand is generated from a population of agents as the result of their daily decisions on route and departure time. The extended simulation model, which now supports the interaction of BDI driver agents, was effectively implemented, and different experiments were performed to test this approach in commuter scenarios. MAS provides a process-driven approach that fosters the easy construction of modular, robust, and scalable models, characteristics lacking in former result-driven approaches. Its abstraction premises allow for a closer association between the model and its practical implementation. Uncertainty and variability are addressed in a straightforward manner, as cognitive architectures, such as the BDI approach used in this work, provide an easier representation of human-like behaviours within the driver structure. In this way, MAS extends microscopic simulation of traffic to better address the complexity inherent in ITS technologies.
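As an illustration of the kind of driver agent described above, the sketch below shows a commuter that revises its route choice and departure time daily from experienced travel times. The update rules and parameters are hypothetical, not those of the extended simulation framework.

```python
import random

class DriverAgent:
    """Sketch of a commuter revising route and departure choices daily
    from experienced travel times (rules and numbers hypothetical)."""
    def __init__(self, routes):
        # Belief base: mean experienced travel time per route, in minutes.
        self.beliefs = {r: 30.0 for r in routes}
        self.departure = 8.0  # departure time, adjusted by experience

    def choose_route(self, epsilon=0.1):
        # Mostly exploit the believed-fastest route, sometimes explore.
        if random.random() < epsilon:
            return random.choice(list(self.beliefs))
        return min(self.beliefs, key=self.beliefs.get)

    def update(self, route, observed_time, alpha=0.3):
        # Revise the belief toward the newly experienced travel time.
        self.beliefs[route] += alpha * (observed_time - self.beliefs[route])
        # Leave a bit earlier if the commute is running long.
        if observed_time > 35:
            self.departure -= 0.1

agent = DriverAgent(["A", "B"])
for day in range(10):
    r = agent.choose_route()
    agent.update(r, observed_time=random.gauss(30 if r == "A" else 28, 3))
```

Demand in the simulation then emerges from a population of such agents, each making these decisions on a daily basis.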
Abstract:
In Brazil and around the world, oil companies are looking for new technologies and processes that can increase the oil recovery factor of mature reservoirs in a simple and inexpensive way. Recent research has developed a process called Gas-Assisted Gravity Drainage (GAGD), classified as a gas-injection IOR method. The process, which is undergoing pilot testing in the field, has been extensively studied through physical scale models and laboratory core floods, owing to its high oil recoveries relative to other gas-injection IOR methods. It consists of injecting gas at the top of a reservoir through horizontal or vertical injector wells and displacing the oil, taking advantage of the natural gravity segregation of fluids, toward a horizontal producer well placed at the bottom of the reservoir. To study this process, a homogeneous reservoir and a multi-component fluid model with characteristics similar to those of light-oil Brazilian fields were modelled in a compositional simulator, in order to optimize the operational parameters. The process was simulated in GEM (CMG, 2009.10). The operational parameters studied were the gas injection rate, the type of injected gas, and the locations of the injector and producer wells. We also studied the presence of water drive in the process. The results showed that the maximum vertical spacing between the two wells yielded the maximum oil recovery in GAGD. It was also found that the largest injection rate produced the largest recovery factors. This parameter controls the speed of the injected-gas front and determines whether or not the gravitational force dominates the oil recovery process. Natural gas performed better than CO2, and the presence of an aquifer in the reservoir had little influence on the process. The economic analysis found that injecting natural gas is more economically beneficial than injecting CO2.
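The balance between gravitational and viscous forces mentioned above is often summarized by a dimensionless gravity number. The sketch below computes one common form of it, Ng = kΔρg/(μu); the property values are hypothetical and the exact definition used in the study may differ.

```python
def gravity_number(k_md, delta_rho, mu_cp, frontal_velocity_m_per_day, g=9.81):
    """Ratio of gravitational to viscous forces at the gas front.
    One common form: Ng = k * delta_rho * g / (mu * u); large values
    suggest a gravity-stable displacement. Inputs converted to SI."""
    k = k_md * 9.869e-16                        # millidarcy -> m^2
    mu = mu_cp * 1e-3                           # centipoise -> Pa.s
    u = frontal_velocity_m_per_day / 86400.0    # m/day -> m/s
    return k * delta_rho * g / (mu * u)

# The injection rate sets the frontal velocity u, which in turn sets Ng
# and hence which force regime governs the displacement.
for u in (0.05, 0.5, 5.0):  # hypothetical frontal velocities, m/day
    print(u, gravity_number(k_md=500, delta_rho=600, mu_cp=2.0,
                            frontal_velocity_m_per_day=u))
```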
Abstract:
Industrial automation is directly linked to the development of information technology. Better hardware solutions, as well as improvements in software development methodologies, have made possible the rapid growth of production process control. In this thesis, we propose an architecture that joins two technologies, one from the hardware field (industrial networks) and one from the software field (multi-agent systems). The objective of this proposal is to combine these technologies in a multi-agent architecture that allows control strategies to be implemented in field devices. With this, we develop an agent architecture to detect and solve problems that may occur in the industrial network environment. Our work allies machine learning with the industrial context, making the proposed multi-agent architecture adaptable to unfamiliar or unexpected production environments. We use neural networks and present strategies for allocating these networks to industrial network field devices. With this, we intend to improve decision support at the plant level and to allow operation independent of human intervention.
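As a rough illustration of allocating a neural network to a field device, the sketch below embeds a tiny one-layer network in an agent that flags anomalous sensor readings. Names, data, and the network size are hypothetical; the thesis' actual networks and allocation strategies are more elaborate.

```python
import numpy as np

class FieldDeviceAgent:
    """Sketch: an agent near a field device that flags anomalous sensor
    readings with a tiny one-layer network (all names hypothetical)."""
    def __init__(self, n_inputs, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0, 0.1, n_inputs)
        self.b = 0.0

    def _forward(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))  # sigmoid

    def train(self, X, y, lr=0.1, epochs=200):
        for _ in range(epochs):                 # plain gradient descent
            p = self._forward(X)
            self.w -= lr * (X.T @ (p - y)) / len(y)
            self.b -= lr * float(np.mean(p - y))

    def is_fault(self, x):
        return self._forward(x) > 0.5           # raise alarm / negotiate

# Hypothetical training data: [pressure, temperature] with fault labels.
X = np.array([[1.0, 0.9], [1.1, 1.0], [3.0, 2.8], [2.9, 3.1]])
y = np.array([0.0, 0.0, 1.0, 1.0])
agent = FieldDeviceAgent(2)
agent.train(X, y)
print(agent.is_fault(np.array([3.0, 3.0])))  # True expected
```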
Abstract:
In this work, we propose the Interperception paradigm, a new approach comprising a set of rules and a software architecture for merging users from different interfaces into the same virtual environment. The system detects the user's resources and provides transformations of the data in order to allow its visualization in 3D, 2D, and textual (1D) interfaces. This allows any user to connect, access information, and exchange information with other users in a feasible way, without the need to change hardware or software. As results, two virtual environments built according to this paradigm are presented.
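The central transformation idea can be suggested with a small sketch: one shared scene description is degraded to match each client's display capability. The rules below are illustrative only; the actual Interperception rule set is richer.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    x: float
    y: float
    z: float

def render_for(capability: str, scene: list[SceneObject]):
    """Degrade one shared scene to what the client can display
    (illustrative rules, not the Interperception rule set)."""
    if capability == "3d":
        return [(o.name, (o.x, o.y, o.z)) for o in scene]  # full geometry
    if capability == "2d":
        return [(o.name, (o.x, o.y)) for o in scene]       # drop depth: map view
    # Textual (1D) client: describe instead of draw.
    return [f"{o.name} at ({o.x:.0f}, {o.y:.0f})" for o in scene]

scene = [SceneObject("avatar-ana", 2, 3, 1), SceneObject("door", 5, 0, 0)]
for cap in ("3d", "2d", "text"):
    print(cap, render_for(cap, scene))
```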
Abstract:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason for this is that the function of a protein is intrinsically related to its spatial conformation. However, such conformations are very difficult to obtain experimentally in the laboratory. Thus, this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machines and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work presents the problem of imbalanced classes, artificial techniques for class balancing (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. In order to evaluate the ML methods, a cross-validation procedure is applied, where the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by a hypothesis test, to evaluate whether there is a statistically significant difference between them. With respect to the results obtained with the individual classifiers, Support Vector Machines presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for solving the problem presented in this work. The class balancing techniques, on the other hand, did not produce a significant improvement in the global classification error. Nevertheless, the use of such techniques did improve the classification error for the minority class. In this context, the NCL technique proved to be the most appropriate.
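A present-day reconstruction of this experimental setup can be sketched with scikit-learn and imbalanced-learn, which implement several of the methods named above (Bagging, Boosting, SVM, and the NCL balancing technique). The data here is synthetic and all parameters are placeholders, not those of the thesis.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from imblearn.pipeline import Pipeline
from imblearn.under_sampling import NeighbourhoodCleaningRule  # the NCL method

# Imbalanced synthetic stand-in for the protein structural-class data.
X, y = make_classification(n_samples=600, weights=[0.85], random_state=0)

candidates = {
    "SVM": SVC(),
    "Bagging(tree)": BaggingClassifier(DecisionTreeClassifier(), n_estimators=10),
    "Boosting": AdaBoostClassifier(n_estimators=50),
}
for name, clf in candidates.items():
    # Class balancing runs inside each cross-validation fold (via the
    # imblearn Pipeline) so no test information leaks into training.
    pipe = Pipeline([("balance", NeighbourhoodCleaningRule()), ("clf", clf)])
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name}: mean error = {1 - scores.mean():.3f}")
```

Pairwise hypothesis tests over such per-fold error rates would then decide whether the observed differences are statistically significant.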
Abstract:
In multi-robot systems, both the control architecture and the work strategy represent a challenge for researchers. It is important to have a robust architecture that can be easily adapted to changing requirements. It is also important that the work strategy allows robots to complete tasks efficiently, considering that robots interact directly in environments with humans. In this context, this work explores two approaches to robot soccer team coordination for the development of cooperative tasks. Both approaches are based on a combination of imitation learning and reinforcement learning. In the first approach, we developed a control architecture, a fuzzy inference engine for recognizing situations in robot soccer games, a software system for narrating robot soccer games based on the inference engine, and an implementation of learning by imitation from the observation and analysis of other robotic teams. Moreover, state abstraction was efficiently implemented in reinforcement learning applied to the standard robot soccer problem. Finally, reinforcement learning was implemented in a form where actions are explored only in some states (for example, states where a specialist robot system used them), differently from the traditional form, where actions have to be tested in all states. In the second approach, reinforcement learning was implemented with function approximation, for which an algorithm called RBF-Sarsa(λ) was created. In both approaches, batch reinforcement learning algorithms were implemented, and imitation learning was used as a seed for reinforcement learning. Moreover, learning from robotic teams controlled by humans was explored. The proposals in this work proved efficient in the standard robot soccer problem and, when implemented in other robotic systems, will allow those systems to perform assigned tasks efficiently and effectively. These approaches provide high adaptability to changes in requirements and in the environment.
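The combination of Sarsa(λ) with radial basis function approximation can be sketched as follows; the state space, feature layout, and parameters are hypothetical, not those of the RBF-Sarsa(λ) algorithm created in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
centers = rng.uniform(0, 1, (20, 2))       # RBF centers over a 2-D state space
n_actions, sigma, gamma, lam, alpha = 4, 0.2, 0.95, 0.8, 0.1
w = np.zeros((n_actions, len(centers)))    # one weight vector per action

def features(state):
    d = np.linalg.norm(centers - state, axis=1)
    return np.exp(-(d / sigma) ** 2)       # Gaussian RBF activations

def q(state, a):
    return w[a] @ features(state)          # approximate action value

def epsilon_greedy(state, eps=0.1):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax([q(state, a) for a in range(n_actions)]))

def sarsa_lambda_step(s, a, r, s2, a2, z):
    """One on-policy update with accumulating eligibility traces.
    At the start of each episode: z = np.zeros_like(w)."""
    delta = r + gamma * q(s2, a2) - q(s, a)   # TD error
    z *= gamma * lam                           # decay all traces
    z[a] += features(s)                        # boost trace for taken action
    w[:] += alpha * delta * z                  # move weights along traces
    return z
```

Seeding with imitation learning, as done in both approaches, would amount to initializing `w` from state-action values observed while watching a specialist or human-controlled team.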
Abstract:
In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves. In other words, there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, an investigation of the impact of using data distribution methods among the components of combination systems will be performed. In this investigation, different methods of data distribution will be used, and the combination systems will be analysed under several different configurations. As a result of this analysis, the aim is to detect which combination systems are more suitable for using data distribution among their components.
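One simple data-distribution scheme, used here only as an illustration, gives each component a disjoint random slice of the feature set and combines the components by majority vote:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary problem standing in for a real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
slices = np.array_split(rng.permutation(X.shape[1]), 4)  # 4 components

members = []
for cols in slices:
    # Each component only ever sees its own feature slice.
    clf = DecisionTreeClassifier(random_state=0).fit(Xtr[:, cols], ytr)
    members.append((cols, clf))

votes = np.stack([clf.predict(Xte[:, cols]) for cols, clf in members])
pred = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote, binary labels
print("ensemble accuracy:", (pred == yte).mean())
```

Disjoint slices force the components to disagree in informative ways, which is exactly the diversity the combination system needs.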
Abstract:
The use of intelligent agents in multi-classifier systems appeared in order to make the centralized decision process of a multi-classifier system distributed, flexible and incremental. Based on this, the NeurAge (Neural Agents) system (Abreu et al. 2004) was proposed. This system has performance superior to some combination-centred methods (Abreu, Canuto, and Santana 2005). Negotiation is important to multi-agent system performance, but most negotiations are defined informally. A way to formalize the negotiation process is to use an ontology. In the context of classification tasks, an ontology provides an approach to formalize the concepts and the rules that govern the relations between these concepts. This work aims at using ontologies to give a formal description of the negotiation methods of a multi-agent system for classification tasks, more specifically the NeurAge system. Through ontologies, we intend to make the NeurAge system more formal and open, allowing new agents to become part of the system during negotiation. To this end, the NeurAge system will be studied with respect to its functioning, focusing mainly on its negotiation methods. After that, some negotiation ontologies found in the literature will be studied, and those chosen for this work will be adapted to the negotiation methods used in NeurAge.
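The kind of concepts such an ontology formalizes can be suggested with a sketch; the class and property names below are illustrative and are not taken from the ontologies studied.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    sender: str
    label: str          # the class the agent argues for
    confidence: float   # evidence backing the proposal

@dataclass
class NegotiationAct:
    performative: str   # e.g. "propose", "counter-propose", "accept"
    content: Proposal

def negotiate(proposals: list[Proposal]) -> str:
    """One formalizable rule: conflicts resolve to the best-supported label."""
    return max(proposals, key=lambda p: p.confidence).label

acts = [NegotiationAct("propose", Proposal("agent-mlp", "class-A", 0.72)),
        NegotiationAct("propose", Proposal("agent-rbf", "class-B", 0.64))]
print(negotiate([a.content for a in acts]))   # "class-A"
```

In an open system, a new agent can join a negotiation as long as its messages instantiate these shared concepts.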
Abstract:
Multi-classifier systems, also known as ensembles, have been widely used to solve several problems because they often present better performance than the individual classifiers that form them. For this to happen, however, the base classifiers need to be both accurate and diverse among themselves, which is known as the diversity/accuracy dilemma. Given its importance, some works have investigated ensemble behaviour in the context of this dilemma. However, the majority of them address homogeneous ensembles, i.e., ensembles composed of only one type of classifier. Motivated by this limitation, this thesis uses genetic algorithms to perform a detailed study of the diversity/accuracy dilemma for heterogeneous ensembles.
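A genetic algorithm for this search can be sketched as follows: a chromosome assigns a classifier type to each ensemble slot, and fitness trades member accuracy against team diversity. The encoding, operators, and numbers are illustrative, not the thesis' actual design.

```python
import random

TYPES = ["tree", "knn", "svm", "naive-bayes", "mlp"]

def random_chromosome(size=5):
    return [random.choice(TYPES) for _ in range(size)]

def fitness(chromosome, accuracy_of, diversity_of, w=0.5):
    # Trade off member accuracy against team diversity (the dilemma).
    acc = sum(accuracy_of[c] for c in chromosome) / len(chromosome)
    return w * acc + (1 - w) * diversity_of(chromosome)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c, rate=0.1):
    return [random.choice(TYPES) if random.random() < rate else g for g in c]

# Hypothetical per-type accuracies and a crude diversity proxy (type variety);
# a real study would estimate both from validation data.
acc = {"tree": 0.82, "knn": 0.80, "svm": 0.86, "naive-bayes": 0.75, "mlp": 0.84}
div = lambda c: len(set(c)) / len(c)

pop = [random_chromosome() for _ in range(20)]
for _ in range(50):
    pop.sort(key=lambda c: fitness(c, acc, div), reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(10)]
print(pop[0])   # best heterogeneous team found
```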
Abstract:
The World Wide Web has been consolidated over the last years as a standard platform for providing software systems on the Internet. Nowadays, a great variety of user applications are available on the Web, from corporate applications to the banking domain, and from electronic commerce to the governmental domain. Given the quantity of information available and the number of users dealing with these services, many Web systems have sought to present usage recommendations as part of their functionality, in order to let users make better use of the available services based on their profile, navigation history and system use. In this context, this dissertation proposes the development of an agent-based framework that offers recommendations to users of Web systems. It involves the conception, design and implementation of an object-oriented framework. The framework agents can be plugged into or unplugged from existing Web applications in a non-invasive way using aspect-oriented techniques. The framework is evaluated through its instantiation in three different Web systems.
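The framework weaves its agents into existing applications with aspect-oriented techniques; the analogous non-invasive interception can be suggested in Python with a decorator. Everything below is a hypothetical sketch, not the framework's API.

```python
import functools

class RecommendationAgent:
    """Sketch: an agent that observes navigation and recommends from it."""
    def __init__(self):
        self.history = {}

    def observe(self, user, page):
        self.history.setdefault(user, []).append(page)

    def recommend(self, user):
        seen = self.history.get(user, [])
        return f"related-to:{seen[-1]}" if seen else "popular-items"

agent = RecommendationAgent()

def observed(view):                          # the "pointcut": any view function
    @functools.wraps(view)
    def wrapper(user, *args, **kwargs):
        agent.observe(user, view.__name__)   # "advice" runs before the view
        return view(user, *args, **kwargs)
    return wrapper

@observed                  # plugged in without editing the view body itself
def product_page(user):
    return "rendered product page"

product_page("ana")
print(agent.recommend("ana"))                # "related-to:product_page"
```

Unplugging the agent is just removing the decorator, which is the non-invasiveness the framework aims for.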
Abstract:
The use of multi-agent systems for classification tasks has been proposed in order to overcome some drawbacks of multi-classifier systems and, as a consequence, to improve the performance of such systems. As a result, the NeurAge system was proposed. This system is composed of several neural agents which communicate and negotiate a common result for the test patterns. In the NeurAge system, the negotiation method is very important to the overall performance of the system, since the agents need to reach an agreement about a problem when there is a conflict among them. This thesis presents an extended analysis of the NeurAge system in which all kinds of classifiers are used; this system is now named the ClassAge system. The aim is to analyze the reaction of this system to modifications in its topology and configuration.
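The ClassAge cycle can be suggested with a sketch in which heterogeneous classifier agents each propose a label for a test pattern and negotiate when they disagree; the agent internals and the negotiation rule below are illustrative stand-ins.

```python
class ClassifierAgent:
    """Wraps any kind of classifier behind a common proposal interface."""
    def __init__(self, name, predict_fn):
        self.name = name
        self.predict_fn = predict_fn     # returns (label, confidence)

    def propose(self, pattern):
        return self.predict_fn(pattern)

def classify(agents, pattern):
    proposals = {a.name: a.propose(pattern) for a in agents}
    labels = {label for label, _ in proposals.values()}
    if len(labels) == 1:                 # agreement: no negotiation needed
        return labels.pop()
    # Conflict: a simple negotiation round in which less confident
    # agents concede to the best-supported proposal.
    return max(proposals.values(), key=lambda p: p[1])[0]

agents = [ClassifierAgent("knn",  lambda x: ("A", 0.61)),
          ClassifierAgent("tree", lambda x: ("B", 0.70)),
          ClassifierAgent("svm",  lambda x: ("B", 0.58))]
print(classify(agents, pattern=[0.2, 0.9]))   # "B"
```

Because the interface is classifier-agnostic, changing the system's topology or configuration amounts to adding, removing, or swapping agents in the list.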