873 results for Multi-Agent Model


Relevance:

100.00%

Publisher:

Abstract:

We examine the problem of a buyer who wishes to purchase and combine n objects owned by n individual owners to realize a higher value. The owners are able to delay their entry into the sale process: they can either sell now or sell later. Among other assumptions, the simple assumptions of competition (that the presence of more owners at the point of sale reduces their surplus) and discounting lead to interesting results: there is costly delay in equilibrium. Moreover, with sufficiently strong competition, the probability of delay increases with n. Thus, buyers who discount the future will face increased costs as the number of owners increases. The source of transaction costs is the owners' desire to dis-coordinate in the presence of competition. These costs are unrelated to transaction costs currently identified in the literature, specifically those due to asymmetric information, or public goods problems where players impose negative externalities on each other by under-contributing.
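The dis-coordination mechanism described in this abstract can be made concrete with a toy simulation. The sketch below is not the paper's model: the surplus-splitting rule (surplus divided among the owners selling at the same time), the discount factor, and the fixed delay probability are assumptions chosen only to illustrate how competition at the point of sale makes delay attractive.

```python
import random

# Toy sketch, not the paper's model: n owners each delay with probability p;
# an owner selling alongside k rivals at the same point of sale gets an
# assumed surplus of base / (1 + k), and delayed surplus is discounted by delta.

def simulate(n, p, base=1.0, delta=0.9, trials=10_000):
    """Average owner surplus and the share of trials in which someone delays."""
    surplus_total, delayed_trials = 0.0, 0
    for _ in range(trials):
        delays = [random.random() < p for _ in range(n)]
        sell_now = delays.count(False)
        sell_later = n - sell_now
        if sell_later:
            delayed_trials += 1
        for d in delays:
            rivals = (sell_later if d else sell_now) - 1   # others at the same point of sale
            payoff = base / (1 + rivals)
            surplus_total += delta * payoff if d else payoff
    return surplus_total / (trials * n), delayed_trials / trials

for n in (2, 4, 8):
    print(n, simulate(n, p=0.3))
```

With more owners, the chance that at least one of them delays rises mechanically; the paper's result is the stronger claim that, under sufficiently strong competition, the equilibrium delay probability itself rises with n.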

Relevance:

100.00%

Publisher:

Abstract:

On-line learning methods have been applied successfully in multi-agent systems to achieve coordination among agents. Learning in multi-agent systems implies a non-stationary scenario as perceived by the agents, since the behavior of other agents may change as they simultaneously learn how to improve their actions. Non-stationary scenarios can be modeled as Markov Games, which can be solved using the Minimax-Q algorithm, a combination of Q-learning (a Reinforcement Learning (RL) algorithm that directly learns an optimal control policy) and the Minimax algorithm. However, finding optimal control policies using any RL algorithm (Q-learning and Minimax-Q included) can be very time consuming. To improve the learning time of Q-learning, we considered the QS-algorithm, in which a single experience can update more than a single action value by using a spreading function. In this paper, we contribute the Minimax-QS algorithm, which combines the Minimax-Q algorithm and the QS-algorithm. We conduct a series of empirical evaluations of the algorithm in a simplified simulator of the soccer domain. We show that even with a very simple domain-dependent spreading function, the performance of the learning algorithm can be improved.
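A minimal sketch of the kind of update this abstract describes, under two simplifications that are assumptions of this sketch, not the authors' method: the next-state value uses a pure-strategy maximin over a tabular Q[state, own action, opponent action] instead of the linear program of full Minimax-Q, and the spreading function is a stand-in for the domain-dependent one used in the paper.

```python
import numpy as np

# Q is a table indexed as Q[state, my_action, opponent_action].
# Simplification: the next-state value is a pure-strategy maximin rather than
# the mixed-strategy linear program of full Minimax-Q.

def spread(a, a_prime, sigma=0.5):
    """Assumed similarity between actions: full weight on the taken action,
    partial weight on its immediate neighbours."""
    if a == a_prime:
        return 1.0
    return sigma if abs(a - a_prime) == 1 else 0.0

def minimax_qs_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
    v_next = np.max(np.min(Q[s_next], axis=1))       # maximin value of the next state
    target = r + gamma * v_next
    for a_prime in range(Q.shape[1]):                 # QS idea: one experience updates
        w = spread(a, a_prime)                        # several neighbouring action values
        if w > 0.0:
            Q[s, a_prime, o] += alpha * w * (target - Q[s, a_prime, o])

# Example: 10 states, 4 own actions, 4 opponent actions.
Q = np.zeros((10, 4, 4))
minimax_qs_update(Q, s=0, a=2, o=1, r=1.0, s_next=3)
print(Q[0, :, 1])
```

A single experience (s, a, o, r, s') thus shifts not only Q[s, a, o] but also the entries of neighbouring actions, weighted by the spreading function, which is where the speed-up over plain Minimax-Q comes from.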

Relevance:

100.00%

Publisher:

Abstract:

A multi-agent framework for spatial electric load forecasting, especially suited to simulating the different dynamics involved in distribution systems, is presented. The service zone is divided into several sub-zones; each sub-zone is considered an independent agent identified with a corresponding load level, and its relationships with the neighboring zones are represented as development probabilities. With this setting, different kinds of agents can be developed to simulate the growth pattern of the loads in distribution systems. This paper presents two different kinds of agents to simulate different situations, with some promising results.
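As a rough illustration of the sub-zone-as-agent idea, the sketch below assumes a class name, a growth rate, and a propagation rule that are not taken from the paper: each sub-zone agent holds a load level and development probabilities toward its neighbours, and in each simulated period it may trigger load growth in those neighbours.

```python
import random

# Illustrative only: names, the 5% growth step, and the propagation rule are assumptions.
class SubZoneAgent:
    def __init__(self, name, load, neighbors):
        self.name = name
        self.load = load              # current load level of this sub-zone
        self.neighbors = neighbors    # {neighbor name: development probability}

    def step(self, zones, growth=0.05):
        """With each neighbour's development probability, push some load growth onto it."""
        for other, prob in self.neighbors.items():
            if random.random() < prob:
                zones[other].load *= 1 + growth

zones = {
    "A": SubZoneAgent("A", load=10.0, neighbors={"B": 0.4, "C": 0.1}),
    "B": SubZoneAgent("B", load=5.0, neighbors={"A": 0.2, "C": 0.3}),
    "C": SubZoneAgent("C", load=2.0, neighbors={"A": 0.1, "B": 0.2}),
}
for _ in range(10):                   # ten planning periods
    for zone in zones.values():
        zone.step(zones)
print({name: round(z.load, 2) for name, z in zones.items()})
```

The "different kinds of agents" mentioned in the abstract would correspond to different step rules, for example different growth magnitudes or different reactions to development in neighbouring zones.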

Relevance:

100.00%

Publisher:

Abstract:

The present study introduces a multi-agent architecture designed to automate the process of data integration and intelligent data analysis. Unlike other approaches, the multi-agent architecture was designed using an agent-based methodology, Tropos. Based on the proposed architecture, we describe a Web-based application in which the agents are responsible for analysing petroleum well drilling data to identify possible abnormalities. The intelligent data analysis method used was a neural network.
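A minimal sketch of the analysis-agent role described above, with assumptions throughout: the class name, the choice of scikit-learn's MLPClassifier as the neural network, and the random placeholder data stand in for the paper's actual agents, model, and drilling records.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical analysis agent: wraps a neural network and flags drilling
# records classified as abnormal. The data below is random placeholder data.
class DrillingAnalysisAgent:
    def __init__(self):
        self.model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)

    def train(self, X, y):
        self.model.fit(X, y)          # y: 1 = abnormal, 0 = normal

    def analyse(self, records):
        """Return the indices of records the network marks as abnormal."""
        return np.where(self.model.predict(records) == 1)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))         # 200 samples, 4 sensor features
y = rng.integers(0, 2, size=200)      # placeholder labels
agent = DrillingAnalysisAgent()
agent.train(X, y)
print(agent.analyse(X[:10]))
```

In the architecture described, this agent would receive its training and query data from the integration agents rather than from the placeholder arrays used here.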