29 results for Warfare Agents
Abstract:
The development of practical agent languages has progressed significantly over recent years, but largely independently of developments in multiagent cooperation and planning. For example, while various extensions and improvements have been proposed for the popular AgentSpeak(L), it is still essentially a single-agent language. In response, in this paper we describe a simple yet effective technique for multiagent planning that enables an agent to take advantage of cooperating agents in a society. In particular, we build on a technique that enables new plans to be added to a plan library through the invocation of an external planning component, and extend it to include the construction of plans that chain the subplans of others. Our mechanism makes use of plan patterns that insulate the planning process from the resulting distributed aspects of plan execution through local proxy plans, which encode information about the preconditions and effects of the external plans provided by agents willing to cooperate. In this way, we allow an agent to discover new ways of achieving its goals through local planning and the delegation of tasks for execution by others, allowing it to overcome individual limitations.
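The following is a minimal illustrative sketch, not the authors' implementation: all names (Plan, chain, the "carrier" example) are hypothetical. It shows the general idea of proxy plans that expose only the preconditions and effects of plans offered by cooperating agents, so a local planner can chain them alongside local plans and later delegate the proxied steps.

```python
# Illustrative sketch only (hypothetical names): chaining local plans with
# "proxy plans" that stand in for plans offered by cooperating agents. The
# planner reasons only over preconditions and effects; a plan whose provider
# is another agent would be delegated rather than executed locally.

from dataclasses import dataclass


@dataclass(frozen=True)
class Plan:
    name: str
    preconditions: frozenset   # literals required before the plan can run
    effects: frozenset         # literals the plan achieves
    provider: str = "self"     # "self" for local plans, otherwise the cooperating agent


def chain(plans, initial_state, goal):
    """Greedy forward chaining over local and proxy plans (illustration only)."""
    state, steps = set(initial_state), []
    progress = True
    while goal not in state and progress:
        progress = False
        for plan in plans:
            if plan not in steps and plan.preconditions <= state:
                state |= plan.effects
                steps.append(plan)
                progress = True
                break
    return steps if goal in state else None


# Example: achieving "delivered" requires a plan that only agent "carrier" offers.
plans = [
    Plan("pack", frozenset({"have_item"}), frozenset({"packed"})),
    Plan("ship", frozenset({"packed"}), frozenset({"delivered"}), provider="carrier"),
]
print([(p.name, p.provider) for p in chain(plans, {"have_item"}, "delivered") or []])
```

In this sketch the returned sequence mixes local and proxied steps; only the bookkeeping needed for planning (preconditions and effects) crosses the agent boundary, which is the insulation role the abstract attributes to plan patterns.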
Abstract:
While there has been much work on developing frameworks and models of norms and normative systems, the impact of norms on the practical reasoning of agents has attracted less attention. The problem is that traditional agent architectures and their associated languages provide no mechanism to adapt an agent at runtime to the norms constraining its behaviour. This matters because if BDI-type agents are to operate in open environments, they need to adapt to changes in the norms that regulate such environments. In response, in this paper we provide a technique to extend BDI agent languages, enabling them to enact behaviour modification at runtime in response to newly accepted norms. Our solution consists of creating new plans to comply with obligations and suppressing the execution of existing plans that violate prohibitions. We demonstrate the viability of our approach through an implementation of our solution in the AgentSpeak(L) language.
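A rough sketch of that two-part idea follows, under assumed data structures that are not the paper's AgentSpeak(L) implementation: accepting an obligation adds a plan triggered by the obliged goal, while accepting a prohibition suppresses (rather than deletes) any existing plan whose effects would violate it.

```python
# Illustrative sketch only (hypothetical structures): runtime norm adaptation
# for a BDI-style plan library. Obligations add compliant plans; prohibitions
# suppress violating plans without removing them, so they could be reinstated
# if the norm later expires.

from dataclasses import dataclass, field


@dataclass
class Plan:
    trigger: str            # goal-addition event the plan responds to
    effects: set            # states of affairs the plan brings about
    suppressed: bool = False


@dataclass
class NormAwareAgent:
    plan_library: list = field(default_factory=list)

    def accept_obligation(self, obliged_goal):
        # Comply by adding a new plan whose trigger is the obliged goal.
        self.plan_library.append(Plan(trigger=obliged_goal, effects={obliged_goal}))

    def accept_prohibition(self, forbidden_state):
        # Suppress any plan that would bring about the forbidden state.
        for plan in self.plan_library:
            if forbidden_state in plan.effects:
                plan.suppressed = True

    def applicable_plans(self, event):
        # Suppressed plans are never selected for execution.
        return [p for p in self.plan_library
                if p.trigger == event and not p.suppressed]
```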
Abstract:
A system built in terms of autonomous agents may require even greater correctness assurance than one which merely reacts to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder: by their nature, autonomous agents may react in different ways to the same inputs over time because, for instance, they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents' developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.
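A minimal sketch of the general scheme, under assumed details that are not the paper's: test cases are encoded as parameter vectors, a simple genetic algorithm evolves them, and fitness rewards test cases on which the agent performs poorly. The function run_agent is a hypothetical stand-in for executing the agent under test.

```python
# Illustrative sketch only (hypothetical fitness and encoding): evolve test-case
# parameter vectors, ranking candidates by how poorly the agent under test
# performs on them, so the search converges on demanding test cases.

import random


def run_agent(test_case):
    """Placeholder: run the agent in the environment configured by test_case
    and return a performance score in [0, 1] (higher = agent did better)."""
    return 1.0 - min(1.0, sum(test_case) / len(test_case))


def evolve_test_cases(dim=5, pop_size=20, generations=50, mutation_rate=0.2):
    population = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Lower agent performance first, i.e. the most demanding test cases.
        scored = sorted(population, key=run_agent)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:     # point mutation
                child[random.randrange(dim)] = random.random()
            children.append(child)
        population = parents + children
    return sorted(population, key=run_agent)[0]


hardest = evolve_test_cases()
print("most demanding test case found:", [round(x, 2) for x in hardest])
```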