4 results for Budapest
in Department of Computer Science E-Repository - King's College London, Strand, London
Abstract:
The behaviours of autonomous agents may deviate from those deemed to be for the good of the societal systems of which they are a part. Norms have therefore been proposed as a means to regulate agent behaviours in open and dynamic systems, where these norms specify the obliged, permitted and prohibited behaviours of agents. Regulation can effectively be achieved through the use of enforcement mechanisms that result in a net loss of utility for an agent in cases where the agent's behaviour fails to comply with the norms. Recognition of compliance is thus crucial for achieving regulation. In this paper we propose a generic architecture for observation of agent behaviours, and recognition of these behaviours as constituting, or counting as, compliance or violation. The architecture deploys monitors that receive inputs from observers and process these inputs together with transition network representations of individual norms. In this way, monitors determine the fulfilment or violation status of norms. The paper also describes a proof-of-concept implementation and deployment of monitors in electronic contracting environments.
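To make the monitoring idea concrete, here is a minimal sketch, not the paper's actual implementation: each norm is represented as a transition network mapping (status, observed event) pairs to a new status, and a monitor feeds incoming observations to every network to determine fulfilment or violation. All names, including the example obligation, are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TransitionNetwork:
    """Transition network for one norm: maps (state, observed event) -> state."""
    initial: str
    transitions: dict          # (state, event) -> next state
    state: str = field(init=False)

    def __post_init__(self):
        self.state = self.initial

    def observe(self, event: str) -> str:
        # Events with no matching transition leave the norm's status unchanged.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

class NormMonitor:
    """Receives observations of agent behaviour and updates every norm's status."""
    def __init__(self):
        self.norms: dict[str, TransitionNetwork] = {}

    def add_norm(self, name: str, network: TransitionNetwork):
        self.norms[name] = network

    def report(self, event: str) -> dict[str, str]:
        # Feed one observed event to all norms and return the resulting
        # statuses, from which compliance or violation can be recognised.
        return {name: net.observe(event) for name, net in self.norms.items()}

# Illustrative obligation: deliver goods once payment is received.
monitor = NormMonitor()
monitor.add_norm("deliver_after_payment", TransitionNetwork(
    initial="inactive",
    transitions={
        ("inactive", "payment_received"): "active",   # obligation triggered
        ("active", "goods_delivered"): "fulfilled",   # compliance recognised
        ("active", "deadline_passed"): "violated",    # violation recognised
    },
))
for event in ["payment_received", "deadline_passed"]:
    print(monitor.report(event))
# {'deliver_after_payment': 'active'} then {'deliver_after_payment': 'violated'}
```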
Abstract:
While there has been much work on developing frameworks and models of norms and normative systems, consideration of the impact of norms on the practical reasoning of agents has attracted less attention. The problem is that traditional agent architectures and their associated languages provide no mechanism to adapt an agent at runtime to norms constraining its behaviour. This is important because if BDI-type agents are to operate in open environments, they need to adapt to changes in the norms that regulate such environments. To address this, in this paper we provide a technique to extend BDI agent languages, enabling them to enact behaviour modification at runtime in response to newly accepted norms. Our solution consists of creating new plans to comply with obligations and suppressing the execution of existing plans that violate prohibitions. We demonstrate the viability of our approach through an implementation of our solution in the AgentSpeak(L) language.
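As an illustration of this runtime adaptation, the following sketch models a plan library outside any real BDI interpreter: accepting an obligation creates a new plan, while accepting a prohibition suppresses (rather than deletes) plans that would violate it, so they can be reinstated if the norm lapses. The Plan and PlanLibrary classes are hypothetical stand-ins for AgentSpeak(L) constructs, not the paper's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    trigger: str              # triggering event, e.g. "+!deliver(goods)"
    actions: list             # body of the plan
    suppressed: bool = False

class PlanLibrary:
    def __init__(self, plans=None):
        self.plans = plans if plans is not None else []

    def accept_obligation(self, trigger: str, actions: list):
        # Comply with a newly accepted obligation by creating a plan for it.
        self.plans.append(Plan(trigger, actions))

    def accept_prohibition(self, forbidden_action: str):
        # Suppress, rather than delete, every plan that would perform the
        # prohibited action, so it can be restored if the norm is revoked.
        for plan in self.plans:
            if forbidden_action in plan.actions:
                plan.suppressed = True

    def applicable(self, event: str):
        return [p for p in self.plans if p.trigger == event and not p.suppressed]

# Example run with illustrative triggers and actions.
lib = PlanLibrary([Plan("+!reach(dest)", ["drive_fast", "arrive"])])
lib.accept_prohibition("drive_fast")                         # new prohibition
lib.accept_obligation("+!report(status)", ["send_report"])   # new obligation
print([p.trigger for p in lib.applicable("+!reach(dest)")])    # [] (suppressed)
print([p.trigger for p in lib.applicable("+!report(status)")]) # ['+!report(status)']
```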
Abstract:
A system built in terms of autonomous agents may require even greater correctness assurance than one that merely reacts to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder: by their nature, autonomous agents may react in different ways to the same inputs over time because, for instance, they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents’ developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.
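The evolutionary search can be sketched in a few lines, assuming a test case is a vector of environment parameters and that fitness rewards test cases on which the agent performs worst. The run_agent function below is a hypothetical placeholder for executing the agent under test, and the loop is a generic selection/crossover/mutation scheme, not the paper's specific configuration.

```python
import random

def run_agent(test_case):
    # Placeholder for executing the autonomous agent in an environment
    # configured by test_case; returns the agent's performance score.
    return sum(abs(g - 0.7) for g in test_case)   # toy stand-in objective

def fitness(test_case):
    # Demanding test cases are those on which the agent performs worst,
    # so the search maximises the negated performance score.
    return -run_agent(test_case)

def evolve(pop_size=30, genes=5, generations=50, mut_rate=0.2):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut_rate:         # mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)                   # most demanding case found

print(evolve())
```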