6 results for Trust in organization
in Department of Computer Science E-Repository - King's College London, Strand, London
Abstract:
Cooperation is the fundamental underpinning of multi-agent systems, allowing agents to interact to achieve their goals. Where agents are self-interested, or potentially unreliable, there must be appropriate mechanisms to cope with the uncertainty that arises. In particular, agents must manage the risk associated with interacting with others who have different objectives, or who may fail to fulfil their commitments. Previous work has utilised the notions of motivation and trust in engendering successful cooperation between self-interested agents. Motivations provide a means for representing and reasoning about agents' overall objectives, and trust offers a mechanism for modelling and reasoning about reliability, honesty, veracity and so forth. This paper extends that work to address some of its limitations. In particular, we introduce the concept of a clan: a group of agents who trust each other and have similar objectives. Clan members treat each other favourably when making private decisions about cooperation, in order to gain mutual benefit. We describe mechanisms for agents to form, maintain, and dissolve clans in accordance with their self-interested nature, and give details of how clan membership influences individual decision making. Finally, through simulation experiments we illustrate the effectiveness of clan formation in addressing some of the inherent problems with cooperation among self-interested agents.
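The abstract only sketches how clan membership feeds into cooperation decisions, so the following is a minimal, hedged illustration of the idea: agents keep trust scores, form a clan tie when mutual trust is high, and apply a bias in favour of clan members when deciding whether to cooperate. The class names, default values and thresholds are assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch (not the authors' implementation): agents weigh
# cooperation requests, favouring fellow clan members. Names and
# thresholds are assumed for illustration only.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    clan: set = field(default_factory=set)      # names of trusted clan members
    trust: dict = field(default_factory=dict)   # name -> trust score in [0, 1]

    def accept_cooperation(self, requester: str, own_benefit: float) -> bool:
        """Decide whether to cooperate, treating clan members favourably."""
        trust = self.trust.get(requester, 0.3)   # assumed default trust in strangers
        bonus = 0.3 if requester in self.clan else 0.0
        return own_benefit + bonus >= (1.0 - trust)  # higher trust lowers the bar

    def maybe_join_clan(self, other: "Agent", threshold: float = 0.7) -> None:
        """Form a clan tie when mutual trust exceeds an assumed threshold."""
        if (self.trust.get(other.name, 0.0) >= threshold
                and other.trust.get(self.name, 0.0) >= threshold):
            self.clan.add(other.name)
            other.clan.add(self.name)

# Usage: two agents with high mutual trust form a clan tie, after which one
# accepts a marginal request it would refuse from an unknown stranger.
a, b = Agent("a"), Agent("b")
a.trust["b"], b.trust["a"] = 0.8, 0.9
a.maybe_join_clan(b)
print(a.accept_cooperation("b", own_benefit=0.1))   # True, thanks to the clan bonus
print(a.accept_cooperation("c", own_benefit=0.1))   # False for the stranger "c"
```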
Abstract:
Expressing contractual agreements electronically potentially allows agents to automatically perform functions surrounding contract use: establishment, fulfilment, renegotiation, etc. For such automation to be used for real business concerns, there needs to be a high level of trust in the agent-based system. While there has been much research on simulating trust between agents, there are areas where such trust is harder to establish. In particular, contract proposals may come from parties with whom an agent has had no prior interaction and, in competitive business-to-business environments, little reputation information may be available. In human practice, trust in a proposed contract is determined in part by the content of the proposal itself and by its similarity to prior contracts executed to varying degrees of success. In this paper, we argue that such analysis is also appropriate in automated systems; to provide it, we need systems to record salient details of prior contract use and algorithms for assessing proposals on their content. We use provenance technology to provide the former and detail algorithms for measuring contract success and similarity for the latter, applying them to an aerospace case study.
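As a rough illustration of the two assessments mentioned above, the sketch below computes a success score from recorded contract executions, a content-similarity score between a new proposal and prior contracts, and a similarity-weighted confidence for the proposal. The data layout (term sets and fulfilment counts) and the Jaccard measure are assumptions standing in for the provenance records and algorithms detailed in the paper.

```python
# Hedged sketch of content-based proposal assessment; all metrics and the
# contract representation are assumed for illustration.
def contract_success(fulfilled: int, total: int) -> float:
    """Fraction of obligations recorded as fulfilled (assumed success metric)."""
    return fulfilled / total if total else 0.0

def contract_similarity(terms_a: set, terms_b: set) -> float:
    """Jaccard similarity over contract terms (one plausible content measure)."""
    if not terms_a and not terms_b:
        return 1.0
    return len(terms_a & terms_b) / len(terms_a | terms_b)

def proposal_confidence(proposal_terms: set, history: list) -> float:
    """Similarity-weighted average of prior outcomes for a new proposal.

    `history` holds (terms, fulfilled, total) tuples taken from recorded
    executions of earlier contracts.
    """
    scored = [(contract_similarity(proposal_terms, terms), contract_success(f, n))
              for terms, f, n in history]
    total_weight = sum(w for w, _ in scored)
    if total_weight == 0:
        return 0.0
    return sum(w * s for w, s in scored) / total_weight

# Usage: a proposal resembling a fully honoured contract scores higher than
# one resembling a contract that was largely breached.
history = [({"delivery", "payment", "penalty"}, 5, 5),
           ({"delivery", "warranty"}, 1, 4)]
print(proposal_confidence({"delivery", "payment"}, history))
```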
Abstract:
Users are facing an increasing challenge of managing information and being available anytime anywhere, as the web exponentially grows. As a consequence, assisting them in their routine tasks has become a relevant issue to be addressed. In this paper, we introduce a software framework that supports the development of Personal Assistance Software (PAS). It relies on the idea of exposing a high level user model in order to increase user trust in the task delegation process as well as empowering them to manage it. The framework provides a synchronization mechanism that is responsible for dynamically adapting an underlying BDI agent-based running implementation in order to keep this high-level view of user customizations consistent with it.
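To make the synchronization idea concrete, here is a minimal sketch under assumed names: a high-level user model records which tasks the user has delegated and pushes each change down to a toy BDI-style agent by enabling or disabling the corresponding plan. No real BDI framework API is used; the classes and plan names are hypothetical.

```python
# Minimal sketch, assuming a toy BDI-style agent and a hypothetical mapping
# from user-visible tasks to agent plans.
class BDIAgent:
    """Stand-in for the running BDI agent-based implementation."""
    def __init__(self):
        self.plans = {"filter_mail": True, "schedule_meetings": True}

    def set_plan_enabled(self, plan: str, enabled: bool) -> None:
        if plan in self.plans:
            self.plans[plan] = enabled

class UserModel:
    """High-level view of delegated tasks exposed to the user."""
    def __init__(self, agent: BDIAgent):
        self.agent = agent
        self.delegations = dict(agent.plans)      # task -> delegated?

    def customize(self, task: str, delegated: bool) -> None:
        """Record the user's choice and synchronize the running agent."""
        self.delegations[task] = delegated
        self.agent.set_plan_enabled(task, delegated)

# Usage: the user revokes delegation of meeting scheduling; the underlying
# agent's plan is disabled so both views stay consistent.
agent = BDIAgent()
model = UserModel(agent)
model.customize("schedule_meetings", False)
print(agent.plans)   # {'filter_mail': True, 'schedule_meetings': False}
```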