Abstract:
Architecture description languages (ADLs) are used to specify high-level, compositional views of a software application. ADL research focuses on software composed of prefabricated parts, so-called software components. ADLs usually come equipped with rigorous state-transition style semantics, facilitating verification and analysis of specifications. Consequently, ADLs are well suited to configuring distributed and event-based systems. However, additional expressive power is required to describe enterprise software architectures, in particular those built on newer middleware such as implementations of Java's EJB specification or Microsoft's COM+/.NET. The enterprise requires distributed software solutions that are scalable, business-oriented and mission-critical. Progress toward these qualities can be made at various stages of the software development process; in particular, progress at the architectural level can be leveraged through the use of an ADL that incorporates trust and dependability analysis. Moreover, current industry approaches to enterprise development do not address several important architectural design issues. The TrustME ADL is designed to meet these requirements by combining approaches to software architecture specification with rigorous design-by-contract ideas. In this paper, we focus on several aspects of TrustME that facilitate the specification and analysis of middleware-based architectures for trusted enterprise computing systems.
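The abstract does not give TrustME's notation, but the design-by-contract idea it builds on can be sketched informally. The Java fragment below is a hypothetical illustration of attaching pre- and postconditions to a component's provided interface, so that architecture-level analysis can rely on them as dependability obligations; the interface, names and checks are assumptions and are not taken from the TrustME ADL.

// Hypothetical sketch only: pre/postconditions attached to a component's
// provided interface, in the spirit of design-by-contract. The interface,
// names and checks are illustrative and are not taken from the TrustME ADL.
public final class OrderServiceContract {

    /** Provided interface of the component being specified. */
    public interface OrderService {
        /** Reserves stock and returns a non-empty confirmation id. */
        String placeOrder(String customerId, int quantity);
    }

    /** Wraps an implementation and enforces the contract at the component boundary. */
    public static OrderService withContract(OrderService impl) {
        return (customerId, quantity) -> {
            // Preconditions: obligations on the client of the component.
            if (customerId == null || customerId.isEmpty()) {
                throw new IllegalArgumentException("customerId must be non-empty");
            }
            if (quantity <= 0) {
                throw new IllegalArgumentException("quantity must be positive");
            }
            String confirmation = impl.placeOrder(customerId, quantity);
            // Postcondition: obligation on the component, a dependability
            // guarantee that architecture-level analysis can rely on.
            if (confirmation == null || confirmation.isEmpty()) {
                throw new IllegalStateException("contract violated: empty confirmation id");
            }
            return confirmation;
        };
    }
}

Enforcing the contract in a wrapper keeps the obligations at the component boundary, which is roughly where an ADL-level specification would place them; how TrustME itself expresses such contracts is described in the paper, not here.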
Abstract:
Cooperation is the fundamental underpinning of multi-agent systems, allowing agents to interact to achieve their goals. Where agents are self-interested, or potentially unreliable, there must be appropriate mechanisms to cope with the uncertainty that arises. In particular, agents must manage the risk associated with interacting with others who have different objectives, or who may fail to fulfil their commitments. Previous work has utilised the notions of motivation and trust to engender successful cooperation between self-interested agents. Motivations provide a means for representing and reasoning about agents' overall objectives, and trust offers a mechanism for modelling and reasoning about reliability, honesty, veracity and so forth. This paper extends that work to address some of its limitations. In particular, we introduce the concept of a clan: a group of agents who trust each other and have similar objectives. Clan members treat each other favourably when making private decisions about cooperation, in order to gain mutual benefit. We describe mechanisms for agents to form, maintain, and dissolve clans in accordance with their self-interested nature, and give details of how clan membership influences individual decision making. Finally, through simulation experiments we illustrate the effectiveness of clan formation in addressing some of the inherent problems of cooperation among self-interested agents.
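The abstract does not reproduce the paper's decision mechanism; the sketch below only illustrates, under assumed trust values, thresholds and a simple linear expected-value model, how clan membership might lower the acceptance threshold in a self-interested cooperation decision. All names and numbers are assumptions for illustration, not the mechanism defined in the paper.

import java.util.Set;

// Hypothetical sketch of a clan-biased cooperation decision. The trust model,
// threshold values and clan-membership bonus are illustrative assumptions.
public final class ClanCooperation {

    /** Trust in the requester, in [0, 1], as maintained by this agent. */
    static double trustIn(String requesterId) {
        return 0.4; // placeholder: would come from the agent's trust model
    }

    /**
     * Decide whether to accept a cooperation request. Clan members receive
     * preferential treatment in the form of a lower acceptance threshold,
     * on the expectation of reciprocal benefit within the clan.
     */
    static boolean acceptRequest(String requesterId, Set<String> clan,
                                 double expectedBenefit, double expectedCost) {
        double trust = trustIn(requesterId);
        // Expected value of cooperating, discounted by how much we trust
        // the requester to honour their side of the commitment.
        double expectedValue = trust * expectedBenefit - expectedCost;
        double threshold = clan.contains(requesterId) ? -0.1 : 0.2;
        return expectedValue >= threshold;
    }

    public static void main(String[] args) {
        Set<String> clan = Set.of("agentB", "agentC");
        System.out.println(acceptRequest("agentB", clan, 2.0, 0.7)); // clan member: true
        System.out.println(acceptRequest("agentZ", clan, 2.0, 0.7)); // stranger: false
    }
}

With the assumed numbers, the same marginal request is accepted from a clan member but declined from a stranger, which is the qualitative effect the abstract describes: clan members are treated favourably in otherwise self-interested decisions.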