960 results for BELIEF REVISION


Relevance:

70.00%

Publisher:

Abstract:

Belief Revision deals with the problem of adding new information to a knowledge base in a consistent way. Ontology Debugging, on the other hand, aims to find the axioms in a terminological knowledge base that caused the base to become inconsistent. In this article, we propose a belief revision approach to finding and repairing inconsistencies in ontologies represented in some description logic (DL). As the usual belief revision operators cannot be applied directly to DLs, we propose new operators that can be used with more general logics and show that, in particular, they can be applied to the logics underlying OWL-DL and OWL Lite.
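As a hedged illustration of the kind of operation involved (a toy propositional sketch, not the DL operators proposed in the article), the following Python snippet revises a small belief base by keeping a maximal subset of the old beliefs that is consistent with the incoming one:

```python
from itertools import combinations, product

def consistent(formulas, atoms):
    """Brute-force satisfiability: is there a truth assignment to
    `atoms` that makes every formula (a predicate on assignments) true?"""
    return any(all(f(dict(zip(atoms, vals))) for f in formulas)
               for vals in product([True, False], repeat=len(atoms)))

def revise(base, new, atoms):
    """Toy revision: retain a maximal subset of `base` consistent with
    `new`, then add `new`. (Keeps the first maximal subset found, a
    simplification of partial-meet-style constructions.)"""
    for k in range(len(base), -1, -1):
        for subset in combinations(base, k):
            if consistent(list(subset) + [new], atoms):
                return list(subset) + [new]
    return [new]

atoms = ["bird", "flies"]
base = [lambda v: v["bird"],                      # bird
        lambda v: (not v["bird"]) or v["flies"]]  # bird implies flies
new = lambda v: not v["flies"]                    # not flies
revised = revise(base, new, atoms)  # drops one old belief, stays consistent
```

Simply adding the new formula would make the base unsatisfiable; the revision instead gives up one of the two old beliefs to restore consistency.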

Relevance:

70.00%

Publisher:

Abstract:

Reasoning and change over inconsistent knowledge bases (KBs) is of utmost relevance in areas like medicine and law. Argumentation may bring the possibility to cope with both problems. First, by constructing an argumentation framework (AF) from the inconsistent KB, we can decide whether to accept or reject a certain claim through the interplay among arguments and counterarguments. Second, by handling the dynamics of arguments in the AF, we can deal with the dynamics of knowledge of the underlying inconsistent KB. The dynamics of arguments has recently attracted attention, and although some approaches have been proposed, a full axiomatization within the theory of belief revision was still missing. A revision arises when we want the argumentation semantics to accept an argument. Argument Theory Change (ATC) encloses the revision operators that modify the AF by analyzing dialectical trees (arguments as nodes and attacks as edges) as the adopted argumentation semantics. In this article, we present a simple approach to ATC based on propositional KBs. This makes it possible to manage change of inconsistent KBs by relying upon classical belief revision, although, unlike classical revision, consistency restoration of the KB is avoided. Subsequently, a set of rationality postulates adapted to argumentation is given, and finally the proposed model of change is related to the postulates through the corresponding representation theorem. Though we focus on propositional logic, the results can easily be extended to more expressive formalisms such as first-order logic and description logics, in order to handle the evolution of ontologies.
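For illustration, acceptance in an abstract AF under the grounded semantics can be computed as the least fixpoint of the framework's characteristic function. The sketch below (my own toy example, not the ATC machinery of the article) iterates F(S) = "the arguments all of whose attackers are counter-attacked by S":

```python
def grounded_extension(args, attacks):
    """Least fixpoint of the characteristic function F, starting
    from the empty set. `attacks` is a set of (attacker, target) pairs."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in args}
    s = set()
    while True:
        defended = {a for a in args
                    if all(any((c, b) in attacks for c in s)
                           for b in attackers[a])}
        if defended == s:      # fixpoint reached
            return s
        s = defended

# A attacks B, B attacks C: A is unattacked, and A defends C against B.
ext = grounded_extension({"A", "B", "C"}, {("A", "B"), ("B", "C")})
```

Because F is monotone, the iteration always terminates; here the accepted arguments are A and C, while B is rejected.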

Relevance:

60.00%

Publisher:

Abstract:

Environmental management is a complex task. The amount and heterogeneity of the data needed for an environmental decision-making tool are overwhelming without adequate database systems and innovative methodologies. As far as data management, data interaction and data processing are concerned, we propose the use of a Geographical Information System (GIS), whilst for decision making we suggest a Multi-Agent System (MAS) architecture. With the adoption of a GIS we hope to provide a complementary coexistence between heterogeneous data sets, a correct data structure, good storage capacity and a friendly user interface. By choosing a distributed architecture such as a Multi-Agent System, where each agent is a semi-autonomous Expert System with the necessary skills to cooperate with the others in order to solve a given task, we hope to ensure a dynamic problem decomposition and to achieve better performance compared with standard monolithic architectures. Finally, in view of the partial, imprecise, and ever-changing character of the information available for decision making, Belief Revision capabilities are added to the system. Our aim is to present and discuss an intelligent environmental management system capable of suggesting the most appropriate land-use actions based on the existing spatial and non-spatial constraints.

Relevance:

60.00%

Publisher:

Abstract:

This article discusses the development of an Intelligent Distributed Environmental Decision Support System, built upon the association of a Multi-agent Belief Revision System with a Geographical Information System (GIS). The inherently multidisciplinary nature of the expertise involved in environmental management, the need to define clear policies that allow the synthesis of divergent perspectives, their systematic application, and the reduction of costs and time that results from this integration are the main reasons motivating this project. This paper is organised in two parts: in the first part we present and discuss the Distributed Belief Revision Test-bed (DiBeRT) that we developed; in the second part we analyse its application to the environmental decision support domain, with special emphasis on the interface with a GIS.

Relevance:

60.00%

Publisher:

Abstract:

In a real-world multiagent system, where the agents are faced with partial, incomplete and intrinsically dynamic knowledge, conflicts are inevitable. Frequently, different agents have goals or beliefs that cannot hold simultaneously. Conflict resolution methodologies have to be adopted to overcome such undesirable occurrences. In this paper we investigate the application of distributed belief revision techniques as the support for conflict resolution in the analysis of the validity of the candidate beams to be produced in the CERN particle accelerators. This CERN multiagent system contains a higher-hierarchy agent, the Specialist agent, which makes use of meta-knowledge (on how the conflicting beliefs have been produced by the other agents) in order to detect which beliefs should be abandoned. Upon solving a conflict, the Specialist instructs the involved agents to revise their beliefs accordingly. Conflicts in the problem domain are mapped into conflicting beliefs of the distributed belief revision system, where they can be handled by proven formal methods. This technique builds on well-established concepts and combines them in a new way to solve important problems. We find this approach to be generally applicable in several domains.

Relevance:

60.00%

Publisher:

Abstract:

In this dissertation we present a model for iterating Katsuno and Mendelzon's update, inspired by the developments on iteration in AGM belief revision. We adapt Darwiche and Pearl's postulates of iterated belief revision to update (as well as the independence postulate proposed in [BM06, JT07]) and present two families of such operators, based on natural [Bou96] and lexicographic [Nay94a, NPP03] revision. In all cases, we provide a possible-worlds semantics for the models.
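A possible-worlds reading of the two base operators named above can be sketched as follows. An epistemic state is a list of plausibility ranks (most plausible first); the representation and example worlds are my own toy illustration, not the dissertation's update operators:

```python
def lexicographic_revise(ranks, phi):
    """All phi-worlds become strictly more plausible than all
    non-phi-worlds, each group keeping its relative order [Nay94a]."""
    phi_part = [{w for w in r if phi(w)} for r in ranks]
    neg_part = [{w for w in r if not phi(w)} for r in ranks]
    return [r for r in phi_part + neg_part if r]

def natural_revise(ranks, phi):
    """Only the most plausible phi-worlds are promoted to the top
    rank; all other worlds keep their positions [Bou96]."""
    for i, rank in enumerate(ranks):
        best = {w for w in rank if phi(w)}
        if best:
            rest = [r - best if j == i else r for j, r in enumerate(ranks)]
            return [best] + [r for r in rest if r]
    return ranks  # no phi-world exists: the state is left unchanged

# Worlds are (p, q) truth-value pairs; revise by phi = "p is true".
ranks = [{(False, False)}, {(False, True), (True, False)}, {(True, True)}]
phi = lambda w: w[0]
```

The two operators agree on what is believed after one step (the new top rank) but order the remaining worlds differently, which is exactly what matters for the next, iterated revision.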

Relevance:

60.00%

Publisher:

Abstract:

The AGM theory of belief revision provides a formal framework to represent the dynamics of epistemic states. In this framework, the beliefs of the agent are usually represented as logical formulas, while the change operations are constrained by rationality postulates. In the original proposal, the logic underlying the reasoning was assumed, among other properties, to be supraclassical. In this paper, we present some of the existing work on adapting the AGM theory to non-classical logics and discuss the interconnections between the approaches and what is still missing from each of them.

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, the development of intelligent agents aims to be more refined, using improved architectures and reasoning mechanisms. Revising the beliefs of an agent is also an important subject, given the consistency that agents should maintain over their knowledge. In this work we propose deliberative and argumentative agents built with Lego Mindstorms robots: Argumentative NXT BDI-like Agents. These agents are built using the notions of the BDI model and are capable of reasoning with the DeLP formalism. They update their knowledge base with their perceptions and revise it when necessary. Two variations are presented: the Single Argumentative NXT BDI-like Agent and the MAS Argumentative NXT BDI-like Agent.

Relevance:

60.00%

Publisher:

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of the language input, which can be characterized as a mapping of the utterances in a given input C to a set S of linguistically motivated structures, with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S (1)

The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L' (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. Thus, one of the central elements of this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule.

The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.

To illustrate four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision.

The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. The work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. The Learn-Alpha design rule is then postulated. The second chapter outlines the theory underlying Learn-Alpha and introduces all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
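The acquire-and-revise loop behind schema (2) might be sketched as follows. The function name, thresholds, and data shapes here are illustrative assumptions of my own, not the ANALYZE-LEARN-REDUCE implementation:

```python
from collections import Counter, defaultdict

def acquire(lexicon, parsed_sentences, threshold=0.8, min_obs=3):
    """Toy instance of G + L + S -> L': update valency hypotheses for
    verbs from observed parses, and revise (drop) a stored hypothesis
    once counterevidence dominates. `parsed_sentences` yields
    (verb, observed_valency) pairs extracted from the structures S."""
    obs = defaultdict(Counter)
    for verb, valency in parsed_sentences:
        obs[verb][valency] += 1
    new_lex = dict(lexicon)
    for verb, counts in obs.items():
        total = sum(counts.values())
        if total < min_obs:
            continue  # the system has not yet seen 'enough' input
        best, n = counts.most_common(1)[0]
        if n / total >= threshold:
            new_lex[verb] = best          # acquire or confirm an entry
        elif verb in lexicon and counts[lexicon[verb]] / total < 1 - threshold:
            new_lex.pop(verb, None)       # revise a falsely acquired entry
    return new_lex
```

For example, a lexicon that wrongly lists "give" as plainly transitive would, after enough ditransitive observations, have that entry replaced rather than merely supplemented, which is the revision requirement Learn-Alpha imposes.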

Relevance:

60.00%

Publisher:

Abstract:

Frank Ramsey (1931) established certain conditions that should be satisfied when evaluating conditional propositions, known today as the Ramsey Test (RT). This paper shows that the theories of counterfactual conditionals of Chisholm, Stalnaker and D. Lewis satisfy the RT, and examines the incompatibility of the RT with the AGM theory of belief revision. The last section analyses the behaviour of the RT in the proposal of G. Crocco and L. Fariñas del Cerro, based on a generalization of the sequent calculus that introduces the novelty of auxiliary sequences whose notion of consequence is non-monotonic.

Relevance:

30.00%

Publisher:

Abstract:

We study induced aggregation operators. The analysis begins with a review of some basic concepts, such as the induced ordered weighted averaging (IOWA) operator and the induced ordered weighted geometric (IOWG) operator. We then analyze the problem of decision making with the Dempster-Shafer theory of evidence and suggest the use of induced aggregation operators in this setting. We focus on the aggregation step and examine some of its main properties, including the distinction between descending and ascending orders and different families of induced operators. Finally, we present an illustrative example comparing the results obtained with different types of aggregation operators.
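A minimal sketch of the IOWA operator discussed above (the example data and weights are my own):

```python
def iowa(pairs, weights):
    """Induced ordered weighted average. Each pair is
    (order-inducing value, argument value): the arguments are
    reordered by the inducing value in descending order, then
    combined with the positional weights, which must sum to 1."""
    assert len(pairs) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(pairs, key=lambda p: p[0], reverse=True)
    return sum(w * a for w, (_, a) in zip(weights, ordered))

# Inducing values 3 > 2 > 1 reorder the arguments as 10, 20, 40,
# so the result is 0.5*10 + 0.3*20 + 0.2*40.
result = iowa([(3, 10), (1, 40), (2, 20)], [0.5, 0.3, 0.2])
```

With uniform weights the operator reduces to the plain arithmetic mean of the arguments, and with the weight vector (1, 0, ..., 0) it returns the argument whose inducing value is highest; sorting ascending instead of descending gives the ascending-order variant mentioned in the abstract.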