40 results for army bases


Relevance: 20.00%

Abstract:

In this paper we investigate the relationship between two prioritized knowledge bases by measuring both the conflict and the agreement between them. First of all, a quantity of conflict and two quantities of agreement are defined. The former is shown to be a generalization of the well-known Dalal distance, which is the Hamming distance between two interpretations. The latter are, respectively, a quantity of strong agreement, which measures the amount of information on which two belief bases “totally” agree, and a quantity of weak agreement, which measures the amount of information that is believed by one source but is unknown to the other. All three quantity measures are based on the weighted prime implicant, which represents beliefs in a prioritized belief base. We then define a degree of conflict and two degrees of agreement based on our quantity of conflict and quantities of agreement. We also consider the impact of these measures on belief merging and information source ordering.
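
Since the Dalal distance anchors the conflict measure above, a minimal sketch may be useful: it is just the Hamming distance between two interpretations, i.e. the number of atoms on which they assign different truth values. The dictionary encoding of interpretations below is an illustrative assumption, not the paper's representation.

```python
def dalal_distance(w1: dict, w2: dict) -> int:
    """Hamming (Dalal) distance: the number of propositional atoms
    on which two interpretations assign different truth values."""
    assert w1.keys() == w2.keys(), "interpretations must share a vocabulary"
    return sum(1 for atom in w1 if w1[atom] != w2[atom])

# w1 and w2 disagree only on q, so the distance is 1.
w1 = {"p": True, "q": True, "r": False}
w2 = {"p": True, "q": False, "r": False}
print(dalal_distance(w1, w2))  # -> 1
```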

Relevance: 20.00%

Abstract:

The main result of the note is a characterization of 1-amenability of Banach algebras of approximable operators for a class of Banach spaces with 1-unconditional bases in terms of a new basis property. It is also shown that amenability and symmetric amenability are equivalent concepts for Banach algebras of approximable operators, and that a type of Banach space long suspected to lack property A in fact has the property. Some further ideas on the problem of whether or not amenability (in this setting) implies property A are discussed.

Relevance: 20.00%

Abstract:

We present a first-principles molecular dynamics study of an excess electron in condensed phase models of solvated DNA bases. Calculations on increasingly large microsolvated clusters taken from liquid phase simulations show that adiabatic electron affinities increase systematically upon solvation, as for optimized gas-phase geometries. Dynamical simulations after vertical attachment indicate that the excess electron, which is initially found delocalized, localizes around the nucleobases within a 15 fs time scale. This transition requires small rearrangements in the geometry of the bases.

Relevance: 20.00%

Abstract:

Measuring the degree of inconsistency of a belief base is an important issue in many real-world applications. It has been increasingly recognized that deriving syntax-sensitive inconsistency measures for a belief base from its minimal inconsistent subsets is a natural way forward. Most of the current proposals along this line do not take into account the impact of the size of each minimal inconsistent subset. However, as illustrated by the well-known Lottery Paradox, as the size of a minimal inconsistent subset increases, the degree of its inconsistency decreases. Another gap in current studies in this area concerns the role of free formulas of a belief base in measuring the degree of inconsistency; this role has not yet been well characterized. Adding free formulas to a belief base enlarges the set of consistent subsets of that base. However, consistent subsets of a belief base also have an impact on syntax-sensitive normalized measures of the degree of inconsistency: each consistent subset can be considered a distinctive plausible perspective reflected by that belief base, whilst each minimal inconsistent subset projects a distinctive view of the inconsistency. To address these two issues, we propose a normalized framework for measuring the degree of inconsistency of a belief base which unifies the impact of both consistent subsets and minimal inconsistent subsets. We also show that this normalized framework satisfies all the properties deemed necessary by common consent to characterize an intuitively satisfactory measure of the degree of inconsistency for belief bases. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of the normalized framework.
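
For orientation, a well-known measure in this syntax-sensitive family already reflects the size intuition: score each minimal inconsistent subset by the reciprocal of its size, so that large, Lottery-Paradox-style subsets contribute little. This is a minimal sketch of that baseline idea, not the normalized framework proposed in the paper; MISes are assumed to be given as sets of formula identifiers.

```python
from typing import FrozenSet, Iterable

def mis_inconsistency(mises: Iterable[FrozenSet[str]]) -> float:
    """Sum 1/|M| over all minimal inconsistent subsets M: a small MIS
    (a sharp contradiction) contributes a lot, a large MIS only a little."""
    return sum(1.0 / len(m) for m in mises)

# A two-formula contradiction counts 0.5; five jointly inconsistent but
# individually plausible formulas (Lottery-Paradox style) count only 0.2.
print(mis_inconsistency([frozenset({"a", "not_a"}),
                         frozenset({"b1", "b2", "b3", "b4", "b5"})]))  # -> 0.7
```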

Relevance: 20.00%

Abstract:

It is increasingly recognized that identifying the degree of blame or responsibility of each formula for the inconsistency of a knowledge base (i.e. a set of formulas) is useful for making rational decisions to resolve inconsistency in that knowledge base. Most current techniques for measuring the blame of each formula with regard to an inconsistent knowledge base focus on classical knowledge bases only. Proposals for measuring the blame of formulas with regard to an inconsistent prioritized knowledge base have not yet been given much consideration. However, the notion of priority is important in inconsistency-tolerant reasoning. This article investigates this issue and presents a family of measurements for the degree of blame of each formula in an inconsistent prioritized knowledge base, using the minimal inconsistent subsets of that knowledge base. First of all, we present a set of intuitive postulates as general criteria to characterize rational measurements of the blame of formulas in an inconsistent prioritized knowledge base. Then we present a family of measurements for the blame of each formula in an inconsistent prioritized knowledge base under the guidance of the principle of proportionality, one of the intuitive postulates. We also demonstrate that each of these measurements possesses the properties it ought to have. Finally, we use a simple but explanatory example in requirements engineering to illustrate the application of these measurements. Compared with related work, the postulates presented in this article consider the special characteristics of minimal inconsistent subsets as well as the priority levels of formulas. This makes them more appropriate for characterizing inconsistency measures defined from minimal inconsistent subsets for prioritized knowledge bases as well as classical knowledge bases. Correspondingly, the measures guided by these postulates can intuitively capture the inconsistency of prioritized knowledge bases.
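
One natural reading of the principle of proportionality, sketched below purely as a hypothetical illustration (the article's actual family of measurements may differ): share the blame for each minimal inconsistent subset among its members in proportion to their weights, where a higher weight is assumed here to mean greater culpability (e.g. lower priority).

```python
from typing import Dict, FrozenSet, Iterable

def blame(weights: Dict[str, float],
          mises: Iterable[FrozenSet[str]]) -> Dict[str, float]:
    """Hypothetical proportional blame: each MIS distributes one unit of
    blame among its members, proportionally to their weights, and a
    formula's total blame is the sum of its shares across all MISes."""
    scores = {f: 0.0 for f in weights}
    for m in mises:
        total = sum(weights[f] for f in m)
        for f in m:
            scores[f] += weights[f] / total
    return scores

# Formula a occurs in both MISes, so it accumulates blame from each;
# with equal weights, each MIS is split evenly among its members.
w = {"a": 1.0, "b": 1.0, "c": 1.0}
print(blame(w, [frozenset({"a", "b"}), frozenset({"a", "c"})]))
# -> {'a': 1.0, 'b': 0.5, 'c': 0.5}
```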

Relevance: 20.00%

Abstract:

Belief merging is an important but difficult problem in Artificial Intelligence, especially when sources of information are pervaded with uncertainty. Many merging operators have been proposed to deal with this problem in possibilistic logic, a weighted logic which is powerful for handling inconsistency and dealing with uncertainty. They often result in a possibilistic knowledge base, which is a set of weighted formulas. Although possibilistic logic is inconsistency tolerant, it suffers from the well-known "drowning effect". Therefore, we may still want to obtain a consistent possibilistic knowledge base as the result of merging. In such a case, we argue that it is not always necessary to keep weighted information after merging. In this paper, we define a merging operator that maps a set of possibilistic knowledge bases and a formula representing the integrity constraints to a classical knowledge base by using lexicographic ordering. We show that it satisfies nine postulates that generalize the basic postulates for propositional merging given in [11]. These postulates capture the principle of minimal change in some sense. We then provide an algorithm for generating the resulting knowledge base of our merging operator. Finally, we discuss the compatibility of our merging operator with propositional merging and establish the advantage of our merging operator over existing semantic merging operators in the propositional case.
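
As a rough illustration of selecting a classical base by lexicographic ordering over priority levels, here is a brute-force sketch (not the paper's algorithm): among all consistent subsets, prefer the one keeping the most formulas at the highest level, breaking ties level by level. A toy syntactic consistency test stands in for real entailment.

```python
from itertools import chain, combinations
from typing import Callable, FrozenSet, List

def lex_merge(strata: List[FrozenSet[str]],
              consistent: Callable[[FrozenSet[str]], bool]) -> FrozenSet[str]:
    """Return a consistent subset of the union of all strata whose
    per-level cardinality vector is lexicographically maximal.
    Exponential enumeration, for illustration only."""
    universe = sorted(set().union(*strata))
    subsets = chain.from_iterable(combinations(universe, r)
                                  for r in range(len(universe) + 1))
    best, best_key = frozenset(), None
    for sub in map(frozenset, subsets):
        if not consistent(sub):
            continue
        key = tuple(len(sub & level) for level in strata)
        if best_key is None or key > best_key:
            best, best_key = sub, key
    return best

# Toy consistency: a set is inconsistent iff it contains both an atom
# "x" and its negation "~x".
def toy_consistent(s: FrozenSet[str]) -> bool:
    return not any(f.startswith("~") and f[1:] in s for f in s)

# Level 1 (most certain) keeps "p", so "~p" at level 2 is dropped.
print(sorted(lex_merge([frozenset({"p", "q"}), frozenset({"~p", "r"})],
                       toy_consistent)))  # -> ['p', 'q', 'r']
```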

Relevance: 20.00%

Abstract:

There is extensive theoretical work on measures of inconsistency for arbitrary formulae in knowledge bases. Many of these are defined in terms of the set of minimal inconsistent subsets (MISes) of the base. However, few have been implemented or experimentally evaluated to support their viability, since computing all MISes is intractable in the worst case. Fortunately, recent work on a related problem of minimal unsatisfiable sets of clauses (MUSes) offers a viable solution in many cases. In this paper, we begin by drawing connections between MISes and MUSes through algorithms based on a MUS generalization approach and a new optimized MUS transformation approach to finding MISes. We implement these algorithms, along with a selection of existing measures for flat and stratified knowledge bases, in a tool called mimus. We then carry out an extensive experimental evaluation of mimus using randomly generated arbitrary knowledge bases. We conclude that these measures are viable for many large and complex random instances. Moreover, they represent a practical and intuitive tool for inconsistency handling.
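
For contrast with the optimized algorithms behind mimus, the naive baseline can be stated in a few lines: test every subset, and report those that are inconsistent but become consistent when any single formula is removed (in a monotonic logic this implies every proper subset is consistent). The sketch below uses the same toy syntactic consistency test as above and is worst-case exponential, which is exactly what motivates the MUS-based approach.

```python
from itertools import combinations
from typing import Callable, FrozenSet, List, Set

def minimal_inconsistent_subsets(
        base: Set[str],
        consistent: Callable[[FrozenSet[str]], bool]) -> List[FrozenSet[str]]:
    """Naive MIS enumeration: M is minimal inconsistent iff M is
    inconsistent and M minus any single formula is consistent."""
    mises = []
    items = sorted(base)
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            m = frozenset(combo)
            if not consistent(m) and all(consistent(m - {f}) for f in m):
                mises.append(m)
    return mises

# Toy consistency: "x" together with "~x" is inconsistent.
def toy_consistent(s: FrozenSet[str]) -> bool:
    return not any(f.startswith("~") and f[1:] in s for f in s)

print([sorted(m) for m in
       minimal_inconsistent_subsets({"p", "~p", "q", "~q", "r"}, toy_consistent)])
# -> [['p', '~p'], ['q', '~q']]
```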

Relevance: 20.00%

Abstract:

Knowledge is an important component in many intelligent systems. Since items of knowledge in a knowledge base can be conflicting, especially if there are multiple sources contributing to the knowledge in this base, significant research efforts have been made on developing inconsistency measures for knowledge bases and on developing merging approaches. Most of these efforts start with flat knowledge bases. However, in many real-world applications, items of knowledge are not perceived with equal importance; rather, weights (which can be used to indicate importance or priority) are associated with items of knowledge. Therefore, measuring the inconsistency of a knowledge base with weighted formulae, as well as merging such bases, is an important but difficult task. In this paper, we derive a numerical characteristic function from each knowledge base with weighted formulae, based on the Dempster-Shafer theory of evidence. Using these functions, we are able to measure the inconsistency of the knowledge base in a convenient and rational way, and are able to merge multiple knowledge bases with weighted formulae, even if knowledge in these bases may be inconsistent. Furthermore, by examining whether multiple knowledge bases are dependent or independent, they can be combined in different ways using their characteristic functions, which cannot be handled (or at least have never been considered) in classical knowledge-base merging approaches in the literature.
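
The combination step alluded to at the end is, for independent sources in standard Dempster-Shafer theory, handled by Dempster's rule of combination. The sketch below shows only this generic, well-known rule, not the paper's specific characteristic functions or its treatment of dependent sources.

```python
from itertools import product
from typing import Dict, FrozenSet

Mass = Dict[FrozenSet[str], float]

def dempster_combine(m1: Mass, m2: Mass) -> Mass:
    """Dempster's rule for two independent mass functions on the same
    frame: multiply the masses of intersecting focal elements, discard
    the mass K falling on the empty set, and renormalize by 1 - K."""
    combined: Mass = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two sources over the frame {x, y}; K = 0.6 * 0.5 = 0.3, and e.g.
# the combined mass on {x} is 0.6 * 0.5 / (1 - 0.3) ~ 0.429.
m1 = {frozenset({"x"}): 0.6, frozenset({"x", "y"}): 0.4}
m2 = {frozenset({"y"}): 0.5, frozenset({"x", "y"}): 0.5}
out = dempster_combine(m1, m2)
print({tuple(sorted(s)): round(v, 3) for s, v in out.items()})
# -> {('x',): 0.429, ('y',): 0.286, ('x', 'y'): 0.286}
```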