22 results for refinement calculus
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidences of formulae which are not axioms, and bounds for the probabilities of such formulae can then be obtained. In practice an assignment of probabilities directly to axioms may be given, and it is then necessary to find an assignment of incidences that reproduces these probabilities. We show that this task of assigning incidences can be viewed as a tree-searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of these methods has been developed. The two approaches are compared for efficiency and the significance of their results is discussed. Finally, we discuss a new proposal for applying techniques from linear programming to incidence calculus.
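As a rough illustration of the tree-search view, the sketch below (in Python rather than the authors' Prolog) assigns each axiom an incidence, a subset of equiprobable possible worlds, whose relative size matches the axiom's target probability, and backtracks when a subset constraint between incidences is violated. The worlds, target probabilities and implication constraint are invented for the example; this is not the paper's implementation.

```python
from itertools import combinations

def assign_incidences(worlds, targets, implications):
    """Depth-first search for an incidence assignment.

    worlds       -- list of possible worlds, assumed equiprobable
    targets      -- dict axiom -> target probability
    implications -- pairs (a, b) meaning axiom a implies axiom b,
                    so i(a) must be a subset of i(b)

    Returns a dict axiom -> frozenset of worlds whose relative size
    matches the target probability, or None if the search fails.
    """
    n = len(worlds)
    axioms = list(targets)

    def consistent(assignment):
        return all(assignment[a] <= assignment[b]
                   for a, b in implications
                   if a in assignment and b in assignment)

    def dfs(i, assignment):
        if i == len(axioms):
            return assignment
        axiom = axioms[i]
        k = round(targets[axiom] * n)           # incidence size matching p
        for subset in combinations(worlds, k):  # branch point of the search
            assignment[axiom] = frozenset(subset)
            if consistent(assignment) and dfs(i + 1, assignment):
                return assignment
            del assignment[axiom]               # backtrack
        return None

    return dfs(0, {})

# Toy run: four equiprobable worlds, p(rain)=0.5, p(wet)=0.75, rain -> wet.
print(assign_incidences([1, 2, 3, 4],
                        {"rain": 0.5, "wet": 0.75},
                        [("rain", "wet")]))
```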
Abstract:
Dealing with uncertainty in intelligent systems has attracted considerable attention in the AI community, and quite a few techniques have been proposed. Among them, the Dempster-Shafer theory of evidence (DS theory) has been widely adopted. In DS theory, Dempster's combination rule plays a major role. However, it has been pointed out that the application domains of the rule are rather limited and that applying the theory sometimes gives unexpected results. We have previously explored the problems with Dempster's combination rule and proposed an alternative combination mechanism in generalized incidence calculus. In this paper we give a comprehensive comparison between generalized incidence calculus and the Dempster-Shafer theory of evidence. We first prove that the two theories have the same ability to represent evidence and to combine DS-independent evidence. We then show that the new approach can deal with some dependent situations where Dempster's combination rule cannot. Various examples in the paper illustrate ways of using generalized incidence calculus in expert systems.
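For reference, Dempster's combination rule pools two independent mass functions by intersecting their focal elements and renormalising away the conflicting mass K. The sketch below implements the standard rule only, not the alternative combination mechanism of generalized incidence calculus that the paper proposes; the frame of discernment and the masses are illustrative.

```python
from collections import defaultdict

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 -- dicts mapping frozenset (focal element) -> mass, each
              summing to 1 over non-empty subsets of the frame.
    Raises ValueError when the evidence is totally conflicting (K = 1).
    """
    combined = defaultdict(float)
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            a = b & c
            if a:
                combined[a] += mb * mc
            else:
                conflict += mb * mc          # mass lost to conflict (K)
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence, rule undefined")
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Toy frame {rain, sun}: two independent sources of evidence.
rain, sun = frozenset({"rain"}), frozenset({"sun"})
both = rain | sun
m1 = {rain: 0.6, both: 0.4}
m2 = {sun: 0.3, both: 0.7}
print(dempster_combine(m1, m2))
```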
Abstract:
This paper discusses the relations between extended incidence calculus and assumption-based truth maintenance systems (ATMSs). We first prove that managing labels for statements (nodes) in an ATMS is equivalent to producing incidence sets of these statements in extended incidence calculus. We then demonstrate that the justification set for a node is functionally equivalent to the implication relation set for the same node in extended incidence calculus. As a consequence, extended incidence calculus can provide justifications for an ATMS, because implication relation sets are discovered by the system automatically. We also show that extended incidence calculus provides a theoretical basis for constructing a probabilistic ATMS by associating suitable probability distributions with the assumptions. In this way, we can not only produce labels for all nodes in the system, but also calculate the probability of any such node. The nogood environments can also be obtained automatically. Therefore, extended incidence calculus and the ATMS are equivalent in carrying out inferences at both the symbolic level and the numerical level. This extends a result due to Laskey and Lehner.
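The numerical-level claim can be made concrete: if each assumption carries an independent probability, a node's probability is the probability that at least one environment in its ATMS-style label holds, which inclusion-exclusion computes exactly. The sketch below is a minimal illustration under that independence assumption, with an invented label and probabilities; it is not the extended incidence calculus machinery itself.

```python
from itertools import combinations

def node_probability(label, p_assumption):
    """Probability that a node holds, given its ATMS-style label.

    label        -- iterable of frozensets; each is an environment
                    (a set of assumptions) supporting the node
    p_assumption -- dict assumption -> probability, assumptions assumed
                    mutually independent

    The node holds when at least one environment holds, so we compute
    P(union of environments) by inclusion-exclusion.
    """
    envs = list(label)

    def p_env(env):                      # P(all assumptions in env hold)
        prob = 1.0
        for a in env:
            prob *= p_assumption[a]
        return prob

    total = 0.0
    for r in range(1, len(envs) + 1):
        sign = (-1) ** (r + 1)
        for combo in combinations(envs, r):
            union = frozenset().union(*combo)   # joint environment
            total += sign * p_env(union)
    return total

# Node supported by {A1, A2} or {A3}; independent assumption probabilities.
label = [frozenset({"A1", "A2"}), frozenset({"A3"})]
print(node_probability(label, {"A1": 0.9, "A2": 0.8, "A3": 0.5}))  # 0.86
```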
Abstract:
In the identification of complex dynamic systems using fuzzy neural networks, one of the main issues is the curse of dimensionality, which makes it difficult to retain a large number of system inputs or to consider a large number of fuzzy sets. Moreover, because of correlations, not all possible network inputs or regression vectors are necessary, and adding them simply increases the model complexity and degrades the network's generalisation performance. In this paper, the problem is addressed by first proposing a fast algorithm for the selection of network terms, and then introducing a refinement procedure to tackle the correlation issue. Simulation results show the efficacy of the method.
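The abstract does not spell out the selection algorithm, but a generic stand-in conveys the idea: greedily add the candidate regression term that most reduces the least-squares residual. The data, function name and stopping criterion below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def forward_select(candidates, y, n_terms):
    """Greedy forward selection of regression terms.

    candidates -- (N, M) matrix whose columns are candidate regressors
    y          -- (N,) target vector
    n_terms    -- number of terms to keep

    At each step, add the candidate column that yields the smallest
    least-squares residual when fitted together with the terms chosen
    so far. A simple stand-in for a fast term-selection algorithm.
    """
    chosen = []
    remaining = list(range(candidates.shape[1]))
    for _ in range(n_terms):
        best, best_err = None, np.inf
        for j in remaining:
            X = candidates[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            err = np.sum((y - X @ coef) ** 2)   # residual sum of squares
            if err < best_err:
                best, best_err = j, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy data: y depends on columns 0 and 2 only; column 1 is irrelevant.
rng = np.random.default_rng(0)
C = rng.standard_normal((100, 3))
y = 2.0 * C[:, 0] - 1.5 * C[:, 2] + 0.01 * rng.standard_normal(100)
print(forward_select(C, y, 2))   # expected: [0, 2] in some order
```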
Abstract:
The relationship between changes in retinal vessel morphology and the onset and progression of diseases such as diabetes, hypertension and retinopathy of prematurity (ROP) has been the subject of several large-scale clinical studies. However, the difficulty of quantifying changes in retinal vessels in a sufficiently fast, accurate and repeatable manner has restricted the application of the insights gleaned from these studies to clinical practice. This paper presents a novel algorithm for the efficient detection and measurement of retinal vessels, which is general enough to be applied to both low- and high-resolution fundus photographs and fluorescein angiograms after the adjustment of only a few intuitive parameters. Firstly, we describe the simple vessel segmentation strategy, formulated in the language of wavelets, that is used for fast vessel detection. When validated using a publicly available database of retinal images, this segmentation achieves a true positive rate of 70.27%, a false positive rate of 2.83%, and an accuracy score of 0.9371. Vessel edges are then more precisely localised using image profiles computed perpendicularly across a spline fit of each detected vessel centreline, so that both local and global changes in vessel diameter can be readily quantified. Using a second image database, we show that the diameters output by our algorithm display good agreement with the manual measurements made by three independent observers. We conclude that the improved speed and generality offered by our algorithm are achieved without sacrificing accuracy. The algorithm is implemented in MATLAB along with a graphical user interface, and we have made the source code freely available.
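The diameter-measurement step can be illustrated in miniature: given one intensity profile sampled perpendicular to the fitted centreline, estimate the vessel width at half its depth with sub-pixel interpolation. This is a simplified Python sketch (the paper's implementation is in MATLAB), and the half-depth threshold and Gaussian profile model are assumptions, not the authors' edge localisation method.

```python
import numpy as np

def profile_diameter(profile, spacing=1.0):
    """Estimate vessel diameter from one perpendicular intensity profile.

    profile -- 1-D array sampled across the vessel (dark vessel on a
               brighter background, as in a fundus photograph)
    spacing -- distance between samples, e.g. micrometres per pixel

    Measures the full width at half the vessel's depth, interpolating
    linearly at the two half-depth crossings.
    """
    background = (profile[0] + profile[-1]) / 2.0
    trough = profile.min()
    half = (background + trough) / 2.0           # half-depth threshold
    below = np.nonzero(profile < half)[0]
    if below.size == 0:
        return 0.0
    l, r = below[0], below[-1]
    if l == 0 or r == profile.size - 1:
        return (r - l) * spacing                 # too narrow to interpolate
    # Linear interpolation for sub-pixel crossings on each side.
    left = l - 1 + (profile[l - 1] - half) / (profile[l - 1] - profile[l])
    right = r + (profile[r] - half) / (profile[r] - profile[r + 1])
    return (right - left) * spacing

# Synthetic Gaussian-shaped cross-section; FWHM = 2.355 * sigma = 4.71.
x = np.linspace(-10, 10, 201)
profile = 1.0 - 0.8 * np.exp(-x**2 / (2 * 2.0**2))
print(profile_diameter(profile, spacing=0.1))   # ~4.71
```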