871 results for probabilistic refinement calculus


Relevance:

80.00%

Publisher:

Abstract:

We provide an abstract command language for real-time programs and outline how a partial correctness semantics can be used to compute execution times. The notions of a timed command, refinement of a timed command, the command traversal condition, and the worst-case and best-case execution time of a command are formally introduced and investigated with the help of an underlying weakest liberal precondition semantics. The central result is a theory for the computation of worst-case and best-case execution times from the underlying semantics based on supremum and infimum calculations. The framework is applied to the analysis of a message transmitter program and its implementation.
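For intuition, the worst- and best-case times can be phrased as a supremum and an infimum over the timed behaviours of a command (a sketch in assumed notation, not the paper's exact definitions), where $\tau$ is a clock variable updated by the timed command $C$:

$$\mathit{wcet}(C) = \sup\{\,\tau' - \tau \mid (\sigma,\tau) \xrightarrow{C} (\sigma',\tau')\,\}, \qquad \mathit{bcet}(C) = \inf\{\,\tau' - \tau \mid (\sigma,\tau) \xrightarrow{C} (\sigma',\tau')\,\}$$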

Relevance:

80.00%

Publisher:

Abstract:

In this paper we discuss the refinement of exceptions. We extend the Guarded Command Language, normally used in the refinement calculus, with a simple exception-handling statement, which we model using King and Morgan's exit statement (1995). We derive some variants of King and Morgan's refinement laws for their exit statement, and illustrate the approach with an example of a refinement of a simple program.
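For flavour, the simplest laws in this style capture that code following an exit is dead and that an enclosing block catches the exit (a sketch in assumed notation, not necessarily the paper's exact statements), where $[\,\cdot\,]$ marks an exception-handling block:

$$\mathbf{exit};\ S \;=\; \mathbf{exit}, \qquad \big[\ \mathbf{exit}\ \big] \;=\; \mathbf{skip}$$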

Relevance:

40.00%

Publisher:

Abstract:

This paper discusses the relations between extended incidence calculus and assumption-based truth maintenance systems (ATMSs). We first prove that managing labels for statements (nodes) in an ATMS is equivalent to producing incidence sets of these statements in extended incidence calculus. We then demonstrate that the justification set for a node is functionally equivalent to the implication relation set for the same node in extended incidence calculus. As a consequence, extended incidence calculus can provide justifications for an ATMS, because implication relation sets are discovered by the system automatically. We also show that extended incidence calculus provides a theoretical basis for constructing a probabilistic ATMS by associating proper probability distributions with assumptions. In this way, we can not only produce labels for all nodes in the system, but also calculate the probability of any such node. The nogood environments can also be obtained automatically. Therefore, extended incidence calculus and the ATMS are equivalent in carrying out inferences at both the symbolic level and the numerical level. This extends a result due to Laskey and Lehner.
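A minimal sketch of the core incidence-calculus idea assumed by this comparison (the names and representation below are illustrative, not taken from the paper): possible worlds carry probabilities, each axiom is assigned an incidence set, and set operations propagate incidences to derived formulae.

```python
# Minimal incidence-calculus sketch (illustrative; representation assumed).
# Possible worlds carry probabilities; each axiom has an incidence set
# (the worlds in which it holds); set operations propagate incidences.

worlds = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}

incidence = {
    "rain":   {"w1", "w2"},          # i(rain)
    "cloudy": {"w1", "w2", "w3"},    # i(cloudy)
}

def i_and(a, b):
    """i(A and B) = i(A) intersect i(B)."""
    return incidence[a] & incidence[b]

def i_not(a):
    """i(not A) = all worlds minus i(A)."""
    return set(worlds) - incidence[a]

def prob(inc):
    """p(A) = total weight of the worlds in i(A)."""
    return sum(worlds[w] for w in inc)

print(prob(i_and("rain", "cloudy")))  # 0.7
```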

Relevance:

30.00%

Publisher:

Abstract:

A computational algorithm (based on Smullyan's analytic tableau method) that verifies whether a given well-formed formula in propositional calculus is a tautology has been implemented on a DECsystem-10. The stepwise refinement approach of program development used for this implementation forms the subject matter of this paper. The top-down design has resulted in a modular and reliable program package. This computational algorithm compares favourably with the algorithm based on the well-known resolution principle used in theorem provers.
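To make the tableau idea concrete, here is a compact signed-tableau tautology checker (a sketch; the formula encoding and function names are my own, not the paper's): a formula is valid iff assuming it false closes every branch.

```python
# Analytic-tableau tautology checker (illustrative sketch).
# Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g)

def closed(branch):
    """A branch closes if it contains both a signed literal and its negation."""
    lits = {(s, f) for s, f in branch if f[0] == "var"}
    return any((not s, f) in lits for s, f in lits)

def expand(branch):
    """Return True iff every extension of this branch closes."""
    if closed(branch):
        return True
    for idx, (sign, f) in enumerate(branch):
        rest = branch[:idx] + branch[idx + 1:]
        if f[0] == "not":
            return expand(rest + [(not sign, f[1])])
        if f[0] == "and":
            if sign:   # T(A and B): both conjuncts on one branch
                return expand(rest + [(True, f[1]), (True, f[2])])
            return (expand(rest + [(False, f[1])]) and   # F(A and B): split
                    expand(rest + [(False, f[2])]))
        if f[0] == "or":
            if sign:   # T(A or B): split into T(A) | T(B)
                return (expand(rest + [(True, f[1])]) and
                        expand(rest + [(True, f[2])]))
            return expand(rest + [(False, f[1]), (False, f[2])])
        if f[0] == "imp":
            if sign:   # T(A -> B): split into F(A) | T(B)
                return (expand(rest + [(False, f[1])]) and
                        expand(rest + [(True, f[2])]))
            return expand(rest + [(True, f[1]), (False, f[2])])
    return False  # only literals remain and the branch is open

def tautology(f):
    return expand([(False, f)])  # f is valid iff assuming F(f) closes

p = ("var", "p")
print(tautology(("or", p, ("not", p))))  # True: p or not p
```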

Relevance:

30.00%

Publisher:

Abstract:

Estimating program worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation. However, the suggested model uses a function of mean CPI that has no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built along these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy of WCET improves to 159% when CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
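A minimal sketch of the Chebyshev step (variable names and sample data assumed): since P(|X − μ| ≥ kσ) ≤ 1/k² for any distribution, choosing k = 1/√(1 − p) makes μ + kσ an upper bound on CPI that holds with probability at least p.

```python
import math
import statistics

def cpi_upper_bound(samples, p):
    """Probabilistic CPI upper bound from Chebyshev's inequality.

    P(|X - mu| >= k*sigma) <= 1/k**2, so with k = 1/sqrt(1 - p) the value
    mu + k*sigma bounds CPI with probability >= p, for any distribution.
    """
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * sigma

# Hypothetical per-phase CPI samples and instruction count:
phase_cpi = [1.2, 1.3, 1.25, 1.4, 1.35]
instructions = 2_000_000
cycles_bound = cpi_upper_bound(phase_cpi, p=0.99) * instructions
print(f"WCET bound (cycles, p=0.99): {cycles_bound:.0f}")
```

High CPI variance inflates sigma and hence the bound, which is exactly why refining a phase into lower-variance sub-phases reduces pessimism.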

Relevance:

30.00%

Publisher:

Abstract:

Lennart Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees of evidential strength. This paper reformulates Åqvist's model in terms of the probabilistic version of the kappa calculus. Proving its acceptability in the legal context is beyond the present scope, but the epistemological debate about Bayesian Law is clearly relevant. While the present model is a possible link to that line of inquiry, we offer some considerations about the broader picture of the potential of AI & Law in the evidentiary context. Whereas probabilistic reasoning is well researched in AI, calculations about the threshold of persuasion in litigation, whatever their value, are just the tip of the iceberg. The bulk of the modeling desiderata is arguably elsewhere, if one is to ideally make the most of AI's distinctive contribution as envisaged for legal evidence research.
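For background, the kappa calculus ranks each possible world by an integer degree of surprise $\kappa(\omega)$, read as its probability being of order $\varepsilon^{\kappa(\omega)}$ for infinitesimal $\varepsilon$; its core rules mirror probability theory with $(\min, +)$ in place of $(+, \times)$ (the standard formulation, not specific to this paper):

$$\kappa(A) = \min_{\omega \in A} \kappa(\omega), \qquad \kappa(B \mid A) = \kappa(A \wedge B) - \kappa(A), \qquad P(\omega) \sim \varepsilon^{\kappa(\omega)}$$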

Relevance:

30.00%

Publisher:

Abstract:

Incidence calculus is a mechanism for probabilistic reasoning in which sets of possible worlds, called incidences, are associated with axioms, and probabilities are then associated with these sets. Inference rules are used to deduce bounds on the incidence of formulae which are not axioms, and bounds for the probability of such a formula can then be obtained. In practice an assignment of probabilities directly to axioms may be given, and it is then necessary to find an assignment of incidences which will reproduce these probabilities. We show that this task of assigning incidences can be viewed as a tree-searching problem, and two techniques for performing this search are discussed. One of these is a new proposal involving a depth-first search, while the other incorporates a random element. A Prolog implementation of these methods has been developed. The two approaches are compared for efficiency and the significance of their results is discussed. Finally we discuss a new proposal for applying techniques from linear programming to incidence calculus.
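A toy sketch of the search idea (the discretisation, names, and placeholder constraint below are my own, not the paper's Prolog implementation): with n equally weighted worlds, an axiom with target probability q needs an incidence set of round(q·n) worlds, and a depth-first search tries candidate sets axiom by axiom, backtracking when a constraint fails.

```python
from itertools import combinations

def assign_incidences(axioms, targets, n_worlds):
    """Depth-first search for incidence sets matching target probabilities."""
    worlds = frozenset(range(n_worlds))

    def consistent(assignment):
        # Placeholder: logically independent axioms impose no constraint here;
        # real logical relations between axioms would prune branches.
        return True

    def dfs(i, assignment):
        if i == len(axioms):
            return assignment
        size = round(targets[i] * n_worlds)
        for inc in combinations(worlds, size):   # branch point of the search
            assignment[axioms[i]] = frozenset(inc)
            if consistent(assignment):
                result = dfs(i + 1, assignment)
                if result is not None:
                    return result
            del assignment[axioms[i]]
        return None  # backtrack

    return dfs(0, {})

print(assign_incidences(["rain", "cloudy"], [0.5, 0.75], n_worlds=4))
```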

Relevance:

30.00%

Publisher:

Abstract:

The thesis applies ICC techniques to the probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time): a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in probabilistic polynomial time. The thesis is divided in two parts. The first part approaches the problem by creating a prototype functional language (a probabilistic variation of the lambda calculus with bounded recursion) that is sound and complete with respect to probabilistic polynomial time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in probabilistic polynomial time. This thesis is one of the first steps for Implicit Computational Complexity over probabilistic classes. Hard open problems remain to be investigated, many theoretical aspects are strongly connected with these topics, and I expect that ICC and probabilistic classes will receive wide attention in the future.
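A toy illustration of the two ingredients named above (my own encoding, not the thesis's calculus): a fair probabilistic choice operator combined with recursion bounded by an explicit fuel parameter, so every run terminates within the recursion budget.

```python
import random

def choice(left, right):
    """Probabilistic choice: evaluate one branch with probability 1/2 each."""
    return left() if random.random() < 0.5 else right()

def bounded_walk(fuel, x=0):
    """A random walk whose recursion depth is capped by `fuel`."""
    if fuel == 0:
        return x
    return choice(lambda: bounded_walk(fuel - 1, x + 1),
                  lambda: bounded_walk(fuel - 1, x - 1))

print(bounded_walk(10))
```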

Relevance:

30.00%

Publisher:

Abstract:

Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When using CHR instruments for clinical purposes, the predicted outcome is only a probability; consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet probabilistic reasoning makes considerable demands on clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review risk or cumulative incidence of psychosis, person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes' theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding the basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment.
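As a worked illustration of the Bayes-theorem step (the numbers are hypothetical, not from the paper): with prevalence π, sensitivity Se and specificity Sp, PPV = πSe / (πSe + (1 − π)(1 − Sp)), and NPV follows symmetrically.

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Positive/negative predictive values via Bayes' theorem."""
    tp = prevalence * sensitivity          # true positives
    fp = (1 - prevalence) * (1 - specificity)  # false positives
    fn = prevalence * (1 - sensitivity)        # false negatives
    tn = (1 - prevalence) * specificity        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical CHR figures: 20% transition risk, Se = 0.90, Sp = 0.60
ppv, npv = predictive_values(0.20, 0.90, 0.60)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.36, NPV = 0.96
```

The asymmetry between the two values (a positive result is far less conclusive than a negative one at low prevalence) is exactly the kind of point such a guide must convey to clinicians.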

Relevance:

30.00%

Publisher:

Abstract:

Since Z, being a state-based language, describes a system in terms of its state and potential state changes, it is natural to want to describe properties of a specified system also in terms of its state. One means of doing this is to use Linear Temporal Logic (LTL) in which properties about the state of a system over time can be captured. This, however, raises the question of whether these properties are preserved under refinement. Refinement is observation preserving and the state of a specified system is regarded as internal and, hence, non-observable. In this paper, we investigate this issue by addressing the following questions. Given that a Z specification A is refined by a Z specification C, and that P is a temporal logic property which holds for A, what temporal logic property Q can we deduce holds for C? Furthermore, under what circumstances does the property Q preserve the intended meaning of the property P? The paper answers these questions for LTL, but the approach could also be applied to other temporal logics over states such as CTL and the μ-calculus.
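For a concrete feel of state-based temporal properties, here is a finite-trace toy (my own encoding; the paper's Z-refinement setting is far richer than this): the "always" and "eventually" operators evaluated over a sequence of states.

```python
# Finite-trace illustration of two LTL operators over states.

def always(pred, trace):
    """[] p : p holds in every state of the trace."""
    return all(pred(s) for s in trace)

def eventually(pred, trace):
    """<> p : p holds in some state of the trace."""
    return any(pred(s) for s in trace)

# States of a hypothetical counter system:
trace = [{"count": n} for n in range(5)]
print(always(lambda s: s["count"] >= 0, trace))      # True
print(eventually(lambda s: s["count"] == 3, trace))  # True
```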

Relevance:

30.00%

Publisher:

Abstract:

An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then the reconstructed objects no longer truly represent the original, and inside the volumes the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the examined object consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material exists at the position of the voxel during the X-ray acquisition. Such a probabilistic quality measure was lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability; in accurately reconstructed areas, a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction.
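A heavily simplified sketch of the per-voxel idea (the Gaussian model, material table, and all numbers are assumptions, not the paper's method): treat each material's attenuation as a Gaussian, encode the a priori material knowledge as priors, and compute the posterior probability of a predefined material given a voxel's reconstructed value.

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian likelihood of observing value x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# A priori knowledge about the materials (hypothetical attenuation statistics):
materials = {
    "air":       {"mu": 0.0, "sigma": 0.05, "prior": 0.5},
    "tissue":    {"mu": 0.2, "sigma": 0.05, "prior": 0.3},
    "aluminium": {"mu": 0.6, "sigma": 0.10, "prior": 0.2},
}

def material_probability(voxel_value, material):
    """Posterior P(material | voxel value) via Bayes' theorem."""
    joint = {m: gaussian(voxel_value, p["mu"], p["sigma"]) * p["prior"]
             for m, p in materials.items()}
    return joint[material] / sum(joint.values())

print(f"{material_probability(0.58, 'aluminium'):.3f}")  # close to 1
```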