911 results for formal
Abstract:
When we reason about change over time, causation provides an implicit preference: we prefer sequences of situations in which one situation leads causally to the next, rather than sequences in which one situation follows another at random and without causal connections. In this paper, we explore the problem of temporal reasoning --- reasoning about change over time --- and the crucial role that causation plays in our intuitions. We examine previous approaches to temporal reasoning, and their shortcomings, in light of this analysis. We propose a new system for causal reasoning, motivated action theory, which builds upon causation as a crucial preference criterion. Motivated action theory solves the traditional problems of both forward and backward reasoning, and additionally provides a basis for a new theory of explanation.
Abstract:
This paper discusses the use of relation algebra operations on formal contexts. These operations are a generalisation of some of the context operations that are described in the standard FCA textbook (Ganter & Wille, 1999). This paper extends previous research in this area with respect to applications and implementations. It also describes a software tool (FcaFlint) which, in combination with FcaStone, facilitates the application of relation algebra operations to contexts stored in many formats.
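To make the underlying notion concrete: a formal context is a binary incidence relation between objects and attributes, so relation-algebra operations act on sets of pairs. The following Python sketch (hypothetical names and data, not FcaFlint's implementation or its supported file formats) illustrates two such operations, complement and relational composition.

```python
# Minimal sketch only (not FcaFlint): a formal context as a set of
# (object, attribute) pairs, with two relation-algebra style operations.

def complement(incidence, objects, attributes):
    """Complement of the incidence relation within objects x attributes."""
    return {(g, m) for g in objects for m in attributes} - incidence

def compose(r, s):
    """Relational composition: (g, n) holds when some m links g to n."""
    return {(g, n) for (g, m) in r for (m2, n) in s if m == m2}

# Toy context: which objects carry which attributes (hypothetical data).
objects = {"cat", "carp"}
attributes = {"furry", "aquatic"}
incidence = {("cat", "furry"), ("carp", "aquatic")}

# A second relation linking attributes to broader categories.
categories = {("furry", "mammal-like"), ("aquatic", "fish-like")}

print(complement(incidence, objects, attributes))
print(compose(incidence, categories))  # {('cat', 'mammal-like'), ('carp', 'fish-like')}
```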
Abstract:
Dissertation presented to Universidade Fernando Pessoa as part of the requirements for obtaining the degree of Master in Criminology.
Abstract:
We survey several of the research efforts pursued by the iBench and snBench projects in the CS Department at Boston University over the last half dozen years. These activities use ideas and methodologies inspired by recent developments in other parts of computer science -- particularly in formal methods and in the foundations of programming languages -- but now specifically applied to the certification of safety-critical networking systems. This is research jointly led by Azer Bestavros and Assaf Kfoury with the participation of Adam Bradley, Andrei Lapets, and Michael Ocean.
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, broad accessibility has not been a priority in the design of formal verification tools that can provide these benefits. We propose a few design criteria to address these issues: a simple, familiar, and conventional concrete syntax that is independent of any environment, application, or verification strategy, and the possibility of reducing workload and entry costs by employing features selectively. We demonstrate the feasibility of satisfying such criteria by presenting our own formal representation and verification system. Our system's concrete syntax overlaps with English, LaTeX, and MediaWiki markup wherever possible, and its verifier relies on heuristic search techniques that make the formal authoring process more manageable and consistent with prevailing practices. We employ techniques and algorithms that ensure a simple, uniform, and flexible definition and design for the system, so that it is easy to augment, extend, and improve.
Abstract:
NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. As a modeling tool, it enables the abstraction of an existing system while retaining sufficient information about it to carry out future analysis of safety properties. As a design tool, NetSketch enables the exploration of alternative safe designs as well as the identification of minimal requirements for outsourced subsystems. NetSketch embodies a lightweight formal verification philosophy, whereby the power (but not the heavy machinery) of a rigorous formalism is made accessible to users via a friendly interface. NetSketch does so by exposing tradeoffs between exactness of analysis and scalability, and by combining traditional whole-system analysis with a more flexible compositional analysis. The compositional analysis is based on a strongly-typed Domain-Specific Language (DSL) for describing and reasoning about constrained-flow networks at various levels of sketchiness along with invariants that need to be enforced thereupon. In this paper, we define the formal system underlying the operation of NetSketch, in particular the DSL behind NetSketch's user-interface when used in "sketch mode", and prove its soundness relative to appropriately-defined notions of validity. In a companion paper [6], we overview NetSketch, highlight its salient features, and illustrate how it could be used in two applications: the management/shaping of traffic flows in a vehicular network (as a proxy for CPS applications) and in a streaming media network (as a proxy for Internet applications).
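The compositional idea can be illustrated independently of NetSketch itself: if each component is annotated with the interval of flow rates it accepts and the interval it may produce, two components compose safely in series whenever the upstream output interval is contained in the downstream input interval. The sketch below is plain Python with hypothetical names, not NetSketch's typed DSL, whose actual syntax and typing rules are defined in the paper.

```python
# Illustrative sketch only (not NetSketch's DSL): each component carries an
# interval of flow rates it accepts and an interval it may produce; series
# composition is safe when the produced interval fits the accepted one.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    accepts: tuple[float, float]   # (min, max) input rate it can handle
    produces: tuple[float, float]  # (min, max) output rate it may emit

def composes_in_series(upstream: Component, downstream: Component) -> bool:
    """Safe if every rate the upstream may produce is accepted downstream."""
    lo, hi = upstream.produces
    alo, ahi = downstream.accepts
    return alo <= lo and hi <= ahi

shaper = Component("traffic-shaper", accepts=(0, 100), produces=(0, 40))
link = Component("radio-link", accepts=(0, 50), produces=(0, 50))
print(composes_in_series(shaper, link))  # True: 0..40 fits within 0..50
```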
Abstract:
In college courses dealing with material that requires mathematical rigor, the adoption of a machine-readable representation for formal arguments can be advantageous. Students can focus on a specific collection of constructs that are represented consistently. Examples and counterexamples can be evaluated. Assignments can be assembled and checked with the help of an automated formal reasoning system. However, usability and accessibility do not have a high priority and are not addressed sufficiently well in the design of many existing machine-readable representations and corresponding formal reasoning systems. In earlier work [Lap09], we attempt to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. We report on our attempt to evaluate our proposed design criteria by deploying within the classroom a lightweight formal verification system designed according to these criteria. The lightweight formal verification system was used within the instruction of a common application of formal reasoning: proving by induction formal propositions about functional code. We present all of the formal reasoning examples and assignments considered during this deployment, most of which are drawn directly from an introductory text on functional programming. We demonstrate how the design of the system improves the effectiveness and understandability of the examples, and how it aids in the instruction of basic formal reasoning techniques. We make brief remarks about the practical and administrative implications of the system’s design from the perspectives of the student, the instructor, and the grader.
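As a representative instance of the kind of assignment described (proving by induction a proposition about functional code), the following Lean 4 proof shows the shape such an exercise takes; it is only an illustration in a different system, not one of the examples from the deployment or the lightweight verification system discussed above.

```lean
-- Representative induction exercise (Lean 4, not the system from the paper):
-- the length of an appended list is the sum of the lengths,
-- proved by induction on the first list.
theorem length_append {α : Type} (xs ys : List α) :
    (xs ++ ys).length = xs.length + ys.length := by
  induction xs with
  | nil =>
      -- [] ++ ys reduces to ys, and [].length to 0
      rw [List.nil_append, List.length_nil, Nat.zero_add]
  | cons x xs ih =>
      -- unfold one cons, apply the induction hypothesis, rearrange the sum
      rw [List.cons_append, List.length_cons, List.length_cons, ih,
          Nat.add_right_comm]
```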
Abstract:
In work that involves mathematical rigor, there are numerous benefits to adopting a representation of models and arguments that can be supplied to a formal reasoning or verification system: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of formal verification tools that can provide these benefits. In earlier work [Lap09a], we attempt to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. This work expands one aspect of the earlier work by considering more extensively an essential capability for any formal reasoning system whose design is oriented around simulating the natural context: native support for a collection of mathematical relations that deal with common constructs in arithmetic and set theory. We provide a formal definition for a context of relations that can be used to both validate and assist formal reasoning activities. We provide a proof that any algorithm that implements this formal structure faithfully will necessarily converge. Finally, we consider the efficiency of an implementation of this formal structure that leverages modular implementations of well-known data structures: balanced search trees and transitive closures of hypergraphs.
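A rough sketch of the idea, with hypothetical names rather than the paper's actual data structures: asserted facts of a transitive relation such as ≤ are stored as edges and closed under transitivity, so that validating a queried fact reduces to a membership check.

```python
# Rough sketch only (hypothetical, not the paper's data structures):
# keep asserted facts of a transitive relation (e.g. "<=") as edges and
# maintain their transitive closure, so queries become membership checks.

def transitive_closure(edges):
    """Iteratively add every pair reachable by chaining existing pairs."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        new = {(x, w) for (x, y) in closure for (z, w) in closure if y == z}
        if not new <= closure:
            closure |= new
            changed = True
    return closure

facts = {("a", "b"), ("b", "c"), ("c", "d")}   # assumed facts: a <= b <= c <= d
known = transitive_closure(facts)

print(("a", "d") in known)  # True: follows by transitivity
print(("d", "a") in known)  # False: not derivable from the asserted facts
```

The closure loop terminates because the edge set over a finite carrier can only grow, a much simpler analogue of the convergence property established in the paper.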
Abstract:
A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection is called. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is not an accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, therefore allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called λweak that includes weak references and is derived from Morrisett, Felleisen, and Harper’s λgc. λgc formalizes the notion of garbage collection by means of a rewrite rule. Such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite-rule in a language with weak references introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program result. We define conditions that allow other garbage collection algorithms to co-exist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of “stuck evaluation”) and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
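The behavior being formalized, that a weak reference does not keep its referent alive, can be observed directly in languages that expose weak references; the small illustration below uses Python's weakref module purely as an example (Python is not one of the languages or calculi discussed above).

```python
# Illustration of weak-reference behavior (Python used only as an example;
# the calculus in the paper is the lambda-based λweak, not Python).
import gc
import weakref

class Node:
    pass

obj = Node()
weak = weakref.ref(obj)     # a weak reference: does not keep obj alive
print(weak() is obj)        # True: the referent is still strongly reachable

del obj                     # drop the only strong reference
gc.collect()                # the collector is free to reclaim the object
print(weak())               # None (on CPython): the weak reference is cleared
```

That the final printed result depends on whether and when collection actually runs is exactly the nondeterminism that the garbage-collection rewrite rule introduces and that the confluence discussion addresses.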
Abstract:
OBJECTIVE: To compare the performance of formal prognostic instruments vs subjective clinical judgment with regard to predicting functional outcome in patients with spontaneous intracerebral hemorrhage (ICH). METHODS: This prospective observational study enrolled 121 ICH patients hospitalized at 5 US tertiary care centers. Within 24 hours of each patient's admission to the hospital, one physician and one nurse on each patient's clinical team were each asked to predict the patient's modified Rankin Scale (mRS) score at 3 months and to indicate whether he or she would recommend comfort measures. The admission ICH score and FUNC score, 2 prognostic scales selected for their common use in neurologic practice, were calculated for each patient. Spearman rank correlation coefficients (r) with respect to patients' actual 3-month mRS for the physician and nursing predictions were compared against the same correlation coefficients for the ICH score and FUNC score. RESULTS: The absolute value of the correlation coefficient for physician predictions with respect to actual outcome (0.75) was higher than that of either the ICH score (0.62, p = 0.057) or the FUNC score (0.56, p = 0.01). The nursing predictions of outcome (r = 0.72) also trended towards an accuracy advantage over the ICH score (p = 0.09) and FUNC score (p = 0.03). In an analysis that excluded patients for whom comfort care was recommended, the 65 available attending physician predictions retained greater accuracy (r = 0.73) than either the ICH score (r = 0.50, p = 0.02) or the FUNC score (r = 0.42, p = 0.004). CONCLUSIONS: Early subjective clinical judgment of physicians correlates more closely with 3-month outcome after ICH than prognostic scales.
Abstract:
The phrase “not much mathematics required” can imply a variety of skill levels. When this phrase is applied to computer scientists, software engineers, and clients in the area of formal specification, the word “much” can be widely misinterpreted with disastrous consequences. A small experiment in reading specifications revealed that students already trained in discrete mathematics and the specification notation performed very poorly; much worse than could reasonably be expected if formal methods proponents are to be believed.
Abstract:
Argumentation as reflected in a short communication from the published literature of botany and zoology is discussed. Trying to capture the logical structure of the argument, however imperfectly, is relevant to information science and depends on a particular goal: namely, to potentially benefit the task of sketching the relationship between bibliographic entries in a better manner than is possible with present-day bibliometric or scientometric practice. This imposes tight limits on the depth of analysis of the text.
Abstract:
Use of structuring mechanisms (such as modularisation) is widely believed to be one of the key ways to improve software quality. Structuring is considered to be at least as important for specification documents as for source code, since it is assumed to improve comprehensibility. Yet, as with most widely held assumptions in software engineering, there is little empirical evidence to support this hypothesis. Also, even if structuring can be shown to be a good thing, we do not know how much structuring is optimal. One of the more popular formal specification languages, Z, encourages structuring through its schema calculus. A controlled experiment is described in which two hypotheses about the effects of structure on the comprehensibility of Z specifications are tested. Evidence was found that structuring a specification into schemas about 20 lines long significantly improved comprehensibility over a monolithic specification. However, there seems to be no perceived advantage in breaking down the schemas into much smaller components. The experiment can be fully replicated.
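For readers unfamiliar with the schema calculus, a Z specification is structured into named schemas roughly as follows; this is a generic, textbook-style illustration typeset in the style of the fuzz / zed-csp LaTeX packages, not one of the specifications used in the experiment.

```latex
% Illustrative Z schemas only (not taken from the experiment's materials),
% using the environments provided by the fuzz / zed-csp packages.
\begin{zed}
  [BOOK]
\end{zed}

\begin{schema}{Library}
  onShelf, onLoan : \power BOOK
\where
  onShelf \cap onLoan = \emptyset
\end{schema}

\begin{schema}{Borrow}
  \Delta Library \\
  b? : BOOK
\where
  b? \in onShelf \\
  onShelf' = onShelf \setminus \{ b? \} \\
  onLoan' = onLoan \cup \{ b? \}
\end{schema}
```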
Abstract:
This Acknowledgement refers to the special issue “Formal Approaches to Legal Evidence” of the journal Artificial Intelligence and Law, September 2001, Vol. 9, Issue 2-3, which was guest edited by Ephraim Nissan.
Abstract:
This is the special issue “Formal Approaches to Legal Evidence” of the journal Artificial Intelligence and Law, September 2001, Vol. 9, Issue 2-3, which was guest edited by Ephraim Nissan.