946 results for Formal Verification Methods


Relevance:

30.00%

Publisher:

Abstract:

Formal specification is vital to the development of distributed real-time systems, as these systems are inherently complex and safety-critical. It is widely acknowledged that formal specification and automatic analysis of specifications can significantly increase system reliability. Although a number of specification techniques for real-time systems have been reported in the literature, most of these formalisms do not adequately address the constraints that the aspects of 'distribution' and 'real-time' impose on specifications. Further, an automatic verification tool is necessary to reduce human error in the reasoning process. In this regard, this paper is an attempt towards the development of DL, a novel executable specification language for distributed real-time systems. First, we give a precise characterization of the syntax and semantics of DL. Subsequently, we discuss the problems of model checking, automatic verification of the satisfiability of DL specifications, and testing conformance of event traces with DL specifications. Effective solutions to these problems are presented as extensions of the classical first-order tableau algorithm. The use of the proposed framework is illustrated by specifying a sample problem.
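
As a hedged illustration of the trace-conformance problem mentioned above (the DL language itself is not reproduced here), the following Python sketch checks a timestamped event trace against simple bounded-response constraints; the constraint form and event names are assumptions introduced for illustration:

# Toy conformance check of a timestamped event trace against bounded-
# response constraints ("every 'a' must be followed by 'b' within d").
# Illustration only; not the paper's DL specification language.

def conforms(trace, constraints):
    # trace: list of (event, time); constraints: list of (a, b, d)
    for a, b, d in constraints:
        for ev, t in trace:
            if ev != a:
                continue
            if not any(e2 == b and t <= t2 <= t + d for e2, t2 in trace):
                return False  # some 'a' has no matching 'b' in time
    return True

trace = [("req", 0.0), ("ack", 1.5), ("req", 4.0), ("ack", 4.8)]
print(conforms(trace, [("req", "ack", 2.0)]))  # True
print(conforms(trace, [("req", "ack", 0.5)]))  # False: ack arrives late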

Relevance:

30.00%

Publisher:

Abstract:

Memory models of shared-memory concurrent programs define the values a read of a shared memory location is allowed to see. Such memory models are typically weaker than the intuitive sequential consistency semantics, in order to allow efficient execution. In this paper, we present WOMM (Weak Operational Memory Model), which formally unifies two sources of weak behavior in hardware memory models: reordering of instructions and weakly consistent memory. We show that a large number of optimizations are allowed by WOMM. We also show that WOMM is weaker than a number of hardware memory models; consequently, if a program behaves correctly under WOMM, it is correct with respect to those hardware memory models. Hence, WOMM can be used as a formally specified abstraction of the hardware memory models. Moreover, unlike most weak memory models, WOMM is described using operational semantics, making it easy to integrate into a model checker for concurrent programs. We further show that WOMM has an important property: it provides sequential consistency semantics for data-race-free programs.
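
To make the instruction-reordering source of weakness concrete, here is a small sketch (an illustration, not WOMM's formal semantics) that enumerates outcomes of the classic store-buffering litmus test: the outcome r1 = r2 = 0 is unreachable under sequential consistency but becomes reachable once a thread's store may be reordered past its subsequent load:

# Store-buffering litmus test: two threads, two shared locations.
# Under sequential consistency (program order kept), r1 = r2 = 0 is
# impossible; allowing store->load reordering within a thread admits it.
T1 = [("store", "x", 1), ("load", "y", "r1")]
T2 = [("store", "y", 1), ("load", "x", "r2")]

def interleavings(a, b):
    # all shuffles of a and b that preserve order within each sequence
    if not a or not b:
        yield list(a) + list(b)
        return
    for tail in interleavings(a[1:], b):
        yield [a[0]] + tail
    for tail in interleavings(a, b[1:]):
        yield [b[0]] + tail

def run(schedule):
    mem, regs = {"x": 0, "y": 0}, {}
    for op, loc, arg in schedule:
        if op == "store":
            mem[loc] = arg
        else:
            regs[arg] = mem[loc]
    return regs["r1"], regs["r2"]

sc = {run(s) for s in interleavings(T1, T2)}
weak = {run(s) for t1 in (T1, T1[::-1]) for t2 in (T2, T2[::-1])
        for s in interleavings(t1, t2)}
print((0, 0) in sc)    # False: forbidden under sequential consistency
print((0, 0) in weak)  # True: reordering exposes the weak behavior

Here T1[::-1] models a thread whose store has been reordered past its later load; the weakly consistent memory component of WOMM is not modeled.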

Relevance:

30.00%

Publisher:

Abstract:

We give a detailed construction of a finite-state transition system for a com-connected Message Sequence Graph (MSG). Though this result is well known in the literature and forms the basis for the solution to several analysis and verification problems concerning MSG specifications, the constructions given in the literature are either not amenable to implementation, or imprecise, or simply incorrect. In contrast, we give a detailed construction along with a proof of its correctness. Our transition system is amenable to implementation and can also be used for a bounded analysis of general (not necessarily com-connected) MSG specifications.
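
As a rough illustration of the bounded analysis mentioned at the end (the paper's actual construction for com-connected MSGs is more involved), this sketch enumerates the global states of a communicating system with FIFO channels up to a channel-capacity bound; the transition encoding is an assumption introduced here:

# Bounded state-space construction: a global state pairs a control
# state with FIFO channel contents, and sends beyond the bound are cut
# off. Illustration only; not the paper's transition-system construction.
from collections import deque

def explore(init, trans, bound):
    # trans: state -> list of (action, channel, msg, next_state),
    # where action is "send" or "recv"; channel contents are tuples.
    def pack(ch):
        return tuple(sorted((c, q) for c, q in ch.items() if q))
    seen, frontier = set(), deque([(init, ())])
    while frontier:
        state, packed = frontier.popleft()
        if (state, packed) in seen:
            continue
        seen.add((state, packed))
        ch = dict(packed)
        for action, c, m, nxt in trans.get(state, []):
            q = ch.get(c, ())
            if action == "send" and len(q) < bound:
                nch = dict(ch); nch[c] = q + (m,)
            elif action == "recv" and q and q[0] == m:
                nch = dict(ch); nch[c] = q[1:]
            else:
                continue
            frontier.append((nxt, pack(nch)))
    return seen

# one control state that can both emit and consume message "m"
trans = {"g": [("send", "pq", "m", "g"), ("recv", "pq", "m", "g")]}
print(len(explore("g", trans, bound=2)))  # 3 states: 0, 1 or 2 pending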

Relevance:

30.00%

Publisher:

Abstract:

Continuous advances in VLSI technology have made the implementation of very complicated systems possible. Modern Systems-on-Chip (SoCs) have many processors, IP cores, and other functional units. As a result, complete verification of whole systems before implementation is becoming infeasible, so these systems are likely to contain errors after manufacturing, which increases the need to find design errors in chips after fabrication. Post-silicon debug is the problem of determining what is wrong when the fabricated chip of a new design behaves incorrectly; it now consumes over half of the overall verification effort on large designs, and the problem is growing worse. Its main challenge is the observability of internal signals. Traditional post-silicon debug methods concentrate on the functional parts of systems and provide mechanisms to increase the observability of a system's internal state. Such methods may not be sufficient, as modern SoCs contain many blocks (processors, IP cores, etc.) that communicate with one another, and this communication is another source of design errors. This tutorial provides insight into observability enhancement techniques, on-chip instrumentation techniques, and the use of high-level models to support the debug process, targeting both the insides of blocks and the communication among them. It also covers the use of formal methods to aid the debug process.
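
As a toy model of one observability mechanism such tutorials typically cover, the sketch below is a software analogue of an on-chip trace buffer: a fixed-depth window of sampled internal signals retained up to a trigger condition. The depth, signal names, and trigger predicate are illustrative assumptions:

# Software analogue of an on-chip trace buffer: keep a sliding window
# of sampled internal signals and stop at a trigger, so the cycles
# leading up to a failure remain observable after fabrication.
from collections import deque

def capture(signal_trace, depth, trigger):
    buf = deque(maxlen=depth)          # ring buffer of recent samples
    for cycle, signals in enumerate(signal_trace):
        buf.append((cycle, signals))
        if trigger(signals):           # e.g. an on-chip assertion firing
            break
    return list(buf)

trace = [{"valid": c % 3 == 0, "data": 2 * c} for c in range(10)]
window = capture(trace, depth=4, trigger=lambda s: s["data"] == 14)
print(window)   # the 4 cycles up to and including the trigger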

Relevance:

30.00%

Publisher:

Abstract:

Package-board co-design plays a crucial role in determining the performance of high-speed systems. Although several commercial solutions exist for electromagnetic analysis and verification, the lack of Computer-Aided Design (CAD) tools for SI-aware design and synthesis leads to longer design cycles and non-optimal package-board interconnect geometries. In this work, the functional similarities between package-board design and radio-frequency (RF) imaging are explored. Consequently, qualitative methods common in the imaging community, such as Tikhonov Regularization (TR) and the Landweber method, are applied to solve multi-objective, multi-variable package design problems. In addition, a new hierarchical iterative piecewise-linear algorithm is developed as a wrapper over LBP for an efficient solution in the design space.
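
For concreteness, the Landweber method named above is the simple gradient-style iteration sketched below; the package-design system matrix is replaced by a random stand-in, and the sizes and step size are assumptions:

# Minimal Landweber iteration for a linear inverse problem A x = b.
# Converges for relaxation 0 < w < 2 / ||A||^2 (spectral norm squared).
import numpy as np

def landweber(A, b, steps=500, relax=None):
    if relax is None:
        relax = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x + relax * A.T @ (b - A @ x)   # gradient step on ||Ax-b||^2
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))           # stand-in design operator
x_true = rng.standard_normal(10)
x_hat = landweber(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))       # near-zero reconstruction error

Stopping the iteration early acts as regularization, which is what makes the method attractive for ill-posed imaging-style design problems.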

Relevance:

30.00%

Publisher:

Abstract:

The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis in which we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which require excessive time to analyze fully. The base analysis is therefore configured to skip such a construct when it is encountered, dropping all tracked information that the construct could potentially affect. Our extensions are essentially more precise ways to account for the effect of these constructs on the tracked information without requiring their full analysis. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second is based on manually constructed backward-direction summary functions for library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of the dereferences in each benchmark as safe, while the two extensions push this number up to 91%.
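
A heavily simplified sketch of the backward-propagation idea follows; real formulas, heap effects, and interprocedural reasoning are all elided, and the statement forms are assumptions introduced here:

# Toy backward propagation from a dereference of `var` to program
# entry: assignments redirect the tracked variable, allocations prove
# safety, explicit nulls prove unsafety. Illustration only.

def entry_condition(stmts, var):
    # stmts: list of ("new", lhs) | ("null", lhs) | ("assign", lhs, rhs)
    for kind, lhs, *rest in reversed(stmts):
        if lhs != var:
            continue
        if kind == "new":
            return "safe"             # var points to a fresh object
        if kind == "null":
            return "unsafe"           # var is definitely null here
        var = rest[0]                 # var = rhs: now track rhs
    return var + " == null at entry"  # necessary condition for a crash

print(entry_condition([("assign", "p", "q")], "p"))                # q == null at entry
print(entry_condition([("new", "a"), ("assign", "p", "a")], "p"))  # safe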

Relevance:

30.00%

Publisher:

Abstract:

We propose a practical feature-level and score-level fusion approach that combines acoustic and estimated articulatory information for both text-independent and text-dependent speaker verification. From a practical point of view, we study how to improve speaker verification performance by combining dynamic articulatory information with conventional acoustic features. For text-independent speaker verification, we find that concatenating articulatory features obtained from measured speech production data with conventional Mel-frequency cepstral coefficients (MFCCs) improves performance dramatically. However, since directly measuring articulatory data is not feasible in many real-world applications, we also experiment with estimated articulatory features obtained through acoustic-to-articulatory inversion. We explore both feature-level and score-level fusion methods and find that the overall system performance is significantly enhanced even with estimated articulatory features. Such a performance boost could be due to the inter-speaker variation information embedded in the estimated articulatory features. Since the dynamics of articulation contain important information, we also include inverted articulatory trajectories in text-dependent speaker verification. We demonstrate that the articulatory constraints introduced by inverted articulatory features help to reject wrong-password trials and improve performance after score-level fusion. We evaluate the proposed methods on the X-ray Microbeam database and the RSR2015 database, respectively, for the two tasks. Experimental results show that we achieve more than a 15% relative equal error rate reduction for both speaker verification tasks.
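
To illustrate score-level fusion and the equal error rate (EER) metric, here is a self-contained sketch on synthetic scores; the fusion weights, score distributions, and the crude EER estimator are all assumptions, not the paper's system:

# Weighted-sum score-level fusion of two subsystems, with a crude EER
# estimate (threshold where false-accept ~ false-reject). Synthetic data.
import numpy as np

def eer(scores, labels):
    best = 1.0
    for t in np.unique(scores):
        far = np.mean(scores[labels == 0] >= t)   # impostors accepted
        frr = np.mean(scores[labels == 1] < t)    # genuines rejected
        best = min(best, max(far, frr))
    return best

rng = np.random.default_rng(1)
labels = np.repeat([1, 0], 500)                   # genuine vs impostor
acoustic = np.where(labels == 1, 1.0, 0.0) + rng.normal(0, 0.8, 1000)
articulatory = np.where(labels == 1, 1.0, 0.0) + rng.normal(0, 0.9, 1000)
fused = 0.6 * acoustic + 0.4 * articulatory       # score-level fusion
for name, s in [("acoustic", acoustic), ("fused", fused)]:
    print(name, round(eer(s, labels), 3))         # fusion lowers the EER

Because the two subsystems' errors are (here, by construction) partly independent, the fused score separates the classes better than either alone, which is the intuition behind the reported EER reductions.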

Relevance:

30.00%

Publisher:

Abstract:

A novel slope delay model for CMOS switch-level timing verification is presented. It differs from conventional methods in being semianalytic in character. The model assumes that all input waveforms are trapezoidal in overall shape but vary in their slope. This simplification is quite reasonable and does not seriously affect precision, yet it facilitates rapid solution. The model divides the stages of a switch-level circuit into two types: logic gates, and logic gates with pass transistors connected to their outputs. Semianalytic modeling for both cases is discussed.
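
The slope dependence such a model captures can be seen in a first-order RC stage driven by a trapezoidal edge; the numerical sketch below (RC value, rise times, and the 50%-to-50% delay metric are illustrative assumptions, not the paper's model) shows the stage delay growing with the input rise time:

# 50%-to-50% delay of an RC stage driven by a 0->1 ramp of duration
# `rise`, using the exact ramp response of a first-order low-pass.
import numpy as np

def delay_50(rc, rise, n=200000):
    t = np.linspace(0, rise + 10 * rc, n)
    vin = np.clip(t / rise, 0, 1)
    vout = np.where(
        t < rise,
        (t - rc * (1 - np.exp(-t / rc))) / rise,
        1 - (rc / rise) * (np.exp(rise / rc) - 1) * np.exp(-t / rc),
    )
    t_in = t[np.searchsorted(vin, 0.5)]    # input 50% crossing
    t_out = t[np.searchsorted(vout, 0.5)]  # output 50% crossing
    return t_out - t_in

for rise in (0.1, 1.0, 5.0):
    print(rise, round(delay_50(rc=1.0, rise=rise), 3))  # delay grows with rise time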

Relevance:

30.00%

Publisher:

Abstract:

Instrumented indentation tests have been widely adopted for elastic modulus determination. Recently, a number of indentation-based methods for characterizing plastic properties have been proposed, and rigorous verification is absolutely necessary for their wide application. In view of the advantages of spherical indentation over conical indentation in determining plastic properties, this study mainly concerns the verification of spherical indentation methods. Five convenient and simple models were selected for this purpose, and numerical experiments over a wide range of materials were carried out to identify their accuracy and sensitivity characteristics. The verification results show that four of these five methods can give relatively accurate and stable results within a certain material domain, which is defined as each method's validity range and is summarized for each method.
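
For reference, the elastic loading stage that spherical-indentation methods build on follows the Hertzian relation below; this is the standard contact-mechanics baseline, stated here as context rather than as one of the five evaluated models:

P = \tfrac{4}{3}\, E^{*} \sqrt{R}\, h^{3/2},
\qquad
\frac{1}{E^{*}} = \frac{1-\nu^{2}}{E} + \frac{1-\nu_{i}^{2}}{E_{i}}

where P is the load, h the indentation depth, R the indenter radius, E and \nu the sample's modulus and Poisson's ratio, and E_i, \nu_i those of the indenter.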

Relevance:

30.00%

Publisher:

Abstract:

The binding behavior of two cationic dyes, brilliant cresyl blue (BCB) and methylene green (MG), to calf thymus DNA was studied by spectrophotometric and voltammetric methods. A red shift of the absorption spectra and hypochromism accompany the binding of BCB and MG to calf thymus DNA. In 5 × 10⁻² mol dm⁻³ NaCl, 5 × 10⁻³ mol dm⁻³ Tris-HCl buffer (pH 6.87), the apparent binding constants are K(BCB⁺) = 3.0 × 10⁴ M⁻¹ (n = 4.13) and K(MG⁺) = 8.8 × 10⁴ M⁻¹ (n = 4.44). Electrochemical studies show that the formal potentials shift negatively upon addition of DNA, indicating that the oxidized forms of the dyes have a stronger affinity for DNA than the reduced ones. K(BCB⁺)/K(BCBH) and K(MG⁺)/K(MGH) are evaluated to be 10.39 and 7.04, respectively. Our investigation suggests that the two cationic dyes interact with DNA predominantly via electrostatic interaction.
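
The link between the potential shift and the ratio of binding constants is conventionally expressed through the Nernstian relation below (the standard electrochemical treatment of a DNA-bound redox couple, given here as context; the paper's exact derivation is not reproduced):

E^{\circ\prime}_{b} - E^{\circ\prime}_{f} = \frac{RT}{nF}\,\ln\frac{K_{\mathrm{red}}}{K_{\mathrm{ox}}}

so a negative shift of the formal potential upon adding DNA implies K_ox > K_red, consistent with the reported ratios of 10.39 and 7.04.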

Relevance:

30.00%

Publisher:

Abstract:

A new theoretical framework for tracer methods, based on mass conservation, is proposed in the present contribution. The model is applicable to both artificial and natural tracers. It can be used to calculate the spatial distribution patterns of the sediment transport rate, thus providing independent information and verification for results derived from empirical formulae. The calculation proceeds as follows. First, tracer-concentration and topographic maps at two times are obtained. Then, the spatial and temporal changes in the concentration and seabed elevation are calculated, and the required boundary conditions are determined from field observations (such as flow and bedform migration measurements). Finally, based upon eqs. (1) and (13), the transport rate is calculated and expressed as a function of position over the study area. Further, appropriate modifications to the model may allow the tracer to have a density and grain-size distribution different from those of the bulk sediment.
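
The paper's eqs. (1) and (13) are not reproduced here, but the mass-conservation backbone of such tracer models generically takes the form below; the symbols are assumptions introduced for illustration:

\frac{\partial (C\,\zeta)}{\partial t} + \nabla \cdot (C\,\mathbf{q}_{s}) = 0

where C is the tracer concentration in a mixing layer of thickness \zeta and \mathbf{q}_s is the sediment transport rate vector; with C and the bed elevation mapped at two times, the relation can be inverted for \mathbf{q}_s over the study area.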

Relevance:

30.00%

Publisher:

Abstract:

The work reported here lies in the area of overlap between artificial intelligence and software engineering. As research in artificial intelligence, it is a step towards a model of problem solving in the domain of programming. In particular, this work focuses on the routine aspects of programming, which involve the application of previous experience with similar programs; I call this programming by inspection. Programming is viewed here as a kind of engineering activity. Analysis and synthesis by inspection are a prominent part of expert problem solving in many other engineering disciplines, such as electrical and mechanical engineering. The notion of inspection methods in programming developed in this work is motivated by similar notions in other areas of engineering. This work is also motivated by current practical concerns in the area of software engineering. The inadequacy of current programming technology is universally recognized. Part of the solution to this problem will be to increase the level of automation in programming. I believe that the next major step in the evolution of more automated programming will be interactive systems which provide a mixture of partially automated program analysis, synthesis and verification. One such system being developed at MIT, called the programmer's apprentice, is the immediate intended application of this work. This report concentrates on the knowledge base of the programmer's apprentice, which takes the form of a taxonomy of commonly used algorithms and data structures. To the extent that a programmer is able to construct and manipulate programs in terms of the forms in such a taxonomy, he may relieve himself of many details and generally raise the conceptual level of his interaction with the system, as compared with present-day programming environments. Also, since it is practical to expend a great deal of effort pre-analyzing the entries in a library, the difficulty of verifying the correctness of programs constructed this way is correspondingly reduced. The feasibility of this approach is demonstrated by the design of an initial library of common techniques for manipulating symbolic data. This document also reports on the further development of a formalism called the plan calculus for specifying computations in a programming-language-independent manner. This formalism combines both data and control abstraction in a uniform framework that has facilities for representing multiple points of view and side effects.

Relevance:

30.00%

Publisher:

Abstract:

This paper defines a structured methodology, based on the foundational work of Al-Shaer et al. [1] and of Hamed and Al-Shaer [2], for the declaration of policy field elements through the syntax, ontology, and functional verification stages. In [1] and [2], the authors concentrated on developing formal definitions of possible anomalies between rules in a network firewall rule set; their work is considered the foundation for further work on anomaly detection, including that of Fitzgerald et al. [3], Chen et al. [4], and Hu et al. [5], among others. This paper extends this line of work by applying the methods to information-sharing policies, and outlines the evaluation related to these.
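
As a concrete taste of the rule-set anomalies formalized in [1] and [2], the sketch below flags shadowing between simplified firewall rules; fields are reduced to exact-match source/destination with a '*' wildcard, and real address-range containment is deliberately elided:

# Toy shadowing check between ordered firewall rules: an earlier rule
# shadows a later one if it matches a superset of the later rule's
# packets but takes the opposite action (the later rule never fires).

def matches_subset(r1, r2):
    # True if every packet matched by r1 is also matched by r2
    return all(b == "*" or a == b for a, b in zip(r1[:2], r2[:2]))

def shadowing(rules):
    found = []
    for j, earlier in enumerate(rules):
        for i, later in enumerate(rules[j + 1:], j + 1):
            if matches_subset(later, earlier) and later[2] != earlier[2]:
                found.append((j, i))
    return found

rules = [("*", "10.0.0.0/8", "deny"),
         ("192.168.1.5", "10.0.0.0/8", "allow")]   # never reached
print(shadowing(rules))  # [(0, 1)]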

Relevance:

30.00%

Publisher:

Abstract:

Formal correctness of complex multi-party network protocols can be difficult to verify. While models of specific fixed compositions of agents can be checked against design constraints, protocols that admit arbitrarily many compositions of agents, such as the chaining of proxies or the peering of routers, are more difficult to verify because they represent potentially infinite state spaces and may exhibit emergent behaviors that do not materialize under particular fixed compositions. We address this challenge by developing an algebraic approach that reduces arbitrary compositions of network agents to a compact, canonical representation that is behaviorally equivalent with respect to a given correctness property and is amenable to mechanical verification. Our approach consists of an algebra and a set of property-preserving rewrite rules for the Canonical Homomorphic Abstraction of Infinite Network protocol compositions (CHAIN). Using CHAIN, an expression over our algebra (i.e., a set of configurations of network protocol agents) can be reduced to another behaviorally equivalent expression (i.e., a smaller set of configurations). Repeated application of such rewrite rules produces a canonical expression that can be checked mechanically. We demonstrate our approach by characterizing deadlock-prone configurations of HTTP agents, as well as by establishing useful properties of an overlay protocol for scheduling MPEG frames and of a protocol for Web intra-cache consistency.
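
The rewrite-to-a-fixed-point flavor of this approach can be sketched in a few lines; the single rule used here (two adjacent proxies collapse to one, assumed behaviorally equivalent for the property of interest) is an illustrative assumption, not one of CHAIN's actual rules:

# Toy rewrite system: repeatedly collapse adjacent agents in a protocol
# composition using equivalence-preserving rules until a fixed point
# (a canonical, mechanically checkable form) remains.

def rewrite(chain, rules):
    changed = True
    while changed:
        changed = False
        for i in range(len(chain) - 1):
            pair = (chain[i], chain[i + 1])
            if pair in rules:
                chain = chain[:i] + [rules[pair]] + chain[i + 2:]
                changed = True
                break
    return chain

rules = {("proxy", "proxy"): "proxy"}   # proxy . proxy ~ proxy
chain = ["client", "proxy", "proxy", "proxy", "server"]
print(rewrite(chain, rules))            # ['client', 'proxy', 'server']

Because each rule preserves the correctness property in question, checking the small canonical chain suffices for the whole infinite family of longer compositions it represents.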