933 results for Automatic theorem proving
Abstract:
This dissertation has two almost unrelated themes: privileged words and Sturmian words. Privileged words are a new class of words introduced recently. A word is privileged if it is a complete first return to a shorter privileged word, the shortest privileged words being letters and the empty word. Here we give and prove almost all results on privileged words known to date. On the other hand, the study of Sturmian words is a well-established topic in combinatorics on words. In this dissertation, we focus on questions concerning repetitions in Sturmian words, reproving old results and giving new ones, and on establishing completely new research directions. The study of privileged words presented in this dissertation aims to derive their basic properties and to answer basic questions regarding them. We explore a connection between privileged words and palindromes and seek out answers to questions on context-freeness, computability, and enumeration. It turns out that the language of privileged words is not context-free, but privileged words are recognizable by a linear-time algorithm. A lower bound on the number of binary privileged words of given length is proven. The main interest, however, lies in the privileged complexity functions of the Thue-Morse word and Sturmian words. We derive recurrences for computing the privileged complexity function of the Thue-Morse word, and we prove that Sturmian words are characterized by their privileged complexity function. As a slightly separate topic, we give an overview of a certain method of automated theorem-proving and show how it can be applied to study privileged factors of automatic words. The second part of this dissertation is devoted to Sturmian words. We extensively exploit the interpretation of Sturmian words as irrational rotation words. The essential tools are continued fractions and elementary, but powerful, results of Diophantine approximation theory. With these tools at our disposal, we reprove old results on powers occurring in Sturmian words with emphasis on the fractional index of a Sturmian word. Further, we consider abelian powers and abelian repetitions and characterize the maximum exponents of abelian powers with given period occurring in a Sturmian word in terms of the continued fraction expansion of its slope. We define the notion of abelian critical exponent for Sturmian words and explore its connection to the Lagrange spectrum of irrational numbers. The results obtained are often specialized to the Fibonacci word; for instance, we show that the minimum abelian period of a factor of the Fibonacci word is a Fibonacci number. In addition, we propose a completely new research topic: the square root map. We prove that the square root map preserves the language of any Sturmian word. Moreover, we construct a family of non-Sturmian optimal squareful words whose language the square root map also preserves. This construction yields examples of aperiodic infinite words whose square roots are periodic.
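As a concrete illustration of the definition above, here is a minimal sketch that checks privilegedness by direct recursion on the definition quoted in the abstract; it is not the linear-time recognizer the dissertation refers to, and the function names are ours.

```python
# Hedged sketch: a word is privileged iff it has length <= 1, or it is a
# complete first return to a shorter privileged word (a privileged border
# occurring exactly twice in it, as a prefix and as a suffix).

def occurrences(w, u):
    # number of (possibly overlapping) occurrences of u in w
    return sum(1 for i in range(len(w) - len(u) + 1) if w[i:i+len(u)] == u)

def is_privileged(w):
    if len(w) <= 1:
        return True            # the empty word and single letters are privileged
    for k in range(1, len(w)):
        u = w[:k]              # candidate privileged border
        if w.endswith(u) and occurrences(w, u) == 2 and is_privileged(u):
            return True
    return False

# e.g. "aa", "aba" and "abaaba" are privileged, while "ab" is not
print([w for w in ["a", "aa", "ab", "aba", "abaaba"] if is_privileged(w)])
```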
Abstract:
A program can be refined either by transforming the whole program or by refining one of its components. The refinement of a component is, for the most part, independent of the remainder of the program. However, refinement of a component can depend on the context of the component for information about which variables are in scope and what their types are. The refinement can also take advantage of additional information, such as any precondition the component can assume. The aim of this paper is to introduce a technique, which we call program window inference, to handle such contextual information during derivations in the refinement calculus. The idea is borrowed from a technique, called window inference, for handling context in theorem proving. Window inference is the primary proof paradigm of the Ergo proof editor. This tool has been extended to mechanize refinement using program window inference. (C) 1997 Elsevier Science B.V.
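To make the idea concrete, the following is a minimal sketch of the window-inference idea applied to refinement, with hypothetical names; it does not reflect the Ergo proof editor's implementation. A window pairs the focused component with its contextual assumptions, and a transformation may appeal to that context to justify a step.

```python
# Hedged sketch (hypothetical names): context-aware refinement of a component.
from dataclasses import dataclass

@dataclass
class Window:
    context: set      # assumptions available, e.g. preconditions in scope
    focus: str        # the component currently being refined

def refine(window, rule):
    # a rule maps (focus, context) to a refined focus, or None if inapplicable
    new_focus = rule(window.focus, window.context)
    return Window(window.context, new_focus) if new_focus else window

# example rule: under the assumption "x >= 0", abs(x) may be refined to x
rule = (lambda focus, ctx:
        focus.replace("abs(x)", "x")
        if "x >= 0" in ctx and "abs(x)" in focus else None)

w = Window({"x >= 0"}, "y := abs(x) + 1")
print(refine(w, rule).focus)   # y := x + 1
```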
Abstract:
Resilience is the property of a system to remain trustworthy despite changes. Changes of a different nature, whether due to failures of system components or varying operational conditions, significantly increase the complexity of system development. Therefore, advanced development technologies are required to build robust and flexible system architectures capable of adapting to such changes. Moreover, powerful quantitative techniques are needed to assess the impact of these changes on various system characteristics. Architectural flexibility is achieved by embedding into the system design the mechanisms for identifying changes and reacting to them. Hence a resilient system should have both advanced monitoring and error detection capabilities to recognise changes, as well as sophisticated reconfiguration mechanisms to adapt to them. The aim of such reconfiguration is to ensure that the system stays operational, i.e., remains capable of achieving its goals. Design, verification and assessment of system reconfiguration mechanisms is a challenging and error-prone engineering task. In this thesis, we propose and validate a formal framework for the development and assessment of resilient systems. Such a framework provides us with the means to specify and verify complex component interactions, model their cooperative behaviour in achieving system goals, and analyse the chosen reconfiguration strategies. Due to the variety of properties to be analysed, such a framework should have an integrated nature. To ensure the system's functional correctness, it should rely on formal modelling and verification, while, to assess the impact of changes on such properties as performance and reliability, it should be combined with quantitative analysis. To ensure scalability of the proposed framework, we choose Event-B as the basis for reasoning about functional correctness. Event-B is a state-based formal approach that promotes the correct-by-construction development paradigm and formal verification by theorem proving. Event-B has mature, industrial-strength tool support: the Rodin platform. Proof-based verification, as well as the reliance on abstraction and decomposition adopted in Event-B, provides designers with powerful support for the development of complex systems. Moreover, top-down system development by refinement allows the developers to explicitly express and verify critical system-level properties. Besides ensuring functional correctness, to achieve resilience we also need to analyse a number of non-functional characteristics, such as reliability and performance. Therefore, in this thesis we also demonstrate how formal development in Event-B can be combined with quantitative analysis. Namely, we experiment with the integration of such techniques as probabilistic model checking in PRISM and discrete-event simulation in SimPy with formal development in Event-B. Such an integration allows us to assess how changes and different reconfiguration strategies affect the overall system resilience. The approach proposed in this thesis is validated by a number of case studies from such areas as robotics, space, healthcare and the cloud domain.
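As a flavour of the quantitative side, the sketch below uses SimPy (the discrete-event simulation library named in the abstract) to estimate the availability of a single component subject to failures and reconfiguration delays. The model and all its parameters are illustrative assumptions, not one of the thesis case studies.

```python
# Hedged sketch: availability of one component under random failures and
# reconfiguration, estimated by discrete-event simulation in SimPy.
import random
import simpy

def component(env, state):
    while True:
        yield env.timeout(random.expovariate(1 / 100.0))  # mean time to failure
        state["up"] = False
        yield env.timeout(random.expovariate(1 / 5.0))    # reconfiguration delay
        state["up"] = True

def monitor(env, state, samples):
    while True:
        samples.append(state["up"])   # sample the component state periodically
        yield env.timeout(1.0)

random.seed(0)
env = simpy.Environment()
state, samples = {"up": True}, []
env.process(component(env, state))
env.process(monitor(env, state, samples))
env.run(until=10_000)
print("estimated availability:", sum(samples) / len(samples))
```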
Abstract:
In this study, a given quasilinear problem (GP) is solved using variational methods. In particular, the existence of nontrivial solutions of (GP) is examined using minimax methods. The main theorem on the existence of a nontrivial solution of (GP) is presented in detail.
Abstract:
An artificial neural network (ANN) approach is proposed for the detection of workpiece 'burn', the undesirable change in metallurgical properties of the material produced by overly aggressive or otherwise inappropriate grinding. Grinding acoustic emission (AE) signals for 52100 bearing steel were collected and processed to extract feature vectors suitable for ANN processing. Two feature vectors are considered: one comprising band power, kurtosis and skew, and the other comprising autoregressive (AR) coefficients. The classification (burn or no-burn) of each signal was established on the basis of hardness and profile tests after grinding. The trained neural network works remarkably well for burn detection. Other signal-processing approaches are also discussed; among them, the constant false-alarm rate (CFAR) power law and the mean-value deviance (MVD) prove useful.
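The following sketch computes the two kinds of feature vectors described above on a synthetic signal; the band limits, AR order and other parameters are illustrative assumptions, not those of the study.

```python
# Hedged sketch: the two feature vectors on a stand-in for a digitized AE record.
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(0)
x = rng.normal(size=4096)                   # toy signal in place of real AE data

# feature vector 1: band power (via FFT), kurtosis and skew
spectrum = np.abs(np.fft.rfft(x)) ** 2
band_power = spectrum[100:400].sum() / spectrum.sum()   # fraction in one band
fv1 = np.array([band_power, kurtosis(x), skew(x)])

# feature vector 2: AR coefficients via the Yule-Walker equations
def ar_coefficients(x, order=4):
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) + order] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])

fv2 = ar_coefficients(x)
print(fv1, fv2)
```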
Abstract:
In this paper we use the Hermite-Biehler theorem to establish results for the design of proportional plus integral (PI) controllers for a class of time delay systems. We extend results from the polynomial case to quasipolynomials using the property of interlacing at high frequencies of the class of time delay systems considered. A signature for the quasipolynomials in this class is derived and used in the proposed approach, which yields the complete set of stabilizing PI controllers.
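The sketch below illustrates the Hermite-Biehler-style interlacing property numerically, for a simple first-order delay system of our own choosing rather than anything from the paper: along the imaginary axis, the zeros of the real and imaginary parts of the quasipolynomial should alternate when the system is stable.

```python
# Hedged sketch: for x'(t) = -a x(t) - k x(t - L), the quasipolynomial is
# delta(s) = (s + a) e^{sL} + k; we locate sign changes of Re and Im of
# delta(j*omega) on a grid and test that they interlace.
import numpy as np

a, k, L = 1.0, 0.5, 0.2                      # stable since a > |k| > 0
omega = np.linspace(1e-6, 60.0, 200_000)
d = (1j * omega + a) * np.exp(1j * omega * L) + k
re_roots = omega[np.where(np.diff(np.sign(d.real)))[0]]
im_roots = omega[np.where(np.diff(np.sign(d.imag)))[0]]

# Hermite-Biehler-type test: zeros of Re and Im should alternate
merged = np.sort(np.concatenate([re_roots, im_roots]))
labels = ["re" if w in re_roots else "im" for w in merged]
interlaced = all(l1 != l2 for l1, l2 in zip(labels, labels[1:]))
print("interlacing on [0, 60]:", interlaced)   # expected: True
```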
Abstract:
In this paper we use the Hermite-Biehler theorem to establish results for the design of proportional plus integral plus derivative (PID) controllers for a class of time delay systems. Using the property of interlacing at high frequencies of the class of systems considered and linear programming, we obtain the set of all stabilizing PID controllers. © 2005 IEEE.
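A schematic sketch of the linear-programming step, under the premise that for a fixed proportional gain kp the admissible (ki, kd) pairs form an intersection of half-planes; the particular constraints below are placeholders, not ones derived from the paper's signature conditions.

```python
# Hedged sketch: feasibility LP deciding whether any stabilizing (ki, kd)
# exists for a fixed kp, given illustrative half-plane constraints A @ v <= b.
from scipy.optimize import linprog

A = [[-1.0, 0.0],    # ki >= 0
     [0.0, -1.0],    # kd >= 0
     [1.0, 2.0]]     # ki + 2*kd <= 4
b = [0.0, 0.0, 4.0]

# zero objective: we only ask whether the constraint set is nonempty
res = linprog(c=[0.0, 0.0], A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
print("stabilizing (ki, kd) exists for this kp:", res.success)
```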
Abstract:
The standard way of evaluating residues and some real integrals through the residue theorem (Cauchy's theorem) is well known and widely applied in many branches of physics. Herein we present an alternative technique based on the negative dimensional integration method (NDIM), originally developed to handle Feynman integrals. The advantage of this new technique is that we need only apply Gaussian integration and solve systems of linear algebraic equations, with no need to determine the poles themselves or their residues; moreover, it yields a whole class of results for differing orders of poles simultaneously.
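For contrast, the standard residue-theorem evaluation the abstract refers to looks like this on a textbook integral; this is the conventional method, not the NDIM computation.

```latex
% closing the contour in the upper half-plane around the simple pole z = i:
\int_{-\infty}^{\infty} \frac{dx}{1+x^{2}}
  = 2\pi i \,\operatorname{Res}_{z=i} \frac{1}{1+z^{2}}
  = 2\pi i \cdot \frac{1}{2i}
  = \pi
```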
Abstract:
A major challenge in cancer radiotherapy is to deliver a lethal dose of radiation to the target volume while minimizing damage to the surrounding normal tissue. We have proposed a model of how treatment efficacy might be improved by interfering with biological responses to DNA damage, using exogenous electric fields as a strategy to drastically reduce radiation doses in cancer therapy. This approach is demonstrated at this Laboratory through case studies with prokaryotic (bacterial) and eukaryotic (yeast) cells, in which cell-killing rates induced by both gamma radiation and exogenous electric fields were measured. It was found that when cells exposed to gamma radiation are immediately subjected to a weak electric field, cell death increases by more than an order of magnitude compared with the effect of radiation alone. This finding suggests, although does not prove, that DNA damage sites are reached and recognized by means of long-range electric DNA-protein interactions, and that exogenous electric fields could destructively interfere with this process. As a consequence, DNA repair is prevented, leading to massive cell death. Here we propose the use of this new technique for the design and construction of novel radiotherapy facilities associated with linac-generated gamma beams under controlled conditions of dose and beam intensity.
Abstract:
Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and proofs related to the metatheory of programming languages. This last field, which is an obvious application of interactive theorem proving, poses nonetheless a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed, and to difficulties related to the management of notions typical of programming languages, like variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, Part I covers: - the results of our effort in providing a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code; - a discussion of the implementation of two tactics, providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art. In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours: - a discussion of basic issues we encountered in our formalization of Part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita; - a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; this work adopts, for the encoding of binding, an extension of Masahiko Sato's canonical locally named representation we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
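To illustrate what inversion of an inductive predicate does, here is a minimal analogue in Lean 4 (not Matita syntax, which we do not reproduce here): a hypothesis whose indices match no constructor is discharged outright by the inversion step.

```lean
inductive Even : Nat → Prop where
  | zero : Even 0
  | step (n : Nat) : Even n → Even (n + 2)

-- Inversion: `Even 1` matches neither constructor (1 ≠ 0 and 1 ≠ n + 2),
-- so case analysis on the hypothesis closes the goal immediately.
theorem not_even_one : ¬ Even 1 := by
  intro h
  cases h
```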
Abstract:
Despite decades of research, the uptake of formal methods for developing provably correct software in industry remains slow. One reason for this is the high cost of proof construction, an activity that, due to the complexity of the required proofs, is typically carried out using interactive theorem provers. In this paper we propose an agent-oriented architecture for interactive theorem proving with the aim of reducing the user interactions (and thus the cost) of constructing software verification proofs. We describe a prototype implementation of our architecture and discuss its application to a small, but non-trivial, case study.
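The abstract does not detail the architecture, so the following is only a generic, hypothetical illustration of the agent-oriented idea: several proof-search agents attempt an obligation independently, and the user is consulted only when all of them fail.

```python
# Hedged sketch (hypothetical agents, not the paper's architecture).
from concurrent.futures import ThreadPoolExecutor

def make_agent(name, tactic):
    def agent(goal):
        return name if tactic(goal) else None   # name of the successful agent
    return agent

agents = [
    make_agent("simplifier", lambda g: g == "x + 0 = x"),
    make_agent("arith",      lambda g: g.endswith("= 2")),
]

def prove(goal):
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda a: a(goal), agents))
    winners = [r for r in results if r]
    return winners[0] if winners else "ask the user"

print(prove("x + 0 = x"))   # simplifier
print(prove("P -> P"))      # ask the user
```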
Abstract:
An inherent incomputability in the specification of a functional language extension that combines assertions with dynamic type checking is isolated in an explicit derivation from mathematical specifications. The combination of types and assertions (into "dynamic assertion-types", or DATs) is a significant issue: since the two are congruent means of establishing program correctness, their closer integration brings benefit, whereas their unnecessary separation causes harm. However, projecting the "set membership" view of assertion-checking into dynamic types results in some incomputable combinations. Refinement of the specification of DAT checking into an implementation by rigorous application of mathematical identities becomes feasible through the addition of a "best-approximate" pseudo-equality that isolates the incomputable component of the specification. This formal treatment leads to an improved, more maintainable outcome with further development potential.
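A minimal sketch of the "set membership" view of dynamic assertion-types (our illustration, not the paper's formal specification): a DAT pairs a base type with an assertion predicate, and checking is decidable for first-order values but not, in general, for function values.

```python
# Hedged sketch: a "dynamic assertion-type" (DAT) pairs a base type with an
# assertion predicate; checking is a set-membership test.
class DAT:
    def __init__(self, base, pred):
        self.base = base
        self.pred = pred

    def check(self, value):
        # decidable for first-order values: a type test plus an assertion test
        return isinstance(value, self.base) and self.pred(value)

Pos = DAT(int, lambda n: n > 0)
assert Pos.check(3) and not Pos.check(-1)

# For function values the membership view turns incomputable: deciding
# f ∈ (Pos -> Pos) would require checking f on every positive integer, which
# is where a "best-approximate" check must take over.
```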
Abstract:
In this thesis we present an approach to automated verification of floating point programs. Existing techniques for automated generation of correctness theorems are extended to produce proof obligations for accuracy guarantees and absence of floating point exceptions. A prototype automated real number theorem prover is presented, demonstrating a novel application of function interval arithmetic in the context of subdivision-based numerical theorem proving. The prototype is tested on correctness theorems for two simple yet nontrivial programs, proving exception freedom and tight accuracy guarantees automatically. The experiments show how function intervals can be used to combat the information-loss problems that limit the applicability of traditional interval arithmetic in the context of hard real number theorem proving.
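The following sketch conveys subdivision-based numerical theorem proving in its simplest form, using plain interval arithmetic rather than the function intervals of the thesis; the names and the example inequality are ours. An inequality is proved on a box when interval evaluation already certifies it, and the box is bisected otherwise.

```python
# Hedged sketch: prove ∀x ∈ [0,1]: x^2 - x + 0.26 > 0 by interval evaluation
# with subdivision (domain restricted to nonnegative x for simplicity).

def f_interval(lo, hi):
    # crude interval extension of f(x) = x*x - x + 0.26 on [lo, hi], lo >= 0;
    # the dependency problem makes it loose, which subdivision compensates for
    sq_lo, sq_hi = lo * lo, hi * hi
    return sq_lo - hi + 0.26, sq_hi - lo + 0.26

def prove_positive(lo, hi, depth=0, max_depth=30):
    flo, _ = f_interval(lo, hi)
    if flo > 0:
        return True           # certified on this box
    if depth == max_depth:
        return False          # give up: enclosure too loose or claim false
    mid = (lo + hi) / 2       # bisect and prove both halves
    return (prove_positive(lo, mid, depth + 1) and
            prove_positive(mid, hi, depth + 1))

print(prove_positive(0.0, 1.0))   # True: the inequality holds on [0, 1]
```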
Abstract:
We propose an arithmetic of function intervals as a basis for convenient rigorous numerical computation. Function intervals can be used as mathematical objects in their own right or as enclosures of functions over the reals. We present two areas of application of function interval arithmetic and associated software that implements the arithmetic: (1) Validated ordinary differential equation solving using the AERN library and within the Acumen hybrid system modeling tool. (2) Numerical theorem proving using the PolyPaver prover. © 2014 Springer-Verlag.
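A minimal sketch of the function-interval idea itself (our illustration, not the AERN or PolyPaver representation): a function interval encloses an exact function between two computable bounds, and arithmetic on enclosures yields enclosures of results.

```python
# Hedged sketch: function intervals as pairs of bounding functions.
import math

class FnInterval:
    def __init__(self, lo, hi):          # invariant: lo(x) <= f(x) <= hi(x)
        self.lo, self.hi = lo, hi

    def __add__(self, other):            # pointwise sum preserves enclosure
        return FnInterval(lambda x: self.lo(x) + other.lo(x),
                          lambda x: self.hi(x) + other.hi(x))

# Taylor-style enclosure of sin on [0, 1]: x - x^3/6 <= sin x <= x
sin_enc = FnInterval(lambda x: x - x**3 / 6, lambda x: x)
ident   = FnInterval(lambda x: x, lambda x: x)

g = sin_enc + ident                      # encloses sin x + x on [0, 1]
assert g.lo(0.5) <= math.sin(0.5) + 0.5 <= g.hi(0.5)
```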