859 results for proof search
Abstract:
Coinduction is a proof rule. It is the dual of induction. It allows reasoning about non-well-founded structures such as lazy lists or streams, and is of particular use for reasoning about equivalences. A central difficulty in the automation of coinductive proof is the choice of a relation (called a bisimulation). We present an automation of coinductive theorem proving based on the idea of proof planning. Proof planning constructs the higher-level steps in a proof, using knowledge of the general structure of a family of proofs and exploiting this knowledge to control the proof search. Part of proof planning involves using failure information to modify the plan through a proof critic, which exploits the information gained from the failed proof attempt. Our approach was to develop a strategy that makes an initial simple guess at a bisimulation and then uses generalisation techniques, motivated by a critic, to refine this guess, so that a larger class of coinductive problems can be automatically verified. The implementation of this strategy has focused on the use of coinduction to prove the equivalence of programs in a small lazy functional language similar to Haskell. We have developed a proof plan for coinduction and a critic associated with this proof plan. These have been implemented in CoClam, an extended version of Clam, with encouraging results: the planner has been successfully tested on a number of theorems.
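As a rough illustration of the bisimulation checking described above (not CoClam's actual strategy), streams can be modelled as state machines, and a candidate bisimulation grown by exploring reachable state pairs; all names here are invented for the sketch:

```python
# A stream is (state, step) where step(state) -> (head, next_state).
# Two streams are equal iff some bisimulation relates their states:
# related states must emit equal heads and step to related states.
def bisimilar(s1, s2, max_pairs=1000):
    (st1, step1), (st2, step2) = s1, s2
    relation = set()               # the candidate bisimulation, grown lazily
    todo = [(st1, st2)]
    while todo:
        a, b = todo.pop()
        if (a, b) in relation:
            continue
        h1, a2 = step1(a)
        h2, b2 = step2(b)
        if h1 != h2:
            return None            # refuted: heads differ, not a bisimulation
        relation.add((a, b))
        if len(relation) > max_pairs:
            return None            # give up: state space too large to close
        todo.append((a2, b2))
    return relation                # closed under stepping: a bisimulation
```

For example, the stream of ones defined as a one-state machine and as a two-state machine are related by the two-pair bisimulation the search discovers.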
Abstract:
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. Although a complete and correct resolution-style calculus has already been suggested for this specific fragment, this calculus involves constructions too complex to be of practical value. In this paper, we develop a machine-oriented clausal resolution method which features radically simplified proof search. We first define a normal form for monodic formulae and then introduce a novel resolution calculus that can be applied to formulae in this normal form. By careful encoding, parts of the calculus can be implemented using classical first-order resolution and can, thus, be efficiently implemented. We prove correctness and completeness results for the calculus and illustrate it on a comprehensive example. An implementation of the method is briefly discussed.
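The abstract notes that parts of the calculus can be implemented using classical first-order resolution. As background, a minimal propositional resolution procedure (illustrative only, and far simpler than the monodic temporal calculus itself) can be sketched as:

```python
# Minimal propositional resolution: saturate a clause set and report
# unsatisfiability iff the empty clause is derived. A clause is a
# frozenset of literals; a literal is a (name, polarity) pair.
def resolve(c1, c2):
    resolvents = []
    for (name, pol) in c1:
        if (name, not pol) in c2:  # complementary pair: resolve on it
            r = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            resolvents.append(frozenset(r))
    return resolvents

def unsatisfiable(clauses):
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolve(a, b):
                    if not r:
                        return True    # empty clause: contradiction found
                    if r not in clauses:
                        new.add(r)
        if not new:
            return False               # saturated without the empty clause
        clauses |= new
```

Saturation terminates because only finitely many clauses exist over a finite set of literals; practical provers add refinements (ordering, subsumption) that this sketch omits.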
Abstract:
A sound and complete first-order goal-oriented sequent-type calculus is developed with "large-block" inference rules. In particular, the calculus contains formal analogues of such natural proof-search techniques as handling definitions and applying auxiliary propositions.
Abstract:
The goal of the Evidence Algorithm research programme is the development of an open system for automated proving that can accumulate mathematical knowledge and prove theorems in the context of a self-contained mathematical text. By now, the first version of such a system, called the System for Automated Deduction (SAD), has been implemented in software. SAD possesses the following main features: mathematical texts are formalized using a specific formal language that is close to the natural language of mathematical publications; proof search is based on special sequent-type calculi formalizing a natural reasoning style, such as the application of definitions and auxiliary propositions. These calculi also admit separating equality handling from deduction, which gives an opportunity to integrate logical reasoning with symbolic calculation.
Abstract:
Matita (Italian for "pencil") is a new interactive theorem prover under development at the University of Bologna. When compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique underlying the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of whom this thesis author is one. Matita is now a full-fledged proof assistant with a library of about 1000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which this thesis author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae.
Since mathematical notation is ambiguous, parsing formulae typeset the way mathematicians like to write them on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics together. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replay. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality two-dimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae by acting on their rendered forms is also possible. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners.
Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of a proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof. Another innovative aspect of Matita, only marginally touched by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
Abstract:
The Smart Drug Search is publicly accessible at http://sing.ei.uvigo.es/sds/. The BIOMedical Search Engine Framework is freely available for non-commercial use at https://github.com/agjacome/biomsef
Abstract:
The thesis examines the phenomenon most commonly known as “ayahuasca tourism” – i.e. the practice of westerners traveling to South America and partaking in ceremonies in which a powerful entheogenic brew, ayahuasca, is consumed. While this popular phenomenon has been steadily increasing during the last decades, it has, however, been insufficiently studied by scholars. An important question which has not been properly addressed in earlier studies is how ayahuasca tourism relates to the wider occurrence of travel and how it should be perceived with reference to the theoretical frameworks on the subject of travel. Drawing on theories regarding pilgrimage and tourism, the main purpose of this thesis is to examine the relationship between ayahuasca tourism and the broader spectrum of travel. In particular, the study tests the designations “pilgrimage”, “religious tourism” and “spiritual tourism” with reference to ayahuasca tourism. Utilizing earlier literature as well as ayahuasca tourists’ reports obtained from an Internet forum as a basis for analysis, I search for a suitable terminology to be used for the phenomenon. The study lays special emphasis on the protagonists’ motivations, experiences and outcomes in order to take note of various aspects of the wide-ranging occurrence of ayahuasca tourism. Key findings indicate that ayahuasca tourism is best understood as a combination of pilgrimage and tourism. On the basis of the analysis I argue that ayahuasca tourism should be labeled as “pilgrimage” and/or “spiritual tourism”, and the tourists respectively as “pilgrims” and/or “spiritual tourists”. The category of “religious tourism/tourist”, on the other hand, turns out to be an inappropriate designation when describing the phenomenon. In general, through my study I show that the results are consistent with the present trend in the study of travel to perceive pilgrimage and tourism as theoretically similar phenomena.
The study of ayahuasca tourism serves thus as living proof of contemporary travel, in which the categories of pilgrimage and tourism are often indistinguishable. I suggest that ayahuasca tourism is by no means exceptional on this point, but can rather be used as an illustration of modern travel forms on a general level. Thus, the present study does not only add to the research of ayahuasca tourism, but also provides additional insights into the study of travel.
Abstract:
We present results of a search for continuously emitted gravitational radiation, directed at the brightest low-mass x-ray binary, Scorpius X-1. Our semicoherent analysis covers 10 days of LIGO S5 data spanning 50-550 Hz, and performs an incoherent sum of coherent F-statistic power distributed amongst frequency-modulated orbital sidebands. All candidates not removed at the veto stage were found to be consistent with noise at a 1% false alarm rate. We present Bayesian 95% confidence upper limits on gravitational-wave strain amplitude using two different prior distributions: a standard one, with no a priori assumptions about the orientation of Scorpius X-1; and an angle-restricted one, using a prior derived from electromagnetic observations. Median strain upper limits of 1.3 × 10^-24 and 8 × 10^-25 are reported at 150 Hz for the standard and angle-restricted searches respectively. This proof-of-principle analysis was limited to a short observation time by unknown effects of accretion on the intrinsic spin frequency of the neutron star, but improves upon previous upper limits by factors of ~1.4 for the standard, and 2.3 for the angle-restricted, search at the sensitive region of the detector.
Abstract:
The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it; the trustability of code search results is thus just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake when trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma values of all its developers, a project is ranked on a trustability scale. We present JBENDER, a proof-of-concept code search engine which implements our trustability metric, and we discuss preliminary results from an evaluation of the prototype.
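A karma-style score of the kind this abstract describes might be sketched as follows; the weights and the aggregation by averaging are invented for the sketch, and the paper's actual metric may well differ:

```python
# Hypothetical sketch of a karma-style trustability score: a developer's
# karma combines user votes with cross-project activity, and a project's
# trustability aggregates the karma of its developers. Weights are
# invented here, not taken from the paper.
def developer_karma(votes, projects_contributed,
                    vote_weight=1.0, activity_weight=0.5):
    return vote_weight * votes + activity_weight * projects_contributed

def project_trustability(developers):
    # developers: list of (votes, projects_contributed) pairs
    if not developers:
        return 0.0
    return sum(developer_karma(v, p) for v, p in developers) / len(developers)
```

Averaging (rather than summing) keeps large teams from dominating the scale; either choice is defensible, and the real metric may normalize differently.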
Abstract:
Software developers are often unsure of the exact name of the method they need to use to invoke the desired behavior in a given context. This results in a process of searching for the correct method name in documentation, which can be lengthy and distracting to the developer. We can decrease the method search time by enhancing the documentation of a class with its most frequently used methods. Usage frequency data for methods is gathered by analyzing other projects from the same ecosystem (written in the same language and sharing dependencies). We implemented a proof of concept of the approach for Pharo Smalltalk and Java. In Pharo Smalltalk, methods are commonly searched for using a code browser tool called "Nautilus", and in Java using a web browser displaying HTML-based documentation (Javadoc). We developed plugins for both browsers and gathered method usage data from open source projects, in order to increase developer productivity by reducing method search time. A small initial evaluation has been conducted, showing promising results in improving developer productivity.
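The frequency-based ranking this abstract describes could be sketched as follows; the corpus format (a flat list of class/method call-site pairs mined from ecosystem projects) is an assumption made for the example:

```python
from collections import Counter

# Illustrative sketch: rank a class's methods by how often they are
# invoked across other projects in the ecosystem, so that documentation
# can list the most frequently used methods first.
def method_usage_ranking(class_name, call_sites):
    # call_sites: iterable of (class_name, method_name) pairs mined
    # from ecosystem projects sharing this dependency.
    counts = Counter(m for c, m in call_sites if c == class_name)
    return [m for m, _ in counts.most_common()]
```

A documentation plugin would then reorder (or annotate) the method list of `class_name` according to this ranking.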
Abstract:
Proof reuse, or analogical reasoning, involves reusing the proof of a source theorem in the proof of a target conjecture. We have developed a method for proof reuse that is based on the generalisation replay paradigm described in the literature, in which a generalisation of the source proof is replayed to construct the target proof. In this paper, we describe the novel aspects of our method, which include a technique for producing more accurate source proof generalisations (using knowledge of the target goal), as well as a flexible replay strategy that allows the user to set various parameters to control the size and the shape of the search space. Finally, we report on the results of applying this method to a case study from the realm of software verification.
Abstract:
This paper presents a generic architecture for proof planning systems in terms of an interaction between a customisable proof module and a search module; both refer to global and local information contained in reasoning states.
Abstract:
We report on the shape resonance spectra of phenol-water clusters, as obtained from elastic electron scattering calculations. Our results, along with virtual orbital analysis, indicate that the well-known indirect mechanism for hydrogen elimination in the gas phase is significantly impacted by microsolvation, due to the competition between vibronic couplings on the solute and solvent molecules. This fact suggests how relevant solvation effects could be for the electron-driven damage of biomolecules and for biomass delignification [E. M. de Oliveira et al., Phys. Rev. A 86, 020701(R) (2012)]. We also discuss microsolvation signatures in the differential cross sections that could help to identify the solvated complexes and assess the composition of gaseous admixtures of these species.
Abstract:
Using easy ambient sonic-spray ionization coupled to mass spectrometry (EASI-MS), a desorption/ionization technique, documents related to the 2nd generation of the Brazilian Real currency (R$) were screened in the positive ion mode for authenticity, based on chemical profiles obtained directly from the banknote surface. Characteristic profiles were observed for authentic, seized suspect counterfeit, and homemade counterfeit banknotes from inkjet and laserjet printers. The chemicals on the authentic banknotes' surface were detected via a few minor sets of ions, namely from the plasticizers bis(2-ethylhexyl) phthalate (DEHP) and dibutyl phthalate (DBP), most likely related to the official offset printing process, and other common quaternary ammonium cations, presenting a chemical profile similar to that of the 1st-generation R$. The seized suspect counterfeit banknotes, however, displayed abundant diagnostic ions in the m/z 400-800 range due to the presence of oligomers. High-accuracy FT-ICR MS analysis enabled molecular formula assignment for each ion. The ions were separated by 44 m/z, which enabled their characterization as Surfynol® 4XX (S4XX, XX = 40, 65, and 85), wherein increasing XX values indicate increasing amounts of ethoxylation on a backbone of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (Surfynol® 104). Sodiated triethylene glycol monobutyl ether (TBG) of m/z 229 (C10H22O4Na) was also identified in the seized counterfeit banknotes via EASI(+) FT-ICR MS. Surfynol® and TBG are constituents of inks used for inkjet printing.