1000 results for denotational proof language
Abstract:
In this thesis, I designed and implemented a virtual machine (VM) for a monomorphic variant of Athena, a type-omega denotational proof language (DPL). The machine attempts to maintain the minimum state required to evaluate Athena phrases. The thesis also includes the design and implementation of a compiler for monomorphic Athena that targets the VM, as well as my implementation of a read-eval-print loop that glues together the VM core and the compiler to provide a full, user-accessible interface to monomorphic Athena. The Athena VM aims to provide the same basis for DPLs that the SECD machine provides for pure functional programming and the Warren Abstract Machine provides for Prolog.
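As an illustrative aside, and not Athena's actual design: the abstract describes a VM that keeps minimal evaluation state, and a toy stack-machine loop in Python (hypothetical opcodes) shows the general shape of such an evaluator.

```python
# Illustrative sketch only: a minimal stack-based evaluator in the spirit of
# SECD-style machines. The instruction set is hypothetical, not Athena's.

def run(code, env=None):
    """Execute a list of (opcode, operand) pairs against a value stack."""
    stack, env = [], dict(env or {})
    for op, arg in code:
        if op == "PUSH":          # push a constant
            stack.append(arg)
        elif op == "LOAD":        # look up a variable in the environment
            stack.append(env[arg])
        elif op == "STORE":       # bind top of stack to a name
            env[arg] = stack.pop()
        elif op == "ADD":         # a primitive application
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1] if stack else None

# Evaluate (x + 2) with x bound to 40.
print(run([("PUSH", 40), ("STORE", "x"),
           ("LOAD", "x"), ("PUSH", 2), ("ADD", None)]))  # -> 42
```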
Abstract:
We present a framework for describing proof planners. This framework is based on a decomposition of proof planners into planning states, proof language, proof plans, proof methods, proof revision, proof control and planning algorithms. We use this framework to motivate the comparison of three recent proof planning systems, λClam, ΩMEGA and IsaPlanner, and demonstrate how the framework allows us to discuss and illustrate both their similarities and differences in a consistent fashion. This analysis reveals that proof control and the use of contextual information in planning states are key areas in need of further investigation.
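To make the decomposition concrete, here is a schematic sketch in Python (hypothetical names, not any of the surveyed systems): the components named in the abstract map naturally onto a few small types and a driver loop.

```python
# Schematic rendering of the planner decomposition; every name is a stand-in.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Method:
    """A proof method: an applicability test plus a state transformer."""
    name: str
    applicable: Callable[["PlanningState"], bool]
    apply: Callable[["PlanningState"], "PlanningState"]

@dataclass
class PlanningState:
    goals: List[str]                               # open goals, in the proof language
    plan: List[str] = field(default_factory=list)  # the proof plan built so far

def plan(state: PlanningState, methods: List[Method],
         depth: int = 10) -> Optional[List[str]]:
    """A naive planning algorithm: repeatedly apply the first applicable method."""
    for _ in range(depth):
        if not state.goals:
            return state.plan
        m = next((m for m in methods if m.applicable(state)), None)
        if m is None:
            return None        # the point where proof revision / critics would fire
        state = m.apply(state)
        state.plan.append(m.name)
    return None

# Tiny demo: a method that discharges one goal at a time.
intro = Method("intro", lambda s: bool(s.goals),
               lambda s: PlanningState(s.goals[1:], s.plan))
print(plan(PlanningState(["g1", "g2"]), [intro]))  # -> ['intro', 'intro']
```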
Abstract:
Matita (which means pencil in Italian) is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script for storing textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique that underlies the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which this thesis's author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to user interaction with theorem provers and to which this thesis's author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below.

Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them on paper is a challenging task, one neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in familiar mathematical notation.

Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management: such interfaces do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structured scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner.

Extensible yet meaningful notation. Proof assistant users often need to create new mathematical notation to ease the use of new concepts. The framework used in Matita for extensible notation both accounts for high-quality two-dimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too.

Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which independently try to complete open sub-goals of the proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can apply them to the current proof interactively or automatically. Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
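A hedged sketch of the tinycals idea, under my assumption (not the thesis text) that fine-grained evaluation amounts to flattening a structured tactical into atomic steps over which an execution point can move:

```python
# Illustrative only: why an execution point can sit *inside* a compound
# tactical once the structure is flattened. Names are hypothetical.

def atomic_steps(tactical):
    """Yield atomic tactics from a nested sequencing structure, in order."""
    if isinstance(tactical, list):        # a sequencing tactical, e.g. t1 ; t2 ; ...
        for t in tactical:
            yield from atomic_steps(t)
    else:
        yield tactical                    # an atomic tactic

# A structured script: intro ; (apply lemma1 ; assumption) ; reflexivity
script = ["intro", ["apply lemma1", "assumption"], "reflexivity"]
steps = list(atomic_steps(script))

# The execution point is just an index into the flattened steps, so replaying
# can stop between "apply lemma1" and "assumption", which coarse big-step
# script management would execute as one indivisible block.
for point, tactic in enumerate(steps):
    print(f"step {point}: {tactic}")
```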
Abstract:
Applications that offer services based on users' location are increasingly widespread, ranging from satellite navigation to intelligent transportation systems (ITS), which will allow vehicles to communicate with each other. Some of these services even grant incentives when the user visits or passes through certain areas. For example, a shop might offer coupons to people in its vicinity. However, a user's position is easy to falsify, and in this last class of services users could obtain the incentives illicitly by cheating the system. It therefore becomes necessary to implement an architecture capable of preventing people from falsifying their position. To this end, numerous works have been proposed, which delegate the production of "location proofs" to centralized servers or deploy access points able to issue proofs or certificates to nearby users. In this thesis we devised an architecture different from those of related work, seeking to exploit the capabilities offered by blockchain technology and distributed storage. In this way it was possible to design a solution that is decentralized and transparent, with the immutability of the data ensured by the blockchain. We also detail a use-case idea that could be realized with the proposed architecture, highlighting the advantages that could potentially be drawn from it. Finally, we implemented part of the system in question, measuring the time and costs required by transactions on some of the blockchains available today, using the infrastructures provided by Ethereum, Polygon and Algorand.
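A minimal sketch of the kind of location-proof commitment such an architecture might place on-chain; the field names and the list standing in for the ledger are hypothetical, not the thesis's actual protocol.

```python
# Hedged sketch: commit only a digest of a location claim to an append-only
# ledger (simulated here by a list), keeping the claim itself off-chain.
import hashlib
import json
import time

def make_location_proof(user_id: str, lat: float, lon: float, witness: str):
    """Build a location claim and its SHA-256 digest (hypothetical schema)."""
    claim = {"user": user_id, "lat": lat, "lon": lon,
             "witness": witness, "ts": time.time()}
    digest = hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()
    return claim, digest

chain = []  # stand-in for an on-chain record (e.g. on Ethereum, Polygon, Algorand)
claim, digest = make_location_proof("alice", 44.4949, 11.3426, "shop-beacon-17")
chain.append(digest)  # the immutable commitment; the claim can be stored elsewhere
print(digest)
```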
Abstract:
Trust is a vital feature for the Semantic Web: if users (humans and agents) are to use and integrate system answers, they must trust them. Thus, systems should be able to explain their actions, sources, and beliefs; this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design and implementation of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system, which is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved by an extension of the RuleML language.
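As a toy illustration of representing a defeasible proof step in XML (the tag names below are hypothetical stand-ins, not the actual RuleML extension):

```python
# Serialize one defeasible-proof step as XML with the standard library.
# Element names are invented for illustration only.
import xml.etree.ElementTree as ET

proof = ET.Element("Proof")
step = ET.SubElement(proof, "DefeasibleStep", rule="r1")
ET.SubElement(step, "Conclusion").text = "buy(X)"
ET.SubElement(step, "Premise").text = "cheap(X)"
ET.SubElement(step, "NotDefeatedBy").text = "r2"   # the attacking rule fails to apply

print(ET.tostring(proof, encoding="unicode"))
```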
Abstract:
Under the Dynamic Model of Multilingualism, multilinguals are especially vulnerable to language attrition. The aim of the present study was to verify whether this is the case and to observe whether the different linguistic skills (receptive vs. productive) and the different linguistic levels (syntactic, lexical, morphological, etc.) are affected equally. Data were gathered longitudinally by means of a language test covering the subject's reading, writing, listening and speaking skills as well as her knowledge of grammar and vocabulary. Although overall accuracy remained intact and no evidence of attrition in the receptive skills was found, the productive skills, mainly fluency, were shown to have suffered from language attrition. This was demonstrated by an increase in the number of pauses, hesitations, repetitions and self-corrections, among others, and by a decrease in the percentage of error-free clauses and in clause length, in oral and written fluency respectively.
Abstract:
Mutation testing has been used to assess the quality of test case suites by analyzing their ability to distinguish the artifact under test from a set of alternative artifacts, the so-called mutants. The mutants are generated from the artifact under test by applying a set of mutant operators, which produce artifacts with simple syntactical differences. Mutant operators are usually based on typical errors that occur during software development and can be related to a fault model. In this paper, we propose a language, named MuDeL (MUtant DEfinition Language), for the definition of mutant operators, aiming not only at automating mutant generation, but also at giving precision and formality to operator definitions. The proposed language is based on concepts from the transformational and logical programming paradigms, as well as on context-free grammar theory. The formal framework of denotational semantics is employed to define the semantics of the MuDeL language. We also describe a system, named mudelgen, developed to support the use of this language. An executable representation of the denotational semantics of the language is used to check the correctness of the implementation of mudelgen. At the very end, a mutant generator module is produced, which can be incorporated into a specific mutation tool or environment.
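MuDeL's own syntax is not shown in the abstract; as a hedged stand-in, this Python sketch implements one classic mutant operator (arithmetic operator replacement) to illustrate the kind of output a mutant generator like mudelgen produces:

```python
# One mutant per mutation site: each "+" in the source becomes "-".
# Requires Python 3.9+ for ast.unparse.
import ast
import copy

source = "def price(a, b):\n    return a + b + 1\n"
tree = ast.parse(source)

def add_sites(t):
    """All binary '+' nodes, in the deterministic order ast.walk yields them."""
    return [n for n in ast.walk(t)
            if isinstance(n, ast.BinOp) and isinstance(n.op, ast.Add)]

mutants = []
for i in range(len(add_sites(tree))):
    m = copy.deepcopy(tree)
    add_sites(m)[i].op = ast.Sub()   # the simple syntactical difference
    mutants.append(ast.unparse(m))

for src in mutants:
    print(src)
    print("---")
```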
Abstract:
Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In addition, in this paper, we propose a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural-language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology's bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for the extraction of fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
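A toy rendering of inductive fuzzy classification over documents (my simplification, not the paper's algorithm): graded concept membership induced from co-occurrence.

```python
# Graded membership of a term in a concept, induced bottom-up from documents.
docs = [
    {"jaguar", "speed", "engine"},
    {"jaguar", "jungle", "prey"},
    {"jaguar", "engine", "brakes"},
]

def membership(term: str, concept_terms: set) -> float:
    """Fraction of documents containing `term` that also touch the concept."""
    with_term = [d for d in docs if term in d]
    hits = sum(1 for d in with_term if d & concept_terms)
    return hits / len(with_term) if with_term else 0.0

print(membership("jaguar", {"engine", "brakes", "speed"}))  # ~0.67, fuzzy "car"
print(membership("jaguar", {"jungle", "prey"}))             # ~0.33, fuzzy "animal"
```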
Abstract:
Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Access to the corresponding explicit or implicit translation relations between such entries would be of great interest for many NLP-based applications. By using Semantic Web techniques, translations can be made available on the Web to be consumed by other (semantically enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents some core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
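A hedged proof-of-concept in Python with rdflib, using a simplified stand-in vocabulary rather than the exact lemon translation module, and hypothetical Terminesp URIs:

```python
# Publish one translation pair as linked data. The trans: vocabulary and the
# entry URIs below are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

TRANS = Namespace("http://example.org/translation#")
LEXEME = Namespace("http://example.org/terminesp/")

g = Graph()
g.bind("trans", TRANS)

t = URIRef("http://example.org/terminesp/trans_001")
g.add((t, TRANS.source, LEXEME.valvula_es))
g.add((t, TRANS.target, LEXEME.valve_en))
g.add((LEXEME.valvula_es, TRANS.writtenRep, Literal("válvula", lang="es")))
g.add((LEXEME.valve_en, TRANS.writtenRep, Literal("valve", lang="en")))

print(g.serialize(format="turtle"))   # ready for a SPARQL endpoint or the Web
```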
Abstract:
Coinduction is a proof rule dual to induction. It allows reasoning about non-well-founded structures such as lazy lists or streams, and is of particular use for reasoning about equivalences. A central difficulty in the automation of coinductive proof is the choice of a relation (called a bisimulation). We present an automation of coinductive theorem proving based on the idea of proof planning. Proof planning constructs the higher-level steps in a proof, using knowledge of the general structure of a family of proofs and exploiting this knowledge to control the proof search. Part of proof planning involves the use of failure information to modify the plan, via a proof critic which exploits the information gained from the failed proof attempt. Our approach was to develop a strategy that makes an initial simple guess at a bisimulation and then uses generalisation techniques, motivated by a critic, to refine this guess, so that a larger class of coinductive problems can be automatically verified. The implementation of this strategy has focused on the use of coinduction to prove the equivalence of programs in a small lazy functional language similar to Haskell. We have developed a proof plan for coinduction and a critic associated with this proof plan. These have been implemented in CoClam, an extended version of Clam, with encouraging results: the planner has been successfully tested on a number of theorems.
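For intuition, a finite-approximation check in Python (testing, not proving): a coinductive proof via a bisimulation would establish for every depth what this only samples up to a bound.

```python
# Compare two lazy streams on a finite prefix; a necessary condition for the
# stream equivalence that a bisimulation proof would establish outright.
from itertools import islice

def ones():                 # ones = 1 : ones
    while True:
        yield 1

def map_stream(f, s):       # map f over a lazy stream
    for x in s:
        yield f(x)

def plausibly_bisimilar(s1, s2, depth=100):
    """Equal prefixes up to `depth`: evidence for, not proof of, equivalence."""
    return list(islice(s1(), depth)) == list(islice(s2(), depth))

# map (*1) ones  vs  ones : equal on every finite prefix we test.
print(plausibly_bisimilar(lambda: map_stream(lambda x: x * 1, ones()), ones))
```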
Abstract:
Using easy ambient sonic-spray ionization coupled to mass spectrometry (EASI-MS), a desorption/ionization technique, documents related to the 2nd generation of the Brazilian Real currency (R$) were screened in positive ion mode for authenticity, based on chemical profiles obtained directly from the banknote surface. Characteristic profiles were observed for authentic banknotes, seized suspect counterfeits, and homemade counterfeits produced on inkjet and laserjet printers. The chemicals on the authentic banknotes' surface were detected via a few minor sets of ions, namely from the plasticizers bis(2-ethylhexyl) phthalate (DEHP) and dibutyl phthalate (DBP), most likely related to the official offset printing process, and other common quaternary ammonium cations, a chemical profile similar to that of 1st-generation R$. The seized suspect counterfeit banknotes, however, displayed abundant diagnostic ions in the m/z 400-800 range due to the presence of oligomers. High-accuracy FT-ICR MS analysis enabled molecular formula assignment for each ion. Consecutive ions were separated by 44 m/z units, which enabled their characterization as Surfynol® 4XX (S4XX, XX = 40, 65, and 85), where increasing XX values indicate increasing degrees of ethoxylation on a backbone of 2,4,7,9-tetramethyl-5-decyne-4,7-diol (Surfynol® 104). Sodiated triethylene glycol monobutyl ether (TBG) at m/z 229 (C10H22O4Na) was also identified in the seized counterfeit banknotes via EASI(+) FT-ICR MS. Surfynol® and TBG are constituents of inks used for inkjet printing.
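The reported 44 m/z spacing is consistent with one ethoxylation repeat unit (C2H4O); a few lines of Python verify this and the TBG sodium adduct mass:

```python
# Monoisotopic masses (values I am confident of, rounded to ~5 decimal places).
MASS = {"C": 12.0, "H": 1.00783, "O": 15.9949, "Na": 22.9898}

def mass(formula: dict) -> float:
    """Sum of element masses weighted by atom counts."""
    return sum(MASS[el] * n for el, n in formula.items())

print(round(mass({"C": 2, "H": 4, "O": 1}), 3))             # 44.026: one C2H4O unit
print(round(mass({"C": 10, "H": 22, "O": 4, "Na": 1}), 2))  # ~229.14: TBG+Na at m/z 229
```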
Abstract:
PURPOSE: To determine the association between language and number of citations of ophthalmology articles published in Brazilian journals. METHODS: This study was a systematic review. Original articles were identified by reviewing documents published in the two Brazilian ophthalmology journals indexed in the Science Citation Index Expanded (SCIE): Arquivos Brasileiros de Oftalmologia (ABO) and Revista Brasileira de Oftalmologia (RBO). All document types (articles and reviews) listed in the SCIE in English (English Group) or in Portuguese (Portuguese Group) from January 1, 2008 to December 31, 2009 were included, except editorial materials, corrections, letters, and biographical items. The primary outcome was the number of citations through the end of the second year after the publication date. Subgroup analyses included likelihood of citation (cited at least once versus never cited), journal, and year of publication. RESULTS: The search in the Web of Science revealed 382 articles: 107 (28%) in the English Group and 275 (72%) in the Portuguese Group. Of those, 297 (77.7%) were published in the ABO and 85 (22.3%) in the RBO. The citation count was statistically significantly higher (P<0.001) in the English Group (mean 1.51, SD 1.98, range 0 to 11) than in the Portuguese Group (mean 0.57, SD 1.06, range 0 to 7). The likelihood of citation was also statistically significantly higher (P<0.001) in the English Group (70/107, 65.4%) than in the Portuguese Group (89/275, 32.7%). More articles were published in English in the ABO (98/297, 32.9%) than in the RBO (9/85, 10.6%) [P<0.001]. There was no significant difference (P=0.967) in the proportion of articles published in English between 2008 (48/172, 27.9%) and 2009 (59/210, 28.1%). CONCLUSION: Articles published in Portuguese in Brazilian ophthalmology journals receive fewer citations than those published in English. The results of this study suggest that editorial boards should strongly encourage authors to adopt English as the main language of their future articles.
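As a hedged recomputation from the reported counts (the abstract does not state which test was used, so the chi-square choice here is an assumption):

```python
# Citation likelihood: 70/107 cited at least once (English) vs 89/275 (Portuguese).
from scipy.stats import chi2_contingency

table = [[70, 107 - 70],    # English: cited vs never cited
         [89, 275 - 89]]    # Portuguese
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2g}")   # p far below 0.001, consistent with the abstract
```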
Abstract:
The objective of this study is to describe preliminary results from the cross-cultural adaptation of the Quality of Life Assessment Questionnaire, used to measure health-related quality of life (HRQL) in Brazilian children aged 5 to 11 with HIV/AIDS. The cross-cultural model evaluated the concept, item, semantic and measurement equivalences (internal consistency and intra-observer reliability). Evaluation of the conceptual, item and semantic equivalences showed that the Portuguese version is pertinent for the Brazilian context. Four of seven domains showed internal consistency above 0.70 (α: 0.76-0.90) and five of seven showed intra-observer reliability (ricc: 0.41-0.70). This first Portuguese version of the HRQL questionnaire can be regarded as a valuable tool for assessing children's HRQL, but further studies with larger samples and more robust analyses are recommended before its use in the Brazilian context.
Abstract:
In symbolic Natural Language Processing (NLP) systems, several linguistic phenomena, for instance the thematic role relationships between sentence constituents, such as AGENT, PATIENT, and LOCATION, can be accounted for by employing a rule-based grammar. Another approach to NLP uses the connectionist model, which has the benefits of learning, generalization and fault tolerance, among others. A third option merges the two previous approaches into a hybrid one: a symbolic thematic theory is used to supply the connectionist network with initial knowledge. Inspired by neuroscience, we propose a symbolic-connectionist hybrid system called BIOθPRED (BIOlogically plausible thematic (θ) symbolic-connectionist PREDictor), designed to reveal the thematic grid assigned to a sentence. Its connectionist architecture takes as input a featural representation of the words (based on the verb/noun WordNet classification and on the classical semantic microfeature representation) and produces as output the thematic grid assigned to the sentence. BIOθPRED is designed to "predict" the thematic (semantic) roles assigned to words in a sentence context, employing a biologically inspired training algorithm and architecture and adopting a psycholinguistic view of thematic theory.
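A schematic sketch of the input-output mapping described above (hypothetical microfeatures and hand-set weights, not BIOθPRED's trained architecture):

```python
# Score thematic roles for a word from a binary semantic-microfeature vector.
import numpy as np

FEATURES = ["animate", "concrete", "place"]   # toy microfeatures, invented here
ROLES = ["AGENT", "PATIENT", "LOCATION"]

# One weight row per role; a trained network would learn these values.
W = np.array([[ 2.0,  0.0, -1.0],    # AGENT favors animacy
              [ 0.0,  1.5, -1.0],    # PATIENT favors concreteness
              [-1.0,  0.0,  2.0]])   # LOCATION favors place-ness

def predict_role(word_features) -> str:
    """Return the highest-scoring thematic role for a feature vector."""
    scores = W @ np.asarray(word_features, dtype=float)
    return ROLES[int(np.argmax(scores))]

print(predict_role([1, 1, 0]))   # animate, concrete noun -> AGENT
print(predict_role([0, 0, 1]))   # place noun             -> LOCATION
```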
Abstract:
Let P be a linear partial differential operator with analytic coefficients. We assume that P is of the form "sum of squares", satisfying Hörmander's bracket condition. Let q be a characteristic point for P. We assume that q lies on a symplectic Poisson stratum of codimension two. General results of Okaji show that P is analytic hypoelliptic at q; hence Okaji has established the validity of Treves' conjecture in the codimension-two case. Our goal here is to give a simple, self-contained proof of this fact.
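For reference, the operator class in question, stated in LaTeX (standard definitions, not specific to this paper's proof):

```latex
% A "sum of squares" operator built from real analytic vector fields X_j:
\[
  P \;=\; \sum_{j=1}^{m} X_j^{2}.
\]
% Hörmander's bracket condition: the Lie algebra generated by
% $X_1,\dots,X_m$ under iterated brackets $[X_i,[X_j,\dots]]$ spans the
% tangent space at every point. This yields $C^\infty$ hypoellipticity;
% analytic hypoellipticity at the characteristic point $q$ is the finer
% property addressed above.
```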