730 results for Interactive computing.
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to bridge the gap between the high-level language humans use to communicate and reason about mathematics and the lower-level language that a machine is able to “understand” and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate user requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the computer science department of the University of Bologna as the result of composing all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at providing web access to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the Four Colour Theorem;
• our collaboration on the D.A.M.A. Project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue’s Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using “black box” automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping like CIC.
In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user driven.
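The inheritance and coercion issue mentioned above can be made concrete with a small sketch. The following Lean fragment is purely illustrative and not taken from the thesis or from Matita's sources (the names Magma, Monoid and square are hypothetical); it shows only the basic mechanism by which records plus a declared coercion recover the effect of structure inheritance in a type theory without subtyping, leaving aside multiple inheritance with shared substructures.

  -- Illustrative only: structure inheritance recovered via a coercion,
  -- in the spirit of coercive subtyping over a CIC-like type theory.
  structure Magma where
    carrier : Type
    op      : carrier → carrier → carrier

  structure Monoid where
    toMagma   : Magma
    unit      : toMagma.carrier
    unit_left : ∀ x, toMagma.op unit x = x

  -- The coercion lets every Monoid be used wherever a Magma is expected.
  instance : Coe Monoid Magma := ⟨Monoid.toMagma⟩

  -- Definitions stated over the weaker structure apply to the richer one.
  def square (M : Magma) (x : M.carrier) : M.carrier := M.op x x

  example (M : Monoid) : Magma := M  -- the coercion is inserted automatically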
Abstract:
[EN]The use of large corpora in the study of languages is a well-established tradition. In the same vein, there is also a substantial body of scholarship on the use of corpora for writing grammars of languages; the COBUILD grammar and dictionary and the Longman Grammar of Spoken and Written English are cases in point. This means that corpora have been analyzed in order to identify patterns in languages that learners can later practise, following the patterns described and exemplified with real instances.
Abstract:
[EN]This paper focuses on four different initialization methods for determining the initial shape for the AAM algorithm, and on their performance in two different classification tasks, both on the DaFEx facial expression database and on real-world data obtained from a robot’s point of view.
Abstract:
The rapid growth of the Internet and of the number of connected hosts is increasingly driving the emergence of new server-side technologies and applications, turning the client into a thin client. Cloud Computing offers a valid platform for these new technologies, but it has to face several problems, among them an ever-growing energy demand, which results in an inevitable increase in the greenhouse gases produced indirectly. In this thesis we analyse the energy problems related to Cloud Computing and their possible solutions, finally building a taxonomy of the most important Cloud Computing offerings on the current market.
Abstract:
In this thesis we analyse what cloud computing is, illustrating service level agreement contracts and the solutions available on the market.
Abstract:
Design and implementation of a cloud computing platform for provisioning virtual machines, in particular machines used as proxy servers by VoIP applications.