11 results for Interactive video
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Matita (which means "pencil" in Italian) is a new interactive theorem prover under development at the University of Bologna. Compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique that underlies the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which this thesis' author is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to the topic of user interaction with theorem provers, to which this thesis' author was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae.
Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in the familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics together. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces indeed do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structuring scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often face the need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms. Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae acting on their rendered forms is possible too. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners.
Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of a proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof. Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to "understand" and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the computer science department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving access through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are: • to study the architecture of these tools, with the aim of understanding the source of their complexity; • to exploit that knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq. Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different.
The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences: • our internship with the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the Four Colour Theorem; • our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem. The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC. In the second part of the manuscript, many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, usually kept hidden from the user), one of which is specifically designed to be user-driven.
Abstract:
This work was carried out by the author during his PhD course in Electrical, Computer Science and Telecommunication at the University of Bologna, Faculty of Engineering, Italy. All the documentation reported here is a summary of years of work under the supervision of Prof. Oreste Andrisano, coordinator of the Wireless Communication Laboratory (WiLab) in Bologna. The subject of this thesis is the transmission of video in the context of heterogeneous networks, and in particular over a wireless channel. All the instrumentation used for the characterization of the telecommunication systems belongs to CNR (National Research Council), CNIT (Italian Inter-University Center), and DEIS (Dept. of Electrical, Computer Science, and Systems). From November 2009 to July 2010, the author worked abroad in collaboration with DLR - German Aerospace Center in Munich, Germany, in the channel coding area, developing a general-purpose decoder machine able to decode a large family of iterative codes. A patent concerning Doubly Generalized Low-Density Parity-Check codes was produced by the author, as well as some important scientific papers published in IEEE journals and conference proceedings.
Abstract:
A main objective of human movement analysis is the quantitative description of joint kinematics and kinetics. This information can help address clinical problems both in orthopaedics and in motor rehabilitation. Previous studies have shown that the assessment of kinematics and kinetics from stereophotogrammetric data requires a setup phase, special equipment and expertise to operate. Besides, this procedure may cause uneasiness in the subjects and may hinder their walking. The general aim of this thesis is the implementation and evaluation of new 2D markerless techniques, in order to contribute to the development of an alternative to traditional stereophotogrammetric techniques. At first, the focus of the study was the estimation of ankle-foot complex kinematics during the stance phase of gait. Two particular cases were considered: subjects barefoot and subjects wearing ankle socks. The use of socks was investigated in view of the development of the hybrid method proposed in this work. Different algorithms were analyzed, evaluated and implemented in order to have a 2D markerless solution for estimating the kinematics in both cases. The proposed technique was validated against a traditional stereophotogrammetric system. The implementation of the technique leads towards an easy-to-configure (and more comfortable for the subject) alternative to the traditional stereophotogrammetric system. The abovementioned technique was then improved so that the measurement of knee flexion/extension could also be performed with a 2D markerless technique. The main changes in the implementation concerned occlusion handling and background segmentation. With the additional constraints, the proposed technique was applied to the estimation of knee flexion/extension and compared with a traditional stereophotogrammetric system.
Results showed that knee flexion/extension estimates from the traditional stereophotogrammetric system and from the proposed markerless system were highly comparable, making the latter a potential alternative for clinical use. A contribution has also been made to the estimation of lower limb kinematics in children with cerebral palsy (CP). For this purpose, a hybrid technique was proposed which uses high-cut underwear and ankle socks as "segmental markers" in combination with a markerless methodology. The proposed hybrid technique differs from the abovementioned markerless technique in the choice of algorithm. Results showed that the proposed hybrid technique can become a simple and low-cost alternative to traditional stereophotogrammetric systems.
Abstract:
The purpose of this research is to contribute to the literature on organizational demography and new product development by investigating how diverse individual career histories impact team performance. We also highlight the importance of considering the institutional context and the specific labour market arrangements in which a team is embedded, in order to correctly interpret the effect of career-related diversity measures on performance. The empirical setting of the study is the videogame industry, and the teams in charge of the development of new game titles. Videogame development teams are an ideal setting in which to investigate the influence of career histories on team performance, since the development of videogames is performed by multidisciplinary teams composed of specialists with a wide variety of technical and artistic backgrounds, who carry out a significant amount of creative thinking. We investigate our research question both with quantitative methods and with a case study on the Japanese videogame industry, one of the most innovative in this sector. Our results show that career histories, in terms of occupational diversity, prior functional diversity and prior product diversity, usually have a positive influence on team performance. However, when the moderating effect of the institutional setting is taken into account, career diversity has different or even opposite effects on team performance, according to the specific national context in which a team operates.
Abstract:
Since the differential diagnosis of nocturnal paroxysmal episodes relies on VEPSG, and given the limits of that technique, the present project aims to define the diagnostic yield of alternatives to VEPSG: clinical history, home-made video and interictal EEG. Thirteen patients referred to our Department for nocturnal paroxysmal episodes were recruited consecutively. Each patient underwent a standardized diagnostic protocol. Five physicians expert in epilepsy and sleep medicine were asked to formulate a diagnostic orientation on the basis of clinical history, interictal EEG, home-made video and VEPSG. By processing the diagnostic orientations, the diagnostic yield of the examined procedures was calculated against VEPSG through the concept of "diagnostic accuracy". For 6 patients a diagnosis of Nocturnal Frontal Lobe Epilepsy could be made, for 2 a diagnosis of parasomnia, and in 5 the diagnosis remained uncertain. The diagnostic accuracy of each procedure turned out to be moderate, with slight differences between the procedures (61.5% clinical history; 66% home-made video; 69.2% interictal EEG). It is essential to further improve the diagnostic accuracy of clinical history, interictal EEG and home-made video, which can prove crucial in cases where the diagnosis is uncertain or where VEPSG is not available.
Abstract:
Neurorehabilitation is a process through which individuals affected by neurological diseases aim at complete recovery or at achieving their optimal physical, mental and social well-being. Essential elements of effective rehabilitation are: a clinical assessment by a multidisciplinary team, a targeted rehabilitation programme, and the evaluation of the outcomes through scientifically and clinically appropriate measures. The main objective of this thesis was to develop quantitative methods and tools for the treatment and motor assessment of neurological patients. Conventional rehabilitation treatments require neurological patients to perform repetitive exercises, reducing their motivation. Virtual reality and feedback can engage them in the treatment, allowing repeatability and standardization of the protocols. A tool based on augmented feedback for trunk control was developed and evaluated. Moreover, virtual reality allows the treatment to be tailored to the patient's needs. A virtual application for gait rehabilitation was developed and tested during training with multiple sclerosis patients, assessing its feasibility and acceptance and demonstrating the efficacy of the treatment. The quantitative assessment of patients' motor abilities is performed using motion capture systems. Since their use in clinical practice is limited, a methodology based on inertial sensors for assessing arm swing in parkinsonian subjects was proposed. These sensors are small, accurate and flexible, but accumulate errors over long measurements.
This problem was addressed, and the results suggest that, if the sensor is placed on the foot and the accelerations are integrated starting from the mid-stance phase, the error and its consequences on the determination of the spatial parameters are limited. Finally, a validation of the Kinect for gait tracking in a virtual environment was presented. Preliminary results allow defining the scope of use of the sensor in rehabilitation.
Abstract:
Maternal anxiety symptomatology in the prenatal period has been shown to negatively affect not only the maternal state but also subsequent infant development. However, few studies have considered the specific contribution of anxiety disorders in the prenatal period. The general objective of the study is to investigate, in the early postpartum period, the relationship between maternal anxious psychopathology and: neonatal temperament and development, the quality of maternal caregiving, and mother-infant interactive patterns. 138 women were interviewed using the SCID-I (First et al., 1997) during the third trimester of pregnancy. 31 women (22.5%) presented an anxiety disorder in the prenatal period. At 1 month postpartum, newborn behaviour was assessed with the NBAS (Brazelton, Nugent, 1995), while the mothers completed the MBAS (Brazelton, Nugent, 1995). At 3 months postpartum, a mother-infant interactive sequence was video-recorded and coded using the GRS (Murray et al., 1996). The Stranger Episode procedure (Murray et al., 2007) was used to observe maternal and infant interactive patterns in interaction with a stranger. Newborns of mothers with an anxiety disorder showed, on the NBAS, lower abilities in the organization of behavioural states and lower attentional and self-regulation abilities. Anxious mothers perceived themselves as significantly less confident in caring for their infants, rating their children as more unstable and irregular. In face-to-face interaction, they showed significantly less sensitive behaviours, being less actively engaged with their child. During the Stranger Episode, mothers with social phobia showed higher levels of anxiety and encouraged their child's interaction with the stranger significantly less. The results underline the importance of assessing maternal anxious psychopathology in the prenatal period.
The evidence confirms the relevance of a multifactorial risk model in which prenatal anxiety disorders and the quality of maternal caregiving can act synergically in affecting infant outcomes.
Abstract:
The thesis addresses the problematic role of documentation in Land art, both with regard to the image circulated by magazines and art journals, and by specifically examining the relationship with the media in the artists' poetics, devoting particular attention to De Maria, Heizer, Oppenheim, Smithson, Holt, Dibbets, Long, Fulton, and Christo and Jeanne-Claude.
Abstract:
The object of this work is a detailed analysis of a small sample of television recordings of Beethoven's Fifth Symphony, with the aim of bringing out their constructive mechanism, in both a technical and a cultural sense. The premise of the investigation is that each recording is the product of a specific authorship superimposed on the two already present in every performance of the Fifth Symphony, that of the composer and that of the performer; it is therefore defined as a "third authorship", a notion summarizing the sum of the specific contributions that lead to the production of a recording (music consultant, director, camera operators). The research examines the relationships established, case by case, among the three different authorial planes, but does not aim at a philological reconstruction: the goal is not to reconstruct the intentions of the material author, but rather to bring out from the examination of the recording, as it is given to us today (often in a version already remediated several times, generally in the form of a commercial DVD), technical, musical and cultural choices that may even have been unconscious. The detailed analysis of the recordings confirms the initial hypothesis that there is a sort of conventional system, almost a "solita forma" or standardized approach, underlying most of the recordings; the elements that can be called conventional, both in their presence and in their treatment, are various, but two aspects in particular seem constitutive: the link with the ritual of the concert, which is respected and re-embodied on television, with the construction of its own specific aura; and the presence of an implicit and essentially inescapable paradigm that places most television recordings within the conception of classical music as pure, abstract music, which must be understood on its own terms.