746 results for syntax


Relevance:

10.00%

Publisher:

Abstract:

This thesis intends to investigate two aspects of Constraint Handling Rules (CHR): it proposes a compositional semantics and a technique for program transformation. CHR is a concurrent committed-choice constraint logic programming language consisting of guarded rules, which transform multisets of atomic formulas (constraints) into simpler ones until exhaustion [Frü06], and it belongs to the family of declarative languages. It was initially designed for writing constraint solvers, but it has recently also proven to be a general-purpose language, being Turing equivalent [SSD05a]. Compositionality is the first CHR aspect to be considered. A trace-based compositional semantics for CHR was previously defined in [DGM05]. The reference operational semantics for that compositional model was the original operational semantics of CHR, which, due to the propagation rule, admits trivial non-termination. In this thesis we extend the work of [DGM05] by introducing a more refined trace-based compositional semantics that also includes the history. The use of a history is a well-known technique in CHR that makes it possible to track the application of propagation rules and consequently to avoid trivial non-termination [Abd97, DSGdlBH04]. Naturally, the reference operational semantics for our new compositional one also uses the history to avoid trivial non-termination. Program transformation is the second CHR aspect to be considered, with particular regard to the unfolding technique. This technique is an appealing approach that allows us to optimize a given program, in particular to improve its run-time efficiency or space consumption. Essentially, it consists of a sequence of syntactic program manipulations which preserve a kind of semantic equivalence, called qualified answer [Frü98], between the original program and the transformed ones. Unfolding is one of the basic operations used by most program transformation systems: it consists of replacing a procedure call by its definition. In CHR, every conjunction of constraints can be considered a procedure call, every CHR rule can be considered a procedure, and the body of the rule represents the definition of the call. While there is a large body of literature on transformation and unfolding of sequential programs, very few papers have addressed this issue for concurrent languages. We define an unfolding rule, show its correctness, and discuss some conditions under which it can be used to delete an unfolded rule while preserving the meaning of the original program. Finally, we show that confluence and termination are maintained between the original and transformed programs. This thesis is organized as follows. Chapter 1 gives some general notions about CHR. Section 1.1 outlines the history of programming languages with particular attention to CHR and related languages. Then, Section 1.2 introduces CHR using examples. Section 1.3 gives some preliminaries which will be used throughout the thesis. Subsequently, Section 1.4 introduces the syntax and the operational and declarative semantics of the first CHR language proposed. Finally, the methodologies for solving the problem of trivial non-termination related to propagation rules are discussed in Section 1.5. Chapter 2 introduces a compositional semantics for CHR in which the propagation rules are considered. In particular, Section 2.1 contains the definition of the semantics; then, Section 2.2 presents the compositionality results. Afterwards, Section 2.3 presents the correctness results. Chapter 3 presents a particular program transformation known as unfolding. This transformation needs a particular syntax, called annotated syntax, which is introduced in Section 3.1; the related modified operational semantics is presented in Section 3.2. Subsequently, Section 3.3 defines the unfolding rule and proves its correctness. Then, Section 3.4 discusses the problems related to the replacement of a rule by its unfolded version, which in turn gives a correctness condition that holds for a specific class of rules. Section 3.5 proves that confluence and termination are preserved by the program modifications introduced. Finally, Chapter 4 concludes by discussing related work and directions for future work.
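To make the role of the propagation history concrete, the following is a minimal Python sketch (the thesis itself does not use Python; the leq transitivity rule and the token shape are just the textbook CHR example): without the history set, the propagation rule would keep re-firing on the same pair of constraints and never terminate; recording the identifiers of the constraints each rule has already fired on blocks that trivial non-termination.

    from itertools import permutations

    # Constraint store: each constraint gets a unique identifier.
    store = {1: ("leq", "a", "b"), 2: ("leq", "b", "c")}
    history = set()          # tokens (rule_name, id1, id2) already fired
    next_id = 3

    def fire_transitivity():
        """Apply leq(X,Y), leq(Y,Z) ==> leq(X,Z) once, if a fresh match exists."""
        global next_id
        for (i, ci), (j, cj) in permutations(store.items(), 2):
            (_, x, y1), (_, y2, z) = ci, cj
            token = ("transitivity", i, j)
            if y1 == y2 and token not in history:
                history.add(token)                # remember the application
                store[next_id] = ("leq", x, z)    # propagation keeps ci and cj
                next_id += 1
                return True
        return False                              # no fresh, unfired match left

    while fire_transitivity():                    # terminates thanks to the history
        pass
    print(sorted(store.values()))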

Relevance:

10.00%

Publisher:

Abstract:

Data from various studies carried out in recent years in Italy on the problem of school dropout in secondary education show that difficulty in studying mathematics is one of the most frequent sources of discomfort reported by students. Nevertheless, it is definitely unrealistic to think we can do without such knowledge in today's society: mathematics is widely taught in secondary school and is not confined to technical-scientific courses only. It is reasonable to say that, although students may choose academic courses that are apparently far removed from mathematics, all of them will have to come to terms with this subject sooner or later in their lives. Among the reasons for the discomfort caused by the study of mathematics, some concern the very nature of the subject and in particular the complex symbolic language through which it is expressed. In fact, mathematics is a multimodal system composed of oral and written verbal texts, symbolic expressions such as formulae and equations, figures and graphs. For this reason, the study of mathematics represents a real challenge for those who suffer from dyslexia: a constitutional condition limiting a person's performance in reading and writing activities and, in particular, in the study of mathematical content. Here the difficulties in working with verbal and symbolic codes entail, in turn, difficulties in the comprehension of the texts from which to deduce the operations that, once combined, would lead to the final solution of the problem. Information technologies may support this learning disorder effectively. However, these tools have some implementation limits that restrict their use in the study of scientific subjects. Word processors with speech synthesis are currently used to compensate for reading difficulties within the area of classical studies, but they are not used in mathematics. This is because the speech synthesiser (or rather the screen reader supporting it) is not able to interpret anything that is not textual, such as symbols, images and graphs. The DISMATH software, which is the subject of this project, would allow dyslexic users to read technical-scientific documents with the help of speech synthesis, to understand the spatial structure of formulae and matrices, and to write documents with technical-scientific content in a format compatible with the main scientific editors. The system uses LaTeX, a mathematical text language, as its mediation system. It is set up as a LaTeX editor whose graphical interface, in line with the main commercial products, offers some additional specific functions designed to support the needs of users who are not able to manage verbal and symbolic codes on their own. The LaTeX is translated in real time into standard symbolic notation and read by the speech synthesiser in natural language, in order to increase, through this bimodal representation, the ability to process information. The understanding of a mathematical formula through its reading is made possible by the deconstruction of the formula itself and its "tree" representation, which makes it possible to identify the logical elements composing it. Users, even without knowing the LaTeX language, are able to write whatever scientific document they need: the symbolic elements are recalled through dedicated menus and automatically translated by the software, which manages the correct syntax. The final aim of the project, therefore, is to implement an editor enabling dyslexic people (but not only them) to manage mathematical formulae effectively, through the integration of different software tools, thus also allowing better teacher/learner interaction.
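A rough sketch of the underlying idea follows; the DISMATH pipeline itself is not described in enough detail here to reproduce it, so the function name, the regex-based parsing and the coverage (a single fraction) are purely illustrative. A formula written in LaTeX form is deconstructed into its logical parts and rendered as natural language for a speech synthesiser.

    import re

    def read_fraction(latex: str) -> str:
        """Turn a simple \\frac{...}{...} expression into a spoken-language string.
        Illustrative only: real LaTeX needs a proper parser, not a regex."""
        match = re.fullmatch(r"\\frac\{([^{}]*)\}\{([^{}]*)\}", latex.strip())
        if not match:
            return latex                     # fall back to reading the raw source
        numerator, denominator = match.groups()
        spoken_n = numerator.replace("+", " plus ").replace("-", " minus ")
        spoken_d = denominator.replace("+", " plus ").replace("-", " minus ")
        return f"the fraction with numerator {spoken_n} and denominator {spoken_d}"

    # The tree view of \frac{a+b}{c} has a 'fraction' root with two children,
    # which is what makes the structure accessible to a screen reader:
    print(read_fraction(r"\frac{a+b}{c}"))
    # -> the fraction with numerator a plus b and denominator c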

Relevance:

10.00%

Publisher:

Abstract:

Vocalisations of songbirds (Passeriformes) are commonly regarded as carriers of phylogenetic information, although direct evidence from comparative bioacoustic studies is rare. My dissertation addresses this topic using three groups of songbirds as examples: kinglets (Regulus), golden-spectacled warblers (Seicercus) and related leaf warblers (Phylloscopus), and Great Tits (Parus major). In addition to collecting bioacoustic data, a molecular phylogeny based on cytochrome-b sequences was constructed for each group, and homoplasy indices (CI, RI and RC) were calculated for various acoustic characters. The phylogenetically informative song structures within the genera Regulus and Seicercus/Phylloscopus are all syntax characters, mostly of the entire strophe, more rarely of strophe sections. In the kinglets (Regulus) such syntax characters are innate, whereas element characters are learned and phylogenetically uninformative. The song syntax, which is homogeneous within the Great Tits, becomes an informative character only at a higher taxonomic level (the genus Parus). The acoustic divergence index between pairs of taxa, calculated from a character matrix, increases significantly in proportion to genetic distance. This is the first quantification of the relationship between genetic and acoustic differentiation. The molecular phylogeny also sheds light on previously unresolved phylogenetic relationships within all three taxa. These are discussed with regard to the phylogenetic and the biological species concept. The species status of the Tenerife Goldcrest (Regulus teneriffae) and of the bokharensis great tits is questionable owing to their close relationship to individual subspecies of the Goldcrest and the Great Tit, respectively.

Relevance:

10.00%

Publisher:

Abstract:

Two of the main features of today's complex software systems, like pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems calls for the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because of the different syntax adopted). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments, for example in experiments with tuple-based coordination within a Semantic Web middleware and in analogous approaches surveyed in the literature. However, such solutions appear to be designed to tackle coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) an overview of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
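The syntactic matching problem mentioned above can be made concrete with a minimal Linda-style tuple space, sketched here in generic Python (this is not the coordination infrastructure used in the thesis; the TupleSpace class and the example tuples are invented for illustration). Two tuples carrying the same information fail to match as soon as their structure or vocabulary differs, which is exactly the gap that semantic tuple centres address.

    class TupleSpace:
        """A minimal Linda-like tuple space with purely syntactic matching."""
        def __init__(self):
            self.tuples = []

        def out(self, *tup):
            self.tuples.append(tup)            # insert a tuple

        def rd(self, *template):
            # None in the template acts as a wildcard; everything else must
            # match positionally and exactly (syntactic matching).
            for tup in self.tuples:
                if len(tup) == len(template) and all(
                    t is None or t == v for t, v in zip(template, tup)
                ):
                    return tup
            return None

    ts = TupleSpace()
    ts.out("person", "alice", 30)                # ("person", name, age)
    ts.out("human", {"name": "bob", "age": 25})  # same information, other shape

    print(ts.rd("person", None, None))   # -> ('person', 'alice', 30)
    print(ts.rd("human", "bob", 25))     # -> None: structure differs, no match
    print(ts.rd("person", "bob", None))  # -> None: same info, different vocabulary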

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis is to explore the production of software systems for embedded systems by means of techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model that characterises the fundamental concepts of embedded systems. This model tries to abstract from the particular platform in use and to identify the abstractions that characterise the world of embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly popular because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, so it can be defined by a set of EBNF rules. The technology used to define the meta-model is Xtext: a framework that allows EBNF rules to be written and that automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented with the Xtend2 language, which makes it possible to explore the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides practically the entire schematic part of the application, while the development of the business logic is left to the application designer. After the definition of the meta-model of an embedded system, the level of abstraction was raised towards the definition of the part of the meta-model concerning the interaction of an embedded system with other systems. The perspective thus shifts to that of a System, understood as a set of interacting individual (concentrated) systems, defined from the point of view of the concentrated system whose model is being specified. The thesis also presents a case study which, although fairly simple, provides an example and a tutorial for developing applications with the meta-model, and shows how the task of the application designer becomes rather simple and immediate, provided it rests on a sound analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
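A toy version of the generation step is sketched below in Python rather than Xtext/Xtend2; the Sensor and Actuator classes, pin numbers and device names are invented for illustration, while pinMode, INPUT and OUTPUT are the standard Arduino calls that the generated skeleton targets. A small platform-independent model of devices yields the "schematic part" of an Arduino sketch, leaving the business logic to the application designer.

    from dataclasses import dataclass

    # A tiny platform-independent "model": just devices attached to pins.
    @dataclass
    class Sensor:
        name: str
        pin: int

    @dataclass
    class Actuator:
        name: str
        pin: int

    def generate_arduino_skeleton(sensors, actuators) -> str:
        """Emit the schematic part of an Arduino sketch (pin setup only);
        the loop() body, i.e. the business logic, is left to the designer."""
        lines = ["void setup() {"]
        lines += [f"  pinMode({s.pin}, INPUT);   // sensor: {s.name}" for s in sensors]
        lines += [f"  pinMode({a.pin}, OUTPUT);  // actuator: {a.name}" for a in actuators]
        lines += ["}", "", "void loop() {", "  // business logic goes here", "}"]
        return "\n".join(lines)

    print(generate_arduino_skeleton(
        sensors=[Sensor("lightSensor", 2)],
        actuators=[Actuator("led", 13)],
    ))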

Relevance:

10.00%

Publisher:

Abstract:

Urban "popular" musics, generally neglected in the ethnomusicological literature, have been almost completely ignored in the case of Romania. The present study aims to fill this gap, at least in part, by investigating this musical phenomenon in the Bucharest of the 1930s and 1940s. The music examined is nevertheless placed within a broader historical frame, starting from the end of the eighteenth century, and related to certain productions of rural origin with which it is closely connected. The case of Maria Lătărețu (1911-1972) has proved particularly fruitful in this respect, since the singer belonged to both musical worlds, rural and urban, and mastered both repertoires with ease. After moving to the capital in the 1930s, she became one of the most prominent figures of the phenomenon known as muzică populară (an eminently urban and bourgeois musical creation, with roots, however, in the world of rural music). The analysis of Lătărețu's repertoire (or rather, of her two repertoires), also in comparison with neighbouring repertoires, has made it possible to understand more closely some of the musical mechanisms underlying this creation. This is a musical genre that did not arise from nothing after the war, but rather continues a tradition of urban music, locally characterised yet influenced by the model of the Western European song, that dates back at least to the beginning of the twentieth century. Through procedures partly already tried out by art-music composers who, since the nineteenth century, in Romania as elsewhere, had attempted to create melodies in folk style or to harmonise music of peasant origin, the rural melodies in the singer's baggage were transformed into something new. As the last chapter demonstrates, this transformation does not affect only the surface level but deeply involves the musical syntax.

Relevance:

10.00%

Publisher:

Abstract:

Ontology production is a fundamental process for the growth of the Semantic Web, since ontologies are the formal vocabularies with which to structure the Web of Data. Graphical ontology notations are the ideal means for designing sensible, well-structured OWL ontologies. However, the subsequent ontology-generation phase forces the user into an awkward change of both perspective and tooling. This thesis therefore proposes GraMOS, Graffoo to Manchester OWL Syntax, a transformation engine from Graffoo models to formal ontologies that merges the two phases of ontology design and ontology generation.
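A drastically simplified sketch of the transformation idea follows, in plain Python over a toy in-memory model; it is not the GraMOS engine (which works on Graffoo diagrams), and the to_manchester function and class names are invented for illustration. It only shows the target notation: Manchester OWL Syntax class frames emitted from a simple model.

    def to_manchester(classes: dict) -> str:
        """Render a {class_name: superclass_or_None} mapping as Manchester
        OWL Syntax class frames. Toy example only: real Graffoo models also
        carry properties, restrictions, annotations and prefixes."""
        frames = []
        for name, superclass in classes.items():
            frame = f"Class: {name}"
            if superclass:
                frame += f"\n    SubClassOf: {superclass}"
            frames.append(frame)
        return "\n\n".join(frames)

    print(to_manchester({"Agent": None, "Person": "Agent", "Organization": "Agent"}))
    # Class: Agent
    #
    # Class: Person
    #     SubClassOf: Agent
    # ...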

Relevance:

10.00%

Publisher:

Abstract:

This thesis is the first study devoted to simultaneous interpreting from Polish into Italian. The research seeks to identify how experienced interpreters handle some typical difficulties of Polish syntax that diverge strongly from Italian. The choice to study nominal chains stems from comparing the findings of contrastive linguistics with a survey among the interpreters accredited to the European institutions for that language combination. The first chapter offers an overview of past and present contacts between Italy and Poland and a reflection on the Polish language from a contrastive perspective with Italian. The second chapter focuses on research in simultaneous interpreting, in particular on contrastive studies and on the discussion of the strategies used by interpreters. The third chapter examines the context of the study, namely the European institutions, multilingualism and the language regime of the European Parliament. The fourth chapter contains the empirical part of the work, conducted on a large corpus of data: all the speeches delivered in Polish during the parliamentary sittings in Strasbourg and Brussels in 2011 and in the first half of 2009 were transcribed and analysed, together with the corresponding interpretations into Italian (over 9 hours of speech per language in total). The analysis shows that in most cases the interpreter tries, despite the speaker's delivery speed, to reproduce the message faithfully. However, when this is not possible, interpreters were observed to resort deliberately to omitting information that can be inferred from the context or from the listener's prior knowledge. Reduction is therefore not an emergency strategy but a resource to be applied consciously in order to overcome the difficulties posed by long sequences of nouns.

Relevance:

10.00%

Publisher:

Abstract:

In this thesis we present and study an object-oriented language characterized by two different types of objects, passive and active objects, for which we define the syntax and the operational semantics. For this language we also define the type system, which is used both for type checking and for the extraction of behavioral types: abstract descriptions of the behavior of methods that are used in deadlock analysis. Programs can manifest deadlocks due to programmer errors. To statically identify such possible unintended behaviors, we studied and implemented a deadlock analysis technique based on behavioral types.
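The behavioral-type analysis itself is not reproduced here. As a plainly different, generic stand-in, the Python sketch below only conveys the kind of property such an analysis looks for: a cycle in a wait-for graph among (here, hypothetical) active objects waiting on each other's results.

    def has_deadlock(waits_for: dict) -> bool:
        """Detect a cycle in a wait-for graph {task: set_of_tasks_it_waits_on}
        via depth-first search. A cycle means a possible deadlock."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {task: WHITE for task in waits_for}

        def visit(task) -> bool:
            colour[task] = GREY
            for other in waits_for.get(task, ()):
                if colour.get(other, WHITE) == GREY:
                    return True                        # back edge: cycle found
                if colour.get(other, WHITE) == WHITE and visit(other):
                    return True
            colour[task] = BLACK
            return False

        return any(colour[t] == WHITE and visit(t) for t in waits_for)

    # Two active objects waiting on each other's results -> deadlock.
    print(has_deadlock({"a": {"b"}, "b": {"a"}}))   # True
    print(has_deadlock({"a": {"b"}, "b": set()}))   # False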

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, the author presents a query language for an RDF (Resource Description Framework) database and discusses its applications in the context of the HELM project (the Hypertextual Electronic Library of Mathematics). This language aims at meeting the main requirements coming from the RDF community; in particular, it includes: a human-readable textual syntax and a machine-processable XML (Extensible Markup Language) syntax, both for queries and for query results; a rigorously specified formal semantics; a graph-oriented RDF data access model capable of exploring an entire RDF graph (including both RDF Models and RDF Schemata); a full set of Boolean operators to compose query constraints; fully customizable and highly structured query results with a 4-dimensional geometry; and some constructions taken from ordinary programming languages that simplify the formulation of complex queries. The HELM project aims at integrating the modern tools for the automation of formal reasoning with the most recent electronic publishing technologies, in order to create and maintain a hypertextual, distributed virtual library of formal mathematical knowledge. In the spirit of the Semantic Web, the documents of this library include RDF metadata describing their structure and content in a machine-understandable form. Using the author's query engine, HELM exploits this information to implement functionalities allowing the interactive and automatic retrieval of documents on the basis of content-aware requests that take into account the mathematical nature of these documents.
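The author's query language is not reproduced here. As a generic point of comparison only, the same graph-oriented, template-based style of access over RDF triples looks like this with the rdflib Python library; the example.org namespace, the resources and the properties are invented for illustration.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    EX = Namespace("http://example.org/")   # illustrative namespace
    g = Graph()
    g.add((EX.theorem1, RDF.type, EX.Theorem))
    g.add((EX.theorem1, EX.title, Literal("Fundamental theorem of arithmetic")))
    g.add((EX.lemma1, RDF.type, EX.Lemma))

    # Graph-oriented access: a triple template with None as wildcard,
    # conceptually similar to exploring the RDF graph under constraints.
    for subject, _, _ in g.triples((None, RDF.type, EX.Theorem)):
        for _, _, title in g.triples((subject, EX.title, None)):
            print(subject, title)   # prints each theorem and its title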

Relevance:

10.00%

Publisher:

Abstract:

This dissertation presents and analyses the translation from English into Italian of the first twenty-five chapters of Habibi, a novel for teenagers and young adults written by the Palestinian-American author Naomi Shihab Nye. It has been necessary to present the theoretical approach to literary translation studies in order to focus attention on the translation problems and to provide suitable solutions and strategies. The first chapter gives a historical perspective on children's literature from the end of the nineteenth century to today, distinguishing the Italian and the American contexts. Its central issue is the description of the evolution of this literary genre, which developed to satisfy the needs of the educational system. At the end of the chapter, we outline the main traits characterizing children's literature and the image of its ideal reader. The second chapter provides an overview of Palestinian literature, especially the features of the contemporary novel and of Diaspora literature. Afterwards, we present the author Naomi Shihab Nye and the book Habibi. In the third chapter we analyse the novel, focusing on the rhythm of the narration, on the linguistic registers and on the textual peculiarities found during the reading stage, in relation to the features of children's literature. The fourth chapter consists of the translation of the first twenty-five chapters of the novel. The fifth chapter begins with a section on translation theory and literary translation studies, focusing on the translation of children's literature. Finally, there is the translation analysis, in which we present the strategies and choices of the translation process, along with some practical examples with reference to theoretical studies. Special attention is paid to culture-specific items, lexicon and syntax. The source text can be found in the appendix.

Relevance:

10.00%

Publisher:

Abstract:

A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establishing this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze and, due to their static nature, do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a runtime model of features. Our approach analyzes features on a running system, makes it possible to grow feature representations by exercising different scenarios of the same feature, and identifies execution elements down to the sub-method level. We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study where we achieve a more complete feature representation by exercising and merging variants of feature behavior, and we demonstrate the efficiency of our technique with benchmarks.
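By way of illustration only, the following is a rough Python analogue of the core idea (it is not the original implementation): structural elements obtained from an abstract syntax tree are annotated with the features that exercised them, and the annotations grow as more scenarios of the same feature are run. The source snippet, function names and feature names are invented.

    import ast
    from collections import defaultdict

    source = """
    def login(user): ...
    def render_cart(items): ...
    """

    tree = ast.parse(source)
    feature_map = defaultdict(set)       # function name (AST node) -> features

    def record(feature_name, executed_functions):
        """Grow the feature representation: merge the functions covered by one
        more exercised scenario into the annotations of the structural elements."""
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef) and node.name in executed_functions:
                feature_map[node.name].add(feature_name)

    # Two scenarios of the same "checkout" feature exercise different code paths;
    # merging them yields a more complete feature representation.
    record("checkout", {"login", "render_cart"})
    record("checkout", {"render_cart"})
    record("authentication", {"login"})

    print({name: sorted(features) for name, features in feature_map.items()})
    # {'login': ['authentication', 'checkout'], 'render_cart': ['checkout']}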

Relevance:

10.00%

Publisher:

Abstract:

Domain-specific languages (DSLs) are increasingly used as embedded languages within general-purpose host languages. DSLs provide a compact, dedicated syntax for specifying parts of an application related to specialized domains. Unfortunately, such language extensions typically do not integrate well with the development tools of the host language. Editors, compilers and debuggers are either unaware of the extensions, or must be adapted at a non-trivial cost. We present a novel approach to embed DSLs into an existing host language by leveraging the underlying representation of the host language used by these tools. Helvetia is an extensible system that intercepts the compilation pipeline of the Smalltalk host language to seamlessly integrate language extensions. We validate our approach by case studies that demonstrate three fundamentally different ways to extend or adapt the host language syntax and semantics.
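Helvetia targets Smalltalk; as a loose Python analogue only, the sketch below rewrites calls to an invented greet_dsl marker into ordinary host-language code before compilation, mimicking the idea of intercepting the host compilation pipeline so that the extension is handled by the standard tooling.

    import ast

    class InlineGreetingDSL(ast.NodeTransformer):
        """Rewrite calls to the invented greet_dsl(...) marker into ordinary
        host-language constants at compile time, mimicking how a compilation
        pipeline hook can absorb an embedded language extension."""
        def visit_Call(self, node):
            self.generic_visit(node)
            if isinstance(node.func, ast.Name) and node.func.id == "greet_dsl":
                text = node.args[0].value          # the embedded DSL "source"
                # Translate the DSL fragment into a plain host-language constant.
                return ast.copy_location(ast.Constant(value=f"Hello, {text}!"), node)
            return node

    source = 'print(greet_dsl("Helvetia"))'
    tree = ast.fix_missing_locations(InlineGreetingDSL().visit(ast.parse(source)))
    exec(compile(tree, "<dsl>", "exec"))   # -> Hello, Helvetia!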

Relevance:

10.00%

Publisher:

Abstract:

Lint-like program checkers are popular tools that ensure code quality by verifying compliance with best practices for a particular programming language. The proliferation of internal domain-specific languages and models, however, poses new challenges for such tools. Traditional program checkers produce many false positives and fail to accurately check constraints, best practices, common errors, possible optimizations and portability issues particular to domain-specific languages. We advocate the use of dedicated rules to check domain-specific practices. We demonstrate the implementation of domain-specific rules, the automatic fixing of violations, and their application to two case studies: (1) Seaside defines several internal DSLs through a creative use of the syntax of the host language; and (2) Magritte adds meta-descriptions to existing code by means of special methods. Our empirical validation demonstrates that domain-specific program checking significantly improves code quality when compared with general-purpose program checking.
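As a sketch of what a dedicated rule can look like (Python and an invented fluent query DSL stand in here for Smalltalk, Seaside and Magritte), a domain-specific checker can enforce a practice that a general-purpose linter knows nothing about, e.g. that every query(...) call chain ends with .execute(). The rule name, the DSL and the crude string-based heuristic are all illustrative.

    import ast

    RULE = "query() chains must be terminated with .execute()"

    def check_unterminated_queries(source: str):
        """Domain-specific rule: flag query(...) call chains whose outermost
        call is not .execute(). A generic linter would see nothing wrong here."""
        violations = []
        for node in ast.walk(ast.parse(source)):
            if (isinstance(node, ast.Expr)
                    and isinstance(node.value, ast.Call)
                    and "query" in ast.dump(node.value)          # part of a query chain
                    and not (isinstance(node.value.func, ast.Attribute)
                             and node.value.func.attr == "execute")):
                violations.append((node.lineno, RULE))
        return violations

    print(check_unterminated_queries(
        "query(User).where(active=True)\n"
        "query(User).where(active=True).execute()\n"
    ))
    # [(1, 'query() chains must be terminated with .execute()')]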