895 results for Metafore, Teoria della Metafora Concettuale, Corpus Linguistics, crisi economica
Abstract:
This thesis analyses an extension of classical calculus, namely fractional calculus, describing its main properties and providing concrete examples. The Hurst index and fractional Brownian motion are then defined, and it is shown how fractional calculus can be extended to stochastic calculus with respect to a fractional Brownian motion. Finally, some concepts from probability theory are recalled.
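For reference, the two central objects mentioned above have standard definitions (the notation here is assumed by this summary, not quoted from the thesis): the Riemann-Liouville fractional integral of order α > 0, and the covariance of a fractional Brownian motion B_H with Hurst index H ∈ (0, 1):

    \[
    (I^{\alpha} f)(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} f(s)\,\mathrm{d}s,
    \qquad
    \mathbb{E}\bigl[B_H(t)\,B_H(s)\bigr] = \tfrac{1}{2}\bigl(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\bigr).
    \]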
Abstract:
This work presents a proposed translation of the first two chapters of “Summer at sea”, a self-published novel by the American writer Beth Labonte released in 2015. It begins with an in-depth analysis of chick lit, the genre to which the novel belongs, and continues with a summary of the main stages in the history of translation theory. A commentary on the translation follows, highlighting the difficulties encountered and the translation solutions adopted while drafting this work.
Abstract:
The aim of this dissertation is to provide a translation from English into Italian of a specialised scientific article published in the Cambridge Working Papers in Economics series. In this text, the authors estimate the economic consequences of the earthquake that hit the Abruzzo region in 2009. An extract of this translation will be published as part of conference proceedings. The main reason behind this choice is a personal interest in specialised translation in the economic domain. Moreover, the subject of the article is of particular interest to the Italian readership. The aim of this study is to show how a non-specialised translator can tackle such a highly specialised translation with the use of appropriate terminology resources and the collaboration of field experts. The translation could be of help to other Italian linguists looking for translated material in this particular domain, where English seems to be the dominant language. In order to ensure consistent terminology and adequate style, the document has been translated with the use of different resources, such as dictionaries, glossaries and specialised corpora. I also contacted field experts and the authors of the text. The collaboration with the authors proved to be an invaluable resource, yet one to be carefully managed. This work is divided into five chapters. The first deals with domain-specific sublanguages. The second gives an overview of corpus linguistics and describes the corpora designed for the translation. The third provides an analysis of the article, focusing on syntactic, lexical and structural features, while the fourth presents the translation side by side with the source text. The fifth comments on the main difficulties encountered in the translation and the strategies used, as well as the relationship with the authors and their review of the published text. Appendix I contains the English–Italian econometric glossary.
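As a hedged illustration of the corpus work mentioned above (a minimal sketch under assumed inputs: the directory name and stop-word list are invented, and this is not the author's actual workflow), candidate terminology can be ranked by frequency over a specialised corpus:

    import re
    from collections import Counter
    from pathlib import Path

    CORPUS_DIR = Path("econ_corpus")   # assumed directory of plain-text articles
    STOPWORDS = {"the", "of", "and", "in", "to", "a", "is", "for", "that", "are"}

    def candidate_terms(top_n: int = 20) -> list[tuple[str, int]]:
        """Rank candidate terms in the corpus by raw frequency."""
        counts: Counter[str] = Counter()
        for path in CORPUS_DIR.glob("*.txt"):
            words = re.findall(r"[a-z-]+", path.read_text(encoding="utf-8").lower())
            counts.update(w for w in words if w not in STOPWORDS and len(w) > 3)
        return counts.most_common(top_n)

    for term, freq in candidate_terms():
        print(f"{freq:6d}  {term}")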
Abstract:
This work presents some mathematical results from measure theory that have had interesting consequences in inferential statistics, in connection with the concept of a sufficient statistic. The first chapter reviews some preliminary notions and states and proves the Radon-Nikodym theorem on absolutely continuous measures. The second chapter, entitled 'Applications to sufficient statistics', opens with the definitions of the objects of study and a presentation of some of their mathematical properties. The second section presents the concepts of conditional expectation and conditional probability in relation to the elements defined in the opening section. The core of the chapter is the third section, in which we define families of measures, dominated families of measures and the concept of a sufficient statistic. An important characterisation theorem for sufficient statistics over dominated families is presented there, together with a corollary describing the corresponding factorisation property. We then define homogeneous families and state a second corollary of the theorem concerning them. The example of quality control is considered next, to illustrate the notion of a sufficient statistic in a more concrete setting. The notion of pairwise sufficiency is then introduced and a second characterisation theorem, in terms of likelihood ratios, is stated. The two kinds of sufficiency are then compared; the comparison is carried out in two different situations and leads to different results in each case. The work concludes by underlining once more the real value of a sufficient statistic in terms of the information it contains.
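For orientation, the two key statements mentioned above admit compact standard formulations (the notation is assumed here, not quoted from the thesis). The Radon-Nikodym theorem provides, for a measure ν absolutely continuous with respect to a σ-finite measure μ, a density dν/dμ; the factorisation corollary characterises a sufficient statistic T for a dominated family {P_θ}:

    \[
    \nu(A) = \int_{A} \frac{\mathrm{d}\nu}{\mathrm{d}\mu}\,\mathrm{d}\mu
    \quad \text{for every measurable } A,
    \qquad
    \frac{\mathrm{d}P_{\theta}}{\mathrm{d}\mu}(x) = g_{\theta}\bigl(T(x)\bigr)\,h(x).
    \]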
Abstract:
In a scenario in which the housing emergency plays a leading role in European housing policies, the recovery of the existing building stock stands out as one of the most effective strategies for addressing the problem of social housing. Against the backdrop of severe economic crisis and energy emergency that characterises contemporary society, the value of retrofit is rapidly transforming the traditional design approach, establishing a basic principle for tomorrow's sustainability: "doing better with less". The subject of this thesis is the energy and functional requalification of a social housing building located in Bologna, in the Navile district, Bolognina area. The building is part of a courtyard block typical of this area, characterised by an "L"-shaped plan and by construction systems typical of the reconstruction of the early post-World War II period. The proposal presented is the result of the interaction of design strategies aimed at resolving the functional and energy problems affecting the complex. The intervention is characterised by the extensive use of lightweight "dry" technologies, used both to supplement the building with added volumes and to create entirely new design elements.
Abstract:
This thesis presents the measure decomposition theorem and its application to measure-preserving transformations. After giving the definitions of σ-algebra and measure and stating some theorems of measure theory, two different notions of separability are introduced, strict separability and separability, linked by a lemma. The relative density function and its properties are then described and, after defining the direct sum of measure spaces, the measure decomposition theorem is proved, which under certain hypotheses allows a measure space to be expressed as a direct sum of measure spaces. Finally, after explaining what it means for a transformation to be measure-preserving and ergodic, Von Neumann's theorem is proved, according to which measure-preserving transformations can be decomposed into ergodic parts.
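The two definitions underlying the final result can be stated briefly (standard formulations, with notation assumed here): a transformation T on a measure space (X, B, μ) is measure-preserving when preimages keep the same measure, and ergodic when its invariant sets are trivial:

    \[
    \mu\bigl(T^{-1}A\bigr) = \mu(A) \quad \text{for all } A \in \mathcal{B},
    \qquad
    T^{-1}A = A \;\Longrightarrow\; \mu(A) = 0 \ \text{or} \ \mu(X \setminus A) = 0.
    \]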
Abstract:
This work develops a formal analogy between dynamical systems and the theory of computation, in relation to the emergence of biological properties from such systems. The first chapter is devoted to extending the theory of Turing machines to a wider context of computable and weakly computable functions. We then show how a continuous dynamical system can be processed by a computing machine, and how informational properties such as universality can be naturally extended to physics through this formal bridge. In the second chapter we apply the theoretical results derived in the first to the development of a chemical system exhibiting such universality properties, paying particular attention to the physical plausibility of that system.
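As a toy illustration of the bridge described above (my example, not the thesis's: the logistic map stands in for a dynamical system, and exact rational arithmetic for machine computability), a program can compute the orbit of such a system step by step with no rounding error:

    from fractions import Fraction

    def logistic_orbit(x0: Fraction, r: Fraction, steps: int) -> list[Fraction]:
        """Exact orbit of the map x -> r*x*(1-x): every iterate is rational,
        so a machine can represent the trajectory without approximation."""
        orbit = [x0]
        for _ in range(steps):
            x0 = r * x0 * (1 - x0)
            orbit.append(x0)
        return orbit

    # Five exact iterates starting from 1/2 with r = 7/2.
    for x in logistic_orbit(Fraction(1, 2), Fraction(7, 2), 5):
        print(x)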
Abstract:
Mr. Kubon's project was inspired by the growing need for an automatic syntactic analyser (parser) of Czech that could be used in the syntactic processing of large amounts of text. Mr. Kubon notes that such a tool would be very useful, especially in the field of corpus linguistics, where creating a large-scale "tree bank" (a collection of syntactic representations of natural language sentences) is a very important step towards the investigation of the properties of a given language. The work involved in syntactically parsing a whole corpus in order to get a representative set of syntactic structures would be almost inconceivable without the help of some kind of robust (semi)automatic parser. The need for the automatic natural language parser to be robust increases with the size of the linguistic data in the corpus or in any other kind of text which is going to be parsed. Practical experience shows that apart from syntactically correct sentences, there are many sentences which contain a "real" grammatical error. These sentences may be corrected in small-scale texts, but not generally in a whole corpus. In order to complete the overall project, it was necessary to address a number of smaller problems. These were: 1. the adaptation of a suitable formalism able to describe the formal grammar of the system; 2. the definition of the structure of the system's dictionary containing all relevant lexico-syntactic information, and the development of a formal grammar able to robustly parse Czech sentences from the test suite; 3. filling the syntactic dictionary with sample data allowing the system to be tested and debugged during its development (about 1000 words); 4. the development of a set of sample sentences containing a reasonable amount of grammatical and ungrammatical phenomena covering some of the most typical syntactic constructions used in Czech. Building the formal grammar (task 2) was the main task of the project. The grammar is of course far from complete (Mr. Kubon notes that it is debatable whether any formal grammar describing a natural language may ever be complete), but it covers the most frequent syntactic phenomena, allowing for the representation of the syntactic structure of simple clauses and also the structure of certain types of complex sentences. The stress was not so much on building a wide-coverage grammar as on the description and demonstration of a method. This method uses an approach similar to that of grammar-based grammar checking. The problem of reconstructing the "correct" form of the syntactic representation of a sentence is closely related to the problem of localisation and identification of syntactic errors: without precise knowledge of the nature and location of syntactic errors it is not possible to build a reliable estimate of a "correct" syntactic tree. The incremental way of building the grammar used in this project is also an important methodological issue. Experience from previous projects showed that building a grammar by creating a huge block of metarules is more complicated than the incremental method, which begins with the metarules covering the most common syntactic phenomena and adds less important ones later; this is especially true from the point of view of testing and debugging the grammar. The sample syntactic dictionary containing lexico-syntactic information (task 3) now has slightly more than 1000 lexical items representing all classes of words.
During the creation of the dictionary it turned out that the task of assigning complete and correct lexico-syntactic information to verbs is a very complicated and time-consuming process which would itself be worth a separate project. The final task undertaken in this project was the development of a method allowing effective testing and debugging of the grammar during the process of its development. The consistency of new and modified rules of the formal grammar with those already in place is one of the crucial problems of every project aiming at the development of a large-scale formal grammar of a natural language. This method allows for the detection of any discrepancy or inconsistency of the grammar with respect to a test-bed of sentences containing all syntactic phenomena covered by the grammar. This is not only the first robust parser of Czech, but also one of the first robust parsers of a Slavic language. Since Slavic languages display a wide range of common features, it is reasonable to claim that this system may serve as a pattern for similar systems for other languages. To transfer the system to another language it is only necessary to revise the grammar and to change the data contained in the dictionary (but not necessarily the structure of the primary lexico-syntactic information). The formalism and methods used in this project can be applied to other Slavic languages without substantial changes.
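A hedged sketch of the testing method described above (all names and the toy acceptor are placeholder assumptions; the project's actual tooling is not shown): keep a test-bed of sentences paired with the verdicts the grammar is expected to produce, re-run it after every rule change, and report any discrepancy:

    # Test-bed: each sentence is paired with the verdict the current
    # grammar is expected to produce (True = parses, False = rejected).
    TEST_BED = [
        ("Sentence expected to parse.", True),
        ("Sentence with a known error.", False),
    ]

    def parse(sentence: str) -> bool:
        # Toy stand-in for the real robust parser: accept a sentence only
        # if every word is in a tiny lexicon. Replace with the actual parser.
        lexicon = {"sentence", "expected", "to", "parse"}
        return all(w in lexicon for w in sentence.lower().rstrip(".").split())

    def check_consistency() -> list[str]:
        # Any entry returned here signals that a new or modified rule
        # broke coverage the grammar previously had.
        return [s for s, expected in TEST_BED if parse(s) != expected]

    bad = check_consistency()
    print("discrepancies:", bad if bad else "none")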
Abstract:
Discourse connectives are lexical items indicating coherence relations between discourse segments. Even though many languages possess a whole range of connectives, important divergences exist cross-linguistically in the number of connectives that are used to express a given relation. For this reason, connectives are not easily paired with a univocal translation equivalent across languages. This paper is a first attempt to design a reliable method to annotate the meaning of discourse connectives cross-linguistically using corpus data. We present the methodological choices made to reach this aim and report three annotation experiments using the framework of the Penn Discourse Tree Bank.
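As a hedged sketch of what a cross-linguistic annotation record for a connective might contain (the field names are assumptions of this summary, not the paper's scheme; the sense label merely follows the general style of the Penn Discourse Tree Bank hierarchy):

    from dataclasses import dataclass

    @dataclass
    class ConnectiveAnnotation:
        language: str      # e.g. "en" or "fr"
        connective: str    # the lexical item marking the relation
        segment_1: str     # first discourse segment
        segment_2: str     # second discourse segment
        sense: str         # coherence relation, e.g. "Comparison.Contrast"

    example = ConnectiveAnnotation(
        language="en",
        connective="while",
        segment_1="John stayed home",
        segment_2="Mary went out",
        sense="Comparison.Contrast",
    )
    print(example)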
Abstract:
Edited by Annette Kern-Stähler, Beatrix Busse, and Wietse de Boer. The essays collected in The Five Senses in Medieval and Early Modern England examine the interrelationships between sense perception and secular and Christian cultures in England from the medieval into the early modern periods. They address canonical texts and writers in the fields of poetry, drama, homiletics, martyrology and early scientific writing, and they espouse methods associated with the fields of corpus linguistics, disability studies, translation studies, art history and archaeology, as well as approaches derived from traditional literary studies. Together, these papers constitute a major contribution to the growing field of sensorial research that will be of interest to historians of perception and cognition as well as to historians with more generalist interests in medieval and early modern England.
Abstract:
In this essay I review a recent research study from Italy, “Le Radici nel Futuro – La Continuità della Relazione Genitoriale oltre la Crisi Familiare,” edited by Paola Dallanegra (2005). The contributors focus on “Spazio Neutro,” a multi-purpose child welfare agency in southern Italy that facilitates parent-child visiting and relationships between children placed in out-of-home care and their families. They delineate and illustrate, through comments from family members, selected principles and strategies for maintaining such continuity throughout the out-of-home placement.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools would have to produce annotations for a common level, which would then be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
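As a hedged sketch of this combination idea (an illustration of the general technique, not OntoTag's actual mechanism; the tagger outputs are invented), annotations produced by several POS taggers for the same tokens can be reconciled by majority vote, so that isolated errors are outvoted:

    from collections import Counter

    # Invented outputs of three hypothetical POS taggers over the same tokens.
    tagger_outputs = [
        ["DET", "NOUN", "VERB"],   # tagger A
        ["DET", "NOUN", "NOUN"],   # tagger B (one error)
        ["DET", "NOUN", "VERB"],   # tagger C
    ]

    def majority_vote(outputs: list[list[str]]) -> list[str]:
        """For each token position, keep the tag most taggers agree on."""
        return [Counter(tags).most_common(1)[0][0] for tags in zip(*outputs)]

    print(majority_vote(tagger_outputs))   # ['DET', 'NOUN', 'VERB']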
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Between 2005 and 2013 the Italian rules on business crises were reformed several times, with the aim of strengthening the "concordato preventivo" (preventive composition with creditors), in particular in its function as an instrument for preserving the continuity of firms whose crisis is not irreversible. Even taking into account the effects of the economic crisis, these reforms achieved the goal of widening recourse to the concordato, especially after the introduction of the "in bianco" (blank) variant, which allows the presentation of the restructuring plan to be postponed. The reforms also contributed to a slight improvement in business continuity. Nevertheless, only a limited share of firms (about 4.5 per cent) survives the concordato, whose main function has remained that of providing a negotiated liquidation instrument, an alternative to bankruptcy, in which the judicial authorities play a greater role. Recourse to the concordato turns out to be correlated not only with structural characteristics of the firm (a higher weight of tangible fixed assets) and of its credit relationships (a lower weight of secured loans), but also with the length of bankruptcy proceedings in the relevant court.
Abstract:
A study of the semantic evolution of enze, enza and of the phraseological units in which they take part, from the earliest written attestations to contemporary usage in Catalan. The evolution of the other Romance derivatives of Latin INDEX, -ICIS is taken into account. A cognitively oriented analysis is applied, and the study is grounded in the exploitation of textual corpora (old and contemporary), among which is the literary and grammatical work of Enric Valor.