974 results for Language processing
Abstract:
In the architecture of a natural language processing system based on linguistic knowledge, two types of components are important: the knowledge databases and the processing modules. One of the knowledge databases is the lexical database, which is responsible for providing the lexical units and their properties to the processing modules. Systems that process two or more languages require bilingual and/or multilingual lexical databases. These databases can be constructed by aligning distinct monolingual databases. In this paper, we present the interlingua and the strategy for aligning the two monolingual databases in REBECA, which only stores concepts from the “wheeled vehicle” domain.
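As an illustration of the alignment idea described above, the sketch below groups entries from two monolingual lexicons by a shared interlingua concept identifier. The concept labels, lexical entries, and function names are invented for the example and do not reflect REBECA's actual data model.

```python
# Minimal sketch (not REBECA's implementation): aligning two monolingual
# lexical databases through a shared interlingua of concept identifiers.
from collections import defaultdict

# Each monolingual database maps a lexical unit to the interlingua concept it lexicalizes.
lexicon_pt = {"bicicleta": "WHEELED_VEHICLE#BICYCLE", "caminhão": "WHEELED_VEHICLE#TRUCK"}
lexicon_en = {"bicycle": "WHEELED_VEHICLE#BICYCLE", "truck": "WHEELED_VEHICLE#TRUCK"}

def align(*lexicons):
    """Group lexical units from several monolingual databases by shared concept."""
    aligned = defaultdict(list)
    for lexicon in lexicons:
        for unit, concept in lexicon.items():
            aligned[concept].append(unit)
    return dict(aligned)

if __name__ == "__main__":
    for concept, units in align(lexicon_pt, lexicon_en).items():
        print(concept, "->", units)
```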
Abstract:
The realization that statistical physics methods can be applied to analyze written texts represented as complex networks has led to several developments in natural language processing, including automatic summarization and evaluation of machine translation. Most importantly, only a few metrics of complex networks have been used so far, and therefore there is ample opportunity to enhance the statistics-based methods as new measures of network topology and dynamics are created. In this paper, we employ for the first time the metrics betweenness, vulnerability and diversity to analyze written texts in Brazilian Portuguese. Using strategies based on diversity metrics, a better performance in automatic summarization is achieved in comparison to previous work employing complex networks. With an optimized method, the ROUGE score (an automatic evaluation method used in summarization) was 0.5089, which is the best value ever achieved for an extractive summarizer with statistical methods based on complex networks for Brazilian Portuguese. Furthermore, the diversity metric can detect keywords with high precision, which is why we believe it is suitable for producing good summaries. It is also shown that incorporating linguistic knowledge through a syntactic parser does enhance the performance of the automatic summarizers, as expected, but the increase in the ROUGE score is only minor. These results reinforce the suitability of complex network methods for improving automatic summarizers in particular, and for processing text in general.
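The following sketch illustrates the general complex-network approach to extractive summarization referred to above: sentences become nodes, edges link sentences that share words, and a centrality metric ranks the sentences. It uses betweenness (via networkx) as a stand-in; the paper's diversity metric and its optimized method are not reproduced here, and the example text is invented.

```python
import networkx as nx

def summarize(sentences, n_top=2):
    """Rank sentences by betweenness centrality in a word-overlap graph."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    tokens = [set(s.lower().split()) for s in sentences]
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if tokens[i] & tokens[j]:           # connect sentences sharing at least one word
                graph.add_edge(i, j)
    scores = nx.betweenness_centrality(graph)
    top = sorted(scores, key=scores.get, reverse=True)[:n_top]
    return [sentences[i] for i in sorted(top)]  # keep original sentence order in the summary

text = [
    "Complex networks model texts as graphs of sentences.",
    "Each sentence becomes a node of the graph.",
    "Centrality metrics then rank the sentences of the graph.",
    "The best ranked sentences form the extractive summary.",
]
print(summarize(text))
```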
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while they are fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements of this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components were developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system, b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment, c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input, and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
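A schematic, runnable sketch of the acquisition cycle expressed by formulas (1) and (2) is given below for a toy task: learning verbal valency from observed clauses. The corpus, threshold, and labels are illustrative assumptions; nothing here reproduces the HPSG-based ANALYZE-LEARN-REDUCE prototype.

```python
from collections import Counter, defaultdict

CORPUS = [  # C: (verb, has_direct_object) pairs standing in for parsed utterances
    ("devour", True), ("devour", True), ("sleep", False),
    ("sleep", False), ("eat", True), ("eat", False),
]

def analyze(corpus):
    """G + L + C -> S: collect structural evidence per verb."""
    evidence = defaultdict(Counter)
    for verb, has_object in corpus:
        evidence[verb]["transitive" if has_object else "intransitive"] += 1
    return evidence

def learn(lexicon, evidence, min_obs=2):
    """G + L + S -> L': commit a valency only once the system has seen 'enough' input."""
    revised = dict(lexicon)
    for verb, counts in evidence.items():
        label, freq = counts.most_common(1)[0]
        revised[verb] = label if freq >= min_obs else "undetermined"
    return revised

print(learn({}, analyze(CORPUS)))
# {'devour': 'transitive', 'sleep': 'intransitive', 'eat': 'undetermined'}
```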
Abstract:
Word Sense Disambiguation is a computational problem in the field of Natural Language Processing that consists of determining the sense of a word according to the context in which it is used. While such a process may appear trivial to a human being, it turns out to be extraordinarily complicated when one tries to encode it as a series of instructions executable by a machine. The first and main problem to be addressed is that of knowledge: to disambiguate the terms of a text, a computer must be able to draw on a lexicon that is as consistent as possible with that of a human being. Although other approaches exist, creating a machine-readable knowledge source is certainly the method that allows the problem to be tackled most directly. This thesis first seeks to explain what Word Sense Disambiguation is, through a brief but as detailed as possible description of the problem. Chapter 1 introduces it starting from some historical notes and then describes the fundamental components to be taken into account during the work. Concepts taken up later are illustrated, ranging from the normalization of the input text to a summary of the classification methods commonly used in this field. Chapter 2 is devoted to the description of BabelNet, a recently built multilingual lexico-semantic resource developed at the Sapienza University of Rome. The two sources from which BabelNet draws its knowledge, WordNet and Wikipedia, are described first. The steps of its creation are then illustrated, from the mapping between the two base resources to the definition of all the relations that link the sets of terms within the lexicon. Finally, a series of experiments is proposed that puts BabelNet to the test, first to verify the consistency of its construction method, and then to compare it, in terms of performance, with other state-of-the-art systems on several tasks drawn from SemEval, the international evaluation campaigns dedicated to WSD problems, which de facto define the standards of this field. The final chapter develops some considerations on disambiguation, introduced by a list of the main application areas of the problem. It outlines possible future directions of research, as well as the known problems and the paths recently taken to push the performance of Word Sense Disambiguation beyond the limits defined so far.
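As a concrete illustration of knowledge-based WSD, the sketch below applies the simplified Lesk algorithm over WordNet using NLTK, one of the classic baselines in this field; it does not use BabelNet and is not tied to the systems evaluated in the thesis. It assumes the NLTK WordNet corpus has been downloaded.

```python
# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# Disambiguate "bank" in a financial context: Lesk picks the WordNet sense
# whose gloss overlaps most with the surrounding words.
sentence = "I went to the bank to deposit my salary".split()
sense = lesk(sentence, "bank", pos=wn.NOUN)
if sense is not None:
    print(sense.name(), "-", sense.definition())
```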
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as is claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They represent two different approaches and points of view within the broad field of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain more insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis sets out to show that in certain cases, with a lower quantity of data, HTM can outperform the CNN.
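For reference, a minimal convolutional network of the supervised kind discussed above can be sketched in a few lines of PyTorch; the architecture, input size, and class count below are illustrative and are not those used in the thesis's experiments.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny illustrative CNN for 32x32 RGB images."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):                      # x: (batch, 3, 32, 32)
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```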
Abstract:
This work introduces the basic concepts of Natural Language Processing, focusing on Information Extraction and analyzing its application areas, its main tasks, and how it differs from Information Retrieval. It then examines the Named Entity Recognition process, concentrating on the main issues in annotating texts and on the methods for evaluating the quality of entity extraction. Finally, it provides an overview of GATE/ANNIE, the open-source language processing software platform, describing its architecture and its main components, with a closer look at the tools GATE offers for the rule-based approach to Named Entity Recognition.
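As a small illustration of the evaluation methods mentioned above, the sketch below computes precision, recall, and F1 for Named Entity Recognition by exact span-and-type matching between gold and predicted annotations; the annotations themselves are invented, and the code is not part of GATE/ANNIE.

```python
def prf(gold, predicted):
    """Precision, recall and F1 over exact (start, end, type) matches."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {(0, 12, "PERSON"), (20, 26, "ORG"), (40, 45, "LOC")}
pred = {(0, 12, "PERSON"), (20, 26, "LOC")}          # one correct, one wrong type, one missed
print(prf(gold, pred))  # (0.5, 0.333..., 0.4)
```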
Abstract:
The thesis centers on the game «Guess Who?» («Indovina chi?»), in which the Nao robot must identify a character from its description. The description proceeds through questions and answers. The goal of the thesis is the design of a system able to understand and process data communicated in a subset of natural language, extract the key information from it, and match it against information given previously. The Nao robot was therefore programmed to play a game of «Guess Who?» against a human, communicating in natural language. Extraction and categorization rules for text comprehension were implemented using Cogito, a technology patented by the company Expert System. In this way the robot can understand the answers and reply to the questions formulated by the human in natural language. Google's speech recognition API was used for speech recognition, and PyAudio for microphone access. The program was implemented in Python, and the character data are stored in a database that the robot queries and updates. The game algorithm is based on probabilistic calculations of the robot's chance of winning and on the choice of which question to ask given the answers previously received from the human. The semantic rules allow the player to formulate sentences in natural language; moreover, the robot can single out the information that concerns the character to be guessed without being misled. The robot's win rate over 20 games played was 50%. The database was designed so that a complete identikit of a person can be built, beyond that of the game characters. The project can therefore be extended to purposes other than the game, in the field of identification.
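The question-selection idea described above can be illustrated with a simple heuristic: ask about the attribute whose yes/no answer splits the remaining candidates most evenly, then filter the candidates by the answer. The characters and attributes below are invented, and the sketch does not reproduce the Cogito rules, the probabilistic win-rate calculations, or the Nao integration.

```python
CHARACTERS = {
    "Anna":  {"glasses": True,  "hat": False, "blond": True},
    "Bruno": {"glasses": False, "hat": True,  "blond": False},
    "Carla": {"glasses": True,  "hat": True,  "blond": False},
    "Dario": {"glasses": False, "hat": False, "blond": True},
}

def best_question(candidates):
    """Pick the attribute whose yes-count is closest to half of the candidates."""
    attributes = next(iter(candidates.values())).keys()
    return min(attributes,
               key=lambda a: abs(sum(c[a] for c in candidates.values()) - len(candidates) / 2))

def filter_candidates(candidates, attribute, answer):
    """Keep only the characters consistent with the human's answer."""
    return {name: c for name, c in candidates.items() if c[attribute] == answer}

question = best_question(CHARACTERS)
print(question, filter_candidates(CHARACTERS, question, True))
```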
Abstract:
We examined aesthetic preference for reproductions of paintings among frontotemporal dementia (FTD) patients, in two sessions separated by 2 weeks. The artworks were in three different styles: representational, quasirepresentational, and abstract. Stability of preference for the paintings was equivalent to that shown by a matched group of Alzheimer's disease patients and a group of healthy controls drawn from an earlier study. We expected that preference for representational art would be affected by disruptions in language processes in the FTD group. However, this was not the case and the FTD patients, despite severe language processing deficits, performed similarly across all three art styles. These data show that FTD patients maintain a sense of aesthetic appraisal despite cognitive impairment and should be amenable to therapies and enrichment activities involving art.
Abstract:
The present study shows that different neural activity during mental imagery and abstract mentation can be assigned to well-defined steps of the brain's information processing. During randomized visual presentation of single imagery-type and abstract-type words, 27-channel event-related potential (ERP) field maps were obtained from 25 subjects (divided sequentially into a first and a second group for statistics). The brain field map series showed a sequence of typical map configurations that were quasi-stable for brief time periods (microstates). The microstates were concatenated by rapid map changes. As different map configurations must result from different spatial patterns of neural activity, each microstate represents different active neural networks. Accordingly, microstates are assumed to correspond to discrete steps of information processing. Comparing microstate topographies (using centroids) between imagery- and abstract-type words, significantly different microstates were found in both subject groups at 286–354 ms, where imagery-type words were more right-lateralized than abstract-type words, and at 550–606 ms and 606–666 ms, where anterior-posterior differences occurred. We conclude that language processing consists of several well-defined steps and that the brain states incorporating those steps are altered by the stimuli's capacity to generate mental imagery or abstract mentation in a state-dependent manner.
Abstract:
Anelis Kaiser is associate researcher at the Center for Cognitive Science at the University of Freiburg, Germany. Dr. Kaiser recently co-edited a special issue of the journal Neuroethics on gender and brain science. She is co-founder (with Isabelle Dussauge) of the interdisciplinary network NeuroGenderings, which brings together experts from the brain sciences, the humanities and science studies (STS) to critically study the sexed brain. She has published on sex and gender as constructed categories in science as well as on the topics of multilingualism and language processing in the brain. Co-sponsored with the Center for Lesbian and Gay Studies.
Abstract:
The objective was to describe the contributions of Joseph Jules Dejerine and his wife Augusta Dejerine-Klumpke to our understanding of cerebral association fiber tracts and language processing. The Dejerines (and not Constantin von Monakow) were the first to describe the superior longitudinal fasciculus/arcuate fasciculus (SLF/AF) as an association fiber tract uniting Broca's area, Wernicke's area, and a visual image center in the angular gyrus of a left hemispheric language zone. They were also the first to attribute language-related functions to the fasciculus occipito-frontalis (FOF) and the inferior longitudinal fasciculus (ILF), after describing aphasia patients with degeneration of the SLF/AF, ILF, uncinate fasciculus (UF), and FOF. These fasciculi belong to a functional network known as the Dejerines' language zone, which exceeds the borders of the classically defined cortical language centers. The Dejerines provided the first descriptions of the anatomical pillars of present-day language models (such as the SLF/AF). Their anatomical descriptions of fasciculi in aphasia patients provided a foundation for our modern concept of the dorsal and ventral streams in language processing.
Abstract:
Whereas the semantic, logical, and narrative features of verbal humor are well researched, the phonological and prosodic dimensions of verbal funniness are hardly explored. In a 2 × 2 design we varied rhyme and meter in humorous couplets. Rhyme and meter enhanced funniness ratings and supported faster processing. Rhyming couplets also elicited more intense and more positive affective responses, increased subjective comprehensibility, and more accurate memory. The humor effect is attributed to special rhyme and meter features distinctive of humorous poetry in several languages. Verses that employ these formal features make artful use of the typical poetic vices of amateurish poems written for birthday parties or other occasions. Their metrical patterning sounds “mechanical” rather than genuinely “poetic”; they also disregard rules for “good” rhymes. The processing of such verses is discussed in terms of a metacognitive integration of their poetically deviant features into an overall effect of processing ease. The study highlights the importance of nonsemantic rhetorical features in language processing.