32 results for Lexical semantics


Relevance: 20.00%

Abstract:

Reproducibility of published results is a cornerstone of scientific publishing and progress. The scientific community has therefore been encouraging authors and editors to publish their contributions in a verifiable and understandable way. Efforts such as the Reproducibility Initiative [1] or the Reproducibility Projects in the Biology [2] and Psychology [3] domains have been defining standards and patterns to assess whether an experimental result is reproducible.

Relevance: 20.00%

Abstract:

The Semantics Difficulty Model (SDM) is a model that measures the difficulty of introducing semantic technology into a company. SDM manages three descriptions of stages, which we will refer to as "snapshots": a company semantic snapshot, a data snapshot and a semantic application snapshot. Understanding a priori the complexity of introducing semantics into a company is important because it allows the organization to take early decisions, thus saving time and money, mitigating risks and improving innovation, time to market and productivity. SDM works by measuring the Euclidean distance between each initial snapshot and its reference model (the company semantic snapshot reference model, the data snapshot reference model, and the semantic application snapshot reference model). The difficulty level is "not at all difficult" when the distance is small and becomes "extremely difficult" when the distance is large. SDM has been tested experimentally with 2000 simulated companies with varying arrangements and initial stages. The output is measured on five linguistic values: "not at all difficult", "slightly difficult", "averagely difficult", "very difficult" and "extremely difficult". As the preliminary results of our SDM simulation indicate, transforming a search application into one that integrates data from different sources with semantics is "slightly difficult", in contrast with data and opinion extraction applications, for which it is "very difficult".
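For illustration only, the distance-to-difficulty mapping described above can be sketched in a few lines of Python. The snapshot features, reference vectors and thresholds below are invented for the example; the abstract does not fix them.

import math

LEVELS = ["not at all difficult", "slightly difficult", "averagely difficult",
          "very difficult", "extremely difficult"]

def euclidean(a, b):
    """Euclidean distance between a snapshot vector and a reference model."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def difficulty(snapshot, reference, thresholds=(1.0, 2.0, 3.0, 4.0)):
    """Map the distance onto the five linguistic values (thresholds assumed)."""
    d = euclidean(snapshot, reference)
    for level, t in zip(LEVELS, thresholds):
        if d <= t:
            return level
    return LEVELS[-1]

# Hypothetical company-semantic snapshot vs. its reference model.
print(difficulty([0.2, 0.5, 0.1], [0.3, 0.4, 0.2]))   # -> not at all difficult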

Relevance: 20.00%

Abstract:

This paper presents a proposal for a model that recognizes the appraisal value of sentences. It is based on splitting the text into independent sentences (at full stops) and then analysing the appraisal elements contained in each sentence according to their values in the appraisal lexicon. In this lexicon, positive words are assigned a positive coefficient (+1) and negative words a negative coefficient (-1). We take into account words such as "too", "little" (when it does not mean "a bit"), "less", and "nothing", which can modify the polarity degree of a lexical unit when they appear in its immediate environment. If any of these elements is present, the original coefficient is multiplied by (-1), that is, it changes sign. Our results show an effectiveness of nearly 90% of the theoretical maximum, despite not recognizing (or misrecognizing) implicit elements. These elements represent approximately 4% of the total of sentences analysed for appraisal and include the errors in the recognition of coordinated sentences. On the one hand, we found that 3.6% of the sentences could not be recognized because they use connectors other than those included in the model; on the other hand, we found that in 8.6% of the sentences the rules we developed could not be applied despite the use of some of the described connectors. Relative to the whole group of appraisal sentences in the corpus, this percentage was approximately 5%.
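The sign-flip rule lends itself to a compact sketch. The toy lexicon, the fixed context window and the blanket treatment of "little" below are simplifying assumptions; the paper's model also handles connectors and coordinated sentences, which this sketch ignores.

LEXICON = {"good": 1, "great": 1, "bad": -1, "awful": -1}   # +1 / -1 coefficients
REVERSERS = {"too", "little", "less", "nothing"}            # polarity modifiers

def appraise(sentence, window=2):
    """Score one sentence: flip a word's coefficient (* -1) when a
    reverser appears within `window` words before it."""
    tokens = sentence.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        coeff = LEXICON[tok]
        if REVERSERS & set(tokens[max(0, i - window):i]):
            coeff *= -1                      # nearby modifier changes the sign
        score += coeff
    return score

print(appraise("the plot is good but the acting is too good"))  # 1 + (-1) = 0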

Relevance: 20.00%

Abstract:

Given the sustained growth in the number of available SPARQL endpoints, the need to send federated SPARQL queries across them has also grown. To address this use case, the W3C SPARQL working group is defining a federation extension for SPARQL 1.1 which allows combining graph patterns that can be evaluated over several endpoints within a single query. In this paper, we describe the syntax of that extension and formalize its semantics. Additionally, we describe how a query evaluation system can be implemented for that federation extension, describing some static optimization techniques and reusing a query engine used for data-intensive science, so as to deal with large amounts of intermediate and final results. Finally, we carry out a series of experiments that show that our optimizations speed up the federated query evaluation process.
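As a rough illustration of the extension's core idea, the following Python sketch (using the SPARQLWrapper library) sends a query in which one graph pattern is delegated to a remote endpoint via the SERVICE keyword of SPARQL 1.1 Federated Query. The endpoint URLs are illustrative, and this is the plain extension rather than the optimised engine described in the paper.

from SPARQLWrapper import SPARQLWrapper, JSON

query = """
SELECT ?person ?interest WHERE {
  ?person a <http://xmlns.com/foaf/0.1/Person> .
  SERVICE <http://dbpedia.org/sparql> {   # sub-pattern evaluated remotely
    ?person <http://xmlns.com/foaf/0.1/topic_interest> ?interest .
  }
}
LIMIT 10
"""

endpoint = SPARQLWrapper("http://example.org/sparql")  # hypothetical local endpoint
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["person"]["value"], row["interest"]["value"])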

Relevance: 20.00%

Abstract:

In this paper we present a revisited classification of term variation in the light of the Linked Data initiative. Linked Data refers to a set of best practices for publishing and connecting structured data on the Web, with the idea of transforming it into a global graph. One of the crucial steps of this initiative is the linking step, in which datasets in one or more languages need to be linked or connected with one another. We claim that the linking process would be facilitated if datasets were enriched with lexical and terminological information. With that final aim in mind, we propose a classification of lexical, terminological and semantic variants that will become part of a model of linguistic descriptions currently being proposed within the framework of the W3C Ontology-Lexica Community Group to enrich ontologies and Linked Data vocabularies. Examples of modelling solutions for the different types of variants are also provided.
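As a rough illustration (not one of the paper's own modelling examples), the sketch below uses Python's rdflib to attach a terminological variant to a lemon lexical entry. The lemon namespace is the published one, but the lexicalVariant property choice and the example entries are assumptions made for the sake of the sketch; the model finally adopted by the Ontology-Lexica group may differ.

from rdflib import Graph, Namespace, Literal, URIRef, RDF

LEMON = Namespace("http://lemon-model.net/lemon#")
EX = Namespace("http://example.org/lexicon#")       # hypothetical dataset namespace

g = Graph()
g.bind("lemon", LEMON)

entry, variant = EX["immigration_law"], EX["migration_law"]
for e, rep in [(entry, "immigration law"), (variant, "migration law")]:
    form = URIRef(str(e) + "_form")
    g.add((e, RDF.type, LEMON.LexicalEntry))
    g.add((e, LEMON.canonicalForm, form))
    g.add((form, LEMON.writtenRep, Literal(rep, lang="en")))

# Relate the two entries as terminological variants of one another.
g.add((entry, LEMON.lexicalVariant, variant))
print(g.serialize(format="turtle"))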

Relevance: 20.00%

Abstract:

We derive by program transformation Pierre Crégut's full-reducing Krivine machine KN from the structural operational semantics of the normal-order reduction strategy in a closure-converted pure lambda calculus. We thus establish the correspondence between the strategy and the machine, and showcase our technique for deriving full-reducing abstract machines. In fact, the machine we obtain is a slightly optimised version that can work with open terms and may be used in implementations of proof assistants.
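To make the starting point concrete, here is a small, self-contained Python sketch of the normal-order strategy itself: leftmost-outermost reduction that also normalises under binders and copes with open terms. It is a naive direct-style normaliser written for this summary, not the derived machine KN.

import itertools

fresh = (f"v{i}" for i in itertools.count())   # assumes source terms avoid v0, v1, ...

def subst(term, name, value):
    """Capture-avoiding substitution: term[name := value]."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "app":
        return ("app", subst(term[1], name, value), subst(term[2], name, value))
    _, x, body = term                          # ("lam", x, body)
    if x == name:
        return term
    x2 = next(fresh)                           # rename the binder to avoid capture
    return ("lam", x2, subst(subst(body, x, ("var", x2)), name, value))

def step(term):
    """One leftmost-outermost reduction step; None if term is in normal form."""
    kind = term[0]
    if kind == "var":
        return None
    if kind == "lam":                          # full reduction: go under the binder
        body = step(term[2])
        return None if body is None else ("lam", term[1], body)
    fun, arg = term[1], term[2]
    if fun[0] == "lam":                        # the outermost redex comes first
        return subst(fun[2], fun[1], arg)
    left = step(fun)
    if left is not None:
        return ("app", left, arg)
    right = step(arg)
    return None if right is None else ("app", fun, right)

def normalise(term):
    while (nxt := step(term)) is not None:
        term = nxt
    return term

K = ("lam", "x", ("lam", "y", ("var", "x")))   # λx.λy.x
print(normalise(("app", ("app", K, ("var", "a")), ("var", "b"))))  # ('var', 'a')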

Relevance: 20.00%

Abstract:

Olivier Danvy and others have shown the syntactic correspondence between reduction semantics (a small-step semantics) and abstract machines, as well as the functional correspondence between reduction-free normalisers (a big-step semantics) and abstract machines. The correspondences are established by program transformation (so-called interderivation) techniques. A reduction semantics and a reduction-free normaliser are interderivable when the abstract machine obtained from each of them is the same. However, the correspondences fail when the underlying reduction strategy is hybrid, i.e., relies on another sub-strategy. Hybridisation is an essential structural property of full-reducing and complete strategies. Hybridisation is unproblematic in the functional correspondence, but in the syntactic correspondence the refocusing and inlining-of-iterate-function steps become context-sensitive, preventing the refunctionalisation of the abstract machine. We show how to solve the problem and showcase the interderivation of normalisers for normal order, the standard, full-reducing and complete strategy of the pure lambda calculus. Our solution makes it possible to interderive, rather than contrive, full-reducing abstract machines. As expected, the machine we obtain is a variant of Pierre Crégut's full Krivine machine KN.
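A toy illustration of the functional correspondence mentioned above: a big-step evaluator in continuation-passing style, and the abstract machine obtained by defunctionalising its continuations into first-order stack frames. The language here is just addition over literals, invented for the example, not the lambda calculus treated in the paper.

# Terms: ("lit", n) | ("add", t1, t2)

def eval_cps(t, k=lambda v: v):
    """Big-step evaluator in continuation-passing style."""
    if t[0] == "lit":
        return k(t[1])
    return eval_cps(t[1], lambda v1:
           eval_cps(t[2], lambda v2: k(v1 + v2)))

def run(t):
    """The same evaluator after defunctionalisation: continuations become
    a stack of ("ADD1", right-subterm) / ("ADD2", left-value) frames."""
    term, stack = t, []
    while True:
        if term[0] == "add":                 # descend into the left operand
            stack.append(("ADD1", term[2]))
            term = term[1]
        else:                                # ("lit", n): apply the continuation
            v = term[1]
            while stack and stack[-1][0] == "ADD2":
                v = stack.pop()[1] + v
            if not stack:
                return v
            _, right = stack.pop()           # ("ADD1", right)
            stack.append(("ADD2", v))
            term = right

t = ("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3)))
print(eval_cps(t), run(t))                   # 6 6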

Relevance: 20.00%

Abstract:

This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P, Z, N} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure implements this computation for CPU and GPU, respectively. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered to be positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries; this represents a threefold reduction in the computing time of the UnifiedMetrics.
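The correlation step can be illustrated in a few lines of NumPy. The entries and scores are invented, and the averaging at the end is a stand-in for, not a reproduction of, the UnifiedMetrics formula.

import numpy as np

lex_a = {"good": 0.8, "bad": -0.7, "fine": 0.3, "awful": -0.9}
lex_b = {"good": 0.9, "bad": -0.6, "fine": 0.2, "awful": -1.0}

shared = sorted(lex_a.keys() & lex_b.keys())
a = np.array([lex_a[w] for w in shared])
b = np.array([lex_b[w] for w in shared])

r = np.corrcoef(a, b)[0, 1]        # Pearson r in [-1, 1]
print(f"r = {r:.3f}")              # close to 1: the lexicons largely agree

# One simple way to unify shared entries (an assumption, not the paper's
# exact formula): average the two scores when the lexicons correlate well.
unified = {w: (lex_a[w] + lex_b[w]) / 2 for w in shared} if r > 0.5 else {}
print(unified)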

Relevance: 20.00%

Abstract:

This approach aims at aligning, unifying and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. A sentiment lexicon is a critical and essential resource for tagging subjective corpora on the web or elsewhere. In many situations, the multilingual property of the sentiment lexicon is important because the writer uses two languages alternately in the same text, message or post. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and -1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and -1 means they are perfectly inversely correlated; the UnifiedMetrics procedure implements this computation for CPU and GPU, respectively.

Relevance: 20.00%

Abstract:

We present a methodology for legacy language resource adaptation that generates domain-specific sentiment lexicons organized around domain entities described with lexical information and sentiment words described in the context of these entities. We explain the steps of the methodology and we give a working example of our initial results. The resulting lexicons are modelled as Linked Data resources by use of established formats for Linguistic Linked Data (lemon, NIF) and for linked sentiment expressions (Marl), thereby contributing and linking to existing Language Resources in the Linguistic Linked Open Data cloud.

Relevance: 20.00%

Abstract:

Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries is of great interest for many NLP-based applications. By using Semantic Web techniques, translations can be made available on the Web to be consumed by other (semantically enabled) resources in a direct manner, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents the core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
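For concreteness, a sketch of what one Terminesp-style translation pair might look like as linked data with rdflib. The tr: namespace and its Translation/translationSource/translationTarget names are placeholders standing in for the paper's translation module, whose actual URIs are not given in this abstract.

from rdflib import Graph, Namespace, Literal, RDF

LEMON = Namespace("http://lemon-model.net/lemon#")
TR = Namespace("http://example.org/translation#")   # hypothetical module namespace
EX = Namespace("http://example.org/terminesp#")

g = Graph()
for name, rep, lang in [("agua", "agua", "es"), ("water", "water", "en")]:
    entry, form = EX[name], EX[name + "_form"]
    g.add((entry, RDF.type, LEMON.LexicalEntry))
    g.add((entry, LEMON.canonicalForm, form))
    g.add((form, LEMON.writtenRep, Literal(rep, lang=lang)))

# Reify the translation so it can carry provenance or direction if needed.
trans = EX["trans_agua_water"]
g.add((trans, RDF.type, TR.Translation))
g.add((trans, TR.translationSource, EX["agua"]))
g.add((trans, TR.translationTarget, EX["water"]))
print(g.serialize(format="turtle"))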

Relevance: 20.00%

Abstract:

Extracting opinions and emotions from text is becoming increasingly important, especially since the advent of micro-blogging and social networking. Opinion mining is particularly popular, and many public services, datasets and lexical resources are now available for it. Unfortunately, few lexical and semantic resources are available for emotion recognition that could foster the development of new emotion-aware services and applications. The diversity of theories of emotion and the absence of a common vocabulary are two of the main barriers to the development of such resources. This situation motivated the creation of Onyx, a semantic vocabulary of emotions with a focus on lexical resources and emotion analysis services. It follows a linguistic Linked Data approach, is aligned with the Provenance Ontology, and has been integrated with the Lexicon Model for Ontologies (lemon), a popular RDF model for representing lexical entries. This approach also offers a new and interesting way to work with different theories of emotion. As part of this work, Onyx has been aligned with EmotionML and WordNet-Affect.

Relevance: 20.00%

Abstract:

This Final Year Project (TFG) aims to provide an innovative teaching system, one through which students become involved in tasks and practical exercises in which they acquire knowledge while feeling they are in a game environment, that is, learning in a fun way. It is intended for the educational system of the Escuela Técnica Superior de Ingenieros Informáticos of the Universidad Politécnica de Madrid, specifically for the subjects related to Language Processors. The application developed in this project is aimed both at the professors of the Language Processors subjects and at the students connected with those subjects, achieving greater interaction and enjoyment when carrying out the tasks and practical exercises. For both types of user, the application is configured so that they can identify themselves with their credentials, checking whether the data entered are correct before granting access to the system. Depending on which type of user logs in, different options are available. Professors can create, view, modify or delete the configurations for the analysers of the languages corresponding to the different subjects preconfigured in the system. In addition, professors can create, view, modify or delete the code fragments that make up the files corresponding to the lexical analyser test templates offered to students for checking their practical exercises; through the application they can set different characteristics and properties of the fragments they add to the system. Students, for their part, can configure the language defined by the professors for the lexical analyser part of their practical exercises. This configuration is saved for the student's group, and any member of the group can modify it. In this way, the necessary relationships can later be established between the elements of the language as configured by the professors and the elements belonging to the students' exercises. Students can also check the lexical part of their exercises by means of the files generated by the system from their exercise options and the fragments added by the professors; they are thus informed of the success of the tests or of the failures produced, whether due to the format of the file uploaded as the test result or to its incorrect content. All the functions offered by this application are completely online and have an attractive, fun interface, characterized by its ease of use and convenience. The work carried out for this project complies both with the Web Content Accessibility Guidelines (WCAG 2.0) and with correct HTML 5 and CSS 3 code, so that users enjoy an application that is easy, convenient and attractive.

Relevance: 20.00%

Abstract:

This thesis presents a model, a methodology, an architecture, several algorithms and programs for creating a Unified Sentiment Lexicon (USL) covering four languages: English, Spanish, Portuguese and Chinese. The main goal is to align, unify, and expand the set of sentiment lexicons available on the Internet together with those developed over the course of this research. The main problem to solve is thus the automated unification of the different sentiment lexicons obtained by the CSR crawler, because the unit of measurement used to assign polarity strength (manually, semi-automatically or automatically) varies according to the methodology used to build each lexicon; the encoded representation of the data structure of the terms also varies from lexicon to lexicon. Unifying them into a single sentiment lexicon therefore makes it possible to reuse the knowledge gathered by the different research groups while increasing the scope, quality and robustness of the lexicons. Our USL methodology computes a unified polarity strength value for each lexical entry that is present in at least two of the sentiment lexicons in this study; lexical entries that are not shared by at least two lexicons keep their original value. The resulting Pearson coefficient measures the correlation between lexical entries, assigning them a value between 1 and -1, where 1 indicates that the values of the terms are perfectly correlated, 0 indicates no correlation, and -1 means they are inversely correlated. This procedure is carried out by the UnifiedMetrics function on both the CPU and the GPU. Another problem to solve is the processing time required to unify the polarity strengths and thereby achieve a larger coverage of lemmas than the existing sentiment lexicons. The USL methodology therefore uses parallel processing to unify the 155,802 terms: the USL algorithm distributes equal loads of lexical entries across each of the 1344 GPU cores. The results of our analysis yielded a total of 95,430 lexical entries, of which 35,201 obtained positive values, 22,029 negative and 38,200 neutral. Finally, the execution time was 2.506 seconds for all the lexical entries, reducing the computation to a third of that of the sequential algorithm. From these results we conclude that a unified sentiment lexicon that homogenizes the polarity strength of lexical units (with positive, negative and neutral values) supports not only the semantic analysis of a corpus based on the most polarized terms, or the summarization of reviews and neuromarketing trends, but also applications such as the subjective tagging of websites or of syntactic and semantic portals, to mention a few. A key contribution of this work is that we preserve the use of a unified sentiment lexicon for all tasks. Such a lexicon is used to define resources and resource-related properties that can be verified based on the results of the analysis, and is powerful, general and extensible enough to express a large class of interesting properties. Some applications of this work include merging, aligning, pruning and extending the current sentiment lexicons.

Relevance: 20.00%

Abstract:

This doctoral thesis, the culmination of my doctoral studies in the Department of Linguistics Applied to Science and Technology of the Universidad Politécnica de Madrid, focusses on the analysis of hedging in legal English following the principles of Critical Genre Analysis (Bhatia, 2004) and using the WordSmith Tools version 6 (Scott, 2014) corpus analysis tools. As the title suggests, the study centres on the description and contrastive analysis of the lexico-grammatical realizations of hedges and the discourse strategies they can signal, as well as the functions they can carry out, in a corpus of U.S. Supreme Court opinions and American law review articles. The study relates the realization, incidence and function of hedging to the predominant generic characteristics of the two genres from a socio-cognitive perspective. While there have been numerous studies on hedging in general English (Lakoff, 1973; Hübler, 1983; Clemen, 1997; Markkanen and Schröder, 1997; Mauranen, 1997; Fetzer, 2010; and Finnegan, 2010, among others), academic English (Crompton, 1997; Meyer, 1997; Skelton, 1997; Martín Butragueño, 2003), scientific English (Hyland, 1996a, 1996c, 1998c, 2007; Grabe and Kaplan, 1997; Salager-Meyer, 1997; Varttala, 2001), medical English (Prince, 1982; Salager-Meyer, 1994; Skelton, 1997) and, to a lesser degree, legal English (Toska, 2012), this study is innovative in that it links the different realizations and functions of hedging to the generic characteristics of a particular professional communication. Within legal English, hedging has been found to depend not only on the macro-level expectations of the discourse community for a specific genre, but also on the micro-level intentions of the author of a communication, varying according to the educational, pedagogical, interpersonal or operative purposes the genre may have. The study highlights the predominance of epistemic modal verbs and lexical verbs as hedges, dividing the latter into four types (Hyland, 1998c; Palmer, 1986, 1990, 2001): speculative, quotative, deductive and sensorial. Lexico-grammatical realizations of hedges can signal one of four discourse strategies (Namsaraev, 1997; Salager-Meyer, 1994): indetermination, depersonalization, subjectivization and camouflage hedging, whose incidence and function vary by genre. The identification and quantification of the different hedges, strategies and functions in the two genres may have pedagogical implications for non-native law students, who must demonstrate adequate competence in the production and interpretation of hedged discourse.