63 results for WordNet


Relevance: 20.00%

Abstract:

Summary: Using WordNet in information retrieval

Relevance: 20.00%

Abstract:

Brazilian Portuguese needs a Wordnet that is open access, downloadable, and changeable, so that it can be improved by the community interested in using it for knowledge representation and automated deduction. This kind of resource is also very valuable to linguists and computer scientists interested in extracting and representing knowledge obtained from texts. We briefly discuss the reasons for a Brazilian Portuguese Wordnet and the process we used to obtain a preliminary version of such a resource. We then discuss possible steps for improving this preliminary version.

Relevance: 20.00%

Abstract:

This paper reports the ongoing project (since 2002) of developing a wordnet for Brazilian Portuguese (Wordnet.Br) from scratch. In particular, it describes the process of constructing the Wordnet.Br core database, which has 44,000 words organized into 18,500 synsets. Accordingly, it briefly sketches the project's overall methodology, its lexical resources, the synset compilation process, and the Wordnet.Br editor, a GUI (graphical user interface) that aids the linguist in the compilation and maintenance of Wordnet.Br. It concludes with the planned further work.

Relevance: 20.00%

Abstract:

This paper presents the overall methodology that has been used to encode both the Brazilian Portuguese WordNet (WordNet.Br) standard language-independent conceptual-semantic relations (hyponymy, co-hyponymy, meronymy, cause, and entailment) and the so-called cross-lingual conceptual-semantic relations between different wordnets. Accordingly, after contextualizing the project and outlining the current lexical database structure and statistics, it describes the WordNet.Br editing GUI, which was designed to aid the linguist in carrying out the tasks of building synsets, selecting sample sentences from corpora, writing synset concept glosses, and encoding both language-independent conceptual-semantic relations and cross-lingual conceptual-semantic relations between WordNet.Br and Princeton WordNet. © Springer-Verlag Berlin Heidelberg 2006.
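As a rough illustration of how language-independent relations and cross-lingual links of this kind can be stored, here is a minimal sketch in Python. All synset IDs, words, and glosses below are invented for illustration; the actual WordNet.Br database schema is not described in the abstract.

```python
# Minimal sketch of a wordnet-style relational store.
# Synset IDs, words, and glosses are invented examples,
# not actual WordNet.Br or Princeton WordNet data.

synsets = {
    "br-001": {"words": ["veículo"], "gloss": "meio de transporte"},
    "br-002": {"words": ["carro", "automóvel"], "gloss": "veículo motorizado"},
}

# Language-independent conceptual-semantic relations between synsets.
relations = [
    ("br-002", "hyponym-of", "br-001"),
]

# Cross-lingual links to (hypothetical) Princeton WordNet synset IDs.
cross_lingual = {
    "br-001": "pwn-vehicle.n.01",
    "br-002": "pwn-car.n.01",
}

def hypernyms(synset_id):
    """Return the synsets that synset_id is a hyponym of."""
    return [t for (s, r, t) in relations
            if s == synset_id and r == "hyponym-of"]

print(hypernyms("br-002"))      # ['br-001']
print(cross_lingual["br-002"])  # the aligned Princeton WordNet sense
```

Because the relations are stored between synset identifiers rather than words, the same triple store can hold both the language-internal and the cross-lingual links.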

Relevance: 20.00%

Abstract:

The Princeton WordNet (WN.Pr) lexical database has motivated efficient compilations of bulky relational lexicons since its inception in the 1980s. The EuroWordNet project, the first multilingual initiative built upon WN.Pr, opened up ways of building individual wordnets and interrelating them by means of the so-called Inter-Lingual-Index, an unstructured list of WN.Pr synsets. Another important initiative, relying on a slightly different method of building multilingual wordnets, is the MultiWordNet project, whose key strategy is to build language-specific wordnets while keeping as much as possible of the semantic relations available in WN.Pr. This paper stresses, in particular, that an additional advantage of using the WN.Pr lexical database as a resource for building wordnets for other languages is the possibility of implementing an automatic procedure to map WN.Pr conceptual relations such as hyponymy, co-hyponymy, troponymy, meronymy, cause, and entailment onto the lexical database of the wordnet under construction. This is viable because these are language-independent relations that hold between lexicalized concepts, not between lexical units. Accordingly, combining methods from both initiatives, this paper presents the ongoing implementation of the WN.Br lexical database and the aforementioned automatic procedure, illustrated with a sample of the automatic encoding of the hyponymy and co-hyponymy relations.
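The mapping idea described above can be sketched as follows: once target-language synsets are aligned to WN.Pr synsets, a WN.Pr relation pair can be projected automatically whenever both of its endpoints are aligned. The synset names and alignment data below are hypothetical; the actual WN.Br procedure is not reproduced in the abstract.

```python
# Hypothetical projection of a language-independent relation
# (hyponymy) from a source wordnet onto a target wordnet.

# Source (WN.Pr-style) hyponymy pairs: (hyponym, hypernym).
pr_hyponymy = [
    ("dog.n.01", "canine.n.02"),
    ("cat.n.01", "feline.n.01"),
]

# Alignment of source synsets to target-language synsets,
# e.g. built while compiling the target synsets.
alignment = {
    "dog.n.01": "br-cão",
    "canine.n.02": "br-canídeo",
    # "cat.n.01" has no aligned target synset yet, so its
    # hyponymy pair cannot be projected.
}

def project(pairs, alignment):
    """Keep only pairs whose endpoints are both aligned and
    rewrite them in terms of target-language synsets."""
    return [(alignment[a], alignment[b])
            for (a, b) in pairs
            if a in alignment and b in alignment]

print(project(pr_hyponymy, alignment))  # [('br-cão', 'br-canídeo')]
```

The projection is safe precisely because hyponymy holds between concepts, so the relation survives the change of lexicalization.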

Relevance: 20.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 20.00%

Abstract:

Nowadays, there is a significant quantity of linguistic data available on the Web. However, linguistic resources are often published in proprietary formats and, as such, they can be difficult to interface with one another and end up confined in "data silos". The creation of web standards for publishing data on the Web and projects to create Linked Data have led to interest in resources that can be published using Web principles. One of the most important aspects of "Lexical Linked Data" is the sharing of lexica and machine-readable dictionaries. It is for this reason that the lemon format has been proposed, which we briefly describe. We then consider two resources that seem ideal candidates for the Linked Data cloud, namely WordNet 3.0 and Wiktionary, a large document-based dictionary. We discuss the challenges of converting both resources to lemon, and in particular, for Wiktionary, the challenge of processing the mark-up and handling inconsistencies and underspecification in the source material. Finally, we turn to the task of creating links between the two resources and present a novel algorithm for linking lexica as lexical Linked Data.
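To make the idea of a lemon-style entry concrete, here is a small sketch that emits one lexical entry as Turtle, linking a lemma to a sense reference. The namespace prefixes and URIs are illustrative placeholders, not the actual identifiers used by the lemon, WordNet 3.0, or Wiktionary datasets.

```python
# Sketch of serializing a lemon-style lexical entry as Turtle.
# Prefixes "ex:" and "lemon:", and the synset URI, are invented
# placeholders for illustration only.

def lemon_entry(lemma, pos, synset_uri):
    """Return a Turtle fragment for one lexical entry whose sense
    points at an external (e.g. WordNet) synset URI."""
    entry_uri = f"ex:{lemma}-{pos}"
    return "\n".join([
        f"{entry_uri} a lemon:LexicalEntry ;",
        f'    lemon:canonicalForm [ lemon:writtenRep "{lemma}"@en ] ;',
        f"    lemon:sense [ lemon:reference <{synset_uri}> ] .",
    ])

ttl = lemon_entry("cat", "noun", "http://example.org/wn30/cat-n-1")
print(ttl)
```

Linking two lexica then amounts to making the `lemon:reference` targets of corresponding senses point at the same (or interlinked) URIs.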

Relevance: 20.00%

Abstract:

In this paper we present the enrichment of the Integration of Semantic Resources based on WordNet (ISR-WN Enriched). This new proposal improves the previous one, in which several semantic resources such as SUMO, WordNet Domains, and WordNet Affects were related, by adding further semantic resources such as Semantic Classes and SentiWordNet. First, the paper describes the architecture of this proposal, explaining the particularities of each integrated resource. After that, we analyze some problems related to the mappings between different versions and how we solved them. Moreover, we show the advantages that this kind of tool can provide to different applications of Natural Language Processing. In that regard, we demonstrate that the integration of semantic resources provides a multidimensional view in the analysis of natural language.

Relevance: 20.00%

Abstract:

This paper outlines the approach adopted by the PLSI research group at the University of Alicante in the PASCAL-2006 Second Recognising Textual Entailment challenge. Our system is composed of several components: the first derives the logic forms of the text/hypothesis pairs, while the second provides a similarity score given by the semantic relations between the derived logic forms. To obtain this score we apply several measures of similarity and relatedness based on the structure and content of WordNet.
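The abstract does not specify which WordNet measures are used, but one commonly used structural measure, Wu-Palmer similarity, can be sketched over a toy taxonomy. The taxonomy below is invented; a real system would walk WordNet's hypernym hierarchy instead.

```python
# Toy Wu-Palmer-style similarity over a hand-built taxonomy.
# The taxonomy is invented for illustration; real systems use
# WordNet's hypernym hierarchy.

parent = {
    "dog": "canine", "canine": "mammal",
    "cat": "feline", "feline": "mammal",
    "mammal": "animal", "animal": None,
}

def path_to_root(node):
    path = [node]
    while parent[node] is not None:
        node = parent[node]
        path.append(node)
    return path

def depth(node):
    # Depth counted in nodes, so the root has depth 1.
    return len(path_to_root(node))

def lcs(a, b):
    """Least common subsumer: first ancestor of a also above b."""
    ancestors_b = set(path_to_root(b))
    for n in path_to_root(a):
        if n in ancestors_b:
            return n

def wup(a, b):
    """Wu-Palmer similarity: 2*depth(lcs) / (depth(a) + depth(b))."""
    return 2 * depth(lcs(a, b)) / (depth(a) + depth(b))

print(wup("dog", "cat"))  # 0.5  (lcs is "mammal", depth 2; both leaves depth 4)
```

Relatedness measures used alongside such structural ones typically also exploit gloss content, but the principle of scoring pairs of concepts is the same.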

Relevance: 20.00%

Abstract:

In this paper we explore the use of semantic classes in an existing information retrieval system in order to improve its results. We use two different ontologies of semantic classes (WordNet Domains and Basic Level Concepts) to re-rank the retrieved documents and obtain better recall and precision. Finally, we implement a new method for weighting the expanded terms that takes into account the weights of the original query terms and their WordNet relations to the new ones, which has been shown to improve the results. The evaluation of these approaches was carried out in the CLEF Robust-WSD Task, obtaining an improvement of 1.8% in GMAP for the semantic-classes approach and 10% in MAP employing the WordNet term-weighting approach.
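The abstract does not give the weighting formula, but the general idea of weighting expanded terms can be sketched as follows: scale the original term's weight by a factor that depends on the WordNet relation that produced the expansion. The relation factors below are invented for illustration, not the paper's actual values.

```python
# Hypothetical weighting of query-expansion terms.  The relation
# factors are invented; the paper's actual formula is not
# reproduced in the abstract.

RELATION_FACTOR = {
    "synonym": 0.9,
    "hypernym": 0.5,
    "hyponym": 0.4,
}

def expand_weights(original_weight, expansions):
    """expansions: list of (term, wordnet_relation) pairs.
    Returns each expanded term weighted relative to the
    original query term that produced it."""
    return {term: original_weight * RELATION_FACTOR[rel]
            for term, rel in expansions}

weights = expand_weights(2.0, [("feline", "hypernym"), ("kitty", "synonym")])
print(weights)  # {'feline': 1.0, 'kitty': 1.8}
```

The intuition is that closer WordNet relations (synonymy) should contribute more weight to the expanded query than looser ones (hypernymy, hyponymy).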

Relevance: 20.00%

Abstract:

This tool is a piece of software capable of building a semantic network from the following resources: WordNet versions 1.6 and 2.0, WordNet Affects versions 1.0 and 1.1, WordNet Domains version 2.0, SUMO, Semantic Classes, and SentiWordNet version 3.0, all integrated and related in a single knowledge base. Using these resources, ISR-WN provides additional functionality for exploring this network in a simple way, through both traversal functions and textual searches. By querying the semantic network it is possible to obtain information for enriching texts, such as the definitions of words in common use in particular domains (general domains, emotional domains, and other conceptualizations), as well as, for a given word sense, the positivity, negativity, and sentiment objectivity scores provided by the SentiWordNet resource. All this information can be used in natural language processing tasks such as:

• Word Sense Disambiguation,
• Sentiment Polarity Detection,
• Semantic and lexical analysis to obtain the relevant concepts in a sentence according to the type of resource involved.

The tool is based on the English language and is available as a Windows application, with an installer that deploys the libraries needed for its correct use on the host computer. In addition to the user interface provided, the tool can be used as an API (Application Programming Interface) by other applications.
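The essence of such an integrated knowledge base is that one query joins several resources for the same word sense. Here is a toy sketch of that idea; all data and the `lookup` function are invented for illustration and do not reproduce the real ISR-WN knowledge base or API.

```python
# Toy sketch of an ISR-WN-style integrated lookup: a single query
# returns domain, affect, and sentiment information for a sense.
# All data below is invented illustration.

DOMAINS = {"bank.n.01": "economy"}
AFFECTS = {"joy.n.01": "positive-emotion"}
SENTIMENT = {"good.a.01": {"pos": 0.75, "neg": 0.0, "obj": 0.25}}

def lookup(sense):
    """Join the integrated resources for one word sense; missing
    entries come back as None."""
    return {
        "domain": DOMAINS.get(sense),
        "affect": AFFECTS.get(sense),
        "sentiment": SENTIMENT.get(sense),
    }

print(lookup("good.a.01")["sentiment"])  # {'pos': 0.75, 'neg': 0.0, 'obj': 0.25}
```

A disambiguation or polarity-detection system can then consult all dimensions of a candidate sense at once instead of querying each resource separately.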

Relevance: 20.00%

Abstract:

Some basic points of the automated creation of a Bulgarian WordNet, an analogue of the Princeton WordNet, are treated. The computer tools used, the results obtained, and their evaluation are discussed. A side effect of the proposed approach is the production of patterns for the Bulgarian syntactic analyzer.

Relevance: 20.00%

Abstract:

In recent years, there has been an increasing interest in learning distributed representations of word senses. Traditional context-clustering-based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach that addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context-clustering-based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
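The initialization idea can be sketched without the neural machinery: represent each sense by an embedding derived from its WordNet gloss, then let contexts pick the nearest sense. Here a plain average of gloss word vectors stands in for the paper's convolutional encoder, and all vectors and glosses are invented toy data.

```python
# Sketch of gloss-based sense-embedding initialization.  A simple
# average of gloss word vectors stands in for the paper's
# convolutional sentence encoder; all data is invented.

word_vec = {
    "financial": [1.0, 0.0],
    "institution": [0.8, 0.2],
    "river": [0.0, 1.0],
    "land": [0.1, 0.9],
}

glosses = {
    "bank.n.01": ["financial", "institution"],
    "bank.n.02": ["land", "river"],
}

def avg(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

# Initialize each sense embedding from its gloss words.
sense_emb = {s: avg([word_vec[w] for w in ws]) for s, ws in glosses.items()}

def nearest_sense(context_vec):
    """Assign a context vector to the sense with the highest dot
    product -- the role the context-clustering model plays."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    return max(sense_emb, key=lambda s: dot(sense_emb[s], context_vec))

print(nearest_sense([1.0, 0.1]))  # 'bank.n.01' (a "financial" context)
```

Starting from gloss-derived embeddings gives infrequent senses a sensible position in the space even before any context has been clustered onto them, which is the limitation the paper targets.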