911 results for Machine Learning, Natural Language Processing, Descriptive Text Mining, POIROT, Transformer
Abstract:
Community-driven Question Answering (CQA) systems crowdsource experiential information in the form of questions and answers and have accumulated valuable reusable knowledge. Clustering the QA datasets from CQA systems provides a means of organizing the content to ease tasks such as manual curation and tagging. In this paper, we present a clustering method that exploits the two-part question-answer structure in QA datasets to improve clustering quality. Our method, MixKMeans, composes question-space and answer-space similarities in a way that allows the space on which the match is stronger to dominate. This construction is motivated by our observation that semantic similarity between question-answer pairs (QAs) can become localized in either space. We empirically evaluate our method on a variety of real-world labeled datasets. Our results indicate that our method significantly outperforms state-of-the-art clustering methods for the task of clustering question-answer archives.
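To make the dominance idea concrete, here is a minimal sketch of a composed similarity of the kind described above, assuming cosine similarity over separate question and answer embeddings; the function names and the use of max() as the composition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def composed_similarity(q_sim, a_sim):
    # Let the space with the stronger match dominate; max() is one
    # simple realization of that idea (assumption, not the paper's formula).
    return max(q_sim, a_sim)

def assign_clusters(Q, A, q_centroids, a_centroids):
    """Assign each QA pair to the centroid pair with the highest composed score."""
    labels = []
    for q, a in zip(Q, A):
        scores = [composed_similarity(cosine(q, cq), cosine(a, ca))
                  for cq, ca in zip(q_centroids, a_centroids)]
        labels.append(int(np.argmax(scores)))
    return labels
```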
Abstract:
This Bachelor's Thesis (TFG) aims to create a framework for use in recommender systems. It was carried out by two people in the team-work modality. The tasks of this TFG are divided into two parts, one carried out jointly and the other individually. The joint part focuses on building a system capable of constructing formal contexts and many-valued formal contexts from comments and opinions about points of interest (POIs), using the natural language processing tool AlchemyAPI. Creating the latter requires the use of ontologies. The many-valued formal context is the starting point of the second (individual) part, which consists of using that context to obtain a set of functional dependencies through a Java implementation of the FDMine algorithm. These dependencies can then be used in a recommendation engine. The system has been implemented as a Java EE 6 web application together with an API for working with many-valued formal contexts. Current technologies such as Spring and jQuery were used for the web development. This project is presented as initial work that discusses, besides the system built, several problems related to the creation of valid datasets. Finally, lines for future TFGs are also proposed.
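The core test that FDMine-style algorithms repeat is whether a candidate functional dependency holds over the data. Below is a minimal sketch of that check, assuming rows represented as attribute-value dictionaries; the POI example data is invented for illustration.

```python
def holds_fd(rows, lhs, rhs):
    """Return True if the functional dependency lhs -> rhs holds in rows.

    It holds when rows that agree on all lhs attributes also agree
    on all rhs attributes.
    """
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

# Toy many-valued context: does a POI's category determine its price level?
rows = [
    {"poi": "Louvre", "category": "museum", "price": "high"},
    {"poi": "Prado",  "category": "museum", "price": "high"},
    {"poi": "Retiro", "category": "park",   "price": "free"},
]
print(holds_fd(rows, ["category"], ["price"]))  # True on this toy data
```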
Abstract:
Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014.
Abstract:
This thesis addresses a very important problem in the recruitment domain: matching job offers with candidates. In our case we have thousands of job offers and millions of profiles collected from dedicated sites and supplied by an industrial partner specialized in recruitment. Job offers and candidate profiles on professional social networks are generally written for human readers, namely recruiters and job seekers. Automatically selecting profiles for a job offer therefore runs into several difficulties, which we set out to solve in this thesis. We used natural language processing techniques to automatically extract the relevant information from a job offer in order to build a query for interrogating our profile database. To validate our model for extracting occupation, skills, and experience, we evaluated these three tasks separately against a reference set of one hundred Canadian job offers that we annotated manually. To validate our matching tool, we had a recruitment expert assess the matching results for ten Canadian job offers.
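As a rough illustration of the extraction step described above, the sketch below pulls a skill list and a minimum-experience figure out of an offer with regular expressions; the lexicon, field names, and patterns are invented for the example and are far simpler than a real extraction model.

```python
import re

SKILLS = {"python", "java", "sql", "communication"}  # illustrative lexicon

def extract_offer_fields(text: str) -> dict:
    """Very rough extraction of skills and years of experience."""
    tokens = {t.lower() for t in re.findall(r"[A-Za-z+#]+", text)}
    skills = sorted(tokens & SKILLS)
    m = re.search(r"(\d+)\s*(?:\+\s*)?years?", text, re.IGNORECASE)
    years = int(m.group(1)) if m else None
    return {"skills": skills, "min_experience": years}

offer = "Backend developer, 5+ years experience, strong Python and SQL."
query = extract_offer_fields(offer)
print(query)  # {'skills': ['python', 'sql'], 'min_experience': 5}
```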
Abstract:
Neuroimaging research involves analyses of huge amounts of biological data that may or may not be related to cognition. This relationship is usually approached using univariate methods, and, therefore, correction methods are mandatory for reducing false positives. Nevertheless, the probability of false negatives is also increased. Multivariate frameworks have been proposed to help alleviate this balance. Here we apply multivariate distance matrix regression to the simultaneous analysis of biological and cognitive data, namely, structural connections among 82 brain regions and several latent factors estimating cognitive performance. We tested whether cognitive differences predict distances among individuals with respect to their connectivity pattern. Beginning with 3,321 connections among regions, we selected the 36 edges best predicted by the individuals' cognitive scores. Cognitive scores were related to connectivity distances in both the full (3,321-edge) and reduced (36-edge) connectivity patterns. The selected edges connect regions distributed across the entire brain, and the network defined by these edges supports high-order cognitive processes such as (a) (fluid) executive control, (b) (crystallized) recognition, learning, and language processing, and (c) visuospatial processing. This multivariate study suggests that a widespread but limited number of regions in the human brain supports individual differences in high-level cognitive ability. Hum Brain Mapp, 2016.
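For reference, a generic formulation of the statistic behind multivariate distance matrix regression, in the spirit of the analysis above; this is a sketch of the standard pseudo-F computation, not the authors' exact pipeline.

```python
import numpy as np

def mdmr_pseudo_f(D, X):
    """Pseudo-F statistic of multivariate distance matrix regression.

    D: (n, n) matrix of distances between subjects' connectivity patterns.
    X: (n, p) matrix of cognitive predictors (intercept added below).
    """
    n = D.shape[0]
    X = np.column_stack([np.ones(n), X])           # add intercept column
    C = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    G = -0.5 * C @ (D ** 2) @ C                    # Gower-centered matrix
    H = X @ np.linalg.pinv(X.T @ X) @ X.T          # hat matrix
    m = X.shape[1] - 1                             # number of predictors
    num = np.trace(H @ G @ H) / m
    den = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - m - 1)
    return num / den
```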
Abstract:
Question Answering systems that resort to the Semantic Web as a knowledge base can go well beyond the usual matching of words in documents and, preferably, find a precise answer without requiring the user's help to interpret the documents returned. In this paper, the authors introduce a Dialogue Manager that, through the analysis of the question and the type of expected answer, provides accurate answers to questions posed in Natural Language. The Dialogue Manager not only represents the semantics of the questions, but also the structure of the discourse, including the user's intentions and the questions' context, adding the ability to deal with multiple answers and to provide justified answers. The authors' system performance is evaluated by comparison with similar question answering systems. Although the test suite is of small dimension, the results obtained are very promising.
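A toy illustration of one ingredient of the analysis described above: classifying the expected answer type from the question's interrogative cue. The cue-to-type mapping is invented for the example and far smaller than any real system's.

```python
EXPECTED_TYPE = {        # illustrative mapping, not the authors' ontology
    "who": "PERSON", "where": "LOCATION", "when": "DATE",
    "how many": "NUMBER", "what": "ENTITY",
}

def expected_answer_type(question: str) -> str:
    """Guess the expected answer type from the question's opening cue."""
    q = question.lower()
    for cue, answer_type in EXPECTED_TYPE.items():
        if q.startswith(cue):
            return answer_type
    return "UNKNOWN"

print(expected_answer_type("Who wrote Os Lusíadas?"))  # PERSON
```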
Abstract:
Using Big Data and Natural Language Processing (NLP) tools, this dissertation investigates the narrative strategies that atypical actors can leverage to deal with the adverse reactions they often elicit. Extensive research shows that atypical actors, those who fail to abide by established contextual standards and norms, are subject to skepticism and face a higher risk of rejection. Indeed, atypical actors combine features and behaviors in unconventional ways, thereby generating confusion in the audience and instilling doubts about the legitimacy of their propositions. However, that same atypicality is often cited as the precursor to socio-cultural innovation and as a strategic act to expand the capacity for delivering valued goods and services. Contextualizing the conditions under which atypicality is celebrated or punished has been a significant theoretical challenge for scholars interested in reconciling this tension. Nevertheless, prior work has focused on audience-side factors or on actor-side characteristics that are largely outside an actor's control (e.g., status and reputation). This dissertation demonstrates that atypical actors can use strategically crafted narratives to mitigate the audience's negative response. In particular, when atypical actors evoke conventional features in their story, they are more likely to overcome the illegitimacy discount usually applied to them. Moreover, narratives become successful navigational devices for atypicality when atypical actors use more abstract language, which simplifies classification and gives the audience more flexibility to interpret and understand them.
Abstract:
SmartPantry is an Android application whose goal is to make the virtual management of users' pantries simple and practical. On top of this, it implements a recommender system that suggests recipes suited to the products in the pantry; to do so, the algorithm uses the Damerau-Levenshtein distance to perform Natural Language Processing, interpreting the ingredients in users' pantries and mapping them to a collection of ingredients maintained in a remote database. In this thesis we analyze the design and implementation details of SmartPantry and of the algorithms that support it, paying particular attention to the qualitative aspects of the NLP and recommendation algorithms, and collecting enough data to draw objective conclusions about their precision and effectiveness. The last chapter shows how, despite room for improvement, the algorithms in version 1.0 returned more than fair results.
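The matching step above rests on the Damerau-Levenshtein distance. Below is a minimal sketch: the restricted (optimal string alignment) variant plus a closest-match lookup against an ingredient catalog; the catalog and function names are illustrative, not SmartPantry's.

```python
def dl_distance(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def best_match(user_item: str, catalog: list[str]) -> str:
    """Map a free-text pantry entry to the closest catalog ingredient."""
    return min(catalog, key=lambda c: dl_distance(user_item.lower(), c))

print(best_match("tomatos", ["tomato", "potato", "onion"]))  # tomato
```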
Abstract:
In recent years we increasingly interact with chatbots, software programs that simulate a conversation with a human being using natural language. This thesis aims at a more in-depth study of the topic, starting from how this technology has evolved over the years. It proceeds by analyzing the main applications of bots, dwelling also on the changes brought about by the Covid-19 pandemic, and highlighting the main reasons that lead companies and individuals to use them. In addition, the different types of existing bots are described, and Natural Language Processing, the branch of Artificial Intelligence aimed at understanding natural language, is analyzed. The following chapters describe the CartBot project, a mobile chat application for e-grocery, implemented as a chatbot that guides the customer through an online grocery purchase. The technologies used are described, with particular reference to Google Dialogflow, the software that allows bots to be developed; the design of both the front end and the back end is also discussed, together with the flowchart, a diagram created to define the sequence of actions and steps the bot requires to complete a purchase. Finally, the various subsections of CartBot are described, covering product browsing and order completion, with screenshots of the final interface and the code of some relevant functions.
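As a hedged sketch of the fulfillment side of such a bot: Dialogflow can call an HTTPS webhook and reads the reply text from the fulfillmentText field of the JSON response (the standard Dialogflow ES fulfillment payload). The intent and parameter names below are hypothetical, not taken from CartBot.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})
    if intent == "add.to.cart":  # hypothetical intent name, not CartBot's
        product = params.get("product", "item")
        reply = f"Added {product} to your cart."
    else:
        reply = "Sorry, I didn't get that."
    # Dialogflow ES reads the reply from the fulfillmentText field.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```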
Abstract:
This thesis addresses the problem of soft labeling applied to multi-document summarization. In particular, various techniques are tested for extracting relevant sentences from the documents under consideration, in order to supply the summarization model with the most salient and informative ones for the summary to be generated. The problem arises from the limits of currently available summarization models, which can process only a limited number of sentences; it therefore becomes necessary to filter the most relevant information when working with long documents. To define the importance metric, syntactic methods, semantic methods, and methods based on AMR graph representations are taken as reference. The reference dataset is Multi-LexSum, which includes three granularities of summarization for legal texts. The analysis thus consists of extracting sentences from the documents, measuring the chosen metrics, and passing the result to the state-of-the-art model PRIMERA to produce the summary. The text obtained is then compared with the provided target summary, considered optimal; working under these conditions, the goal is to define optimal upper-bound thresholds for the accuracy of the metrics, which could extend the work to more detailed analyses should they surpass the current state of the art.
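One simple semantic relevance metric of the kind tested above is TF-IDF cosine similarity between each candidate sentence and the target summary. The sketch below keeps the k best-scoring sentences for the summarizer; function and parameter names are illustrative, and this baseline is not the thesis's exact set of metrics.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_sentences(sentences, target_summary, k=5):
    """Score sentences by TF-IDF cosine similarity to a reference summary
    and keep the k most relevant ones."""
    vec = TfidfVectorizer().fit(sentences + [target_summary])
    S = vec.transform(sentences)
    t = vec.transform([target_summary])
    scores = cosine_similarity(S, t).ravel()
    ranked = sorted(zip(scores, sentences), reverse=True)
    return [s for _, s in ranked[:k]]
```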
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble of the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for events such as a bush or natural forest fire, we take the burnt area (BA*), sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: one, the histogram of fire activity is highly skewed; two, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multiple target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false-alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data cleaning and the enhancements to quantify quality of fit (30% spatial, 10% temporal, and 90% mobility reduction) of sensors led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
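For reference, the F-measure mentioned above combines precision and recall; here is a small sketch with invented counts for a fire-event detector.

```python
def f_measure(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall (F1 when beta=1)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Invented example: 42 correctly flagged fire events, 7 false alarms, 3 missed.
print(round(f_measure(42, 7, 3), 3))  # 0.894
```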
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages compared with traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module, and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture that the system could recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system able to help a robotic soccer referee judge a game in real time.
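A minimal sketch of the static-gesture half of such a pipeline using scikit-learn; the random feature matrix stands in for real hand-segmentation features, and the hyperparameters are illustrative (the 99.4% figure above comes from the authors' actual data, not this toy setup).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: each row stands in for a feature vector extracted from
# a segmented hand image; labels are posture classes.
X = np.random.rand(200, 32)
y = np.random.randint(0, 5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```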
Analysis and evaluation of techniques for the extraction of classes in the ontology learning process
Abstract:
This paper analyzes and evaluates, in the context of ontology learning, some techniques to identify and extract candidate terms for the classes of a taxonomy. In addition, this work points out some inconsistencies that may occur in the preprocessing of a text corpus, and proposes techniques for obtaining good candidate terms for the classes of a taxonomy.
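A deliberately simple baseline for the candidate-term extraction discussed above: frequency ranking of non-stopword tokens. The stopword list and corpus are invented for the example; real pipelines add POS filtering, lemmatization, and multiword-term detection.

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "a", "an", "and", "to", "in", "is", "for", "point"}

def candidate_class_terms(corpus: list[str], k: int = 10) -> list[str]:
    """Rank frequent non-stopword terms as candidates for taxonomy classes."""
    counts = Counter(
        tok for doc in corpus
        for tok in re.findall(r"[a-z]+", doc.lower())
        if tok not in STOPWORDS and len(tok) > 2
    )
    return [term for term, _ in counts.most_common(k)]

docs = ["The ontology describes diseases and symptoms.",
        "Symptoms point to diseases in the medical ontology."]
print(candidate_class_terms(docs, k=3))  # e.g. ['ontology', 'diseases', 'symptoms']
```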
Abstract:
Social networks have gained remarkable attention in the last decade. Accessing social network sites such as Twitter, Facebook, LinkedIn and Google+ through the internet and Web 2.0 technologies has become more affordable. People are becoming more interested in and reliant on social networks for information, news, and the opinions of other users on diverse subject matters. The heavy reliance on social network sites causes them to generate massive data characterised by three computational issues, namely size, noise, and dynamism. These issues often make social network data very complex to analyse manually, resulting in the pertinent use of computational means for analysing them. Data mining provides a wide range of techniques for detecting useful knowledge from massive datasets, such as trends, patterns, and rules [44]. Data mining techniques are used for information retrieval, statistical modelling and machine learning. These techniques employ data pre-processing, data analysis, and data interpretation processes in the course of data analysis. This survey discusses different data mining techniques used over the decades in mining diverse aspects of social networks, from historical techniques to up-to-date models, including our novel technique named TRCM. All the techniques covered in this survey are listed in Table 1, together with the tools employed and the names of their authors.