982 results for emotion analysis


Relevance:

100.00%

Publisher:

Abstract:

Sentiment analysis has recently gained popularity in the financial domain thanks to its capability to predict the stock market based on the wisdom of the crowd. Nevertheless, current sentiment indicators are still silos that cannot be combined to gain better insight into the mood of different communities. In this article we propose a Linked Data approach for modelling sentiment and emotions about financial entities. We aim to integrate sentiment information from different communities or providers and to complement existing initiatives such as FIBO. The approach has been validated through the semantic annotation of tweets about several stocks in the Spanish stock market, including their sentiment information.

Relevance:

70.00%

Publisher:

Abstract:

The characterisation of facial expression through landmark-based analysis methods such as FACEM (Pilowsky & Katsikitis, 1994) has a variety of uses in psychiatric and psychological research. In these systems, important structural relationships are extracted from images of facial expressions by the analysis of a pre-defined set of feature points. These relationship measures may then be used, for instance, to assess the degree of variability and similarity between different facial expressions of emotion. FaceXpress is a multimedia software suite that provides a generalised workbench for landmark-based facial emotion analysis and stimulus manipulation. It is a flexible tool that is designed to be specialised at runtime by the user. While FaceXpress has been used to implement the FACEM process, it can also be configured to support any other similar, arbitrary system for quantifying human facial emotion. FaceXpress also implements an integrated set of image processing tools and specialised tools for facial expression stimulus production including facial morphing routines and the generation of expression-representative line drawings from photographs.
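Landmark-based measures of the kind FACEM derives can be pictured as normalized distances between named feature points. A minimal, illustrative sketch (the landmark names, measured pairs, and normalization choice below are hypothetical, not the FACEM specification):

```python
import math

def landmark_measures(points, pairs, norm_pair):
    """Normalized inter-landmark distances.

    points: dict mapping landmark name -> (x, y) image coordinates.
    pairs: list of (name, name) tuples to measure.
    norm_pair: landmark pair whose distance normalizes all measures,
    making them invariant to image scale.
    """
    def dist(a, b):
        (x1, y1), (x2, y2) = points[a], points[b]
        return math.hypot(x2 - x1, y2 - y1)

    scale = dist(*norm_pair)
    return {(a, b): dist(a, b) / scale for a, b in pairs}

# Hypothetical landmarks: mouth width as a fraction of inter-ocular distance.
points = {"eye_l": (0, 0), "eye_r": (4, 0), "mouth_l": (1, -3), "mouth_r": (3, -3)}
measures = landmark_measures(points, [("mouth_l", "mouth_r")], ("eye_l", "eye_r"))
```

Such ratios can then be compared across images to assess variability and similarity between expressions, as the abstract describes.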

Relevance:

70.00%

Publisher:

Abstract:

The Audio/Visual Emotion Challenge and Workshop (AVEC 2011) is the first competition event aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. This paper first describes the challenge participation conditions. Next follows the data used – the SEMAINE corpus – and its partitioning into train, development, and test partitions for the challenge with labelling in four dimensions, namely activity, expectation, power, and valence. Further, audio and video baseline features are introduced as well as baseline results that use these features for the three sub-challenges of audio, video, and audiovisual emotion recognition.

Relevance:

70.00%

Publisher:

Abstract:

Research in emotion analysis of text suggests that emotion-lexicon-based features are superior to corpus-based n-gram features. However, the static nature of general-purpose emotion lexicons makes them less suited to social media analysis, where the ability to adapt to changes in vocabulary usage and context is crucial. In this paper we propose a set of methods to extract a word-emotion lexicon automatically from an emotion-labelled corpus of tweets. Our results confirm that the features derived from these lexicons outperform standard bag-of-words features when applied to an emotion classification task. Furthermore, a comparative analysis with both manually crafted lexicons and a state-of-the-art lexicon generated using pointwise mutual information shows that the lexicons generated by the proposed methods lead to significantly better classification performance.
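The pointwise-mutual-information baseline mentioned above can be sketched from co-occurrence counts over a labelled corpus. A minimal, illustrative version (the tokenized input format and the unsmoothed PMI variant are assumptions, not the paper's exact method):

```python
import math
from collections import Counter

def pmi_lexicon(tweets):
    """Word-emotion lexicon via pointwise mutual information.

    tweets: iterable of (tokens, emotion_label) pairs.
    Returns {word: {emotion: PMI(word, emotion)}}, where
    PMI(w, e) = log( p(w, e) / (p(w) * p(e)) ), estimated from
    token-level co-occurrence counts.
    """
    word, emo, joint = Counter(), Counter(), Counter()
    n = 0
    for tokens, label in tweets:
        for w in tokens:
            word[w] += 1
            emo[label] += 1
            joint[(w, label)] += 1
            n += 1
    lexicon = {}
    for (w, e), c in joint.items():
        # Positive PMI: the word is over-represented in tweets with that emotion.
        lexicon.setdefault(w, {})[e] = math.log((c * n) / (word[w] * emo[e]))
    return lexicon
```

A word that occurs equally often under every label scores zero, so only emotion-discriminative words contribute strongly to the resulting features.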

Relevance:

70.00%

Publisher:

Abstract:

Sentiment and emotion analysis strongly depends on high-quality language resources, especially sentiment dictionaries. These resources are usually scattered, heterogeneous, and limited to specific domains of application by simple algorithms. The EUROSENTIMENT project addresses these issues by 1) developing a common language resource representation model for sentiment analysis, together with APIs for sentiment analysis services, based on established Linked Data formats (lemon, Marl, NIF and ONYX), and 2) creating a Language Resource Pool (LRP) that makes existing scattered language resources and services for sentiment analysis available to the community in an interoperable way. In this paper we describe the language resources and services available in the LRP and some sample applications that can be developed on top of the EUROSENTIMENT LRP.

Relevance:

70.00%

Publisher:

Abstract:

Extracting opinions and emotions from text is becoming increasingly important, especially since the advent of micro-blogging and social networking. Opinion mining is particularly popular and now gathers many public services, datasets, and lexical resources. Unfortunately, there are few lexical and semantic resources available for emotion recognition that could foster the development of new emotion-aware services and applications. The diversity of theories of emotion and the absence of a common vocabulary are two of the main barriers to the development of such resources. This situation motivated the creation of Onyx, a semantic vocabulary of emotions with a focus on lexical resources and emotion analysis services. It follows a linguistic Linked Data approach, is aligned with the Provenance Ontology, and has been integrated with the Lexicon Model for Ontologies (lemon), a popular RDF model for representing lexical entries. The approach also offers a new and interesting way to work with different theories of emotion. As part of this work, Onyx has been aligned with EmotionML and WordNet-Affect.

Relevance:

70.00%

Publisher:

Abstract:

Recent years have witnessed a surge of interest in computational methods for affect, ranging from opinion mining, to subjectivity detection, to sentiment and emotion analysis. This article presents a brief overview of the latest trends in the field and describes the manner in which the articles contained in the special issue contribute to the advancement of the area. Finally, we comment on the current challenges and envisaged developments of the subjectivity and sentiment analysis fields, as well as their application to other Natural Language Processing tasks and related domains.

Relevance:

60.00%

Publisher:

Abstract:

This work presents a first description of voice and emotion for European Portuguese. Building on studies carried out for several languages (Finnish, English, German), we examine the voice-related parameters that vary with the emotion being expressed. We thus analyse parameters related to fundamental frequency (F0), to perturbation (jitter), to amplitude (shimmer), and to noise (HNR). This is a comprehensive study: it examines voice and its relation to, and variation with, emotion along three strands: voice pathology of psychogenic (emotional) origin, emotion produced by actors, and the analysis of spontaneous emotion. As pioneering work in this area, it obtains values for all these types of production. We stress that our work analyses voice alone, without recourse to the speakers' facial expressions or posture. To enable comparative studies across the data collected for each corpus (pathology, acted emotion, and spontaneous emotion), we always used the same analysis tools (Praat; SFS; SPSS; the Hoarseness Diagram, for the analysis of pathological voice; and the Feeltrace system, for spontaneous emotions). The studies and analyses of emotion produced by actors are complemented by perception tests administered to native speakers of American English and to speakers of European Portuguese. This test, together with the analysis of spontaneous emotion, allowed us to extract findings specific to the Portuguese language. Although many characteristics of both the expression and the perception of emotion are considered universal, something peculiar can be observed in Portuguese: the values for the neutral expression, sadness, and happiness are all very close together, unlike what happens in other languages.
Moreover, these three emotions (from distinct families) are the ones that both groups of informants found hardest to distinguish in the perception test. This may be the distinctive trait of emotional expression in European Portuguese, and it may be linked to cultural factors. This work also shows that emotion expressed by an actor comes close to spontaneous emotion; nevertheless, some parameters take different values, because actors tend to exaggerate the emotion. The original corpora created for this work will be an important resource to make available for future analyses in an area that is still under-served by scientific research in Portugal. Both the corpora and the results obtained may prove useful in areas such as the speech sciences, robotics, and teaching.
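The perturbation measures named above have simple definitions: local jitter is the mean absolute difference between consecutive glottal periods relative to the mean period, and shimmer is the same ratio computed on cycle peak amplitudes. A stdlib-only sketch (the input values are illustrative; tools such as Praat, used in the study, compute several further variants):

```python
def relative_perturbation(values):
    """Mean absolute difference between consecutive values, divided by
    the mean value. Applied to period durations this is local jitter;
    applied to cycle peak amplitudes it is local shimmer."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values))

# Illustrative glottal period durations in seconds (a ~100 Hz voice).
periods = [0.0100, 0.0102, 0.0099, 0.0101]
jitter = relative_perturbation(periods)
```

A perfectly regular voice yields zero; emotional arousal and psychogenic voice pathology both tend to shift these perturbation values, which is what the comparisons across the three corpora measure.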

Relevance:

60.00%

Publisher:

Abstract:

In conversation, language is supported by non-verbal communication, which plays a central role in human social behaviour by providing feedback and managing synchronisation, thereby reinforcing the content and meaning of the discourse. Indeed, 55% of a message is conveyed by facial expressions, whereas only 7% is carried by the linguistic message and 38% by paralanguage. Information about a person's emotional state is generally inferred from facial attributes. However, there are few measurement instruments specifically dedicated to this kind of behaviour. In computer vision, most interest lies in developing systems for the automatic analysis of prototypical facial expressions, for applications in human-machine interaction, meeting-video analysis, and security, and even for clinical applications. In the present research, in order to capture such observable indicators, we set out to implement a system able to build a consistent and relatively exhaustive source of visual information, capable of distinguishing the features of a face and their deformations, and thus of recognising the presence or absence of a particular facial action. A review of existing techniques led us to explore two different approaches. The first is appearance-based: gradient orientations are used to derive a dense representation of facial attributes. Beyond the facial representation itself, the main difficulty for a system intended to be general is devising a generic model that is independent of the person's identity and of the geometry and size of the face.
The approach we propose rests on building a prototypical reference frame through SIFT-flow registration, which this thesis shows to be superior to conventional alignment based on eye positions. In a second approach, we turn to a geometric model in which facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent from one person to another but also dependent on the context itself, with this approach we present a personalised facial-expression recognition system whose overall performance depends directly on the performance of tracking a set of characteristic facial points. This tracking is carried out by a modified form of a disparity-estimation technique involving Gabor phase. In this thesis, we propose a redefinition of the confidence measure and introduce an iterative, conditional displacement-estimation procedure, which together yield more robust tracking than the original methods.
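The Gabor-phase representation used for tracking can be illustrated with a plain complex Gabor kernel: a Gaussian envelope modulating a complex sinusoid, whose response phase varies nearly linearly with small displacements along the filter orientation. A stdlib-only sketch (the parameter values are illustrative; the thesis's actual filter bank and disparity estimator are not reproduced here):

```python
import cmath
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Complex Gabor kernel of shape size x size (size odd).

    A Gaussian envelope of width sigma multiplies a complex carrier of
    the given wavelength, oriented at angle theta (radians).
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Coordinate along the carrier direction.
            xr = x * math.cos(theta) + y * math.sin(theta)
            envelope = math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
            carrier = cmath.exp(1j * 2.0 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel
```

Convolving an image patch with such a kernel yields a complex response whose argument (`cmath.phase`) is the Gabor phase that disparity-based trackers exploit.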

Relevance:

60.00%

Publisher:

Abstract:

There are several different standardised and widespread formats for representing emotions. However, there is no standard semantic model yet. This paper presents a new ontology, called Onyx, that aims to become such a standard while adding concepts from the latest Semantic Web models. In particular, the ontology focuses on the representation of emotion analysis results, but the model is abstract and inherits from previous standards and formats. It can thus be used as a reference representation of emotions in any future application or ontology. To prove this, we have translated resources from the EmotionML representation to Onyx. We also present several ways in which developers could benefit from using this ontology instead of an ad-hoc representation. Our ultimate goal is to foster the use of semantic technologies for emotion analysis while following the Linked Data ideals.
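An ontology-based analysis result of this kind boils down to a handful of RDF triples. Below is a minimal sketch that serializes one hypothetical result as N-Triples strings; the namespace URI and property names are assumptions modelled on the Onyx naming style, not verified against the released ontology, and a real application would use an RDF library such as rdflib instead:

```python
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
ONYX = "http://www.gsi.dit.upm.es/ontologies/onyx/ns#"  # assumed namespace
EX = "http://example.org/"                              # hypothetical data namespace

def triple(subject, predicate, obj):
    """One N-Triples statement; objects starting with 'http' become IRIs."""
    o = f"<{obj}>" if obj.startswith("http") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {o} ."

# One hypothetical result: a tweet annotated with the emotion category "joy".
triples = [
    triple(EX + "analysis1", RDF_TYPE, ONYX + "EmotionAnalysis"),
    triple(EX + "tweet1", ONYX + "hasEmotionSet", EX + "set1"),
    triple(EX + "set1", ONYX + "hasEmotion", EX + "emotion1"),
    triple(EX + "emotion1", ONYX + "hasEmotionCategory", EX + "joy"),
]
document = "\n".join(triples)
```

Because every statement is a plain subject-predicate-object triple, results from different providers can be merged into one graph, which is the interoperability argument the abstract makes.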

Relevance:

60.00%

Publisher:

Abstract:

Emotion-based analysis has attracted a lot of interest, particularly in areas such as forensics, medicine, music, psychology, and human-machine interfaces. Following this trend, facial analysis (either automatic or human-based) is the most commonly investigated approach, since this type of data can easily be collected and is well accepted in the literature as a metric for inferring emotional states. Despite this popularity, due to several constraints found in real-world scenarios (e.g. lighting, complex backgrounds, facial hair, and so on), automatically and accurately obtaining affective information from faces is very challenging. This work presents a framework that aims to analyse emotional experiences through naturally generated facial expressions. Our main contribution is a new 4-dimensional model describing emotional experiences in terms of appraisal, facial expressions, mood, and subjective experiences. In addition, we present an experiment using a new protocol proposed to obtain spontaneous emotional reactions. The results suggest that the initial emotional state described by the participants was different from that described after exposure to the eliciting stimulus, showing that the stimuli used were capable of inducing the expected emotional states in most individuals. Moreover, our results point out that spontaneous facial reactions to emotions are very different from prototypic expressions, owing to the lack of expressiveness in the latter.