968 results for Facial Expression
Abstract:
Spontaneous facial expressions differ from posed ones in appearance, timing, and accompanying head movements. Still images cannot provide timing or head-movement information directly. Indirectly, however, the distances between key points on a face, extracted from a still image using active shape models, can capture some movement and pose changes. This information is superposed on information about non-rigid facial movement that is also part of the expression. Does geometric information improve the discrimination between spontaneous and posed facial expressions arising from discrete emotions? We investigate the performance of a machine vision system for discriminating between posed and spontaneous versions of six basic emotions that uses SIFT appearance-based features and FAP geometric features. Experimental results on the NVIE database demonstrate that fusing in geometric information leads only to marginal improvement over appearance features alone. Using the fused features, surprise is the easiest emotion to distinguish (83.4% accuracy), while disgust is the most difficult (76.1%). Our results show that the facial regions important for discriminating the posed from the spontaneous version of an emotion differ from those important for classifying that emotion against other emotions. The distribution of the selected SIFT features shows that the mouth is more important for sadness and the nose for surprise, whereas both the nose and mouth are important for disgust, fear, and happiness. The eyebrows, eyes, nose, and mouth are all important for anger.
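To make the appearance/geometry fusion concrete, here is a minimal Python sketch of concatenating SIFT descriptors sampled at facial landmarks with simple inter-landmark distances; the paper's landmark detector, the exact FAP feature set, and its classifier settings are not reproduced, and all names below are illustrative.

    import cv2
    import numpy as np
    from itertools import combinations

    def fused_features(gray, landmarks, patch_size=16.0):
        # Appearance: one 128-D SIFT descriptor per landmark
        # (requires OpenCV 4.4+ for cv2.SIFT_create).
        sift = cv2.SIFT_create()
        keypoints = [cv2.KeyPoint(float(x), float(y), patch_size)
                     for x, y in landmarks]
        _, desc = sift.compute(gray, keypoints)
        # Geometry: pairwise inter-landmark distances as a stand-in
        # for FAP-style measurements (illustrative, not the FAP set).
        dists = [float(np.hypot(px - qx, py - qy))
                 for (px, py), (qx, qy) in combinations(landmarks, 2)]
        return np.concatenate([desc.ravel(), dists])

A posed-versus-spontaneous classifier would then be trained on these fused vectors, for example with a linear SVM.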
Abstract:
We examine methodologies and methods that apply to multi-level research in the learning sciences. In so doing, we describe how multiple theoretical frameworks inform the use of different methods that apply to social levels involving space-time relationships that are not consciously accessible as social life is enacted. Most of the methods involve analyses of video and audio files. Within a framework of interpretive research, we present a methodology of event-oriented social science, which employs video ethnography, narrative, conversation analysis, prosody analysis, and facial expression analysis. We illustrate multi-method research with an examination of the role of emotions in teaching and learning. Conversation and prosody analyses augment facial expression analysis and ethnography. We conclude with an exploration of ways in which multi-level studies can be complemented with neural-level analyses.
In the pursuit of effective affective computing: the relationship between features and registration
Abstract:
For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment has so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients (HOG) or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This raises the question: Aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment (subject-dependent active appearance models versus subject-independent CLMs) on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
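As a concrete illustration of the comparison the abstract describes, the sketch below contrasts a raw-pixel representation of an aligned face crop with a HOG descriptor; the parameters are illustrative, not those used in the paper.

    import numpy as np
    from skimage.feature import hog
    from skimage.transform import resize

    def pixel_rep(face, shape=(64, 64)):
        # Raw pixels: sufficient when alignment is near-perfect
        # and illumination is consistent.
        return resize(face, shape).ravel()

    def hog_rep(face, shape=(64, 64)):
        # HOG: local gradient-orientation histograms whose spatial
        # pooling buys robustness to small residual alignment error.
        return hog(resize(face, shape), orientations=8,
                   pixels_per_cell=(8, 8), cells_per_block=(2, 2))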
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
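The core reinterpretation can be sketched as follows, under the simplifying assumption that the V1-inspired descriptor acts as a fixed linear map \(\Phi\) on the vectorized image \(x\) (the paper's Kronecker-basis construction is more specific). A linear SVM on the features is then a linear classifier on raw pixels,
\[
f(x) = w^{\top}\Phi x + b = v^{\top}x + b, \qquad v = \Phi^{\top}w,
\]
and its margin term becomes a weighted norm in the pixel domain:
\[
\min_{v}\;\tfrac{1}{2}\, v^{\top}\!\left(\Phi^{\top}\Phi\right)^{+} v \;+\; C\sum_{i}\max\!\bigl(0,\, 1 - y_i\,(v^{\top}x_i + b)\bigr),
\]
with \(v\) constrained to the row space of \(\Phi\); this is the sense in which such features induce a weighted margin on a basis expansion of the image.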
Abstract:
Non-rigid face alignment is a very important task in a wide range of applications, but existing tracking-based non-rigid face alignment methods are either inaccurate or require a person-specific model. This dissertation develops simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency, and robustness to varying image conditions, while requiring only a generic model.
Abstract:
Sensing the mental, physical, and emotional demands of a driving task is of primary importance in road safety research and for the effective design of in-vehicle information systems (IVIS). In particular, the need for cars capable of sensing and reacting to the emotional state of the driver has been repeatedly advocated in the literature. Algorithms and sensors to identify patterns of human behavior, such as gestures, speech, eye gaze, and facial expression, are becoming available using low-cost hardware. This paper presents a new system that uses surrogate measures, facial expression (emotion) and head pose and movements (intention), to infer task difficulty in a driving situation. Eleven drivers were recruited and observed in a simulated driving task that involved several pre-programmed events aimed at eliciting emotional reactions, such as being stuck behind slower vehicles, intersections and roundabouts, and potentially dangerous situations. The resulting system, combining facial expression and head pose classification, is capable of recognizing dangerous events (such as crashes and near misses) and stressful situations (e.g., intersections and giving way) that occur during the simulated drive.
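A minimal sketch of how two such surrogate channels might be combined per frame; the fusion weights, threshold, and event rule below are illustrative assumptions, not the system's actual parameters.

    import numpy as np

    def fused_score(p_expr, p_head, w_expr=0.6, w_head=0.4):
        # Late fusion of per-frame stress probabilities from the
        # facial-expression and head-pose classifiers (weights assumed).
        return w_expr * np.asarray(p_expr) + w_head * np.asarray(p_head)

    def flag_events(scores, threshold=0.7, min_frames=15):
        # Flag an event when the fused score stays above the
        # threshold for at least min_frames consecutive frames.
        run, events = 0, []
        for i, above in enumerate(np.asarray(scores) > threshold):
            run = run + 1 if above else 0
            if run == min_frames:
                events.append(i - min_frames + 1)
        return events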
Abstract:
Sociological approaches to inquiry on emotion in educational settings are growing. Despite a long tradition of research and theory in disciplines such as psychology and sociology, the methods and approaches for naturalistic investigation of emotion in educational settings are in a developmental phase. In this article, recent empirical studies on emotion in educational contexts are canvassed. The discussion focuses on the use of multiple methods within research conducted in high school and university classrooms, highlighting recent methodological progress. The methods discussed include facial expression analysis, analysis of verbal and non-verbal conduct, and self-report methods. Analyses drawn from different studies, informed by perspectives from microsociology, highlight the strengths and limitations of any one method. The power and limitations of multi-method approaches are discussed.
Abstract:
H. Stahl was the last president of the Jewish Community in Berlin. He is seated in a chair with wooden arm supports. He seems a small man, an impression emphasized by a large expanse of plain, tan background. The facial expression is tense, with deeply furrowed brows.
Abstract:
Facial expressions are the most expressive way to display emotions. Many algorithms have been proposed that employ a particular set of people (usually a database) to both train and test their model. This paper focuses on the challenging task of database-independent emotion recognition, a generalized case of subject-independent emotion recognition. The emotion recognition system employed in this work is a Meta-Cognitive Neuro-Fuzzy Inference System (McFIS). McFIS has two components: a neuro-fuzzy inference system, the cognitive component, and a self-regulatory learning mechanism, the meta-cognitive component. The meta-cognitive component monitors the knowledge in the neuro-fuzzy inference system and decides what to learn, when to learn, and how to learn the training samples efficiently. For each sample, McFIS decides whether to delete the sample without learning it, use it to add or prune a rule or update the network parameters, or reserve it for future use. This helps the network avoid over-training and, as a result, improves its generalization performance on untrained databases. In this study, we extract pixel-based emotion features from the well-known JAFFE (Japanese Female Facial Expression) and TFEID (Taiwanese Facial Expression Image Database) databases. Two sets of experiments are conducted. First, we study the individual performance of both databases on McFIS using a 5-fold cross-validation study. Next, in order to study generalization performance, McFIS trained on the JAFFE database is tested on TFEID and vice versa. The performance comparison against an SVM classifier in both experiments gives promising results.
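The what-to-learn/when-to-learn decision can be sketched as a triage over prediction errors, as below; the margins are illustrative, not the paper's values.

    import numpy as np

    def triage_samples(errors, delete_margin=0.05, learn_margin=0.3):
        # McFIS-style triage: delete samples the network already
        # predicts well, learn novel ones now (add/prune/update rules),
        # and reserve the borderline ones for a later pass.
        errors = np.asarray(errors)
        delete = np.flatnonzero(errors < delete_margin)
        learn = np.flatnonzero(errors > learn_margin)
        reserve = np.flatnonzero((errors >= delete_margin) &
                                 (errors <= learn_margin))
        return delete, learn, reserve

    # Toy usage on four samples' prediction errors:
    delete, learn, reserve = triage_samples([0.01, 0.5, 0.2, 0.9])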
Abstract:
Whether facial identity and facial expression are processed independently has long been controversial. Experimental, neuropsychological, functional-imaging, and cell-recording studies have all failed to consistently support either independent or interdependent processing. The present study proposes that the familiarity and discriminability of facial identity and expression are important variables mediating the relation between facial identity and facial expression recognition. Effects of familiarity on the recognition of facial identity and expression have been examined (e.g., Ganel & Goshen-Gottstein, 2004), but the role of discriminability in the recognition of facial identity and expression has not yet been carefully examined. To examine this role, 8 experiments were conducted with Garner's speeded classification task on the recognition of identity and expression of unfamiliar faces. The discriminability of facial identity and expression was manipulated, and measurements of Garner interference and facilitation indicated the following. 1. The discriminability of facial identity and expression mediates the relation between facial identity and expression recognition: four possible discriminability combinations between identity and expression predicted four interference patterns between them, and low discriminability accounted for the interference in either the facial identity or the facial expression judgment task. 2. Eye-movement measurements indicated that, in both facial identity and facial expression recognition, low discriminability led to a narrowly distributed eye-fixation pattern while high discriminability led to a widely distributed one. 3. By combining the morphing technique with the Garner paradigm, Study 2 demonstrated a linear relation between discriminability and Garner facilitation effects, confirming the discriminability effects in the measurements of Garner facilitation. 4. By providing varying information about facial expression, Study 2 revealed that this varying information improved the discriminability of facial expression and thereby enhanced its recognition. All the results indicated that the discriminability of facial identity and expression can mediate the independent or interdependent processing between them, and discriminability effects on the recognition of identity and expression of unfamiliar faces were identified. The interference and facilitation effects both indicated that the dimensional relation between facial identity and expression is separable but not asymmetric, as claimed by previous studies (Schweinberger et al., 1998, 1999). Strictly independent and strictly interdependent processing of facial identity and expression are both implausible; the discriminability of identity and expression mediates the relation between them. The discriminability effects revealed in the present study can explain the conflicts among existing findings well.
Abstract:
A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure; this provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras. Experimental results are reported.
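One common form such a robust minimization can take is iteratively reweighted least squares with Huber-style weights, sketched below for a linearized registration step; this illustrates the general technique, not the paper's exact procedure.

    import numpy as np

    def robust_solve(A, b, iters=20, c=1.345):
        # IRLS: down-weight large residuals (e.g. from occlusions or
        # specular highlights) instead of letting them dominate the fit.
        p = np.linalg.lstsq(A, b, rcond=None)[0]
        for _ in range(iters):
            r = A @ p - b
            s = 1.4826 * np.median(np.abs(r)) + 1e-12        # robust (MAD) scale
            w = np.minimum(1.0, c * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
            sw = np.sqrt(w)
            p = np.linalg.lstsq(A * sw[:, None], sw * b, rcond=None)[0]
        return p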
Abstract:
For many years psychological research on facial expression of emotion has relied heavily on a recognition paradigm based on posed static photographs. There is growing evidence that there may be fundamental differences between the expressions depicted in such stimuli and the emotional expressions present in everyday life. Affective computing, with its pragmatic emphasis on realism, needs examples of natural emotion. This paper describes a unique database containing recordings of mild to moderate emotionally coloured responses to a series of laboratory-based emotion induction tasks. The recordings are accompanied by information on self-reported emotion and intensity, continuous trace-style ratings of valence and intensity, the sex of the participant, the sex of the experimenter, and the active or passive nature of the induction task, and the database gives researchers the opportunity to compare expressions from people from more than one culture.
Abstract:
This paper explores the application of semi-qualitative probabilistic networks (SQPNs) that combine numeric and qualitative information to computer vision problems. Our version of SQPN allows qualitative influences and imprecise probability measures using intervals. We describe an Imprecise Dirichlet model for parameter learning and an iterative algorithm for evaluating posterior probabilities, maximum a posteriori and most probable explanations. Experiments on facial expression recognition and image segmentation problems are performed using real data.
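For reference, a standard statement of Walley's Imprecise Dirichlet Model, on which such interval-valued parameter learning typically builds (the paper's exact parameterization may differ): with \(n_j\) counts for category \(j\) out of \(N\) observations and hyperparameter \(s > 0\), the probability interval is
\[
\underline{P}(j) = \frac{n_j}{N + s}, \qquad \overline{P}(j) = \frac{n_j + s}{N + s},
\]
so the interval width \(s/(N+s)\) shrinks as data accumulate.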
Abstract:
A key assumption of dual process theory is that reasoning is an explicit, effortful, deliberative process. The present study offers evidence for an implicit, possibly intuitive component of reasoning. Participants were shown sentences embedded in logically valid or invalid arguments. Participants were not asked to reason but instead rated the sentences for liking (Experiment 1) and physical brightness (Experiments 2-3). Sentences that followed logically from preceding sentences were judged to be more likable and brighter. Two other factors thought to be linked to implicit processing, sentence believability and facial expression, had similar effects on liking and brightness ratings. The authors conclude that sensitivity to logical structure was implicit, occurring potentially automatically and outside of awareness. They discuss the results within a fluency misattribution framework and make reference to the literature on discourse comprehension.
Abstract:
This work presents a first description of voice and emotion for European Portuguese. Drawing on studies carried out in several languages (Finnish, English, German), we examine the voice parameters that vary with the emotion being expressed: fundamental frequency (F0), perturbation (jitter), amplitude perturbation (shimmer), and noise-related measures (HNR). This is a broad study of voice and its relation to and variation with emotion, pursued along three strands: voice pathology of psychogenic (emotional) origin, emotion produced by actors, and the analysis of spontaneous emotion. As pioneering work in this area, it obtains values for all of these types of production. We emphasize that our work analyzes voice alone, without recourse to facial expression or the posture of the speakers. So that comparative studies could be carried out on the data collected for each corpus (pathology, acted emotion, spontaneous emotion), we used the same analysis methods throughout (Praat, SFS, SPSS, the Hoarseness Diagram for the analysis of pathological voice, and the Feeltrace system for spontaneous emotion). The studies and analyses of actor-produced emotion are complemented by perception tests administered to native speakers of American English and to speakers of European Portuguese. This test, together with the analysis of spontaneous emotion, allowed us to obtain findings specific to Portuguese. Although many characteristics of both the expression and the perception of emotion are considered universal, something peculiar emerges in Portuguese: the values for neutral expression, sadness, and happiness are all very close, unlike what happens in other languages. Moreover, these three emotions (from distinct families) are the ones that cause the most difficulty (for both groups of informants) in the perception test. This may be the distinctive feature of emotional expression in European Portuguese, and it may be linked to cultural factors. The work also shows that emotion expressed by an actor approximates spontaneous emotion, although some parameters show different values because actors tend to exaggerate the emotion. This work created original corpora that will be an important resource to make available for future analyses in an area that is still under-researched in Portugal. Both the corpora and the results obtained may prove useful in areas such as speech science, robotics, and teaching.
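For concreteness, the two perturbation measures analyzed here have simple standard definitions (the "local" variants reported by Praat); a minimal sketch with toy data in place of periods extracted from real recordings:

    import numpy as np

    def jitter_local(periods):
        # Mean absolute difference of consecutive glottal periods,
        # divided by the mean period.
        t = np.asarray(periods, dtype=float)
        return np.mean(np.abs(np.diff(t))) / np.mean(t)

    def shimmer_local(amplitudes):
        # Mean absolute difference of consecutive peak amplitudes,
        # divided by the mean amplitude.
        a = np.asarray(amplitudes, dtype=float)
        return np.mean(np.abs(np.diff(a))) / np.mean(a)

    # Toy example: a slightly irregular 200 Hz voice.
    rng = np.random.default_rng(0)
    periods = 0.005 + 0.00005 * rng.standard_normal(100)
    amps = 0.10 + 0.002 * rng.standard_normal(100)
    print(f"jitter (local): {100 * jitter_local(periods):.2f}%")
    print(f"shimmer (local): {100 * shimmer_local(amps):.2f}%")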