42 results for Facial emotion recognition
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
As part of the Affective Computing research field, the development of automatic affective recognition systems can enhance human-computer interaction by allowing the creation of interfaces that react to the user's emotional state. To that end, this Master's thesis brings affect recognition to today's most widely used human-computer interface, the mobile device, by developing a facial expression recognition system able to perform detection under the difficult viewing-angle and illumination conditions that interaction with a mobile device entails. Moreover, this Master's thesis proposes to combine emotional features detected from expression with contextual information about the current situation to infer a complex and extensive emotional state of the user. Thus, a cognitive computational model of emotion is defined that provides a multicomponential affective state of the user through the integration of the detected emotional features into appraisal processes. To account for individual differences in emotional experience, these processes can be adapted to the culture and personality of the user.
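As an illustration of the appraisal-integration idea described in this abstract, the following minimal Python sketch blends detected expression probabilities with contextual weights and a per-user personality bias; all names, weights and the 0.7/0.3 mix are hypothetical and not taken from the thesis.

    # Hypothetical sketch of appraisal integration; not the thesis's model.
    EXPRESSIONS = ["joy", "anger", "sadness", "surprise", "fear", "disgust"]

    def appraise(expression_probs, context_weight, personality_bias):
        """Blend detected expression probabilities with contextual weights,
        modulated by a per-user personality bias, into one affective state."""
        state = {}
        for emo in EXPRESSIONS:
            facial = expression_probs.get(emo, 0.0)
            context = context_weight.get(emo, 0.0)
            bias = personality_bias.get(emo, 1.0)
            state[emo] = bias * (0.7 * facial + 0.3 * context)  # illustrative mix
        total = sum(state.values()) or 1.0
        return {emo: v / total for emo, v in state.items()}

    # Example: a smile detected during a situation weighted toward frustration
    print(appraise({"joy": 0.8, "anger": 0.1},
                   {"anger": 0.5},
                   {"joy": 0.9, "anger": 1.2}))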
Abstract:
This report presents the work carried out in an attempt to build a facial recognition system.
Abstract:
This paper is divided into three main parts. The first describes what the study aims to show, namely the application of current facial recognition systems to a database of works of art; it explains which methods are used and why this study is of interest. The second presents the data obtained in the experiment in detail, with images and graphs to aid comprehension. The last part contains the discussion of the results of the analysis and the subsequent conclusions.
Abstract:
Study and implementation of the best-known facial recognition techniques: eigenfaces, fisherfaces and ASM. The result is an application implemented in Matlab that performs person recognition in real time.
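For readers unfamiliar with the first of these techniques, here is a minimal eigenfaces sketch in Python (the application itself was written in Matlab); it assumes `faces` is an (n_images, height*width) array of flattened grayscale face images.

    import numpy as np

    def eigenfaces(faces, k=20):
        """Compute the top-k eigenfaces of a flattened face dataset."""
        mean = faces.mean(axis=0)
        centered = faces - mean
        # SVD of the centered data yields the principal components (eigenfaces)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:k]                  # rows are eigenfaces
        weights = centered @ basis.T    # projection of each training face
        return mean, basis, weights

    def recognize(probe, mean, basis, weights):
        """Return the index of the nearest training face in eigenface space."""
        w = (probe - mean) @ basis.T
        return int(np.argmin(np.linalg.norm(weights - w, axis=1)))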
Abstract:
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to make interaction more natural and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face, and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic description of the facial appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
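The contrast between the two representations could be prototyped along the following lines; this is an illustrative scikit-learn sketch under assumed inputs (image and landmark arrays), not the authors' experimental pipeline with its 9 trait evaluations and 4 classification rules.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def holistic_features(images):
        # Full appearance: the raw pixel vector of each face image
        return images.reshape(len(images), -1)

    def structural_features(landmarks):
        # Pairwise distances between facial salient points, (n, p, 2) input
        diffs = landmarks[:, :, None, :] - landmarks[:, None, :, :]
        return np.linalg.norm(diffs, axis=-1).reshape(len(landmarks), -1)

    def evaluate(features, trait_labels):
        # trait_labels: e.g. binarized "dominant" vs "non-dominant" judgments
        return cross_val_score(SVC(kernel="rbf"), features,
                               trait_labels, cv=5).mean()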
Abstract:
Research on face recognition and social judgment usually addresses the manipulation of facial features (eyes, nose, mouth, etc.). Using a procedure based on a Stroop-like task, Montepare and Opeyo (J Nonverbal Behav 26(1):43-59, 2002) established a hierarchy of the relative salience of cues based on facial attributes when differentiating faces. Using the same perceptual interference task, we established a hierarchy of facial features. Twenty-three participants (13 men and 10 women) volunteered for an experiment comparing pairs of frontal faces. The participants had to judge whether the eyes, nose, mouth and chin in the pair of images were the same or different. The factors manipulated were the target-distractive factor (4 face components × 3 distractive factors), interference (absent vs. present) and correct answer (same vs. different). The analysis of reaction times and errors showed that the eyes and mouth were processed before the chin and nose, highlighting the critical importance of the eyes and mouth, as shown by previous research.
Abstract:
In this paper, we propose a new supervised linear feature extraction technique for multiclass classification problems that is especially suited to the nearest neighbor classifier (NN). The problem of finding the optimal linear projection matrix is defined as a classification problem, and the Adaboost algorithm is used to compute it in an iterative way. This strategy allows the introduction of a multitask learning (MTL) criterion in the method and results in a solution that makes no assumptions about the data distribution and is especially appropriate for solving the small sample size problem. The performance of the method is illustrated by an application to the face recognition problem. The experiments show that the representation obtained following the multitask approach improves on the classic feature extraction algorithms when using the NN classifier, especially when only a few examples per class are available.
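A loose sketch of the general idea, boosted selection of 1-D linear projections for a subsequent NN classifier, is given below for the binary case; it illustrates only the AdaBoost-style reweighting loop and is not the authors' exact algorithm.

    import numpy as np

    def boosted_projections(X, y, n_dims=10, n_candidates=200, rng=None):
        """Greedily pick projection directions, reweighting samples as in
        discrete AdaBoost so later directions focus on hard examples."""
        rng = rng or np.random.default_rng(0)
        n, d = X.shape
        w = np.ones(n) / n                      # sample weights
        basis = []
        for _ in range(n_dims):
            best, best_err = None, np.inf
            for _ in range(n_candidates):       # random candidate directions
                v = rng.normal(size=d)
                v /= np.linalg.norm(v)
                z = X @ v
                pred = z > np.median(z)         # weak threshold classifier
                err = w @ (pred != (y == y.max()))
                if err < best_err:
                    best, best_err = (v, np.median(z)), err
            v, thr = best
            basis.append(v)
            # downweight correctly classified samples, as in discrete AdaBoost
            beta = max(best_err, 1e-9) / max(1 - best_err, 1e-9)
            miss = (X @ v > thr) != (y == y.max())
            w *= np.where(miss, 1.0, beta)
            w /= w.sum()
        return np.stack(basis)                  # rows = projection directions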
Abstract:
"This is a project divided into two independent but complementary parts, carried out by different authors. This document originally includes other material and/or software that can only be consulted at the Biblioteca de Ciència i Tecnologia."
Abstract:
Report for the scientific sojourn at the Swiss Federal Institute of Technology Zurich, Switzerland, between September and December 2007. In order to make robots useful assistants in our everyday life, the ability to learn and recognize objects is of essential importance. However, object recognition in real scenes is one of the most challenging problems in computer vision, as many difficulties must be dealt with. Furthermore, mobile robotics adds a new challenge to the list: computational complexity. In a dynamic world, information about the objects in the scene can become obsolete before it is ready to be used if the detection algorithm is not fast enough. Two recent object recognition techniques have achieved notable results: the constellation approach proposed by Lowe and the bag-of-words approach proposed by Nistér and Stewénius. The Lowe constellation approach is the one currently used in the robot localization task of the COGNIRON project. This report is divided into two main sections. The first briefly reviews the object recognition system currently in use, the Lowe approach, brings to light the drawbacks found for object recognition in the context of indoor mobile robot navigation, and describes the proposed improvements to the algorithm. The second section reviews the alternative bag-of-words method, along with several experiments conducted to evaluate its performance on our own object databases. Furthermore, some modifications to the original algorithm to make it suitable for object detection in unsegmented images are proposed.
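A minimal bag-of-visual-words pipeline, in the spirit of the Nistér and Stewénius approach but without their vocabulary tree, could look as follows; it assumes OpenCV with SIFT support and scikit-learn, neither of which is prescribed by the report.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(image_paths, k=100):
        """Cluster local SIFT descriptors into k visual words."""
        sift = cv2.SIFT_create()
        descs = []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, d = sift.detectAndCompute(img, None)
            if d is not None:
                descs.append(d)
        return KMeans(n_clusters=k, n_init=10).fit(np.vstack(descs))

    def bow_histogram(image_path, vocab):
        """Represent an image as a normalized histogram of visual words."""
        sift = cv2.SIFT_create()
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        _, d = sift.detectAndCompute(img, None)
        if d is None:
            return np.zeros(vocab.n_clusters)
        words = vocab.predict(d)
        hist, _ = np.histogram(words, bins=vocab.n_clusters,
                               range=(0, vocab.n_clusters))
        return hist / max(hist.sum(), 1)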
Abstract:
Report for the scientific sojourn at Stanford University from January to June 2007. Music is well known for affecting human emotional states, yet the relationship between specific musical parameters and emotional responses is still not clear. With the advent of new human-computer interaction (HCI) technologies, it is now possible to derive emotion-related information from physiological data and use it as an input to interactive music systems. Providing such implicit musical HCI will be highly relevant for a number of applications including music therapy, diagnosis, interactive gaming, and physiologically based musical instruments. A key question in such physiology-based compositions is how sound synthesis parameters can be mapped to emotional states of valence and arousal. We used both verbal and heart-rate responses to evaluate the affective power of five musical parameters. Our results show that a significant correlation exists between heart rate and the subjective evaluation of well-defined musical parameters. Brightness and loudness proved to be arousing parameters on the subjective scale, while harmonicity and the even-partial attenuation factor produced heart-rate changes typically associated with valence. This demonstrates that a rational approach to designing emotion-driven music systems for our public installations and music therapy applications is possible.
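The core of the reported analysis, correlating heart-rate responses with levels of a musical parameter, can be sketched in a few lines; the data values below are hypothetical placeholders, not the study's measurements.

    import numpy as np
    from scipy.stats import pearsonr

    def parameter_correlation(param_levels, heart_rate_changes):
        """param_levels: tested values of e.g. brightness or loudness;
        heart_rate_changes: mean heart-rate change per stimulus, same order."""
        r, p = pearsonr(param_levels, heart_rate_changes)
        return r, p

    # Hypothetical example: five loudness levels vs. mean HR change (bpm)
    r, p = parameter_correlation([0.2, 0.4, 0.6, 0.8, 1.0],
                                 [0.1, 0.8, 1.4, 2.0, 2.7])
    print(f"r = {r:.2f}, p = {p:.3f}")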
Abstract:
This work presents a methodology for detecting and tracking facial features. The first step of the procedure detects faces using Adaboost with cascades of weak classifiers. The second step searches for the internal features of the face using the CSR, detecting regions of interest. Once these features are captured, a tracking process based on the SIFT descriptor, which we have called pseudo-SIFT, is able to store information about the evolution of motion in the detected regions. In addition, a public dataset has been developed for sharing with other research on detection, classification and tracking. Real-world experiments show the robustness of this work and its adaptability to future work.
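A minimal sketch of such a detect-then-describe pipeline is shown below using OpenCV; the Haar cascade and plain SIFT are stand-ins for the paper's cascade detector, CSR feature search and pseudo-SIFT tracker.

    import cv2

    # AdaBoost-trained cascade of weak classifiers for frontal faces
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    sift = cv2.SIFT_create()

    def detect_and_describe(frame_gray):
        """Detect faces, then compute SIFT keypoints inside each face region;
        matching the descriptors across frames would give the motion tracks."""
        faces = cascade.detectMultiScale(frame_gray, 1.1, 5)
        tracks = []
        for (x, y, w, h) in faces:
            roi = frame_gray[y:y + h, x:x + w]
            kps, descs = sift.detectAndCompute(roi, None)
            tracks.append(((x, y, w, h), kps, descs))
        return tracks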
Abstract:
Research produced during a stay at the University of Sydney, Australia, between October 2008 and January 2009, developing the project entitled "Role of interleukin 6 (IL6) in the regulation of Osteopontin (OPN) and CD44 expression after facial nerve axotomy". After transection of the facial nerve, glial reactivity was induced in the facial nucleus (FN) located in the brainstem, using transgenic mice that overexpress IL6 under the GFAP promoter (tg GFAP-IL6), i.e. selectively in astrocytes. Histochemical and immunohistochemical techniques were used, and the study was completed with RPA analysis, western blotting and flow cytometry for the identification of cell populations. The results obtained indicate that OPN is constitutively expressed in FN neurons. After facial nerve axotomy, the expression of OPN and CD44 increases in WT mice, whereas in tg GFAP-IL6 mice it decreases significantly, suggesting that IL6 could be involved in modulating the expression of both molecules. However, no differences were seen in other OPN receptors such as integrin alpha-5. Flow cytometry corroborated some of the histological results on microglial reactivity and allowed the conclusion that the proportion of activated microglia (CD11b+/CD45+mid) and macrophages (CD11b+/CD45+high) expressing CD44 increases in tg GFAP-IL6 versus WT mice, where most of the activated microglia showed a CD11b+/CD45+low profile.
Abstract:
An application that trains points of interest on faces from a collection of images; the result can then be verified. Given an image, the percentage of correct detections is computed.
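The verification step could be as simple as the following hypothetical sketch, which reports the percentage of predicted landmarks falling within a pixel tolerance of the ground truth.

    import numpy as np

    def hit_percentage(predicted, ground_truth, tolerance=5.0):
        """predicted, ground_truth: (n_points, 2) pixel coordinates."""
        dists = np.linalg.norm(predicted - ground_truth, axis=1)
        return 100.0 * np.mean(dists <= tolerance)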
Abstract:
The automatic interpretation of conventional traffic signs is very complex and time-consuming. This paper concerns an automatic warning system for driving assistance. It does not interpret the standard traffic signs on the roadside; the proposal is to add to the existing signs another type of traffic sign whose information can be more easily interpreted by a processor. The information to be added is profuse, and therefore the most important objective is the robustness of the system. The basic proposal of this new philosophy is that the co-pilot system for automatic warning and driving assistance can interpret the information contained in the new sign with greater ease, while the human driver only has to interpret the "classic" sign. One of the codings tested with good results, and which seems to us easy to implement, has a rectangular shape with 4 vertical bars of different colours. The size of these signs is equivalent to that of conventional signs (approximately 0.4 m2). The colour information from the sign can be easily interpreted by the proposed processor, and the interpretation is much easier and quicker than for the information shown by the pictographs of the classic signs.
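To make the decoding idea concrete, here is an illustrative Python/OpenCV decoder for a cropped image of such a sign; the four-colour set and the nearest-hue rule are assumptions for the sketch, not the paper's actual coding.

    import cv2
    import numpy as np

    # Assumed colour alphabet in OpenCV hue units (hue = degrees / 2);
    # red's wrap-around at 180 is ignored for simplicity in this sketch.
    COLOURS = {"red": 0, "yellow": 30, "green": 60, "blue": 120}

    def decode_sign(bgr_sign):
        """Split a cropped sign image into 4 vertical bars and label each
        bar with the nearest colour in the assumed alphabet."""
        hsv = cv2.cvtColor(bgr_sign, cv2.COLOR_BGR2HSV)
        bars = np.array_split(hsv, 4, axis=1)
        code = []
        for bar in bars:
            hue = np.median(bar[:, :, 0])
            name = min(COLOURS, key=lambda c: abs(COLOURS[c] - hue))
            code.append(name)
        return code   # e.g. ["red", "green", "blue", "yellow"]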