846 results for FACIAL EMOTIONS
Abstract:
221 p.
Abstract:
We have developed a novel human facial tracking system that operates in real time at video frame rate without requiring any special hardware. The approach is based on Lie algebra and uses three-dimensional feature points on the targeted human face. It is assumed that a rough estimate of the facial model (the relative coordinates of the three-dimensional feature points) is known. First, the initial feature positions on the face are determined using a model-fitting technique. Tracking then proceeds in the following sequence: (1) capture the new video frame and render the feature points to the image plane; (2) search for the new positions of the feature points on the image plane; (3) compute the Euclidean matrix from the motion vectors and the three-dimensional information for the points; and (4) rotate and translate the feature points using the Euclidean matrix, and render the new points on the image plane. The key algorithm of this tracker estimates the Euclidean matrix with a least-squares technique based on Lie algebra. The resulting tracker performed very well on the task of tracking a human face.
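The abstract's key step, estimating the rigid (Euclidean) motion of the 3D feature points by a least-squares fit in the Lie algebra, can be sketched as follows. This is a generic illustration, not the paper's actual implementation: the function name, the small-motion so(3)/se(3) linearization, and the single Gauss-Newton step are all assumptions on my part.

```python
import numpy as np

def estimate_rigid_update(points, targets):
    """One least-squares step for rigid motion, linearized in the Lie algebra.

    Model: target ~ point + omega x point + t, where omega (axis-angle
    increment) and t (translation) are the 6 unknowns. Using the identity
    omega x p = -[p]_x omega, each point contributes 3 linear equations.
    The estimated omega is mapped back to a rotation matrix via the
    exponential map (Rodrigues' formula).
    """
    n = len(points)
    A = np.zeros((3 * n, 6))
    b = (np.asarray(targets) - np.asarray(points)).reshape(-1)
    for i, p in enumerate(points):
        px = np.array([[0.0, -p[2], p[1]],
                       [p[2], 0.0, -p[0]],
                       [-p[1], p[0], 0.0]])
        A[3 * i:3 * i + 3, :3] = -px       # derivative w.r.t. omega
        A[3 * i:3 * i + 3, 3:] = np.eye(3)  # derivative w.r.t. t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    omega, t = x[:3], x[3:]
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = omega / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R, t
```

In a tracker like the one described, this update would be applied each frame to the 3D feature points, using the matched 2D positions back-projected with the known depth; iterating the step refines the estimate when the inter-frame motion is not small.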
Abstract:
In this paper, we propose a novel algorithm for removing facial makeup disturbances, as a face-detection preprocessing step, based on high-dimensional imaginal geometry. The algorithm is analyzed theoretically and evaluated through simulation and practical-application experiments. We discuss its clear effect in removing facial makeup and the advantages of face detection with this preprocessing step over face detection without it. Furthermore, in our experiments with color images, the proposed algorithm even produced some surprising results.
Abstract:
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and that the combination of image ratio features and FAPs outperforms either feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
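The core idea behind image ratio features, that dividing an expression frame by a neutral reference frame cancels per-pixel albedo, can be illustrated with a minimal sketch. The function name, the Lambertian image model I = albedo x shading, and the synthetic data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def image_ratio_feature(expression_img, neutral_img, eps=1e-6):
    """Pixel-wise ratio of an expression image to a neutral-face image.

    Under a Lambertian model I(x) = albedo(x) * shading(x), the albedo
    term cancels in the ratio, so the feature depends only on the shading
    change induced by skin deformation, not on skin color or texture.
    """
    return expression_img / (neutral_img + eps)

# Two synthetic faces with different albedo (skin color) but identical
# geometry and deformation: their ratio features come out the same.
shading_neutral = np.full((4, 4), 0.8)
shading_expr = np.linspace(0.5, 1.0, 16).reshape(4, 4)  # deformation alters shading
albedo_a = np.full((4, 4), 0.3)
albedo_b = np.full((4, 4), 0.9)

ratio_a = image_ratio_feature(albedo_a * shading_expr, albedo_a * shading_neutral)
ratio_b = image_ratio_feature(albedo_b * shading_expr, albedo_b * shading_neutral)
```

This albedo cancellation is what makes the ratio feature complementary to FAPs: the ratio captures appearance change independent of skin color, while FAPs capture the geometric motion of feature points.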
Abstract:
Purpose - The aim of this study was to investigate whether the presence of a whole-face context during facial composite production facilitates construction of facial composite images. Design/Methodology - In Experiment 1, constructors viewed a celebrity face and then developed a facial composite using PRO-fit in one of two conditions: either the full face was visible while facial features were selected, or only the feature currently being selected was visible. The composites were named by different participants. We then replicated the study using a more forensically valid procedure: in Experiment 2, non-football fans viewed an image of a premiership footballer and 24 hours later constructed a composite of the face with a trained software operator. The resulting composites were named by football fans. Findings - In both studies we found that the presence of the facial context promoted more identifiable facial composite images. Research limitations/implications - Though this study uses current software in an unconventional way, this was necessary to avoid error arising from between-system differences. Practical implications - Results confirm that composite software should keep the whole-face context visible to witnesses throughout construction. Though some software systems do this, others still present features in isolation, and these findings show that such systems are unlikely to be optimal. Originality/value - This is the first study to demonstrate the importance of a full-face context for the construction of facial composite images. Results are valuable to police forces and developers of composite software.
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery. If the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
Abstract:
People often do not realize they are being influenced by an incidental emotional state. As a result, decisions based on a fleeting incidental emotion can become the basis for future decisions and hence outlive the original cause for the behavior (i.e., the emotion itself). Using a sequence of ultimatum and dictator games, we provide empirical evidence for the enduring impact of transient emotions on economic decision making. Behavioral consistency and false consensus are presented as potential underlying processes. © 2009 Elsevier Inc. All rights reserved.
Abstract:
Confronting the rapidly increasing, worldwide reliance on biometric technologies to surveil, manage, and police human beings, my dissertation
Abstract:
Gemstone Team FACE
Abstract:
The intensity and valence of 30 emotion terms, 30 events typical of those emotions, and 30 autobiographical memories cued by those emotions were each rated by different groups of 40 undergraduates. A vector model gave a consistently better account of the data than a circumplex model, both overall and in the absence of high-intensity, neutral-valence stimuli. The Positive Activation - Negative Activation (PANA) model could be tested at high levels of activation, where it is identical to the vector model. The results replicated when ratings of arousal were used instead of ratings of intensity for the events and autobiographical memories. A reanalysis of word norms gave further support for the vector and PANA models by demonstrating that neutral-valence, high-arousal ratings resulted from the averaging of individual positive and negative valence ratings. Thus, compared with a circumplex model, the vector and PANA models provided overall better fits.