990 results for Emotions expression
Abstract:
Report on the Supervised Professional Practice, Master's Degree in Pre-School Education
Abstract:
The aim of this thesis was to validate the use of infrared thermography (IRT) to non-invasively measure emotional reactions to different situations in pet dogs (Canis familiaris). A preliminary test was performed to evaluate the correlation between eye temperature and rectal temperature in dogs. Then, in three different situations, negative (veterinary visit), positive (palatable food rewards), and mildly stressful followed by mildly positive (separation from and reunion with the owner), variations in the heat emitted from the lacrimal caruncle (referred to as eye temperature) were measured with an infrared thermographic camera. In addition, heart rate (HR) and heart rate variability (HRV) parameters were collected using a non-invasive heart rate monitor designed for human use and validated on dogs. All experiments were video recorded to allow behavioral coding. During the negative situation, dogs' level of activity and stress-related behaviors varied compared to the baseline, and dogs showed an increase in eye temperature despite a significant decrease in activity. The positive situation was characterized by a peak in eye temperature and mean HR, and dogs engaged in behaviors indicating positive arousal, focusing on the food treats and wagging their tails; however, HRV did not vary during stimulation, showing only an increase in SDNN immediately after the stimulus. In the separation from and reunion with the owner, dogs' eye temperature and mean HR varied in neither the stressful nor the positive situation; RMSSD increased after the positive episode, while SDNN dropped during the two stimulations and increased afterwards. During the separation from the owner, dogs' attention was mainly directed to the door or to the experimenter, while during the reunion dogs focused mainly on the owner and on the environment, exhibiting a secure base effect. A different approach was used to assess the welfare of shelter dogs.
Dogs were implanted with a telemeter and, after implantation, were housed in sequence in four different situations, each lasting one week: alone, alone with toys and a stretch cot for sleeping, with an unknown spayed female, and alone with a daily two-hour interaction with an experimenter. Two different behavioral coding approaches were tried: partially random extraction of fragments from every week, and continuous coding of behaviors from 8 a.m. to 4 p.m. during the baseline and the female situation. Results showed different reactions by the dogs to the different situations and, interestingly, not all enrichments were enjoyed by the dogs or improved their welfare. Overall, results suggest that IRT may represent a useful tool to investigate emotional reactions in dogs. Nevertheless, further research is needed to establish the specificity and sensitivity of IRT in this context and to assess how dogs' characteristics, such as breed and previous experience, and the valence and arousal elicited by the stimulus could influence the magnitude and type of the response. The role of HRV in understanding emotional valence, and that of telemeters in understanding long-term effects on sheltered dogs' welfare, is also discussed.
Abstract:
"Authorized edition."
Abstract:
Spontaneous facial expressions differ from posed ones in appearance, timing, and accompanying head movements. Still images cannot provide timing or head-movement information directly. However, the distances between key points on a face, extracted from a still image using active shape models, can indirectly capture some movement and pose changes. This information is superposed on information about non-rigid facial movement that is also part of the expression. Does geometric information improve the discrimination between spontaneous and posed facial expressions arising from discrete emotions? We investigate the performance of a machine vision system for discriminating between posed and spontaneous versions of six basic emotions that uses SIFT appearance-based features and FAP geometric features. Experimental results on the NVIE database demonstrate that fusion of geometric information leads only to marginal improvement over appearance features alone. Using fusion features, surprise is the easiest emotion to distinguish (83.4% accuracy), while disgust is the most difficult (76.1%). Our results identify different important facial regions for discriminating the posed versus spontaneous versions of an emotion than for classifying that emotion against other emotions. The distribution of the selected SIFT features shows that the mouth is more important for sadness, while the nose is more important for surprise; both the nose and mouth are important for disgust, fear, and happiness, and the eyebrows, eyes, nose, and mouth are all important for anger.
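As a rough illustration (not the authors' implementation), feature-level fusion of appearance and geometric descriptors can be sketched as per-modality z-score normalization followed by concatenation. Random vectors stand in for the SIFT and FAP features, and a nearest-centroid classifier is a placeholder for the actual learning method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for appearance (e.g. SIFT) and geometric (e.g. FAP distance) features
n = 40
appearance = rng.normal(size=(n, 16))
geometry = rng.normal(size=(n, 6))
labels = np.array([0, 1] * (n // 2))      # 0 = posed, 1 = spontaneous
appearance[labels == 1] += 1.5            # make the two classes separable

def fuse(app, geo):
    """Feature-level fusion: z-score each modality, then concatenate."""
    z = lambda x: (x - x.mean(0)) / (x.std(0) + 1e-8)
    return np.hstack([z(app), z(geo)])

X = fuse(appearance, geometry)            # fused vectors, shape (40, 22)

# Nearest-centroid classification on the fused vectors
centroids = np.stack([X[labels == k].mean(0) for k in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == labels).mean()
```

The z-scoring step matters because appearance and geometric features typically live on very different scales; without it, one modality can dominate the fused distance.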
Abstract:
Facial expression recognition (FER) algorithms mainly focus on classification into a small discrete set of emotions or on representation of emotions using facial action units (AUs). Dimensional representation of emotions as continuous values in an arousal-valence space is comparatively less investigated. It is not fully known whether fusion of geometric and texture features results in a better dimensional representation of spontaneous emotions. Moreover, the performance of many previously proposed approaches to dimensional representation has not been evaluated thoroughly on publicly available databases. To address these limitations, this paper presents an evaluation framework for dimensional representation of spontaneous facial expressions using texture and geometric features. SIFT, Gabor, and LBP features are extracted around facial fiducial points and fused with FAP distance features. The CFS algorithm is adopted for discriminative texture feature selection. Experimental results on the publicly accessible NVIE database demonstrate that fusion of texture and geometry does not lead to much better performance than texture alone, but does yield a significant improvement over geometry alone. LBP features perform best when fused with geometric features. Distributions of arousal and valence for different emotions obtained via the feature extraction process are compared with those obtained from subjective ground-truth values assigned by viewers. Predicted valence is found to have a distribution more similar to the ground truth than arousal does, in terms of covariance or Bhattacharyya distance, but it shows a greater distance between the means.
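The Bhattacharyya distance used to compare predicted and ground-truth distributions has a closed form for multivariate Gaussians. A minimal sketch (assuming Gaussian summaries with means and covariances, which the abstract does not specify) is:

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians:
    D_B = (1/8)(mu1-mu2)^T S^-1 (mu1-mu2) + (1/2) ln(det S / sqrt(det S1 det S2)),
    where S = (S1 + S2) / 2."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    cov = (cov1 + cov2) / 2.0
    diff = mu1 - mu2
    term_mean = diff @ np.linalg.solve(cov, diff) / 8.0
    term_cov = 0.5 * np.log(np.linalg.det(cov) /
                            np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov

# Identical distributions give zero distance
d0 = bhattacharyya_distance([0, 0], np.eye(2), [0, 0], np.eye(2))
# Shifting one mean increases the distance
d1 = bhattacharyya_distance([0, 0], np.eye(2), [1, 0], np.eye(2))
```

The first term captures separation of the means (as in the abstract's observation about mean distance), while the second captures mismatch between the covariances.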
Abstract:
Facial expression is an important channel of human social communication. Facial expression recognition (FER) aims to perceive and understand the emotional states of humans based on information in the face. Building robust, high-performance FER systems that can work on real-world video is still a challenging task, due to various unpredictable facial variations and complicated exterior environmental conditions, as well as the difficulty of choosing a suitable type of feature descriptor for extracting discriminative facial information. Facial variations caused by factors such as pose, age, gender, race, and occlusion can exert a profound influence on robustness, while a suitable feature descriptor largely determines performance. Most attention in FER has so far been paid to addressing variations in pose and illumination. No approach has been reported for handling face localization errors, and relatively few address facial occlusions, although the significant impact of these two variations on performance has been demonstrated in many previous studies. Many texture and geometric features have previously been proposed for FER. However, few comparison studies have been conducted to explore the performance differences between features and to examine the performance improvement arising from fusion of texture and geometry, especially on data with spontaneous emotions. The majority of existing approaches are evaluated on databases with posed or induced facial expressions collected in laboratory environments, whereas little attention has been paid to recognizing naturalistic facial expressions in real-world data. This thesis investigates techniques for building robust, high-performance FER systems based on a number of established feature sets. It comprises contributions towards three main objectives: (1) Robustness to face localization errors and facial occlusions.
An approach is proposed to handle face localization errors and facial occlusions using Gabor-based templates. Template extraction algorithms are designed to collect a pool of local template features, and template matching is then performed to convert these templates into distances, which are robust to localization errors and occlusions. (2) Improvement of performance through feature comparison, selection, and fusion. A comparative framework is presented to compare the performance of different features and feature selection algorithms, and to examine the performance improvement arising from fusion of texture and geometry. The framework is evaluated for both discrete and dimensional expression recognition on spontaneous data. (3) Evaluation of performance in the context of real-world applications. A system is selected and applied to discriminating posed versus spontaneous expressions and to recognizing naturalistic facial expressions. A database is collected from real-world recordings and used to explore feature differences between standard database images and real-world images, as well as between real-world images and real-world video frames. The performance evaluations are based on the JAFFE, CK, Feedtum, NVIE, Semaine, and self-collected QUT databases. The results demonstrate high robustness of the proposed approach to the simulated localization errors and occlusions. Texture and geometry contribute differently to the performance of discrete and dimensional expression recognition, as well as to posed versus spontaneous emotion discrimination. These investigations provide useful insights into enhancing the robustness and performance of FER systems and putting them into real-world applications.
Abstract:
This pilot study aimed to compare the effect of companion robots (PARO) to participation in an interactive reading group on emotions in people living with moderate to severe dementia in a residential care setting. A randomized crossover design, with PARO and reading control groups, was used. Eighteen residents with mid- to late-stage dementia from one aged care facility in Queensland, Australia, were recruited. Participants were assessed three times using the Quality of Life in Alzheimer’s Disease, Rating Anxiety in Dementia, Apathy Evaluation, Geriatric Depression, and Revised Algase Wandering Scales. PARO had a moderate to large positive influence on participants’ quality of life compared to the reading group. The PARO intervention group had higher pleasure scores when compared to the reading group. Findings suggest PARO may be useful as a treatment option for people with dementia; however, the need for a larger trial was identified.
Abstract:
This research paper explores the impact product personalisation has upon product attachment and aims to develop a deeper understanding of why, how, and whether consumers choose to personalise their products. Current research in this field is mainly based on attachment theories and is predominantly product specific. This paper investigates the link between product attachment and personalisation through in-depth, semi-structured interviews, where the data has been thematically analysed and broken down into three themes and nine sub-themes. It was found that participants did become more attached to products once they were personalised, and the reasons this occurred varied. The most common reasons that led to personalisation were functionality and usability, the expression of personality through a product, and the complexity of personalisation. The reasons participants felt connected to their products included strong emotions/memories, the amount of time and effort invested in the personalisation, and a sense of achievement. Reasons behind the desire for personalisation included co-designing, expression of uniqueness/individualism, and having a choice in personalisation. Through theme and inter-theme relationships, many correlations were formed, which created the basis for design recommendations. These recommendations demonstrate how a designer could incorporate the emotions and reasoning behind personalisation into the design process.
Abstract:
Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, weddings, etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV, movies, or postings on the World Wide Web may also contain 'acted' emotions and facial expressions, it may be more 'realistic' than the lab-based data currently used by most researchers. Or is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database collected from television broadcasts and the World Wide Web, containing a range of environmental and facial variations expected in real conditions, and uses it to answer this question. A fully automatic system that uses a fusion-based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
Abstract:
Facial expression recognition (FER) has developed dramatically in recent years, thanks to advancements in related fields, especially machine learning, image processing, and human recognition. Accordingly, the impact and potential usage of automatic FER have been growing in a wide range of applications, including human-computer interaction, robot control, and driver state surveillance. However, to date, robust recognition of facial expressions from images and videos is still a challenging task, due to the difficulty of accurately extracting useful emotional features. These features are often represented in different forms, such as static, dynamic, point-based geometric, or region-based appearance. Facial movement features, which include feature position and shape changes, are generally caused by the movements of facial elements and muscles during the course of emotional expression. The facial elements, especially key elements, constantly change their positions when subjects are expressing emotions. As a consequence, the same feature in different images usually has different positions. In some cases, the shape of the feature may also be distorted by subtle facial muscle movements. Therefore, for any feature representing a certain emotion, the geometric position and appearance-based shape normally change from one image to another in image databases, as well as in videos. These movement features represent a rich pool of both static and dynamic characteristics of expressions, which play a critical role in FER. The vast majority of past work on FER does not take the dynamics of facial expressions into account. Some efforts have been made to capture and utilize facial movement features, and almost all of them are static-based.
These efforts try to adopt either geometric features of tracked facial points, appearance differences between holistic facial regions in consecutive frames, or texture and motion changes in local facial regions. Although these approaches have achieved promising results, they often require accurate location and tracking of facial points, which remains problematic.
Abstract:
Representation of facial expressions using continuous dimensions has been shown to be inherently more expressive and psychologically meaningful than using categorized emotions, and has thus gained increasing attention over recent years. Many sub-problems have arisen in this new field that remain only partially understood. Two of these are a comparison of the regression performance of different texture and geometric features, and an investigation of the correlations between continuous dimensional axes and the basic categorized emotions. This paper presents empirical studies addressing these problems and reports results from an evaluation of different methods for detecting spontaneous facial expressions within the arousal-valence (AV) dimensional space. The evaluation compares the performance of texture features (SIFT, Gabor, LBP) against geometric features (FAP-based distances), and the fusion of the two. It also compares the predictions of arousal and valence, obtained using the best fusion method, to the corresponding ground truths. Spatial distribution, shift, similarity, and correlation are considered for the six basic categorized emotions (i.e. anger, disgust, fear, happiness, sadness, surprise). Using the NVIE database, results show that the fusion of LBP and FAP features performs best. The results from the NVIE and FEEDTUM databases reveal novel findings about the correlations of the arousal and valence dimensions to each of the six basic emotion categories.
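For illustration, the LBP texture descriptor mentioned above can be sketched as an 8-neighbour binary code per pixel followed by a normalised histogram. This basic single-radius, non-uniform variant is an assumption; the abstract does not specify the LBP configuration used:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array."""
    g = np.asarray(gray, float)
    c = g[1:-1, 1:-1]                      # centre pixels (interior only)
    # neighbours ordered clockwise from top-left; each contributes one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    return code

def lbp_histogram(gray, bins=256):
    """Normalised LBP histogram, usable as a texture feature vector."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return h / h.sum()
```

In a fiducial-point setting such histograms would be computed over small patches around each point and concatenated, rather than over the whole face.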
Abstract:
This study is concerned with men's talk about emotions and with how emotion discourses function in the construction and negotiation of masculine ways of doing emotions and of consonant masculine subject positions. A sample group of 16 men, who were recruited from two social contexts in England, participated in focus groups on 'men and emotions'. Group discussions were transcribed and analysed using discourse analysis. Participants drew upon a range of discursive resources in constructing masculine emotional behaviour and negotiating masculine subject positions. They constructed men as emotional beings, but only within specific, rule-governed contexts, and cited death, a football match and a nightclub scenario as prototypical contexts for the permissible/understandable expression of grief, joy and anger, respectively. However, in the nightclub scenario, the men distanced themselves from the expression of anger as violence, whilst maintaining a masculine subject position. These discursive practices are discussed in terms of the possibilities for effecting change in men's emotional lives.
Abstract:
Qualitative research in the area of eating disorders (EDs) has predominantly focused on females, whilst the experiences of males remain poorly understood. Due to the secretive nature of eating problems/EDs, it can be difficult to explore the experiences of males with these problems; however, online support groups/message boards, which are common and popular, provide a non-invasive forum for researchers to conduct research. This study analyzed naturally occurring discussions on an internet message board dedicated to males and eating problems using content analysis. Two major overarching themes of emotional expression (sharing feelings of disturbed eating attitudes and emotions; being secretive) and support (informational and emotional) were identified. The message board provided a vital support system for this group, suggesting that online message boards may be an important avenue for health professionals to provide information, support, and advice.
Abstract:
Age-related changes in the facial expression of pain during the first 18 months of life have important implications for our understanding of pain and pain assessment. We examined facial reactions video recorded during routine immunization injections in 75 infants stratified into 2-, 4-, 6-, 12-, and 18-month age groups. Two facial coding systems differing in the amount of detail extracted were applied to the records. In addition, parents completed a brief questionnaire that assessed child temperament and provided background information. Parents' efforts to soothe the children also were described. While there were consistencies in facial displays over the age groups, there also were differences on both measures of facial activity, indicating systematic variation in the nature and severity of distress. The least pain was expressed by the 4-month age group. Temperament was not related to the degree of pain expressed. Systematic variations in parental soothing behaviour indicated accommodation to the age of the child. Reasons for the differing patterns of facial activity are examined, with attention paid to the development of inhibitory mechanisms and the role of negative emotions such as anger and anxiety.