853 results for Facial perception
Abstract:
Relationship Between Facial Prominence in Orkut Photos and the Users' Profile
This research aimed to verify the relationship between the facial prominence index (FPI) and the profiles of 600 Orkut users. The results demonstrated that in all evaluated segments the average male facial prominence was higher than the female; that women's prominence increased with age while men's decreased; and that university-educated women presented higher prominence than women with only a high school education. Significant differences were also found in the mood categories when comparing the FPI between the genders and between persons of the same gender. In summary, the results indicate that the determinant factors for the facial prominence presented on Orkut are probably related to both biological and cultural aspects.
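The abstract does not state how the FPI is computed; as a point of reference, facial prominence (face-ism) indices in this literature are commonly defined as the ratio below. This is an assumption about the measure used here, not a detail taken from the study:

```latex
% Common definition of a facial prominence (face-ism) index; assumed,
% since the abstract does not give the formula used in the study.
\[
  \mathrm{FPI} = \frac{\text{distance from top of head to chin}}
                      {\text{distance from top of head to lowest visible point of the body}}
\]
% Values near 1 mean the photo is dominated by the face;
% values near 0 mean the face occupies little of the depicted body.
```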
Abstract:
Among the nonmotor symptoms observed in Parkinson's disease (PD), dysfunction in the visual system, including hallucinations, has a significant impact on patients' quality of life. To further explore the visual system in PD we designed two fMRI experiments comparing 18 healthy volunteers with 16 PD patients without visual complaints in two visual fMRI paradigms: a flickering checkerboard task and a facial perception paradigm. PD patients displayed decreased activity in the primary visual cortex (Brodmann area 17) bilaterally as compared to healthy volunteers during the flickering checkerboard task, and increased activity in the fusiform gyrus (Brodmann area 37) during the facial perception paradigm. Our findings confirm the notion that PD patients show significant changes in the visual cortex system even before visual symptoms are clinically evident. Further studies are necessary to evaluate the contribution of these abnormalities to the development of visual symptoms in PD. (C) 2010 Movement Disorder Society
Abstract:
The divided visual field technique was used to investigate the pattern of brain asymmetry in the perception of positive/approach and negative/withdrawal facial expressions. A total of 80 undergraduate students (65 female, 15 male) were distributed across five experimental groups in order to investigate separately the perception of expressions of happiness, surprise, fear, sadness, and the neutral face. In each trial a target and a distractor expression were presented simultaneously on a computer screen for 150 ms and participants had to determine the side (left or right) on which the target expression was presented. Results indicated that expressions of happiness and fear were identified faster when presented in the left visual field, suggesting an advantage of the right hemisphere in the perception of these expressions. Fewer judgement errors and faster reaction times were also observed for the matching condition in which emotional faces were presented in the left visual field and neutral faces in the right visual field. Other results indicated that positive expressions (happiness and surprise) were perceived faster and more accurately than negative ones (sadness and fear). The main results tend to support the right hemisphere hypothesis, which predicts better performance of the right hemisphere in perceiving emotions, as opposed to the approach-withdrawal hypothesis.
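As an illustration of the kind of analysis such a paradigm implies (not the authors' actual pipeline), a minimal sketch comparing reaction times for left- versus right-visual-field presentations with a paired t-test; the file and column names are hypothetical:

```python
# Minimal sketch of a divided-visual-field RT analysis (illustrative only;
# file name and column layout are assumptions, not the authors' pipeline).
import pandas as pd
from scipy import stats

# Hypothetical trial-level data: one row per trial, with the visual field
# of the target ("left"/"right") and the reaction time in ms.
trials = pd.read_csv("dvf_trials.csv")  # columns: subject, field, rt_ms

# Mean RT per subject and visual field, then a paired t-test across subjects.
per_subject = trials.groupby(["subject", "field"])["rt_ms"].mean().unstack()
t, p = stats.ttest_rel(per_subject["left"], per_subject["right"])
print(f"left-VF mean = {per_subject['left'].mean():.1f} ms, "
      f"right-VF mean = {per_subject['right'].mean():.1f} ms, "
      f"t = {t:.2f}, p = {p:.4f}")
```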
Abstract:
A large variety of social signals, such as facial expression and body language, are conveyed in everyday interactions, and an accurate perception and interpretation of these social cues is necessary in order for reciprocal social interactions to take place successfully and efficiently. The present study was conducted to determine whether impairments in social functioning that are commonly observed following a closed head injury could be at least partially attributable to disruption in the ability to appreciate social cues. More specifically, an attempt was made to determine whether face processing deficits following a closed head injury (CHI) coincide with changes in electrophysiological responsivity to the presentation of facial stimuli. A number of event-related potentials (ERPs) that have been linked specifically to various aspects of visual processing were examined. These included the N170, an index of structural encoding ability; the N400, an index of the ability to detect differences in serially presented stimuli; and the Late Positivity (LP), an index of sensitivity to affective content in visually presented stimuli. Electrophysiological responses were recorded while participants with and without a closed head injury were presented with pairs of faces delivered in a rapid sequence and asked to compare them on the basis of whether they matched with respect to identity or emotion. Other behavioural measures of identity and emotion recognition were also employed, along with a small battery of standard neuropsychological tests used to determine general levels of cognitive impairment. Participants in the CHI group were impaired in a number of cognitive domains that are commonly affected following a brain injury. These impairments included reduced efficiency in various aspects of encoding verbal information into memory, a generally slower rate of information processing, decreased sensitivity to smell, and greater difficulty in the regulation of emotion together with a limited awareness of this impairment. Impairments in face and emotion processing were clearly evident in the CHI group. However, despite these impairments in face processing, there were no significant differences between groups in the electrophysiological components examined. The only exception was a trend indicating delayed N170 peak latencies in the CHI group (p = .09), which may reflect inefficient structural encoding processes. In addition, group differences were noted in the region of the N100, thought to reflect very early selective attention. It is possible, then, that facial expression and identity processing deficits following CHI are secondary to (or exacerbated by) an underlying disruption of very early attentional processes. Alternatively, the difficulty may arise in the later cognitive stages involved in the interpretation of the relevant visual information. However, the present data do not allow these alternatives to be distinguished. Nonetheless, it was clearly evident that individuals with CHI are more likely than controls to make face processing errors, particularly for the more difficult-to-discriminate negative emotions. Those working with individuals who have sustained a head injury should be alerted to this potential source of the social monitoring difficulties that are often observed as part of the sequelae of a CHI.
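For readers unfamiliar with how a component latency such as the N170 is measured, here is a minimal sketch. The data layout (epoched single-channel EEG as a NumPy array, 500 Hz sampling, stimulus onset at sample 0) is an illustrative assumption, not the study's recording setup:

```python
# Illustrative sketch of extracting an N170 peak latency from epoched EEG
# (assumed data layout; not the authors' recording or analysis setup).
import numpy as np

fs = 500.0                      # sampling rate in Hz (assumption)
epochs = np.load("epochs.npy")  # shape: (n_trials, n_samples), one channel
erp = epochs.mean(axis=0)       # average across trials -> the ERP waveform

# The N170 is a negative deflection roughly 130-200 ms post-stimulus:
# search for the most negative sample in that window.
lo, hi = int(0.130 * fs), int(0.200 * fs)
peak_idx = lo + np.argmin(erp[lo:hi])
print(f"N170 peak latency: {1000 * peak_idx / fs:.0f} ms, "
      f"amplitude: {erp[peak_idx]:.2f} uV")
```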
Abstract:
The present set of experiments was designed to investigate the development of children's sensitivity to facial expressions observed within emotional contexts. Past research investigating both adults' and children's perception of facial expressions has been limited primarily to the presentation of isolated faces. During daily social interactions, however, facial expressions are encountered within contexts conveying emotions (e.g., background scenes, body postures, gestures). Recently, research has shown that adults' perception of facial expressions is influenced by these contexts. When emotional faces are shown in incongruent contexts (e.g., when an angry face is presented in a context depicting fear), adults' accuracy decreases and their reaction times increase (e.g., Meeren et al., 2005). To examine the influence of emotional body postures on children's perception of facial expressions, in each of the experiments in the current study adults and 8-year-old children made two-alternative forced-choice decisions about facial expressions presented in congruent (e.g., a face displaying sadness on a body displaying sadness) and incongruent (e.g., a face displaying fear on a body displaying sadness) contexts. Consistent with previous studies, a congruency effect (better performance on congruent than incongruent trials) was found for both adults and 8-year-olds when the emotions displayed by the face and body were similar to each other (e.g., fear and sadness; Experiment 1a); the influence of context was greater for 8-year-olds than for adults for these similar expressions. To further investigate why the congruency effect was larger for children than adults in Experiment 1a, Experiment 1b examined whether increased task difficulty would increase the magnitude of adults' congruency effects. Adults were presented with subtle facial expressions and, despite the task successfully being made more difficult, the magnitude of the congruency effect did not increase, suggesting that the difference between children's and adults' congruency effects in Experiment 1a cannot be explained by 8-year-olds finding the task difficult. In contrast, congruency effects were not found when the expressions displayed by the face and body were dissimilar (e.g., sadness and happiness; see Experiment 2). The results of the current set of studies are examined with respect to the Dimensional theory, the Emotional Seed model, and the developmental timeline of children's sensitivity to facial expressions. A secondary aim of the series of studies was to examine one possible mechanism underlying congruency effects: holistic processing. To examine the influence of holistic processing, participants completed both aligned trials and misaligned trials in which the faces were detached from the body (designed to disrupt holistic processing). Based on the principles of holistic face processing, we predicted that participants would benefit from misalignment of the face and body stimuli on incongruent trials but not on congruent trials. Collectively, our results provide some evidence that both adults and children may process emotional faces and bodies holistically. Consistent with the pattern of results for congruency effects, the magnitude of the effect of misalignment varied with the similarity between emotions. Future research is required to further investigate whether or not facial expressions and emotions conveyed by the body are perceived holistically.
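As a concrete illustration of how a congruency effect of the kind described here is typically quantified (the file and column names are hypothetical, not the authors' code):

```python
# Illustrative computation of a congruency effect from 2AFC trial data
# (file and column names are hypothetical; not taken from the study itself).
import pandas as pd

trials = pd.read_csv("face_body_trials.csv")
# columns: participant, group ("adult"/"child"), congruent (bool), correct (bool)

# Per-participant accuracy for congruent vs. incongruent trials.
acc = (trials.groupby(["group", "participant", "congruent"])["correct"]
             .mean().unstack())

# Congruency effect = accuracy advantage for congruent over incongruent trials.
acc["congruency_effect"] = acc[True] - acc[False]
print(acc.groupby(level="group")["congruency_effect"].mean())
```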
Abstract:
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Sciences, 4, 223–233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus; the STS, identified as specifically sensitive to motion; and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus, and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediates the perception of different dynamic facial expressions.
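The abstract does not specify the effective-connectivity method used. As a much simpler illustration of what "coupling between regions" means in practice, a sketch correlating mean BOLD time series from two ROIs; the file names are invented, and this is not the authors' analysis:

```python
# Greatly simplified illustration of inter-regional coupling: correlating
# mean BOLD time series from two ROIs. The study used effective-connectivity
# measures; this sketch only conveys the basic idea. File names are invented.
import numpy as np

iog = np.load("roi_iog_timeseries.npy")  # inferior occipital gyrus, shape (n_vols,)
sts = np.load("roi_sts_timeseries.npy")  # superior temporal sulcus, shape (n_vols,)

r = np.corrcoef(iog, sts)[0, 1]
print(f"IOG-STS functional coupling (Pearson r): {r:.3f}")
```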
Abstract:
Opinion of Specialists and Laypersons About the Smile in Relation to the Vertical Positioning of the Maxilla
Harmony is one of the main objectives of surgical and orthodontic treatment, and this harmony must be present in the smile as well as in the face. The aim of the present study was to assess the perceptions of professionals and laypersons regarding the harmony of the smile of patients with or without vertical maxillary alterations. Sixty observers (oral and maxillofacial surgeons, orthodontists and laypersons) rated the degree of harmony of six smiles using an objective questionnaire and indicated whether or not corrective surgery was needed. Ratings were recorded on a Likert scale from 1 to 5, and mixed regression was used to determine differences between the three groups. A statistically significant difference in smile harmony was found only between the oral and maxillofacial surgeons and the laypersons, with laypersons being more critical when assessing the smile; there was no statistical difference between the other groups for smile harmony or for the indication of corrective surgery. The patterns of greater or lesser harmony identified by the observers were similar to those described in the literature as the ideal standard for vertical maxillary positioning. Overall, the present study demonstrates that adequate interaction between surgeons, orthodontists and laypersons is essential in order to achieve facial harmony with orthodontic and/or surgical treatment.
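A mixed regression on 1-5 Likert ratings of the kind described could look like the following sketch; the data layout and model specification are assumptions, not the study's actual analysis:

```python
# Illustrative mixed-effects model for Likert ratings of smile harmony
# (assumed data layout; the study's exact model specification is not given).
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("smile_ratings.csv")
# columns: observer, group ("surgeon"/"orthodontist"/"layperson"),
#          smile_id, harmony (1-5 Likert rating)

# Fixed effect of observer group, random intercept per observer to account
# for repeated ratings by the same person.
model = smf.mixedlm("harmony ~ C(group)", data=ratings,
                    groups=ratings["observer"])
result = model.fit()
print(result.summary())
```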
Abstract:
This article details the author's attempts to improve understanding of organisational behaviour through investigation of the cognitive and affective processes that underlie attitudes and behaviour. To this end, the paper describes the author's earlier work on the attribution theory of leadership and, more recently, in three areas of emotion research: affective events theory, emotional intelligence, and the effect of supervisors' facial expressions on employees' perceptions of leader-member exchange quality. The paper summarises the author's research on these topics, shows how it has contributed to furthering our understanding of organisational behaviour, suggests where research in these areas is going, and draws some conclusions for management practice.
Abstract:
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is a focus of attention for research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system, and multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to make interaction more natural and to improve system performance. Here, we experimentally test whether automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by a certain type of holistic description of the face appearance; and (c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
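To make the holistic-versus-structural distinction concrete, here is a hedged sketch of the two feature types feeding the same generic learner. The feature definitions, file names, and regressor choice are illustrative; the paper's actual descriptors and learning methods are not reproduced here:

```python
# Illustrative contrast between a holistic and a structural face representation
# for trait prediction. Features and data loading are assumptions; the paper's
# actual descriptors and learning methods are not reproduced here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

images = np.load("aligned_faces.npy")        # shape (n_faces, h, w), grayscale
landmarks = np.load("landmarks.npy")         # shape (n_faces, n_points, 2)
dominance = np.load("dominance_scores.npy")  # mean observer ratings, (n_faces,)

# Holistic representation: raw pixel intensities of the aligned face.
X_holistic = images.reshape(len(images), -1)

# Structural representation: pairwise distances among facial salient points.
n = landmarks.shape[1]
i, j = np.triu_indices(n, k=1)
X_structural = np.linalg.norm(landmarks[:, i] - landmarks[:, j], axis=-1)

for name, X in [("holistic", X_holistic), ("structural", X_structural)]:
    score = cross_val_score(Ridge(alpha=1.0), X, dominance, cv=5).mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
```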
Abstract:
Emotion regulation is crucial for successfully engaging in social interactions. Yet, little is known about the neural mechanisms controlling behavioral responses to emotional expressions perceived in the faces of other people, which constitute a key element of interpersonal communication. Here, we investigated the brain systems involved in social emotion perception and regulation, using functional magnetic resonance imaging (fMRI) in 20 healthy participants. Participants viewed dynamic facial expressions of either happiness or sadness and were asked either to imitate the expression or to suppress any expression on their own face (in addition to a gender-judgment control task). fMRI results revealed higher activity in regions associated with emotion (e.g., the insula), motor function (e.g., motor cortex), and theory of mind (e.g., [pre]cuneus) during imitation. Activity in the dorsal cingulate cortex was also increased during imitation, possibly reflecting greater action monitoring or conflict with one's own feeling states. In addition, premotor regions were more strongly activated during both imitation and suppression, suggesting a recruitment of motor control for both the production and the inhibition of emotion expressions. Expressive suppression produced increases in dorsolateral and lateral prefrontal cortex activity typically related to cognitive control. These results suggest that voluntary imitation and expressive suppression modulate brain responses to emotional signals perceived from faces by up- and down-regulating activity in distributed subcortical and cortical networks that are particularly involved in emotion, action monitoring, and cognitive control.
Abstract:
Research on face recognition and social judgment usually addresses the manipulation of facial features (eyes, nose, mouth, etc.). Using a procedure based on a Stroop-like task, Montepare and Opeyo (J Nonverbal Behav 26(1):43-59, 2002) established a hierarchy of the relative salience of cues based on facial attributes when differentiating faces. Using the same perceptual interference task, we established a hierarchy of facial features. Twenty-three participants (13 men and 10 women) volunteered for an experiment in which they compared pairs of frontal faces. The participants had to judge whether the eyes, nose, mouth and chin in the pair of images were the same or different. The factors manipulated were the target-distractor combination (4 face components × 3 distractor factors), interference (absent vs. present) and correct answer (same vs. different). The analysis of reaction times and errors showed that the eyes and mouth were processed before the chin and nose, thus highlighting the critical importance of the eyes and mouth, as shown by previous research.
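As an illustration of how a salience hierarchy and interference cost could be read out of such data (the file and column names are hypothetical, not the authors' analysis code):

```python
# Illustrative aggregation for a Stroop-like feature-comparison task
# (file and column names are hypothetical; not the authors' analysis code).
import pandas as pd

trials = pd.read_csv("feature_comparison_trials.csv")
# columns: participant, feature ("eyes"/"mouth"/"nose"/"chin"),
#          interference (bool), correct (bool), rt_ms

# Mean RT on correct trials per feature: a salience hierarchy shows up as
# faster RTs for the features processed first (here, eyes and mouth).
correct = trials[trials["correct"]]
print(correct.groupby("feature")["rt_ms"].mean().sort_values())

# Interference cost per feature: RT(interference present) - RT(absent).
cost = correct.groupby(["feature", "interference"])["rt_ms"].mean().unstack()
print((cost[True] - cost[False]).sort_values())
```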