705 results for Recognizing facial identity


Relevance:

100.00%

Publisher:

Abstract:

A large variety of social signals, such as facial expression and body language, are conveyed in everyday interactions, and accurate perception and interpretation of these social cues is necessary for reciprocal social interactions to take place successfully and efficiently. The present study was conducted to determine whether the impairments in social functioning that are commonly observed following a closed head injury could be at least partially attributable to a disruption in the ability to appreciate social cues. More specifically, an attempt was made to determine whether face processing deficits following a closed head injury (CHI) coincide with changes in electrophysiological responsivity to the presentation of facial stimuli. A number of event-related potentials (ERPs) that have been linked specifically to various aspects of visual processing were examined. These included the N170, an index of structural encoding ability; the N400, an index of the ability to detect differences in serially presented stimuli; and the Late Positivity (LP), an index of sensitivity to affective content in visually presented stimuli. Electrophysiological responses were recorded while participants with and without a closed head injury were presented with pairs of faces delivered in a rapid sequence and asked to compare them on the basis of whether they matched with respect to identity or emotion. Other behavioural measures of identity and emotion recognition were also employed, along with a small battery of standard neuropsychological tests used to determine general levels of cognitive impairment. Participants in the CHI group were impaired in a number of cognitive domains that are commonly affected following a brain injury. These impairments included reduced efficiency in various aspects of encoding verbal information into memory, a generally slower rate of information processing, decreased sensitivity to smell, and greater difficulty regulating emotion together with limited awareness of this impairment. Impairments in face and emotion processing were clearly evident in the CHI group. However, despite these impairments in face processing, there were no significant differences between groups in the electrophysiological components examined. The only exception was a trend toward delayed N170 peak latencies in the CHI group (p = .09), which may reflect inefficient structural encoding processes. In addition, group differences were noted in the region of the N100, thought to reflect very early selective attention. It is possible, then, that facial expression and identity processing deficits following CHI are secondary to (or exacerbated by) an underlying disruption of very early attentional processes. Alternatively, the difficulty may arise in the later cognitive stages involved in the interpretation of the relevant visual information. However, the present data do not allow these alternatives to be distinguished. Nonetheless, it was clearly evident that individuals with CHI are more likely than controls to make face processing errors, particularly for negative emotions that are more difficult to discriminate. Those working with individuals who have sustained a head injury should be alerted to this potential source of the social monitoring difficulties that are often observed as part of the sequelae of a CHI.
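A trend toward delayed N170 latencies of the kind reported above is typically quantified by locating the most negative deflection within a fixed post-stimulus window at occipito-temporal electrodes. The sketch below is a minimal illustration of that step, not the authors' analysis pipeline; the 130–200 ms window, the 500 Hz sampling rate, and the variable names (erp, times) are assumptions.

```python
# Minimal sketch: extracting an N170 peak latency from an averaged ERP.
# `erp` stands in for the averaged waveform at one occipito-temporal
# electrode; the search window and sampling rate are assumed values.
import numpy as np

def n170_peak_latency(erp, times, window=(0.130, 0.200)):
    """Latency (s) of the most negative deflection inside the N170 window."""
    in_window = (times >= window[0]) & (times <= window[1])
    peak_index = np.argmin(erp[in_window])    # N170 is a negative-going component
    return times[in_window][peak_index]

sfreq = 500.0                                  # assumed sampling rate (Hz)
times = np.arange(-0.100, 0.500, 1.0 / sfreq)  # -100 ms to 500 ms epoch
erp = np.random.randn(times.size) * 1e-6       # stand-in for a real averaged ERP
print(n170_peak_latency(erp, times))
```

Per-participant latencies extracted this way would then be compared between groups, which is the kind of contrast behind a trend such as p = .09.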

Relevance:

100.00%

Publisher:

Abstract:

The objective of this research is to create an online platform for examining individual differences in visual information processing strategies across different face categorization tasks. The purpose of such a platform is to collect data from geographically dispersed participants whose face recognition abilities vary. Indeed, numerous studies have shown that there is considerable variability in the spectrum of face recognition abilities, ranging from developmental prosopagnosia (Susilo & Duchaine, 2013), a face recognition disorder in the absence of brain lesion, to super-recognizers, individuals whose face recognition abilities are above average (Russell, Duchaine & Nakayama, 2009). Between these two extremes, face recognition ability varies across the normal population. To demonstrate the feasibility of building such a platform for individuals of widely varying ability, we adapted a celebrity face identification task based on the Bubbles method (Gosselin & Schyns, 2001) and recruited 14 control subjects and one subject with developmental prosopagnosia. We were able to show the importance of the eyes and the mouth for face identification in the "normal" subjects. The best participants, by contrast, seem to rely mainly on the left side of the face (the left eye and the left side of the mouth).
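The Bubbles method mentioned here reveals each face through a set of randomly placed Gaussian apertures, so that recognition accuracy can later be related, trial by trial, to which regions happened to be visible. Below is a minimal sketch of generating one such stimulus; the bubble count, aperture width, image size, and function names are illustrative choices, not the parameters used in this study.

```python
# Minimal sketch of a single Bubbles trial (in the spirit of Gosselin &
# Schyns, 2001): reveal a face only through randomly placed Gaussian
# apertures. All numeric parameters here are placeholders.
import numpy as np

def bubbles_mask(shape, n_bubbles=40, sigma=12.0, rng=None):
    """Sum of randomly placed Gaussian apertures, clipped to [0, 1]."""
    rng = rng or np.random.default_rng()
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=float)
    for cy, cx in zip(rng.integers(0, h, n_bubbles), rng.integers(0, w, n_bubbles)):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def apply_bubbles(face, mask, background=0.5):
    """Show the face through the apertures and mid-grey everywhere else."""
    return mask * face + (1.0 - mask) * background

face = np.random.rand(256, 256)               # stand-in for a normalised face image
stimulus = apply_bubbles(face, bubbles_mask(face.shape))
```

Across many trials, the masks from correct and incorrect responses are then contrasted pixel by pixel to build a classification image that highlights diagnostic regions such as the eyes and mouth.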

Relevance:

90.00%

Publisher:

Abstract:

Objective: It was the aim of this study to investigate facial emotion recognition (FER) in the elderly with cognitive impairment. Method: Twelve patients with Alzheimer's disease (AD) and 12 healthy control subjects were asked to name dynamic or static pictures of basic facial emotions using the Multimodal Emotion Recognition Test and to assess the degree of their difficulty in the recognition task, while their electrodermal conductance was registered as an unconscious processing measure. Results: AD patients had lower objective recognition performances for disgust and fear, but only disgust was accompanied by decreased subjective FER in AD patients. The electrodermal response was similar in all groups. No significant effect of dynamic versus static emotion presentation on FER was found. Conclusion: Selective impairment in recognizing facial expressions of disgust and fear may indicate a nonlinear decline in FER capacity with increasing cognitive impairment and result from progressive though specific damage to neural structures engaged in emotional processing and facial emotion identification. Although our results suggest unchanged unconscious FER processing with increasing cognitive impairment, further investigations on unconscious FER and self-awareness of FER capacity in neurodegenerative disorders are required.

Relevance:

90.00%

Publisher:

Abstract:

To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.

Relevance:

90.00%

Publisher:

Abstract:

Autism is a chronic pervasive neurodevelopmental disorder characterized by the early onset of social and communicative impairments as well as restricted, ritualized, stereotypic behavior. The endophenotype of autism includes neuropsychological deficits, for instance a lack of "Theory of Mind" and problems recognizing facial affect. In this study, we report the development and evaluation of a computer-based program to teach and test the ability to identify basic facially expressed emotions. Ten adolescent or adult subjects with high-functioning autism or Asperger syndrome were included in the investigation. The facial affect recognition test had previously shown good psychometric properties in a normative sample (internal consistency: rtt = .91–.95; retest reliability: rtt = .89–.92). In a pre-post design, one half of the sample was randomly assigned to receive the computer training while the other half served as a control group. The training ran for five weeks, with two hours of training per week. The trained individuals improved significantly on the affect recognition task, but not on any other measure. The results support the usefulness of the program for teaching the detection of facial affect. However, the improvement is limited to a circumscribed area of social-communicative function, and generalization is not ensured.

Relevance:

80.00%

Publisher:

Abstract:

The ability to recognize others' facial expressions is crucial to successful social interactions. The visual information needed to categorize statically presented basic facial expressions of emotion is relatively well known. However, the information used to discriminate all of the basic facial expressions from one another remains poorly understood, for both static and dynamic expressions. Many researchers assume that the eye region is particularly important for "reading" others' emotions. The first article of this thesis aims to characterize the information used by the visual system to discriminate all of the basic facial expressions from one another, and to test the hypothesis that the eye region is crucial for this task. The Bubbles method (Gosselin & Schyns, 2001) is used with static (Exp. 1) and dynamic (Exp. 2) facial expressions to determine which facial regions are used (Exps. 1 and 2) and the temporal order in which they are used (Exp. 2). The results indicate that, contrary to the aforementioned belief, the mouth region is significantly more useful than the eye region for discriminating the basic facial expressions. Despite this predominant role of the mouth, it is nonetheless the eye region that is underused by several clinical populations who have difficulty recognizing facial expressions. This observation could suggest that use of the eye region varies as a function of ability at this task. The second article of this thesis therefore examines how individual differences in facial expression recognition relate to the strategies used to extract visual information for this task. The results reveal a positive correlation between use of the mouth region and ability, suggesting qualitative differences between the strategies of patients and those of normal observers. In addition, a positive correlation is found between use of the left eye and participants' ability, whereas no correlation is found between use of the right eye and ability. These results indicate that the strategy of the best participants does not differ from that of the poorest participants simply through better use of the information available in the stimulus: qualitative differences appear to exist even among the strategies of normal participants.
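The individual-differences result described here amounts to correlating, across participants, a summary of how much a facial region was used with overall recognition ability. A hedged sketch, with hypothetical variable names (mouth_use, accuracy) and simulated values standing in for the real measures:

```python
# Minimal sketch: relating use of the mouth region to recognition ability.
# `mouth_use` and `accuracy` are hypothetical per-participant summaries,
# e.g. the proportion of diagnostic pixels falling inside a mouth mask and
# the proportion of expressions correctly categorised; values are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
accuracy = rng.uniform(0.6, 0.95, size=30)              # stand-in ability scores
mouth_use = 0.5 * accuracy + rng.normal(0.0, 0.05, 30)  # stand-in region-use scores

r, p = pearsonr(mouth_use, accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```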

Relevance:

80.00%

Publisher:

Abstract:

The central challenge in face recognition lies in understanding the role different facial features play in our judgments of identity. Notable in this regard are the relative contributions of the internal (eyes, nose and mouth) and external (hair and jaw-line) features. Past studies that have investigated this issue have typically used high-resolution images or good-quality line drawings as facial stimuli. The results obtained are therefore most relevant for understanding the identification of faces at close range. However, given that real-world viewing conditions are rarely optimal, it is also important to know how image degradations, such as loss of resolution caused by large viewing distances, influence our ability to use internal and external features. Here, we report experiments designed to address this issue. Our data characterize how the relative contributions of internal and external features change as a function of image resolution. While we replicated results of previous studies that have shown internal features of familiar faces to be more useful for recognition than external features at high resolution, we found that the two feature sets reverse in importance as resolution decreases. These results suggest that the visual system uses a highly non-linear cue-fusion strategy in combining internal and external features along the dimension of image resolution and that the configural cues that relate the two feature sets play an important role in judgments of facial identity.
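The loss of resolution that comes with larger viewing distances is commonly simulated by discarding high spatial frequencies, for instance by block-averaging (or blurring) the image. The sketch below shows the block-averaging variant under assumed image and block sizes; it illustrates the general manipulation rather than the exact procedure used in these experiments.

```python
# Minimal sketch: simulating reduced image resolution by block-averaging.
# Image and block sizes are illustrative; a Gaussian blur would serve the
# same purpose of removing high spatial frequencies.
import numpy as np

def reduce_resolution(image, block=8):
    """Average block x block regions, then re-expand to (nearly) the original size."""
    h, w = image.shape
    h_c, w_c = h - h % block, w - w % block           # crop to a multiple of block
    coarse = (image[:h_c, :w_c]
              .reshape(h_c // block, block, w_c // block, block)
              .mean(axis=(1, 3)))
    return np.kron(coarse, np.ones((block, block)))   # blocky low-resolution version

face = np.random.rand(256, 256)                       # stand-in for a face photograph
degraded_versions = [reduce_resolution(face, b) for b in (2, 4, 8, 16)]
```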

Relevance:

80.00%

Publisher:

Abstract:

This article seeks to identify the professional identity of the Physical Education professional that is being constructed, and whether the undergraduate Physical Education courses in Pará meet the aspirations of their students. The Political-Pedagogical Projects of the Universidade do Estado do Pará (UEPA) and of the Universidade Federal do Pará (UFPA – Belém and Castanhal campuses) were analysed, and students in the final semester of these courses were interviewed, using the Technique of Elaboration and Analysis of Meaning (MOREIRA; SIMÕES; PORTO, 2005) as the analysis technique to identify the students' aspirations. The study concludes that the Political-Pedagogical Projects of these Physical Education courses do not align with the aspirations of the students interviewed: the course does not meet their expectations regarding their training, and the training process should be reviewed.

Relevance:

80.00%

Publisher:

Abstract:

It is believed that an individual's way of being and communicative-relational skills have multifactorial origins, including the quality of primary relationships with caregivers. For some time, the need for health care professionals to possess specific communicative and interpersonal skills has been highlighted. The degree course in Nursing, like all other health-related degree programs, admits students who differ widely from one another, both in personality and in relational skills. Each academic year, therefore, those responsible for the didactic organization of a course must prepare a training plan that addresses communicative-relational aspects and, at the same time, fits the actual attitudes of the incoming students. Hence the need for appropriate tools for measuring the personological and vocational traits considered specific to the health professions. This study has a twofold objective. On the one hand, it aims to select a battery of psychological tests that detect psychological and attitudinal patterns, to help course coordinators with their didactic organization and the planning of educational training; on the other hand, it seeks to assess the correlations between communicative-relational skills (relational-communicative style according to the patient-centered medicine model, TRS; Mucchielli's Test of Spontaneous Attitudes, i.e. the habitual attitude adopted in dyadic relationships), personality traits (alexithymia), styles of attachment to parental figures (PBI), and the ability to recognize facial emotions in a sample of students enrolled in the first year of a Nursing degree.

Relevance:

80.00%

Publisher:

Abstract:

The body is represented in the brain at levels that incorporate multisensory information. This thesis focused on interactions between vision and cutaneous sensations (i.e., touch and pain). Experiment 1 revealed that there are partially dissociable pathways for visual enhancement of touch (VET) depending upon whether one sees one’s own body or the body of another person. This indicates that VET, a seeming low-level effect on spatial tactile acuity, is actually sensitive to body identity. Experiments 2-4 explored the effect of viewing one’s own body on pain perception. They demonstrated that viewing the body biases pain intensity judgments irrespective of actual stimulus intensity, and, more importantly, reduces the discriminative capacities of the nociceptive pathway encoding noxious stimulus intensity. The latter effect only occurs if the pain-inducing event itself is not visible, suggesting that viewing the body alone and viewing a stimulus event on the body have distinct effects on cutaneous sensations. Experiment 5 replicated an enhancement of visual remapping of touch (VRT) when viewing fearful human faces being touched, and further demonstrated that VRT does not occur for observed touch on non-human faces, even fearful ones. This suggests that the facial expressions of non-human animals may not be simulated within the somatosensory system of the human observer in the same way that the facial expressions of other humans are. Finally, Experiment 6 examined the enfacement illusion, in which synchronous visuo-tactile inputs cause another’s face to be assimilated into the mental self-face representation. The strength of enfacement was not affected by the other’s facial expression, supporting an asymmetric relationship between processing of facial identity and facial expressions. Together, these studies indicate that multisensory representations of the body in the brain link low-level perceptual processes with the perception of emotional cues and body/face identity, and interact in complex ways depending upon contextual factors.

Relevance:

80.00%

Publisher:

Abstract:

Several studies have investigated the role of featural and configural information when processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
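The discriminability measure A′ used in Experiment 1 is the standard non-parametric sensitivity index computed from a hit rate and a false-alarm rate (Grier, 1971); it runs from 0.5 (chance) to 1.0 (perfect discrimination). A minimal sketch, with hypothetical trial counts in the example:

```python
# Non-parametric sensitivity index A' from hit and false-alarm rates
# (Grier, 1971). The example counts below are hypothetical.
def a_prime(hit_rate, fa_rate):
    """A' = 0.5 at chance, 1.0 for perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# Example: 45/50 expressions correctly identified, 10/50 false alarms.
print(a_prime(45 / 50, 10 / 50))   # ~0.91
```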

Relevance:

80.00%

Publisher:

Abstract:

Because faces and bodies share some abstract perceptual features, we hypothesised that similar recognition processes might be used for both. We investigated whether similar caricature effects to those found in facial identity and expression recognition could be found in the recognition of individual bodies and socially meaningful body positions. Participants were trained to name four body positions (anger, fear, disgust, sadness) and four individuals (in a neutral position). We then tested their recognition of extremely caricatured, moderately caricatured, anti-caricatured, and undistorted images of each stimulus. Consistent with caricature effects found in face recognition, moderately caricatured representations of individuals' bodies were recognised more accurately than undistorted and extremely caricatured representations. No significant difference was found between participants' recognition of extremely caricatured, moderately caricatured, or undistorted body position line-drawings. All anti-caricatured representations were named significantly less accurately than the veridical stimuli. Similar mental representations may be used for both bodies and faces.
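Caricature levels of the kind described here are typically produced by moving a stimulus's landmark configuration away from (caricature) or toward (anti-caricature) an average, or norm, configuration. The sketch below illustrates that linear transform; the landmark arrays and exaggeration levels are made-up placeholders rather than the stimuli or levels used in the study.

```python
# Minimal sketch of landmark-based caricaturing relative to a norm.
# All arrays and level values below are illustrative placeholders.
import numpy as np

def caricature(landmarks, norm, level):
    """level > 1 exaggerates, level == 1 is veridical, 0 < level < 1 anti-caricatures."""
    return norm + level * (landmarks - norm)

norm = np.random.rand(20, 2)                       # stand-in average configuration
original = norm + 0.05 * np.random.randn(20, 2)    # one individual's configuration
extreme = caricature(original, norm, 1.50)
moderate = caricature(original, norm, 1.25)
anti = caricature(original, norm, 0.75)
```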

Relevance:

80.00%

Publisher:

Abstract:

The study aimed to determine if the memory bias for negative faces previously demonstrated in depression and dysphoria generalises from long- to short-term memory. A total of 29 dysphoric (DP) and 22 non-dysphoric (ND) participants were presented with a series of faces and asked to identify the emotion portrayed (happiness, sadness, anger, or neutral affect). Following a delay, four faces were presented (the original plus three distractors) and participants were asked to identify the target face. Half of the trials assessed memory for facial emotion, and the remaining trials examined memory for facial identity. At encoding, no group differences were apparent. At memory testing, relative to ND participants, DP participants exhibited impaired memory for all types of facial emotion and for facial identity when the faces featured happiness, anger, or neutral affect, but not sadness. DP participants exhibited impaired identity memory for happy faces relative to angry, sad, and neutral faces, whereas ND participants exhibited enhanced facial identity memory when faces were angry. In general, memory for faces was not related to performance at encoding. However, in DP participants only, memory for sad faces was related to sadness recognition at encoding. The results suggest that the negative memory bias for faces in dysphoria does not generalise from long- to short-term memory.