10 results for Face Perception
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The precise role of the fusiform face area (FFA) in face processing remains controversial. In this study, we investigated to what degree FFA activation reflects additional functions beyond face perception. Seven volunteers underwent rapid event-related functional magnetic resonance imaging while they performed a face-encoding and a face-recognition task. During face encoding, FFA activity elicited by individual faces predicted whether each face was subsequently remembered or forgotten. During face recognition, however, FFA activity did not differ between consciously remembered and forgotten faces, yet it did distinguish faces that had been seen previously from those that had not. This demonstrates a dissociation between overt recognition and unconscious discrimination of stimuli, suggesting that the physiological processes of face recognition can take place even when not all of their operations are made available to consciousness.
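As a rough illustration of the subsequent-memory logic described above, the sketch below groups hypothetical per-trial FFA responses at encoding by whether the face was later remembered and compares the group means. It assumes per-trial response estimates (e.g., GLM betas) have already been extracted from the FFA; the data and numbers are simulated, not the study's.

```python
# Hypothetical sketch of a subsequent-memory comparison (not the
# study's actual analysis). Assumes per-trial FFA response estimates
# were already extracted for each face shown at encoding.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated per-trial FFA responses at encoding (arbitrary units).
ffa_remembered = rng.normal(loc=1.2, scale=0.5, size=40)  # later remembered
ffa_forgotten = rng.normal(loc=0.9, scale=0.5, size=40)   # later forgotten

# A larger encoding response for subsequently remembered faces is the
# signature the abstract describes.
t, p = stats.ttest_ind(ffa_remembered, ffa_forgotten)
print(f"remembered mean={ffa_remembered.mean():.2f}, "
      f"forgotten mean={ffa_forgotten.mean():.2f}, t={t:.2f}, p={p:.3f}")
```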
Abstract:
In this article we focus on the emotional basis of face perception. In addition, the most important findings concerning the epidemiology and etiology of body dysmorphic disorder (BDD) are reviewed and related to face perception. BDD can be seen as an emotional disorder in which fundamental errors in information processing occur, especially concerning faces: emotional information is misinterpreted. Both emotional misinterpretation and errors in face perception and recognition are part of the disorder. The relevance of BDD to esthetic surgery is discussed, as are alternative options for patients, such as psychotherapy or pharmacotherapy.
Abstract:
Background: Visuoperceptual deficits in dementia are common and can reduce quality of life. Testing of visuoperceptual function is often confounded by impairments in other cognitive domains and by motor dysfunction. We aimed to develop, pilot, and test a novel visuocognitive prototype test battery that addresses these issues and is suitable for both clinical and functional imaging use. Methods: We recruited 23 participants (14 with dementia, 6 of whom had extrapyramidal motor features, and 9 age-matched controls). The novel Newcastle visual perception prototype battery (NEVIP-B-Prototype) included angle, color, face, motion, and form perception tasks and an adapted response system, and it allows for individualized task difficulties. Participants were tested outside and inside a 3T functional magnetic resonance imaging (fMRI) scanner. fMRI data were analyzed using SPM8. Results: All participants successfully completed the task inside and outside the scanner. fMRI analysis showed activation regions corresponding well to the regional specializations of the visual association cortex. In both groups, there was significant activity in the ventral occipital-temporal region in the face and color tasks, whereas the motion task activated the V5 region. In the control group, the angle task activated the occipitoparietal cortex. Patients and controls showed similar levels of activation, except on the angle task, for which occipitoparietal activation was lower in patients than in controls. Conclusion: Distinct visuoperceptual functions can be tested in patients with dementia and extrapyramidal motor features when tests use individualized thresholds, adapted tasks, and specialized response systems.
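The battery's individualized task difficulties imply some adaptive procedure; the abstract does not specify which. The sketch below illustrates one common choice, a 1-up/2-down staircase, purely as a hypothetical example of how per-participant difficulty could be set; it is not the NEVIP-B-Prototype's documented method.

```python
import random

# Minimal 1-up/2-down staircase: the task gets harder after two
# consecutive correct responses and easier after each error, so the
# level converges near ~71% accuracy. Illustrative only.
def staircase(respond, level=10.0, step=1.0, n_trials=40):
    streak = 0
    history = []
    for _ in range(n_trials):
        correct = respond(level)      # True if the response was correct
        history.append((level, correct))
        if correct:
            streak += 1
            if streak == 2:           # two in a row -> make it harder
                level = max(level - step, 0.0)
                streak = 0
        else:                         # one error -> make it easier
            level += step
            streak = 0
    return history

# Simulated observer: succeeds when the difference (the "level")
# exceeds 4 units, plus occasional lucky guesses.
demo = staircase(lambda lvl: lvl > 4 or random.random() < 0.2)
print(demo[-5:])  # final trials hover near the observer's threshold
```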
Abstract:
OBJECTIVE To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcams (Logitech Pro9000, C600, and C500), and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for a live Skype™ video connection and live face-to-face communication were assessed. RESULTS Higher frame rates (>7 fps), higher camera resolutions (>640 × 480 px), and shorter picture/sound delays (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by the physical properties of the camera optics or by full-screen mode. There was a significant median gain of +8.5 percentage points (p = 0.009) in speech perception across all 21 CI users when visual cues were additionally shown. CI users with poor open-set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception gain +11.8 percentage points, p = 0.032). CONCLUSION Webcams have the potential to improve telecommunication for hearing-impaired individuals.
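The simulation parameters above form a factorial grid. As a hypothetical sketch of how such a protocol could be enumerated (the resolution and frame-rate values come from the abstract, while the delay steps and the full crossing of factors are assumptions):

```python
# Enumerate hypothetical video-simulation conditions. Resolution and
# frame-rate values come from the abstract; the delay steps and the
# full factorial crossing are assumptions for illustration.
from itertools import product

resolutions = [(1280, 720), (640, 480), (320, 240), (160, 120)]  # px
frame_rates = [30, 20, 10, 7, 5]                                 # fps
delays_ms = [0, 100, 250, 500]   # assumed steps within the 0-500 ms range
sound = [True, False]

conditions = list(product(resolutions, frame_rates, delays_ms, sound))
print(f"{len(conditions)} conditions")  # 4 * 5 * 4 * 2 = 160
for (w, h), fps, delay, with_sound in conditions[:3]:
    print(f"{w} x {h} px, {fps} fps, {delay} ms delay, sound={with_sound}")
```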
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and on the fixation of body parts. We hypothesized that aphasic patients, whose verbal comprehension is restricted, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present or absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
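The two dependent measures, cumulative and mean fixation duration per ROI, can be computed directly from a list of fixations once each fixation has been assigned to an ROI. A minimal sketch with toy data (the ROI names and durations are hypothetical, not the study's):

```python
# Minimal sketch: aggregate fixations into cumulative and mean fixation
# duration per region of interest (ROI). Assumes each fixation has
# already been mapped to an ROI; all names and values are toy data.
from collections import defaultdict

fixations = [  # (roi, duration in ms)
    ("speaker_face", 420), ("speaker_hands", 180),
    ("speaker_face", 350), ("listener_face", 200),
    ("speaker_hands", 150), ("speaker_face", 510),
]

totals = defaultdict(float)
counts = defaultdict(int)
for roi, dur in fixations:
    totals[roi] += dur
    counts[roi] += 1

for roi in totals:
    print(f"{roi}: cumulative={totals[roi]:.0f} ms, "
          f"mean={totals[roi] / counts[roi]:.0f} ms")
```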
Abstract:
OBJECTIVE To test whether sleep-deprived, healthy subjects who do not always signal spontaneously perceived sleepiness (SPS) before falling asleep during the Maintenance of Wakefulness Test (MWT) would do so in a driving simulator. METHODS Twenty-four healthy subjects (20-26 years old) underwent a 40-min MWT and a 1-h driving simulator test, before and after one night of sleep deprivation. Standard electroencephalography, electrooculography, submental electromyography, and face videography were recorded simultaneously to score wakefulness and sleep. In every test, subjects were instructed to signal SPS as soon as they subjectively felt sleepy and to try to stay awake for as long as possible. They were rewarded both for "appropriate" perception of SPS and for staying awake for as long as possible. RESULTS After sleep deprivation, seven subjects (29%) did not signal SPS before falling asleep in the MWT, but all subjects signalled SPS before falling asleep in the driving simulator (p < 0.004). CONCLUSIONS The previous finding of "inaccurate" SPS in the MWT was confirmed, whereas SPS in the driving simulator was perfect. It is hypothesised that SPS is more accurate for tasks involving continuous performance feedback, such as driving, than for the less active situation of the MWT. Spontaneously perceived sleepiness in the MWT cannot be used to judge sleepiness perception while driving. Further studies are needed to define the accuracy of SPS in work tasks or occupations with minimal or no performance feedback.
Abstract:
Perceptual accuracy is known to be influenced by stimulus location within the visual field. In particular, it appears enhanced in the lower visual hemifield (VH) for motion and space processing, and in the upper VH for object and face processing. The origins of such asymmetries are attributed to attentional biases across the visual field and to the functional organization of the visual system. In this article, we tested content-dependent perceptual asymmetries in different regions of the visual field. Twenty-five healthy volunteers participated in this study. They performed three visual tests involving the perception of shapes, orientation, and motion in the four quadrants of the visual field. The results showed that perceptual accuracy was better in the lower than in the upper visual field for motion perception, and better in the upper than in the lower visual field for shape perception. Orientation perception did not show any vertical bias, and no difference was found when comparing the right and left VHs. These findings fit the functional organization of the visual system: the dorsal visual stream, responsible for motion perception, shows a bias for the lower VH, whereas the ventral stream, responsible for shape perception, shows a bias for the upper VH. Such a bias thus depends on the content of the visual information.
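Testing the four quadrants separately requires assigning each stimulus position to a quadrant relative to central fixation and then aggregating accuracy per quadrant. A minimal hypothetical sketch of that bookkeeping (coordinates and trial data are invented for illustration):

```python
# Hypothetical sketch: assign stimuli to visual-field quadrants relative
# to central fixation at (0, 0) and compute accuracy per quadrant.
from collections import defaultdict

def quadrant(x, y):
    """Quadrant of a stimulus at (x, y) degrees from fixation."""
    return f"{'upper' if y >= 0 else 'lower'}-{'right' if x >= 0 else 'left'}"

trials = [  # (x_deg, y_deg, correct); toy data
    (5, 5, True), (-5, 5, False), (5, -5, True),
    (-5, -5, True), (6, 4, True), (-4, -6, False),
]

hits = defaultdict(int)
n = defaultdict(int)
for x, y, correct in trials:
    q = quadrant(x, y)
    hits[q] += int(correct)
    n[q] += 1

for q in sorted(n):
    print(f"{q}: accuracy = {hits[q] / n[q]:.2f} over {n[q]} trials")
```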