9 results for Speech emotion recognition

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Several studies have investigated the role of featural and configural information when processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.

Relevance:

100.00%

Publisher:

Abstract:

Background: Emotional processing in essential hypertension beyond self-report questionnaires has hardly been investigated. The aim of this study was to examine associations between hypertension status and recognition of facial affect. Methods: 25 healthy, non-smoking, medication-free men, including 13 hypertensive subjects aged between 20 and 65 years, completed a computer-based task examining sensitivity of recognition of facial affect. Neutral faces gradually changed to a specific emotion in a pseudo-continuous manner. Slides of the six basic emotions (fear, sadness, disgust, happiness, anger, surprise) were chosen from the "NimStim Set". Pictures of three female and three male faces were electronically morphed in 1% steps of intensity from 0% to 100% (36 sets of faces with 100 pictures each). Each picture of a set was presented for one second, ranging from 0% to 100% intensity. Participants were instructed to press a stop button as soon as they recognized the expression of the face. After stopping, a forced choice between the six basic emotions was required. As dependent variables, we recorded the emotion intensity at which the presentation was stopped and the number of errors (error rate). Recognition sensitivity was calculated as the emotion intensity of correctly identified emotions. Results: Mean arterial pressure was associated with significantly increased recognition sensitivity of facial affect for the emotion anger (β = -.43, p = .03, ΔR² = .110). There was no association with the emotions fear, sadness, disgust, happiness, and surprise (p's > .41). Mean arterial pressure did not relate to the mean number of errors for any of the facial emotions. Conclusions: Our findings suggest that increased blood pressure is associated with increased recognition sensitivity of facial affect for the emotion anger: hypertensives perceive facial anger expressions faster than normotensives.
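The trial procedure described above (a face morphing from 0% to 100% emotion intensity in 1% steps until the participant presses stop, with the stop intensity recorded as the dependent variable) can be sketched as follows. This is a minimal, hypothetical simulation of the paradigm's logic, not the authors' actual experiment code; the function names and the simulated stop points are illustrative assumptions.

```python
# Hypothetical sketch of the morph-presentation logic: each trial steps a face
# from 0% to 100% emotion intensity in 1% increments (one second per frame in
# the real task) until the participant presses the stop button.

def run_trial(stop_at, n_steps=100):
    """Simulate one trial; stop_at is the intensity (%) at which the
    simulated participant recognizes the emotion and presses stop."""
    for intensity in range(n_steps + 1):  # 0% .. 100% in 1% steps
        # in the real task, the morphed frame would be displayed for 1 s here
        if intensity >= stop_at:
            return intensity  # recorded stop intensity (dependent variable)
    return n_steps  # participant never stopped: full intensity reached

# Recognition sensitivity = mean stop intensity over correctly identified
# trials (stop points below are made-up example values).
stops = [run_trial(s) for s in (35, 42, 28)]
sensitivity = sum(stops) / len(stops)
```

Lower sensitivity values thus indicate faster recognition, which is how an association between blood pressure and earlier stopping for anger faces would manifest in the data.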

Relevance:

100.00%

Publisher:

Abstract:

Identification of emotional facial expression and emotional prosody (i.e. speech melody) is often impaired in schizophrenia. For facial emotion identification, a recent study suggested that the relative deficit in schizophrenia is enhanced when the presented emotion is easier to recognize. It is unclear whether this effect is specific to face processing or part of a more general emotion recognition deficit.

Relevance:

80.00%

Publisher:

Abstract:

Empathy is a core prerequisite for human social behavior. Relatively little is known about how empathy is influenced by social stress and its associated neuroendocrine alterations. The current study was designed to test the impact of acute stress on emotional and cognitive empathy. Healthy male participants were exposed to a psychosocial laboratory stressor (Trier Social Stress Test, TSST) or a well-matched control condition (Placebo-TSST). Afterwards, they completed a test measuring emotional and cognitive empathy (Multifaceted Empathy Test, MET). Stress exposure caused an increase in negative affect and rises in salivary alpha-amylase and cortisol. Participants exposed to stress reported more emotional empathy in response to pictures displaying both positive and negative emotional social scenes. Cognitive empathy (emotion recognition), in contrast, did not differ between the stress and control groups. The current findings provide initial evidence for enhanced emotional empathy after acute psychosocial stress.

Relevance:

80.00%

Publisher:

Abstract:

Gamma-hydroxybutyrate (GHB) is a GHB-/GABAB-receptor agonist. Reports from GHB abusers indicate euphoric, prosocial, and empathogenic effects of the drug. We measured the effects of GHB on mood, prosocial behavior, and social and non-social cognition, and assessed potential underlying neuroendocrine mechanisms. GHB (20 mg/kg) was tested in 16 healthy males, using a randomized, placebo-controlled, cross-over design. Subjective effects on mood were assessed by visual analogue scales and the GHB-Specific Questionnaire. Prosocial behavior was examined by the Charity Donation Task, the Social Value Orientation test, and the Reciprocity Task. Reaction time, memory, empathy, and theory of mind were also tested. Blood plasma levels of GHB, oxytocin, testosterone, progesterone, dehydroepiandrosterone (DHEA), cortisol, aldosterone, and adrenocorticotropic hormone (ACTH) were determined. GHB showed stimulating and sedating effects, and elicited euphoria, disinhibition, and enhanced vitality. In participants with low prosociality, the drug increased donations and prosocial money distributions. In contrast, social cognitive abilities such as emotion recognition, empathy, and theory of mind, as well as basal cognitive functions, were not affected. GHB increased plasma progesterone, while oxytocin, testosterone, cortisol, aldosterone, DHEA, and ACTH levels remained unaffected. GHB has mood-enhancing and prosocial effects without affecting social hormones such as oxytocin and testosterone. These data suggest a potential involvement of GHB-/GABAB-receptors and progesterone in mood and prosocial behavior.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE To evaluate speech intelligibility in noise with a new cochlear implant (CI) processor that uses a pinna-effect-imitating directional microphone system. STUDY DESIGN Prospective experimental study. SETTING Tertiary referral center. PATIENTS Ten experienced, unilateral CI recipients with bilateral severe-to-profound hearing loss. INTERVENTION All participants performed speech-in-noise tests with the Opus 2 processor (omnidirectional microphone mode only) and the newer Sonnet processor (omnidirectional and directional microphone modes). MAIN OUTCOME MEASURE The speech reception threshold (SRT) in noise was measured in four spatial settings. The test sentences were always presented from the front. The noise arrived either from the front (S0N0), the ipsilateral side of the CI (S0NIL), the contralateral side of the CI (S0NCL), or the back (S0N180). RESULTS Compared with the Sonnet in omnidirectional mode, the directional mode improved the SRTs by 3.6 dB (p < 0.01), 2.2 dB (p < 0.01), and 1.3 dB (p < 0.05) in the S0N180, S0NIL, and S0NCL situations, respectively. There was no statistically significant difference in the S0N0 situation, and no differences between the Opus 2 and the Sonnet in omnidirectional mode were observed. CONCLUSION Speech intelligibility with the Sonnet differed significantly from that with the Opus 2, suggesting that CI users might profit from the pinna-effect-imitating directional mode in noisy environments.