30 results for Speech emotion recognition
in Aston University Research Archive
Abstract:
The primary aim of this study was to investigate facial emotion recognition (FER) in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and 20 healthy controls, matched for age, sex and education, were assessed with the Facially Expressed Emotion Labelling Test of FER and the 26-item Toronto Alexithymia Scale. Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in FER observed in the patients with SFD was most likely a consequence of concurrent alexithymia. It should be noted that neither depression nor anxiety was significantly related to emotion recognition accuracy, suggesting that these variables did not contribute to the emotion recognition deficit. Impaired FER observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning.
Abstract:
Significant facial emotion recognition (FER) deficits have been observed in participants exhibiting high levels of eating psychopathology. The current study aimed to determine if the pattern of FER deficits is influenced by intensity of facial emotion and to establish if eating psychopathology is associated with a specific pattern of emotion recognition errors that is independent of other psychopathological or personality factors. Eighty females, 40 high and 40 low scorers on the Eating Disorders Inventory (EDI), were presented with a series of faces, each featuring one of five emotional expressions at one of four intensities, and were asked to identify the emotion portrayed. Results revealed that, in comparison to low EDI scorers, high scorers correctly recognised significantly fewer expressions, particularly of fear and anger. There was also a trend for this deficit to be more evident for subtle displays of emotion (50% intensity). Deficits in anger recognition were related specifically to scores on the body dissatisfaction subscale of the EDI. Error analyses revealed that, in comparison to low EDI scorers, high scorers made significantly more fear-as-anger errors. Also, a tendency to label anger expressions as sadness was related to body dissatisfaction. Current findings confirm FER deficits in subclinical eating psychopathology and extend these findings to subtle expressions of emotion. Furthermore, this is the first study to establish that these deficits are related to a specific pattern of recognition errors. Impaired FER could disrupt normal social functioning and might represent a risk factor for the development of more severe psychopathology.
Abstract:
Four patients who had received an anterior cingulotomy (ACING) and five patients who had received both an ACING and an anterior capsulotomy (ACAPS) as an intervention for chronic, treatment-refractory depression were presented with a series of dynamic emotional stimuli and invited to identify the emotion portrayed. Their performance was compared with that of a group of non-surgically treated patients with major depression (n = 17) and with a group of matched, never-depressed controls (n = 22). At the time of testing, four of the nine neurosurgery patients had recovered from their depressive episode, whereas five remained depressed. Analysis of emotion recognition accuracy revealed no significant differences between depressed and non-depressed neurosurgically treated patients. Similarly, no significant differences were observed between the patients treated with ACING alone and those treated with both ACING and ACAPS. Comparison of the emotion recognition accuracy of the neurosurgically treated patients and the depressed and healthy control groups revealed that the surgically treated patients exhibited a general impairment in their recognition accuracy compared to healthy controls. Regression analysis revealed that participants' emotion recognition accuracy was predicted by the number of errors they made on the Stroop colour-naming task. It is plausible that the observed deficit in emotion recognition accuracy was a consequence of impaired attentional control, which may have been a result of the surgical lesions to the anterior cingulate cortex. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
Objectives: The aims were to determine if emotion recognition deficits observed in eating disorders generalise to non-clinical disordered eating and to establish if other psychopathological and personality factors contributed to, or accounted for, these deficits. Design: Females with high (n=23) and low (n=22) scores on the Eating Disorder Inventory (EDI) were assessed on their ability to recognise emotion from videotaped social interactions. Participants also completed a face memory task, a Stroop task, and self-report measures of alexithymia, depression and anxiety. Results: Relative to the low EDI group, high EDI participants exhibited a general deficit in recognition of emotion, which was related to their scores on the alexithymia measure and the bulimia subscale of the EDI. They also exhibited a specific deficit in the recognition of anger, which was related to their scores on the body dissatisfaction subscale of the EDI. Conclusions: In line with clinical eating disorders, non-clinical disordered eating is associated with emotion recognition deficits. However, the nature of these deficits appears to be dependent upon the type of eating psychopathology and the degree of co-morbid alexithymia.
Abstract:
This study aimed to: i) determine if the attention bias towards angry faces reported in eating disorders generalises to a non-clinical sample varying in eating disorder-related symptoms; ii) examine if the bias occurs during initial orientation or later strategic processing; and iii) confirm previous findings of impaired facial emotion recognition in non-clinical disordered eating. Fifty-two females viewed a series of face-pairs (happy or angry paired with neutral) whilst their attentional deployment was continuously monitored using an eye-tracker. They subsequently identified the emotion portrayed in a separate series of faces. The highest (n=18) and lowest scorers (n=17) on the Eating Disorders Inventory (EDI) were compared on the attention and facial emotion recognition tasks. Those with relatively high scores exhibited impaired facial emotion recognition, confirming previous findings in similar non-clinical samples. They also displayed biased attention away from emotional faces during later strategic processing, which is consistent with previously observed impairments in clinical samples. These differences were related to drive-for-thinness. Although we found no evidence of a bias towards angry faces, it is plausible that the observed impairments in emotion recognition and avoidance of emotional faces could disrupt social functioning and act as a risk factor for the development of eating disorders.
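The bias away from emotional faces during later strategic processing described above is typically quantified from eye-tracking dwell times. As a minimal illustrative sketch (the function and data layout here are assumptions, not the study's actual analysis code), a bias score can be expressed as the mean proportion of each trial's total dwell time spent on the emotional face; values below 0.5 indicate avoidance:

```python
def attention_bias(trials):
    """Mean proportion of dwell time on the emotional face across trials.

    Each trial is a (dwell_emotional_ms, dwell_neutral_ms) pair.
    0.5 = no bias; below 0.5 = avoidance of the emotional face.
    """
    proportions = [e / (e + n) for e, n in trials if (e + n) > 0]
    return sum(proportions) / len(proportions)

# A hypothetical participant who dwells mostly on the neutral faces
trials = [(300, 700), (250, 750), (400, 600)]
print(round(attention_bias(trials), 3))  # 0.317 -- below 0.5, i.e. avoidance
```

Splitting such scores by processing stage (e.g. first fixation versus total dwell) is what allows initial orientation and later strategic processing to be distinguished.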
Abstract:
BACKGROUND: Glue ear or otitis media with effusion (OME) is common in children and may be associated with hearing loss (HL). For most children it has no long-lasting effects on cognitive development, but it is unclear whether there are subgroups at higher risk of sequelae. OBJECTIVES: To examine the association between a score comprising the number of times a child had OME and HL (OME/HL score) in the first four/five years of life and IQ at ages 4 and 8. To examine whether any association between OME/HL and IQ is moderated by socioeconomic, child or family factors. METHODS: Prospective, longitudinal cohort study: the Avon Longitudinal Study of Parents and Children (ALSPAC). 1155 children were tested using tympanometry on up to nine occasions and hearing for speech (word recognition) on up to three occasions between age 8 months and 5 years. An OME/HL score was created and associations with IQ at ages 4 and 8 were examined. Potential moderators included a measure of the child's cognitive stimulation at home (HOME score). RESULTS: For the whole sample at age 4, the group with the highest 10% of OME/HL scores had performance IQ 5 points lower [95% CI -9, -1] and verbal IQ 6 points lower [95% CI -10, -3] than the unaffected group. By age 8 the evidence for group differences was weak. There were significant interactions between OME/HL and the HOME score: those with high OME/HL scores and low 18-month HOME scores had lower IQ at ages 4 and 8 than those with high OME/HL scores and high HOME scores. Adjusted mean differences ranged from 5 to 8 IQ points at ages 4 and 8. CONCLUSIONS: The cognitive development of children from homes with lower levels of cognitive stimulation is susceptible to the effects of glue ear and hearing loss.
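The OME/HL score above is a count-based composite over repeated assessments. The exact ALSPAC scoring is more involved than shown here, but the idea of summing occasions on which tympanometry indicated OME with occasions on which word-recognition testing indicated HL can be sketched as follows (function name and boolean encoding are illustrative assumptions):

```python
def ome_hl_score(ome_occasions, hl_occasions):
    """Illustrative composite: count of assessment occasions with OME
    (abnormal tympanogram) plus occasions with hearing loss for speech.

    ome_occasions: booleans, one per tympanometry occasion (up to nine).
    hl_occasions: booleans, one per word-recognition occasion (up to three).
    """
    return sum(ome_occasions) + sum(hl_occasions)

# A child with OME on three of four tympanometry occasions and HL on one
# of two hearing tests scores 3 + 1 = 4
print(ome_hl_score([True, True, False, True], [True, False]))
```

Children in the top 10% of such a score were then compared with the unaffected group, with the HOME score entered as a moderator.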
Abstract:
This thesis addresses the viability of automatic speech recognition for control room systems; with careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to possible effects of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than instructions alone. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstration from experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied as text; the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition, rather than through error-handling dialogues. This method of error correction is held to be non-intrusive to primary command and control operations. The thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for its reduction.
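The recommended error-correction strategy, simple command repetition rather than a dedicated error-handling dialogue, can be sketched as a recognition loop that acts only on confident matches against the command vocabulary and otherwise simply waits for the operator to repeat the utterance. The vocabulary, confidence threshold, and function name below are illustrative assumptions, not the thesis's actual design:

```python
# Illustrative command-and-control vocabulary
COMMANDS = {"open valve", "close valve", "increase flow", "decrease flow"}

def handle_utterance(recognised_text, confidence, threshold=0.8):
    """Return the command to execute, or None so the operator can simply
    repeat the utterance -- no separate error-handling dialogue is entered."""
    if confidence >= threshold and recognised_text in COMMANDS:
        return recognised_text  # execute, echoing recognition feedback as text
    return None                 # out-of-vocabulary or low confidence: await repetition

print(handle_utterance("open valve", 0.95))  # open valve
print(handle_utterance("open valve", 0.40))  # None -- operator repeats the command
```

The design rationale is that a rejected utterance costs the operator only one repetition, keeping the correction path identical to normal command issuing.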
Abstract:
The research presented in this paper is part of an ongoing investigation into how best to incorporate speech-based input within mobile data collection applications. In our previous work [1], we evaluated the ability of a single speech recognition engine to support accurate, mobile, speech-based data input. Here, we build on our previous research to compare the achievable speaker-independent accuracy rates of a variety of speech recognition engines; we also consider the relative effectiveness of different speech recognition engine and microphone pairings in terms of their ability to support accurate text entry under realistic mobile conditions of use. Our intent is to provide some initial empirical data derived from mobile, user-based evaluations to support technological decisions faced by developers of mobile applications that would benefit from, or require, speech-based data entry facilities.
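Speaker-independent accuracy in evaluations like this is conventionally reported via word error rate (WER): the word-level edit distance between a reference transcript and the engine's hypothesis, divided by the number of reference words. A self-contained sketch for comparing engines (this is a standard metric, not the authors' actual scoring code):

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) divided by
    the number of reference words, via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[-1][-1] / len(ref)

# One substituted word out of three reference words -> WER of 1/3
print(wer("record patient temperature", "record patient pressure"))
```

Computing the same metric per engine-microphone pairing over recordings made under mobile conditions gives the comparison the paper describes.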
Abstract:
Speech recognition technology is regarded as a key enabler for increasing the usability of applications deployed on mobile devices -- devices which are becoming increasingly prevalent in modern hospital-based healthcare. Although the use of speech recognition is not new to the hospital-based healthcare domain, its use with mobile devices has thus far been limited. This paper presents the results of a literature review we conducted in order to observe the manner in which speech recognition technology has been used in hospital-based healthcare and to gain an understanding of how this technology is being evaluated, in terms of its dependability and reliability, in healthcare settings. Our intent is that this review will help identify scope for future uses of speech recognition technologies in the healthcare domain, as well as to identify implications for the meaningful evaluation of such technologies given the specific context of use.
Abstract:
OBJECTIVE: Cochlear implantation (CI) is a standard treatment for severe-profound sensorineural hearing loss (SNHL). However, consensus has yet to be reached on its effectiveness for hearing loss caused by auditory neuropathy spectrum disorder (ANSD). This review aims to summarize and synthesize current evidence of the effectiveness of CI in improving speech recognition in children with ANSD. DESIGN: Systematic review. STUDY SAMPLE: A total of 27 studies from an initial selection of 237. RESULTS: All selected studies were observational in design, including case studies, cohort studies, and comparisons between children with ANSD and SNHL. Most children with ANSD achieved open-set speech recognition with their CI. Speech recognition ability was found to be equivalent in CI users (who previously performed poorly with hearing aids) and hearing-aid users. Outcomes following CI generally appeared similar in children with ANSD and SNHL. Assessment of study quality, however, suggested substantial methodological concerns, particularly in relation to issues of bias and confounding, limiting the robustness of any conclusions around effectiveness. CONCLUSIONS: Currently available evidence is compatible with favourable outcomes from CI in children with ANSD. However, this evidence is weak. Stronger evidence is needed to support cost-effective clinical policy and practice in this area.