83 results for AUDITORY


Relevance: 10.00%

Publisher:

Abstract:

Despite the importance of laughter in social interactions, it remains little studied in affective computing. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received almost no attention. The aim of this study is twofold. The first is an investigation into observers' perception of laughter states (hilarious, social, awkward, fake, and non-laughter) based on body movements alone, through their categorization of avatars animated with natural and acted motion-capture data. Significant differences in torso and limb movements were found between animations perceived as containing laughter and those perceived as non-laughter. Hilarious laughter also differed from social laughter in the amount of spine bending, shoulder rotation, and hand movement. The body movement features indicative of laughter differed between sitting and standing avatar postures. Based on the positive findings of this perceptual study, the second aim is to investigate the possibility of automatically predicting the distributions of observers' ratings for the laughter states. The findings show that the automated laughter recognition rates approach human rating levels, with the Random Forest method yielding the best performance.
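As a rough illustration of the prediction step described above, the sketch below trains a Random Forest to map body-movement features onto the distribution of observer ratings over the five laughter states. The feature set, data shapes, and targets are invented placeholders for illustration, not the study's data.

```python
# Hypothetical sketch: predicting observer rating distributions over five
# laughter states from body-movement features with a Random Forest.
# Feature counts, data, and targets are illustrative, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

n_clips, n_features = 120, 6          # e.g. spine bend, shoulder rotation, hand movement, ...
X = rng.normal(size=(n_clips, n_features))

# Target: for each animated clip, the proportion of observers choosing each of
# hilarious / social / awkward / fake / non-laughter (rows sum to 1).
raw = rng.random(size=(n_clips, 5))
Y = raw / raw.sum(axis=1, keepdims=True)

model = RandomForestRegressor(n_estimators=300, random_state=0)
Y_hat = cross_val_predict(model, X, Y, cv=5)

# Compare predicted and observed rating distributions per clip.
mae = np.abs(Y_hat - Y).mean()
print(f"mean absolute error per state proportion: {mae:.3f}")
```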

Relevance: 10.00%

Publisher:

Abstract:

Context Medical students can have difficulty in distinguishing left from right. Many infamous medical errors have occurred when a procedure has been performed on the wrong side, such as in the removal of the wrong kidney. Clinicians encounter many distractions during their work. There is limited information on how these affect performance. 
Objectives Using a neuropsychological paradigm, we aim to elucidate the impacts of different types of distraction on left–right (LR) discrimination ability. 
Methods Medical students were recruited to a study with four arms: (i) a control arm (no distraction); (ii) an auditory distraction arm (continuous ambient ward noise); (iii) a cognitive distraction arm (interruptions with clinical cognitive tasks); and (iv) a combined auditory and cognitive distraction arm. Participants’ LR discrimination ability was measured using the validated Bergen Left–Right Discrimination Test (BLRDT). Multivariate analysis of variance was used to analyse the impacts of the different forms of distraction on participants’ BLRDT performance. Additional analyses examined the effects of demographics on performance and correlated participants’ self-perceived LR discrimination ability with their actual performance.
Results A total of 234 students were recruited. Cognitive distraction had a greater negative impact on BLRDT performance than auditory distraction. Combined auditory and cognitive distraction had a negative impact on performance, but only in the most difficult LR task was this negative impact found to be significantly greater than that of cognitive distraction alone. There was a significant medium-sized correlation between perceived LR discrimination ability and actual overall BLRDT performance. 
Conclusions Distraction has a significant impact on performance, and multifaceted approaches are required to reduce LR errors. Educationally, greater emphasis on the linking of theory and clinical application is required to support patient safety and human factors training in medical school curricula. Distraction has the potential to impair an individual's ability to make accurate LR decisions, and students should be trained from undergraduate level to be mindful of this.
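A minimal sketch of the kind of multivariate analysis described in the Methods section, assuming a long-format table with one row per participant, a group column for the study arm, and BLRDT subtest scores as dependent variables (column names and simulated data are hypothetical):

```python
# Hypothetical sketch of a multivariate analysis of variance across the four
# study arms; column names and data are illustrative, not the study's dataset.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
groups = ["control", "auditory", "cognitive", "auditory_cognitive"]

df = pd.DataFrame({
    "group": rng.choice(groups, size=234),
    "blrdt_easy": rng.normal(20, 3, size=234),   # hypothetical subtest scores
    "blrdt_hard": rng.normal(15, 4, size=234),
})

# Multivariate test: do the BLRDT scores differ across distraction arms?
fit = MANOVA.from_formula("blrdt_easy + blrdt_hard ~ group", data=df)
print(fit.mv_test())
```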

Relevance: 10.00%

Publisher:

Abstract:

Previous research has shown that Parkinson's disease (PD) patients can increase the speed of their movement when catching a moving ball compared to when reaching for a static ball (Majsak et al., 1998). A recent model proposed by Redgrave et al. (2010) explains this phenomenon with regard to the dichotomic organization of motor loops in the basal ganglia circuitry and the role of sensory micro-circuitries in the control of goal-directed actions. According to this model, external visual information that is relevant to the required movement can induce a switch from habitual control of movement toward an externally paced, goal-directed form of guidance, resulting in augmented motor performance (Bienkiewicz et al., 2013). In the current study, we investigated whether continuous acoustic information generated by an object in motion can enhance motor performance in an arm-reaching task in a similar way to that observed in the studies of Majsak et al. (1998, 2008). In addition, we explored whether the kinematic aspects of the movement are regulated in accordance with time-to-arrival information generated by the ball's motion as it reaches the catching zone. A group of 7 idiopathic PD patients (6 male, 1 female) performed a ball-catching task in which the ball's acceleration (and hence its velocity) was manipulated by adjusting the angle of the ramp. The type of sensory information (visual and/or auditory) specifying the ball's arrival at the catching zone was also manipulated. Our results showed that patients with PD demonstrate improved motor performance when reaching for a ball in motion compared to a stationary one. We observed that PD patients can adjust their movement kinematics in accordance with the speed of a moving target, even if vision of the target is occluded and they have to rely solely on auditory information. We demonstrate that the availability of dynamic temporal information is crucial for eliciting motor improvements in PD. Furthermore, these effects appear independent of the sensory modality through which the information is conveyed.
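To make the ramp manipulation concrete, the sketch below computes nominal arrival times at a catching zone for a ball released from rest on ramps of different angles. It assumes idealised rolling without slipping; the ramp length and the angles are made-up illustrative values, not the experiment's parameters.

```python
# Hypothetical sketch: arrival time of a ball at the catching zone as a
# function of ramp angle. Assumes a uniform ball rolling without slipping
# (a = 5/7 * g * sin(theta)); ramp length and angles are illustrative.
import math

G = 9.81           # m/s^2
RAMP_LENGTH = 1.2  # m, release point to catching zone (assumed value)

def arrival_time(angle_deg: float) -> float:
    """Time (s) for the ball to travel RAMP_LENGTH from rest."""
    a = (5.0 / 7.0) * G * math.sin(math.radians(angle_deg))
    return math.sqrt(2.0 * RAMP_LENGTH / a)   # from s = 0.5 * a * t^2

for angle in (5, 10, 20):
    print(f"ramp angle {angle:2d} deg -> arrival in {arrival_time(angle):.2f} s")
```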

Relevance: 10.00%

Publisher:

Abstract:

Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users’ laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe perception of laughter “types”. Second, to investigate observers’ perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body-movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate automatic recognition of laughter to the same level of certainty as observers’ perceptions. Results show that the recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in the recognition of social laughter when localized features and nonlinear models are used.
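The classification and feature-importance comparison mentioned above could look roughly like the sketch below; the localized body-movement feature names and the five-way labels are stand-ins for illustration, not the study's actual feature set or data.

```python
# Hypothetical sketch: five-way laughter-state classification with a Random
# Forest, followed by an impurity-based feature-importance ranking.
# Feature names and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
feature_names = ["spine_bend", "shoulder_rot", "hand_move",
                 "torso_energy", "limb_energy", "head_motion"]
X = rng.normal(size=(200, len(feature_names)))
y = rng.choice(["hilarious", "social", "awkward", "fake", "non-laughter"], size=200)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
for name, imp in sorted(zip(feature_names, clf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:>14s}: {imp:.3f}")
```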

Relevance: 10.00%

Publisher:

Abstract:

In recent years, sonification of movement has emerged as a viable method for providing feedback in motor learning. Despite some experimental validation of its utility, controlled trials testing the usefulness of sonification in a motor learning context are still rare, and there are as yet no accepted conventions for its implementation. This article addresses the question of how continuous movement information is best presented as sound to be fed back to the learner. It is proposed that, to establish effective approaches to using sonification in this context, consideration must be given to the processes that underlie motor learning, in particular the nature of the perceptual information available to the learner for performing the task at hand. Although sonification has much potential for enhancing movement performance, this potential remains largely unrealised, in part because of the lack of a clear framework for sonification mapping: the relationship between movement and sound. By grounding mapping decisions in a firmer understanding of how perceptual information guides learning, and in an embodied cognition stance more generally, it is hoped that greater advances in the use of sonification to enhance motor learning can be achieved.
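As one concrete (and deliberately naive) example of a sonification mapping, the sketch below maps a movement velocity profile onto the pitch of a continuous tone. The velocity profile, pitch range, and other parameters are arbitrary illustrative choices, not a mapping recommended by the article.

```python
# Hypothetical sketch: a naive parameter mapping from movement velocity to the
# pitch of a continuous tone. Sample rate, pitch range, and velocity profile
# are arbitrary illustrative choices.
import numpy as np
from scipy.io import wavfile

FS = 44100                       # audio sample rate (Hz)
DURATION = 2.0                   # s
t = np.linspace(0, DURATION, int(FS * DURATION), endpoint=False)

# Fake bell-shaped velocity profile of a reaching movement (arbitrary units).
velocity = np.exp(-((t - DURATION / 2) ** 2) / 0.1)

# Map normalized velocity onto a pitch range of 220-880 Hz.
v = (velocity - velocity.min()) / (velocity.max() - velocity.min())
freq = 220.0 + v * (880.0 - 220.0)

# Synthesize by integrating instantaneous frequency into phase.
phase = 2 * np.pi * np.cumsum(freq) / FS
audio = 0.3 * np.sin(phase)

wavfile.write("sonified.wav", FS, audio.astype(np.float32))
```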

Relevance: 10.00%

Publisher:

Abstract:

Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments; these chimeras mimic the complexity of speech without being speech. We found that, while both groups showed the same overall grouping preferences, the German speakers exhibited stronger biases than the French speakers when grouping complex sequences. Sound variability reduced all participants' biases, to the point that the French group showed no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds, and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
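One standard way to build auditory chimeras of the broad kind mentioned above is to combine the envelope of one sound with the temporal fine structure of another via the Hilbert transform. The single-band sketch below is a simplification: the stimuli here are synthetic tones standing in for instrument recordings, and published chimera methods typically operate within multiple frequency bands.

```python
# Hypothetical single-band sketch: an auditory chimera combining the Hilbert
# envelope of sound A with the temporal fine structure of sound B.
# Real chimera construction is usually done per frequency band.
import numpy as np
from scipy.signal import hilbert

FS = 22050
t = np.linspace(0, 1.0, FS, endpoint=False)

# Stand-ins for two instrument notes: amplitude-modulated tones.
sound_a = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)                   # decaying "pluck"
sound_b = np.sin(2 * np.pi * 262 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))

env_a = np.abs(hilbert(sound_a))          # envelope of A
fine_b = np.cos(np.angle(hilbert(sound_b)))   # fine structure of B

chimera = env_a * fine_b                  # A's envelope carried on B's fine structure
```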

Relevance: 10.00%

Publisher:

Abstract:

Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics.
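A toy version of the modeling idea, in which the EEG waveform is approximated as standard N1-P2 evoked responses triggered at learned feature times, could look like the sketch below. The template shape, latencies, and trigger times are illustrative assumptions, not fitted values from the study.

```python
# Hypothetical sketch: model an EEG trace as a train of N1-P2 evoked responses
# triggered at assumed perceptual-feature times. Template and latencies are
# illustrative, not fitted parameters from the study.
import numpy as np

FS = 250                                    # EEG sampling rate (Hz)
t = np.arange(0, 0.5, 1 / FS)               # 500 ms response template

def gauss(mu, sigma):
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# Canonical N1 (negative, ~100 ms) followed by P2 (positive, ~200 ms).
template = -1.0 * gauss(0.10, 0.02) + 0.8 * gauss(0.20, 0.03)

duration = 4.0                              # s of simulated EEG
n = int(duration * FS)
triggers = np.zeros(n)
for onset in (0.5, 1.5, 2.5, 3.5):          # assumed feature times within the noise
    triggers[int(onset * FS)] = 1.0

# Predicted waveform = trigger train convolved with the evoked-response template.
predicted_eeg = np.convolve(triggers, template)[:n]
```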

Relevance: 10.00%

Publisher:

Abstract:

Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices than strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both types of chimera. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.
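A rough sketch of how "temporal-only" and "spectral-only" versions of a sound can be constructed: the temporal variant keeps the broadband Hilbert envelope imposed on a noise carrier, while the spectral variant keeps the magnitude spectrum with randomized phases. This is a generic recipe for illustration, not necessarily the exact chimera procedure used in the study, and the "voice" below is a synthetic stand-in.

```python
# Hypothetical sketch: stimuli retaining only temporal (envelope) or only
# spectral (magnitude-spectrum) features of a source sound. Generic recipe
# for illustration; not necessarily the study's exact procedure.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
FS = 22050
t = np.linspace(0, 1.0, FS, endpoint=False)

# Stand-in for a voice recording: a harmonic tone with a slow amplitude contour.
voice = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
voice *= 0.5 + 0.5 * np.sin(2 * np.pi * 3 * t)

# Temporal-only: broadband envelope of the source imposed on white noise.
envelope = np.abs(hilbert(voice))
temporal_only = envelope * rng.standard_normal(len(voice))

# Spectral-only: keep the magnitude spectrum, randomize the phases.
spectrum = np.fft.rfft(voice)
random_phase = np.exp(1j * rng.uniform(0, 2 * np.pi, len(spectrum)))
spectral_only = np.fft.irfft(np.abs(spectrum) * random_phase, n=len(voice))
```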