90 results for auditory hallucinations
Abstract:
Thomas De Quincey’s terrifying oriental nightmares, reported to sensational acclaim in his Confessions of an English Opium-Eater (1821), have become a touchstone of romantic imperialism in recent studies of the literature of the period (Leask 1991; Barrell 1992; among others). De Quincey’s collocation of “all creatures, birds, beasts, reptiles, all trees and plants, usages and appearances, that are found in all tropical regions” in the hypnagogic hallucinations that characterized what he called “the pains of opium” seems to anticipate neatly Said’s theory of orientalism, whereby the orient was supplied by the west with “a mentality, a genealogy, an atmosphere,” the attitudinal basis, as he argues, for the continuing march of imperialism from the late eighteenth century. Yet, as Thomas Trautmann (1997) has pointed out, orientalist scholarship based in India and led by the influential Asiatic Society of Bengal in the late eighteenth century was extremely enthusiastic about Indian classical antiquity. This early orientalist scholarship posited ethnic, linguistic, cultural and religious links between Europe and India, while recognizing the greater antiquity of Indian civilization. This favourable attitude (which Trautmann calls “Indomania”) was overtaken in the nineteenth century by disavowal of that scholarship and repugnance (which he calls “Indophobia”), influenced by utilitarian and evangelical attitudes to colonialism. De Quincey’s lifespan covers this crucial period of change. My paper examines his evangelical upbringing and interest in biblical and orientalist scholarship to trace his anxious investment in these modes of thinking. I will suggest that the bizarre orientalist fusions of his dreams can be better understood in the context of changing attitudes to imperialism during the period. An examination of his work provides a far more dynamic understanding of the processes of orientalism than the binary model suggested by Said.
The implied transformation from imperial scholarship to imperial governance, I will suggest, is not irrelevant to a world that continues to pull apart along various lines of race and ethnicity, and it invites reflection on our own role in the academy today.
Abstract:
Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here, the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.
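The accuracy and reaction-time measures of a go/no-go task like the one above can be sketched with a toy analysis. The trial records, field layout, and numbers below are invented for illustration; they are not the study's data or pipeline.

```python
# Toy go/no-go summary: hit rate and median reaction time per target category.
# Each trial is (category, is_target, responded, rt_ms or None) -- hypothetical.
from statistics import median

trials = [
    ("voice", True, True, 350), ("voice", True, True, 370),
    ("strings", True, True, 480), ("strings", True, False, None),
    ("distractor", False, False, None), ("distractor", False, True, 520),
]

def summarize(trials, category):
    """Hit rate and median RT over the 'go' trials of one target category."""
    go = [t for t in trials if t[0] == category and t[1]]
    hits = [t for t in go if t[2]]
    hit_rate = len(hits) / len(go)
    med_rt = median(t[3] for t in hits)
    return hit_rate, med_rt

print(summarize(trials, "voice"))  # -> (1.0, 360.0)
```

In a real analysis one would also track false alarms on distractor trials, which is what distinguishes genuine selectivity from indiscriminate responding.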
Abstract:
Both embodied and symbolic accounts of conceptual organization would predict partial sharing and partial differentiation between the neural activations seen for concepts activated via different stimulus modalities. But cross-participant and cross-session variability in BOLD activity patterns makes analyses of such patterns with MVPA methods challenging. Here, we examine the effect of cross-modal and individual variation on the machine learning analysis of fMRI data recorded during a word property generation task. We presented the same set of living and non-living concepts (land mammals or work tools) to a cohort of Japanese participants in two sessions: the first using auditory presentation of spoken words; the second using visual presentation of words written in Japanese characters. Classification accuracies confirmed that these semantic categories could be detected in single trials, with within-session predictive accuracies of 80-90%. However, cross-session prediction (learning from auditory-task data to classify data from the written-word task, or vice versa) suffered a performance penalty, achieving 65-75% (still individually significant at p ≪ 0.05). We carried out several follow-on analyses to investigate the reason for this shortfall, concluding that distributional differences in time alone or in space alone could not account for it. Rather, combined spatio-temporal patterns of activity need to be identified for successful cross-session learning, suggesting that feature-selection strategies could be modified to take advantage of this.
Abstract:
“Sounds of the City” is a large-scale community project and exhibition commissioned by the Metropolitan Arts Centre (MAC) and led by artists from the Sonic Arts Research Centre (SARC), Queen’s University Belfast. Over a four-month period, the artists worked together with two intergenerational groups in Belfast with the aim of addressing specific sound qualities of places, events and stories. Themes that surfaced from this process constitute the basis for the exhibition which promotes listening as a form of intersecting daily life, identity and memory. Five installations address aural contexts ranging from Belfast’s industrial heritage to the local family home. These are shaped by present and past experiences of workshop participants at Dee Street Community Centre in East Belfast and Tar Isteach in North Belfast. The themes and contents of these installations center upon the relationship between sound and memory, sound and place, and the documentation of everyday personal auditory experience.
All materials exhibited have emerged through workshops, interviews and field-recording sessions. Workshops acted as a basis from which to inform each group about the project’s aims, methods of listening, methods of documenting sound and the wider areas of soundscape studies and acoustic ecology. They also provided a central point allowing participants to organize outside activities and share material for exhibition.
“Sounds of the City” explores the relationship between sound and community through everyday life and presents a dynamic and ever-changing soundscape that shapes Belfast’s identity.
“Sounds of the City” has been exhibited at the MAC, Belfast (2012), and at Espaço Ecco, Brasilia (2013).
Abstract:
A functional polymorphism (Val-158-Met) at the Catechol-O-methyltransferase (COMT) locus has been identified as a potential etiological factor in schizophrenia. Yet the association has not been convincingly replicated across independent samples. We hypothesized that phenotypic heterogeneity might be diluting the COMT effect. To clarify the putative association, we performed an exploratory analysis to test for association between COMT and five psychosis symptom scales, derived through factor analysis of the Operational Criteria Checklist for Psychiatric Illness. Our sample was the Irish Study of High Density Schizophrenia Families, a large collection of 268 multiplex families. This sample has previously shown a small but significant effect of the COMT Val allele in conferring risk for schizophrenia. We tested for preferential transmission of COMT alleles from parent to affected offspring (n = 749) for each of the five factor-derived scales (negative symptoms, delusions, hallucinations, mania, and depression). Significant overtransmission of the Val allele was found for the mania (P < 0.05) and depression (P = 0.01) scales. Examination of odds ratios (ORs) revealed a heterogeneous effect of COMT: it had no effect on negative symptoms but its largest impact on depression (OR = 1.4). These results suggest a modest affective vulnerability conferred by this allele in psychosis, but they will require replication.
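A transmission odds ratio of the kind reported above is, in a TDT-style design, the ratio of transmissions to non-transmissions of the risk allele from informative heterozygous parents. The counts below are invented solely to show the arithmetic that yields an OR of 1.4; they are not the study's data.

```python
# Illustrative transmission odds ratio with a log-scale 95% CI.
# Counts are hypothetical, chosen only to reproduce OR = 1.4.
from math import exp, sqrt

def transmission_or(transmitted, untransmitted):
    """OR = (risk-allele transmissions) / (non-transmissions)."""
    return transmitted / untransmitted

def ci95(transmitted, untransmitted):
    """Approximate 95% confidence interval computed on the log-OR scale."""
    se = sqrt(1 / transmitted + 1 / untransmitted)
    point = transmitted / untransmitted
    return point * exp(-1.96 * se), point * exp(1.96 * se)

print(transmission_or(140, 100))  # -> 1.4
print(ci95(140, 100))             # interval straddling the point estimate
```

An OR of 1.4 with a confidence interval excluding 1.0 would correspond to the modest but significant overtransmission the abstract describes.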
Abstract:
Managing gait disturbances in people with Parkinson’s disease is a pressing challenge, as symptoms can contribute to injury and morbidity through an increased risk of falls. While drug-based interventions have limited efficacy in alleviating gait impairments, certain non-pharmacological methods, such as cueing, can induce transient improvements in gait. The approach adopted here is to use computationally generated sounds to help guide and improve walking actions. The first method uses recordings of force data taken from the steps of a healthy adult, which in turn were used to synthesize realistic gravel-footstep sounds representing different spatio-temporal parameters of gait, such as step duration and step length. The second involves a novel method of sonifying the swing phase of gait in real time, using motion-capture data to control a sound synthesis engine. Both approaches explore how simple but rich auditory representations of action-based events can be used by people with Parkinson’s to guide and improve the quality of their walking, reducing the risk of falls and injury. Studies with Parkinson’s disease patients are reported which show positive results for both techniques in reducing step-length variability. Potential future directions for how these sound-based approaches can be used to manage gait disturbances in Parkinson’s are also discussed.
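Real-time sonification of the swing phase amounts to a parameter mapping from motion data to a sound-synthesis control. As a minimal sketch, assume (hypothetically) that normalized swing-phase progress is mapped linearly onto oscillator pitch; the frequency range and the linear mapping are illustrative choices, not the paper's design.

```python
# Minimal parameter-mapping sonification sketch: swing-phase progress -> Hz.
# The range 220-880 Hz and the linear map are illustrative assumptions.
def swing_to_frequency(progress, f_low=220.0, f_high=880.0):
    """Map normalized swing-phase progress (0=toe-off, 1=heel-strike) to Hz."""
    progress = min(max(progress, 0.0), 1.0)  # clamp noisy tracker values
    return f_low + progress * (f_high - f_low)

# Mid-swing maps to the middle of the frequency range:
print(swing_to_frequency(0.5))  # -> 550.0
```

In a working system this function would be called on each motion-capture frame, with the returned frequency driving the synthesis engine.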
Abstract:
Despite the importance of laughter in social interactions, it remains little studied in affective computing. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received almost no attention. The aim of this study is twofold: first, to investigate observers' perception of laughter states (hilarious, social, awkward, fake, and non-laughter) based on body movements alone, through their categorization of avatars animated with natural and acted motion-capture data. Significant differences in torso and limb movements were found between animations perceived as containing laughter and those perceived as non-laughter. Hilarious laughter also differed from social laughter in the amount of bending of the spine, the amount of shoulder rotation, and the amount of hand movement. The body-movement features indicative of laughter differed between sitting and standing avatar postures. Based on the positive findings of this perceptual study, the second aim is to investigate the possibility of automatically predicting the distributions of observers' ratings for the laughter states. The findings show that automated laughter recognition rates approach human rating levels, with the Random Forest method yielding the best performance.
Abstract:
Context Medical students can have difficulty in distinguishing left from right. Many infamous medical errors have occurred when a procedure has been performed on the wrong side, such as in the removal of the wrong kidney. Clinicians encounter many distractions during their work. There is limited information on how these affect performance.
Objectives Using a neuropsychological paradigm, we aim to elucidate the impacts of different types of distraction on left–right (LR) discrimination ability.
Methods Medical students were recruited to a study with four arms: (i) control arm (no distraction); (ii) auditory distraction arm (continuous ambient ward noise); (iii) cognitive distraction arm (interruptions with clinical cognitive tasks), and (iv) auditory and cognitive distraction arm. Participants’ LR discrimination ability was measured using the validated Bergen Left–Right Discrimination Test (BLRDT). Multivariate analysis of variance was used to analyse the impacts of the different forms of distraction on participants’ performance on the BLRDT. Additional analyses looked at effects of demographics on performance and correlated participants’ self-perceived LR discrimination ability and their actual performance.
Results A total of 234 students were recruited. Cognitive distraction had a greater negative impact on BLRDT performance than auditory distraction. Combined auditory and cognitive distraction had a negative impact on performance, but only in the most difficult LR task was this negative impact found to be significantly greater than that of cognitive distraction alone. There was a significant medium-sized correlation between perceived LR discrimination ability and actual overall BLRDT performance.
Conclusions Distraction has a significant impact on performance, and multifaceted approaches are required to reduce LR errors. Educationally, greater emphasis on linking theory with clinical application is required to support patient safety and human factors training in medical school curricula. Distraction has the potential to impair an individual's ability to make accurate LR decisions, and students should be trained from undergraduate level to be mindful of this.
Abstract:
Previous research has shown that Parkinson's disease (PD) patients can increase the speed of their movement when catching a moving ball compared to when reaching for a static ball (Majsak et al., 1998). A recent model proposed by Redgrave et al. (2010) explains this phenomenon with regard to the dichotomous organization of motor loops in the basal ganglia circuitry and the role of sensory micro-circuitries in the control of goal-directed actions. According to this model, external visual information that is relevant to the required movement can induce a switch from habitual control of movement toward an externally paced, goal-directed form of guidance, resulting in augmented motor performance (Bienkiewicz et al., 2013). In the current study, we investigated whether continuous acoustic information generated by an object in motion can enhance motor performance in an arm-reaching task in a similar way to that observed in the studies of Majsak et al. (1998, 2008). In addition, we explored whether the kinematic aspects of the movement are regulated in accordance with time-to-arrival information generated by the ball's motion as it approaches the catching zone. A group of 7 idiopathic PD patients (6 male, 1 female) performed a ball-catching task in which the acceleration (and hence ball velocity) was manipulated by adjusting the angle of the ramp. The type of sensory information (visual and/or auditory) specifying the ball's arrival at the catching zone was also manipulated. Our results showed that patients with PD demonstrate improved motor performance when reaching for a ball in motion compared to when it is stationary. We observed that PD patients can adjust their movement kinematics in accordance with the speed of a moving target, even when vision of the target is occluded and patients have to rely solely on auditory information. We demonstrate that the availability of dynamic temporal information is crucial for eliciting motor improvements in PD. Furthermore, these effects appear independent of the sensory modality through which the information is conveyed.
Abstract:
Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users’ laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe perception of laughter “types”. Second, to investigate observers’ perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body-movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate automatic recognition of laughter to the same level of certainty as observers’ perceptions. Results show that recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in recognition of social laughter when localized features and nonlinear models are used.
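The Random Forest classification step described above can be sketched on synthetic stand-ins for body-movement features. The feature names (spine bend, shoulder rotation, hand motion), class separations, and data are invented for illustration and do not reflect the study's actual feature set or results.

```python
# Sketch of Random Forest laughter/non-laughter classification on synthetic
# body-movement features. Data and feature semantics are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 200
# Hypothetical per-clip features: [spine_bend, shoulder_rot, hand_motion]
laughter = rng.normal([0.8, 0.6, 0.7], 0.2, (n, 3))
non_laughter = rng.normal([0.2, 0.2, 0.2], 0.2, (n, 3))
X = np.vstack([laughter, non_laughter])
y = np.array([1] * n + [0] * n)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature importances indicate which movement cues drive the prediction,
# analogous to the feature-importance analyses mentioned in the abstract.
print(clf.feature_importances_)
print(clf.predict([[0.9, 0.7, 0.8]]))  # a strongly "laughter-like" clip
```

In practice one would evaluate with held-out data (cross-validation) rather than training accuracy, and predict rating distributions rather than hard labels, as the study does.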
Abstract:
In recent years, sonification of movement has emerged as a viable method for the provision of feedback in motor learning. Despite some experimental validation of its utility, controlled trials to test the usefulness of sonification in a motor learning context are still rare. Consequently, there are no accepted conventions for dealing with its implementation. This article addresses the question of how continuous movement information should best be presented as sound to be fed back to the learner. It is proposed that to establish effective approaches to using sonification in this context, consideration must be given to the processes that underlie motor learning, in particular the nature of the perceptual information available to the learner for performing the task at hand. Although sonification has much potential in movement performance enhancement, this potential is largely unrealised as yet, in part due to the lack of a clear framework for sonification mapping: the relationship between movement and sound. By grounding mapping decisions in a firmer understanding of how perceptual information guides learning, and in an embodied cognition stance in general, it is hoped that greater advances in the use of sonification to enhance motor learning can be achieved.
Abstract:
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
Abstract:
Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification is shaped by experience as well as by acoustics.
Abstract:
Individuals with autism spectrum disorders (ASD) are reported to allocate less spontaneous attention to voices. Here, we investigated how vocal sounds are processed in ASD adults, when those sounds are attended. Participants were asked to react as fast as possible to target stimuli (either voices or strings) while ignoring distracting stimuli. Response times (RTs) were measured. Results showed that, similar to neurotypical (NT) adults, ASD adults were faster to recognize voices compared to strings. Surprisingly, ASD adults had even shorter RTs for voices than the NT adults, suggesting a faster voice recognition process. To investigate the acoustic underpinnings of this effect, we created auditory chimeras that retained only the temporal or the spectral features of voices. For the NT group, no RT advantage was found for the chimeras compared to strings: both sets of features had to be present to observe an RT advantage. However, for the ASD group, shorter RTs were observed for both chimeras. These observations indicate that the previously observed attentional deficit to voices in ASD individuals could be due to a failure to combine acoustic features, even though such features may be well represented at a sensory level.