83 results for AUDITORY
Abstract:
Before a natural sound can be recognized, an auditory signature of its source must be learned through experience. Here we used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. Results showed that repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds. The learning was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance became abruptly near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learned features were local in time. We propose that rapid sensory plasticity could explain how the auditory brain creates useful memories from the ever-changing, but sometimes repeating, acoustical world. © 2010 Elsevier Inc.
Abstract:
Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.
Abstract:
Objectives: A common behavioural symptom of Parkinson’s disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid ‘action-related’ sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue.
Methods: The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds.
Results: Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were only observed in the PD group when walking to, or imagining, the recorded sounds.
Conclusions: The findings show that while recordings of stepping sounds convey action information to allow PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses.
Abstract:
Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
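The sensitivity measure d' reported above comes from standard signal detection theory for a yes/no task: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of that computation, using the Python standard library; the rates below are illustrative, not values from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index for a yes/no task: z(H) - z(FA)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative rates only: symmetric hits and false alarms
# around 0.5 give d' just under 1 (near the d' = 1 criterion
# reported for durations of 8 ms and longer).
print(round(d_prime(0.69, 0.31), 2))  # → 0.99
```

Chance performance (hit rate equal to false-alarm rate) gives d' = 0, which is why "above chance" in the abstract corresponds to d' reliably greater than zero.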
Abstract:
Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be 'decoded' from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
Abstract:
Traditionally, audio-motor timing processes have been understood as motor output from an internal clock, the speed of which is set by heard sound pulses. In contrast, this paper proposes a more ecologically-grounded approach, arguing that audio-motor processes are better characterized as performed actions on the perceived structure of auditory events. This position is explored in the context of auditory sensorimotor synchronization and continuation timing. Empirical research shows that the structure of sounds as auditory events can lead to marked differences in movement timing performance. The nature of these effects is discussed in the context of perceived action-relevance of auditory event structure. It is proposed that different forms of sound invite or support different patterns of sensorimotor timing. Hence, the temporal information in looped auditory signals is more than just the interval durations between onsets: all metronomes are not created equal. The potential implications for auditory guides in motor performance enhancement are also described.
Abstract:
The comparator account holds that processes of motor prediction contribute to the sense of agency by attenuating incoming sensory information and that disruptions to this process contribute to misattributions of agency in schizophrenia. Over the last 25 years this simple and powerful model has gained widespread support not only as it relates to bodily actions but also as an account of misattributions of agency for inner speech, potentially explaining the etiology of auditory verbal hallucination (AVH). In this paper we provide a detailed analysis of the traditional comparator account for inner speech, pointing out serious problems with the specification of inner speech on which it is based and highlighting inconsistencies in the interpretation of the electrophysiological evidence commonly cited in its favor. In light of these analyses we propose a new comparator account of misattributed inner speech. The new account follows leading models of motor imagery in proposing that inner speech is not attenuated by motor prediction, but rather derived directly from it. We describe how failures of motor prediction would therefore directly affect the phenomenology of inner speech and trigger a mismatch in the comparison between motor prediction and motor intention, contributing to abnormal feelings of agency. We argue that the new account fits with the emerging phenomenological evidence that AVHs are both distinct from ordinary inner speech and heterogeneous. Finally, we explore the possibility that the new comparator account may extend to explain disruptions across a range of imagistic modalities, and outline avenues for future research.
Abstract:
We live in a richly structured auditory environment. From the sounds of cars charging towards us on the street to the sounds of music filling a dancehall, sounds like these are generally seen as being instances of things we hear but can also be understood as opportunities for action. In some circumstances, the sound of a car approaching towards us can provide critical information for the avoidance of harm. In the context of a concert venue, sociocultural practices like music can equally afford coordinated activities of movement, such as dancing or music making. Despite how evident the behavioral effects of sound are in our everyday experience, they have been sparsely accounted for within the field of psychology. Instead, most theories of auditory perception have been more concerned with understanding how sounds are passively processed and represented or how they convey information about the world, neglecting how this information can be used for anything. Here, we argue against these previous rationalizations, suggesting instead that information is instantiated through use and, therefore, is an emergent effect of a perceiver’s interaction with their environment. Drawing on theory from psychology, philosophy and anthropology, we contend that by thinking of sounds as materials, theorists and researchers alike can get to grips with the vast array of auditory affordances that we purposefully bring into use when interacting with the environment.
Abstract:
Gait disturbances are a common feature of Parkinson’s disease, one of the most severe being freezing of gait. Sensory cueing is a common method used to facilitate stepping in people with Parkinson’s. Recent work has shown that, compared to walking to a metronome, Parkinson’s patients without freezing of gait (nFOG) showed reduced gait variability when imitating recorded sounds of footsteps made on gravel. However, it is not known if these benefits are realised through the continuity of the acoustic information or the action-relevance. Furthermore, no study has examined if these benefits extend to PD with freezing of gait. We prepared four different auditory cues (varying in action-relevance and acoustic continuity) and asked 19 Parkinson’s patients (10 nFOG, 9 with freezing of gait (FOG)) to step in place to each cue. Results showed a superiority of action-relevant cues (regardless of cue-continuity) for inducing reductions in Step coefficient of variation (CV). Acoustic continuity was associated with a significant reduction in Swing CV. Neither cue-continuity nor action-relevance was independently sufficient to increase the time spent stepping before freezing. However, combining both attributes in the same cue did yield significant improvements. This study demonstrates the potential of using action-sounds as sensory cues for Parkinson’s patients with freezing of gait. We suggest that the improvements shown might be considered audio-motor ‘priming’ (i.e., listening to the sounds of footsteps will engage sensorimotor circuitry relevant to the production of that same action, thus effectively bypassing the defective basal ganglia).
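The gait-variability outcomes above (Step CV, Swing CV) are coefficients of variation: the standard deviation of a stride parameter expressed as a percentage of its mean, so that steadier stepping yields a lower value. A minimal sketch of the measure, with hypothetical step durations rather than data from the study:

```python
from statistics import mean, stdev

def coefficient_of_variation(samples: list[float]) -> float:
    """CV as a percentage: (sample SD / mean) * 100."""
    return stdev(samples) / mean(samples) * 100.0

# Hypothetical step durations in seconds; a lower CV indicates
# the steadier stepping that action-relevant cues promoted.
steady = [0.52, 0.50, 0.51, 0.49, 0.50]
variable = [0.62, 0.44, 0.55, 0.38, 0.58]
print(coefficient_of_variation(steady) < coefficient_of_variation(variable))  # → True
```

Because the CV is normalised by the mean, it lets variability be compared across patients who walk at different overall cadences.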
Abstract:
Anyone who has ever played a musical instrument will certify the development of a particular type of relationship between the instrument and the performer. This relationship goes beyond a convenient coupling that is optimized for sound production. Every musical instrument defines ways in which to be touched, felt, activated. Music performance is dependent on bodily involvement that goes beyond the auditory and the sense of hearing. This article investigates the role of haptic sensation in the context of the performer-instrument relationship and draws on the writings of Georges Bataille to illuminate a discussion of the erotic in performance.
Abstract:
In this paper, we propose a theoretical framework for the design of tangible interfaces for musical expression. The main insight for the proposed approach is the importance and utility of familiar sensorimotor experiences for the creation of engaging and playable new musical instruments. In particular, we suggest exploiting the commonalities between different natural interactions by varying the auditory response or tactile details of the instrument within certain limits. Using this principle, devices for classes of sounds such as coarse grain collision interactions or friction interactions can be designed. The designs we propose retain the familiar tactile aspect of the interaction so that the performer can take advantage of tacit knowledge gained through experiences with such phenomena in the real world.
Abstract:
Latent inhibition (LI) is a measure of reduced learning about a stimulus to which there has been prior exposure without any consequence. It therefore requires a comparison between a pre-exposed (PE) and a non-pre-exposed (NPE) condition. Since, in animals, LI is disrupted by amphetamines and enhanced by antipsychotics, LI disruption has been proposed as a measure of the characteristic attentional deficit in schizophrenia: the inability to ignore irrelevant stimuli. The findings in humans are, however, inconsistent. In particular, a recent investigation suggested that since haloperidol disrupted LI in healthy volunteers, and LI was normal in non-medicated patients with schizophrenia, the previous findings in schizophrenic patients were entirely due to the negative effects of their medication on LI (Williams et al., 1998). We conducted two studies of antipsychotic drug effects on auditory LI using a within-subject, parallel group design in healthy volunteers. In the first of these, single doses of haloperidol (1 mg i.v.) were compared with paroxetine (20 mg p.o.) and placebo, and in the second, chlorpromazine (100 mg p.o.) was compared with lorazepam (2 mg p.o.) and placebo. Eye movements, neuropsychological test performance (spatial working memory (SWM), Tower of London and intra/extra dimensional shift, from the CANTAB test battery) and visual analogue rating scales were also included as other measures of attention and frontal lobe function. Haloperidol was associated with a non-significant reduction in LI scores, and dysphoria/akathisia (Barnes Akathisia Rating Scale) in three-quarters of the subjects. The LI finding may be explained by increased distractibility, which was indicated by an increase in antisaccade directional errors in this group. In contrast, LI was significantly increased by chlorpromazine but not by an equally sedative dose of lorazepam (both drugs causing marked decreases in peak saccadic velocity).
Paroxetine had no effect on LI, eye movements or CANTAB neuropsychological test performance. Haloperidol was associated with impaired SWM, which correlated with the degree of dysphoria/akathisia, but no other drug effects on CANTAB measures were detected. We conclude that the effect of antipsychotics on LI is both modality and pharmacologically dependent and that further research using a wider range of antipsychotic compounds is necessary to clarify the cognitive effects of these drugs, and to determine whether there are important differences between them.
Abstract:
The authors investigated how the intention to passively perform a behavior and the intention to persist with a behavior impact upon the spatial and temporal properties of bimanual coordination. Participants (N = 30) were asked to perform a bimanual coordination task that demanded the continuous rhythmic extension-flexion of the wrists. The frequency of movement was scaled by an auditory metronome beat from 1.5 Hz, increasing to 3.25 Hz in 0.25-Hz increments. The task was further defined by the requirement that the movements be performed initially in a prescribed pattern of coordination (in-phase or antiphase) while the participants assumed one of two different intentional states: stay with the prescribed pattern should it become unstable or do not intervene should the pattern begin to change. Transitions away from the initially prescribed pattern were observed only in trials conducted in the antiphase mode of coordination. The time at which the antiphase pattern of coordination became unstable was not found to be influenced by the intentional state. In addition, the do-not-intervene set led to a switch to an in-phase pattern of coordination whereas the stay set led to phase wandering. Those findings are discussed within the framework of a dynamic account of bimanual coordination.
Abstract:
An experiment was performed to characterise the movement kinematics and the electromyogram (EMG) during rhythmic voluntary flexion and extension of the wrist against different compliant (elastic-viscous-inertial) loads. Three levels of each type of load, and an unloaded condition, were employed. The movements were paced at a frequency of 1 Hz by an auditory metronome, and visual feedback of wrist displacement in relation to a target amplitude of 100° was provided. Electromyographic recordings were obtained from flexor carpi radialis (FCR) and extensor carpi radialis brevis (ECR). The movement profiles generated in the ten experimental conditions were indistinguishable, indicating that the CNS was able to compensate completely for the imposed changes in the task dynamics. When the level of viscous load was elevated, this compensation took the form of an increase in the rate of initial rise of the flexor and the extensor EMG burst. In response to increases in inertial load, the flexor and extensor EMG bursts commenced and terminated earlier in the movement cycle, and tended to be of greater duration. When the movements were performed in opposition to an elastic load, both the onset and offset of EMG activity occurred later than in the unloaded condition. There was also a net reduction in extensor burst duration with increases in elastic load, and an increase in the rate of initial rise of the extensor burst. Less pronounced alterations in the rate of initial rise of the flexor EMG burst were also observed. In all instances, increases in the magnitude of the external load led to elevations in the overall level of muscle activation. These data reveal that the elements of the central command that are modified in response to the imposition of a compliant load are contingent, not only upon the magnitude, but also upon the character of the load.