51 results for auditory EEG


Relevance: 20.00%

Publisher:

Abstract:

Event-related potentials (ERPs) and other electroencephalographic (EEG) evidence show that frontal brain areas of higher- and lower-socioeconomic-status (SES) children are recruited differently during selective attention tasks. We assessed whether multiple variables related to self-regulation (perceived mental effort), emotional states (e.g., anxiety, stress) and motivational states (e.g., boredom, engagement) may co-occur or interact with frontal attentional processing, probed in two matched samples of fourteen lower-SES and higher-SES adolescents. ERP and EEG activation were measured during a task probing selective attention to sequences of tones. Pre- and post-task salivary cortisol and self-reported emotional states were also measured. At a similar behavioural performance level, the higher-SES group showed greater ERP differentiation between attended (relevant) and unattended (irrelevant) tones than the lower-SES group. EEG power analysis revealed a cross-over interaction: lower-SES adolescents showed significantly higher theta power when ignoring rather than attending to tones, whereas higher-SES adolescents showed the opposite pattern. Significant theta asymmetry differences were also found at midfrontal electrodes, indicating left hypo-activity in lower-SES adolescents. The attended vs. unattended difference in right midfrontal theta increased with individual SES rank and, independently of SES, with lower cortisol task reactivity and higher boredom. Results suggest that lower-SES children used additional compensatory resources to monitor and control response inhibition to distracters, also perceiving more mental effort, compared with their higher-SES counterparts. Nevertheless, stress, boredom and other task-related perceived states were unrelated to SES. Ruling out these presumed confounds, this study confirms that the midfrontal mechanisms responsible for the SES effects on selective attention reported previously and replicated here reflect genuine cognitive differences.
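The band-power and frontal-asymmetry measures mentioned in the abstract above can be illustrated with a minimal sketch. This is not the study's analysis pipeline; it assumes a plain FFT periodogram (rather than, e.g., Welch averaging) and a log-ratio asymmetry index, both common conventions in EEG work:

```python
import numpy as np

def band_power(signal, fs, band=(4.0, 8.0)):
    """Mean periodogram power of `signal` within a frequency band (Hz).
    Default band is theta (4-8 Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_asymmetry(left, right, fs):
    """Log-ratio asymmetry index; negative values indicate less left-side
    theta power (left hypo-activity)."""
    return np.log(band_power(left, fs)) - np.log(band_power(right, fs))

# Demo: 1 s of a 6 Hz oscillation (inside theta) vs. an 11 Hz one (outside).
fs = 250
t = np.arange(fs) / fs
theta_sig = np.sin(2 * np.pi * 6 * t)
alpha_sig = np.sin(2 * np.pi * 11 * t)
```

With these demo signals, `band_power(theta_sig, fs)` is far larger than `band_power(alpha_sig, fs)`, since only the 6 Hz component falls inside the theta band.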

Abstract:

Cognitive and neurophysiological correlates of arithmetic calculation, concepts, and applications were examined in 41 adolescents aged 12-15 years. Psychological and task-related EEG measures that correctly distinguished children who scored low vs. high (using a median split) in each arithmetic subarea were interpreted as indicative of the processes involved. Calculation was related to visual-motor sequencing, spatial visualization, theta activity measured during visual-perceptual and verbal tasks at right- and left-hemisphere locations, and right-hemisphere alpha activity measured during a verbal task. Performance on arithmetic word problems was related to spatial visualization and perception, vocabulary, and right-hemisphere alpha activity measured during a verbal task. Results suggest a complex interplay of spatial and sequential operations in arithmetic performance, consistent with processing models of lateralized brain function.

Abstract:

Before a natural sound can be recognized, an auditory signature of its source must be learned through experience. Here we used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. Results showed that repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds. The learning was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance abruptly became near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learned features were local in time. We propose that rapid sensory plasticity could explain how the auditory brain creates useful memories from the ever-changing, but sometimes repeating, acoustical world.
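A stimulus for the repetition-detection paradigm described above can be sketched in a few lines. This is an illustrative generator, not the authors' code; the frozen-noise construction (one segment tiled back-to-back) and the segment duration are assumptions:

```python
import numpy as np

def make_repeated_noise(duration_s, fs, n_repeats=2, rng=None):
    """Build a noise of total length `duration_s` seconds at sample rate `fs`
    by concatenating `n_repeats` identical copies of one frozen white-noise
    segment, so a within-trial repetition is embedded for listeners to detect."""
    rng = np.random.default_rng(rng)
    segment = rng.standard_normal(int(duration_s * fs / n_repeats))
    return np.tile(segment, n_repeats)
```

Seeding the generator (e.g., `rng=0`) makes a "reoccurring" noise reproducible across trials, mimicking the reference noises that reappeared throughout a block.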

Abstract:

Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not to any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing and with distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon.
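As an illustration of the single-trial decoding idea (not the authors' data-mining pipeline), a nearest-centroid classifier over trial feature vectors is about the simplest MVPA-style decoder; each test trial is assigned to the class whose mean training pattern it is closest to:

```python
import numpy as np

def decode_category(train_X, train_y, test_X):
    """Nearest-centroid decoder: assign each test trial (row of test_X) to the
    class whose mean training pattern is nearest in Euclidean distance."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    # Distance of every test trial to every class centroid.
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]
```

On well-separated synthetic "trials" this recovers the category of every test trial; real EEG features are far noisier, which is why per-trial accuracy sits above chance rather than near ceiling until trials are aggregated.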

Abstract:

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.

Abstract:

Objectives: A common behavioural symptom of Parkinson’s disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically valid ‘action-related’ sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue.
Methods: The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds.
Results: Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL, were observed in the PD group only when walking to, or imagining, the recorded sounds.
Conclusions: The findings show that while recordings of stepping sounds convey action information to allow PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses.

Abstract:

Human listeners seem remarkably adept at recognising acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. We then constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid serial visual presentation" (RSVP) paradigm, which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random-pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
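The two measurement tools named in the abstract above follow standard psychophysics conventions, and can be sketched minimally. Assumptions here: the raised-cosine gate is taken to be a Hann window spanning the whole snippet, and d' uses the equal-variance Gaussian yes/no formula:

```python
import numpy as np
from statistics import NormalDist

def raised_cosine_gate(snippet):
    """Gate a sound snippet with a raised-cosine (Hann) window, tapering the
    onset and offset to zero to avoid spectral splatter from abrupt edges."""
    return snippet * np.hanning(len(snippet))

def d_prime(hit_rate, fa_rate):
    """Yes/no sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, 84% hits with 16% false alarms gives d' of about 1.99; chance performance (equal hit and false-alarm rates) gives d' of 0.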

Abstract:

Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be 'decoded' from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.

Abstract:

Traditionally, audio-motor timing processes have been understood as motor output from an internal clock, the speed of which is set by heard sound pulses. In contrast, this paper proposes a more ecologically grounded approach, arguing that audio-motor processes are better characterized as performed actions on the perceived structure of auditory events. This position is explored in the context of auditory sensorimotor synchronization and continuation timing. Empirical research shows that the structure of sounds as auditory events can lead to marked differences in movement timing performance. The nature of these effects is discussed in the context of the perceived action-relevance of auditory event structure. It is proposed that different forms of sound invite or support different patterns of sensorimotor timing. Hence, the temporal information in looped auditory signals is more than just the interval durations between onsets: all metronomes are not created equal. The potential implications for auditory guides in motor performance enhancement are also described.