172 results for auditory EEG
Abstract:
In this article, we will link neuroimaging, data analysis, and intervention methods in an important psychiatric condition: auditory verbal hallucinations (AVH). The clinical and phenomenological background as well as neurophysiological findings will be covered and discussed with respect to noninvasive brain stimulation. Additionally, methods of noninvasive brain stimulation will be presented as ways to intervene with AVH. Finally, preliminary conclusions and possible future perspectives will be proposed.
Abstract:
In humans, theta band (5-7 Hz) power typically increases when performing cognitively demanding working memory (WM) tasks, and simultaneous EEG-fMRI recordings have revealed an inverse relationship between theta power and the BOLD (blood oxygen level dependent) signal in the default mode network during WM. However, synchronization also plays a fundamental role in cognitive processing, and the level of theta and higher frequency band synchronization is modulated during WM. Yet, little is known about the link between BOLD, EEG power, and EEG synchronization during WM, and how these measures develop with human brain maturation or relate to behavioral changes. We examined EEG-BOLD signal correlations from 18 young adults and 15 school-aged children for age-dependent effects during a load-modulated Sternberg WM task. Frontal EEG theta power, both load-dependent and load-independent, was significantly enhanced in children compared to adults, while adults showed stronger fMRI load effects. Children demonstrated a stronger negative correlation between global theta power and the BOLD signal in the default mode network relative to adults. Therefore, we conclude that theta power mediates the suppression of a task-irrelevant network. We further conclude that children suppress this network even more than adults, probably reflecting an increased level of task-preparedness that compensates for not yet fully mature cognitive functions, as reflected in their lower response accuracy and longer reaction times. In contrast to power, correlations between instantaneous theta global field synchronization and the BOLD signal were exclusively positive in both age groups but only significant in adults, in the frontal-parietal and posterior cingulate cortices. Furthermore, theta synchronization was weaker in children and was, in contrast to EEG power, positively correlated with response accuracy in both age groups. In summary, we conclude that theta EEG-BOLD signal correlations differ between spectral power and synchronization, and that these oppositely signed correlations, with different spatial distributions, both undergo significant neuronal development with brain maturation.
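As a rough illustration of the type of analysis described above, the sketch below computes theta-band (5-7 Hz) power with Welch's method and correlates it with a BOLD time course. It is a minimal sketch on synthetic placeholder data, not the authors' EEG-fMRI pipeline; all variable names, epoch lengths, and sampling parameters are assumptions.

```python
# Minimal sketch (not the authors' pipeline): band-limited theta power is
# computed with Welch's method and correlated with a BOLD time course.
# All array names and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

fs = 250                      # EEG sampling rate in Hz (assumed)
n_blocks = 40                 # task blocks paired with one BOLD value each (assumed)
rng = np.random.default_rng(0)

theta_power = np.empty(n_blocks)
bold = rng.standard_normal(n_blocks)          # placeholder BOLD values per block

for b in range(n_blocks):
    eeg_epoch = rng.standard_normal(fs * 2)   # 2-s EEG epoch (placeholder data)
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=fs)
    theta_band = (freqs >= 5) & (freqs <= 7)
    theta_power[b] = psd[theta_band].mean()   # mean theta (5-7 Hz) power

r, p = pearsonr(theta_power, bold)
print(f"theta power vs. BOLD: r = {r:.2f}, p = {p:.3f}")
```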
Abstract:
Human behavior and psychological functioning are motivated and guided by individual goals. Motivational incongruence refers to states of insufficient goal satisfaction and is tightly related to psychological problems and even psychopathology. In the present study, individual levels of motivational incongruence were assessed with the incongruence questionnaire (INC) in a healthy sample. In addition, multi-channel resting-state EEG was measured. Individual variations in EEG synchronization and spectral power were related to individual levels of motivational incongruence. For significant correlations, the relation to intracerebral sources of electrical brain activity was investigated with sLORETA. The results indicate that, even in a healthy sample with rather low degrees of motivational incongruence, this insufficient goal satisfaction is related to consistent changes in resting-state brain activity. Upper alpha band attenuation seems to be most indicative of increased levels of motivational incongruence. This is reflected not only in significantly reduced functional connectivity, but also in changes in the level of brain activation, as indicated by significant effects in the spectral power and LORETA analyses. Results are related to research investigating the upper alpha band and are discussed in the framework of Grawe's consistency theory.
Abstract:
Among other auditory operations, the analysis of differences in the sound levels received at the two ears is fundamental for the localization of a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons, yielding asymmetric hemispheric activity patterns for acoustic stimuli with maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological human and animal response patterns. However, their specific functional role remains unclear. Animal studies suggest that these temporal components are based on different neural networks and have specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation, and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus, the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response was contralateralized. This dissociation suggests differential roles for these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.
Abstract:
The auditory cortex is anatomically segregated into a central core and a peripheral belt region, which exhibit differences in preference for bandpassed noise and in temporal patterns of response to acoustic stimuli. While it has been shown that visual stimuli can modify response magnitude in auditory cortex, little is known about differential patterns of multisensory interactions in core and belt. Here, we used functional magnetic resonance imaging and examined the influence of a short visual stimulus presented prior to acoustic stimulation on the spatial pattern of the blood oxygen level-dependent signal response in auditory cortex. Consistent with crossmodal inhibition, the light produced a suppression of the signal response in a cortical region corresponding to the core. In the surrounding areas corresponding to the belt regions, however, we found an inverse modulation, with the signal increasing in the centrifugal direction. Our data suggest that crossmodal effects are differentially modulated according to the hierarchical core-belt organization of auditory cortex.
Abstract:
Auditory neuroscience has not tapped fMRI's full potential because of acoustic scanner noise emitted by the gradient switches of conventional echoplanar fMRI sequences. The scanner noise is pulsed, and auditory cortex is particularly sensitive to pulsed sounds. Current fMRI approaches to avoid stimulus-noise interactions are temporally inefficient. Since the sustained BOLD response to pulsed sounds decreases with repetition rate and becomes minimal with unpulsed sounds, we developed an fMRI sequence emitting continuous rather than pulsed gradient sound by implementing a novel quasi-continuous gradient switch pattern. Compared to conventional fMRI, continuous-sound fMRI reduced auditory cortex BOLD baseline and increased BOLD amplitude with graded sound stimuli, short sound events, and sounds as complex as orchestra music with preserved temporal resolution. Response in subcortical auditory nuclei was enhanced, but not the response to light in visual cortex. Finally, tonotopic mapping using continuous-sound fMRI demonstrates that enhanced functional signal-to-noise in BOLD response translates into improved spatial separability of specific sound representations.
Abstract:
The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, the distribution of spectral power, and measures of spatial synchronization were calculated from resting eyes-closed continuous EEG recordings of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forest, support vector machines (SVM) and feed-forward neural networks. Based on 10-fold cross-validation runs, it could be demonstrated that even though modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, more classical classification algorithms performed nearly equally well. Using random forest classification, a considerable sensitivity of up to 85% and a specificity of 78% were reached even for the discrimination of patients with only mild AD from controls, whereas for the comparison of moderate AD vs. controls, SVM and neural networks achieved values of 89% and 88% for sensitivity and specificity. Such a remarkable performance proves the value of these classification algorithms for clinical diagnostics.
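For readers unfamiliar with this kind of evaluation, the following sketch shows a 10-fold cross-validated comparison of a random forest and an RBF-kernel SVM on a feature matrix of EEG measures. The data are synthetic placeholders and the hyperparameters are illustrative assumptions, not the settings used in the study.

```python
# Minimal sketch of a 10-fold cross-validated classifier comparison on
# EEG-derived features; X, y, and all hyperparameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(42)
X = rng.standard_normal((161, 30))     # e.g. 161 subjects x 30 EEG features (placeholder)
y = rng.integers(0, 2, size=161)       # 0 = control, 1 = AD (placeholder labels)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = [
    ("random forest", RandomForestClassifier(n_estimators=500, random_state=0)),
    ("SVM (RBF)", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
]
for name, clf in models:
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {acc.mean():.2f} +/- {acc.std():.2f} accuracy (10-fold CV)")
```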
Abstract:
Edges are important cues defining coherent auditory objects. As a model of auditory edges, sound on- and offsets are particularly suitable for studying their neural underpinnings because they contrast a specific physical input against no physical input. The change from silence to sound, that is, onset, has been studied extensively and elicits transient neural responses bilaterally in auditory cortex. However, neural activity associated with sound onset is related not only to edge detection but also to novel afferent input. Edges at the change from sound to silence, that is, offset, are not confounded by novel physical input and thus allow neural activity associated with sound edges per se to be examined. In the first experiment, we used silent-acquisition functional magnetic resonance imaging and found that the offset of pulsed sound activates the planum temporale, superior temporal sulcus and planum polare of the right hemisphere. In the planum temporale and the superior temporal sulcus, offset response amplitudes were related to the pulse repetition rate of the preceding stimulation. In the second experiment, we found that these offset-responsive regions were also activated by single sound pulses, by the onset of sound pulse sequences and by single sound pulse omissions within sound pulse sequences. However, they were not active during sustained sound presentation. Thus, our data show that circumscribed areas in right temporal cortex are specifically involved in identifying auditory edges. This operation is crucial for translating acoustic signal time series into coherent auditory objects.
Abstract:
RATIONALE: Both psychotropic drugs and mental disorders have typical signatures in quantitative electroencephalography (EEG). Previous studies found that some psychotropic drugs had EEG effects opposite to the EEG effects of the mental disorders treated with these drugs (key-lock principle). OBJECTIVES: We performed a placebo-controlled pharmaco-EEG study on two conventional antipsychotics (chlorpromazine and haloperidol) and four atypical antipsychotics (olanzapine, perospirone, quetiapine, and risperidone) in healthy volunteers. We investigated differences between conventional and atypical drug effects and whether the drug effects were compatible with the key-lock principle. METHODS: Fourteen subjects underwent seven EEG recording sessions, one for each drug (dosage equivalent of 1 mg haloperidol). In a time-domain analysis, we quantified the EEG by identifying clusters of transiently stable EEG topographies (microstates). Frequency-domain analysis used absolute power across electrodes and the location of the center of gravity (centroid) of the spatial distribution of power in different frequency bands. RESULTS: Perospirone increased duration of a microstate class typically shortened in schizophrenics. Haloperidol increased mean microstate duration of all classes, increased alpha 1 and beta 1 power, and tended to shift the beta 1 centroid posterior. Quetiapine decreased alpha 1 power and shifted the centroid anterior in both alpha bands. Olanzapine shifted the centroid anterior in alpha 2 and beta 1. CONCLUSIONS: The increased microstate duration under perospirone and haloperidol was opposite to effects previously reported in schizophrenic patients, suggesting a key-lock mechanism. The opposite centroid changes induced by olanzapine and quetiapine compared to haloperidol might characterize the difference between conventional and atypical antipsychotics.
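The frequency-domain centroid measure mentioned above can be illustrated with a short sketch: the center of gravity of a band's power is the power-weighted mean of the electrode positions, so an anterior shift corresponds to relatively more power over frontal sites. The electrode layout and power values below are illustrative assumptions, not the study's montage.

```python
# Minimal sketch of a power "center of gravity" (centroid) computation,
# assuming per-electrode band power and 2-D electrode coordinates; the
# values below are illustrative placeholders, not the study's montage.
import numpy as np

# left-right (x) and anterior-posterior (y) coordinates of a few electrodes (assumed layout)
coords = np.array([[0.0, 0.8],    # Fz
                   [0.0, 0.0],    # Cz
                   [0.0, -0.8],   # Pz
                   [-0.6, 0.0],   # C3
                   [0.6, 0.0]])   # C4
alpha_power = np.array([2.1, 3.0, 4.5, 2.8, 2.7])   # placeholder alpha-band power per electrode

# power-weighted mean of electrode positions = spatial centroid of the band power
centroid = (alpha_power[:, None] * coords).sum(axis=0) / alpha_power.sum()
print("alpha centroid (x, y):", centroid)   # more negative y = more posterior centroid
```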
Abstract:
BACKGROUND: It is well known that there are specific peripheral activation patterns associated with the emotional valence of sounds. However, it is unclear how these effects adapt over time. The personality traits influencing these processes are also not clear. Anxiety disorders influence the autonomic activation related to emotional processing. However, personality anxiety traits have never been studied in the context of affective auditory stimuli. METHODS: Heart rate, skin conductance, zygomatic muscle activity and subjective ratings of emotional valence and arousal were recorded in healthy subjects during the presentation of pleasant, unpleasant, and neutral sounds. Recordings were repeated 1 week later to examine possible time-dependent changes related to habituation and sensitization processes. RESULTS AND CONCLUSION: There was no generalized habituation or sensitization process related to the repeated presentation of affective sounds, but rather specific adaptation processes for each physiological measure. These observations are consistent with previous studies performed with affective pictures and simple tones. Specifically, skin conductance activity showed the strongest changes over time, including habituation during the first presentation session and sensitization at the end of the second presentation session, whereas facial electromyographic activity habituated only for the neutral stimuli and heart rate did not habituate at all. Finally, we showed that personality trait anxiety influenced the orienting reaction to affective sounds, but not the adaptation processes related to the repeated presentation of these sounds.
Abstract:
In this work, we present a multichannel EEG decomposition model based on an adaptive topographic time-frequency approximation technique. It is an extension of the Matching Pursuit algorithm and is called dependency multichannel matching pursuit (DMMP). It takes into account the physiologically explainable and statistically observable topographic dependencies between the channels, namely the spatial smoothness across neighboring electrodes implied by the electric leadfield. DMMP decomposes a multichannel signal as a weighted sum of atoms from a given dictionary, where all channels are represented by exactly the same subset of a complete dictionary. The decomposition is illustrated on topographical EEG data recorded during different physiological conditions, using a complete Gabor dictionary. Further, the extension of the single-channel time-frequency distribution to a multichannel time-frequency distribution is given; this can be used for visualizing the decomposition structure of multichannel EEG. A clustering procedure applied to the topographies produced by DMMP, that is, the vectors of each atom's contribution to the signal in every channel, leads to an extremely sparse topographic decomposition of the EEG.
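A simplified sketch of the core idea, namely selecting one dictionary atom per iteration jointly for all channels, is given below. It is not the published DMMP implementation: it uses a random unit-norm dictionary in place of a Gabor dictionary, and all sizes are toy assumptions. Choosing a single atom for all channels at each step is what yields the shared atom subset and the per-atom topographies described above.

```python
# Simplified multichannel matching pursuit sketch (not the published DMMP
# code): at every iteration one atom is chosen for all channels jointly by
# maximizing the summed squared correlation, then its per-channel
# contribution is subtracted. Dictionary and signal are toy placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels, n_atoms, n_iter = 256, 8, 200, 10

# random unit-norm dictionary standing in for a Gabor dictionary
D = rng.standard_normal((n_atoms, n_samples))
D /= np.linalg.norm(D, axis=1, keepdims=True)

X = rng.standard_normal((n_channels, n_samples))   # toy multichannel EEG epoch
residual = X.copy()

for _ in range(n_iter):
    corr = residual @ D.T                    # (channels x atoms) inner products
    k = np.argmax((corr ** 2).sum(axis=0))   # atom explaining most energy across all channels
    topography = corr[:, k]                  # per-channel weights of the chosen atom
    residual -= np.outer(topography, D[k])   # subtract its contribution in every channel

explained = 1 - np.linalg.norm(residual) ** 2 / np.linalg.norm(X) ** 2
print(f"energy explained after {n_iter} atoms: {explained:.2%}")
```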
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.
Abstract:
Functional magnetic resonance imaging (fMRI) studies can provide insight into the neural correlates of hallucinations. Commonly, such studies require self-reports about the timing of the hallucination events. While many studies have found activity in higher-order sensory cortical areas, only a few have demonstrated activity of the primary auditory cortex during auditory verbal hallucinations. In this case, using self-reports as a model of brain activity may not be sensitive enough to capture all neurophysiological signals related to hallucinations. We used spatial independent component analysis (sICA) to extract the activity patterns associated with auditory verbal hallucinations in six schizophrenia patients. sICA decomposes the functional data set into a set of spatial maps without the use of any input function. The resulting activity patterns from auditory and sensorimotor components were further analyzed in a single-subject fashion using a visualization tool that allows for easy inspection of the variability of regional brain responses. We found bilateral auditory cortex activity, including Heschl's gyrus, during the hallucinations of one patient, and unilateral auditory cortex activity in two more patients. The associated time courses showed large variability in shape, amplitude, and time of onset relative to the self-reports. However, the average of the time courses during hallucinations showed a clear association with this clinical phenomenon. We suggest that detection of this activity may be facilitated by examining hallucination epochs of sufficient length, in combination with a data-driven approach.
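As a hedged illustration of this data-driven approach, the sketch below applies spatial ICA to a synthetic (time points x voxels) matrix; in practice the matrix would come from a masked 4-D fMRI volume, and the dimensions and parameters here are assumptions rather than the study's settings.

```python
# Minimal spatial ICA sketch in the spirit of the approach above, assuming
# a 2-D fMRI data matrix (time points x voxels); real data would come from
# a masked 4-D NIfTI volume. The data here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_timepoints, n_voxels, n_components = 200, 5000, 20
X = rng.standard_normal((n_timepoints, n_voxels))   # placeholder fMRI time series

# spatial ICA: the voxel dimension is treated as samples, so each component
# is a spatially independent map with an associated time course
ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
spatial_maps = ica.fit_transform(X.T).T    # (components x voxels) independent maps
time_courses = ica.mixing_                 # (time points x components) courses, which could
                                           # be compared against self-reported hallucination timings
print(spatial_maps.shape, time_courses.shape)
```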