883 results for auditory EEG
Abstract:
Pavlovian fear conditioning is an evolutionarily conserved and extensively studied form of associative learning and memory. In mammals, the lateral amygdala (LA) is an essential locus for Pavlovian fear learning and memory. Despite significant progress in unraveling the cellular mechanisms responsible for fear conditioning, very little is known about the anatomical organization of neurons encoding fear conditioning in the LA. One key question is how fear conditioning to different sensory stimuli is organized in LA neuronal ensembles. Here we show that Pavlovian fear conditioning, formed through either the auditory or the visual sensory modality, activates a similar density of LA neurons expressing a learning-induced phosphorylated extracellular signal-regulated kinase (p-ERK1/2). While the size of the neuron population specific to either memory was similar, the anatomical distribution differed. Several discrete sites in the LA contained a small but significant number of p-ERK1/2-expressing neurons specific to either sensory modality. These sites were anatomically localized to different levels of the longitudinal plane and were independent of both memory strength and the relative size of the activated neuronal population, suggesting that some portion of the memory trace for auditory and visually cued fear conditioning is allocated differently in the LA. Presenting the visual stimulus by itself did not activate the same p-ERK1/2 neuron density or pattern, confirming that the novelty of the light alone cannot account for the specific pattern of activated neurons after visual fear conditioning. Together, these findings reveal an anatomical distribution of visual and auditory fear conditioning at the level of neuronal ensembles in the LA.
Abstract:
Weta possess typical Ensifera ears. Each ear comprises three functional parts: two equally sized tympanal membranes, an underlying system of modified tracheal chambers, and the auditory sensory organ, the crista acustica. This organ sits within an enclosed fluid-filled channel, previously presumed to contain hemolymph. The role this channel plays in insect hearing is unknown. We discovered that the fluid within the channel is not in fact hemolymph, but a medium composed principally of a new class of lipid. Three-dimensional imaging of this lipid channel revealed a previously undescribed tissue structure within the channel, which we refer to as the olivarius organ. Investigations into the function of the olivarius reveal de novo lipid synthesis, indicating that it produces these lipids in situ from acetate. The auditory role of the lipid channel was investigated using laser Doppler vibrometry of the tympanal membrane, which shows that the displacement of the membrane is significantly increased when the lipid is removed from the auditory system. Neural sensitivity of the system, however, decreased upon removal of the lipid, a surprising result considering that in a typical auditory system mechanical and auditory sensitivity are positively correlated. These two results, coupled with 3D modelling of the auditory system, lead us to hypothesize a model for weta audition that relies strongly on the presence of the lipid channel. This is the first instance of lipids being associated with an auditory system outside of the odontocete cetaceans, demonstrating convergence in the use of lipids for hearing.
Abstract:
Kiwi are rare and strictly protected birds of iconic status in New Zealand. Yet, perhaps due to their unusual, nocturnal lifestyle, surprisingly little is known about their behaviour or physiology. In the present study, we exploited known correlations between morphology and physiology in the avian inner ear and brainstem to predict the frequency range of best hearing in the North Island brown kiwi. The mechanosensitive hair bundles of the sensory hair cells in the basilar papilla showed the typical change from tall bundles with few stereovilli to short bundles with many stereovilli along the apical-to-basal tonotopic axis. In contrast to most birds, however, the change was considerably less in the basal half of the epithelium. Dendritic lengths in the brainstem nucleus laminaris also showed the typical change along the tonotopic axis. However, as in the basilar papilla, the change was much less pronounced in the presumed high-frequency regions. Together, these morphological data suggest a fovea-like overrepresentation of a narrow high-frequency band in kiwi. Based on known correlations of hair-cell microanatomy and physiological responses in other birds, a specific prediction for the frequency representation along the basilar papilla of the kiwi was derived. The predicted overrepresentation of approximately 4-6 kHz matches potentially salient frequency bands of kiwi vocalisations and may thus be an adaptation to a nocturnal lifestyle in which auditory communication plays a dominant role.
Abstract:
Feedforward inhibition deficits have been consistently demonstrated in a range of neuropsychiatric conditions using prepulse inhibition (PPI) of the acoustic startle eye-blink reflex to assess sensorimotor gating. While PPI can be recorded in acutely decerebrated rats, behavioural, pharmacological and psychophysiological studies suggest the involvement of a complex neural network extending from brainstem nuclei to higher-order cortical areas. The current functional magnetic resonance imaging study investigated the neural network underlying PPI and its association with electromyographically (EMG) recorded PPI of the acoustic startle eye-blink reflex in 16 healthy volunteers. A sparse imaging design was employed to model signal changes in blood oxygenation level-dependent (BOLD) responses to acoustic startle probes that were preceded by a prepulse at 120 ms or 480 ms stimulus onset asynchrony, or presented without a prepulse. Sensorimotor gating was confirmed by EMG for the 120-ms prepulse condition, while startle responses in the 480-ms prepulse condition did not differ from startle alone. Multiple regression analysis of BOLD contrasts identified activation in the pons, thalamus, caudate nuclei and left angular gyrus, and bilaterally in the anterior cingulate, associated with EMG-recorded sensorimotor gating. Planned contrasts confirmed increased pons activation for the startle-alone vs the 120-ms prepulse condition, while increased anterior superior frontal gyrus activation was confirmed for the reverse contrast. Our findings are consistent with a primary pontine circuitry of sensorimotor gating that interconnects with inferior parietal, superior temporal, frontal and prefrontal cortices via the thalamus and striatum. PPI processes in the prefrontal, frontal and superior temporal cortex were functionally distinct from sensorimotor gating.
Abstract:
The neural basis of Pavlovian fear conditioning is well understood and depends upon neural processes within the amygdala. Stress is known to play a role in the modulation of fear-related behavior, including Pavlovian fear conditioning. Chronic restraint stress has been shown to enhance fear conditioning to discrete and contextual stimuli; however, the time course and extent of restraint essential for this modulation of fear learning remain unclear. Thus, we tested the extent to which a single exposure to 1 hr of restraint would alter subsequent auditory fear conditioning in rats.
Abstract:
Naming an object entails a number of processing stages, including retrieval of a target lexical concept and encoding of its phonological word form. We investigated these stages using the picture-word interference task in an fMRI experiment. Participants named target pictures in the presence of auditorily presented semantically related, phonologically related, or unrelated distractor words, or in isolation. We observed BOLD signal changes in left-hemisphere regions associated with lexical-conceptual and phonological processing, including the mid-to-posterior lateral temporal cortex. However, these BOLD responses manifested as signal reductions for all distractor conditions relative to naming alone. Compared with unrelated words, phonologically related distractors showed further signal reductions, whereas only the pars orbitalis of the left inferior frontal cortex showed a selective reduction in response in the semantic condition. We interpret these findings as indicating that the word forms of lexical competitors are phonologically encoded and that competition during lexical selection is reduced by phonologically related distractors. Since the extended nature of auditory presentation requires a large portion of a word to be presented before its meaning is accessed, we attribute the BOLD signal reductions observed for semantically related and unrelated words to lateral inhibition mechanisms engaged after target name selection has occurred, as has been proposed in some production models.
Abstract:
Companies such as NeuroSky and Emotiv Systems are selling non-medical EEG devices for human-computer interaction. These devices are significantly more affordable than their medical counterparts and are mainly used to measure levels of engagement, focus, relaxation and stress. This information is sought after for marketing research and games. However, these EEG devices have the potential to enable users to interact with their surrounding environment using thoughts only, without activating any muscles. In this paper, we present preliminary results demonstrating that, despite reduced voltage and time sensitivity compared to medical-grade EEG systems, the signal quality of the Emotiv EPOC neuroheadset is sufficiently good to allow discrimination between imaging events. We collected streams of raw EEG data and trained different types of classifiers to discriminate between three states (rest and two imaging events). We achieved a generalisation error of less than 2% for two types of non-linear classifiers.
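The abstract does not name the features or classifiers that were used. As a minimal sketch only, the following Python shows one plausible three-state pipeline: band-power features from 14-channel EPOC epochs fed to an RBF-kernel SVM. The band edges, the 128 Hz sampling rate, the feature choice and the classifier are all assumptions, not the authors' method.

import numpy as np
from scipy import signal
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_power_features(epochs, fs=128.0, bands=((4, 8), (8, 13), (13, 30))):
    # epochs: (n_trials, n_channels, n_samples); per-channel band power via Welch's method.
    # Band edges and the 128 Hz EPOC rate are illustrative assumptions.
    feats = []
    for trial in epochs:
        freqs, psd = signal.welch(trial, fs=fs, axis=-1)
        feats.append(np.concatenate(
            [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in bands]))
    return np.array(feats)

# X_raw: (n_trials, 14, n_samples) raw EPOC epochs; y: labels for the three
# states (rest and two imaging events). Both are placeholders, not the authors' data.
# X = band_power_features(X_raw)
# clf = SVC(kernel="rbf", C=1.0, gamma="scale")        # one non-linear classifier
# scores = cross_val_score(clf, X, y, cv=5)
# print("estimated generalisation error:", 1.0 - scores.mean())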
Abstract:
Melancholic depressive patients referred for ECT were randomized to receive either a low-dose (n = 20) or a high-dose (n = 20) stimulus applied bifrontotemporally. The two stimulus groups were comparable on the clinical variables. The EEG seizure was recorded on two channels (right and left frontal), digitized, coded and analyzed offline without knowledge of the ECT parameters. EEG seizure duration was comparable in the two stimulus groups (high dose and low dose). A new composite measure, the Strength-Symmetry Index (SSI), based on the strength and symmetry of the seizure EEG, was computed using fractal geometry. The SSI of the early seizure was higher in the high-dose than in the low-dose ECT group. In a stepwise logistic regression model, this variable yielded 65% correct classification of high-dose and low-dose ECT seizures.
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, realized by a system of software filters and random number generators. The model represents neither the neurological mechanisms responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, enabling at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
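A minimal Python sketch of the filtered-noise idea described above: white-noise sources are band-pass filtered into the accepted EEG bands, scaled, and summed into a "stationary EEG". The Butterworth filters, band edges, gains and sampling rate are illustrative assumptions standing in for the rational transfer functions of Zetterberg's model, which the abstract does not give.

import numpy as np
from scipy import signal

def simulate_stationary_eeg(duration_s=25.0, fs=256.0, seed=0):
    # Illustrative bands and gains: (low Hz, high Hz, relative gain) - assumptions.
    bands = {"delta": (0.5, 4.0, 1.0),
             "alpha": (8.0, 13.0, 0.8),
             "beta": (13.0, 30.0, 0.4)}
    rng = np.random.default_rng(seed)       # deterministic: same parameters, same output
    n = int(duration_s * fs)
    eeg = np.zeros(n)
    for lo, hi, gain in bands.values():
        noise = rng.standard_normal(n)                        # random number source
        b, a = signal.butter(4, [lo, hi], btype="bandpass", fs=fs)
        eeg += gain * signal.lfilter(b, a, noise)             # filtered, scaled band
    return eeg

# eeg = simulate_stationary_eeg()   # 25 s of simulated 'stationary EEG' at 256 Hz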
Abstract:
In this paper, we present an approach to estimating the fractal complexity of discrete-time signal waveforms based on computing the area bounded by the sample points of the signal at different time resolutions. The slope of the best straight-line fit to the graph of log(A(r_k)/r_k^2) versus log(1/r_k) is estimated, where A(r_k) is the area computed at time resolution r_k. The slope quantifies the complexity of the signal and is taken as an estimate of the fractal dimension (FD). The proposed approach is used to estimate the fractal dimension of parametric fractal signals with known fractal dimensions, and the method gives accurate results. The estimation accuracy of the method is compared with that of Higuchi's and Sevcik's methods. The proposed method gives more accurate results than Sevcik's method, and results comparable to those of Higuchi's method. The practical application of the complexity measure in detecting changes in signal complexity is discussed using real sleep electroencephalogram recordings from eight subjects. The FD-based approach shows good performance in discriminating different stages of sleep.
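A minimal Python sketch of the slope-fitting step. Only the log(A(r_k)/r_k^2) versus log(1/r_k) regression follows the abstract directly; the trapezoidal construction of A(r_k) from the coarsened waveform is an assumed placeholder, since the abstract does not define exactly how the bounded area is computed.

import numpy as np

def area_at_resolution(x, rk, dt=1.0):
    # Area bounded by the sample points when the signal is viewed at time
    # resolution rk (every rk-th sample).  Trapezoidal area of the rectified
    # coarsened waveform is an assumed construction, not the paper's definition.
    xs = np.abs(np.asarray(x, dtype=float)[::rk])
    return np.sum((xs[:-1] + xs[1:]) * (rk * dt) / 2.0)

def fractal_dimension(x, resolutions=(1, 2, 4, 8, 16), dt=1.0):
    # FD estimate: slope of the best straight-line fit to
    # log(A(r_k)/r_k^2) versus log(1/r_k), as stated in the abstract.
    rks = np.asarray(resolutions, dtype=float)
    areas = np.array([area_at_resolution(x, int(rk), dt) for rk in rks])
    slope, _ = np.polyfit(np.log(1.0 / rks), np.log(areas / rks**2), 1)
    return slope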
Abstract:
Comprehension of a complex acoustic signal - speech - is vital for human communication, with numerous brain processes required to convert the acoustics into an intelligible message. In four studies in the present thesis, cortical correlates for different stages of speech processing in a mature linguistic system of adults were investigated. In two further studies, developmental aspects of cortical specialisation and its plasticity in adults were examined. In the present studies, electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings of the mismatch negativity (MMN) response elicited by changes in repetitive unattended auditory events and the phonological mismatch negativity (PMN) response elicited by unexpected speech sounds in attended speech inputs served as the main indicators of cortical processes. Changes in speech sounds elicited the MMNm, the magnetic equivalent of the electric MMN, that differed in generator loci and strength from those elicited by comparable changes in non-speech sounds, suggesting intra- and interhemispheric specialisation in the processing of speech and non-speech sounds at an early automatic processing level. This neuronal specialisation for the mother tongue was also reflected in the more efficient formation of stimulus representations in auditory sensory memory for typical native-language speech sounds compared with those formed for unfamiliar, non-prototype speech sounds and simple tones. Further, adding a speech or non-speech sound context to syllable changes was found to modulate the MMNm strength differently in the left and right hemispheres. Following the acoustic-phonetic processing of speech input, phonological effort related to the selection of possible lexical (word) candidates was linked with distinct left-hemisphere neuronal populations. In summary, the results suggest functional specialisation in the neuronal substrates underlying different levels of speech processing. Subsequently, plasticity of the brain's mature linguistic system was investigated in adults, in whom representations for an aurally-mediated communication system, Morse code, were found to develop within the same hemisphere where representations for the native-language speech sounds were already located. Finally, recording and localization of the MMNm response to changes in speech sounds was successfully accomplished in newborn infants, encouraging future MEG investigations on, for example, the state of neuronal specialisation at birth.
Abstract:
Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing with either precise temporal resolution (EEG and MEG) or precise spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG in combination with behavioral techniques were used, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on the MMN and P3a, whereas the noise delayed and suppressed the N1 and exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working-memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature-coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.