981 results for Auditory cortex


Relevance:

70.00%

Publisher:

Abstract:

Syntax denotes a rule system that allows one to predict the sequencing of communication signals. Despite its significance for both human speech processing and animal acoustic communication, the representation of syntactic structure in the mammalian brain has not been studied electrophysiologically at the single-unit level. In the search for a neuronal correlate for syntax, we used playback of natural and temporally destructured complex species-specific communication calls—so-called composites—while recording extracellularly from neurons in a physiologically well defined area (the FM–FM area) of the mustached bat’s auditory cortex. Even though this area is known to be involved in the processing of target distance information for echolocation, we found that units in the FM–FM area were highly responsive to composites. The finding that neuronal responses were strongly affected by manipulation in the time domain of the natural composite structure lends support to the hypothesis that syntax processing in mammals occurs at least at the level of the nonprimary auditory cortex.
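
The key contrast in this design, responses to natural composites versus the same call elements played in a temporally destructured order, reduces to a per-unit comparison of evoked spike counts. A minimal sketch of such a comparison is given below; the response window, trial structure, and the independent-samples t-test are illustrative assumptions rather than the authors' actual analysis.

```python
import numpy as np
from scipy import stats

def spike_count(spike_times, t_start, t_stop):
    """Count spikes falling inside the response window [t_start, t_stop) (seconds)."""
    spike_times = np.asarray(spike_times)
    return int(np.sum((spike_times >= t_start) & (spike_times < t_stop)))

def compare_composite_responses(natural_trials, destructured_trials, window=(0.0, 0.5)):
    """Compare spike counts evoked by natural vs. temporally destructured composites.

    natural_trials / destructured_trials: lists of spike-time arrays (seconds),
    one per trial, aligned to stimulus onset. Returns mean counts per condition
    and an independent-samples t-test on the per-trial counts.
    """
    nat = np.array([spike_count(t, *window) for t in natural_trials], dtype=float)
    des = np.array([spike_count(t, *window) for t in destructured_trials], dtype=float)
    t_stat, p_val = stats.ttest_ind(nat, des)
    return {"mean_natural": nat.mean(), "mean_destructured": des.mean(),
            "t": t_stat, "p": p_val}
```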

Relevance:

70.00%

Publisher:

Abstract:

Auditory conditioning (associative learning) causes reorganization of the cochleotopic (frequency) maps of the primary auditory cortex (AI) and the inferior colliculus. Focal electric stimulation of the AI also evokes essentially the same cortical and collicular reorganization as that caused by conditioning. Therefore, part of the neural mechanism for the plasticity of the central auditory system caused by conditioning can be explored by focal electric stimulation of the AI. The reorganization is due to shifts in best frequencies (BFs) together with shifts in the frequency-tuning curves of single neurons. In the AI of the Mongolian gerbil (Meriones unguiculatus) and the posterior division of the AI of the mustached bat (Pteronotus parnellii), focal electric stimulation evokes BF shifts in cortical auditory neurons located within a 0.7-mm distance along the frequency axis. The amount and direction of the BF shift depend on the BF relationship between the stimulated and recorded neurons, and differ between the gerbil and the mustached bat. Comparison of BF shifts between different mammalian species and between different cortical areas of a single species indicates that a BF shift toward the BF of the electrically stimulated cortical neurons (a centripetal BF shift) is common in the AI, whereas a BF shift away from the BF of the stimulated neurons (a centrifugal BF shift) is the exception. We therefore propose the hypothesis that the reorganization, and accordingly the organization, of cortical auditory areas caused by associative learning can be quite different between specialized and nonspecialized (ordinary) areas of the auditory cortex.
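
The centripetal/centrifugal distinction drawn here can be stated operationally: a recorded neuron's BF shift is centripetal if it moves the BF toward the BF of the stimulated cortical site and centrifugal if it moves it away. A small sketch of this classification follows; the numerical values in the example are purely illustrative.

```python
def classify_bf_shift(bf_recorded_before, bf_recorded_after, bf_stimulated):
    """Classify a best-frequency (BF) shift as centripetal or centrifugal.

    Centripetal: the recorded neuron's BF moves toward the BF of the
    electrically stimulated cortical neurons; centrifugal: it moves away.
    Frequencies can be in kHz; only relative distances matter.
    """
    dist_before = abs(bf_recorded_before - bf_stimulated)
    dist_after = abs(bf_recorded_after - bf_stimulated)
    shift = bf_recorded_after - bf_recorded_before
    if dist_after < dist_before:
        direction = "centripetal"
    elif dist_after > dist_before:
        direction = "centrifugal"
    else:
        direction = "no net shift"
    return {"shift_khz": shift, "direction": direction}

# Illustrative example: a neuron tuned to 60 kHz shifts to 61 kHz after
# stimulation of a cortical site whose BF is 62 kHz -> centripetal.
print(classify_bf_shift(60.0, 61.0, 62.0))
```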

Relevance:

70.00%

Publisher:

Abstract:

Cortical representational plasticity has been well documented after peripheral and central injuries or improvements in perceptual and motor abilities. This has led to inferences that the changes in cortical representations parallel and account for the improvement in performance during the period of skill acquisition. There have also been several examples of rapidly induced changes in cortical neuronal response properties, for example, by intracortical microstimulation or by classical conditioning paradigms. This report describes similar rapidly induced changes in a cortically mediated perception in human subjects, the ventriloquism aftereffect, which presumably reflects a corresponding change in the cortical representation of acoustic space. The ventriloquism aftereffect is an enduring shift in the perceived spatial location of acoustic stimuli after a period of exposure to spatially disparate, simultaneously presented acoustic and visual stimuli. Exposure to a mismatch of 8° for 20–30 min is sufficient to shift the perception of acoustic space by approximately the same amount across subjects and acoustic frequencies. Given that the cerebral cortex is necessary for the perception of acoustic space, it is likely that the ventriloquism aftereffect reflects a change in the cortical representation of acoustic space. Comparisons between the responses of single cortical neurons in the behaving macaque monkey and the stimulus parameters that give rise to the ventriloquism aftereffect suggest that the changes in the cortical representation of acoustic space may begin as early as the primary auditory cortex.
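
Behaviourally, the aftereffect is quantified as the change in perceived sound-source azimuth between localization trials collected before and after the audiovisual exposure period. Below is a minimal sketch of that computation; the error arrays, sample sizes, and sign convention are illustrative assumptions only.

```python
import numpy as np

def ventriloquism_aftereffect(pre_errors_deg, post_errors_deg):
    """Estimate the ventriloquism aftereffect as the mean change in
    sound-localization error (degrees azimuth) from pre- to post-exposure.

    pre_errors_deg / post_errors_deg: localization errors (reported minus
    true azimuth) for sound-only trials before and after adaptation.
    A positive value means perception shifted toward the visual offset.
    """
    pre = np.asarray(pre_errors_deg, dtype=float)
    post = np.asarray(post_errors_deg, dtype=float)
    return post.mean() - pre.mean()

# Illustrative numbers: an ~8 deg audiovisual mismatch producing a shift
# of roughly the same size, as described above.
rng = np.random.default_rng(0)
pre = rng.normal(0.0, 2.0, size=50)    # roughly unbiased before exposure
post = rng.normal(7.5, 2.0, size=50)   # shifted after 20-30 min of exposure
print(round(ventriloquism_aftereffect(pre, post), 1))
```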

Relevance:

70.00%

Publisher:

Abstract:

Working memory refers to the ability of the brain to store and manipulate information over brief time periods, ranging from seconds to minutes. As opposed to long-term memory, which is critically dependent upon hippocampal processing, critical substrates for working memory are distributed in a modality-specific fashion throughout cortex. N-methyl-D-aspartate (NMDA) receptors play a crucial role in the initiation of long-term memory. Neurochemical mechanisms underlying the transient memory storage required for working memory, however, remain obscure. Auditory sensory memory, which refers to the ability of the brain to retain transient representations of the physical features (e.g., pitch) of simple auditory stimuli for periods of up to approximately 30 sec, represents one of the simplest components of the brain working memory system. Functioning of the auditory sensory memory system is indexed by the generation of a well-defined event-related potential, termed mismatch negativity (MMN). MMN can thus be used as an objective index of auditory sensory memory functioning and a probe for investigating underlying neurochemical mechanisms. Monkeys generate cortical activity in response to deviant stimuli that closely resembles human MMN. This study uses a combination of intracortical recording and pharmacological micromanipulations in awake monkeys to demonstrate that both competitive and noncompetitive NMDA antagonists block the generation of MMN without affecting prior obligatory activity in primary auditory cortex. These findings suggest that, on a neurophysiological level, MMN represents selective current flow through open, unblocked NMDA channels. Furthermore, they suggest a crucial role of cortical NMDA receptors in the assessment of stimulus familiarity/unfamiliarity, which is a key process underlying working memory performance.
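
MMN is conventionally obtained as the difference wave between the averaged response to deviant stimuli and the averaged response to standards, measured in a post-stimulus latency window. The sketch below shows that computation on epoched data; the sampling rate, latency window, and array shapes are assumptions for illustration, not details taken from the study.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, fs=1000.0,
                        window_ms=(100.0, 200.0)):
    """Compute an MMN-style difference wave and its mean amplitude.

    standard_epochs / deviant_epochs: arrays of shape (n_trials, n_samples),
    baseline-corrected and time-locked to stimulus onset.
    fs: sampling rate in Hz. window_ms: latency window for the amplitude measure.
    """
    erp_standard = np.mean(standard_epochs, axis=0)
    erp_deviant = np.mean(deviant_epochs, axis=0)
    difference_wave = erp_deviant - erp_standard

    start = int(window_ms[0] / 1000.0 * fs)
    stop = int(window_ms[1] / 1000.0 * fs)
    mmn_amplitude = difference_wave[start:stop].mean()
    return difference_wave, mmn_amplitude
```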

Relevance:

70.00%

Publisher:

Abstract:

The placement of monocular laser lesions in the adult cat retina produces a lesion projection zone (LPZ) in primary visual cortex (V1) in which the majority of neurons have a normally located receptive field (RF) for stimulation of the intact eye and an ectopically located RF (displaced to intact retina at the edge of the lesion) for stimulation of the lesioned eye. Animals that had such lesions for 14–85 d were studied under halothane and nitrous oxide anesthesia with conventional neurophysiological recording techniques and stimulation with moving light bars. Previous work suggested that a candidate source of input, which could account for the development of the ectopic RFs, was long-range horizontal connections within V1. The critical contribution of such input was examined by placing a pipette containing the neurotoxin kainic acid at a site in the normal V1 visual representation that overlapped with the ectopic RF recorded at a site within the LPZ. Continuation of well-defined responses to stimulation of the intact eye served as a control against direct effects of the kainic acid at the LPZ recording site. In six of seven cases examined, kainic acid deactivation of neurons at the injection site blocked responsiveness to lesioned-eye stimulation at the ectopic RF for the LPZ recording site. We therefore conclude that long-range horizontal projections contribute to the dominant input underlying the capacity for retinal lesion-induced plasticity in V1.

Relevance:

70.00%

Publisher:

Abstract:

Adults show great variation in their auditory skills, such as the ability to discriminate between foreign speech sounds. Previous research has demonstrated that structural features of auditory cortex can predict auditory abilities; here we are interested in the maturation of 2-Hz frequency-modulation (FM) detection, a task thought to tap into mechanisms underlying language abilities. We hypothesized that an individual's FM threshold will correlate with gray-matter density in left Heschl's gyrus, and that this function-structure relationship will change through adolescence. To test this hypothesis, we collected anatomical magnetic resonance imaging data from participants who were tested and scanned at three time points: at 10, 11.5 and 13 years of age. Participants judged which of two tones contained FM; the modulation depth was adjusted using an adaptive staircase procedure and their threshold was calculated based on the geometric mean of the last eight reversals. Using voxel-based morphometry, we found that FM threshold was significantly correlated with gray-matter density in left Heschl's gyrus at the age of 10 years, but that this correlation weakened with age. While there were no differences between girls and boys at Times 1 and 2, at Time 3 there was a relationship between FM threshold and gray-matter density in left Heschl's gyrus in boys but not in girls. Taken together, our results confirm that the structure of the auditory cortex can predict temporal processing abilities, namely that gray-matter density in left Heschl's gyrus can predict 2-Hz FM detection threshold. This ability is dependent on the processing of sounds changing over time, a skill believed necessary for speech processing. We tested this assumption and found that FM threshold significantly correlated with spelling abilities at Time 1, but that this correlation was found only in boys. This correlation decreased at Time 2, and at Time 3 we found a significant correlation between reading and FM threshold, but again, only in boys. We examined the sex differences in both the imaging and behavioral data, taking into account pubertal stage, and found that the correlation between FM threshold and spelling was strongest pre-pubertally, and the correlation between FM threshold and gray-matter density in left Heschl's gyrus was strongest mid-pubertally.
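
The thresholding rule described here, the geometric mean of the last eight staircase reversals, is simple to state in code. The sketch below assumes a generic list of modulation depths at which the adaptive track reversed direction; it is an illustration of the rule, not the authors' analysis script.

```python
import numpy as np

def staircase_threshold(reversal_depths, n_last=8):
    """Threshold from an adaptive staircase: geometric mean of the last
    n_last reversal values (e.g., FM modulation depths at reversals)."""
    reversals = np.asarray(reversal_depths, dtype=float)
    if len(reversals) < n_last:
        raise ValueError("not enough reversals to estimate a threshold")
    last = reversals[-n_last:]
    return float(np.exp(np.mean(np.log(last))))

# Illustrative reversal values for a 2-Hz FM detection track:
print(staircase_threshold([0.9, 0.5, 0.7, 0.4, 0.55, 0.35, 0.45, 0.3, 0.4, 0.32]))
```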

Relevance:

70.00%

Publisher:

Abstract:

Being able to determine where sounds come from is fundamental to interacting well with our environment. Sound localization is an important and complex faculty of the human auditory system. The brain must decode the acoustic signal to extract the cues that allow it to localize a sound source. These auditory localization cues depend in part on morphological and environmental properties that cannot be anticipated by genetic encoding. The processing of these cues must therefore be adjusted through experience during development. In adulthood, plasticity in sound localization still exists. This plasticity has been studied at the behavioural level, but very little is known about its neural correlates and mechanisms. The present research aimed to examine this plasticity, as well as the mechanisms by which auditory localization cues are encoded, both at the behavioural level and through the neural correlates of the observed behaviour. In the first two studies, we imposed a perceptual shift of horizontal auditory space using digital earplugs. We showed that young adults can rapidly adapt to a large perceptual shift. Using high-resolution functional MRI, we observed changes in auditory cortical activity accompanying this adaptation, in terms of hemispheric lateralization. We were also able to confirm the hemifield-code hypothesis as the representation of horizontal auditory space. In a third study, we altered the most important auditory cue for the perception of vertical space using silicone earmolds. We showed that adaptation to this alteration was followed by no aftereffect upon removal of the molds, even on the very first presentation of a sound stimulus. This result is consistent with the hypothesis of a so-called many-to-one mapping mechanism, whereby several spectral profiles can be associated with the same spatial position. In a fourth study, using functional MRI and taking advantage of the adaptation to the silicone molds, we revealed the encoding of sound elevation in the human auditory cortex.

Relevance:

70.00%

Publisher:

Abstract:

It is well known that self-generated stimuli are processed differently from externally generated stimuli. For example, many people have noticed since childhood that it is very difficult to tickle oneself. In the auditory domain, self-generated sounds elicit smaller brain responses than externally generated sounds, a phenomenon known as the sensory attenuation (SA) effect. SA is manifested in reduced amplitudes of evoked responses as measured with MEG/EEG, decreased neuronal firing rates, and lower perceived loudness for self-generated sounds. The predominant explanation for SA is based on the idea that self-generated stimuli are predicted (e.g., the forward model account); it is the nature of their predictability that is crucial for SA. In contrast, the sensory gating account emphasizes a general suppressive effect of actions on sensory processing, regardless of the predictability of the stimuli. Both accounts have received empirical support, which suggests that both mechanisms may exist. In chapter 2, three behavioural studies concerning the influence of motor activation on auditory perception are presented. Study 1 compared the effects of SA and attention in an auditory detection task and showed that SA was present even when substantial attention was paid to unpredictable stimuli. Study 2 compared the perceived loudness of tones generated by another person between Chinese and British participants: compared to externally generated tones, a decrease in perceived loudness for other-generated tones was found among Chinese but not British participants. In study 3, partial evidence was found that merely reading action-related words impaired auditory detection performance. In chapter 3, the classic SA effect of M100 suppression was replicated with MEG in study 4. Time-frequency analysis revealed a potential neural information-processing sequence in auditory cortex: prior to the onset of self-generated tones, there was an increase of oscillatory power in the alpha band, and after stimulus onset, reduced gamma power and alpha/beta phase locking were found. These three temporally segregated oscillatory events correlated with each other and with the SA effect, and may constitute the neural implementation of SA. In chapter 4, a TMS-MEG study is presented that investigates the role of the cerebellum in adapting to the delayed presentation of self-generated tones (study 5). It demonstrated that, in the sham stimulation condition, the brain can adapt to the delay (about 100 ms) within 300 trials of learning, as shown by a significant increase of the SA effect in the suppression of the M100, but not the M200, component. After the cerebellum was stimulated with a suppressive TMS protocol, however, the adaptation in M100 suppression disappeared and the pattern of M200 suppression reversed to M200 enhancement. These data support the idea that the suppressive effect of actions on auditory processing is a consequence of both motor-driven sensory predictions and general sensory gating. The results also demonstrate the importance of neural oscillations in implementing the SA effect and the critical role of the cerebellum in learning sensory predictions under sensory perturbation.
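
One of the measures referred to here, phase locking across trials in the alpha/beta band, is commonly quantified as the inter-trial phase-locking value (PLV): the magnitude of the across-trial average of unit phase vectors. A generic sketch follows; the band limits, filter design, and data shapes are illustrative assumptions rather than the thesis's actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(epochs, fs, band=(8.0, 20.0)):
    """Inter-trial phase-locking value for band-limited single-channel epochs.

    epochs: array (n_trials, n_samples), time-locked to stimulus onset.
    fs: sampling rate in Hz. band: passband in Hz (here spanning alpha/beta).
    Returns a PLV time course in [0, 1]; 1 means identical phase on every trial.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=1)          # zero-phase band-pass
    phase = np.angle(hilbert(filtered, axis=1))        # instantaneous phase
    return np.abs(np.mean(np.exp(1j * phase), axis=0)) # length of mean phase vector
```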

Relevance:

60.00%

Publisher:

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance:

60.00%

Publisher:

Abstract:

Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups, involved reduced hemispheric asymmetry, and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right-hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region-of-interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal responses and no lateralisation in frontal ROIs. For the late MMN, patients had reduced bilateral parietal responses and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, the onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However, amplitude reductions were observed in both primary and secondary auditory cortex. These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or impaired cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia.
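
Onset-latency comparisons of the kind reported here are often made by finding the first post-stimulus sample at which an ROI difference wave exceeds a criterion derived from its pre-stimulus baseline. The sketch below implements one such criterion (baseline mean minus two standard deviations, sustained over several samples); the criterion, baseline window, and data layout are assumptions for illustration only.

```python
import numpy as np

def mmn_onset_latency(difference_wave, times_ms, baseline_ms=(-100.0, 0.0),
                      n_sd=2.0, min_consecutive=5):
    """Estimate MMN onset latency (ms) for one ROI difference wave.

    Onset = first post-stimulus time at which the (negative-going) response
    falls below baseline mean - n_sd * baseline SD for min_consecutive samples.
    Returns None if no onset is found.
    """
    wave = np.asarray(difference_wave, dtype=float)
    times = np.asarray(times_ms, dtype=float)
    base = wave[(times >= baseline_ms[0]) & (times < baseline_ms[1])]
    criterion = base.mean() - n_sd * base.std()

    post = np.where(times >= 0.0)[0]
    below = wave[post] < criterion
    for i in range(len(post) - min_consecutive + 1):
        if below[i:i + min_consecutive].all():
            return float(times[post[i]])
    return None
```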

Relevance:

60.00%

Publisher:

Abstract:

Auditory fear conditioning is dependent on auditory signaling from the medial geniculate (MGm) and the auditory cortex (TE3) to principal neurons of the lateral amygdala (LA). Local-circuit GABAergic interneurons are known to inhibit LA principal neurons via fast and slow IPSPs. Stimulation of MGm and TE3 produces excitatory post-synaptic potentials in both LA principal neurons and interneurons, followed by inhibitory post-synaptic potentials. Manipulations of D1 receptors in the lateral and basal amygdala modulate the retrieval of the learned association between an auditory CS and foot shock. Here we examined the effects of D1 agonists on GABAergic IPSPs evoked by stimulation of MGm and TE3 afferents in vitro. Whole-cell patch recordings were made from principal neurons of the LA, at room temperature, in coronal brain slices using standard methods. Stimulating electrodes were placed on the fiber tracts medial to the LA and at the external capsule/layer VI border dorsal to the LA to activate (0.1–0.2 mA) MGm and TE3 afferents, respectively. Neurons were held at -55.0 mV by positive current injection to measure the amplitude of the fast IPSP. Changes in input resistance and membrane potential were measured in the absence of current injection. Stimulation of MGm or TE3 afferents produced EPSPs in the majority of principal neurons and, in some, an EPSP/IPSP sequence. Stimulation of MGm afferents produced IPSPs with amplitudes of -2.30 ± 0.53 mV, and stimulation of TE3 afferents produced IPSPs with amplitudes of -1.98 ± 1.26 mV. Bath application of 20 μM SKF38393 increased IPSP amplitudes to -5.94 ± 1.62 mV (MGm, n=3) and -5.46 ± 0.31 mV (TE3, n=3). The maximal effect occurred in less than 10 min. A small increase in resting membrane potential and a decrease in input resistance were observed. These data suggest that DA modulates both the auditory thalamic and auditory cortical inputs to the LA fear-conditioning circuit via local GABAergic circuits. Supported by NIMH Grants 00956, 46516, and 58911.
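
The group values quoted here (mean IPSP amplitude before and after bath application of the D1 agonist SKF38393) can be summarized with a short helper like the one below. The per-cell amplitudes in the example are hypothetical, and the mean ± SEM convention is a generic choice, not necessarily the statistic the authors report.

```python
import numpy as np

def summarize_ipsp_amplitudes(amplitudes_mv):
    """Mean and standard error of IPSP peak amplitudes (mV) across cells."""
    amps = np.asarray(amplitudes_mv, dtype=float)
    sem = amps.std(ddof=1) / np.sqrt(len(amps)) if len(amps) > 1 else float("nan")
    return amps.mean(), sem

# Hypothetical per-cell peak amplitudes (mV) before and after bath application
# of the D1 agonist, measured from a -55 mV holding potential:
baseline = [-2.1, -2.8, -2.0]
drug = [-5.7, -6.3, -5.8]
for label, vals in [("baseline", baseline), ("SKF38393", drug)]:
    mean, sem = summarize_ipsp_amplitudes(vals)
    print(f"{label}: {mean:.2f} ± {sem:.2f} mV (n={len(vals)})")
```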

Relevance:

60.00%

Publisher:

Abstract:

In a musical context, the pitch of sounds is encoded according to domain-general principles not confined to music or even to audition overall but common to other perceptual and cognitive processes (such as multiple pattern encoding and feature integration), and to domain-specific and culture-specific properties related to a particular musical system only (such as the pitch steps of the Western tonal system). The studies included in this thesis shed light on the processing stages during which pitch encoding occurs on the basis of both domain-general and music-specific properties, and elucidate the putative brain mechanisms underlying pitch-related music perception. Study I showed, in subjects without formal musical education, that the pitch and timbre of multiple sounds are integrated as unified object representations in sensory memory before attentional intervention. Similarly, multiple pattern pitches are simultaneously maintained in non-musicians' sensory memory (Study II). These findings demonstrate the degree of sophistication of pitch processing at the sensory memory stage, requiring neither attention nor any special expertise of the subjects. Furthermore, music- and culture-specific properties, such as the pitch steps of the equal-tempered musical scale, are automatically discriminated in sensory memory even by subjects without formal musical education (Studies III and IV). The cognitive processing of pitch according to culture-specific musical-scale schemata hence occurs as early as at the sensory-memory stage of pitch analysis. Exposure and cortical plasticity seem to be involved in musical pitch encoding. For instance, after only one hour of laboratory training, the neural representations of pitch in the auditory cortex are altered (Study V). However, faulty brain mechanisms for attentive processing of fine-grained pitch steps lead to inborn deficits in music perception and recognition such as those encountered in congenital amusia (Study VI). These findings suggest that predispositions for exact pitch-step discrimination together with long-term exposure to music govern the acquisition of the automatized schematic knowledge of the music of a particular culture that even non-musicians possess.

Relevance:

60.00%

Publisher:

Abstract:

Speech has both auditory and visual components (heard speech sounds and seen articulatory gestures). During all perception, selective attention facilitates efficient information processing and enables concentration on high-priority stimuli. Auditory and visual sensory systems interact at multiple processing levels during speech perception and, further, the classical motor speech regions seem also to participate in speech perception. Auditory, visual, and motor-articulatory processes may thus work in parallel during speech perception, their use possibly depending on the information available and the individual characteristics of the observer. Because of their subtle speech perception difficulties possibly stemming from disturbances at elemental levels of sensory processing, dyslexic readers may rely more on motor-articulatory speech perception strategies than do fluent readers. This thesis aimed to investigate the neural mechanisms of speech perception and selective attention in fluent and dyslexic readers. We conducted four functional magnetic resonance imaging experiments, during which subjects perceived articulatory gestures, speech sounds, and other auditory and visual stimuli. Gradient echo-planar images depicting blood oxygenation level-dependent contrast were acquired during stimulus presentation to indirectly measure brain hemodynamic activation. Lip-reading activated the primary auditory cortex, and selective attention to visual speech gestures enhanced activity within the left secondary auditory cortex. Attention to non-speech sounds enhanced auditory cortex activity bilaterally; this effect showed modulation by sound presentation rate. A comparison between fluent and dyslexic readers' brain hemodynamic activity during audiovisual speech perception revealed stronger activation of predominantly motor speech areas in dyslexic readers during a contrast test that allowed exploration of the processing of phonetic features extracted from auditory and visual speech. The results show that visual speech perception modulates hemodynamic activity within auditory cortex areas once considered unimodal, and suggest that the left secondary auditory cortex specifically participates in extracting the linguistic content of seen articulatory gestures. They are strong evidence for the importance of attention as a modulator of auditory cortex function during both sound processing and visual speech perception, and point out the nature of attention as an interactive process (influenced by stimulus-driven effects). Further, they suggest heightened reliance on motor-articulatory and visual speech perception strategies among dyslexic readers, possibly compensating for their auditory speech perception difficulties.

Relevance:

60.00%

Publisher:

Abstract:

Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
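
The core idea described, asking which recording sites best predict a target site's activity at the next time step under a first-order dynamic Bayesian network, can be sketched compactly. The version below uses a discretized, BIC-scored parent search as a stand-in; it is a conceptual illustration of DBN-style structure scoring, not the authors' published algorithm, and all parameter choices (binning, maximum parent-set size) are assumptions.

```python
import numpy as np
from itertools import combinations

def discretize(x, n_bins=2):
    """Quantile-discretize a 1-D signal into integer states 0..n_bins-1."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def bic_score(child_next, parent_states, n_bins):
    """BIC score of P(child_t | parents_{t-1}) treated as a multinomial CPT."""
    n = len(child_next)
    if parent_states.shape[1] == 0:
        keys, n_configs = np.zeros(n, dtype=int), 1
    else:
        keys = np.ravel_multi_index(parent_states.T,
                                    dims=[n_bins] * parent_states.shape[1])
        n_configs = n_bins ** parent_states.shape[1]
    loglik = 0.0
    for key in np.unique(keys):
        counts = np.bincount(child_next[keys == key], minlength=n_bins)
        nonzero = counts[counts > 0]
        loglik += np.sum(nonzero * np.log(nonzero / counts.sum()))
    n_params = n_configs * (n_bins - 1)
    return loglik - 0.5 * n_params * np.log(n)

def infer_flow(signals, n_bins=2, max_parents=2):
    """Infer a first-order DBN: for each channel, find the parent set at t-1
    that best predicts its state at t under a BIC-penalized likelihood.

    signals: array (n_channels, n_samples) of simultaneously recorded traces.
    Returns {target_channel: best_parent_tuple}, i.e., inferred information flow.
    """
    states = np.array([discretize(s, n_bins) for s in signals])
    n_ch = states.shape[0]
    prev, nxt = states[:, :-1], states[:, 1:]
    network = {}
    for target in range(n_ch):
        best, best_parents = -np.inf, ()
        candidates = [c for k in range(max_parents + 1)
                      for c in combinations(range(n_ch), k)]
        for parents in candidates:
            score = bic_score(nxt[target], prev[list(parents)].T, n_bins)
            if score > best:
                best, best_parents = score, parents
        network[target] = best_parents
    return network

# Example on surrogate data:
# print(infer_flow(np.random.default_rng(0).standard_normal((4, 2000))))
```

Because each conditional table is estimated nonparametrically from the discretized states, this kind of scoring can capture nonlinear dependencies that a linear method would miss, which is the motivation the abstract gives for preferring a DBN over linear approaches.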