946 results for Auditory cortex
Abstract:
We frequently encounter conflicting emotion cues. This study examined how the neural response to emotional prosody differed in the presence of congruent and incongruent lexico-semantic cues. Two hypotheses were assessed: (i) decoding emotional prosody with conflicting lexico-semantic cues would activate brain regions associated with cognitive conflict (anterior cingulate and dorsolateral prefrontal cortex); or (ii) the increased attentional load of incongruent cues would modulate the activity of regions that decode emotional prosody (right lateral temporal cortex). While the participants indicated the emotion conveyed by prosody, functional magnetic resonance imaging data were acquired on a 3T scanner using blood oxygenation level-dependent contrast. Using SPM5, the response to congruent cues was contrasted with that to emotional prosody alone, as was the response to incongruent lexico-semantic cues (for the 'cognitive conflict' hypothesis). Region-of-interest analyses of the right lateral temporal lobe examined modulation of activity in this brain region across these two contrasts (for the 'prosody cortex' hypothesis). Dorsolateral prefrontal and anterior cingulate cortex activity was not observed, nor was attentional modulation of activity in the right lateral temporal cortex. However, decoding emotional prosody with incongruent lexico-semantic cues was strongly associated with left inferior frontal gyrus activity. This specialized form of conflict is therefore not processed by the brain using the same neural resources as non-affective cognitive conflict, nor can it be handled by associated sensory cortex alone. The recruitment of inferior frontal cortex may indicate increased semantic processing demands, but other contributory functions of this region should be explored.
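The contrasts described here follow the standard subtraction logic of a mass-univariate general linear model. A minimal sketch of that logic only (a toy design, not the authors' SPM5 pipeline; conditions, scan counts and betas are invented, and haemodynamic convolution is omitted):

```python
# Toy illustration of the subtraction-contrast logic, not the authors'
# SPM5 pipeline. HRF convolution is omitted; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Design matrix: intercept + three conditions, 160 scans, every 4th scan rest.
n_scans = 160
X = np.zeros((n_scans, 4))
X[:, 0] = 1.0                 # intercept / baseline
X[0:n_scans:4, 1] = 1.0       # emotional prosody alone
X[1:n_scans:4, 2] = 1.0       # congruent lexico-semantic cue
X[2:n_scans:4, 3] = 1.0       # incongruent lexico-semantic cue

# Simulated voxel with a larger response to incongruent trials.
beta_true = np.array([100.0, 2.0, 2.5, 4.0])
y = X @ beta_true + rng.normal(0.0, 1.0, n_scans)

# Ordinary least squares, then a contrast vector applied to the betas.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
c = np.array([0.0, -1.0, 0.0, 1.0])   # incongruent minus prosody-alone
print("contrast estimate:", (c @ beta_hat).round(2))
```

SPM applies the same kind of contrast vector to the beta estimates at every voxel and assesses it with a t statistic.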
Abstract:
Objective: This work investigates the nature of the comprehension impairment in Wernicke's aphasia by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke's aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. Methods: We examined analysis of basic acoustic stimuli in Wernicke's aphasia participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Results: Participants with Wernicke's aphasia showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both frequency and dynamic modulation detection correlated significantly with auditory comprehension abilities in the Wernicke's aphasia participants. Conclusion: These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in Wernicke's aphasia, which may make a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
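The "criterion-free, adaptive measures of threshold" mentioned above are typically transformed staircase procedures in a forced-choice task. A hedged sketch of one such rule (a 2-down/1-up staircase, which converges near 70.7% correct; the simulated listener and all parameter values are invented, not taken from the study):

```python
# Illustrative 2-down/1-up adaptive staircase for an FM-detection threshold.
# The simulated 2AFC listener and all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def listener_correct(depth, true_threshold=2.0, slope=1.5):
    """Simulated 2AFC listener: probability correct rises with FM depth (Hz)."""
    p = 0.5 + 0.5 / (1.0 + np.exp(-slope * (depth - true_threshold)))
    return rng.random() < p

depth = 8.0          # starting FM depth (Hz), deliberately easy
step = 1.0           # step size (Hz)
consecutive_correct = 0
reversals = []
direction = -1       # -1 = making the task harder

for trial in range(200):
    if listener_correct(depth):
        consecutive_correct += 1
        if consecutive_correct == 2:      # 2-down: harder after 2 correct
            consecutive_correct = 0
            if direction == +1:
                reversals.append(depth)   # direction changed: a reversal
            direction = -1
            depth = max(depth - step, 0.1)
    else:
        consecutive_correct = 0           # 1-up: easier after any error
        if direction == -1:
            reversals.append(depth)
        direction = +1
        depth += step
    if len(reversals) >= 8:
        break

# Threshold estimate: mean of the last six reversal points.
print("FM detection threshold ~", round(float(np.mean(reversals[-6:])), 2), "Hz")
```

Because the rule is driven only by the listener's correct/incorrect responses, it is criterion-free in the signal-detection sense, which is what makes it reliable at the individual level.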
Abstract:
Background: Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Methods: Behavioural auditory discrimination thresholds for CVC syllables and pure tones (PT) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple-deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility, as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). Results: MMN waveforms were observable for both perceptible and non-perceptible auditory changes. Perceptibility was distinguished by MMN amplitude only in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimulus change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect: both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of the MMN response and CVC discrimination accuracy: the greater the bilateral involvement, the better the discrimination accuracy. Conclusions: The results from this study provide further evidence on the nature of the auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right-hemisphere auditory stream when attempting to process speech stimuli.
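For readers unfamiliar with the measure, the MMN is obtained as the difference wave between the averaged response to rare deviants and frequent standards, with amplitude and latency read off a post-stimulus window. A schematic sketch on synthetic epochs (all amplitudes, latencies and trial counts are invented):

```python
# Sketch of the basic MMN computation: the difference wave between averaged
# deviant and standard responses in an oddball stream. Synthetic epochs
# stand in for real EEG.
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                  # sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1.0 / fs)        # epoch: -100 to 400 ms

def make_epochs(n, neg_amp):
    """Noisy epochs with a negative deflection peaking ~150 ms post-stimulus."""
    erp = -neg_amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))
    return erp + rng.normal(0.0, 2.0, (n, t.size))

standards = make_epochs(400, neg_amp=0.5)   # frequent standard stimulus
deviants = make_epochs(60, neg_amp=3.0)     # rare deviant stimulus

# MMN difference wave: deviant average minus standard average.
mmn = deviants.mean(axis=0) - standards.mean(axis=0)

# Peak amplitude and latency in a typical 100-250 ms MMN window.
window = (t >= 0.10) & (t <= 0.25)
peak_idx = np.argmin(mmn[window])
print("MMN peak amplitude:", mmn[window][peak_idx].round(2), "uV")
print("MMN peak latency:", (t[window][peak_idx] * 1000).round(1), "ms")
```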
Abstract:
The basolateral amygdala complex (BLA) is involved in the acquisition of contextual and auditory fear conditioning. However, the BLA is not a single structure but comprises a group of nuclei, including the lateral (LA), basal (BA) and accessory basal (AB) nuclei. While there is consensus that the LA is critical for auditory fear conditioning, the participation of the BA in fear conditioning remains controversial. Hodological and neurophysiological findings suggest that each of these nuclei processes distinct information in parallel: the BA would deal with polymodal or contextual representations, and the LA would process unimodal or elemental representations. Thus, it seems plausible to hypothesize that the BA is required for contextual, but not auditory, fear conditioning. This hypothesis was evaluated in Wistar rats submitted to multiple-site ibotenate-induced damage restricted to the BA and then exposed to concurrent contextual and auditory fear conditioning training followed by separate contextual and auditory conditioning tests. Unlike electrolytic lesions and lidocaine inactivation, this surgical approach does not disturb fibers of passage originating in other brain areas, restricting damage to the targeted nucleus. Relative to the sham-operated controls, rats with selective damage to the BA exhibited disrupted performance in the contextual, but not the auditory, component of the task. Thus, while the BA seems required for contextual fear conditioning, it is critical neither for the auditory-US association nor for the expression of the freezing response.
Abstract:
It has recently been shown that local field potentials (LFPs) from the auditory and visual cortices carry information about sensory stimuli, but whether this is a universal property of sensory cortices remains to be determined. Moreover, little is known about the temporal dynamics of the sensory information contained in LFPs following stimulus onset. Here we investigated the time course of the amount of stimulus information in LFPs and spikes from the gustatory cortex of awake rats subjected to tastant and water delivery on the tongue. We found that the phase and amplitude of multiple LFP frequencies carry information about stimuli, with specific time courses after stimulus delivery. The information carried by LFP phase and amplitude was independent within frequency bands, since the joint information exhibited neither synergy nor redundancy. Tastant information in LFPs was also independent of, and had a different time course from, the information carried by spikes. These findings support the hypothesis that the brain uses different frequency channels to dynamically code for multiple features of a stimulus.
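The quantities involved are mutual information estimates between stimulus identity and discretized LFP features such as phase. A rough sketch of the core computation (synthetic data and a naive plug-in estimator; published analyses of this kind typically add bias corrections that this sketch omits):

```python
# Toy mutual information between stimulus identity and binned LFP phase.
# Data, frequency band, and bin counts are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)

n_trials = 400
stimuli = rng.integers(0, 4, n_trials)          # four tastants

# Simulated per-trial LFP phase that weakly depends on the stimulus.
phase = (stimuli * (np.pi / 2) + rng.normal(0.0, 1.2, n_trials)) % (2 * np.pi)
phase_bin = np.digitize(phase, np.linspace(0, 2 * np.pi, 9)) - 1   # 8 bins

def mutual_information(s, r, n_s, n_r):
    """I(S;R) in bits from discrete stimulus and response labels (plug-in)."""
    joint = np.zeros((n_s, n_r))
    for si, ri in zip(s, r):
        joint[si, ri] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

print("I(stimulus; LFP phase) =",
      round(mutual_information(stimuli, phase_bin, 4, 8), 3), "bits")
```

Synergy and redundancy are then assessed by comparing the joint information from two features (e.g. phase and amplitude together) against the sum of their individual informations.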
Abstract:
The orbitofrontal cortex (OFC) is a heterogeneous prefrontal sector selectively connected with a wide constellation of other prefrontal, limbic, sensory and premotor areas. Among the limbic cortical connections, those with the hippocampus and parahippocampal cortex are particularly salient. Sensory cortices connected with the OFC include areas involved in olfactory, gustatory, somatosensory, auditory and visual processing. Subcortical structures with prominent OFC connections include the amygdala, numerous thalamic nuclei, the striatum, hypothalamus, periaqueductal gray matter, and biochemically specific cell groups in the basal forebrain and brainstem. Architectonic and connectional evidence supports parcellation of the OFC. The rostrally placed isocortical sector is mainly connected with isocortical areas, including sensory areas of the auditory, somatic and visual modalities, whereas the caudal non-isocortical sector is principally connected with non-isocortical areas and, in the sensory domain, with olfactory and gustatory areas. The connections of the isocortical and non-isocortical orbital sectors with the amygdala, thalamus, striatum, hypothalamus and periaqueductal gray matter are also specific. The medial sector of the OFC is selectively connected with the hippocampus, posterior parahippocampal cortex, posterior cingulate and retrosplenial areas, and area prostriata, while the lateral orbitofrontal sector is the most heavily connected with sensory areas of the gustatory, somatic and visual modalities, with premotor regions, and with the amygdala.
Abstract:
Introduction: Auditory late responses (ALR) assess central auditory processing through the neuroelectric activity of the auditory pathway and reflect the cortical abilities of discrimination, attention and integration. Individuals with Asperger Syndrome experience changes in these skills, so it is important to investigate these potentials in this population. The objective of this paper was to describe the auditory late responses of two patients with Asperger Syndrome. Methods: The study included two male patients with Asperger Syndrome, 7 and 12 years of age, treated at a study centre. The patients did not present any auditory complaint detectable by anamnesis. The external auditory canal was inspected, and audiological and auditory late response assessments were performed. After evaluation, the components P2, N2 and P3 were analysed. Results: In both patients, the latencies of the components P2, N2 and P3 were prolonged in both ears. Regarding the amplitude of the P2 component, reduced values were found for the left ear of patient 1 and the right ear of patient 2. The N2 amplitude was reduced for both ears of patient 1 and only the right ear of patient 2. The two patients showed a decrease in the amplitude of the P3 only in the right ear. Conclusion: This study concludes that there were changes in the ALR results in both patients with Asperger Syndrome, suggesting alteration of auditory function at the cortical level.
Abstract:
Inaccurate wiring and synaptic pathology appear to be major hallmarks of schizophrenia. A variety of gene products involved in synaptic neurotransmission and receptor signaling are differentially expressed in the brains of schizophrenia patients. However, synaptic pathology may also develop through improper expression of intra- and extracellular structural elements that weaken synaptic stability. Therefore, we investigated the transcription of these elements in the left superior temporal gyrus of 10 schizophrenia patients and 10 healthy controls using genome-wide microarrays (Illumina). Fourteen upregulated and 22 downregulated genes encoding structural elements were chosen from the lists of differentially regulated genes for further qRT-PCR analysis. Almost all genes confirmed by this method were downregulated. Their gene products belonged to vesicle-associated proteins, that is, synaptotagmin 6 and syntaxin 12; to cytoskeletal proteins, like myosin 6 and pleckstrin; or to proteins of the extracellular matrix, such as collagens or laminin C3. Our results underline the pivotal roles of structural genes that control the formation and stabilization of pre- and post-synaptic elements or influence axon guidance in schizophrenia. The glial origin of collagen or laminin highlights the close interrelationship between neurons and glial cells in the establishment and maintenance of synaptic strength and plasticity. It is hypothesized that abnormal expression of these and related genes has a major impact on the pathophysiology of schizophrenia.
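The microarray step amounts to a per-gene differential-expression screen between groups. An illustrative sketch only (simulated log2 expression values, a simple t-test plus fold-change filter; the study's actual Illumina pipeline and thresholds are not specified here, and real analyses also correct for multiple comparisons):

```python
# Schematic differential-expression screen: per-gene t-test between patient
# and control groups plus a fold-change filter. All values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_genes, n_per_group = 1000, 10
controls = rng.normal(8.0, 1.0, (n_genes, n_per_group))   # log2 expression
patients = rng.normal(8.0, 1.0, (n_genes, n_per_group))
patients[:30] -= 1.0                                      # 30 truly downregulated

t, p = stats.ttest_ind(patients, controls, axis=1)
log2_fc = patients.mean(axis=1) - controls.mean(axis=1)   # patient minus control

# Candidate genes by (uncorrected) p-value and fold-change thresholds.
hits = np.where((p < 0.01) & (np.abs(log2_fc) > 0.8))[0]
print("candidate genes:", hits.size,
      "| downregulated among them:", int(np.sum(log2_fc[hits] < 0)))
```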
Abstract:
Objective: To characterize the P1 component of long latency auditory evoked potentials (LLAEPs) in cochlear implant users with auditory neuropathy spectrum disorder (ANSD) and to determine, firstly, whether it correlates with speech perception performance and, secondly, whether it correlates with other variables related to cochlear implant use. Methods: This study was conducted at the Center for Audiological Research at the University of Sao Paulo. The sample included 14 pediatric (4-11 years of age) cochlear implant users with ANSD, of both sexes, with profound prelingual hearing loss. Patients with hypoplasia or agenesis of the auditory nerve were excluded from the study. LLAEPs produced in response to speech stimuli were recorded using a Smart EP USB Jr. system. The subjects' speech perception was evaluated using tests 5 and 6 of the Glendonald Auditory Screening Procedure (GASP). Results: The P1 component was detected in 12/14 (85.7%) children with ANSD. Latency of the P1 component correlated with duration of sensorial hearing deprivation (p = 0.007, r = 0.7278), but not with duration of cochlear implant use. An analysis of groups assigned according to GASP performance (k-means clustering) revealed that aspects of prior central auditory system development reflected in the P1 component are related to behavioral auditory skills. Conclusions: In children with ANSD using cochlear implants, the P1 component can serve as a marker of central auditory cortical development and a predictor of the implanted child's speech perception performance.
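The two analyses named in the Results, a latency correlation and k-means grouping by GASP performance, reduce to standard computations. A toy sketch under stated assumptions (all values invented; k = 2 and the initialization are illustrative choices, not details from the study):

```python
# Toy versions of the reported analyses: Pearson correlation between P1
# latency and deprivation duration, and 1-D k-means grouping of GASP scores.
import numpy as np

rng = np.random.default_rng(4)
n = 12
deprivation_years = rng.uniform(1.0, 8.0, n)
p1_latency_ms = 100.0 + 10.0 * deprivation_years + rng.normal(0.0, 8.0, n)

# Pearson r between deprivation duration and P1 latency.
r = np.corrcoef(deprivation_years, p1_latency_ms)[0, 1]
print("Pearson r =", round(r, 3))

# One-dimensional k-means (k = 2) on GASP scores to form performance groups.
gasp = rng.uniform(0.0, 100.0, n)
centers = np.array([gasp.min(), gasp.max()])   # spread-out initialization
for _ in range(20):
    labels = np.argmin(np.abs(gasp[:, None] - centers[None, :]), axis=1)
    centers = np.array([gasp[labels == k].mean() for k in range(2)])
print("group sizes:", np.bincount(labels, minlength=2))
```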
When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies
Abstract:
The present study used positron emission tomography (PET) to examine the cerebral activity pattern associated with auditory imagery for familiar tunes. Subjects either imagined the continuation of nonverbal tunes cued by their first few notes, listened to a short sequence of notes as a control task, or listened and then reimagined that short sequence. Subtraction of the activation in the control task from that in the real-tune imagery task revealed primarily right-sided activation in frontal and superior temporal regions, plus supplementary motor area (SMA). Isolating retrieval of the real tunes by subtracting activation in the reimagine task from that in the real-tune imagery task revealed activation primarily in right frontal areas and right superior temporal gyrus. Subtraction of activation in the control condition from that in the reimagine condition, intended to capture imagery of unfamiliar sequences, revealed activation in SMA, plus some left frontal regions. We conclude that areas of right auditory association cortex, together with right and left frontal cortices, are implicated in imagery for familiar tunes, in accord with previous behavioral, lesion and PET data. Retrieval from musical semantic memory is mediated by structures in the right frontal lobe, in contrast to results from previous studies implicating left frontal areas for all semantic retrieval. The SMA seems to be involved specifically in image generation, implicating a motor code in this process.
Abstract:
The vestibular system contributes to the control of posture and eye movements and is also involved in various cognitive functions including spatial navigation and memory. These functions are subserved by projections to a vestibular cortex, whose exact location in the human brain is still a matter of debate (Lopez and Blanke, 2011). The vestibular cortex can be defined as the network of all cortical areas receiving inputs from the vestibular system, including areas where vestibular signals influence the processing of other sensory (e.g. somatosensory and visual) and motor signals. Previous neuroimaging studies used caloric vestibular stimulation (CVS), galvanic vestibular stimulation (GVS), and auditory stimulation (clicks and short-tone bursts) to activate the vestibular receptors and localize the vestibular cortex. However, these three methods differ regarding the receptors stimulated (otoliths, semicircular canals) and the concurrent activation of the tactile, thermal, nociceptive and auditory systems. To evaluate the convergence between these methods and provide a statistical analysis of the localization of the human vestibular cortex, we performed an activation likelihood estimation (ALE) meta-analysis of neuroimaging studies using CVS, GVS, and auditory stimuli. We analyzed a total of 352 activation foci reported in 16 studies carried out in a total of 192 healthy participants. The results reveal that the main regions activated by CVS, GVS, or auditory stimuli were located in the Sylvian fissure, insula, retroinsular cortex, fronto-parietal operculum, superior temporal gyrus, and cingulate cortex. Conjunction analysis indicated that regions showing convergence between two stimulation methods were located in the median (short gyrus III) and posterior (long gyrus IV) insula, parietal operculum and retroinsular cortex (Ri). The only area of convergence between all three methods of stimulation was located in Ri. The data indicate that Ri, parietal operculum and posterior insula are vestibular regions where afferents converge from otoliths and semicircular canals, and may thus be involved in the processing of signals informing about body rotations, translations and tilts. Results from the meta-analysis are in agreement with electrophysiological recordings in monkeys showing main vestibular projections in the transitional zone between Ri, the insular granular field (Ig), and SII.
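Activation likelihood estimation treats each reported focus as a spatial probability blob and asks where studies converge. A deliberately simplified one-dimensional sketch of the idea (invented foci and smoothing width; real ALE derives the kernel width from each study's sample size and thresholds the map by permutation testing):

```python
# Schematic ALE: blur each study's foci with a Gaussian, take the per-study
# maximum, then combine studies as a union of probabilities. A coarse 1-D
# grid stands in for brain space; all parameters are invented.
import numpy as np

grid = np.arange(0, 100.0)                # 1-D stand-in for voxel coordinates
sigma = 5.0                               # spatial uncertainty (mm)

def modeled_activation(foci):
    """Per-study map: max across that study's Gaussian-blurred foci."""
    maps = [np.exp(-((grid - f) ** 2) / (2 * sigma ** 2)) for f in foci]
    return np.max(maps, axis=0)

# Hypothetical foci (mm) from three studies reporting nearby activations.
studies = [[42.0, 60.0], [45.0], [40.0, 44.0, 80.0]]
ma_maps = [modeled_activation(f) for f in studies]

# ALE score: probability that at least one study activates each voxel.
ale = 1.0 - np.prod([1.0 - m for m in ma_maps], axis=0)
print("peak ALE value at x =", grid[np.argmax(ale)], "mm")
```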
Abstract:
Auditory cortical receptive field plasticity produced during behavioral learning may be considered to constitute "physiological memory" because it has major characteristics of behavioral memory: associativity, specificity, rapid acquisition, and long-term retention. To investigate basal forebrain mechanisms in receptive field plasticity, we paired a tone with stimulation of the nucleus basalis, the main subcortical source of cortical acetylcholine, in the adult guinea pig. Nucleus basalis stimulation produced electroencephalogram desynchronization that was blocked by systemic and cortical atropine. Paired tone/nucleus basalis stimulation, but not unpaired stimulation, induced receptive field plasticity similar to that produced by behavioral learning. Thus paired activation of the nucleus basalis is sufficient to induce receptive field plasticity, possibly via cholinergic actions in the cortex.
Abstract:
Action selection and organization are complex processes that must exploit contextual information and the retrieval of previously memorized information, as well as the integration of these different types of data. On the basis of its anatomical connections with premotor and parietal areas involved in action goal coding, and of data from the literature, the prefrontal cortex appears to be one of the strongest candidates for selecting the neuronal pools underlying the selection and organization of intentional actions. We recorded the activity of single ventrolateral prefrontal (VLPF) neurons while monkeys performed simple and complex manipulative actions aimed at distinct final goals, employing a modified and more strictly controlled version of the grasp-to-eat (a food pellet)/grasp-to-place (an object) paradigm used in previous studies of parietal (Fogassi et al., 2005) and premotor neurons (Bonini et al., 2010). This task allowed us both to evaluate the processing and integration of distinct (visual and auditory), sequentially presented contextual information used to select the forthcoming action, and to examine the possible presence of goal-related activity in this portion of cortex. Moreover, we performed an observation task to clarify the possible contribution of VLPF neurons to the understanding of others' goal-directed actions. Simple Visuo-Motor Task (sVMT): We found four main types of neurons: unimodal sensory-driven, motor-related, unimodal sensory-and-motor, and multisensory neurons. A substantial number of VLPF neurons showed both a motor-related discharge and a visual presentation response (sensory-and-motor neurons), with remarkable visuo-motor congruence for the preferred target. Interestingly, the discharge of multisensory neurons reflected a behavioural decision independently of the sensory modality of the stimulus that allowed the monkey to make it: most encoded a decision to act or to refrain from acting, while others specified one among the four behavioural alternatives. Complex Visuo-Motor Task (cVMT): The cVMT was similar to the sVMT but included a further grasping motor act (grasping a lid in order to remove it, before grasping the target) and was run in two modalities: randomized and in blocks. Motor-related and sensory-and-motor neurons tested in the randomized cVMT were already active during the first grasping motor act, but selectivity for one of the two graspable targets emerged only during execution of the second grasping. In contrast, when the cVMT was run in blocks, almost all of these neurons not only discharged during the first grasping motor act but also displayed the same target selectivity shown at hand contact with the target. Observation Task (OT): A large proportion of the neurons active during the OT modulated their firing rate in correspondence with the action performed by the experimenter. Among them, we found neurons significantly activated during observation of the experimenter's action (action observation-related neurons) and neurons responding not only to action observation but also to the presented cue stimuli (sensory-and-action observation-related neurons). Among the neurons of the first set, almost half displayed target selectivity, with no clear prevalence of either of the two presented targets. In the second set, the sensory-and-action observation-related neurons, we found low target selectivity and no strict congruence between the selectivity exhibited in the visual response and that exhibited during action observation.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy, and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal conditions, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
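The abstract does not name its mathematical models, but a standard integration account for audiovisual speech, associated with the Baldi synthetic talker, is Massaro's fuzzy logical model of perception (FLMP). Treating that as the assumed model, a minimal sketch contrasting it with a simple non-integration baseline:

```python
# Hedged sketch: FLMP-style multiplicative integration of unimodal supports
# versus a non-integration (single-modality sampling) baseline. The choice
# of FLMP and all numbers are assumptions, not details from the study.

def flmp(a, v):
    """FLMP: multiplicative integration of unimodal supports a and v."""
    return (a * v) / (a * v + (1.0 - a) * (1.0 - v))

def non_integration(a, v, w=0.5):
    """Baseline: respond from one modality, chosen with probability w."""
    return w * a + (1.0 - w) * v

# Hypothetical unimodal identification accuracies for one syllable.
auditory_only, visual_only = 0.70, 0.60

print("FLMP bimodal prediction:   ", round(flmp(auditory_only, visual_only), 3))
print("Non-integration prediction:", round(non_integration(auditory_only, visual_only), 3))
```

The integration model predicts bimodal accuracy above either unimodal score (here about 0.78 versus 0.70 and 0.60), which is the signature pattern such studies test against observed bimodal performance.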