Abstract:
Auditory spatial functions are of crucial importance in everyday life. Determining the origin of sound sources in space plays a key role in a variety of tasks, including the orientation of attention and the disentangling of the complex acoustic patterns that reach our ears in noisy environments. Following brain damage, auditory spatial processing can be disrupted, resulting in severe handicaps. Complaints of patients with sound localization deficits include the inability to locate their crying child and feeling overloaded by sounds in crowded public places. Yet the brain has a large capacity for reorganization following damage and/or learning. This phenomenon, referred to as plasticity, is believed to underlie both post-lesional functional recovery and learning-induced improvement. The aim of this thesis was to investigate the organization and plasticity of different aspects of auditory spatial functions. Overall, we report the outcomes of three studies. In the study entitled "Learning-induced plasticity in auditory spatial representations" (Spierer et al., 2007b), we focused on the neurophysiological and behavioral changes induced by auditory spatial training in healthy subjects. We found that relatively brief auditory spatial discrimination training improves performance and modifies the cortical representation of the trained sound locations, suggesting that cortical auditory representations of space are dynamic and subject to rapid reorganization. In the same study, we tested the generalization and persistence of training effects over time, as these are two determining factors in the development of neurorehabilitative interventions. In "The path to success in auditory spatial discrimination" (Spierer et al., 2007c), we investigated the neurophysiological correlates of successful spatial discrimination and contributed to the modeling of the anatomo-functional organization of auditory spatial processing in healthy subjects.
We showed that discrimination accuracy depends on superior temporal plane (STP) activity in response to the first sound of a pair of stimuli. Our data support a model wherein refinement of spatial representations occurs within the STP, and interactions with parietal structures allow for transformations into the coordinate frames required for higher-order computations, including absolute localization of sound sources. In "Extinction of auditory stimuli in hemineglect: space versus ear" (Spierer et al., 2007a), we investigated auditory attentional deficits in brain-damaged patients. This work provides insight into the auditory neglect syndrome and its relation to neglect symptoms within the visual modality. Apart from contributing to a basic understanding of the cortical mechanisms underlying auditory spatial functions, the outcomes of these studies also contribute to the development of neurorehabilitation strategies, which are currently being tested in clinical populations.
Learning-induced plasticity in auditory spatial representations revealed by electrical neuroimaging.
Abstract:
Auditory spatial representations are likely encoded at a population level within human auditory cortices. We investigated learning-induced plasticity of spatial discrimination in healthy subjects using auditory-evoked potentials (AEPs) and electrical neuroimaging analyses. Stimuli were 100 ms white-noise bursts lateralized with varying interaural time differences. In three experiments, plasticity was induced with 40 min of discrimination training. During training, accuracy significantly improved from near-chance levels to approximately 75%. Before and after training, AEPs were recorded to stimuli presented passively with a more medial sound lateralization outnumbering a more lateral one (7:1). In experiment 1, the same lateralizations were used for training and AEP sessions. Significant AEP modulations to the different lateralizations were evident only after training, indicative of a learning-induced mismatch negativity (MMN). More precisely, this MMN at 195-250 ms after stimulus onset followed from differences in the AEP topography to each stimulus position, indicative of changes in the underlying brain network. In experiment 2, mirror-symmetric locations were used for training and AEP sessions; no training-related AEP modulations or MMN were observed. In experiment 3, the discrimination of trained plus equidistant untrained separations was tested psychophysically before and 0, 6, 24, and 48 h after training. Learning-induced plasticity lasted <6 h, did not generalize to untrained lateralizations, and was not the simple result of strengthening the representation of the trained lateralizations. Thus, learning-induced plasticity of auditory spatial discrimination relies on spatial comparisons, rather than a spatial anchor or a general comparator. Furthermore, cortical auditory representations of space are dynamic and subject to rapid reorganization.
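The lateralization-by-ITD stimulus manipulation described above can be sketched in a few lines. The sampling rate, ITD value, and implementation details below are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np

def itd_noise_burst(duration_s=0.1, itd_s=300e-6, fs=44100, seed=0):
    """Generate a stereo white-noise burst lateralized by an interaural
    time difference (ITD). A positive itd_s delays the left channel so
    the burst is perceived toward the right. All parameter values are
    illustrative, not the study's exact settings."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    shift = int(round(abs(itd_s) * fs))    # ITD expressed in whole samples
    noise = rng.standard_normal(n + shift)
    lead = noise[shift:]                   # earlier copy: leading ear
    lag = noise[:n]                        # same noise, delayed
    left, right = (lag, lead) if itd_s > 0 else (lead, lag)
    return np.stack([left, right])         # shape (2, n)

burst = itd_noise_burst()
print(burst.shape)  # (2, 4410)
```

Because both channels carry the identical noise token, only the interaural delay distinguishes them, which is what makes the lateralization percept attributable to ITD alone.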
Abstract:
Every day, humans and animals navigate complex acoustic environments in which multiple sound sources overlap. Somehow, they effortlessly perform an acoustic scene analysis and extract relevant signals from background noise. Constant updating of the behavioral relevance of ambient sounds requires the representation and integration of incoming acoustical information with internal representations such as behavioral goals, expectations and memories of previous sound-meaning associations. Rapid plasticity of auditory representations may contribute to our ability to attend and focus on relevant sounds. To better understand how auditory representations are transformed in the brain to incorporate behavioral contextual information, we explored task-dependent plasticity in neural responses recorded at four levels of the auditory cortical processing hierarchy of ferrets: the primary auditory cortex (A1), two higher-order auditory areas (dorsal PEG and ventral-anterior PEG) and dorso-lateral frontal cortex. In one study we explored the laminar profile of rapid task-related plasticity in A1 and found that plasticity occurred at all depths but was greatest in supragranular layers. This result suggests that rapid task-related plasticity in A1 derives primarily from intracortical modulation of neural selectivity. In two other studies we explored task-dependent plasticity in two higher-order areas of the ferret auditory cortex that may correspond to belt (secondary) and parabelt (tertiary) auditory areas. We found that representations of behaviorally relevant sounds are progressively enhanced during performance of auditory tasks. These selective enhancement effects became progressively larger at successive stages of the auditory cortical hierarchy. We also observed neuronal responses to non-auditory, task-related information (reward timing, expectations) in the parabelt area that were very similar to responses previously described in frontal cortex.
These results suggest that auditory representations in the brain are transformed from the more veridical spectrotemporal information encoded at earlier auditory stages to a more abstract representation encoding the behavioral meaning of sounds in higher-order auditory areas and dorso-lateral frontal cortex.
Abstract:
Spatial hearing refers to a set of abilities enabling us to determine the location of sound sources, redirect our attention toward relevant acoustic events, and recognize separate sound sources in noisy environments. Determining the location of sound sources plays a key role in the way in which humans perceive and interact with their environment. Deficits in sound localization abilities are observed after lesions to the neural tissues supporting these functions and can result in serious handicaps in everyday life. These deficits can, however, be remediated (at least to a certain degree) thanks to the surprising capacity for reorganization that the human brain possesses following damage and/or learning, namely brain plasticity. In this thesis, our aim was to investigate the functional organization of auditory spatial functions and the learning-induced plasticity of these functions. Overall, we describe the results of three studies. The first study, entitled "The role of the right parietal cortex in sound localization: A chronometric single pulse transcranial magnetic stimulation study" (At et al., 2011), study A, investigated the role of the right parietal cortex in spatial functions and its chronometry (i.e. the critical time window of its contribution to sound localization). We concentrated on the behavioral changes produced by the temporary inactivation of the parietal cortex with transcranial magnetic stimulation (TMS). We found that the integrity of the right parietal cortex is crucial for localizing sounds in space and determined a critical time window of its involvement, suggesting a right parietal dominance for auditory spatial discrimination in both hemispaces.
In "Distributed coding of the auditory space in man: evidence from training-induced plasticity" (At et al., 2013a), study B, we used electroencephalography (EEG) to investigate the neurophysiological correlates of different sub-regions of the right auditory hemispace and the changes induced by a multi-day auditory spatial training in healthy subjects. We report that sound locations are coded in a distributed manner over numerous auditory regions, that particular auditory areas code specifically for precise parts of auditory space, and that this specificity for a distinct region is enhanced with training. In the third study, "Training-induced changes in auditory spatial mismatch negativity" (At et al., 2013b), study C, we investigated the pre-attentive neurophysiological changes induced by a training over 4 days in healthy subjects with a passive mismatch negativity (MMN) paradigm. We showed that training changed the mechanisms for the relative representation of sound positions rather than the specific lateralizations themselves, and that it changed the coding in right parahippocampal regions.
Abstract:
Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence. Hum Brain Mapp, 2008.
Abstract:
This study examined spoken-word recognition in children with specific language impairment (SLI) and normally developing children matched separately for age and receptive language ability. Accuracy and reaction times on an auditory lexical decision task were compared. Children with SLI were less accurate than both control groups. Two subgroups of children with SLI, distinguished by performance accuracy only, were identified. One group performed within normal limits, while a second group was significantly less accurate. Children with SLI were not slower than the age-matched controls or language-matched controls. Further, the time taken to detect an auditory signal, make a decision, or initiate a verbal response did not account for the differences between the groups. The findings are interpreted as evidence for language-appropriate processing skills acting upon imprecise or underspecified stored representations.
Abstract:
Sound localization relies on the analysis of interaural time and intensity differences, as well as attenuation patterns by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 healthy subjects: 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions and for the processing of interaural time than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right hemispheric brain damage than left hemispheric brain damage group and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
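As a rough illustration of the interaural time difference cues discussed above, the ITD available to a listener can be approximated from head geometry with Woodworth's classic spherical-head formula; the head radius and azimuth below are illustrative assumptions, not values from the study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate the interaural time difference (ITD) in seconds
    for a spherical head using Woodworth's formula,
        ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians, r the head radius
    and c the speed of sound."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A source at 90 degrees azimuth yields roughly 0.66 ms of ITD.
print(round(woodworth_itd(90) * 1e3, 2))  # 0.66
```

Sub-millisecond differences of this magnitude are what the binaural system, and the right-hemisphere-dominant network described above, must resolve to discriminate neighbouring source positions.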
Abstract:
Mapping the human auditory cortex with standard functional imaging techniques is difficult because of its small size and angular position along the Sylvian fissure. As a result, the exact number and location of auditory cortex areas in the human remains unknown. In a first experiment, we measured the two largest tonotopic areas of primary auditory cortex (PAC, A1 and R) using high-resolution functional MRI at 7 Tesla relative to the underlying anatomy of Heschl's gyrus (HG). The data reveal a clear anatomical-functional relationship that indicates the location of PAC across the range of common morphological variants of HG (single gyri, partial duplication and complete duplication). Human PAC tonotopic areas are oriented along an oblique posterior-to-anterior axis with mirror-symmetric frequency gradients perpendicular to HG, as in the macaque. In a second experiment, we tested whether these primary frequency-tuned units were modulated by selective attention to preferred vs. non-preferred sound frequencies in the dynamic manner needed to account for human listening abilities in noisy environments, such as cocktail parties or busy streets. We used a dual-stream selective attention experiment in which subjects attended to one of two competing tonal streams presented simultaneously to different ears. Attention to low-frequency tones (250 Hz) enhanced neural responses within low-frequency-tuned voxels relative to high-frequency tones (4000 Hz), and vice versa when attention switched from high to low. Human PAC is thus able to tune into attended frequency channels and can switch frequencies on demand, like a radio. In a third experiment, we investigated repetition suppression effects to environmental sounds within primary and non-primary early-stage auditory areas, identified with the tonotopic mapping design.
Repeated presentations of sounds from the same sources, as compared to different sources, gave repetition suppression effects within posterior and medial non-primary areas of the right hemisphere, reflecting their potential involvement in semantic representations. These three studies were conducted at 7 Tesla with high-resolution imaging. However, 7 Tesla scanners are, for the moment, not yet used for clinical diagnosis and mostly reside in institutions external to hospitals; hospital-based clinical functional and structural studies are mainly performed on lower field systems (1.5 or 3 Tesla). In a fourth experiment, we therefore acquired tonotopic maps at 3 and 7 Tesla and evaluated the consistency of a tonotopic mapping paradigm between scanners. Mirror-symmetric gradients within PAC were highly similar at 7 and 3 Tesla across renderings at different spatial resolutions. We concluded that the tonotopic mapping paradigm is robust and suitable for definition of primary tonotopic areas, also at 3 Tesla. Finally, in a fifth study, we examined whether focal brain lesions alter tonotopic representations in the intact ipsi- and contralesional primary auditory cortex in three patients with hemispheric or cerebellar lesions, with and without auditory complaints. We found evidence for tonotopic reorganisation at the level of the primary auditory cortex after brain lesions, independently of auditory complaints. Overall, these results reflect a certain degree of plasticity within primary auditory cortex in different populations of subjects, assessed at different field strengths.
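The phase-encoding logic behind such tonotopic mapping can be sketched as follows: the stimulus sweeps slowly through a set of tone frequencies, and the response phase of each voxel at the sweep rate indexes its preferred frequency. The function and all parameters below are an illustrative sketch, not the authors' analysis pipeline:

```python
import numpy as np

def preferred_frequency(ts, n_cycles, freqs_hz):
    """Voxel-wise tonotopy from a phase-encoding run: the stimulus
    sweeps through freqs_hz in order, n_cycles times, and the phase
    of the response at the sweep frequency indexes the tone that
    drove the voxel most strongly."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    coeff = np.fft.rfft(ts)[n_cycles]   # Fourier component at the sweep rate
    # For a response cos(2*pi*n_cycles*t/N - phi), the rfft phase is
    # -phi, so negate it to recover the stimulus phase phi.
    phi = (-np.angle(coeff)) % (2 * np.pi)
    idx = int(round(phi / (2 * np.pi) * len(freqs_hz))) % len(freqs_hz)
    return freqs_hz[idx]

# Simulated voxel tuned to the 4th of 8 swept tones (4 sweeps, 8 TRs each):
tones = [250, 500, 1000, 2000, 4000, 8000, 12000, 16000]
t = np.arange(32)
voxel = np.cos(2 * np.pi * 4 * t / 32 - 2 * np.pi * 3 / 8)
print(preferred_frequency(voxel, n_cycles=4, freqs_hz=tones))  # 2000
```

Repeating this per voxel yields a frequency map whose gradients (e.g. the mirror-symmetric gradients across HG) delineate the tonotopic areas.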
Abstract:
The tonotopic representations within the primary auditory cortex (PAC) have been successfully mapped with ultra-high field fMRI. Here, we compared the reliability of this tonotopic mapping paradigm at 7 T with 1.5 mm spatial resolution against maps acquired at 3 T with the same stimulation paradigm but with spatial resolutions of 1.8 and 2.4 mm. For all subjects, the mirror-symmetric gradients within PAC were highly similar at 7 T and 3 T and across renderings at different spatial resolutions, albeit with lower percent signal changes at 3 T. In contrast, the frequency maps outside PAC tended to suffer from a reduced BOLD contrast-to-noise ratio at 3 T for a 1.8 mm voxel size, while remaining robust at 2.4 mm at 3 T and at 1.5 mm at 7 T. Overall, our results show the robustness of the phase-encoding paradigm used here to map tonotopic representations across scanners.
Abstract:
Action-related sounds are known to increase the excitability of motoneurones within the primary motor cortex (M1), but the role of this auditory input remains unclear. We investigated repetition priming-induced plasticity, which is characteristic of semantic representations, in M1 by applying transcranial magnetic stimulation pulses to the hand area. Motor evoked potentials (MEPs) were larger while subjects were listening to sounds related versus unrelated to manual actions. Repeated exposure to the same manual-action-related sound yielded a significant decrease in MEPs when the right hand area was stimulated; no repetition effect was observed for manual-action-unrelated sounds. These shared repetition priming characteristics suggest that auditory input to the right primary motor cortex is part of auditory semantic representations.
Abstract:
Accurate perception of the order of occurrence of sensory information is critical for building coherent representations of the external world from ongoing flows of sensory inputs. While psychophysical evidence indicates that performance on temporal perception tasks can improve, the underlying neural mechanisms remain unresolved. Using electrical neuroimaging analyses of auditory evoked potentials (AEPs), we identified the brain dynamics and mechanism supporting improvements in auditory temporal order judgment (TOJ) during the first vs. latter half of the experiment. Training-induced changes in brain activity were first evident 43-76 ms post-stimulus onset and followed from topographic, rather than pure strength, AEP modulations. Improvements in auditory TOJ accuracy thus followed from changes in the configuration of the underlying brain networks during the initial stages of sensory processing. Source estimations revealed a shift from initially bilateral posterior sylvian region (PSR) responses at the beginning of the experiment to left-hemisphere dominance at its end. Further supporting the critical role of the left and right PSR in auditory TOJ proficiency, responses in the left and right PSR went from being correlated to uncorrelated as the experiment progressed. These collective findings provide insights into the neurophysiologic mechanism and plasticity of temporal processing of sounds and are consistent with models based on spike-timing-dependent plasticity.
Abstract:
Evidence from human and non-human primate studies supports a dual-pathway model of audition, with partially segregated cortical networks for sound recognition and sound localisation, referred to as the What and Where processing streams. In normal subjects, these two networks overlap partially on the supra-temporal plane, suggesting that some early-stage auditory areas are involved in the processing of either auditory feature alone or of both. Using high-resolution 7-T fMRI we have investigated the influence of positional information on sound object representations by comparing activation patterns to environmental sounds lateralised to the right or left ear. While unilaterally presented sounds induced bilateral activation, small clusters in specific non-primary auditory areas were significantly more activated by contralaterally presented stimuli. Comparison of these data with histologically identified non-primary auditory areas suggests that the coding of sound objects within early-stage auditory areas lateral and posterior to primary auditory cortex A1 is modulated by the position of the sound, while that within anterior areas is not.
Abstract:
Discriminating complex sounds relies on multiple stages of differential brain activity. The specific roles of these stages and their links to perception were the focus of the present study. We presented 250 ms sounds of living and man-made objects while recording 160-channel electroencephalography (EEG). Subjects categorized each sound as that of a living, man-made or unknown item. We tested whether/when the brain discriminates between sound categories even when this does not transpire behaviorally. We applied a single-trial classifier that identified the voltage topographies and latencies at which brain responses are most discriminative. For sounds that the subjects could not categorize, we could successfully decode the semantic category based on differences in voltage topographies during the 116-174 ms post-stimulus period. Sounds that were correctly categorized as that of a living or man-made item by the same subjects exhibited two periods of differences in voltage topographies at the single-trial level: differential activity before the sound ended (starting at 112 ms) and during a separate period at ~270 ms post-stimulus onset. Because each of these periods could be used to reliably decode semantic categories, we interpret the first as related to an implicit tuning for sound representations and the second as linked to perceptual decision-making processes. Collectively, our results show that the brain discriminates environmental sounds during early stages and independently of behavioral proficiency, and that explicit sound categorization requires a subsequent processing stage.
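The single-trial topographic classification described above can be caricatured with a nearest-template decoder operating on strength-normalized voltage maps. The channel count, noise level and template construction below are illustrative assumptions, not the study's actual classifier:

```python
import numpy as np

def normalize_topography(v):
    """Re-reference a voltage map to the average and scale it to unit
    global field power (GFP), so classification reflects the map's
    topography (generator configuration) rather than response strength."""
    v = v - v.mean()
    return v / np.sqrt((v ** 2).mean())

def decode(trials, templates):
    """Assign each single-trial topography to the class whose template
    map it matches best (largest dot product between normalized maps)."""
    t_norm = np.array([normalize_topography(t) for t in templates])
    return [int(np.argmax(t_norm @ normalize_topography(v))) for v in trials]

# Two random 64-channel "category" maps, plus noisy scaled single trials:
rng = np.random.default_rng(1)
maps = rng.standard_normal((2, 64))
trials = [maps[i % 2] * rng.uniform(0.5, 2.0) + 0.1 * rng.standard_normal(64)
          for i in range(20)]
print(decode(trials, maps))
```

Because each trial's amplitude is normalized away, successful decoding here reflects differences in map configuration, mirroring the interpretation of topographic differences as changes in the underlying generator networks.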
Abstract:
Action representations can interact with object recognition processes. For example, so-called mirror neurons respond both when performing an action and when seeing or hearing such actions. Investigations of auditory object processing have largely focused on categorical discrimination, which begins within the initial 100 ms post-stimulus onset and subsequently engages distinct cortical networks. Whether action representations themselves contribute to auditory object recognition, and precisely which kinds of actions recruit the auditory-visual mirror neuron system, remain poorly understood. We applied electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to sounds of man-made objects, subdivided into sounds conveying a socio-functional context and typically cuing a responsive action by the listener (e.g. a ringing telephone) and sounds not linked to such a context that do not typically elicit responsive actions (e.g. notes on a piano). This distinction was validated psychophysically by a separate cohort of listeners. Beginning approximately 300 ms post-stimulus onset, responses to such context-related sounds significantly differed from those to context-free sounds in both the strength and the topography of the electric field. This latency is >200 ms subsequent to general categorical discrimination. Additionally, the topographic differences indicate that sounds of different action sub-types engage distinct configurations of intracranial generators. Statistical analysis of source estimations identified differential activity within premotor and inferior (pre)frontal regions (Brodmann's areas (BA) 6, BA8, and BA45/46/47) in response to sounds of actions typically cuing a responsive action. We discuss our results in terms of a spatio-temporal model of auditory object processing and the interplay between semantic and action representations.
Abstract:
For the recognition of sounds to benefit perception and action, their neural representations should also encode their current spatial position and their changes in position over time. The dual-stream model of auditory processing postulates separate (albeit interacting) processing streams for sound meaning and for sound location. Using a repetition priming paradigm in conjunction with distributed source modeling of auditory evoked potentials, we determined how individual sound objects are represented within these streams. Changes in perceived location were induced by interaural intensity differences, and sound location was either held constant or shifted across initial and repeated presentations (from one hemispace to the other in the main experiment or between locations within the right hemispace in a follow-up experiment). Location-linked representations were characterized by differences in priming effects between pairs presented to the same vs. different simulated lateralizations. These effects were significant at 20-39 ms post-stimulus onset within a cluster on the posterior part of the left superior and middle temporal gyri; and at 143-162 ms within a cluster on the left inferior and middle frontal gyri. Location-independent representations were characterized by a difference between initial and repeated presentations, independently of whether or not their simulated lateralization was held constant across repetitions. This effect was significant at 42-63 ms within three clusters on the right temporo-frontal region; and at 165-215 ms in a large cluster on the left temporo-parietal convexity. Our results reveal two varieties of representations of sound objects within the ventral/What stream: one location-independent, as initially postulated in the dual-stream model, and the other location-linked.