988 results for Auditory temporal processing
Abstract:
Pyramidal neurones were injected with Lucifer Yellow in slices cut tangential to the surface of area 7m and the superior temporal polysensory area (STP) of the macaque monkey. Comparison of the basal dendritic arbors of supra- and infragranular pyramidal neurones (n=139) that were injected in the same putative modules in the different cortical areas revealed variation in their structure. Moreover, there were relative differences in the dendritic morphology of supra- and infragranular pyramidal neurones in the two cortical areas. Sholl analyses revealed that layer III pyramidal neurones in area STP had considerably higher peak complexity (maximum number of dendritic intersections per Sholl circle) than those in layer V, whereas peak complexities were similar for supra- and infragranular pyramidal neurones in area 7m. In both cortical areas, the basal dendritic trees of layer III pyramidal neurones were characterized by a higher spine density than those in layer V. Calculations of the total number of dendritic spines in the average basal dendritic arbor revealed that layer V pyramidal neurones in area 7m had twice as many spines as cells in layer III (4535 and 2294 spines, respectively). A similar calculation for neurones in area STP revealed that layer III pyramidal neurones had approximately the same number of spines as cells in layer V (3585 and 3850 spines, respectively). Relative differences in the branching patterns of, and the number of spines in, the basal dendritic arbors of supra- and infragranular pyramidal neurones in the different cortical areas may allow for the integration of different numbers of inputs and different degrees of dendritic processing. These results support the thesis that intra-areal circuitry differs between cortical areas.
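As a point of reference for the Sholl measure used above (peak complexity being the maximum number of dendritic intersections per circle), a minimal sketch of the intersection count is given below. The segment coordinates, soma position and 10-unit radius step are illustrative assumptions, not values from the study.

    import numpy as np

    def sholl_intersections(segments, soma, radii):
        # Count dendritic segments crossing each concentric circle around the soma.
        # Simplified test: a segment is counted as crossing the circle of radius r
        # when r lies between the radial distances of its two endpoints.
        soma = np.asarray(soma, dtype=float)
        counts = np.zeros(len(radii), dtype=int)
        for p1, p2 in segments:
            d1 = np.linalg.norm(np.asarray(p1, dtype=float) - soma)
            d2 = np.linalg.norm(np.asarray(p2, dtype=float) - soma)
            lo, hi = min(d1, d2), max(d1, d2)
            counts += (radii >= lo) & (radii <= hi)
        return counts

    # Illustrative traced segments and 10-unit radius steps (made-up values):
    radii = np.arange(10, 210, 10)
    segments = [((0, 0), (35, 12)), ((35, 12), (80, 40)), ((0, 0), (-20, 55))]
    profile = sholl_intersections(segments, soma=(0, 0), radii=radii)
    peak_complexity = profile.max()   # maximum intersections per circle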
Abstract:
The present study investigates human visual processing of simple two-colour patterns using a delayed match-to-sample paradigm with positron emission tomography (PET). The study is unique in that the visual stimuli were specifically designed to be the same for both pattern and colour recognition, with all patterns being abstract shapes, not easily verbally coded, composed of two-colour combinations. This was done to explore the brain regions required for both colour and pattern processing and to separate the areas of activation required for one or the other. We found that both tasks activated similar occipital regions, the major difference being more extensive activation in pattern recognition. A right-sided network involving the inferior parietal lobule, the head of the caudate nucleus, and the pulvinar nucleus of the thalamus was common to both paradigms. Pattern recognition also activated the left temporal pole and right lateral orbital gyrus, whereas colour recognition activated the left fusiform gyrus and several right frontal regions. (C) 2001 Wiley-Liss, Inc.
Abstract:
The processing of lexical ambiguity in context was investigated in eight individuals with schizophrenia and a matched control group. Participants made speeded lexical decisions on the third word in auditory word triplets representing concordant (coin-bank-money), discordant (river-bank-money), neutral (day-bank-money), and unrelated (river-day-money) conditions. When the interstimulus interval (ISI) between the words was 100 ms, individuals with schizophrenia demonstrated priming consistent with selective, context-based lexical activation. At a 1250 ms ISI, a pattern of nonselective meaning facilitation was obtained. These results suggest an attentional breakdown in the sustained inhibition of meanings on the basis of lexical context. (C) 2002 Elsevier Science (USA).
Abstract:
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for the temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal acts as a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for the temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
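A minimal sketch of the two steps described above, a frame-wise normalized cross-correlation surrogate followed by Dynamic Time Warping, is given below. The plain-Python DTW and the frame handling are illustrative assumptions rather than the authors' implementation.

    import numpy as np

    def surrogate_signal(frames):
        # Normalized cross-correlation of every frame with the end-diastolic
        # (here: first) frame; the curve tracks LV volume over the cardiac cycle.
        ref = frames[0].ravel().astype(float)
        ref = (ref - ref.mean()) / ref.std()
        values = []
        for frame in frames:
            v = frame.ravel().astype(float)
            v = (v - v.mean()) / v.std()
            values.append(np.dot(ref, v) / ref.size)
        return np.asarray(values)

    def dtw_cost(a, b):
        # Classic O(len(a)*len(b)) dynamic time warping on two 1-D signals;
        # the warping path used for synchronization can be traced back
        # through the returned accumulated-cost matrix.
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[1:, 1:]

    # mri_frames / us_frames: sequences of image arrays starting at end-diastole.
    # The end-systolic frame is then the minimum of each surrogate signal, e.g.
    #   es_mri = np.argmin(surrogate_signal(mri_frames))
    #   D = dtw_cost(surrogate_signal(mri_frames), surrogate_signal(us_frames))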
Abstract:
Auditory event-related potentials (AERPs) are widely used in diverse fields of today's neuroscience concerned with auditory processing, speech perception, language acquisition, neurodevelopment, attention and cognition in normal aging, gender, and developmental, neurologic and psychiatric disorders. However, their transposition to clinical practice has remained minimal, mainly due to the scarce literature on normative data across age, the wide spectrum of results, the variety of auditory stimuli used, and the different neuropsychological meanings attributed to AERP components by different authors. One of the most prominent AERP components studied in recent decades is N1, which reflects auditory detection and discrimination. Subsequently, N2 indicates attention allocation and phonological analysis. The simultaneous analysis of N1 and N2 elicited by feasible novelty experimental paradigms, such as the auditory oddball, seems an objective method to assess central auditory processing. The aim of this systematic review was to bring forward normative values for auditory oddball N1 and N2 components across age. EBSCO, PubMed, Web of Knowledge and Google Scholar were systematically searched for studies that elicited N1 and/or N2 with the auditory oddball paradigm. A total of 2,764 papers published between 1988 and 2013 (the last 25 years) were initially identified in the databases, of which 19 resulted from hand searches and additional references. A final total of 68 studies met the eligibility criteria, with a total of 2,406 participants from control groups for N1 (age range 6.6-85 years; mean 34.42) and 1,507 for N2 (age range 9-85 years; mean 36.13). Polynomial regression analysis revealed that N1 latency decreases with age at Fz and Cz; N1 amplitude at Cz decreases from childhood to adolescence and stabilizes after 30-40 years, whereas at Fz the decrement ends by 60 years and amplitude increases markedly after this age. Regarding N2, latency did not covary with age, but amplitude showed a significant decrement at both Cz and Fz. The results suggested reliable normative values for the Cz and Fz electrode locations; however, changes in brain development and component topography over age should be considered in clinical practice.
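The normative curves above come from polynomial regression of component measures on age. A minimal sketch of such a fit with numpy follows; the polynomial degree and the example age/latency values are assumptions for illustration, not data from the review.

    import numpy as np

    # Placeholder age (years) and N1 latency (ms) values, not data from the review.
    age = np.array([7, 12, 18, 25, 35, 50, 65, 80], dtype=float)
    n1_latency_cz = np.array([118, 112, 105, 100, 98, 96, 95, 94], dtype=float)

    # Second-order polynomial regression: latency = b0 + b1*age + b2*age**2
    coeffs = np.polyfit(age, n1_latency_cz, deg=2)
    model = np.poly1d(coeffs)

    predicted_latency_40 = model(40.0)                 # normative value at 40 years
    residuals = n1_latency_cz - model(age)
    r_squared = 1.0 - residuals.var() / n1_latency_cz.var()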
Abstract:
Dissertation presented to obtain the Ph.D. degree in Neuroscience, Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa.
Abstract:
Recent studies have demonstrated the positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third with neutral, one third with happy and one third with angry intonation), with intelligible semantic content (semantic content condition--SCC) and unintelligible semantic content (pure prosody condition--PPC). Reduced P50 amplitude was found in musicians. A difference between SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that auditory expertise characterizing extensive musical training may impact different stages of vocal emotional processing.
Abstract:
Whether the somatosensory system, like its visual and auditory counterparts, comprises parallel functional pathways for processing identity and spatial attributes (so-called "what" and "where" pathways, respectively) has hitherto been studied in humans using neuropsychological and hemodynamic methods. Here, electrical neuroimaging of somatosensory evoked potentials (SEPs) identified the spatio-temporal mechanisms subserving vibrotactile processing during two types of blocks of trials. "What" blocks varied stimuli in their frequency (22.5 Hz vs. 110 Hz) independently of their location (left vs. right hand); "where" blocks varied the same stimuli in their location independently of their frequency. In this way, there was a 2x2 within-subjects factorial design, counterbalancing the hand stimulated (left/right) and trial type (what/where). Responses to physically identical somatosensory stimuli differed within 200 ms post-stimulus onset, which is within the same timeframe we previously identified for audition (De Santis, L., Clarke, S., Murray, M.M., 2007. Automatic and intrinsic auditory "what" and "where" processing in humans revealed by electrical neuroimaging. Cereb Cortex 17, 9-17). Initially (100-147 ms), responses to each hand were stronger in the what than the where condition within a statistically indistinguishable network in the hemisphere contralateral to the stimulated hand, arguing against hemispheric specialization as the principal basis for somatosensory what and where pathways. Later (149-189 ms), responses differed topographically, indicative of the engagement of distinct configurations of brain networks. A common topography described responses to the where condition irrespective of the hand stimulated. By contrast, different topographies accounted for the what condition, varying also as a function of the hand stimulated. Parallel, functionally specialized pathways are observed across sensory systems and may be indicative of a computationally advantageous organization for processing spatial and identity information.
Abstract:
Sound localization relies on the analysis of interaural time and intensity differences, as well as attenuation patterns by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 subjects, including 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions, and for the processing of interaural time than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right than in the left hemispheric brain damage group, and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates, allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
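For orientation on the interaural time difference cue discussed above, the sketch below computes ITD with the standard spherical-head (Woodworth) approximation; the head radius, sound speed and azimuths are assumed values, and the abstract itself does not commit to any particular model.

    import numpy as np

    def itd_woodworth(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
        # Spherical-head approximation: ITD = (a / c) * (theta + sin(theta)),
        # with theta the azimuth in radians, a the head radius, c the speed of sound.
        theta = np.radians(azimuth_deg)
        return (head_radius_m / speed_of_sound) * (theta + np.sin(theta))

    # ITD grows from 0 at the midline to roughly 0.65 ms at 90 degrees azimuth.
    for azimuth in (0, 15, 45, 90):
        print(f"{azimuth:3d} deg -> {itd_woodworth(azimuth) * 1e6:6.1f} microseconds")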
Abstract:
When speech is degraded, word report is higher for semantically coherent sentences (e.g., her new skirt was made of denim) than for anomalous sentences (e.g., her good slope was done in carrot). Such increased intelligibility is often described as resulting from "top-down" processes, reflecting an assumption that higher-level (semantic) neural processes support lower-level (perceptual) mechanisms. We used time-resolved sparse fMRI to test for top-down neural mechanisms, measuring activity while participants heard coherent and anomalous sentences presented in speech envelope/spectrum noise at varying signal-to-noise ratios (SNR). The timing of BOLD responses to more intelligible speech provides evidence of hierarchical organization, with earlier responses in peri-auditory regions of the posterior superior temporal gyrus than in more distant temporal and frontal regions. Despite Sentence content × SNR interactions in the superior temporal gyrus, prefrontal regions respond after auditory/perceptual regions. Although we cannot rule out top-down effects, this pattern is more compatible with a purely feedforward or bottom-up account, in which the results of lower-level perceptual processing are passed to inferior frontal regions. Behavioral and neural evidence that sentence content influences perception of degraded speech does not necessarily imply "top-down" neural processes.
Abstract:
The proprotein convertases (PCs) are a family of nine mammalian enzymes that play key roles in the maintenance of cell homeostasis by activating or inactivating proteins via limited proteolysis under temporal and spatial control. A wide range of pathogens, including major human pathogenic viruses can hijack cellular PCs for their own purposes. In particular, productive infection with many enveloped viruses critically depends on the processing of their fusion-active viral envelope glycoproteins by cellular PCs. Based on their crucial role in virus-host interaction, PCs can be important determinants for viral pathogenesis and represent promising targets of therapeutic antiviral intervention. In the present review we will cover basic aspects and recent developments of PC-mediated maturation of viral envelope glycoproteins of selected medically important viruses. The molecular mechanisms underlying the recognition of PCs by viral glycoproteins will be described, including recent findings demonstrating differential PC-recognition of viral and cellular substrates. We will further discuss a possible scenario how viruses during co-evolution with their hosts adapted their glycoproteins to modulate the activity of cellular PCs for their own benefit and discuss the consequences for virus-host interaction and pathogenesis. Particular attention will be given to past and current efforts to evaluate cellular PCs as targets for antiviral therapeutic intervention, with emphasis on emerging highly pathogenic viruses for which no efficacious drugs or vaccines are currently available.
Abstract:
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent there are common mechanisms subserving such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated along basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ∼300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Abstract:
General background
Multisensory stimuli are easier to recognize, can improve learning, and are processed faster than unisensory ones. As such, the ability an organism has to extract and synthesize relevant sensory inputs across multiple sensory modalities shapes its perception of, and interaction with, the environment. A major question in the scientific field is how the brain extracts and fuses relevant information to create a unified perceptual representation (but also how it segregates unrelated information). This fusion between the senses has been termed "multisensory integration", a notion that derives from seminal animal single-cell studies performed in the superior colliculus, a subcortical structure shown to create a multisensory output differing from the sum of its unisensory inputs. At the cortical level, integration of multisensory information is traditionally deferred to higher classical associative cortical regions within the frontal, temporal and parietal lobes, after extensive processing within the sensory-specific and segregated pathways. However, many anatomical, electrophysiological and neuroimaging findings now speak for multisensory convergence and interactions as a distributed process beginning much earlier than previously appreciated and within the initial stages of sensory processing.

The work presented in this thesis is aimed at studying the neural basis and mechanisms of how the human brain combines sensory information between the senses of hearing and touch. Early-latency non-linear auditory-somatosensory neural response interactions have been repeatedly observed in humans and non-human primates. Whether these early, low-level interactions directly influence behavioral outcomes remains an open question, as they have been observed under diverse experimental circumstances such as anesthesia, passive stimulation, as well as speeded reaction time tasks. Under laboratory settings, it has been demonstrated that simple reaction times to auditory-somatosensory stimuli are facilitated over their unisensory counterparts whether or not the stimuli are delivered to the same spatial location, suggesting that auditory-somatosensory integration must occur in cerebral regions with large-scale spatial representations. However, experiments that required spatial processing of the stimuli have observed effects limited to spatially aligned conditions, or varying depending on which body part was stimulated. Whether those divergences stem from task requirements and/or the need for spatial processing has not been firmly established.

Hypotheses and experimental results
In a first study, we hypothesized that early non-linear auditory-somatosensory multisensory neural response interactions are relevant to behavior. Performing a median split according to reaction time of a subset of behavioral and electroencephalographic data, we found that the earliest non-linear multisensory interactions measured within the EEG signal (i.e. between 40-83 ms post-stimulus onset) were specific to fast reaction times, indicating a direct correlation between early neural response interactions and behavior.

In a second study, we hypothesized that the relevance of spatial information for task performance has an impact on behavioral measures of auditory-somatosensory integration. Across two psychophysical experiments we show that facilitated detection occurs even when attending to spatial information, with no modulation according to the spatial alignment of the stimuli. On the other hand, discrimination performance with probes, quantified using sensitivity (d'), is impaired following multisensory trials in general, and significantly more so following misaligned multisensory trials.

In a third study, we hypothesized that behavioral improvements might vary depending on which body part is stimulated. Preliminary results suggest a possible dissociation between behavioral improvements and ERPs. RTs to multisensory stimuli were modulated by space only when somatosensory stimuli were delivered to the neck, whereas multisensory ERPs were modulated by spatial alignment for both types of somatosensory stimuli.

Conclusion
This thesis provides insight into the functional role played by early, low-level multisensory interactions. Combining psychophysics and electrical neuroimaging techniques, we demonstrate the behavioral relevance of early, low-level interactions in the normal human system. Moreover, we show that these early interactions are hermetic to top-down influences on spatial processing, suggesting their occurrence within cerebral regions having access to large-scale spatial representations. We finally highlight specific interactions between auditory space and somatosensory stimulation of different body parts. Gaining an in-depth understanding of how multisensory integration normally operates is of central importance, as it will ultimately permit us to consider how the impaired brain could benefit from rehabilitation with multisensory stimulation.
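The second study above quantifies discrimination with the sensitivity index d'. A minimal sketch of that computation from hit and false-alarm counts follows; the log-linear correction and the example trial counts are illustrative assumptions, not values from the thesis.

    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Sensitivity d' = z(hit rate) - z(false-alarm rate); the +0.5 / +1.0
        # log-linear correction keeps the rates away from 0 and 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Example with made-up trial counts: probe discrimination after aligned
    # vs. spatially misaligned multisensory trials.
    d_aligned = d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38)
    d_misaligned = d_prime(hits=35, misses=15, false_alarms=18, correct_rejections=32)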
Abstract:
Because we live in an extremely complex social environment, people require the ability to memorize hundreds or thousands of social stimuli. The aim of this study was to investigate the effect of multiple repetitions on the processing of names and faces varying in terms of pre-experimental familiarity. We measured both behavioral and electrophysiological responses to self-, famous and unknown names and faces in three phases of the experiment (in every phase, each type of stimuli was repeated a pre-determined number of times). We found that the negative brain potential in posterior scalp sites observed approximately 170 ms after the stimulus onset (N170) was insensitive to pre-experimental familiarity but showed slight enhancement with each repetition. The negative wave in the inferior-temporal regions observed at approximately 250 ms (N250) was affected by both pre-experimental (famous>unknown) and intra-experimental familiarity (the more repetitions, the larger N250). In addition, N170 and N250 for names were larger in the left inferior-temporal region, whereas right-hemispheric or bilateral patterns of activity for faces were observed. The subsequent presentations of famous and unknown names and faces were also associated with higher amplitudes of the positive waveform in the central-parietal sites analyzed in the 320-900 ms time-window (P300). In contrast, P300 remained unchanged after the subsequent presentations of self-name and self-face. Moreover, the P300 for unknown faces grew more quickly than for unknown names. The latter suggests that the process of learning faces is more effective than learning names, possibly because faces carry more semantic information.
Abstract:
SOUND OBJECTS IN TIME, SPACE AND ACTION

The term "sound object" describes an auditory experience that is associated with an acoustic event produced by a sound source. At the cortical level, sound objects are represented by temporo-spatial activity patterns within distributed neural networks. This investigation concerns temporal, spatial and action-related aspects as assessed in normal subjects using electrical imaging or measurement of motor activity induced by transcranial magnetic stimulation (TMS).

Hearing the same sound again has been shown to facilitate behavioral responses (repetition priming) and to modulate neural activity (repetition suppression). In natural settings the same source is often heard again and again, with variations in its spectro-temporal and spatial characteristics. I have investigated how such repeats influence response times in a living vs. non-living categorization task, and the associated spatio-temporal patterns of brain activity in humans. Dynamic analysis of distributed source estimations revealed differential sound object representations within the auditory cortex as a function of the temporal history of exposure to these objects. Often-heard sounds are coded by a modulation of a bilateral network. Recently heard sounds, independently of the number of previous exposures, are coded by a modulation of a left-sided network.

With sound objects which carry spatial information, I have investigated how spatial aspects of the repeats influence neural representations. Dynamic analyses of distributed source estimations revealed an ultra-rapid discrimination of sound objects which are characterized by spatial cues. This discrimination involved two temporo-spatially distinct cortical representations, one associated with position-independent and the other with position-linked representations within the auditory ventral/"what" stream.

Action-related sounds have been shown to increase the excitability of motoneurons within the primary motor cortex, possibly via an input from the mirror neuron system. The role of motor representations remains unclear. I have investigated repetition priming-induced plasticity of the motor representations of action sounds by measuring motor activity induced by TMS pulses applied over the hand motor cortex. TMS delivered to the hand area within the primary motor cortex yielded larger motor evoked potentials (MEPs) while the subject was listening to sounds associated with manual than with non-manual actions. Repetition suppression was observed at the motoneuron level, since during repeated exposure to the same manual action sound the MEPs became smaller. I discuss these results in terms of specialized neural networks involved in sound processing, characterized by repetition-induced plasticity.

Thus, neural networks which underlie sound object representations are characterized by modulations which keep track of the temporal and spatial history of the sound and, in the case of action-related sounds, also of the way in which the sound is produced.