851 results for Multisensory Integration
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation in one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task in combination with visually or audio-visually presented animate and inanimate objects, presented in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found an enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
Abstract:
Graduate Program in Design - FAAC
Abstract:
Assessment of brain connectivity among different brain areas during cognitive or motor tasks is a crucial problem in neuroscience today. The aim of this research study is to use neural mass models to assess the effect of various connectivity patterns on cortical EEG power spectral density (PSD), and to investigate the possibility of deriving connectivity circuits from EEG data. To this end, two different models have been built. In the first model, an individual region of interest (ROI) is built as the parallel arrangement of three populations, each exhibiting a unimodal spectrum at low, medium or high frequency. Connectivity among ROIs includes three parameters, which specify the strength of connection in the different frequency bands. Subsequent studies demonstrated that a single population can exhibit many different simultaneous rhythms, provided that some of these come from external sources (for instance, from remote regions). For this reason, in the second model an individual ROI is simulated with a single population only. Both models have been validated by comparing the simulated power spectral density with that computed in some cortical regions during cognitive and motor tasks. Another research study focused on multisensory integration of tactile and visual stimuli in the representation of the near space around the body (peripersonal space). This work describes an original neural network to simulate the representation of the peripersonal space around the hands, in basal conditions and after training with a tool used to reach the far space. The model is composed of three areas for each hand: two unimodal areas (visual and tactile) connected to a third, bimodal area (visual-tactile), which is activated only when a stimulus falls within the peripersonal space.
Results show that the peripersonal space, which includes just a small visual space around the hand in normal conditions, becomes elongated in the direction of the tool after training, thanks to a reinforcement of synapses.
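The elongation result above can be illustrated with a toy version of the bimodal unit: a minimal sketch in which visual synaptic weights fall off with distance from the hand and tool training reinforces the weights along the tool axis. All parameter values (distance constants, weights, thresholds) are invented for illustration, not taken from the thesis.

```python
import numpy as np

def sigmoid(x, theta=1.0, slope=5.0):
    return 1.0 / (1.0 + np.exp(-slope * (x - theta)))

# Visual weights onto the bimodal neuron fall off with distance from the hand
# (distances in cm; all numbers are illustrative).
distances = np.arange(0, 100, 5)          # visual stimulus distance from the hand
w_visual = np.exp(-distances / 15.0)      # strong only near the hand

def bimodal_response(d_index, tactile=1.0, w_v=w_visual):
    # The bimodal area fires only when both modalities provide enough drive,
    # i.e. when the visual stimulus falls within peripersonal space.
    return sigmoid(0.5 * tactile + w_v[d_index])

# "Tool training": Hebbian-like reinforcement of visual synapses along the
# tool axis elongates the region where the bimodal neuron responds.
w_trained = w_visual + 0.6 * np.exp(-np.abs(distances - 60) / 20.0)

near, far = 1, 12   # indices: ~5 cm vs ~60 cm from the hand
print(bimodal_response(near))                  # high before training (near space)
print(bimodal_response(far))                   # low before training (far space)
print(bimodal_response(far, w_v=w_trained))    # higher after tool training
```

The key design choice mirrors the abstract: the bimodal unit's receptive field is not changed directly; only the visual synapses feeding it are strengthened, which is enough to extend the responsive region toward the tool tip.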
Abstract:
This thesis aimed at verifying the role of the superior colliculus (SC) in human spatial orienting. To do so, subjects performed two experimental tasks that have been shown to involve SC activation in animals: a multisensory integration task (Experiments 1 and 2) and a visual target selection task (Experiment 3). To investigate this topic in humans, we took advantage of neurophysiological findings revealing that retinal S-cones do not send projections to the collicular and magnocellular pathways. In Experiment 1, subjects performed a simple reaction-time task in which they were required to respond as quickly as possible to any sensory stimulus (visual, auditory or bimodal audio-visual). The visual stimulus could be an S-cone stimulus (invisible to the collicular and magnocellular pathways) or a long-wavelength stimulus (visible to the SC). Results showed that when using S-cone stimuli, the RT distribution was explained simply by probability summation, indicating that the redundant auditory and visual channels are independent. Conversely, with red long-wavelength stimuli, visible to the SC, the RT distribution was related to nonlinear neural summation, which constitutes evidence of integration of different sensory information. We also demonstrate that when AV stimuli were presented at fixation, so that the spatial orienting component of the task was reduced, neural summation was possible regardless of stimulus color. Together, these findings provide support for a pivotal role of the SC in mediating multisensory spatial integration in humans when behavior involves spatial orienting responses. Since previous studies have shown an anatomical asymmetry of fibres projecting to the SC from the hemiretinas, Experiment 2 aimed at investigating the temporo-nasal asymmetry in multisensory integration. To do so, subjects performed monocularly the same task as in Experiment 1. When spatially coincident audio-visual stimuli were visible to the SC (i.e. 
red stimuli), the RTE depended on a neural coactivation mechanism, suggesting an integration of multisensory information. When using stimuli invisible to the SC (i.e. purple stimuli), the RTE depended only on a simple statistical facilitation effect, in which the two sensory stimuli were processed by independent channels. Finally, we demonstrate that the multisensory integration effect was stronger for stimuli presented to the temporal hemifield than to the nasal hemifield. Taken together, these findings suggest that multisensory stimulation can be differentially effective depending on specific stimulus parameters. Experiment 3 aimed at verifying the role of the SC in target selection by using a color-oddity search task, comprising stimuli either visible or invisible to the collicular and magnocellular pathways. Subjects were required to make a saccade toward a target that could be presented alone or with three distractors of another color (either S-cone or long-wavelength). When using S-cone distractors, invisible to the SC, localization errors were similar to those observed in the distractor-free condition. Conversely, with long-wavelength distractors, visible to the SC, saccadic localization error and variability were significantly greater than in either the distractor-free condition or the S-cone distractor condition. Our results clearly indicate that the SC plays a direct role in visual target selection in humans. Overall, our results indicate that the SC plays an important role in mediating spatial orienting responses both when covert (Experiments 1 and 2) and overt (Experiment 3) orienting is required.
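The contrast drawn above between probability summation (independent channels) and neural coactivation is conventionally tested with Miller's race-model inequality: under independence, the bimodal RT distribution can never exceed the sum of the unimodal distributions at any time point. A minimal sketch on simulated reaction times; the distributions and the 20 ms coactivation gain are invented for illustration, not the thesis's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated reaction times in ms (illustrative, not experimental data).
rt_v = rng.normal(350, 40, 2000)   # visual alone
rt_a = rng.normal(330, 40, 2000)   # auditory alone
# Bimodal RTs: the faster of two racers, sped up by 20 ms to mimic
# neural coactivation beyond mere statistical facilitation.
rt_av = np.minimum(rng.normal(350, 40, 2000), rng.normal(330, 40, 2000)) - 20

def cdf(samples, t):
    return np.mean(samples <= t)

# Miller's race-model inequality: with independent channels (probability
# summation), F_AV(t) <= F_V(t) + F_A(t) for every t. A violation at any t
# is evidence of neural summation / coactivation.
ts = np.linspace(200, 450, 50)
violation = any(cdf(rt_av, t) > cdf(rt_v, t) + cdf(rt_a, t) for t in ts)
print("race-model violation:", violation)
```

Rerunning with the 20 ms gain removed makes `rt_av` a pure race of independent channels, and the inequality then holds, which is the pattern the thesis reports for S-cone (SC-invisible) stimuli.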
Abstract:
The human brain is equipped with a flexible audio-visual system, which interprets and guides responses to external events according to the spatial alignment, temporal synchronization and effectiveness of unimodal signals. The aim of the present thesis was to explore the possibility that such a system might represent the neural correlate of sensory compensation after damage to one sensory pathway. To this purpose, three experimental studies were conducted, addressing the immediate, short-term and long-term effects of audio-visual integration in patients with Visual Field Defect (VFD). Experiment 1 investigated whether the integration of stimuli from different modalities (cross-modal) and from the same modality (within-modal) has a different, immediate effect on localization behaviour. Patients had to localize modality-specific stimuli (visual or auditory), cross-modal stimulus pairs (visual-auditory) and within-modal stimulus pairs (visual-visual). Results showed that cross-modal stimuli evoked a greater improvement than within-modal stimuli, consistent with a Bayesian explanation. Moreover, even when visual processing was impaired, cross-modal stimuli improved performance in an optimal fashion. These findings support the hypothesis that the improvement derived from multisensory integration is not attributable to simple target redundancy, and prove that optimal integration of cross-modal signals occurs at processing stages that are not consciously accessible. Experiment 2 examined the possibility of inducing a short-term improvement of localization performance without explicit knowledge of the visual stimulus. Patients with VFD and patients with neglect had to localize weak sounds before and after a brief exposure to passive cross-modal stimulation, which comprised spatially disparate or spatially coincident audio-visual stimuli. 
After exposure to spatially disparate stimuli in the affected field, only patients with neglect exhibited a shift of auditory localization toward the visual attractor (the so-called Ventriloquism After-Effect). In contrast, after adaptation to spatially coincident stimuli, both neglect and hemianopic patients exhibited a significant improvement of auditory localization, proving the occurrence of an after-effect for multisensory enhancement. These results suggest the presence of two distinct recalibration mechanisms, each mediated by a different neural route: a geniculo-striate circuit and a colliculus-extrastriate circuit, respectively. Finally, Experiment 3 verified whether a systematic audio-visual stimulation could exert a long-lasting effect on patients' oculomotor behaviour. Eye-movement responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training, in a group of twelve patients with VFD and twelve control subjects. Results showed that prior to treatment, patients' performance was significantly different from that of controls with respect to fixation and saccade parameters; after audio-visual training, all patients showed an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Similarly, reading parameters were significantly affected by the training, with respect to the specific impairments observed in left and right hemisphere-damaged patients. The present findings provide evidence that a systematic audio-visual stimulation may encourage a more organized pattern of visual exploration, with long-lasting effects. In conclusion, results from these studies clearly demonstrate that the beneficial effects of audio-visual integration can be retained in the absence of explicit processing of the visual stimulus. 
Surprisingly, an improvement of spatial orienting can be obtained not only when an on-line response is required, but also after either a brief or a long adaptation to audio-visual stimulus pairs, suggesting the maintenance of mechanisms subserving cross-modal perceptual learning after damage to the geniculo-striate pathway. The colliculus-extrastriate pathway, which is spared in patients with VFD, seems to play a pivotal role in this sensory compensation.
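The "Bayesian explanation" and "optimal fashion" mentioned for Experiment 1 refer to maximum-likelihood cue combination, in which each modality's estimate is weighted by its reliability (inverse variance) and the fused estimate is more reliable than either cue alone. A worked sketch with illustrative numbers (a degraded visual estimate with large variance, as in a VFD patient, plus an auditory estimate):

```python
# Maximum-likelihood (Bayesian) cue combination for a 1-D location estimate.
def fuse(mu_v, var_v, mu_a, var_a):
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # visual reliability weight
    mu = w_v * mu_v + (1 - w_v) * mu_a            # reliability-weighted mean
    var = 1 / (1 / var_v + 1 / var_a)             # fused variance
    return mu, var

# Illustrative numbers (degrees of azimuth): an unreliable visual cue and a
# sharper auditory cue.
mu, var = fuse(mu_v=10.0, var_v=16.0, mu_a=4.0, var_a=4.0)
print(mu, var)   # fused variance falls below the better unimodal cue (4.0)
```

The fused estimate is pulled toward the more reliable auditory cue, and its variance (3.2) is smaller than both unimodal variances, which is the signature of optimal integration the experiment tests against.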
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation against data available in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two different projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external-world stimuli. This activity was realized in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. Object representation in a number of cognitive functions, like perception and recognition, involves distributed processes across different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the features of the same object (the binding problem) while keeping segregated the properties belonging to different objects simultaneously present (the segmentation problem). 
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called “assembly coding” theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different, distributed cortical areas; 2) recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and in Chapter 1.2 we present two neural network models for object recognition, based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or lacking features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. 
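The binding-by-synchrony scheme described above can be illustrated in miniature. The sketch below uses coupled Kuramoto-style phase oscillators rather than the Wilson-Cowan units of the actual models: two gamma-band oscillators standing for features of the same object are mutually coupled and phase-lock, while an uncoupled third oscillator (a feature of a different object) stays unlocked. Frequencies, coupling strength and initial phases are illustrative.

```python
import numpy as np

# Binding-by-synchrony in miniature: units 0 and 1 (features of one object)
# are mutually coupled; unit 2 (a different object's feature) is uncoupled.
def simulate(coupling, steps=20000, dt=1e-4):
    w = 2 * np.pi * np.array([40.0, 42.0, 55.0])   # natural frequencies (Hz)
    K = np.array([[0.0, coupling, 0.0],
                  [coupling, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])                # coupling matrix
    th = np.array([0.0, 2.0, 4.0])                 # initial phases (rad)
    for _ in range(steps):
        # Kuramoto update: dθ_i/dt = ω_i + Σ_j K_ij sin(θ_j - θ_i)
        dth = w + (K * np.sin(th[None, :] - th[:, None])).sum(axis=1)
        th = th + dt * dth
    return th

th = simulate(coupling=30.0)
# Wrapped phase difference between the two coupled (bound) features:
phase_diff_01 = np.angle(np.exp(1j * (th[0] - th[1])))
print(abs(phase_diff_01))   # small: features 0 and 1 are phase-locked
```

With coupling strong enough relative to the 2 Hz frequency mismatch (|Δω| ≤ 2K), the coupled pair settles on a small fixed phase lag; setting `coupling=0.0` leaves all three phases drifting independently, the time-domain segregation the abstract describes.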
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater if the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). 
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons in conditions of cortical activation and deactivation. 
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional and cortex deactivated, and with a particular type of membrane receptors (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
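The enhancement and inverse-effectiveness behaviours that the model reproduces are commonly quantified with the multisensory enhancement index used in the SC literature (Stein and colleagues): the percentage gain of the combined response over the best unisensory response. A minimal sketch with illustrative response values (spikes per trial; not data from the thesis):

```python
# Multisensory enhancement index:
# ME = (CM - max(A, V)) / max(A, V) * 100,
# where CM is the response to the combined stimulus and A, V are the
# responses to the unisensory component stimuli.
def enhancement(cm, a, v):
    best = max(a, v)
    return (cm - best) / best * 100.0

# Inverse effectiveness: weaker unisensory responses yield proportionately
# larger enhancement (numbers are illustrative).
print(enhancement(cm=30.0, a=20.0, v=15.0))   # strong stimuli: 50.0 (%)
print(enhancement(cm=9.0, a=3.0, v=2.0))      # weak stimuli: 200.0 (%)
```

With cortical inputs deactivated, the abstract states CM collapses to max(A, V), i.e. ME drops to roughly zero, which is the benchmark the model is tested against.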
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. 
This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as the definition of a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated uni- and multisensory functional connectivity networks from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research, and of using independent datasets to test hypotheses generated from a data-driven analysis.
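The max-criterion (A < AV > V) used above is a simple pointwise test on per-condition response estimates: a region qualifies as integrative if its AV response exceeds the stronger of its two unisensory responses. A sketch with invented per-region beta values (the region names and numbers are illustrative, not the study's estimates):

```python
# Max-criterion for multisensory integration: AV must exceed the stronger
# unisensory response, i.e. A < AV and V < AV.
def max_criterion(beta_a, beta_v, beta_av):
    return beta_av > max(beta_a, beta_v)

# Illustrative condition estimates (beta_A, beta_V, beta_AV) per region:
regions = {
    "STS_candidate": (0.8, 0.9, 1.4),   # AV > both unisensory: passes
    "V1_like":       (0.1, 1.2, 1.2),   # AV merely equals V: fails
}
for name, (a, v, av) in regions.items():
    print(name, max_criterion(a, v, av))
```

The criterion is deliberately conservative: a region driven purely by its preferred modality (AV equal to the best unisensory response) does not pass, which is why only nine of the twelve candidate regions met it.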
Abstract:
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Abstract:
Aim: Unilateral loss of the posterior visual cortex causes cortical blindness contralateral to the lesion, known as homonymous hemianopia (HH). It is accompanied in particular by visual exploration problems in the blind hemifield, due to deficient oculomotor strategies, which have been the target of compensation therapies. This loss of vision can, however, be accompanied by unconscious visual perception, called blindsight. Our hypothesis proposes that blindsight is mediated by the extrastriate retino-collicular pathway, recruiting the superior colliculus (SC), a multisensory structure. Our program aims to evaluate the impact of multisensory (audio-visual) training on the unconscious visual performance of hemianopic individuals and on their oculomotor strategies. We thereby attempt to demonstrate the involvement of the SC in the blindsight phenomenon and the relevance of the multisensory compensation technique as a rehabilitation therapy. Method: Our participant, ML, who has a right HH, underwent audio-visual integration training for a period of 10 days. We assessed visual performance in localization and detection, as well as oculomotor strategies, according to three main comparisons: (1) between the normal hemifield and the blind hemifield; (2) between the visual condition and the audio-visual conditions; (3) between the pre-training, post-training and 3-month post-training sessions. Results: We demonstrated that (1) saccade and fixation characteristics are deficient in the blind hemifield; (2) saccadic strategies differ according to eccentricities and stimulation conditions; (3) long-term saccadic adaptation is possible in the blind hemifield if the proper frame of reference is considered; (4) the improvement of eye movements is linked to blindsight. 
Conclusion(s): Multisensory training leads to an improvement in visual performance for unperceived targets, in both localization and detection, possibly induced by the development of oculomotor performance.
Abstract:
The article explores the possibilities of formalizing and explaining the mechanisms that support spatial and social perspective alignment sustained over the duration of a social interaction. The basic proposed principle is that in social contexts the mechanisms for sensorimotor transformations and multisensory integration (learn to) incorporate information relative to the other actor(s), similar to the "re-calibration" of visual receptive fields in response to repeated tool use. This process aligns or merges the co-actors' spatial representations and creates a "Shared Action Space" (SAS) supporting key computations of social interactions and joint actions; for example, the remapping between the coordinate systems and frames of reference of the co-actors, including perspective taking, the sensorimotor transformations required for jointly lifting an object, and the predictions of the sensory effects of such joint action. The social re-calibration is proposed to be based on common basis function maps (BFMs) and could constitute an optimal solution to sensorimotor transformation and multisensory integration in joint action or, more generally, social interaction contexts. However, certain situations, such as discrepant postural and viewpoint alignment and associated differences in perspectives between the co-actors, could constrain the process quite differently. We discuss how alignment is achieved in the first place, and how it is maintained over time, providing a taxonomy of various forms and mechanisms of space alignment and overlap based, for instance, on automaticity vs. control of the transformations between the two agents. Finally, we discuss the link between low-level mechanisms for the sharing of space and high-level mechanisms for the sharing of cognitive representations. © 2013 Pezzulo, Iodice, Ferraina and Kessler.
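One ingredient of the remapping between co-actors' frames of reference described above is an ordinary rigid coordinate transformation: expressing a target seen in one actor's egocentric frame in the co-actor's frame, given the co-actor's pose. The basis-function machinery aside, a minimal 2-D sketch (the poses and target location are illustrative):

```python
import numpy as np

# Remap a target from actor A's egocentric frame into co-actor B's frame:
# translate into B's origin, then rotate into B's heading.
def to_coactor_frame(target_xy, coactor_pos, coactor_heading_rad):
    shifted = np.asarray(target_xy) - np.asarray(coactor_pos)
    c, s = np.cos(-coactor_heading_rad), np.sin(-coactor_heading_rad)
    R = np.array([[c, -s],
                  [s,  c]])      # rotation into the co-actor's heading
    return R @ shifted

# A co-actor standing at (2, 0), facing actor A (heading rotated by pi):
# a target 1 m in front of A lies 1 m in front of the co-actor as well.
print(to_coactor_frame([1.0, 0.0], [2.0, 0.0], np.pi))
```

In the SAS proposal this transformation is not computed symbolically but learned and carried implicitly by shared basis function maps; the sketch only makes explicit what the remapping has to achieve.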
Abstract:
Research in pediatric central nervous system pathophysiology focuses on three primary goals: identification of neurodevelopmental disorders, understanding the differences in brain development which underlie these disorders, and improving treatment for these young children. Autism spectrum disorders (ASDs) are a complex set of disorders which are characterized by difficulties in language and social interactions. These behavioral measures are highly variable, and a number of underlying causes can generate similar behavioral effects. Therefore, it is important to identify neurophysiological markers to better identify and characterize these disorders. Recent ASD findings using MEG show atypical latency and amplitude responses and poor cortical connectivity in children with ASDs across the cognitive spectrum, from basic auditory processing and multisensory integration to face and semantic processing. These results further support the view that ASDs are complex, neurologically based disorders. On the other hand, the cause of Down syndrome is well understood as originating from a partial or full replication of chromosome 21. However, the cognitive and neurological consequences of this chromosomal abnormality are not yet well understood. Using a simple observation and motor execution task, poor functional connectivity in sensory-motor areas, particularly in the gamma band range, has been identified in children with Down syndrome and is consistent with behavioral deficits in the sensory-motor realm. Additional studies are needed to better understand whether targeted identification of these abnormalities can facilitate treatment in this disorder. Finally, while epilepsy can be reliably diagnosed, seizure control is still limited in many cases where the seizure onset zone is not readily apparent. Advances in pre-surgical evaluation and intra-operative co-registration will be described. These studies describing pediatric CNS pathophysiology will be discussed. 
© Springer-Verlag 2010.
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies. 
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
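A minimal sketch of the band-power comparison underlying the second experiment, using synthetic ear-canal traces and an assumed microphone sampling rate; the real analysis would of course operate on the measured signals rather than noise with a built-in variance drop.

```python
import numpy as np

fs = 48_000                       # microphone sampling rate (assumed), Hz
rng = np.random.default_rng(1)

# Synthetic 1 s ear-canal traces: variance is lower in the window where
# the sound is expected, mimicking the reported power drop.
baseline = rng.standard_normal(fs)
expecting = 0.5 * rng.standard_normal(fs)

def band_power(x, fs, lo=100.0, hi=4000.0):
    """Mean periodogram power in [lo, hi] Hz (here the 0.1-4 kHz band)."""
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (f >= lo) & (f <= hi)
    return pxx[band].mean()

# Power change (dB) in the expectation window relative to baseline;
# a negative value corresponds to the drop described in the abstract.
drop_db = 10 * np.log10(band_power(expecting, fs) / band_power(baseline, fs))
```

Halving the signal amplitude quarters its power, so this synthetic example yields a drop of roughly 6 dB in the 0.1–4 kHz band.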
Resumo:
Vision plays a very important role in the prevention of danger. Pain likewise serves to prevent bodily harm. We therefore tested the hypothesis that blindness would lead to hypersensitivity to pain as a form of sensory compensation. Indeed, an extensive literature indicates that cross-modal plasticity occurs in blind individuals, upregulating the sensitivity of their remaining senses. Moreover, several studies show that pain can be modulated by vision and by temporary visual deprivation. In a first study, we measured thermal detection thresholds and pain thresholds in congenitally blind and sighted participants using a thermode that can heat or cool the skin. Participants were also asked to rate the pain perceived in response to CO2 laser stimuli and to complete questionnaires measuring their attitudes toward painful situations in everyday life. The results show that congenitally blind participants have lower pain thresholds and higher pain ratings than their sighted counterparts. Furthermore, the psychometric results indicate that blind individuals are more attentive to pain. In a second study, we measured the impact of visual experience on pain perception by replicating the first study in a sample of late-blind participants. The results show that the latter are in every respect similar to sighted participants in their sensitivity to pain. In a third study, we tested the temperature-discrimination abilities of congenitally blind participants, since detecting rapid temperature changes is crucial for avoiding burns. Congenitally blind participants turned out to have finer temperature discrimination and to be more sensitive to spatial summation of heat.
In a fourth study, we examined the contribution of Aδ and C fibers to nociceptive processing in blind individuals, as these receptors signal first and second pain, respectively. We observed that congenitally blind participants detect more readily, and respond faster to, the sensations generated by activation of C fibers. In a fifth and final study, we probed the changes that loss of vision might bring about in the descending modulation of nociceptive input by measuring the effects of anticipating a noxious stimulus on pain perception. The results show that, unlike sighted participants, congenitally blind participants experience exacerbated pain under uncertainty about impending harm, suggesting that central pain modulation is facilitated in this group. Overall, this work indicates that the absence of visual experience, rather than blindness per se, leads to increased nociceptive sensitivity, adding another dimension to the multisensory integration model of vision and pain.
Resumo:
Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has received increasing attention. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. This behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. Yet the overwhelming majority of both research lines is devoted to basic research. Apart from work in the domains of music, dance and motor rehabilitation, there is almost no evidence for enhanced effectiveness of multisensory information in the learning of gross motor skills. To narrow this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with the visual and proprioceptive perceptual feedback streams. With ongoing training, synchronously processed auditory information should be integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches that use acoustic information as error feedback in motor learning settings, we aim to generate additional movement information suitable for accelerating and enhancing adequate sensorimotor representations, processible below the level of consciousness.
In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition on an indoor rowing machine). One group was trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). In all three groups learning became evident and remained stable. Participants trained with additional movement sonification showed better performance than both other groups. The results indicate that movement sonification enhances the learning of a complex gross motor skill, even exceeding the usually expected acoustic rhythmic effects on motor learning.
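The direct mapping of motion parameters to sound described above can be sketched as a linear mapping of one kinematic parameter onto pitch. The frequency range, velocity limit, and simulated stroke profile below are illustrative assumptions, not the study's actual sonification settings.

```python
import numpy as np

fs = 44_100                               # audio sampling rate, Hz

def sonify_velocity(velocity, fs=fs, f_min=220.0, f_max=880.0, v_max=3.0):
    """Map a kinematic parameter (e.g. handle speed in m/s) linearly onto
    pitch: faster movement -> higher tone. Range choices are illustrative."""
    v = np.clip(np.abs(velocity), 0.0, v_max)
    freq = f_min + (f_max - f_min) * v / v_max    # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / fs      # phase accumulation
    return np.sin(phase)                          # continuous audio signal

# One simulated 2 s rowing stroke: speed rises and falls smoothly,
# so the tone glides up and back down with the movement.
t = np.linspace(0.0, 2.0, 2 * fs)
velocity = 3.0 * np.sin(np.pi * t / 2.0) ** 2
tone = sonify_velocity(velocity)
```

Accumulating phase sample by sample, rather than computing `sin(2*pi*freq*t)` directly, keeps the waveform continuous as the frequency changes, which matters for the smooth, click-free real-time feedback the approach depends on.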