999 results for Multisensory processing


Relevance: 100.00%

Abstract:

Autism spectrum disorders (ASD) are currently characterized by a triad of impairments, including social dysfunction, communication deficits, and repetitive behaviours. The simultaneous integration of multiple senses is crucial in everyday life, as it allows the creation of a unified percept. Similarly, the allocation of attention to multiple simultaneous stimuli is critical for processing dynamic environmental information. In everyday interaction with the environment, sensory processing and attentional functions are basic components of typical development (TD). Although they are not part of the current diagnostic criteria, difficulties in attentional functions and sensory processing are very common among autistic individuals. The present thesis therefore evaluates these functions in two separate studies. The first study is based on the premise that alterations in basic sensory processing could underlie the atypical sensory behaviours in ASD, as proposed by current theories of ASD. We designed a cross-modal size discrimination task to investigate the integrity and developmental trajectory of visuo-tactile information processing in children with ASD (N = 21, aged 6 to 18 years), compared with TD children matched on age and performance IQ. In a simultaneous two-alternative forced-choice task, participants judged the size of two stimuli on the basis of unisensory (visual or tactile) or multisensory (visuo-tactile) inputs. Differential thresholds assessed the smallest difference at which participants were able to discriminate size. Children with ASD showed diminished performance and no maturation effect in both the unisensory and multisensory conditions, compared with TD participants. Our first study therefore extends previous findings of altered multisensory processing in ASD to the visuo-tactile domain. In our second study, we assessed three-dimensional multiple object tracking (3D-MOT) abilities in autistic adults (N = 15, aged 18 to 33 years), compared with age- and IQ-matched control participants, who had to track one or three moving targets among distractors in a virtual-reality environment. Performance was measured by speed thresholds, which assess the highest speed at which observers are able to track moving objects. Autistic individuals showed reduced speed thresholds overall, regardless of the number of objects to track. These results extend previous findings of impaired attentional mechanisms in autism to the simultaneous allocation of attention to multiple locations. Taken together, the results of our two studies reveal alterations in ASD in the simultaneous processing of multiple events, whether within or across modalities, which may have important implications for the clinical presentation of this condition.
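The differential thresholds described above are typically obtained by fitting a psychometric function to the forced-choice responses. A minimal sketch of that kind of fit, assuming a cumulative-Gaussian shape and using invented data points (this is not code or data from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: size difference between the two stimuli (in %) and
# the proportion of correct size judgements at each difference.
size_diff = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
p_correct = np.array([0.52, 0.55, 0.63, 0.78, 0.92, 0.99])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian rising from chance (0.5) to 1.0 in a
    # two-alternative forced-choice task; 75% correct falls at x = mu.
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, size_diff, p_correct, p0=[5.0, 5.0])
print(f"Differential threshold (75% correct): {mu:.2f}% size difference")
```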

Relevance: 100.00%

Abstract:

This thesis is an investigation of structural brain abnormalities, as well as multisensory and unisensory processing deficits, in autistic traits and Autism Spectrum Disorder (ASD). To achieve this, structural and functional magnetic resonance imaging (fMRI) and psychophysical techniques were employed. ASD is a neurodevelopmental condition characterised by social communication and interaction deficits, as well as repetitive patterns of behaviour, interests and activities. These traits are also thought to be present in the typical population. The Autism Spectrum Quotient questionnaire (AQ) was developed to assess the prevalence of autistic traits in the general population. Von dem Hagen et al. (2011) revealed a link between AQ and white matter (WM) and grey matter (GM) volume (using voxel-based morphometry). However, their findings revealed no difference in GM in areas associated with social cognition. Cortical thickness (CT) measurements are known to be a more direct measure of cortical morphology than GM volume. Therefore, Chapter 2 investigated the relationship between AQ scores and CT in the same sample of participants. This study showed that AQ scores correlated with CT in the left temporo-occipital junction, left posterior cingulate, right precentral gyrus and bilateral precentral sulcus in a typical population. These areas were previously associated with structural and functional differences in ASD. Thus the findings suggest that, to some extent, autistic traits are reflected in brain structure in the general population. The ability to integrate auditory and visual information is crucial to everyday life, and results are mixed regarding how ASD influences audiovisual integration. To investigate this question, Chapter 3 examined the Temporal Integration Window (TIW), which indicates how precisely sight and sound need to be temporally aligned for a unitary audiovisual event to be perceived. 26 adult males with ASD and 26 age- and IQ-matched typically developed males were presented with beep-flash (BF), point-light drummer, and face-voice (FV) displays with varying degrees of asynchrony and asked to make Synchrony Judgements (SJs) and Temporal Order Judgements (TOJs). Analyses included fitting Gaussian functions to the data as well as fitting an Independent Channels Model (ICM; Garcia-Perez & Alcala-Quintana, 2012). Gaussian curve fitting for SJs showed that the ASD group had a wider TIW, but for TOJs no group effect was found. The ICM supported these results, and its parameters indicated that the wider TIW for SJs in the ASD group was not due to sensory processing at the unisensory level, but rather to decreased temporal resolution at a decisional level of combining sensory information. Furthermore, when performing TOJs, the ICM revealed a smaller Point of Subjective Simultaneity (PSS; closer to physical synchrony) in the ASD group than in the TD group. Finding that audiovisual temporal processing is different in ASD encouraged us to investigate the neural correlates of multisensory as well as unisensory processing using functional magnetic resonance imaging (fMRI). Therefore, Chapter 4 investigated audiovisual, auditory and visual processing in ASD of simple BF displays and complex, social FV displays. During a block-design experiment, we measured the BOLD signal while 13 adults with ASD and 13 age-, sex- and IQ-matched typically developed (TD) adults were presented with audiovisual, auditory and visual information from BF and FV displays.
Our analyses revealed that processing of audiovisual as well as unisensory auditory and visual stimulus conditions, in both the BF and FV displays, was associated with reduced activation in ASD. Audiovisual, auditory and visual conditions of FV stimuli revealed reduced activation in ASD in regions of the frontal cortex, while BF stimuli revealed reduced activation in the lingual gyri. The inferior parietal gyrus revealed an interaction between group and the sensory condition of BF stimuli. Conjunction analyses revealed smaller audiovisual-sensitive regions of the superior temporal cortex (STC) in ASD. Against our predictions, the STC did not reveal any activation differences, per se, between the two groups. However, a superior frontal area was shown to be sensitive to audiovisual face-voice stimuli in the TD group, but not in the ASD group. Overall, this study indicated differences in brain activity for audiovisual, auditory and visual processing of social and non-social stimuli in individuals with ASD compared to TD individuals. These results contrast with previous behavioural findings, which suggested different audiovisual integration yet intact auditory and visual processing in ASD. Our behavioural findings revealed audiovisual temporal processing deficits in ASD during SJ tasks; we therefore investigated the neural correlates of SJs in ASD and TD controls. As in Chapter 4, we used fMRI in Chapter 5 to investigate audiovisual temporal processing in ASD, in the same participants as recruited in Chapter 4. BOLD signals were measured while the ASD and TD participants were asked to make SJs on audiovisual displays at different levels of asynchrony: the participant's PSS, audio leading visual information (audio first), and visual leading audio information (visual first). Whereas no effect of group was found with BF displays, increased putamen activation was observed in ASD participants compared to TD participants when making SJs on FV displays. Investigating SJs on audiovisual displays in the bilateral superior temporal gyrus (STG), an area involved in audiovisual integration (see Chapter 4), we found no group differences or interaction between group and levels of audiovisual asynchrony. The investigation of different levels of asynchrony revealed a complex pattern of results, indicating a network of areas more involved in processing PSS than audio-first and visual-first displays, as well as areas responding differently to audio first compared to visual first. These activation differences between audio first and visual first in different brain areas are consistent with the view that audio-leading and visual-leading stimuli are processed differently.
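The Gaussian curve fitting for synchrony judgements described in Chapter 3 can be illustrated with a short sketch. The asynchronies and response proportions below are invented, and the fit estimates only a peak location (PSS) and width (a TIW index), not the full ICM of Garcia-Perez and Alcala-Quintana (2012):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: stimulus onset asynchrony in ms (negative = audio
# first) and the proportion of "synchronous" responses at each asynchrony.
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.90, 0.65, 0.30, 0.10])

def gaussian(x, amp, pss, width):
    # Bell-shaped synchrony curve: its peak estimates the PSS and its
    # standard deviation indexes the width of the TIW.
    return amp * np.exp(-0.5 * ((x - pss) / width) ** 2)

(amp, pss, width), _ = curve_fit(gaussian, soa, p_sync, p0=[1.0, 0.0, 150.0])
print(f"PSS: {pss:.1f} ms, TIW width (SD): {width:.1f} ms")
```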

Relevance: 70.00%

Abstract:

Balance maintenance relies on a complex interplay between many different sensory modalities. Although optimal multisensory processing is thought to decline with ageing, inefficient integration is particularly associated with falls in older adults. We investigated whether improved balance control, following a novel balance training intervention, was associated with more efficient multisensory integration in older adults, particularly those who have fallen in the past. Specifically, 76 healthy and fall-prone older adults were allocated either to a balance training programme conducted over 5 weeks or to a passive control condition. Balance training involved a VR display in which the on-screen position of a target object was controlled by shifts in postural balance on a Wii balance board. Susceptibility to the sound-induced flash illusion, before and after the intervention (or control condition), was used as a measure of multisensory function. Whilst balance and postural control improved for all participants assigned to the intervention group, improved functional balance was correlated with more efficient multisensory processing in the fall-prone older adults only. Our findings add to growing evidence suggesting important links between balance control and multisensory interactions in the ageing brain and have implications for the development of interventions designed to reduce the risk of falls.
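Susceptibility to the sound-induced flash illusion is commonly scored as the proportion of illusion trials on which an illusory second flash is reported. A toy sketch of that scoring, with invented trial outcomes (not the study's data or its exact measure):

```python
import numpy as np

# Hypothetical outcomes for one participant, before and after training:
# True where two flashes were reported on a one-flash/two-beep trial.
pre  = np.array([True, True, False, True, True, False, True, True])
post = np.array([True, False, False, True, False, False, True, False])

# Lower susceptibility after the intervention would here be read as a
# proxy for more precise (less over-binding) audio-visual integration.
print(f"Pre: {pre.mean():.2f}  Post: {post.mean():.2f}")
```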

Relevance: 70.00%

Abstract:

This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent, task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus indicated how frequently the visual and auditory stimuli would be congruent in their duration (i.e., the context of congruence). Stronger influences of the irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated, and also when contexts associated with a low proportion of congruence switched. As a function of context of congruence and context transition, the conditions giving rise to larger behavioral cross-modal interactions elicited weaker brain responses at 228 to 257 ms post-stimulus. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here, a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
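The "context of congruence" manipulation can be made concrete with a toy trial-list generator in which the stimulus position determines the probability that the irrelevant sound matches the visual duration; the 80/20 split below is an assumption for illustration, not the proportions used in the study:

```python
import random

random.seed(0)
P_CONGRUENT = {"top": 0.8, "bottom": 0.2}  # hypothetical proportions

trials = []
for _ in range(200):
    position = random.choice(["top", "bottom"])
    # The sound's duration matches the visual duration with a
    # position-dependent probability: the context of congruence.
    congruent = random.random() < P_CONGRUENT[position]
    trials.append((position, congruent))

for pos in ("top", "bottom"):
    subset = [c for p, c in trials if p == pos]
    print(pos, f"observed congruence: {sum(subset) / len(subset):.2f}")
```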

Relevance: 70.00%

Abstract:

Research in the last four decades has brought a considerable advance in our understanding of how the brain synthesizes information arising from different sensory modalities. Indeed, many cortical and subcortical areas, beyond those traditionally considered to be ‘associative,’ have been shown to be involved in multisensory interaction and integration (Ghazanfar and Schroeder 2006). Visuo-tactile interaction is of particular interest, because of the prominent role played by vision in guiding our actions and anticipating their tactile consequences in everyday life. In this chapter, we focus on the functional role that visuo-tactile processing may play in driving two types of body-object interactions: avoidance and approach. We will first review some basic features of visuo-tactile interactions, as revealed by electrophysiological studies in monkeys. These will prove to be relevant for interpreting the subsequent evidence arising from human studies. A crucial point that will be stressed is that these visuo-tactile mechanisms have not only sensory, but also motor-related activity that qualifies them as multisensory-motor interfaces. Evidence will then be presented for the existence of functionally homologous processing in the human brain, both from neuropsychological research in brain-damaged patients and in healthy participants. The final part of the chapter will focus on some recent studies in humans showing that the human motor system is provided with a multisensory interface that allows for continuous monitoring of the space near the body (i.e., peripersonal space). We further demonstrate that multisensory processing can be modulated on-line as a consequence of interacting with objects. This indicates that, far from being passive, the monitoring of peripersonal space is an active process subserving actions between our body and objects located in the space around us.

Relevance: 60.00%

Abstract:

Spatial perspective-taking that involves imagined changes in one's spatial orientation is facilitated by vestibular stimulation inducing a congruent sensation of self-motion. We examined further the role of vestibular resources in perspective-taking by evaluating whether aberrant and conflicting vestibular stimulation impaired perspective-taking performance. Participants (N = 39) undertook either an "own body transformation" (OBT) task, requiring speeded spatial judgments made from the perspective of a schematic figure, or a control task requiring reconfiguration of spatial mappings from one's own visuo-spatial perspective. These tasks were performed both without and with vestibular stimulation by whole-body Coriolis motion, according to a repeated-measures design, balanced for order. Vestibular stimulation was found to impair performance during the first minute post-stimulus relative to the stationary condition. This disruption was task-specific, affecting only the OBT task and not the control task, and dissipated by the second minute post-stimulus. Our experiment thus demonstrates selective temporary impairment of perspective-taking from aberrant vestibular stimulation, implying that uncompromised vestibular resources are necessary for efficient perspective-taking. This finding provides evidence for an embodied mechanism for perspective-taking whereby vestibular input contributes to the multisensory processing underlying bodily and social cognition. Ultimately, this knowledge may contribute to the design of interventions that help patients suffering from sudden vertigo adapt to the cognitive difficulties caused by aberrant vestibular stimulation.

Relevance: 60.00%

Abstract:

Recognizing one's body as separate from the external world plays a crucial role in detecting external events, and thus in planning adequate reactions to them. In addition, recognizing one's body as distinct from others' bodies allows remapping the experiences of others onto one's own sensory system, providing improved social understanding. In line with these assumptions, two well-known multisensory mechanisms demonstrate modulations of somatosensation when viewing both one's own and someone else's body: the Visual Enhancement of Touch (VET) and the Visual Remapping of Touch (VRT) effects. Vision of the body, in the former, and vision of the body being touched, in the latter, enhance tactile processing. The present dissertation investigated the multisensory nature of these mechanisms and their neural bases. Further experiments compared these effects for viewing one's own body or viewing another person's body. These experiments showed important differences in multisensory processing for one's own body and for other bodies, and also highlighted interactions between the VET and VRT effects. The present experimental evidence demonstrated that a multisensory representation of one's body, underpinned by a high-order fronto-parietal network, sends rapid modulatory feedback to the primary somatosensory cortex, thus functionally enhancing tactile processing. These effects were highly spatially specific and depended on current body position. In contrast, vision of another person's body can drive mental representations able to modulate tactile perception without any spatial constraint. Finally, these modulatory effects sometimes seem to interact with higher-order information, such as the emotional content of a face. This allows one's somatosensory system to adequately modulate the perception of external events on the body surface as a function of its interaction with the emotional state expressed by another individual.

Relevance: 60.00%

Abstract:

Edges are crucial for the formation of coherent objects from sequential sensory inputs within a single modality. Moreover, temporally coincident boundaries of perceptual objects across different sensory modalities facilitate crossmodal integration. Here, we used functional magnetic resonance imaging in order to examine the neural basis of temporal edge detection across modalities. Onsets of sensory inputs are not only related to the detection of an edge but also to the processing of novel sensory inputs. Thus, we used transitions from input to rest (offsets) as convenient stimuli for studying the neural underpinnings of visual and acoustic edge detection per se. We found, besides modality-specific patterns, shared visual and auditory offset-related activity in the superior temporal sulcus and insula of the right hemisphere. Our data suggest that right hemispheric regions known to be involved in multisensory processing are crucial for detection of edges in the temporal domain across both visual and auditory modalities. This operation is likely to facilitate cross-modal object feature binding based on temporal coincidence.
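Treating offsets as events of interest simply means time-locking the analysis to input-to-rest transitions. A schematic sketch of how such an offset regressor might be built for fMRI, with assumed timing values and a roughly SPM-like double-gamma HRF (not the study's actual design):

```python
import numpy as np
from scipy.stats import gamma

# Assumed double-gamma haemodynamic response function, sampled at 1 s.
t = np.arange(0, 30, 1.0)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

n_scans = 200
stimulus_on = np.zeros(n_scans)
stimulus_on[20:40] = 1      # a block of visual or acoustic input
stimulus_on[100:120] = 1

# Offsets are the transitions from input back to rest.
offsets = np.maximum(-np.diff(stimulus_on, prepend=0), 0)
offset_regressor = np.convolve(offsets, hrf)[:n_scans]
print("Offset events at scans:", np.flatnonzero(offsets))
```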

Relevance: 30.00%

Abstract:

Asperger Syndrome (AS) belongs to the autism spectrum disorders, in which both verbal and non-verbal communication difficulties are at the core of the impairment. Social communication requires a complex use of affective, linguistic-cognitive and perceptual processes. In the four studies included in the current thesis, some of the linguistic and perceptual factors that are important for face-to-face communication were studied using behavioural methods. In all four studies the results obtained from individuals with AS were compared with those of typically developed age-, gender- and IQ-matched controls. First, the language skills of school-aged children were characterized in detail with standardized tests that measured different aspects of receptive and expressive language (Study I). The children with AS were found to be worse than the controls at following complex verbal instructions. Next, the visual perception of facial expressions of emotion with varying degrees of visual detail was examined (Study II). Adults with AS were found to have impaired recognition of facial expressions on the basis of very low spatial frequencies, which are important for processing global information. Following that, multisensory perception was investigated by looking at audiovisual speech perception (Studies III and IV). Adults with AS were found to perceive audiovisual speech qualitatively differently from typically developed adults, although both groups were equally accurate in recognizing auditory and visual speech presented alone. Finally, the effect of attention on audiovisual speech perception was studied by registering eye-gaze behaviour (Study III) and by studying the voluntary control of visual attention (Study IV). The groups did not differ in eye-gaze behaviour or in the voluntary control of visual attention. The results of the study series demonstrate that many factors underpinning face-to-face social communication are atypical in AS. In contrast with previous assumptions about intact language abilities, the current results show that children with AS have difficulties in understanding complex verbal instructions. Furthermore, the study makes clear that deviations in the perception of global features in faces expressing emotions, as well as in the multisensory perception of speech, are likely to harm face-to-face social communication.
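The "very low spatial frequencies" manipulation in Study II corresponds to low-pass filtering of face images so that only coarse, global structure survives. A generic sketch of such filtering (placeholder image and cutoff, not the study's stimuli):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
face = rng.random((256, 256))   # stand-in for a grayscale face photograph

# A large Gaussian blur (sigma in pixels) keeps only low spatial
# frequencies, i.e. the global configuration of the face.
low_sf = gaussian_filter(face, sigma=16)
high_sf = face - low_sf         # complementary fine-detail image
print(low_sf.shape, high_sf.shape)
```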

Relevance: 30.00%

Abstract:

Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between activity and saccade vector suggest that auditory signals are tailored to roughly match the strength of the visual signals present in the FEF, facilitating access to a common motor output pathway.
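The claim that FEF activity tracks the movement vector rather than the target location amounts to comparing two trial-wise correlations. A synthetic-data sketch of that comparison (all values invented, one-dimensional for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200
target_x = rng.uniform(-20, 20, n_trials)           # target azimuth (deg)
saccade_x = target_x + rng.normal(0, 3, n_trials)   # endpoint with motor error

# Synthetic neuron whose firing follows the movement actually produced.
rate = 10 + 0.8 * saccade_x + rng.normal(0, 2, n_trials)

r_move = np.corrcoef(rate, saccade_x)[0, 1]
r_target = np.corrcoef(rate, target_x)[0, 1]
print(f"r(movement) = {r_move:.2f}  vs  r(target) = {r_target:.2f}")
```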

Relevance: 30.00%

Abstract:

Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in the occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.
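The reported link between behavioural multisensory interactions and occipital BOLD is, at its core, an across-participant regression. A minimal sketch with invented per-participant values (not the study's data or exact measures):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-participant congruency effect (incongruent minus
# congruent RT, ms) and BOLD estimate in an occipital region of interest.
congruency_effect = np.array([12., 25., 8., 30., 18., 22., 15., 28.])
bold_estimate     = np.array([0.2, 0.5, 0.1, 0.6, 0.3, 0.5, 0.3, 0.6])

fit = linregress(congruency_effect, bold_estimate)
print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")
```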

Relevance: 30.00%

Abstract:

Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise go undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, allowing the early recruitment of fight-or-flight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike visual detection, visual orientation discrimination cannot be crossmodally enhanced by sound in the absence of a functional striate cortex. This finding demonstrates, on the one hand, that the striate cortex plays a causative role in the crossmodal enhancement of visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes that improve simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.

Relevance: 30.00%

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities, in a process known as multisensory integration. This allows speech perception to be accurate even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While evidence from prior imaging studies has found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, these studies do not provide a clear mechanism as to how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. In order to better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception: when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information. We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.
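The subject-level analysis described last, relating McGurk susceptibility to STS activation, reduces to a simple correlation across participants. A sketch with invented values, where susceptibility is taken as the proportion of McGurk trials yielding the fused percept:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject values: proportion of fused-percept reports
# and an STS activation estimate from the McGurk condition.
mcgurk_susceptibility = np.array([0.10, 0.30, 0.40, 0.55, 0.60, 0.75, 0.80, 0.90])
sts_activation        = np.array([0.05, 0.20, 0.25, 0.40, 0.35, 0.50, 0.60, 0.70])

r, p = pearsonr(mcgurk_susceptibility, sts_activation)
print(f"r = {r:.2f}, p = {p:.4f}")
```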

Relevance: 30.00%

Abstract:

Three studies investigated the relation between symbolic gestures and words, aiming to discover the neural basis and behavioural features of the lexical-semantic processing and integration of the two communicative signals. The first study aimed at determining whether elaboration of communicative signals (symbolic gestures and words) is always accompanied by their integration with each other and, if present, whether this integration can be considered support for the existence of a common control mechanism. Experiment 1 aimed at determining whether and how gesture is integrated with word. Participants performed a semantic priming paradigm with a lexical decision task, pronouncing a target word which was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word meaning. Duration of prime presentation (100, 250, 400 ms) randomly varied. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectra and mean velocity of the lip kinematics increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to meaningful gestures was shorter in the congruent than in the incongruent condition. Experiment 2 aimed at determining whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the word prime was meaningful and congruent, as compared to a meaningless congruent prime. The increase was, however, present regardless of prime word duration. In the second study, experiment 3 aimed at determining whether symbolic prime gesture comprehension makes use of motor simulation. Transcranial Magnetic Stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The Motor Evoked Potential of the First Dorsal Interosseus increased when stimulation occurred 100 ms post-stimulus. Thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any involvement of hand motor simulation in comprehending the prime word. The effect of the prior presentation of a symbolic gesture on congruent target word processing was investigated in study 3. In experiment 5, symbolic gestures were presented as primes, followed by semantically congruent target words or pseudowords. In this case, the lexical-semantic decision was accompanied by a motor simulation 100 ms after the onset of the verbal stimuli. Summing up, the same type of integration with a word was present for both prime gesture and prime word. It was probably subsequent to understanding of the signal, which used motor simulation for gestures and direct access to semantics for words. However, gestures and words could both be understood at the same motor level through simulation if the words were preceded by an adequate gestural context. Results are discussed from the perspective of a continuum between transitive actions and emblems, in parallel with language; the grounded/symbolic content of the different signals evidences a relation between the sensorimotor and linguistic systems, which could interact at different levels.
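Of the kinematic measures above, mean lip velocity is the most straightforward to compute: differentiate the marker position over time and average. A toy sketch with a synthetic position signal (the sampling rate and trajectory are assumptions, not the study's recordings):

```python
import numpy as np

fs = 200.0                              # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
lip_y = 5 * np.sin(2 * np.pi * 2 * t)   # stand-in lip-marker position (mm)

velocity = np.gradient(lip_y, 1 / fs)   # mm/s
print(f"Mean absolute velocity: {np.abs(velocity).mean():.1f} mm/s")
```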