927 results for multisensory stimuli


Relevance: 70.00%

Abstract:

Multisensory stimuli can improve performance, facilitating reaction times (RTs) on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions based on probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, there are scant and inconsistent data from nonhuman primates performing similar protocols; existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity impact performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory-visual multisensory pairs. RSEs in excess of predictions based on probability summation were observed and thus necessarily follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that, in both animals, RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. No RT facilitation, or even behavioral costs, were obtained with auditory intensities 30-40 dB above threshold. The present study demonstrates the feasibility and suitability of behaving monkeys for investigating links between psychophysical and neurophysiological instantiations of multisensory interactions.
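The probability-summation benchmark referred to above is usually Miller's race-model inequality, P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V): if the multisensory cumulative RT distribution exceeds the summed unisensory distributions at any time point, probability summation alone cannot explain the RSE. Below is a minimal sketch of such a test; the RT samples and all numbers are hypothetical, not data from the study.

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times, evaluated at each time in t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t):
    """Positive values mean the multisensory CDF exceeds the
    race-model (probability-summation) bound G_A(t) + G_V(t)."""
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

# Hypothetical RT samples in ms; the redundant condition is clearly faster
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 500)
rt_v = rng.normal(300, 40, 500)
rt_av = rng.normal(230, 30, 500)

t = np.linspace(150, 450, 61)
viol = race_model_violation(rt_a, rt_v, rt_av, t)
print(f"max violation: {viol.max():.3f}")  # > 0 indicates an RSE beyond probability summation
```

With these synthetic samples the fast multisensory distribution violates the bound at early time points, which is the signature the abstract describes.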

Relevance: 60.00%

Abstract:

The integration of stimuli from different sensory modalities offers perceptual advantages such as better discrimination and faster response times (RTs) to environmental events. This thesis investigated the effects of the spatial position of visual and tactile stimuli on the redundancy gain (RG), which corresponds to a reduction in reaction time when two stimuli are presented simultaneously rather than in isolation. The first study compared the RG when the same visuotactile stimuli were presented in a detection task and in a spatial discrimination task. Stimuli were presented unilaterally in the same hemifield or bilaterally in opposite hemifields. In the detection task, participants had to respond to all stimuli, regardless of their location. The results of this task show that unilateral and bilateral stimuli produce an indistinguishable RG and violation of the race model. In the spatial discrimination task, in which participants had to respond only to stimuli presented in the right hemifield, RTs to bilateral stimuli were slower. We observed no difference between the maximal RG obtained in either task of this study. We conclude that when spatial information is not relevant to the task, unilateral and bilateral stimuli are equivalent. Manipulating the relevance of spatial information therefore makes it possible to alter the RG as a function of stimulus location. In a second study, we investigated whether the difference between the behavioral gains resulting from multimodal and intramodal integration depends on the spatial configuration of the stimuli.
The results show that the RG obtained for the multimodal conditions exceeds that obtained for intramodal stimuli. Moreover, the RG in the multimodal conditions is not influenced by the spatial configuration of the stimuli. In contrast, intramodal stimuli produce a larger RG when the stimuli are presented bilaterally. Our results suggest that multimodal and intramodal integration differ in the RG they produce and in the conditions required for this improvement. The third study examines the role of the corpus callosum (CC) in the RG obtained for multimodal and intramodal stimuli presented unilaterally and bilaterally. Four patients with congenital agenesis of the corpus callosum (AgCC) and one callosotomized patient were compared with neurologically intact individuals in a detection task. Overall, the results suggest that the CC is not necessary for the interhemispheric integration of multimodal stimuli. On the basis of previous studies demonstrating the role of the superior colliculi (SC) in multimodal integration, we conclude that in the absence of the CC, the behavioral benefits resulting from subcortical processing by the SC do not reflect the integration rules observed in animal neurophysiological studies.
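The redundancy gain discussed above is conventionally computed as the RT saving of the redundant (bimodal) condition relative to the fastest single modality. A minimal illustration; the mean RTs below are made-up values, not data from the thesis.

```python
import numpy as np

# Hypothetical per-subject mean RTs (ms) for each condition
rt_visual = np.array([410, 395, 402, 388])
rt_tactile = np.array([430, 418, 441, 409])
rt_bimodal = np.array([352, 361, 349, 358])

# Redundancy gain: how much faster the redundant condition is
# than the FASTEST of the two unisensory conditions.
fastest_single = min(rt_visual.mean(), rt_tactile.mean())
redundancy_gain = fastest_single - rt_bimodal.mean()
print(f"redundancy gain: {redundancy_gain:.1f} ms")
```

A positive gain by itself does not establish integration; that requires the stronger race-model test against probability summation.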

Relevance: 60.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 60.00%

Abstract:

The research activity carried out during the PhD course focused on the development of mathematical models of cognitive processes and their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the large amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus, SC) and which are fundamental for orienting motor and attentive responses to stimuli in the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so as to group together the characteristics of the same object (the binding problem) and to keep segregated the properties belonging to different objects simultaneously present (the segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) recognition of the object is realized by the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words.
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to a combined multisensory stimulus is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model was improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This work was carried out in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional and cortex deactivated, and with a particular type of membrane receptors (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and provides a biologically plausible hypothesis about the underlying circuitry.
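The multisensory enhancement that such models reproduce is conventionally quantified as the percentage increase of the multisensory response over the best unisensory response; inverse effectiveness then predicts larger indices for weaker stimulus pairs. A small sketch of this index, with hypothetical spike counts (the values are illustrative only, not from the model or from recorded neurons):

```python
def enhancement_index(resp_multi, resp_a, resp_v):
    """Percent multisensory enhancement relative to the best
    unisensory response (the Meredith & Stein convention)."""
    best_uni = max(resp_a, resp_v)
    return 100.0 * (resp_multi - best_uni) / best_uni

# Hypothetical mean spike counts for a weak and a strong stimulus pair
weak = enhancement_index(resp_multi=9.0, resp_a=2.0, resp_v=3.0)
strong = enhancement_index(resp_multi=24.0, resp_a=18.0, resp_v=20.0)
print(weak, strong)  # the weak pair yields the larger index: inverse effectiveness
```

Values above 0 indicate enhancement, values below 0 depression; both regimes are among the SC behaviours the model reproduces.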

Relevance: 60.00%

Abstract:

Wind and warmth sensations have proved able to enhance users' sense of presence in Virtual Reality applications. Still, only a few projects deal with their detailed effect on the user and with general ways of implementing such stimuli. This work tries to fill this gap: after analyzing hardware and software requirements for wind and warmth simulation, a hardware and software setup for use in a CAVE environment is proposed. The setup is evaluated with regard to technical details and requirements, but also, in the form of a pilot study, in view of user experience and presence. Our setup proved to comply with the requirements and leads to satisfactory results. To our knowledge, the low-cost simulation system (approx. 2200 Euro) presented here is one of the most extensive, most flexible and best-evaluated systems for creating wind and warmth stimuli in CAVE-based VR applications.

Relevance: 60.00%

Abstract:

The accessory optic system, the pretectal complex, and the superior colliculus are important control centers for a variety of eye movements and are essential for image formation and, consequently, for visual perception. The accessory optic system is constituted by the dorsal terminal nucleus, the lateral terminal nucleus, the medial terminal nucleus, and the interstitial nucleus of the posterior superior fasciculus. From a functional point of view, these nuclei contribute to image stabilization, participating in visuomotor activity in which all cells of the system respond to slow eye movements and visual stimuli, which is important for the proper functioning of other visual systems. The pretectal complex comprises a group of nuclei situated at the mesodiencephalic transition: the anterior pretectal nucleus, the posterior pretectal nucleus, the medial pretectal nucleus, the olivary pretectal nucleus, and the nucleus of the optic tract, all of which receive retinal projections and are functionally related to the pathway of the pupillary light reflex and to optokinetic nystagmus. The superior colliculus is an important layered subcortical visual station with a functional role in the control of eye and head movements in response to multisensory stimuli. Our aim was to map the retinal projections to the accessory optic system, the nuclei of the pretectal complex, and the superior colliculus, seeking in particular a better delineation of the pretectal complex, by means of anterograde tracing with the B subunit of cholera toxin (CTb) followed by immunohistochemistry, and to characterize (by measuring their diameter) the synaptic boutons present on the fibers/terminals in the pretectal complex.
In our results, the accessory optic system, including a region that appears to be the medial terminal nucleus, and the superior colliculus were strongly labeled by CTb-immunoreactive fibers/terminals, as was the pretectal complex, in the nucleus of the optic tract, the olivary pretectal nucleus, the anterior pretectal nucleus, and the posterior pretectal nucleus. The characterization of the boutons made it possible to better define these nuclei.

Relevance: 30.00%

Abstract:

This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.

Relevance: 30.00%

Abstract:

Autism spectrum disorders (ASD) are currently characterized by a triad of impairments, including social dysfunction, communication deficits, and repetitive behaviors. The simultaneous integration of multiple senses is crucial in everyday life, as it allows the creation of a unified percept. Similarly, the allocation of attention to multiple simultaneous stimuli is critical for processing dynamic environmental information. In everyday interaction with the environment, sensory processing and attentional functions are basic components of typical development (TD). Although they are not part of the current diagnostic criteria, difficulties in attentional functions and sensory processing are very common among autistic individuals. The present thesis therefore evaluates these functions in two separate studies. The first study is based on the premise that alterations in basic sensory processing could underlie the atypical sensory behaviors in ASD, as proposed by current theories of ASD. We designed a cross-modal size discrimination task to investigate the integrity and developmental trajectory of visuo-tactile information processing in children with ASD (N = 21, aged 6 to 18 years), compared with TD children matched on age and performance IQ. In a two-alternative forced-choice task with simultaneous stimuli, participants had to judge the size of two stimuli based on unisensory (visual or tactile) or multisensory (visuo-tactile) inputs. Difference thresholds assessed the smallest difference at which participants were able to discriminate size.
Children with ASD showed diminished performance and no maturational effect in both the unisensory and multisensory conditions, compared with TD participants. Our first study thus extends previous findings of altered multisensory processing in ASD to the visuo-tactile domain. In our second study, we assessed three-dimensional multiple object tracking (3D-MOT) in autistic adults (N = 15, aged 18 to 33 years), compared with age- and IQ-matched control participants, who had to track one or three moving targets among distractors in a virtual reality environment. Performance was measured with speed thresholds, which assess the greatest speed at which observers are able to track moving objects. Autistic individuals showed reduced speed thresholds overall, regardless of the number of objects to track. These results extend previous findings of altered attentional mechanisms in autism to the simultaneous allocation of attention to multiple locations. Taken together, the results of our two studies reveal alterations in ASD in the simultaneous processing of multiple events, whether within one modality or across modalities, which may have important implications for the clinical presentation of this condition.

Relevance: 30.00%

Abstract:

Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. 
Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.

Relevance: 30.00%

Abstract:

The ability to integrate sensory inputs deriving from different sensory modalities, but related to the same external event, into a unified percept is called multisensory integration, and it might represent an efficient mechanism of sensory compensation when a sensory modality is impaired by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration by testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling evidence from animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesion evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over temporo-parietal cortex, occipital cortex and posterior parietal cortex, demonstrating that only temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant from the perspective of a multisensory-based approach to the recovery of unisensory deficits. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing implicit recognition of the fearful content of unseen visual stimuli (i.e., affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala.
Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and revealed that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, in tasks highly demanding in terms of spatial orienting, observed in patients with cortical lesions limited to the occipital cortex compared with patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, is able to affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach for the rehabilitation of unisensory visual impairments.

Relevance: 30.00%

Abstract:

The auditory cortex is anatomically segregated into a central core and a peripheral belt region, which exhibit differences in preference to bandpassed noise and in temporal patterns of response to acoustic stimuli. While it has been shown that visual stimuli can modify response magnitude in auditory cortex, little is known about differential patterns of multisensory interactions in core and belt. Here, we used functional magnetic resonance imaging and examined the influence of a short visual stimulus presented prior to acoustic stimulation on the spatial pattern of blood oxygen level-dependent signal response in auditory cortex. Consistent with crossmodal inhibition, the light produced a suppression of signal response in a cortical region corresponding to the core. In the surrounding areas corresponding to the belt regions, however, we found an inverse modulation with an increasing signal in centrifugal direction. Our data suggest that crossmodal effects are differentially modulated according to the hierarchical core-belt organization of auditory cortex.

Relevance: 30.00%

Abstract:

Comprehending speech is one of the most important human behaviors, but we are only beginning to understand how the brain accomplishes this difficult task. One key to speech perception seems to be that the brain integrates the independent sources of information available in the auditory and visual modalities in a process known as multisensory integration. This allows speech perception to be accurate, even in environments in which one modality or the other is ambiguous in the context of noise. Previous electrophysiological and functional magnetic resonance imaging (fMRI) experiments have implicated the posterior superior temporal sulcus (STS) in auditory-visual integration of both speech and non-speech stimuli. While evidence from prior imaging studies has found increases in STS activity for audiovisual speech compared with unisensory auditory or visual speech, these studies do not provide a clear mechanism as to how the STS communicates with early sensory areas to integrate the two streams of information into a coherent audiovisual percept. Furthermore, it is currently unknown whether activity within the STS is directly correlated with the strength of audiovisual perception. In order to better understand the cortical mechanisms that underlie audiovisual speech perception, we first studied STS activity and connectivity during the perception of speech with auditory and visual components of varying intelligibility. By studying fMRI activity during these noisy audiovisual speech stimuli, we found that STS connectivity with auditory and visual cortical areas mirrored perception; when the information from one modality is unreliable and noisy, the STS interacts less with the cortex processing that modality and more with the cortex processing the reliable information.
We next characterized the role of STS activity during a striking audiovisual speech illusion, the McGurk effect, to determine whether activity within the STS predicts how strongly a person integrates auditory and visual speech information. Subjects with greater susceptibility to the McGurk effect exhibited stronger fMRI activation of the STS during perception of McGurk syllables, implying a direct correlation between the strength of audiovisual integration of speech and activity within the multisensory STS.

Relevance: 30.00%

Abstract:

This thesis is an investigation of structural brain abnormalities, as well as multisensory and unisensory processing deficits, in autistic traits and Autism Spectrum Disorder (ASD). To achieve this, structural and functional magnetic resonance imaging (fMRI) and psychophysical techniques were employed. ASD is a neurodevelopmental condition characterised by social communication and interaction deficits, as well as repetitive patterns of behaviour, interests and activities. These traits are thought to be present in the typical population. The Autism Spectrum Quotient questionnaire (AQ) was developed to assess the prevalence of autistic traits in the general population. Von dem Hagen et al. (2011) revealed a link between AQ and white matter (WM) and grey matter (GM) volume (using voxel-based morphometry). However, their findings revealed no difference in GM in areas associated with social cognition. Cortical thickness (CT) measurements are known to be a more direct measure of cortical morphology than GM volume. Therefore, Chapter 2 investigated the relationship between AQ scores and CT in the same sample of participants. This study showed that AQ scores correlated with CT in the left temporo-occipital junction, left posterior cingulate, right precentral gyrus and bilateral precentral sulcus, in a typical population. These areas were previously associated with structural and functional differences in ASD. Thus the findings suggest that, to some extent, autistic traits are reflected in brain structure in the general population. The ability to integrate auditory and visual information is crucial to everyday life, and results are mixed regarding how ASD influences audiovisual integration. To investigate this question, Chapter 3 examined the Temporal Integration Window (TIW), which indicates how precisely sight and sound need to be temporally aligned so that a unitary audiovisual event can be perceived.
26 adult males with ASD and 26 age- and IQ-matched typically developed males were presented with flash-beep (BF), point-light drummer, and face-voice (FV) displays with varying degrees of asynchrony, and were asked to make Synchrony Judgements (SJ) and Temporal Order Judgements (TOJ). The data were analysed by fitting Gaussian functions as well as an Independent Channels Model (ICM; Garcia-Perez & Alcala-Quintana, 2012). Gaussian curve fitting for SJs showed that the ASD group had a wider TIW, but no group effect was found for TOJ. The ICM supported these results, and its parameters indicated that the wider TIW for SJs in the ASD group was not due to sensory processing at the unisensory level, but rather to decreased temporal resolution at the decisional level of combining sensory information. Furthermore, for TOJ, the ICM revealed a smaller Point of Subjective Simultaneity (PSS; closer to physical synchrony) in the ASD group than in the TD group. The finding that audiovisual temporal processing differs in ASD encouraged us to investigate the neural correlates of multisensory as well as unisensory processing using functional magnetic resonance imaging (fMRI). Therefore, Chapter 4 investigated audiovisual, auditory and visual processing in ASD with simple BF displays and complex, social FV displays. In a block-design experiment, we measured the BOLD signal while 13 adults with ASD and 13 typically developed (TD) age-, sex- and IQ-matched adults were presented with audiovisual, auditory and visual information from BF and FV displays. Our analyses revealed that processing of audiovisual as well as unisensory auditory and visual stimulus conditions, in both the BF and FV displays, was associated with reduced activation in ASD. Audiovisual, auditory and visual conditions of FV stimuli revealed reduced activation in ASD in regions of the frontal cortex, while BF stimuli revealed reduced activation in the lingual gyri.
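The Gaussian fitting step described above — fitting a bell curve to the proportion of "synchronous" responses across stimulus onset asynchronies, then reading the PSS off the curve's centre and taking its width as a proxy for the TIW — can be sketched as follows. This is a minimal illustration with made-up response data, not the thesis's analysis code (which also fitted the Garcia-Perez & Alcala-Quintana ICM):

```python
# Hypothetical sketch of Gaussian fitting for synchrony-judgement (SJ) data.
# SOAs and response proportions are invented illustrative values.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amplitude, pss, sigma):
    # Proportion of "synchronous" responses as a function of stimulus onset
    # asynchrony (SOA, ms); pss = curve centre, sigma = width (TIW proxy).
    return amplitude * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

# Negative SOA = audio leads, positive SOA = video leads (illustrative only).
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_sync = np.array([0.05, 0.20, 0.60, 0.85, 0.95, 0.90, 0.75, 0.30, 0.10])

params, _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 100.0])
amplitude, pss, sigma = params
print(f"PSS = {pss:.1f} ms, width (sigma) = {abs(sigma):.1f} ms")
```

A wider fitted sigma would correspond to a wider TIW, and a PSS shifted away from 0 ms would indicate that subjective simultaneity requires one modality to lead the other.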
The inferior parietal gyrus revealed an interaction between the sensory condition of BF stimuli and group. Conjunction analyses revealed smaller audiovisual-sensitive regions of the superior temporal cortex (STC) in ASD. Against our predictions, the STC did not reveal any activation differences per se between the two groups. However, a superior frontal area was shown to be sensitive to audiovisual face-voice stimuli in the TD group, but not in the ASD group. Overall, this study indicated differences in brain activity for audiovisual, auditory and visual processing of social and non-social stimuli in individuals with ASD compared to TD individuals. These results contrast with our behavioural findings, which suggested different audiovisual integration yet intact auditory and visual processing in ASD. Since our behavioural findings revealed audiovisual temporal processing deficits in ASD during SJ tasks, we went on to investigate the neural correlates of SJ in ASD and TD controls. As in Chapter 4, we used fMRI in Chapter 5 to investigate audiovisual temporal processing in ASD in the same participants. BOLD signals were measured while the ASD and TD participants made SJs on audiovisual displays at different levels of asynchrony: the participant's PSS, audio leading visual information (audio first), and visual leading audio information (visual first). Whereas no effect of group was found with BF displays, increased putamen activation was observed in ASD participants compared to TD participants when making SJs on FV displays. Investigating SJs on audiovisual displays in the bilateral superior temporal gyrus (STG), an area involved in audiovisual integration (see Chapter 4), we found no group differences and no interaction between group and level of audiovisual asynchrony.
The investigation of different levels of asynchrony revealed a complex pattern of results, indicating a network of areas more involved in processing the PSS condition than the audio-first and visual-first conditions, as well as areas responding differently to audio first compared with video first. These activation differences between audio first and video first across brain areas are consistent with the view that audio-leading and visual-leading stimuli are processed differently.

Relevância:

20.00%

Publicador:

Resumo:

Objective: To investigate how age-related declines in vision (particularly contrast sensitivity), simulated using cataract goggles and low-contrast stimuli, influence the accuracy and speed of cognitive test performance in older adults. An additional aim was to investigate whether declines in vision affect secondary memory more than primary memory. Method: Using a fully within-subjects design, 50 older drivers aged 66-87 years completed two tests of cognitive performance, letter matching (perceptual speed) and symbol recall (short-term memory), under viewing conditions that degraded visual input (low-contrast stimuli, cataract goggles, and low-contrast stimuli combined with cataract goggles, compared with normal viewing). Presentation time was additionally manipulated for letter matching. Visual function, as measured using standard charts, was taken into account in the statistical analyses. Results: Accuracy and speed on the cognitive tasks were significantly impaired when visual input was degraded, and cognitive performance was positively associated with contrast sensitivity. Presentation time did not influence cognitive performance, and visual degradation did not differentially influence primary and secondary memory. Conclusion: Age-related declines in visual function can impair the accuracy and speed of cognitive performance, and the cognitive abilities of older adults may therefore be underestimated in neuropsychological testing. It is thus critical that visual function be assessed prior to testing, and that stimuli be adapted to older adults' sensory capabilities (e.g., by maximising stimulus contrast).