930 results for Visual and auditory processing
Abstract:
Multisensory interactions have been documented within low-level, even primary, cortices and at early post-stimulus latencies. These effects are in turn linked to behavioral and perceptual modulations. In humans, visual cortex excitability, as measured by transcranial magnetic stimulation (TMS) induced phosphenes, can be reliably enhanced by the co-presentation of sounds. This enhancement occurs at pre-perceptual stages and is selective for different types of complex sounds. However, the source(s) of auditory inputs effectuating these excitability changes in primary visual cortex remain disputed. The present study sought to determine if direct connections between low-level auditory cortices and primary visual cortex are mediating these kinds of effects by varying the pitch and bandwidth of the sounds co-presented with single-pulse TMS over the occipital pole. Our results from 10 healthy young adults indicate that both the central frequency and bandwidth of a sound independently affect the excitability of visual cortex during processing stages as early as 30 msec post-sound onset. Such findings are consistent with direct connections mediating early-latency, low-level multisensory interactions within visual cortices.
Abstract:
Evidence from neuropsychological and activation studies (Clarke et al., 2000; Maeder et al., 2000) suggests that sound recognition and localisation are processed by two anatomically and functionally distinct cortical networks. We report here on a patient who had an interruption of auditory information, and we show: i) the effects of this interruption on cortical auditory processing; and ii) the effect of workload on the activation pattern. A 36-year-old man suffered a small left mesencephalic haemorrhage due to a cavernous angioma; the left inferior colliculus was resected in the surgical approach to the vascular malformation. In the acute stage, the patient complained of auditory hallucinations and of hearing loss in the right ear, while tonal audiometry was normal. At 12 months, auditory recognition, auditory localisation (assessed by ITD and IID cues) and auditory motion perception were normal (Clarke et al., 2000), while verbal dichotic listening was deficient on the right side. Sound recognition and sound localisation activation patterns were investigated with fMRI, using a passive and an active paradigm. In normal subjects, distinct cortical networks were involved in sound recognition and localisation, in both the passive and the active paradigm (Maeder et al., 2000a, 2000b). Passive listening to environmental and spatial stimuli, as compared to rest, strongly activated the right auditory cortex but failed to activate the left primary auditory cortex. The specialised networks for sound recognition and localisation could not be visualised on the right and only minimally on the left convexity. A very different activation pattern was obtained in the active condition, where a motor response was required. Workload not only increased the activation of the right auditory cortex, but also allowed the activation of the left primary auditory cortex.
The specialised networks for sound recognition and localisation were almost completely present in both hemispheres. These results show that increasing the workload can i) help to recruit cortical regions in the auditory deafferented hemisphere; and ii) lead to processing of auditory information within specific cortical networks.
References:
Clarke et al. (2000). Neuropsychologia 38: 797-807.
Maeder et al. (2000a). Neuroimage 11: S52.
Maeder et al. (2000b). Neuroimage 11: S33.
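The case report above assessed localisation with ITD and IID cues (interaural time and intensity differences). As background, the ITD for a given source azimuth is often approximated with Woodworth's spherical-head model; a minimal sketch follows, where the head radius and speed of sound are assumed illustrative values, not parameters from the study:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (ITD) for a far-field source, using
    Woodworth's spherical-head approximation: ITD = (r/c) * (az + sin(az))."""
    az = math.radians(azimuth_deg)
    return (head_radius_m / c) * (az + math.sin(az))

# A source straight ahead gives no ITD; one at 90 degrees gives the maximum
# (about 0.66 ms for the assumed head radius):
print(round(itd_seconds(0.0) * 1e3, 2), round(itd_seconds(90.0) * 1e3, 2))  # → 0.0 0.66
```

Sub-millisecond differences of this size are the cue the auditory system exploits for azimuthal localisation; IID cues would be modelled separately as a level difference in dB.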
Abstract:
Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time.
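The abstract above indexes discrimination by d'. A minimal sketch of the standard signal-detection computation from old/new response counts (the counts here are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so rates of exactly 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Illustrative counts: "old" sounds called old 40/50 times,
# "new" sounds incorrectly called old 12/50 times.
print(round(d_prime(40, 10, 12, 38), 2))
```

Higher d' means better separation of repeated from initial presentations, independently of response bias, which is why it is the usual index for continuous recognition tasks.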
Abstract:
Memory is a multi-component cognitive ability to retain and retrieve information presented in different modalities. Research on memory development has shown that memory capacity and memory processes improve gradually from early childhood to adolescence. Findings related to sex differences in memory abilities in early childhood have been inconsistent. Although previous research has demonstrated effects of the modality of stimulus presentation (auditory versus visual) and the type of material to be remembered (visual/spatial versus auditory/verbal) on memory processes and memory organization, recent research with children is rather limited. The present study is a secondary analysis of data originally collected from 530 typically developing Turkish children and adolescents. The purpose of the present study was to examine age-related developments and sex differences in auditory-verbal and visual-spatial short-term memory (STM) in 177 typically developing male and female children, 5 to 8 years of age. Dot-Locations and Word-Lists from the Children's Memory Scale were used to measure visual-spatial and auditory-verbal STM performance, respectively. The findings of the present study suggest age-related differences in both visual-spatial and auditory-verbal STM. Sex differences were observed in only one visual-spatial STM subtest performance. Modality comparisons revealed age- and task-related differences between auditory-verbal and visual-spatial STM performance. There were no sex-related effects in terms of modality-specific performance. Overall, the results of this study provide evidence of STM development in early childhood, and these effects were mostly independent of sex and the modality of the task.
Abstract:
Previous studies have shown that the human posterior cingulate contains a visual processing area selective for optic flow (CSv). However, other studies performed in both humans and monkeys have identified a somatotopic motor region at the same location (CMA). Taken together, these findings suggested the possibility that the posterior cingulate contains a single visuomotor integration region. To test this idea we used fMRI to identify both visual and motor areas of the posterior cingulate in the same brains and to test the activity of those regions during a visuomotor task. Results indicated that rather than a single visuomotor region the posterior cingulate contains adjacent but separate motor and visual regions. CSv lies in the fundus of the cingulate sulcus, while CMA lies in the dorsal bank of the sulcus, slightly superior in terms of stereotaxic coordinates. A surprising and novel finding was that activity in CSv was suppressed during the visuomotor task, despite the visual stimulus being identical to that used to localize the region. This may provide an important clue to the specific role played by this region in the utilization of optic flow to control self-motion.
Abstract:
Three-dimensional kinematic analysis of line of gaze, arm and ball was used to describe the visual and motor behaviour of male adolescents diagnosed with attention deficit hyperactivity disorder (ADHD). The ADHD participants were tested when both on (ADHD-On) and off (ADHD-Off) their medication and compared to age-matched normal controls in a modified table tennis task that required tracking the ball and hitting to cued right and left targets. Long-duration information was provided by a pre-cue, in which the target was illuminated approximately 2 s before the serve, and short-duration information by an early-cue illuminated about 350 ms after the serve, leaving ~500 ms to select the target and perform the action. The ADHD groups differed significantly from the control group in both the pre-cue and early-cue conditions in being less accurate, in having a later onset and duration of pursuit tracking, and a higher frequency of gaze on and off the ball. The use of medication significantly reduced the gaze frequency of the ADHD participants, but surprisingly this did not lead to an increase in pursuit tracking, suggesting a barrier was reached beyond which ball flight information could not be processed. The control and ADHD groups did not differ in arm movement onset, duration and velocity in the short-duration early-cue condition; in the long-duration pre-cue condition, however, the ADHD group's movement time onset and arm velocity differed significantly from controls. The results show that the ADHD groups were able to process short-duration information without experiencing adverse effects on their motor behaviour; however, long-duration information contributed to irregular movement control.
Abstract:
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Abstract:
Detection thresholds for two visual- and two auditory-processing tasks were obtained for 73 children and young adults who varied broadly in reading ability. A reading-disabled subgroup had significantly higher thresholds than a normal-reading subgroup for the auditory tasks only. When analyzed across the whole group, the auditory tasks and one of the visual tasks, coherent motion detection, were significantly related to word reading. These effects were largely independent of ADHD ratings; however, none of these measures accounted for significant variance in word reading after controlling for full-scale IQ. In contrast, phoneme awareness, rapid naming, and nonword repetition each explained substantial, significant word reading variance after controlling for IQ, suggesting more specific roles for these oral language skills in the development of word reading.
Abstract:
This thesis is an investigation of structural brain abnormalities, as well as multisensory and unisensory processing deficits, in autistic traits and Autism Spectrum Disorder (ASD). To achieve this, structural and functional magnetic resonance imaging (fMRI) and psychophysical techniques were employed. ASD is a neurodevelopmental condition characterised by social communication and interaction deficits, as well as repetitive patterns of behaviour, interests and activities. These traits are also thought to be present in the typical population. The Autism Spectrum Quotient questionnaire (AQ) was developed to assess the prevalence of autistic traits in the general population. Von dem Hagen et al. (2011) revealed a link between AQ and white matter (WM) and grey matter (GM) volume (using voxel-based morphometry). However, their findings revealed no difference in GM in areas associated with social cognition. Cortical thickness (CT) measurements are a more direct measure of cortical morphology than GM volume. Therefore, Chapter 2 investigated the relationship between AQ scores and CT in the same sample of participants. This study showed that AQ scores correlated with CT in the left temporo-occipital junction, left posterior cingulate, right precentral gyrus and bilateral precentral sulcus in a typical population. These areas have previously been associated with structural and functional differences in ASD. Thus the findings suggest that, to some extent, autistic traits are reflected in brain structure in the general population. The ability to integrate auditory and visual information is crucial to everyday life, and results are mixed regarding how ASD influences audiovisual integration. To investigate this question, Chapter 3 examined the Temporal Integration Window (TIW), which indicates how precisely sight and sound need to be temporally aligned so that a unitary audiovisual event can be perceived.
26 adult males with ASD and 26 age- and IQ-matched typically developed males were presented with flash-beep (BF), point-light drummer, and face-voice (FV) displays with varying degrees of asynchrony and asked to make Synchrony Judgements (SJ) and Temporal Order Judgements (TOJ). Analysis included fitting Gaussian functions as well as an Independent Channels Model (ICM) to the data (Garcia-Perez & Alcala-Quintana, 2012). Gaussian curve fitting for SJs showed that the ASD group had a wider TIW, but for TOJs no group effect was found. The ICM supported these results, and model parameters indicated that the wider TIW for SJs in the ASD group was not due to sensory processing at the unisensory level, but rather to decreased temporal resolution at a decisional level of combining sensory information. Furthermore, when performing TOJs, the ICM revealed a smaller Point of Subjective Simultaneity (PSS; closer to physical synchrony) in the ASD group than in the TD group. Finding that audiovisual temporal processing is different in ASD encouraged us to investigate the neural correlates of multisensory as well as unisensory processing using functional magnetic resonance imaging (fMRI). Therefore, Chapter 4 investigated audiovisual, auditory and visual processing in ASD of simple BF displays and complex, social FV displays. During a block-design experiment, we measured the BOLD signal while 13 adults with ASD and 13 typically developed (TD) age-, sex- and IQ-matched adults were presented with audiovisual, auditory and visual information of BF and FV displays. Our analyses revealed that processing of audiovisual as well as unisensory auditory and visual stimulus conditions in both the BF and FV displays was associated with reduced activation in ASD. Audiovisual, auditory and visual conditions of FV stimuli revealed reduced activation in ASD in regions of the frontal cortex, while BF stimuli revealed reduced activation in the lingual gyri.
The inferior parietal gyrus revealed an interaction between the stimulus sensory condition of BF stimuli and group. Conjunction analyses revealed smaller audiovisual-sensitive regions of the superior temporal cortex (STC) in ASD. Against our predictions, the STC did not reveal any activation differences, per se, between the two groups. However, a superior frontal area was shown to be sensitive to audiovisual face-voice stimuli in the TD group, but not in the ASD group. Overall this study indicated differences in brain activity for audiovisual, auditory and visual processing of social and non-social stimuli in individuals with ASD compared to TD individuals. These results contrast with the previous behavioural findings, which suggested altered audiovisual integration yet intact auditory and visual processing in ASD. Our behavioural findings revealed audiovisual temporal processing deficits in ASD during SJ tasks; therefore we investigated the neural correlates of SJs in ASD and TD controls. As in Chapter 4, we used fMRI in Chapter 5 to investigate audiovisual temporal processing in ASD in the same participants as recruited in Chapter 4. BOLD signals were measured while the ASD and TD participants were asked to make SJs on audiovisual displays at different levels of asynchrony: the participant's PSS, audio leading visual information (audio first), and visual leading audio information (visual first). Whereas no effect of group was found with BF displays, increased putamen activation was observed in ASD participants compared to TD participants when making SJs on FV displays. Investigating SJs on audiovisual displays in the bilateral superior temporal gyrus (STG), an area involved in audiovisual integration (see Chapter 4), we found no group differences or interaction between group and levels of audiovisual asynchrony.
The investigation of different levels of asynchrony revealed a complex pattern of results, indicating a network of areas more involved in processing PSS than audio-first and visual-first displays, as well as areas responding differently to audio first compared to visual first. These activation differences between audio-first and visual-first displays in different brain areas are consistent with the view that audio-leading and visual-leading stimuli are processed differently.
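The thesis above estimates the PSS and the width of the temporal integration window by fitting Gaussian functions to synchrony-judgement proportions. A hypothetical stdlib-only sketch using a coarse grid search for the least-squares fit (the model shape, parameter grids, and data are assumptions for illustration, not the thesis's actual fitting procedure):

```python
import math

def fit_sj_gaussian(soas_ms, p_sync):
    """Fit p(SOA) = exp(-(SOA - pss)^2 / (2 * sigma^2)) by grid search.
    Returns (pss, sigma); sigma indexes the temporal integration window."""
    best = (None, None, float("inf"))
    for pss in range(-200, 201, 5):          # candidate PSS values, ms
        for sigma in range(20, 401, 5):      # candidate window widths, ms
            sse = sum((p - math.exp(-(s - pss) ** 2 / (2 * sigma ** 2))) ** 2
                      for s, p in zip(soas_ms, p_sync))
            if sse < best[2]:
                best = (pss, sigma, sse)
    return best[0], best[1]

# Illustrative (noiseless) synchrony-judgement proportions across SOAs,
# generated from a known PSS of 30 ms and width of 120 ms:
soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
props = [math.exp(-(s - 30) ** 2 / (2 * 120 ** 2)) for s in soas]
pss, sigma = fit_sj_gaussian(soas, props)
print(pss, sigma)  # → 30 120
```

A wider fitted sigma corresponds to a wider TIW, i.e. larger asynchronies still judged as synchronous; real data would be noisy proportions per SOA rather than an exact Gaussian.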
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g. audio-visual interactions; see Calvert, Spence, & Stein, 2004). On the one hand, integration of multimodal input as a functional principle of the brain enables the versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is always weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one often finds one modality dominating over another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Study 1 and Study 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), which persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as also demonstrated in preceding studies (see Spence, Parise, & Chen, 2012), while no sensory dominance was revealed in Study 2. Potential explanations are discussed. Study 3 examined verbal and visual overshadowing effects in 4- to 6-year-olds.
The aim was to examine whether verbalization (i.e., verbally describing a previously seen face), or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. As opposed to a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor for recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
Abstract:
The compound eyes of mantis shrimps, a group of tropical marine crustaceans, incorporate principles of serial and parallel processing of visual information that may be applicable to artificial imaging systems. Their eyes include numerous specializations for analysis of the spectral and polarizational properties of light, and include more photoreceptor classes for analysis of ultraviolet light, color, and polarization than occur in any other known visual system. This is possible because receptors in different regions of the eye are anatomically diverse and incorporate unusual structural features, such as spectral filters, not seen in other compound eyes. Unlike eyes of most other animals, eyes of mantis shrimps must move to acquire some types of visual information and to integrate color and polarization with spatial vision. Information leaving the retina appears to be processed into numerous parallel data streams leading into the central nervous system, greatly reducing the analytical requirements at higher levels. Many of these unusual features of mantis shrimp vision may inspire new sensor designs for machine vision.
Abstract:
The present study investigates human visual processing of simple two-colour patterns using a delayed match-to-sample paradigm with positron emission tomography (PET). This study is unique in that we specifically designed the visual stimuli to be the same for both pattern and colour recognition, with all patterns being abstract shapes, not easily verbally coded, composed of two-colour combinations. We did this to explore those brain regions required for both colour and pattern processing and to separate those areas of activation required for one or the other. We found that both tasks activated similar occipital regions, the major difference being more extensive activation in pattern recognition. A right-sided network involving the inferior parietal lobule, the head of the caudate nucleus, and the pulvinar nucleus of the thalamus was common to both paradigms. Pattern recognition also activated the left temporal pole and right lateral orbital gyrus, whereas colour recognition activated the left fusiform gyrus and several right frontal regions.
Abstract:
Auditory event-related potentials (AERPs) are widely used in diverse fields of today's neuroscience, concerning auditory processing, speech perception, language acquisition, neurodevelopment, attention and cognition in normal aging, gender, and developmental, neurologic and psychiatric disorders. However, their transposition to clinical practice has remained minimal, mainly due to scarce literature on normative data across age, the wide spectrum of results, the variety of auditory stimuli used, and the different neuropsychological meanings ascribed to AERP components by different authors. One of the most prominent AERP components studied in recent decades is N1, which reflects auditory detection and discrimination. Subsequently, N2 indicates attention allocation and phonological analysis. The simultaneous analysis of N1 and N2 elicited by feasible novelty experimental paradigms, such as the auditory oddball, seems an objective method to assess central auditory processing. The aim of this systematic review was to bring forward normative values for auditory oddball N1 and N2 components across age. EBSCO, PubMed, Web of Knowledge and Google Scholar were systematically searched for studies that elicited N1 and/or N2 by an auditory oddball paradigm. A total of 2,764 papers were initially identified in the databases, of which 19 resulted from hand search and additional references, spanning 1988 to 2013, the last 25 years. A final total of 68 studies met the eligibility criteria, with a total of 2,406 participants from control groups for N1 (age range 6.6-85 years; mean 34.42) and 1,507 for N2 (age range 9-85 years; mean 36.13). Polynomial regression analysis revealed that N1 latency decreases with aging at Fz and Cz, and that N1 amplitude at Cz decreases from childhood to adolescence and stabilizes after 30-40 years, while at Fz the decrement finishes by 60 years and amplitude increases markedly after this age. Regarding N2, latency did not covary with age, but amplitude showed a significant decrement with age at both Cz and Fz.
Results suggested reliable normative values for the Cz and Fz electrode locations; however, changes in brain development and component topography over age should be considered in clinical practice.
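The review's normative curves come from polynomial regression of component latency and amplitude on age. A minimal sketch of a second-degree least-squares fit via the normal equations, on synthetic data (the actual normative coefficients are reported in the review, not reproduced here):

```python
def polyfit2(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the 3x3 normal equations."""
    # Moments of x up to x^4 and cross-moments with y.
    sx = [sum(x ** k for x in xs) for k in range(5)]
    sy = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Augmented normal-equation matrix [A | b], solved by Gauss-Jordan elimination.
    m = [[sx[0], sx[1], sx[2], sy[0]],
         [sx[1], sx[2], sx[3], sy[1]],
         [sx[2], sx[3], sx[4], sy[2]]]
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vi, vj in zip(m[i], m[j])]
    return m[0][3], m[1][3], m[2][3]

# Synthetic "latency vs age" data generated from y = 120 - 0.8*x + 0.01*x^2:
ages = [6, 10, 15, 20, 30, 40, 50, 60, 70, 80]
lat = [120 - 0.8 * x + 0.01 * x ** 2 for x in ages]
a, b, c = polyfit2(ages, lat)
print(round(a, 2), round(b, 2), round(c, 4))  # → 120.0 -0.8 0.01
```

With noiseless data generated from the model the coefficients are recovered exactly (up to floating-point error); on real latency data the fitted curve would trace the age trend the review reports.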
Abstract:
Dorsal and ventral pathways for syntacto-semantic speech processing in the left hemisphere are represented in the dual-stream model of auditory processing. Here we report new findings for the right dorsal and ventral temporo-frontal pathway during processing of affectively intonated speech (i.e. affective prosody) in humans, together with several left hemispheric structural connections, partly resembling those for syntacto-semantic speech processing. We investigated white matter fiber connectivity between regions responding to affective prosody in several subregions of the bilateral superior temporal cortex (secondary and higher-level auditory cortex) and of the inferior frontal cortex (anterior and posterior inferior frontal gyrus). The fiber connectivity was investigated by using probabilistic diffusion tensor based tractography. The results underscore several so far underestimated auditory pathway connections, especially for the processing of affective prosody, such as a right ventral auditory pathway. The results also suggest the existence of a dual-stream processing in the right hemisphere, and a general predominance of the dorsal pathways in both hemispheres underlying the neural processing of affective prosody in an extended temporo-frontal network.