930 results for Visual and auditory processing
Abstract:
Background: This study was conducted to describe the association between the central auditory processing mechanism and cardiac autonomic regulation. Methods: We searched for papers on this topic in the following databases: Medline, PubMed, Lilacs, Scopus and Cochrane. The key words were: “auditory stimulation, heart rate, autonomic nervous system and P300”. Results: The findings in the literature demonstrated that auditory stimulation influences the autonomic nervous system and has been used in conjunction with other methods. It is considered a promising step in the investigation of therapeutic procedures for rehabilitation and improving quality of life in several pathologies. Conclusion: Research on the association between auditory stimulation and cardiac autonomic nervous system activity has received significant contributions, particularly with respect to musical stimuli.
Abstract:
This study investigated whether there are differences in the speech-evoked Auditory Brainstem Response among children with Typical Development (TD), (Central) Auditory Processing Disorder ((C)APD), and Language Impairment (LI). The speech-evoked Auditory Brainstem Response was tested in 57 children (ages 6-12). The children were placed into three groups: TD (n = 18), (C)APD (n = 18) and LI (n = 21). Speech-evoked ABRs were elicited using the five-formant syllable /da/. Three dimensions were defined for analysis: timing, harmonics, and pitch. A comparative analysis of the responses between the typically developing children and the children with (C)APD and LI revealed abnormal encoding of the speech acoustic features that are characteristic of speech perception in children with (C)APD and LI, although the two groups differed in their abnormalities. While the children with (C)APD may have had greater difficulty distinguishing stimuli based on timing cues, the children with LI had the additional difficulty of distinguishing speech harmonics, which are important to the identification of speech sounds. These data suggest that an inefficient representation of crucial components of speech sounds may contribute to the difficulties with language processing found in children with LI. Furthermore, these findings may indicate that the neural processes mediated by the auditory brainstem differ among children with auditory processing and speech-language disorders. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
CONTEXT AND OBJECTIVE: Children and adolescents who live in situations of social vulnerability present a series of health problems. Nonetheless, affirmations that sensory and cognitive abnormalities are present are a matter of controversy. The aim of this study was to investigate aspects of auditory processing, by applying the brainstem auditory evoked potential (BAEP) and behavioral auditory processing tests to children living on the streets, and to compare the results with a control group. DESIGN AND SETTING: Cross-sectional study in the Laboratory of Auditory Processing, School of Medicine, Universidade de São Paulo. METHODS: The auditory processing tests were applied to a group of 27 individuals, subdivided into 11 children (7 to 10 years old) and 16 adolescents (11 to 16 years old), of both sexes, in situations of social vulnerability, compared with an age-matched control group of 10 children and 11 adolescents without complaints. The BAEP test was also applied to investigate the integrity of the auditory pathway. RESULTS: For both children and adolescents, there were significant differences between the study and control groups in most of the tests applied, with significantly worse performance in the study group, except in the pediatric speech intelligibility test. Only one child had an abnormal result in the BAEP test. CONCLUSIONS: The results showed that the study group (children and adolescents) presented poor performance in the behavioral auditory processing tests, despite their unaltered auditory brainstem pathways, as shown by their normal results in the BAEP test.
Abstract:
Following striate cortex damage in monkeys and humans there can be residual function mediated by parallel visual pathways. In humans this can sometimes be associated with a “feeling” that something has happened, especially with rapid movement or abrupt onset. For less transient events, discriminative performance may still be well above chance even when the subject reports no conscious awareness of the stimulus. In a previous study we examined parameters that yield good residual visual performance in the “blind” hemifield of a subject with unilateral damage to the primary visual cortex. With appropriate parameters we demonstrated good discriminative performance, both with and without conscious awareness of a visual event. These observations raise the possibility of imaging the brain activity generated in the “aware” and the “unaware” modes, with matched levels of discrimination performance, and hence of revealing patterns of brain activation associated with visual awareness. The intact hemifield also allows a comparison with normal vision. Here we report the results of a functional magnetic resonance imaging study on the same subject carried out under aware and unaware stimulus conditions. The results point to a shift in the pattern of activity from neocortex in the aware mode, to subcortical structures in the unaware mode. In the aware mode prestriate and dorsolateral prefrontal cortices (area 46) are active. In the unaware mode the superior colliculus is active, together with medial and orbital prefrontal cortical sites.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Despite several decades of research, neither clinicians nor academics can agree on a single definition of central auditory processing (CAP) or central auditory processing disorder (CAPD). This article considers why this is the case, and comments on the resulting implications for CAP assessment and CAPD rehabilitation in the clinic.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
PURPOSE. The driving environment is becoming increasingly complex, including both visual and auditory distractions within the in-vehicle and external driving environments. This study was designed to investigate the effect of visual and auditory distractions on a performance measure that has been shown to be related to driving safety, the useful field of view. METHODS. A laboratory study recorded the useful field of view in 28 young visually normal adults (mean age 22.6 ± 2.2 years). The useful field of view was measured in the presence and absence of visual distracters (of the same angular subtense as the target) and with three levels of auditory distraction (none, listening only, listening and responding). RESULTS. Central errors increased significantly (P < 0.05) in the presence of auditory but not visual distracters, while peripheral errors increased in the presence of both visual and auditory distracters. Peripheral errors increased with eccentricity and were greatest in the inferior region in the presence of distracters. CONCLUSIONS. Visual and auditory distracters reduce the extent of the useful field of view, and these effects are exacerbated in inferior and peripheral locations. This result has significant ramifications for road safety in an increasingly complex in-vehicle and driving environment.
Abstract:
Auditory processing disorder (APD) is diagnosed when a patient presents with listening difficulties that cannot be explained by a peripheral hearing impairment or higher-order cognitive or language problems. This review explores the association between APD and other specific developmental disorders such as dyslexia and attention-deficit hyperactivity disorder. The diagnosis and aetiology of APD are similar to those of other developmental disorders, and it is well established that APD often co-occurs with impairments of language, literacy, and attention. The genetic and neurological causes of APD are poorly understood, but developmental and behavioural genetic research with other disorders suggests that clinicians should expect APD to co-occur with other symptoms frequently. The clinical implications of co-occurring symptoms of other developmental disorders are considered, and the review concludes that a multi-professional approach to the diagnosis and management of APD, involving speech and language therapy and psychology as well as audiology, is essential to ensure that children have access to the most appropriate range of support and interventions.
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel to users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent sounds; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered by brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory localization, a problem with implications for the general question in learning of how the brain determines what lessons to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
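The ‘guess and check’ hypothesis above can be caricatured as incremental error correction driven by post-saccade visual feedback. The sketch below is purely illustrative: the delta-rule form, learning rate, and trial count are assumptions, not values or models taken from the study; it only shows how repeated feedback on a fixed mismatch produces a partial shift like the 22-28% reported.

```python
# A minimal, hypothetical delta-rule sketch of 'guess and check' calibration:
# after each auditory-guided saccade, visual feedback displaced from the sound
# by a fixed mismatch nudges the internal auditory-space estimate.
# Learning rate and trial count are illustrative assumptions.

def simulate_feedback_calibration(mismatch_deg: float = 6.0,
                                  learning_rate: float = 0.01,
                                  n_trials: int = 30) -> float:
    """Return the cumulative shift (degrees) of auditory localization after
    repeated exposure to visual feedback displaced by `mismatch_deg`."""
    shift = 0.0
    for _ in range(n_trials):
        # Error signal: remaining distance between the current estimate
        # and the displaced visual feedback.
        error = mismatch_deg - shift
        shift += learning_rate * error
    return shift

shift = simulate_feedback_calibration()
# With these assumed parameters the shift lands within the 1.3-1.7 degree
# (22-28% of mismatch) range described above.
```

The point of the sketch is only that partial adaptation falls out naturally from incremental feedback learning; it makes no claim about the actual mechanism or circuitry.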
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. This makes the inferior colliculus an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, and an attractive target for understanding stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
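One common way to quantify the stimulation-induced bias described above is a logistic psychometric function over the probe-minus-reference frequency difference, with stimulation modeled as a horizontal shift. The sketch below is a hedged illustration of that general approach, not the analysis used in the experiments; the slope and bias values are invented for demonstration.

```python
import math

# Hypothetical psychometric model: probability of reporting "probe higher"
# as a logistic function of the probe-minus-reference frequency difference
# (in octaves). Microstimulation is modeled as an additive bias term that
# shifts choices toward the stimulated site's best frequency.
# Slope and bias magnitudes are illustrative assumptions.

def p_higher(delta_octaves: float, bias: float = 0.0, slope: float = 8.0) -> float:
    """P(report 'probe higher') under a logistic choice model."""
    return 1.0 / (1.0 + math.exp(-slope * (delta_octaves + bias)))

# Without stimulation, judgments are unbiased at zero frequency difference...
p_no_stim = p_higher(0.0)
# ...while stimulating a site tuned above the reference biases choices
# toward 'higher' (positive bias), and below the reference toward 'lower'.
p_stim_high_bf = p_higher(0.0, bias=0.1)
p_stim_low_bf = p_higher(0.0, bias=-0.1)
```

In this framing, the "coarse but significant correlation" reported above corresponds to the fitted bias tending to share the sign of the site's best frequency relative to the reference, with substantial scatter across sites.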
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds across a very broad region of space, and many are entirely spatially insensitive, so it is unknown how these neurons will respond to a situation with more than one sound. I use multiple amplitude-modulated (AM) stimuli of different modulation frequencies, which the inferior colliculus represents using a spike timing code. This allows me to use spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single-sound condition become dramatically more selective in the dual-sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
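Entrainment to a tagged modulation frequency is classically quantified with vector strength. The sketch below shows that standard computation on synthetic spike times; the 20 Hz rate and the spike trains are made up for illustration and are not data from the study.

```python
import math

# Vector strength (Goldberg & Brown, 1969): project each spike onto the
# phase of one modulation frequency and take the length of the mean
# resultant vector. 1.0 = perfect phase locking, ~0 = no entrainment.
# Spike times and the 20 Hz modulation rate here are illustrative.

def vector_strength(spike_times: list[float], mod_freq_hz: float) -> float:
    """Phase locking of spike times (seconds) to one modulation frequency."""
    if not spike_times:
        return 0.0
    phases = [2.0 * math.pi * mod_freq_hz * t for t in spike_times]
    x = sum(math.cos(p) for p in phases) / len(phases)
    y = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(x, y)

# A spike train perfectly locked to a 20 Hz envelope (one spike per cycle,
# constant phase) yields vector strength 1.0 at the tagged frequency and
# near-zero at an untagged frequency such as 17 Hz.
locked = [i / 20.0 for i in range(40)]
vs_tagged = vector_strength(locked, 20.0)
vs_untagged = vector_strength(locked, 17.0)
```

With two simultaneous sounds tagged at different AM rates, comparing vector strength at each rate indicates which source a neuron's spikes are following, which is the logic of the frequency-tagging analysis described above.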
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Twenty-three children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure); the dichotic digit test and staggered spondaic word test (selective attention); and pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance on the auditory analysis abilities. The groups performed similarly on auditory closure, while the stroke group showed pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed moderately impaired auditory ability. Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
Abstract:
Given the polarity-dependent effects of transcranial direct current stimulation (tDCS) in facilitating or inhibiting neuronal processing, and tDCS effects on pitch perception, we tested the effects of tDCS on temporal aspects of auditory processing. We aimed to change the baseline activity of the auditory cortex using tDCS so as to modulate temporal aspects of auditory processing in healthy subjects without hearing impairment. Eleven subjects received 2 mA bilateral anodal, cathodal and sham tDCS over the auditory cortex in a randomized and counterbalanced order. Subjects were evaluated with the Random Gap Detection Test (RGDT), a test measuring temporal processing abilities in the auditory domain, before and during stimulation. Statistical analysis revealed a significant interaction effect of time vs. tDCS condition for 4000 Hz and for clicks. Post-hoc tests showed significant differences in RGDT performance according to stimulation polarity: anodal stimulation improved subjects' performance by 22.5% and cathodal stimulation decreased it by 54.5%, compared to baseline. For clicks, anodal stimulation also increased performance, by 29.4% compared to baseline. tDCS presented polarity-dependent effects on the activity of the auditory cortex, resulting in a positive or negative impact on performance in a temporal resolution task. These results encourage further studies exploring tDCS in central auditory processing disorders.