970 results for auditory processing
Abstract:
An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing in GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Whereas typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than to speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested at the age of 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrates both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy.
Abstract:
Older adults frequently report that they can hear what they have been told but cannot understand the meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant sounds (e.g. a competing talker) adds another layer of difficulty to their speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in older adults' perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could impact the integrity of temporal processing in the central auditory system. Psychoacoustic studies in humans have also shown that hearing loss can explain the decline in older adults' performance in quiet compared to younger adults, but these psychoacoustic measurements do not accurately capture auditory deficits in noisy conditions. Together, these results suggest that temporal auditory processing deficits could play an important role in explaining the reduced ability of older adults to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level in the auditory system these changes are particularly relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our standard) and older adults. We also investigated a possible interaction between processing carried out in the midbrain and in the cortex.
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g. audio-visual interactions, see Calvert, Spence, & Stein, 2004). On the one hand, integration of multimodal input as a functional principle of the brain enables the versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is always weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one often finds one modality dominating over another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), which persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as has also been demonstrated in preceding studies (see Spence, Parise, & Chen, 2012), whereas no sensory dominance was revealed in Study 2. Potential explanations are discussed. Study 3 addressed verbal and visual overshadowing effects in 4- to 6-year-olds.
The aim was to examine whether verbalization (i.e., verbally describing a previously seen face) or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. Rather than a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor of recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
Abstract:
It is well known that self-generated stimuli are processed differently from externally generated stimuli. For example, many people have noticed since childhood that it is very difficult to tickle oneself. In the auditory domain, self-generated sounds elicit smaller brain responses than externally generated sounds, a phenomenon known as the sensory attenuation (SA) effect. SA is manifested in reduced amplitudes of evoked responses as measured with M/EEG, decreased firing rates of neurons, and a lower level of perceived loudness for self-generated sounds. The predominant explanation for SA is based on the idea that self-generated stimuli are predicted (e.g., the forward model account); on this view, it is their predictability that is crucial for SA. In contrast, the sensory gating account emphasizes a general suppressive effect of actions on sensory processing, regardless of the predictability of the stimuli. Both accounts have received empirical support, which suggests that both mechanisms may exist. In Chapter 2, three behavioural studies concerning the influence of motor activation on auditory perception were presented. Study 1 compared the effects of SA and attention in an auditory detection task and showed that SA was present even when substantial attention was paid to unpredictable stimuli. Study 2 compared the perceived loudness of tones generated by others between Chinese and British participants. Compared to externally generated tones, a decrease in perceived loudness for other-generated tones was found among Chinese but not among British participants. In Study 3, partial evidence was found that auditory detection performance was impaired even when participants merely read words related to action. In Chapter 3, the classic SA effect of M100 suppression was replicated with MEG in Study 4. With time-frequency analysis, a potential neural information processing sequence was found in auditory cortex.
Prior to the onset of self-generated tones, there was an increase of oscillatory power in the alpha band. After stimulus onset, reduced gamma power and alpha/beta phase locking were found. The three temporally segregated oscillatory events correlated with each other and with the SA effect, and may be the underlying neural implementation of SA. In Chapter 4, a TMS-MEG study was presented investigating the role of the cerebellum in adapting to delayed presentation of self-generated tones (Study 5). It demonstrated that in the sham stimulation condition, the brain can adapt to the delay (about 100 ms) within 300 trials of learning, shown by a significant increase of the SA effect in the suppression of the M100, but not the M200, component. After the cerebellum was stimulated with a suppressive TMS protocol, however, the adaptation in M100 suppression disappeared and the pattern of M200 suppression reversed to M200 enhancement. These data support the idea that the suppressive effect of actions on auditory processing is a consequence of both motor-driven sensory predictions and general sensory gating. The results also demonstrate the importance of neural oscillations in implementing the SA effect and the critical role of the cerebellum in learning sensory predictions under sensory perturbation.
Abstract:
How and why visualisations support learning was the subject of this qualitative instrumental collective case study. Five computer programming languages (PHP, Visual Basic, Alice, GameMaker, and RoboLab) supporting differing degrees of visualisation were used as cases to explore the effectiveness of software visualisation in developing fundamental computer programming concepts (sequence, iteration, selection, and modularity). Cognitive theories of visual and auditory processing, cognitive load, and mental models provided a framework in which cognitive development was tracked and measured by thirty-one 15-17-year-old students, drawn from a Queensland metropolitan secondary private girls' school, who took part as active participants in the research. Seventeen findings in three sections increase our understanding of the effects of visualisation on the learning process. The study extended the use of mental model theory to track the learning process, and demonstrated the application of student research-based metacognitive analysis of individual and peer cognitive development both as a means to support research and as an approach to teaching. The findings also put forward an explanation for failures in previous software visualisation studies; in particular, the study demonstrated that for the cases examined, where complex concepts are being developed, mixing auditory (or textual) and visual elements can result in excessive cognitive load and impede learning. This finding provides a framework for selecting the most appropriate instructional programming language based on the cognitive complexity of the concepts under study.
Abstract:
In this paper we provide normative data along multiple cognitive and affective dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are increasingly utilized as stimuli in the cognitive, neuropsychological, and neuroimaging fields, yet no comprehensive set of normative information for this type of stimulus has been available for use across these experimental domains. Experiment 1 collected data from 162 participants in an on-line questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available so that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies.
Abstract:
Discovering the means to prevent and cure schizophrenia is a vision that motivates many scientists. But in order to achieve this goal, we need to understand its neurobiological basis. The emergent metadiscipline of cognitive neuroscience fields an impressive array of tools that can be marshaled towards achieving this goal, including powerful new methods of imaging the brain (both structural and functional) as well as assessments of perceptual and cognitive capacities based on psychophysical procedures, experimental tasks, and models developed by cognitive science. We believe that the integration of data from this array of tools offers the greatest potential for advancing understanding of the neural basis not only of normal cognition but also of the cognitive impairments that are fundamental to schizophrenia. Since sufficient expertise in the application of these tools and methods rarely resides in a single individual, or even a single laboratory, collaboration is a key element in this endeavor. Here, we review some of the products of our integrative efforts in collaboration with our colleagues on the East Coast of Australia and the Pacific Rim. This research focuses on the neural basis of executive function deficits and impairments in early auditory processing in patients, using various combinations of performance indices (from perceptual and cognitive paradigms), ERPs, fMRI, and sMRI. In each case, integration of two or more sources of information provides more information than any one source alone by revealing new insights into structure-function relationships. Furthermore, the addition of other imaging methodologies (such as DTI) and approaches (such as computational models of cognition) offers new horizons in human brain imaging research and in understanding human behavior.
Abstract:
BACKGROUND: Dystrobrevin binding protein 1 (DTNBP1) is a schizophrenia susceptibility gene involved in the regulation of neurotransmission (especially dopamine and glutamate) and in neurodevelopment. The gene is known to be associated with cognitive deficit phenotypes within schizophrenia. In our previous studies, DTNBP1 was found to be associated not only with schizophrenia but with other psychiatric disorders including psychotic depression, post-traumatic stress disorder, nicotine dependence and opiate dependence. These findings suggest that DTNBP1 may be involved in pathways that lead to multiple psychiatric phenotypes. In this study, we explored the association between DTNBP1 single nucleotide polymorphisms (SNPs) and multiple psychiatric phenotypes included in the Diagnostic Interview of Psychosis (DIP). METHODS: Five DTNBP1 SNPs, rs17470454, rs1997679, rs4236167, rs9370822 and rs9370823, were genotyped in 235 schizophrenia subjects screened for various phenotypes in the domains of depression, mania, hallucinations, delusions, subjective thought disorder, behaviour and affect, and speech disorder. SNP-phenotype association was determined with ANOVA under general, dominant/recessive and over-dominance models. RESULTS: Post hoc tests determined that SNP rs1997679 was associated with visual hallucinations; SNP rs4236167 was associated with auditory hallucinations in general as well as with specific features including non-verbal, abusive and third-person auditory hallucinations; and SNP rs9370822 was associated with visual and olfactory hallucinations. SNPs that survived correction for multiple testing were rs4236167, for third-person and abusive auditory hallucinations, and rs9370822, for olfactory hallucinations.
CONCLUSION: These data suggest that DTNBP1 is likely to play a role in the development of auditory, visual and olfactory hallucinations, which is consistent with evidence of DTNBP1 activity in auditory processing regions, in visual processing and in the regulation of glutamate and dopamine activity.
Abstract:
Non-stationary signal modeling is a well-addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time-varying linear prediction and AM-FM modeling, the latter being the more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA1, DESA2, the auditory processing approach, and the zero-crossing (ZC) approach, are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. We have also proposed three new methods for instantaneous frequency (IF) estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC-based methods give better results than popular methods such as DESA in both clean and noisy conditions.
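The Hilbert-transform estimator mentioned above can be sketched in a few lines: the analytic signal gives the instantaneous amplitude (AM) as its magnitude and the instantaneous frequency (FM) as the derivative of its unwrapped phase. This is a generic illustration, not the paper's code; the synthetic test signal and all parameters are assumptions chosen for demonstration.

```python
# Minimal sketch of Hilbert-transform AM-FM estimation for a narrow-band
# signal. Test signal: 500 Hz carrier, 5 Hz amplitude modulation, 20 Hz
# sinusoidal frequency modulation with 50 Hz peak deviation (all illustrative).
import numpy as np
from scipy.signal import hilbert

fs = 8000.0                          # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)

am = 1.0 + 0.5 * np.cos(2 * np.pi * 5 * t)
phase = 2 * np.pi * 500 * t + (50.0 / 20.0) * np.sin(2 * np.pi * 20 * t)
x = am * np.cos(phase)

analytic = hilbert(x)                # x + j * H{x}
inst_amp = np.abs(analytic)          # AM estimate (envelope)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # FM estimate (Hz)
```

Away from the signal edges, `inst_amp` tracks the 5 Hz envelope and `inst_freq` oscillates around the 500 Hz carrier; adding white Gaussian noise to `x` and re-running is one way to probe the noise robustness question the abstract raises.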
Abstract:
Neuronal oscillations are thought to underlie interactions between distinct brain regions required for normal memory functioning. This study aimed at elucidating the neuronal basis of memory abnormalities in neurodegenerative disorders. Magnetoencephalography (MEG) was used to measure oscillatory brain signals in patients with Alzheimer's disease (AD), a neurodegenerative disease causing progressive cognitive decline, and mild cognitive impairment (MCI), a disorder characterized by mild but clinically significant complaints of memory loss without apparent impairment in other cognitive domains. Furthermore, to help interpret our AD/MCI results and to develop more powerful oscillatory MEG paradigms for clinical memory studies, oscillatory neuronal activity underlying declarative memory, the function which is afflicted first in both AD and MCI, was investigated in a group of healthy subjects. An increased temporal-lobe contribution coinciding with parieto-occipital deficits in oscillatory activity was observed in AD patients: sources in the 6–12.5 Hz range were significantly stronger in the parieto-occipital region and significantly weaker in the right temporal region in AD patients, as compared to MCI patients and healthy elderly subjects. Further, the auditory steady-state response, thought to represent both evoked and induced activity, was enhanced in AD patients, as compared to controls, possibly reflecting decreased inhibition in auditory processing and deficits in adaptation to repetitive stimulation with low relevance. Finally, the methodological study revealed that successful declarative encoding and retrieval is associated with increases in occipital gamma and right-hemisphere theta power in healthy unmedicated subjects. This result suggests that investigation of neuronal oscillations during cognitive performance could potentially be used to investigate declarative memory deficits in AD patients.
Taken together, the present results provide insight into the role of brain oscillatory activity in memory function and memory disorders.
Abstract:
Background: Opioid dependence is a chronic, severe brain disorder associated with enormous health and social problems. The rate of relapse back to opioid abuse is very high, especially in early abstinence, but neuropsychological and neurophysiological deficits during opioid abuse or soon after cessation of opioids have scarcely been investigated. Structural brain changes and their correlations with the length of opioid abuse or the age of abuse onset are also not known. In this study, cognitive functions, the neural basis of cognitive dysfunction, and structural brain changes were studied in opioid-dependent patients and in age- and sex-matched healthy controls. Materials and methods: All subjects participating in the study (23 opioid-dependent patients, of whom 15 were also benzodiazepine co-dependent and five cannabis co-dependent, and 18 age- and sex-matched healthy controls) went through Structured Clinical Interviews (SCID) to obtain DSM-IV axis I and II diagnoses and to exclude psychiatric illness not related to opioid dependence or personality disorders. Simultaneous magnetoencephalography (MEG) and electroencephalography (EEG) measurements were performed on 21 opioid-dependent individuals on the day of hospitalization for withdrawal therapy. The neural basis of auditory processing was studied, and pre-attentive attention and sensory memory were investigated. During withdrawal, 15 opioid-dependent patients participated in neuropsychological tests measuring fluid intelligence, attention and working memory, verbal and visual memory, and executive functions. Fifteen healthy subjects served as controls for the MEG-EEG measurements and neuropsychological assessment. Brain magnetic resonance imaging (MRI) was obtained from 17 patients after approximately two weeks of abstinence, and from 17 controls.
The areas of different brain structures and the absolute and relative volumes of the cerebrum, cerebral white and gray matter, and cerebrospinal fluid (CSF) spaces were measured, and the Sylvian fissure ratio (SFR) and bifrontal ratio were calculated. Correlations between the cerebral measures and neuropsychological performance were also computed. Results: MEG-EEG measurements showed that, compared to controls, the opioid-dependent patients had a delayed mismatch negativity (MMN) response to novel sounds in the EEG, and a delayed P3am in the MEG over the hemisphere contralateral to the stimulated ear. The equivalent current dipole (ECD) of the N1m response was stronger in patients with benzodiazepine co-dependence than in those without benzodiazepine co-dependence or in controls. In early abstinence, the opioid-dependent patients performed worse than the controls in tests measuring attention and working memory, executive function, and fluid intelligence. Results of the Culture Fair Intelligence Test (CFIT), testing fluid intelligence, and the Paced Auditory Serial Addition Test (PASAT), measuring attention and working memory, correlated positively with the number of days of abstinence. MRI measurements showed that the relative volume of CSF was significantly larger in opioid-dependent patients, which could also be seen in visual analysis. The Sylvian fissures, expressed by the SFR, were also wider in patients, and this widening correlated negatively with the age of opioid abuse onset. In controls, relative gray matter volume had a positive correlation with composite cognitive performance, but this correlation was not found in opioid-dependent patients in early abstinence. Conclusions: Opioid-dependent patients had wide Sylvian fissures and CSF spaces, indicating frontotemporal atrophy. Dilatation of the Sylvian fissures correlated with the age of abuse onset. During early withdrawal, the cognitive performance of opioid-dependent patients was impaired. While intoxicated, pre-attentive attention to novel stimuli was delayed, and benzodiazepine co-dependence impaired sound detection.
All these changes point to disturbances in frontotemporal areas.
Abstract:
The project consisted of two long-term follow-up studies of preterm children addressing the question of whether intrauterine growth restriction affects outcome. Assessment at 5 years of age of 203 children with a birth weight of less than 1000 g, born in Finland in 1996-1997, showed that 9% of the children had cognitive impairment, 14% cerebral palsy, and 4% needed a hearing aid. The intelligence quotient was lower (p<0.05) than the reference value. Thus, 20% exhibited major disabilities, 19% minor disabilities, and 61% had no functional abnormalities. Being small for gestational age (SGA) was associated with sub-optimal growth later. In children born before 27 gestational weeks, the SGA children had more neuropsychological disabilities than those born appropriate for gestational age (AGA). In another cohort, with birth weight less than 1500 g and assessed at 5 years of age, echocardiography showed a thickened interventricular septum and a decreased left ventricular end-diastolic diameter in both SGA- and AGA-born children. They also had a higher systolic blood pressure than the reference. Laser-Doppler flowmetry showed different endothelium-dependent and -independent vasodilation responses in the AGA children compared to those of the controls. SGA was not associated with cardiovascular abnormalities. Auditory event-related potentials (AERPs) were recorded using an oddball paradigm with frequency deviants (standard tone 500 Hz, deviant tone 750 Hz with 10% probability). At term, the P350 was smaller in SGA and AGA infants than in controls. At 12 months, the automatic change-detection peak (mismatch negativity, MMN) was observed in the controls; the preterm infants, however, showed a difference positivity that correlated with their neurodevelopment scores. At 5 years of age, the P1 deflection, which reflects primary auditory processing, was smaller, and the MMN larger, in the preterm than in the control children.
Even with a challenging paradigm or a distraction paradigm, P1 was smaller in the preterm than in the control children. The SGA and AGA children showed similar AERP responses. Prematurity is a major risk factor for abnormal brain development. Preterm children showed signs of cardiovascular abnormality, suggesting that prematurity per se may carry a risk for later morbidity. The small positive amplitudes in AERPs suggest persistently altered auditory processing in preterm infants.
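The oddball paradigm described above (500 Hz standard, 750 Hz deviant, 10% probability) can be sketched as a simple stimulus-sequence generator. This is an illustrative sketch only: the trial count, the random seed, and the constraint that deviants never occur back-to-back are common MMN-paradigm conventions assumed here, not details taken from the abstract.

```python
# Hedged sketch of an auditory oddball stimulus sequence: 500 Hz standards,
# 750 Hz deviants at ~10% probability. The no-consecutive-deviants rule is
# an added assumption, typical of MMN designs but not stated in the source.
import random

def oddball_sequence(n_trials=500, standard=500, deviant=750,
                     p_deviant=0.10, seed=0):
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        is_deviant = rng.random() < p_deviant
        if is_deviant and seq and seq[-1] == deviant:
            is_deviant = False          # avoid back-to-back deviants
        seq.append(deviant if is_deviant else standard)
    return seq

seq = oddball_sequence()
```

Each entry is the tone frequency (Hz) for one trial; in a real experiment each would be rendered as a brief tone pip and presented at a fixed stimulus-onset asynchrony.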
Abstract:
Low-frequency sounds are advantageous for long-range acoustic signal transmission, but for small animals they constitute a challenge for signal detection and localization. The efficient detection of sound in insects is enhanced by mechanical resonance either in the tracheal or tympanal system before subsequent neuronal amplification. Making small structures resonant at low sound frequencies poses challenges for insects and has not been adequately studied. Similarly, detecting the direction of long-wavelength sound using interaural signal amplitude and/or phase differences is difficult for small animals. Pseudophylline bushcrickets predominantly call at high, often ultrasonic frequencies, but a few paleotropical species use lower frequencies. We investigated the mechanical frequency tuning of the tympana of one such species, Onomarchus uninotatus, a large bushcricket that produces a narrow-bandwidth call at an unusually low carrier frequency of 3.2 kHz. Onomarchus uninotatus, like most bushcrickets, has two large tympanal membranes on each fore-tibia. We found that both these membranes vibrate like hinged flaps anchored at the dorsal wall and do not show higher modes of vibration in the frequency range investigated (1.5-20 kHz). The anterior tympanal membrane acts as a low-pass filter, attenuating sounds at frequencies above 3.5 kHz, in contrast to the high-pass filter characteristic of other bushcricket tympana. Responses to higher frequencies are partitioned to the posterior tympanal membrane, which shows maximal sensitivity at several broad frequency ranges, peaking at 3.1, 7.4 and 14.4 kHz. This partitioning between the two tympanal membranes constitutes an unusual feature of peripheral auditory processing in insects. The complex tracheal shape of O. uninotatus also deviates from the known tube or horn shapes associated with simple band-pass or high-pass amplification of tracheal input to the tympana.
Interestingly, while the anterior tympanal membrane shows directional sensitivity at conspecific call frequencies, the posterior tympanal membrane does not, instead showing directionality at higher frequencies.
Abstract:
Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is, however, a discord: current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.
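The nesting of time-scales described above can be made concrete with a cascaded envelope extraction: rectify and low-pass a signal to expose modulation slower than a given time-scale, then repeat on the result to expose the next slower scale. To be clear, this is a generic illustration and not the statistical model developed in this work; the synthetic signal, modulation rates, and filter cutoffs are all assumptions chosen for demonstration.

```python
# Illustrative sketch: a "speech-like" signal with a ~1 ms carrier, a ~100 ms
# (phoneme-like) modulator, and a ~1 s (sentence-like) modulator, decomposed
# by cascaded rectification + low-pass filtering. All parameters are assumed.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 16000.0
t = np.arange(0, 2.0, 1.0 / fs)

slow = 1.0 + 0.8 * np.sin(2 * np.pi * 0.5 * t)   # ~1 s scale
fast = 1.0 + 0.8 * np.sin(2 * np.pi * 8.0 * t)   # ~100 ms scale
x = slow * fast * np.sin(2 * np.pi * 1000 * t)   # ~1 ms carrier

def envelope(sig, cutoff_hz):
    # Rectify, then zero-phase low-pass to keep only modulation
    # slower than the given cutoff.
    sos = butter(2, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, np.abs(sig))

env_fast = envelope(x, 30.0)         # retains both modulators
env_slow = envelope(env_fast, 2.0)   # retains only the slow modulator
```

Running the cascade recovers the two modulators at their respective scales; the abstract's point is that a principled statistical model should capture all such scales jointly rather than via ad hoc staged filtering like this.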
Abstract:
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with the timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm that successfully infers neural information flow networks.