906 results for Auditory masking
Abstract:
How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 - F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints. © 2014 The Author(s).
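The competitor construction described above (inversion about the geometric mean, and scaling the depth of frequency variation) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the study's synthesis code; the function names and the toy contour values are hypothetical.

```python
import math

def scale_depth(contour_hz, depth=0.5):
    # Scale a formant contour's depth of frequency variation about its
    # geometric mean: depth=1.0 leaves it unchanged, depth=0.0 flattens it
    # to a constant at the geometric mean.
    logs = [math.log(f) for f in contour_hz]
    gm = sum(logs) / len(logs)            # log of the geometric mean
    return [math.exp(gm + depth * (x - gm)) for x in logs]

def invert_contour(contour_hz):
    # Mirror the contour about its geometric mean (f' = gm**2 / f),
    # giving the inverted, plausibly speech-like competitor.
    logs = [math.log(f) for f in contour_hz]
    gm = sum(logs) / len(logs)
    return [math.exp(2 * gm - x) for x in logs]

# Toy F2 contour in Hz (illustrative values, not from the study)
f2 = [1200.0, 1500.0, 1800.0, 1500.0]
f2c_inverted = invert_contour(f2)       # 100%-depth inverted competitor
f2_half_depth = scale_depth(f2, 0.5)    # 50%-depth version of the target
```

Note that both operations preserve the geometric mean of the contour, so the competitor occupies the same average log-frequency region as the original F2.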
Abstract:
Three experiments investigated the dynamics of auditory stream segregation. Experiment 1 used a 2.0-s constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence reduced reported test-sequence segregation substantially. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets buildup, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only 3 tone cycles - this contrasts with the more gradual build-up typically observed for alternating sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ∼10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an on-going, pre-established stream and that a deviant tone may reduce segregation by disrupting this capture. © 2013 Acoustical Society of America.
Abstract:
Recent research suggests that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This idea was explored using a method that ensures interference cannot occur through energetic masking. Three-formant (F1+F2+F3) analogues of natural sentences were synthesized using a monotonous periodic source. Target formants were presented monaurally, with the target ear assigned randomly on each trial. A competitor for F2 (F2C) was presented contralaterally; listeners must reject F2C to optimize recognition. In experiment 1, F2Cs with various frequency and amplitude contours were used. F2Cs with time-varying frequency contours were effective competitors; constant-frequency F2Cs had far less impact. To a lesser extent, amplitude contour also influenced competitor impact; this effect was additive. In experiment 2, F2Cs were created by inverting the F2 frequency contour about its geometric mean and varying its depth of variation over a range from constant to twice the original (0%-200%). The impact on intelligibility was least for constant F2Cs and increased up to ∼100% depth, but little thereafter. The effect of an extraneous formant depends primarily on its frequency contour; interference increases as the depth of variation is increased until the range exceeds that typical for F2 in natural speech.
Abstract:
Auditory Training (AT) describes a regimen of varied listening exercises designed to improve an individual's ability to perceive speech. The theory of AT rests on brain plasticity: the capacity of neurones in the central auditory system (CAS) to alter their structure and function in response to auditory stimulation. The practice of repeatedly listening to the speech sounds included in AT exercises is believed to drive the development of more efficient neuronal pathways, thereby improving auditory processing and speech discrimination. This critical review aims to assess whether AT can improve speech discrimination in adults with mild-to-moderate sensorineural hearing loss (SNHL). The majority of patients attending Audiology services are adults with presbyacusis, so it is important to evaluate evidence of any treatment effect of AT in aural rehabilitation. Ideally, this review would appraise evidence of the neurophysiological effects of AT so as to verify whether it induces change in the CAS. However, in the absence of such studies on this patient group, the outcome measure of speech discrimination, as a behavioural indicator of treatment effect, is used instead. A review of available research was used to inform an argument for or against using AT in rehabilitative clinical practice. Six studies were identified; although the preliminary evidence indicates an improvement gained from a range of AT paradigms, the treatment effect size was modest and there remains a lack of large-sample randomised controlled trials (RCTs). Future investigation into the efficacy of AT should employ neurophysiological studies using auditory evoked potentials in hearing-impaired adults in order to explore the effects of AT on the CAS.
Abstract:
Recent research suggests that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This idea was explored using a method that ensures interference occurs only through informational masking. Three-formant analogues of sentences were synthesized using a monotonous periodic source (F0 = 140 Hz). Target formants were presented monaurally; the target ear was assigned randomly on each trial. A competitor for F2 (F2C) was presented contralaterally; listeners must reject F2C to optimize recognition. In experiment 1, F2Cs with various frequency and amplitude contours were used. F2Cs with time-varying frequency contours were effective competitors; constant-frequency F2Cs had far less impact. Amplitude contour also influenced competitor impact; this effect was additive. In experiment 2, F2Cs were created by inverting the F2 frequency contour about its geometric mean and varying its depth of variation over a range from constant to twice the original (0–200%). The impact on intelligibility was least for constant F2Cs and increased up to ~100% depth, but little thereafter. The effect of an extraneous formant depends primarily on its frequency contour; interference increases as the depth of variation is increased until the range exceeds that typical for F2 in natural speech.
Abstract:
This paper explores the design, development and evaluation of a novel real-time auditory display system for accelerated racing driver skills acquisition. The auditory feedback provides concurrent sensory augmentation and performance feedback using a novel target matching design. Real-time, dynamic, tonal audio feedback representing lateral G-force (a proxy for tire slip) is delivered to one ear whilst a target lateral G-force value representing the ‘limit’ of the car, to which the driver aims to drive, is panned to the driver’s other ear; tonal match across both ears signifies that the ‘limit’ has been reached. An evaluation approach was established to measure the efficacy of the audio feedback in terms of performance, workload and drivers’ assessment of self-efficacy. A preliminary human subject study was conducted in a driving simulator environment. Initial results are encouraging, indicating that there is potential for performance gain and driver confidence enhancement based on the audio feedback.
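The two-ear target-matching design described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the base frequency, the semitones-per-G mapping, and the function names are all assumptions.

```python
def g_to_frequency(lateral_g, f_base=220.0, semitones_per_g=12.0):
    # Map lateral G-force onto a tone frequency using a musical (log) scale,
    # so equal G differences are heard as equal pitch intervals.
    return f_base * 2.0 ** (lateral_g * semitones_per_g / 12.0)

def feedback_tones(current_g, limit_g):
    # One ear receives the live feedback tone for the current lateral G;
    # the other ear receives a fixed target tone for the car's 'limit'.
    # The two pitches match exactly when the driver reaches the limit.
    return g_to_frequency(current_g), g_to_frequency(limit_g)

live, target = feedback_tones(current_g=1.2, limit_g=1.2)  # at the limit
```

Under these (assumed) parameters, one additional G raises the feedback tone by one octave, which keeps the pitch difference between the two ears easy to judge.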
Abstract:
An estimated 30% of individuals with autism spectrum disorders (ASD) remain minimally verbal into late childhood, but research on cognition and brain function in ASD focuses almost exclusively on those with good or only moderately impaired language. Here we present a case study investigating auditory processing of GM, a nonverbal child with ASD and cerebral palsy. At the age of 8 years, GM was tested using magnetoencephalography (MEG) whilst passively listening to speech sounds and complex tones. Where typically developing children and verbal autistic children all demonstrated similar brain responses to speech and nonspeech sounds, GM produced much stronger responses to nonspeech than speech, particularly in the 65–165 ms (M50/M100) time window post-stimulus onset. GM was retested aged 10 years using electroencephalography (EEG) whilst passively listening to pure tone stimuli. Consistent with her MEG response to complex tones, GM showed an unusually early and strong response to pure tones in her EEG responses. The consistency of the MEG and EEG data in this single case study demonstrate both the potential and the feasibility of these methods in the study of minimally verbal children with ASD. Further research is required to determine whether GM's atypical auditory responses are characteristic of other minimally verbal children with ASD or of other individuals with cerebral palsy.
Abstract:
Auditory sensory gating (ASG) is the ability of individuals to suppress incoming irrelevant sensory input, indexed by the evoked response to paired auditory stimuli. ASG is impaired in psychopathology such as schizophrenia, in which it has been proposed as a putative endophenotype. This study aims to characterise the electrophysiological properties of the phenomenon using MEG in the time and frequency domains, and to localise the putative networks involved in the process at both sensor and source level. We also investigated the relationship between ASG measures and personality profiles in healthy participants, in light of its candidate endophenotype role in psychiatric disorders. Auditory evoked magnetic fields were recorded in twenty-seven healthy participants using the P50 'paired-click' paradigm, with clicks presented in pairs (conditioning stimulus S1, testing stimulus S2) at 80 dB, separated by 250 ms, with an inter-trial interval of 7-10 seconds. The gating ratio in healthy adults ranged from 0.5 to 0.8, suggesting a dimensional nature of P50 ASG. The brain regions active during this process were the bilateral superior temporal gyrus (STG) and the bilateral inferior frontal gyrus (IFG); activation was significantly stronger in the IFG during S2 than during S1 (p<0.05). Measures of effective connectivity between these regions using DCM modelling revealed the role of the frontal cortex in modulating ASG, as suggested by intracranial studies, indicating a major role of inhibitory interneuron connections. Findings from this study identified a unique event-related oscillatory pattern for P50 ASG, with alpha (STG)-beta (IFG) desynchronization and an increase in cortical oscillatory gamma power (IFG) during the S2 condition compared with S1. These findings show that the main generator of the P50 response lies within the temporal lobe and that inhibitory interneurons and gamma oscillations in the frontal cortex contribute substantially to sensory gating.
Our findings also show that ASG is a predictor of personality profiles (introvert vs. extrovert dimension).
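The P50 suppression measure reported above is conventionally computed as an S2/S1 amplitude ratio. A minimal sketch (the argument names and example values are illustrative, not data from the study):

```python
def gating_ratio(s1_amplitude, s2_amplitude):
    # P50 sensory gating ratio: response to the testing click (S2)
    # divided by the response to the conditioning click (S1).
    # Lower values mean stronger suppression, i.e. better gating.
    return s2_amplitude / s1_amplitude

# Example: an S2 response 60% the size of the S1 response
ratio = gating_ratio(s1_amplitude=10.0, s2_amplitude=6.0)
in_reported_healthy_range = 0.5 <= ratio <= 0.8  # range observed in this study
```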
Abstract:
This study investigated the effects of augmented prenatal auditory stimulation on postnatal visual responsivity and neural organization in bobwhite quail (Colinus virginianus). I delivered conspecific embryonic vocalizations before, during, or after the development of a multisensory midbrain audiovisual area, the optic tectum. Postnatal simultaneous choice tests revealed that hatchlings receiving augmented auditory stimulation as embryos during optic tectum development failed to show species-typical visual preferences for a conspecific maternal hen 72 hours after hatching. Auditory simultaneous choice tests showed no deficits in auditory function in any of the groups, indicating that the deficits were specific to visual function. ZENK protein expression confirmed differences in the amount of neural plasticity in multiple neuroanatomical regions of birds receiving stimulation during optic tectum development, compared with unmanipulated birds. The results of these experiments support the notion that the timing of augmented prenatal auditory stimulation relative to optic tectum development can have an enduring impact on postnatal perceptual organization.
Abstract:
Recent studies have established that yolk hormones of maternal origin have significant effects on the physiology and behavior of offspring in birds. Herrington (2012) demonstrated that an elevation of progesterone in yolk elevates emotional reactivity in bobwhite quail neonates. Chicks that hatched from progesterone-treated eggs displayed increased latency in tonic immobility and did not emerge as quickly from a covered location into an open field compared with control groups. For the present study, three experimental groups were formed: chicks hatched from eggs with artificially elevated progesterone (P), chicks hatched from an oil-vehicle control group (V), and chicks hatched from a non-manipulated control group (C). Experiment 1 examined levels of progesterone in bobwhite quail egg yolk from prenatal day 1 to prenatal day 17 using High Performance Liquid Chromatography/tandem Mass Spectrometry (HPLC/MS). In Experiment 2, bobwhite quail embryos were passively exposed to an individual maternal assembly call for 24 hours prior to hatching. Chicks were then tested individually for their preference between the familiarized call and a novel call at 24 and 48 hours after hatching. In Experiment 3, newly hatched chicks were exposed to an individual maternal assembly call for 24 hours. Chicks were then tested for their preference for the familiarized call at 24 and 48 hours after hatching. Results of Experiment 1 showed that yolk progesterone levels were significantly elevated in treated eggs and persisted longer into prenatal development than in the two control groups. Results from Experiment 2 indicated that chicks from the P group failed to demonstrate a preference for the familiar bobwhite maternal assembly call at 24 or 48 hours after hatching following 24 hours of prenatal exposure. In contrast, chicks from the C and V groups demonstrated a significant preference for the familiarized call.
In Experiment 3, chicks from the P group showed an enhanced preference for the familiarized bobwhite maternal call compared with chicks from the C and V groups at 24 and 48 hours after hatching. The results of these experiments suggest that elevated maternal yolk hormone levels in pre-incubated bobwhite quail eggs can influence auditory perceptual learning in embryos and neonates.
Abstract:
The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared with a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed as well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in the localisation of auditory evoked responses.
Abstract:
Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities, such as visual and auditory; combining multiple stimuli of the same modality, such as two concurrent auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered via brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory localization, a problem with implications for the general problem in learning of how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for studying stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
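Entrainment of spikes to a tagged AM rate, as used in the paragraph above, is commonly quantified with vector strength. The sketch below is a standard formulation of that measure under stated assumptions, not the dissertation's analysis code; the spike times are illustrative.

```python
import cmath
import math

def vector_strength(spike_times_s, mod_freq_hz):
    # Vector strength of spike times relative to one AM 'tag' frequency:
    # each spike becomes a unit vector at its phase within the modulation
    # cycle; 1.0 = perfect phase locking, ~0.0 = no relation.
    if not spike_times_s:
        return 0.0
    unit_vectors = (cmath.exp(2j * math.pi * mod_freq_hz * t)
                    for t in spike_times_s)
    return abs(sum(unit_vectors)) / len(spike_times_s)

# Spikes locked to a 10-Hz modulation entrain perfectly...
locked = vector_strength([0.0, 0.1, 0.2, 0.3], 10.0)
# ...while spikes spread evenly across the 10-Hz cycle do not.
unlocked = vector_strength([0.0, 0.025, 0.05, 0.075], 10.0)
```

Comparing vector strength at each sound's tag frequency then indicates which source a neuron's spikes follow in a multi-sound scene.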
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
The development of species-typical perceptual preferences has been shown to depend on a variety of socially and ecologically derived sensory stimulation during both the pre- and postnatal periods. The prominent mechanism behind the development of these seemingly innate tendencies in young organisms has been hypothesized to be a domain-general, pan-sensory selectivity process referred to as perceptual narrowing, whereby regularly experienced sensory stimuli are honed in upon, while the ability to discriminate effectively between atypical or unfamiliar sensory stimulation is lost. Previous work with precocial birds has succeeded in preventing the development of species-typical perceptual preferences by denying the organism typical levels of social and/or self-produced stimulation. The current series of experiments explored the mechanism of perceptual narrowing to assess the malleability of a species-typical auditory preference in avian embryos. By providing a variety of unimodal and bimodal presentations of mixed-species vocalizations at the onset of prenatal auditory function, the project aimed to 1) keep the perceptual window from narrowing, thereby interfering with the development of a species-typical auditory preference, 2) investigate how long differential prenatal stimulation can keep the perceptual window open postnatally, 3) explore how prenatal auditory enrichment affected preferences for novelty, and 4) assess whether prenatal auditory perceptual narrowing is affected by modality-specific or amodal stimulus properties during early development. Results indicated that prenatal auditory enrichment significantly interferes with the emergence of a species-typical auditory preference and increases openness to novelty, at least temporarily.
After the birds accrued postnatal experience in an environment rich with species-typical auditory and multisensory cues, the effect of prenatal auditory enrichment was found to fade rapidly. Prenatal auditory enrichment with extraneous non-synchronous light exposure was shown both to keep the perceptual narrowing window open and to impede learning in the postnatal environment following hatching. Results are discussed in light of the role experience plays in perceptual narrowing during the perinatal period.