981 results for Auditory span


Relevance:

20.00%

Publisher:

Abstract:

Peer reviewed

Relevance:

20.00%

Publisher:

Abstract:

The auditory evoked N1m-P2m response complex presents a challenging case for MEG source-modelling, because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods, in particular, can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation; and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed as well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models in localisation of auditory evoked responses.
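The comparison above rests on the standard linearly constrained minimum variance (LCMV) beamformer, which builds a spatial filter from the data covariance and a candidate source's lead field. A minimal sketch of the unit-gain filter, not the study's actual pipeline (the two-sensor lead field and simulated data are invented for illustration):

```python
import numpy as np

def lcmv_weights(leadfield, cov):
    """Unit-gain LCMV beamformer weights for one candidate source:
    w = C^-1 L / (L^T C^-1 L), so that w^T L == 1."""
    cinv = np.linalg.pinv(cov)
    num = cinv @ leadfield
    return num / (leadfield @ num)

# toy example: two sensors, a source seen mostly by sensor 0
rng = np.random.default_rng(0)
L = np.array([1.0, 0.2])                       # invented lead field
noise = rng.standard_normal((2, 1000))
data = noise + np.outer(L, rng.standard_normal(1000))
C = np.cov(data)
w = lcmv_weights(L, C)
```

The unit-gain constraint (`w @ L == 1`) passes the modelled source through unattenuated while the minimum-variance criterion suppresses activity from elsewhere; it is exactly this variance-minimisation step that correlated bilateral sources can defeat, motivating the half-head analysis described above.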

Relevance:

20.00%

Publisher:

Abstract:

Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.

Relevance:

20.00%

Publisher:

Abstract:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities (such as visual and auditory), combining multiple stimuli of the same modality (such as two auditory stimuli), and integrating stimuli from the sensory organs (i.e. the ears) with stimuli delivered through brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
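The ‘guess and check’ hypothesis can be caricatured as a simple error-correction rule, in which each trial's post-saccadic visual feedback pulls the auditory spatial estimate a fixed fraction of the way toward the visual location. This is purely an illustrative model, not the study's analysis; the learning rate and trial count are invented:

```python
def calibrate(offset, trials, lr=0.05, mismatch=6.0):
    """Hypothetical 'guess and check' rule: after each auditory-guided
    saccade, visual feedback (displaced by `mismatch` degrees) shifts
    the auditory estimate a small step toward the visual location."""
    for _ in range(trials):
        error = mismatch - offset   # post-saccadic visual error signal
        offset += lr * error        # incremental recalibration
    return offset

shift = calibrate(0.0, trials=6)    # a handful of exposure trials
```

With these invented parameters the rule produces a partial shift toward the visual cue rather than full adaptation, the same qualitative pattern as the 22-28% shift reported above; the synchrony hypothesis, by contrast, predicts no dependence on post-saccadic feedback at all.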

My next line of research examines how electrical stimulation of the inferior colliculus influences perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. The inferior colliculus is therefore an ideal structure for examining the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for understanding stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
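The stimulation-induced bias can be pictured as a horizontal shift of the logistic psychometric function relating probe frequency to the proportion of "higher" reports, with the sign of the shift tracking whether the site's best frequency lies above or below the reference. The slope and bias values below are invented for illustration, not fitted to the study's data:

```python
import numpy as np

def p_report_higher(probe_hz, ref_hz, slope=0.02, bias_hz=0.0):
    """Logistic psychometric function: probability of judging the probe
    higher than the reference. bias_hz stands in for a hypothetical
    stimulation-induced perceptual shift."""
    return 1.0 / (1.0 + np.exp(-slope * (probe_hz + bias_hz - ref_hz)))

ref = 1000.0
probes = np.linspace(900.0, 1100.0, 5)
baseline = p_report_higher(probes, ref)
stimulated = p_report_higher(probes, ref, bias_hz=50.0)  # BF above ref
```

Under this picture, pairing the probe with stimulation at a site whose best frequency exceeds the reference raises the probability of a "higher" report at every probe frequency, which is the direction of bias the recordings above were tested against.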

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
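The frequency-tagging logic depends on measuring how strongly spikes entrain to each sound's modulation frequency, so that neural activity can be attributed to one source or the other. A standard metric is vector strength; the toy spike trains below are invented for illustration:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Phase-locking of spikes to a modulation frequency: the length of
    the mean resultant of spike phases (1 = perfect locking, 0 = none)."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# one spike per cycle of a 40 Hz modulator over 1 s
locked = np.arange(0, 1, 1 / 40)
vs_at_40 = vector_strength(locked, 40.0)   # near-perfect entrainment
vs_at_33 = vector_strength(locked, 33.0)   # no locking to the other tag
```

Comparing vector strength at each sound's tagged modulation frequency is what allows a single neuron's spikes to be parcelled between the two simultaneous sources in the dual-sound condition.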

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevance:

20.00%

Publisher:

Abstract:

The development of species-typical perceptual preferences has been shown to depend on a variety of socially and ecologically derived sensory stimulation during both the pre- and postnatal periods. The prominent mechanism behind the development of these seemingly innate tendencies in young organisms has been hypothesized to be a domain-general pan-sensory selectivity process referred to as perceptual narrowing, whereby regularly experienced sensory stimuli are honed in upon, while the ability to effectively discriminate between atypical or unfamiliar sensory stimulation is simultaneously lost. Previous work with precocial birds has been successful in preventing the development of species-typical perceptual preferences by denying the organism typical levels of social and/or self-produced stimulation. The current series of experiments explored the mechanism of perceptual narrowing to assess the malleability of a species-typical auditory preference in avian embryos. By providing a variety of different unimodal and bimodal presentations of mixed-species vocalizations at the onset of prenatal auditory function, the following project aimed to 1) keep the perceptual window from narrowing, thereby interfering with the development of a species-typical auditory preference, 2) investigate how long differential prenatal stimulation can keep the perceptual window open postnatally, 3) explore how prenatal auditory enrichment affected preferences for novelty, and 4) assess whether prenatal auditory perceptual narrowing is affected by modality-specific or amodal stimulus properties during early development. Results indicated that prenatal auditory enrichment significantly interferes with the emergence of a species-typical auditory preference and increases openness to novelty, at least temporarily. After accruing postnatal experience in an environment rich with species-typical auditory and multisensory cues, the effect of prenatal auditory enrichment was found to fade rapidly. Prenatal auditory enrichment with extraneous non-synchronous light exposure was shown to both keep the perceptual narrowing window open and impede learning in the postnatal environment following hatching. Results are discussed in light of the role experience plays in perceptual narrowing during the perinatal period.

Relevance:

20.00%

Publisher:

Abstract:

Self-report measures of cognitive problems may have value, but there are indications that scores on such measures are influenced by other factors such as personality. In an online correlational study, 523 non-clinical volunteers completed measures of personality, digit span, and the Prospective and Retrospective Memory Questionnaire. Self-reported prospective and retrospective memory failures were associated positively with neuroticism and negatively with conscientiousness, but not with digit span performance. These findings are consistent with other indications that conscientiousness and neuroticism may underpin self-reports of cognitive problems.
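The analysis reported here amounts to comparing Pearson correlations between self-reported memory failures, trait scores, and digit span performance. A minimal sketch with simulated data (the effect size, seed, and generative model are invented; only the sample size of 523 comes from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

rng = np.random.default_rng(1)
n = 523                                    # sample size from the study
neuroticism = rng.standard_normal(n)
digit_span = rng.standard_normal(n)        # independent of complaints
complaints = 0.4 * neuroticism + rng.standard_normal(n)

r_neuro = pearson_r(complaints, neuroticism)
r_span = pearson_r(complaints, digit_span)
```

In this simulation, as in the study's pattern of results, self-reported failures correlate with the trait measure but not with the objective span measure.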

Relevance:

20.00%

Publisher:

Abstract:

Four Ss were run in a visual span of apprehension experiment to determine whether second choices made following incorrect first responses are at the chance level, as implied by various high threshold models proposed for this situation. The relationships between response biases on first and second choices, and between first choice biases on trials with two or three possible responses, were also examined in terms of Luce's (1959) choice theory. The results were: (a) second choice performance in this task appears to be determined by response bias alone, i.e., second choices were at the chance level; (b) first and second choice response biases were not related according to Luce's choice axiom; and (c) the choice axiom predicted with reasonable accuracy the relationships between first choice response biases corresponding to trials with different numbers of possible response alternatives. © 1967 Psychonomic Society, Inc.
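Luce's (1959) choice axiom, against which the bias relationships above were tested, assigns each response alternative a scale value v and predicts P(a | A) = v(a) / Σ v(b) over the available set A; second choices are then predicted by renormalising over the remaining alternatives. A small illustration with invented scale values:

```python
def luce_p(choice, options, v):
    """Luce's choice rule: P(choice | options) = v(choice) / sum of v
    over the offered options."""
    return v[choice] / sum(v[o] for o in options)

v = {"a": 4.0, "b": 2.0, "c": 1.0}          # invented scale values
p_first_a = luce_p("a", ["a", "b", "c"], v)  # 4/7
# after an incorrect first choice of "a", the axiom predicts the second
# choice by renormalising over the remaining alternatives:
p_second_b = luce_p("b", ["b", "c"], v)      # 2/3
```

It is this renormalisation prediction that result (b) above failed to confirm for second choices, while result (c) supported the axiom's predictions across first-choice sets of different sizes.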

Relevance:

20.00%

Publisher:

Abstract:

The study investigates the acoustic, articulatory and sociophonetic properties of the Swedish /iː/ variant known as 'Viby-i' in 13 speakers of Central Swedish from Stockholm, Gothenburg, Varberg, Jönköping and Katrineholm. The vowel is described in terms of its auditory quality, its acoustic F1 and F2 values, and its tongue configuration. A brief, qualitative description of lip position is also included. Variation in /iː/ production is mapped against five sociolinguistic factors: city, dialectal region, metropolitan vs. urban location, sex and socioeconomic rating. Articulatory data is collected using ultrasound tongue imaging (UTI), for which the study proposes and evaluates a methodology. The study shows that Viby-i varies in auditory strength between speakers, and that strong instances of the vowel are associated with a high F1 and low F2, a trend which becomes more pronounced as the strength of Viby-i increases. The articulation of Viby-i is characterised by a lowered and backed tongue body, sometimes accompanied by a double-bunched tongue shape. The relationship between tongue position and acoustic results appears to be non-linear, suggesting either a measurement error or the influence of additional articulatory factors. Preliminary images of the lips show that Viby-i is produced with a spread but lax lip posture. The lip data also reveals parts of the tongue, which in many speakers appears to be extremely fronted and braced against the lower teeth, or sometimes protruded, when producing Viby-i. No sociophonetic difference is found between speakers from different cities or dialect regions. Metropolitan speakers are found to have an auditorily and acoustically stronger Viby-i than urban speakers, but this pattern is not matched in tongue backing or lowering. Overall the data shows a weak trend towards higher-class females having stronger Viby-i, but these results are tentative due to the limited size and stratification of the sample. 
Further research is needed to fully explore the sociophonetic properties of Viby-i.
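F1 and F2 values of the kind analysed here are conventionally estimated by LPC analysis of the acoustic signal, locating vowel resonances at the pole angles of an all-pole model. A rough, self-contained sketch using autocorrelation LPC via scipy (the synthetic "vowel" and model order are invented, and real formant tracking additionally needs pre-emphasis, frame-by-frame analysis, and bandwidth-based pole filtering):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_resonances(signal, fs, order=4):
    """Estimate resonance frequencies from the pole angles of an LPC
    model fitted by the autocorrelation (Yule-Walker) method."""
    x = signal * np.hamming(len(signal))
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) + order]
    a = solve_toeplitz(r[:-1], r[1:])        # predictor coefficients
    roots = np.roots(np.r_[1.0, -a])         # poles of 1/A(z)
    roots = roots[roots.imag > 0]            # keep positive frequencies
    return np.sort(np.angle(roots) * fs / (2 * np.pi))

fs = 10_000
t = np.arange(0, 0.05, 1 / fs)
# synthetic signal with spectral peaks near 300 Hz and 2200 Hz
vowel = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
freqs = lpc_resonances(vowel, fs)
```

The strong-Viby-i pattern reported above (high F1, low F2) would surface in such an analysis as the first two estimated resonances sitting unusually close together.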

Relevance:

20.00%

Publisher:

Abstract:

Current trends in speech-language pathology focus on early intervention as the preferred tool for promoting the best possible outcomes in children with language disorders. Neuroimaging techniques are being studied as promising tools for flagging at-risk infants. In this study, the auditory brainstem response (ABR) to the syllables /ba/ and /ga/ was examined in 41 infants between 3 and 12 months of age as a possible tool to predict language development in toddlerhood. The MacArthur-Bates Communicative Development Inventory (MCDI) was used to assess language development at 18 months of age. The current study compared the periodicity of the responses to the stop consonants and phase differences between /ba/ and /ga/ in both at-risk and low-risk groups. The study also examined whether there are correlations among ABR measures (periodicity and phase differentiation) and language development. The study found that these measures predict language development at 18 months.
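The phase-differentiation measure compares the phase of the brainstem response to /ba/ against the response to /ga/ within a frequency band. A simplified cross-spectrum sketch (the signals, band, and fixed lag are invented; the actual ABR analysis uses a time-frequency cross-phaseogram over the consonant-vowel transition):

```python
import numpy as np

def phase_difference(resp_a, resp_b, fs, band=(600.0, 1200.0)):
    """Mean phase difference (radians) between two responses within a
    frequency band, from the summed cross-spectrum."""
    fa, fb = np.fft.rfft(resp_a), np.fft.rfft(resp_b)
    freqs = np.fft.rfftfreq(len(resp_a), 1 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    cross = fa[sel] * np.conj(fb[sel])
    return np.angle(np.sum(cross))

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
resp_ba = np.sin(2 * np.pi * 800 * t)
resp_ga = np.sin(2 * np.pi * 800 * t - 0.5)   # lagged by 0.5 rad
dphi = phase_difference(resp_ba, resp_ga, fs)
```

A robust, consistent phase separation between the /ba/ and /ga/ responses is the kind of feature that distinguished the groups and carried predictive value for the 18-month MCDI outcomes.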

Relevance:

20.00%

Publisher:

Abstract:

Vocal differentiation is widely documented in birds and mammals but has been poorly investigated in other vertebrates, including fish, which represent the oldest extant vertebrate group. Neural circuitry controlling vocal behaviour is thought to have evolved from conserved brain areas that originated in fish, making this taxon key to understanding the evolution and development of the vertebrate vocal-auditory systems. This study examines ontogenetic changes in the vocal repertoire and whether vocal differentiation parallels auditory development in the Lusitanian toadfish Halobatrachus didactylus (Batrachoididae). This species exhibits a complex acoustic repertoire and is vocally active during early development. Vocalisations were recorded during social interactions for four size groups (fry: <2 cm; small juveniles: 2-4 cm; large juveniles: 5-7 cm; adults: >25 cm standard length). Auditory sensitivity of juveniles and adults was determined based on evoked potentials recorded from the inner ear saccule in response to pure tones of 75-945 Hz. We show an ontogenetic increment in the vocal repertoire from simple broadband-pulsed 'grunts' that later differentiate into four distinct vocalisations, including low-frequency amplitude-modulated 'boatwhistles'. Whereas fry emitted mostly single grunts, large juveniles exhibited vocalisations similar to the adult vocal repertoire. Saccular sensitivity revealed a three-fold enhancement at most frequencies tested from small to large juveniles; however, large juveniles were similar in sensitivity to adults. We provide the first clear evidence of ontogenetic vocal differentiation in fish, as previously described for higher vertebrates. Our results suggest a parallel development between the vocal motor pathway and the peripheral auditory system for acoustic social communication in fish.