980 results for auditory system


Relevance: 70.00%

Abstract:

Auditory Training (AT) describes a regimen of varied listening exercises designed to improve an individual's ability to perceive speech. The theory of AT is based on brain plasticity: the capacity of neurones in the central auditory system (CAS) to alter their structure and function in response to auditory stimulation. The practice of repeatedly listening to the speech sounds included in AT exercises is believed to drive the development of more efficient neuronal pathways, thereby improving auditory processing and speech discrimination. This critical review aims to assess whether AT can improve speech discrimination in adults with mild-to-moderate sensorineural hearing loss (SNHL). The majority of patients attending Audiology services are adults with presbyacusis, and it is therefore important to evaluate evidence of any treatment effect of AT in aural rehabilitation. Ideally this review would appraise evidence of neurophysiological effects of AT so as to verify whether it does induce change in the CAS. However, in the absence of such studies on this patient group, the outcome measure of speech discrimination, as a behavioural indicator of treatment effect, is used instead. A review of available research was used to inform an argument for or against using AT in rehabilitative clinical practice. Six studies were identified; although the preliminary evidence indicates an improvement gained from a range of AT paradigms, the treatment effect size was modest and there remains a lack of large-sample randomised controlled trials (RCTs). Future investigation into the efficacy of AT should employ neurophysiological studies using auditory evoked potentials in hearing-impaired adults in order to explore effects of AT on the CAS.

Relevance: 70.00%

Abstract:

Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.

Relevance: 70.00%

Abstract:

Vocal differentiation is widely documented in birds and mammals but has been poorly investigated in other vertebrates, including fish, which represent the oldest extant vertebrate group. Neural circuitry controlling vocal behaviour is thought to have evolved from conserved brain areas that originated in fish, making this taxon key to understanding the evolution and development of the vertebrate vocal-auditory systems. This study examines ontogenetic changes in the vocal repertoire and whether vocal differentiation parallels auditory development in the Lusitanian toadfish Halobatrachus didactylus (Batrachoididae). This species exhibits a complex acoustic repertoire and is vocally active during early development. Vocalisations were recorded during social interactions for four size groups (fry: <2 cm; small juveniles: 2-4 cm; large juveniles: 5-7 cm; adults: >25 cm, standard length). Auditory sensitivity of juveniles and adults was determined from evoked potentials recorded from the inner ear saccule in response to pure tones of 75-945 Hz. We show an ontogenetic increase in the vocal repertoire, from simple broadband-pulsed 'grunts' that later differentiate into four distinct vocalisations, including low-frequency amplitude-modulated 'boatwhistles'. Whereas fry emitted mostly single grunts, large juveniles exhibited vocalisations similar to the adult vocal repertoire. Saccular sensitivity revealed a three-fold enhancement at most frequencies tested from small to large juveniles; however, large juveniles were similar in sensitivity to adults. We provide the first clear evidence of ontogenetic vocal differentiation in fish, as previously described for higher vertebrates. Our results suggest a parallel development between the vocal motor pathway and the peripheral auditory system for acoustic social communication in fish.

Relevance: 60.00%

Abstract:

The Lingodroids are a pair of mobile robots that evolve a language for places and for relationships between places (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places, even for places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them and, later, when the doors are opened, use those representations to perform goal-directed behavior.

Relevance: 60.00%

Abstract:

The sensory systems of the New Zealand kiwi appear to be uniquely adapted to occupy a nocturnal ground-dwelling niche. In addition to well-developed tactile and olfactory systems, the auditory system shows specializations of the ear, which are maintained along the central nervous system. Here, we provide a detailed description of the auditory nerve, hair cells, and stereovillar bundle orientation of the hair cells in the North Island brown kiwi. The auditory nerve of the kiwi contained about 8,000 fibers. Using the number of hair cells and innervating nerve fibers to calculate a ratio of average innervation density showed that the afferent innervation ratio in kiwi was denser than in most other birds examined. The average diameters of cochlear afferent axons in kiwi showed the typical gradient across the tonotopic axis. The kiwi basilar papilla showed a clear differentiation of tall and short hair cells. The proportion of short hair cells was higher than in the emu and likely reflects a bias towards higher frequencies represented on the kiwi basilar papilla. The orientation of the stereovillar bundles in the kiwi basilar papilla showed a pattern similar to that in most other birds but was most similar to that of the emu. Overall, many features of the auditory nerve, hair cells, and stereovillar bundle orientation in the kiwi are typical of most birds examined. Some features of the kiwi auditory system do, however, support a high-frequency specialization, specifically the innervation density and generally small size of hair-cell somata, whereas others showed the presumed ancestral condition similar to that found in the emu.

Relevance: 60.00%

Abstract:

Pitch discrimination is a fundamental property of the human auditory system. Our understanding of pitch-discrimination mechanisms is important from both theoretical and clinical perspectives. The discrimination of spectrally complex sounds is crucial in the processing of music and speech. Current methods of cognitive neuroscience can track the brain processes underlying sound processing either with precise temporal (EEG and MEG) or spatial resolution (PET and fMRI). A combination of different techniques is therefore required in contemporary auditory research. One of the problems in comparing the EEG/MEG and fMRI methods, however, is the fMRI acoustic noise. In the present thesis, EEG and MEG were used in combination with behavioral techniques, first, to define the ERP correlates of automatic pitch discrimination across a wide frequency range in adults and neonates and, second, to determine the effect of recorded acoustic fMRI noise on those adult ERP and ERF correlates during passive and active pitch discrimination. Pure tones and complex 3-harmonic sounds served as stimuli in the oddball and matching-to-sample paradigms. The results suggest that pitch discrimination in adults, as reflected by MMN latency, is most accurate in the 1000-2000 Hz frequency range, and that pitch discrimination is facilitated further by adding harmonics to the fundamental frequency. Newborn infants are able to discriminate a 20% frequency change in the 250-4000 Hz frequency range, whereas the discrimination of a 5% frequency change was unconfirmed. Furthermore, the effect of the fMRI gradient noise on the automatic processing of pitch change was more prominent for tones with frequencies exceeding 500 Hz, overlapping with the spectral maximum of the noise. When the fundamental frequency of the tones was lower than the spectral maximum of the noise, fMRI noise had no effect on MMN and P3a, whereas the noise delayed and suppressed N1 and exogenous N2. Noise also suppressed the N1 amplitude in a matching-to-sample working memory task. However, the task-related difference observed in the N1 component, suggesting a functional dissociation between the processing of spatial and non-spatial auditory information, was partially preserved in the noise condition. Noise hampered feature coding mechanisms more than it hampered the mechanisms of change detection, involuntary attention, and the segregation of the spatial and non-spatial domains of working memory. The data presented in the thesis can be used to develop clinical ERP-based frequency-discrimination protocols and combined EEG and fMRI experimental paradigms.

Relevance: 60.00%

Abstract:

Crickets have two tympanal membranes on the tibiae of each foreleg. Among several field cricket species of the genus Gryllus (Gryllinae), the posterior tympanal membrane (PTM) is significantly larger than the anterior membrane (ATM). Laser Doppler vibrometric measurements have shown that the smaller ATM does not respond as much as the PTM to sound. Hence the PTM has been suggested to be the principal tympanal acoustic input to the auditory organ. In tree crickets (Oecanthinae), the ATM is slightly larger than the PTM. Both membranes are structurally complex, presenting a series of transverse folds on their surface, which are more pronounced on the ATM than on the PTM. The mechanical response of both membranes to acoustic stimulation was investigated using microscanning laser Doppler vibrometry. Only a small portion of the membrane surface deflects in response to sound. Both membranes exhibit similar frequency responses, and move out of phase with each other, producing compressions and rarefactions of the tracheal volume backing the tympanum. Therefore, unlike field crickets, tree crickets may have four instead of two functional tympanal membranes. This is interesting in the context of the outstanding question of the role of spiracular inputs in the auditory system of tree crickets.

Relevance: 60.00%

Abstract:

Synchronising bushcricket males achieve synchrony by delaying their chirps in response to calling neighbours. In multi-male choruses, males that delay chirps in response to all their neighbours would remain silent most of the time and be unable to attract mates. This problem could be overcome if the afferent auditory system exhibited selective attention, and thus a male interacted only with a subset of neighbours. We investigated whether individuals of the bushcricket genus Mecopoda restricted their attention to louder chirps neurophysiologically, behaviourally and through spacing. We found that louder leading chirps were preferentially represented in the omega neuron but the representation of softer following chirps was not completely abolished. Following chirps that were 20 dB louder than leading chirps were better represented than leading chirps. During acoustic interactions, males synchronised with leading chirps even when the following chirps were 20 dB louder. Males did not restrict their attention to louder chirps during interactions but were affected by all chirps above a particular threshold. In the field, we found that males on average had only one or two neighbours whose calls were above this threshold. Selective attention is thus achieved in this bushcricket through spacing rather than neurophysiological filtering of softer signals.

Relevance: 60.00%

Abstract:

Animals communicate in non-ideal and noisy conditions. The primary method they use to improve communication efficiency is sender-receiver matching: the receiver's sensory mechanism filters the impinging signal based on the expected signal. In the context of acoustic communication in crickets, such a match is made in the frequency domain. The males broadcast a mate attraction signal, the calling song, in a narrow frequency band centred on the carrier frequency (CF), and the females are most sensitive to sound close to this frequency. In tree crickets, however, the CF changes with temperature. The mechanisms used by female tree crickets to accommodate this change in CF were investigated at the behavioural and biomechanical level. At the behavioural level, female tree crickets were broadly tuned and responded equally to CFs produced within the naturally occurring range of temperatures (18 to 27 degrees C). To allow such a broad response, however, the transduction mechanisms that convert sound into mechanical and then neural signals must also have a broad response. The tympana of the female tree crickets exhibited a frequency response that was even broader than suggested by the behaviour. Their tympana vibrate with equal amplitude to frequencies spanning nearly an order of magnitude. Such a flat frequency response is unusual in biological systems and cannot be modelled as a simple mechanical system. This feature of the tree cricket auditory system not only has interesting implications for mate choice and species isolation but may also prove exciting for bio-mimetic applications such as the design of miniature low frequency microphones.

Relevance: 60.00%

Abstract:

Binaural hearing studies show that the auditory system uses the phase-difference information in the auditory stimuli for localization of a sound source. Motivated by this finding, we present a method for demodulation of amplitude-modulated-frequency-modulated (AM-FM) signals using a signal and its arbitrary phase-shifted version. The demodulation is achieved using two allpass filters, whose impulse responses are related through the fractional Hilbert transform (FrHT). The allpass filters are obtained by cosine modulation of a zero-phase flat-top prototype halfband lowpass filter. The outputs of the filters are combined to construct an analytic signal (AS) from which the AM and FM are estimated. We show that, under certain assumptions on the signal and the filter structures, the AM and FM can be obtained exactly. The AM-FM calculations are based on the quasi-eigenfunction approximation. We then extend the concept to the demodulation of multicomponent signals using uniform and non-uniform cosine-modulated filterbank (FB) structures consisting of flat bandpass filters, including the uniform cosine-modulated, equivalent rectangular bandwidth (ERB), and constant-Q filterbanks. We validate the theoretical calculations on synthesized AM-FM signals and compare the performance in the presence of noise with three other multiband demodulation techniques, namely, the Teager-energy-based approach, Gabor's AS approach, and the linear transduction filter approach. We also show demodulation results for real signals.
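The analytic-signal step at the core of this method can be sketched in a few lines. The sketch below uses SciPy's standard FFT-based Hilbert transform rather than the paper's FrHT allpass pair and cosine-modulated filterbank, and the synthetic single-component test signal is an illustrative assumption; it shows the general AM/FM estimation principle, not the proposed implementation.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(0, 1, 1 / fs)

# Synthetic single-component AM-FM signal: slow envelope, sinusoidal FM
am = 1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)        # amplitude modulation
fi = 500 + 50 * np.sin(2 * np.pi * 5 * t)         # instantaneous frequency (Hz)
x = am * np.cos(2 * np.pi * np.cumsum(fi) / fs)

# Analytic signal: its magnitude estimates the AM, and the derivative of
# its unwrapped phase estimates the FM (quasi-eigenfunction regime)
z = hilbert(x)
am_est = np.abs(z)
fi_est = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)

# Compare estimates with ground truth away from FFT boundary artefacts
mid = slice(fs // 10, -fs // 10)
am_err = np.max(np.abs(am_est[mid] - am[mid]))
fi_err = np.max(np.abs(fi_est[mid] - fi[:-1][mid]))
print(am_err, fi_err)
```

Because the carrier (500 Hz) is well separated from the modulation rates, the analytic signal recovers both modulators closely; when components overlap, a bandpass filterbank front end such as the one proposed above becomes necessary.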

Relevance: 60.00%

Abstract:

The aim of this project is the implementation of a data-hiding algorithm for the speech signal using its spectral phase information. When working with speech signals, the most common approach is to use the magnitude spectrum, because it is simple to manipulate and because it is related to perception. In this case, the goal is for the hidden information to be perceptually and statistically undetectable while degrading the signal quality as little as possible, so modifying the magnitude would produce undesired effects. The most effective way to achieve this is therefore to work with the spectral phase, precisely because the human auditory system is less sensitive to phase modifications. This characteristic is exploited to embed the information to be hidden. Finally, the developed technique is evaluated according to different criteria. Through tests in which the values of some parameters are varied, results are obtained for perceptibility, robustness, performance, and capacity, among others, thereby determining the optimal configuration of the algorithm.
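The core idea, embedding data in the spectral phase while leaving the magnitude untouched, can be illustrated with a toy sketch. This is not the project's actual algorithm: the random stand-in frame, the choice of carrier bins, and the simple ±π/2 phase alphabet are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)          # stand-in for one speech frame
bits = [1, 0, 1, 1, 0, 0, 1, 0]
bins = np.arange(100, 100 + len(bits))     # arbitrary mid-band carrier bins

X = np.fft.rfft(frame)
mag = np.abs(X)
phase = np.angle(X)

# Embed: quantise the phase of each carrier bin to +pi/2 for a 1 and
# -pi/2 for a 0, leaving the magnitude spectrum untouched
for b, bit in zip(bins, bits):
    phase[b] = np.pi / 2 if bit else -np.pi / 2
stego = np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))

# Extract: recover each bit from the phase sign at its carrier bin
Y = np.fft.rfft(stego)
recovered = [1 if np.angle(Y[b]) > 0 else 0 for b in bins]
print(recovered)
```

In a real system the embedding would be shaped to stay perceptually and statistically undetectable (e.g. small, magnitude-dependent phase offsets per frame); the hard quantisation here is only for clarity.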

Relevance: 60.00%

Abstract:

Natural sounds are structured on many time-scales. A typical segment of speech, for example, contains features that span four orders of magnitude: sentences ($\sim 1$ s); phonemes ($\sim 10^{-1}$ s); glottal pulses ($\sim 10^{-2}$ s); and formants ($\sim 10^{-3}$ s). The auditory system uses information from each of these time-scales to solve complicated tasks such as auditory scene analysis [1]. One route toward understanding how auditory processing accomplishes this analysis is to build neuroscience-inspired algorithms which solve similar tasks and to compare the properties of these algorithms with properties of auditory processing. There is however a discord: Current machine-audition algorithms largely concentrate on the shorter time-scale structures in sounds, and the longer structures are ignored. The reason for this is two-fold. Firstly, it is a difficult technical problem to construct an algorithm that utilises both sorts of information. Secondly, it is computationally demanding to simultaneously process data both at high resolution (to extract short temporal information) and for long duration (to extract long temporal information). The contribution of this work is to develop a new statistical model for natural sounds that captures structure across a wide range of time-scales, and to provide efficient learning and inference algorithms. We demonstrate the success of this approach on a missing data task.
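The nesting of time-scales described above can be made concrete with a toy cascade of envelope extractors. This is a crude stand-in for, not an implementation of, the statistical model the abstract describes; the synthetic "speech-like" signal and the window lengths are illustrative assumptions.

```python
import numpy as np

fs = 16000
t = np.arange(0, 1, 1 / fs)

# Toy signal: a 1 kHz carrier (formant scale) modulated at a
# glottal-pulse-like rate (~100 Hz) and a syllabic rate (~4 Hz)
syllabic = 1.0 + np.sin(2 * np.pi * 4 * t)
glottal = 1.0 + 0.5 * np.sin(2 * np.pi * 100 * t)
x = syllabic * glottal * np.sin(2 * np.pi * 1000 * t)

def envelope(sig, window_s):
    """Crude envelope: rectify, then smooth with a moving average."""
    win = int(window_s * fs)
    return np.convolve(np.abs(sig), np.ones(win) / win, mode="same")

# Cascading envelopes exposes structure at successively longer time-scales
fast = envelope(x, 0.0025)    # ~2.5 ms window: keeps the ~100 Hz modulation
slow = envelope(fast, 0.125)  # ~125 ms window: keeps only the ~4 Hz modulation

# The slow envelope should co-vary with the syllabic modulator
mid = slice(fs // 8, -fs // 8)
corr = np.corrcoef(slow[mid], syllabic[mid])[0, 1]
print(corr)
```

The point of the cascade is the computational tension the abstract names: the fast stage must run at full resolution, while the slow stage only carries information over long durations, which is what a well-designed multi-scale model exploits.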

Relevance: 60.00%

Abstract:

Human listeners can identify vowels regardless of speaker size, although the sound waves for an adult and a child speaking the 'same' vowel would differ enormously. The differences are mainly due to the differences in vocal tract length (VTL) and glottal pulse rate (GPR), which are both related to body size. Automatic speech recognition machines are notoriously bad at understanding children if they have been trained on the speech of an adult. In this paper, we propose that the auditory system adapts its analysis of speech sounds, dynamically and automatically, to the GPR and VTL of the speaker on a syllable-to-syllable basis. We illustrate how this rapid adaptation might be performed with the aid of a computational version of the auditory image model, and we propose that an auditory preprocessor of this form would improve the robustness of speech recognisers.