944 results for auditory


Relevance: 20.00%

Abstract:

A model of the auditory periphery assembled from analog network submodels of all the relevant anatomical structures is described. There is bidirectional coupling between networks representing the outer ear, middle ear and cochlea. A simple voltage source representation of the outer hair cells provides level-dependent basilar membrane curves. The networks are translated into efficient computational modules by means of wave digital filtering. A feedback unit regulates the average firing rate at the output of an inner hair cell module via a simplified modelling of the dynamics of the descending paths to the peripheral ear. This leads to a digital model of the entire auditory periphery with applications to both speech and hearing research.
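The feedback regulation described above can be pictured with a toy closed loop: a gain on the inner-hair-cell output is adjusted until the running-average firing rate tracks a set point. This is only a minimal sketch of the idea; the function name, loop structure, and all constants are illustrative assumptions, not the paper's wave-digital implementation.

```python
import numpy as np

def regulate_rate(x, target=0.2, alpha=0.01, beta=0.05):
    """Toy feedback unit: adapt a gain so the running-average rectified
    output approaches `target`. All parameters are illustrative."""
    gain, avg = 1.0, 0.0
    y = np.empty_like(np.asarray(x, dtype=float))
    for n, s in enumerate(x):
        out = max(gain * s, 0.0)                       # rectified IHC-like output
        avg = (1 - alpha) * avg + alpha * out          # running average firing rate
        gain = max(gain - beta * (avg - target), 0.0)  # descending-path feedback
        y[n] = out
    return y, gain
```

For a steady input the loop settles with both the gain and the average output near the target, which is the level-regulation behavior the abstract attributes to the descending paths.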

Relevance: 20.00%

Abstract:

We simultaneously recorded auditory evoked potentials (AEP) from the temporal cortex (TCx), the dorsolateral prefrontal cortex (dPFCx) and the parietal cortex (PCx) in the freely moving rhesus monkey to investigate state-dependent changes of the AEP. AEPs obtained during passive wakefulness, active wakefulness (AW), slow wave sleep and rapid-eye-movement sleep (REM) were compared. Results showed that AEPs from all three cerebral areas were modulated by brain state. However, the AEP amplitudes from the dPFCx and PCx showed significantly greater attenuation than those from the TCx during AW and REM. These results indicate that brain-state modulation of the AEP is not uniform across the three cerebral areas investigated, suggesting that different cerebral areas make differential functional contributions across the sleep-wake cycle. (C) 2002 Elsevier Science Ireland Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

Our previous observations showed that the amplitude of cortical evoked potentials to an irrelevant auditory stimulus (probe), recorded from several different cerebral areas, was differentially modulated by brain state. In the present study, we simultaneously re

Relevance: 20.00%

Abstract:

Simultaneous tone-tone masking in conjunction with envelope-following response (EFR) recording was used to obtain tuning curves in the porpoises Phocoena phocoena and Neophocaena phocaenoides asiaeorientalis. The EFR was evoked by amplitude-modulated probes with a modulation rate of 1000 Hz and carrier frequencies from 22.5 to 140 kHz. The equivalent rectangular quality Q(ERB) of the obtained tuning curves varied from 8.3-8.6 at lower (22.5-32 kHz) probe frequencies to 44.8-47.4 at high (128-140 kHz) frequencies. The Q(ERB) dependence on probe frequency could be approximated by regression lines with a slope of 0.83 to 0.86 on a log-log scale, which corresponds to almost frequency-proportional quality and an almost constant bandwidth of 3-4 kHz. Thus, the frequency representation in the porpoise auditory system is much closer to a constant-bandwidth than to a constant-quality manner. (c) 2006 Acoustical Society of America.
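The near-constant bandwidth follows directly from the endpoints quoted above: dividing each probe frequency by its Q(ERB) gives roughly 3 kHz at both ends of the tested range. A quick check using only numbers from the abstract:

```python
def erb_khz(freq_khz, q_erb):
    """Equivalent rectangular bandwidth (kHz) = centre frequency / quality."""
    return freq_khz / q_erb

low_bw = erb_khz(22.5, 8.3)     # lower end of the tested range, ~2.7 kHz
high_bw = erb_khz(140.0, 47.4)  # upper end of the tested range, ~3.0 kHz
```

A tenfold change in Q over roughly a sixfold change in frequency keeps the absolute bandwidth nearly flat, which is the constant-bandwidth pattern the abstract describes.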

Relevance: 20.00%

Abstract:

Objective. To compare the voice performance of children involved in street labor with that of regular children using perceptual-auditory and acoustic analyses.

Methods. A controlled cross-sectional study was carried out on 7- to 10-year-old children of both genders. Children from both groups lived with their families and attended school regularly; however, child labor was evident in one group and not the other. A total of 200 potentially eligible street children, assisted by the Child Labor Elimination Program (PETI), and 400 regular children were interviewed. Those with any vocal discomfort (106, 53% and 90, 22.5%, respectively) had their voices assessed for resonance, pitch, loudness, speech rate, maximum phonation time, and other acoustic measurements.

Results. A total of 106 street children (study group [SG]) and 90 regular children (control group [CG]) were evaluated. The SG demonstrated higher oral and nasal resonance, reduced loudness, a lower pitch, and a slower speech rate than the CG. The maximum phonation time, fundamental frequency, and upper harmonics were higher in the SG than the CG. Jitter and shimmer were higher in the CG than the SG.

Conclusion. Using perceptual-auditory and acoustic analyses, we determined that there were differences in voice performance between the two groups, with street children having better quality perceptual and acoustic vocal parameters than regular children. We believe that this is due to the procedures and activities performed by the Child Labor Elimination Program (PETI), which helps children to cope with their living conditions.

Relevance: 20.00%

Abstract:

Multiple sound sources often contain harmonics that overlap and may be degraded by environmental noise. The auditory system is capable of teasing apart these sources into distinct mental objects, or streams. Such an "auditory scene analysis" enables the brain to solve the cocktail party problem. A neural network model of auditory scene analysis, called the AIRSTREAM model, is presented to propose how the brain accomplishes this feat. The model clarifies how the frequency components that correspond to a given acoustic source may be coherently grouped together into distinct streams based on pitch and spatial cues. The model also clarifies how multiple streams may be distinguished and separated by the brain. Streams are formed as spectral-pitch resonances that emerge through feedback interactions between frequency-specific spectral representations of a sound source and its pitch. First, the model transforms a sound into a spatial pattern of frequency-specific activation across a spectral stream layer. The sound has multiple parallel representations at this layer. A sound's spectral representation activates a bottom-up filter that is sensitive to harmonics of the sound's pitch. The filter activates a pitch category which, in turn, activates a top-down expectation that allows one voice or instrument to be tracked through a noisy multiple-source environment. Spectral components are suppressed if they do not match harmonics of the top-down expectation that is read out by the selected pitch, thereby allowing another stream to capture these components, as in the "old-plus-new heuristic" of Bregman. Multiple simultaneously occurring spectral-pitch resonances can hereby emerge. These resonance and matching mechanisms are specialized versions of Adaptive Resonance Theory, or ART, which clarifies how pitch representations can self-organize during learning of harmonic bottom-up filters and top-down expectations.
The model also clarifies how spatial location cues can help to disambiguate two sources with similar spectral cues. Data are simulated from psychophysical grouping experiments, such as how a tone sweeping upwards in frequency creates a bounce percept by grouping with a downward-sweeping tone due to proximity in frequency, even if noise replaces the tones at their intersection point. Illusory auditory percepts are also simulated, such as the auditory continuity illusion of a tone continuing through a noise burst even if the tone is not present during the noise, and the scale illusion of Deutsch whereby downward and upward scales presented alternately to the two ears are regrouped based on frequency proximity, leading to a bounce percept. Since related sorts of resonances have been used to quantitatively simulate psychophysical data about speech perception, the model strengthens the hypothesis that ART-like mechanisms are used at multiple levels of the auditory system. Proposals for developing the model to explain more complex streaming data are also provided.
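The core grouping step, in which components matching harmonics of a selected pitch are kept while mismatched components are released for capture by another stream, can be sketched in a few lines. This is a toy illustration of the principle only; the function names and the tolerance value are assumptions, not AIRSTREAM's actual resonance dynamics.

```python
import numpy as np

def harmonic_mask(freqs, pitch, tol=0.03):
    """True where a component lies within `tol` (relative error) of an
    integer harmonic of `pitch`. Tolerance is an illustrative choice."""
    harmonic_number = np.round(freqs / pitch)
    valid = harmonic_number >= 1
    rel_err = np.abs(freqs - harmonic_number * pitch) / freqs
    return valid & (rel_err < tol)

def split_streams(freqs, amps, pitch):
    """Top-down expectation: components matching the selected pitch's
    harmonics stay in stream 1; mismatches are released to stream 2."""
    m = harmonic_mask(np.asarray(freqs, dtype=float), pitch)
    return amps * m, amps * ~m
```

For example, with components at 100, 200, 300, 330, and 440 Hz and a selected pitch of 100 Hz, the first three components stay in the pitch-matched stream while 330 and 440 Hz become available to a second stream, mirroring the old-plus-new heuristic described above.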

Relevance: 20.00%

Abstract:

A neural model of peripheral auditory processing is described and used to separate features of coarticulated vowels and consonants. After preprocessing of speech via a filterbank, the model splits into two parallel channels, a sustained channel and a transient channel. The sustained channel is sensitive to relatively stable parts of the speech waveform, notably synchronous properties of the vocalic portion of the stimulus; it extends the dynamic range of eighth-nerve filters using coincidence detectors that combine operations of raising to a power, rectification, delay, multiplication, time averaging, and preemphasis. The transient channel is sensitive to critical features at the onsets and offsets of speech segments. It is built up from fast excitatory neurons that are modulated by slow inhibitory interneurons. These units are combined over high-frequency and low-frequency ranges using operations of rectification, normalization, multiplicative gating, and opponent processing. Detectors sensitive to frication and to onset or offset of stop consonants and vowels are described. Model properties are characterized by mathematical analysis and computer simulations. Neural analogs of model cells in the cochlear nucleus and inferior colliculus are noted, as are psychophysical data about perception of CV syllables that may be explained by the sustained-transient channel hypothesis. The proposed sustained and transient processing seems to be an auditory analog of the sustained and transient processing that is known to occur in vision.
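The sustained/transient split can be illustrated with a minimal first-order sketch: a fast excitatory trace opposed by a slow inhibitory trace yields a transient (onset) response, while the slow trace itself tracks steady portions of the input. The time constants, names, and reduction to two leaky integrators are illustrative assumptions, not the paper's full circuit.

```python
import numpy as np

def sustained_transient(x, fast=0.5, slow=0.05):
    """Toy channel split of a rectified envelope. `fast` and `slow` are
    illustrative update rates for the excitatory and inhibitory traces."""
    x = np.maximum(np.asarray(x, dtype=float), 0.0)  # rectification
    exc = np.zeros_like(x)
    inh = np.zeros_like(x)
    e = i = 0.0
    for n, s in enumerate(x):
        e += fast * (s - e)   # fast excitatory unit
        i += slow * (s - i)   # slow inhibitory interneuron
        exc[n], inh[n] = e, i
    transient = np.maximum(exc - inh, 0.0)  # opponent processing: onsets
    sustained = inh                          # slow trace tracks steady input
    return sustained, transient
```

Driving this with a step input produces a brief transient burst at the onset that dies away as inhibition catches up, while the sustained channel settles at the steady input level, the qualitative behavior the abstract attributes to the two channels.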

Relevance: 20.00%

Abstract:

BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein and the myosin binding subunit of myosin II phosphatase). Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

Relevance: 20.00%

Abstract:

Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior.
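One way to picture such a hybrid read-out is a population of model neurons with open-ended sigmoidal azimuth tuning, where the decoded location depends jointly on which cells respond (their inflection points, i.e. the site of activity) and how strongly they respond (the level). This is a loose toy sketch under those assumptions; the tuning shapes, parameter values, and function names are not taken from the recorded data or the paper's fitted algorithm.

```python
import numpy as np

def sigmoid_rates(azimuth_deg, inflections, slope=0.1):
    """Open-ended tuning: firing rate rises monotonically with azimuth,
    with a different inflection point per cell (illustrative model)."""
    return 1.0 / (1.0 + np.exp(-slope * (azimuth_deg - inflections)))

def read_out(azimuth_deg, inflections):
    """Toy location estimate combining site (inflection points of the
    responsive cells) and level (their firing rates) of activity."""
    rates = sigmoid_rates(azimuth_deg, inflections)
    return float(np.sum(rates * inflections) / np.sum(rates))
```

Even though every cell's response is monotonic rather than map-like, the population estimate still varies in an orderly way with the true azimuth, which is the essential point of the read-out described above.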