9 results for Hard of hearing
at National Center for Biotechnology Information - NCBI
Abstract:
To elucidate the role of thyroid hormone receptors (TRs) α1 and β in the development of hearing, cochlear functions have been investigated in mice lacking TRα1 or TRβ. TRs are ligand-dependent transcription factors expressed in the developing organ of Corti, and loss of TRβ is known to impair hearing in mice and in humans. Here, TRα1-deficient (TRα1−/−) mice are shown to display a normal auditory-evoked brainstem response, indicating that only TRβ, and not TRα1, is essential for hearing. Because cochlear morphology was normal in TRβ−/− mice, we postulated that TRβ regulates functional rather than morphological development of the cochlea. At the onset of hearing, inner hair cells (IHCs) in wild-type mice express a fast-activating potassium conductance, IK,f, that transforms the immature IHC from a regenerative, spiking pacemaker to a high-frequency signal transmitter. Expression of IK,f was significantly retarded in TRβ−/− mice, whereas the development of the endocochlear potential and other cochlear functions, including mechanoelectrical transduction in hair cells, progressed normally. TRα1−/− mice expressed IK,f normally, in accord with their normal auditory-evoked brainstem response. These results establish that the physiological differentiation of IHCs depends on a TRβ-mediated pathway. When defective, this may contribute to deafness in congenital thyroid diseases.
Abstract:
Postmitotic hair-cell regeneration in the inner ear of birds provides an opportunity to study the effect of renewed auditory input on auditory perception, vocal production, and vocal learning in a vertebrate. We used behavioral conditioning to test both perception and vocal production in a small Australian parrot, the budgerigar. Results show that both auditory perception and vocal production are disrupted when hair cells are damaged or lost but that these behaviors return to near normal over time. Precision in vocal production completely recovers well before recovery of full auditory function. These results may have particular relevance for understanding the relation between hearing loss and human speech production especially where there is consideration of an auditory prosthetic device. The present results show, at least for a bird, that even limited recovery of auditory input soon after deafening can support full recovery of vocal precision.
Abstract:
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.
Abstract:
The high sensitivity and sharp frequency discrimination of hearing depend on mechanical amplification in the cochlea. To explore the basis of this active process, we examined the pharmacological sensitivity of spontaneous otoacoustic emissions (SOAEs) in a lizard, the Tokay gecko. In a quiet environment, each ear produced a complex but stable pattern of emissions. These SOAEs were reversibly modulated by drugs that affect mammalian otoacoustic emissions, the salicylates and the aminoglycoside antibiotics. The effect of a single i.p. injection of sodium salicylate depended on the initial power of the emissions: ears with strong control SOAEs displayed suppression at all frequencies, whereas those with weak control emissions showed enhancement. Repeated oral administration of acetylsalicylic acid reduced all emissions. Single i.p. doses of gentamicin or kanamycin suppressed SOAEs below 2.6 kHz, while modulating those above 2.6 kHz in either of two ways. For ears whose emission power at 2.6–5.2 kHz encompassed more than half of the total, individual emissions displayed facilitation as great as 35-fold. For the remaining ears, emissions dropped to as little as one-sixth of their initial values. The similarity of the responses of reptilian and mammalian cochleas to pharmacological intervention provides further evidence for a common mechanism of cochlear amplification.
Abstract:
The membranous labyrinth of the inner ear establishes a precise geometrical topology so that it may subserve the functions of hearing and balance. How this geometry arises from a simple ectodermal placode is under active investigation. The placode invaginates to form the otic cup, which deepens before pinching off to form the otic vesicle. By the vesicle stage many genes expressed in the developing ear have assumed broad, asymmetrical expression domains. We have been exploring the possibility that these domains may reflect developmental compartments that are instrumental in specifying the location and identity of different parts of the ear. The boundaries between compartments are proposed to be the site of inductive interactions required for this specification. Our work has shown that sensory organs and the endolymphatic duct each arise near the boundaries of broader gene expression domains, lending support to this idea. A further prediction of the model, that the compartment boundaries will also represent lineage-restriction compartments, is supported in part by fate mapping the otic cup. Our data suggest that two lineage-restriction boundaries intersect at the dorsal pole of the otocyst, a convergence that may be critical for the specification of endolymphatic duct outgrowth. We speculate that the patterning information necessary to establish these two orthogonal boundaries may emanate, in part, from the hindbrain. The compartment boundary model of ear development now needs to be tested through a variety of experimental perturbations, such as the removal of boundaries, the generation of ectopic boundaries, and/or changes in compartment identity.
Abstract:
Within the mammalian inner ear there are six separate sensory regions that subserve the functions of hearing and balance, although how these sensory regions become specified remains unknown. Each sensory region is populated by two cell types, the mechanosensory hair cell and the supporting cell, which are arranged in a mosaic in which each hair cell is surrounded by supporting cells. The proposed mechanism for creating the sensory mosaic is lateral inhibition mediated by the Notch signaling pathway. However, one of the Notch ligands, Jagged1 (Jag1), does not show an expression pattern wholly consistent with a role in lateral inhibition, as it marks the sensory patches from very early in their development—presumably long before cells make their final fate decisions. It has been proposed that Jag1 has a role in specifying sensory versus nonsensory epithelium within the ear [Adam, J., Myat, A., Roux, I. L., Eddison, M., Henrique, D., Ish-Horowicz, D. & Lewis, J. (1998) Development (Cambridge, U.K.) 125, 4645–4654]. Here we provide experimental evidence that Notch signaling may be involved in specifying sensory regions by showing that a dominant mouse mutant headturner (Htu) contains a missense mutation in the Jag1 gene and displays missing posterior and sometimes anterior ampullae, structures that house the sensory cristae. Htu/+ mutants also demonstrate a significant reduction in the numbers of outer hair cells in the organ of Corti. Because lateral inhibition mediated by Notch predicts that disruptions in this pathway would lead to an increase in hair cells, we believe these data indicate an earlier role for Notch within the inner ear.
Abstract:
The dynamic responses of the hearing organ to acoustic overstimulation were investigated using the guinea pig isolated temporal bone preparation. The organ was loaded with the fluorescent Ca2+ indicator Fluo-3, and the cochlear electric responses to low-level tones were recorded through a microelectrode in the scala media. After overstimulation, the amplitude of the cochlear potentials decreased significantly. In some cases, rapid recovery was seen, with the potentials returning to their initial amplitude. In 12 of 14 cases in which overstimulation caused a decrease in the cochlear responses, significant elevations of the cytoplasmic [Ca2+] in the outer hair cells were seen. [Ca2+] increases appeared immediately after terminating the overstimulation, with partial recovery taking place in the ensuing 30 min in some preparations. Such [Ca2+] changes were not seen in preparations that were stimulated at levels that did not cause an amplitude change in the cochlear potentials. The overstimulation also gave rise to a contraction, evident as a decrease in the width of the organ of Corti. The average contraction in 10 preparations was 9 μm (SE 2 μm). Partial or complete recovery was seen within 30–45 min after the overstimulation. The [Ca2+] changes and the contraction are likely to produce major functional alterations and consequently are suggested to be a factor contributing strongly to the loss of function seen after exposure to loud sounds.
Abstract:
Cerebral organization during sentence processing in English and in American Sign Language (ASL) was characterized by employing functional magnetic resonance imaging (fMRI) at 4 T. Effects of deafness, age of language acquisition, and bilingualism were assessed by comparing results from (i) normally hearing, monolingual, native speakers of English, (ii) congenitally, genetically deaf, native signers of ASL who learned English late and through the visual modality, and (iii) normally hearing bilinguals who were native signers of ASL and speakers of English. All groups, hearing and deaf, processing their native language, English or ASL, displayed strong and repeated activation within classical language areas of the left hemisphere. Deaf subjects reading English did not display activation in these regions. These results suggest that the early acquisition of a natural language is important in the expression of the strong bias for these areas to mediate language, independently of the form of the language. In addition, native signers, hearing and deaf, displayed extensive activation of homologous areas within the right hemisphere, indicating that the specific processing requirements of the language also in part determine the organization of the language systems of the brain.