8 results for: Speech production. Second language learning. VoiceThread. Noticing
at National Center for Biotechnology Information - NCBI
Abstract:
Spoken language is one of the most compact and structured ways to convey information. The linguistic ability to structure individual words into larger sentence units permits speakers to express a nearly unlimited range of meanings. This ability is rooted in speakers' knowledge of syntax and in the corresponding process of syntactic encoding. Syntactic encoding is highly automatized, operates largely outside of conscious awareness, and overlaps closely in time with several other processes of language production. With the use of positron emission tomography we investigated the cortical activations during spoken language production that are related to the syntactic encoding process. In the paradigm of restrictive scene description, utterances varying in complexity of syntactic encoding were elicited. Results provided evidence that the left Rolandic operculum, caudally adjacent to Broca's area, is involved in both sentence-level and local (phrase-level) syntactic encoding during speaking.
Abstract:
The integration of speech recognition with natural language understanding raises issues of how to adapt natural language processing to the characteristics of spoken language; how to cope with errorful recognition output, including the use of natural language information to reduce recognition errors; and how to use information from the speech signal, beyond just the sequence of words, as an aid to understanding. This paper reviews current research addressing these questions in the Spoken Language Program sponsored by the Advanced Research Projects Agency (ARPA). I begin by reviewing some of the ways that spontaneous spoken language differs from standard written language and discuss methods of coping with the difficulties of spontaneous speech. I then look at how systems cope with errors in speech recognition and at attempts to use natural language information to reduce recognition errors. Finally, I discuss how prosodic information in the speech signal might be used to improve understanding.
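The abstract above mentions using natural-language information to reduce recognition errors. One common way to do this (a standard technique, not necessarily the exact method used by the ARPA systems described) is to re-rank the recognizer's N-best hypothesis list with a language model. A minimal sketch, with invented bigram probabilities and acoustic scores purely for illustration:

```python
import math

# Toy bigram log-probabilities (assumed values for illustration only).
BIGRAM_LOGPROB = {
    ("show", "me"): math.log(0.30),
    ("me", "flights"): math.log(0.20),
    ("show", "neat"): math.log(0.001),
    ("neat", "flights"): math.log(0.001),
}
DEFAULT_LOGPROB = math.log(1e-4)  # back-off score for unseen bigrams

def lm_score(words):
    """Sum of bigram log-probabilities over a word sequence."""
    return sum(
        BIGRAM_LOGPROB.get((w1, w2), DEFAULT_LOGPROB)
        for w1, w2 in zip(words, words[1:])
    )

def rescore(nbest, lm_weight=1.0):
    """Combine each hypothesis's acoustic score with a weighted
    language-model score and return hypotheses sorted best-first."""
    return sorted(
        nbest,
        key=lambda h: h["acoustic"] + lm_weight * lm_score(h["words"]),
        reverse=True,
    )

# Two acoustically similar hypotheses; the language model prefers
# the well-formed word sequence even though its acoustic score is worse.
nbest = [
    {"words": ["show", "neat", "flights"], "acoustic": -5.0},
    {"words": ["show", "me", "flights"], "acoustic": -5.2},
]
best = rescore(nbest)[0]
print(" ".join(best["words"]))  # prints "show me flights"
```

The same combination idea extends to trigram or neural language models; only `lm_score` changes.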
Abstract:
Investigation of the three-generation KE family, half of whose members are affected by a pronounced verbal dyspraxia, has led to identification of their core deficit as one involving sequential articulation and orofacial praxis. A positron emission tomography activation study revealed functional abnormalities in both cortical and subcortical motor-related areas of the frontal lobe, while quantitative analyses of magnetic resonance imaging scans revealed structural abnormalities in several of these same areas, particularly the caudate nucleus, which was found to be abnormally small bilaterally. A recent linkage study [Fisher, S. E., Vargha-Khadem, F., Watkins, K. E., Monaco, A. P. & Pembrey, M. E. (1998) Nat. Genet. 18, 168–170] localized the abnormal gene (SPCH1) to a 5.6-centiMorgan interval in the chromosomal band 7q31. The genetic mutation or deletion in this region has resulted in the abnormal development of several brain areas that appear to be critical for both orofacial movements and sequential articulation, leading to marked disruption of speech and expressive language.
Abstract:
Lesions to left frontal cortex in humans produce speech production impairments (nonfluent aphasia). These impairments vary from subject to subject and performance on certain speech production tasks can be relatively preserved in some patients. A possible explanation for preservation of function under these circumstances is that areas outside left prefrontal cortex are used to compensate for the injured brain area. We report here a direct demonstration of preserved language function in a stroke patient (LF1) apparently due to the activation of a compensatory brain pathway. We used functional brain imaging with positron emission tomography (PET) as a basis for this study.
Abstract:
Advances in digital speech processing are now supporting application and deployment of a variety of speech technologies for human/machine communication. In fact, new businesses are rapidly forming around these technologies. But these capabilities are of little use unless society can afford them. Happily, explosive advances in microelectronics over the past two decades have assured affordable access to this sophistication as well as to the underlying computing technology. The research challenges in speech processing remain in the traditionally identified areas of recognition, synthesis, and coding. These three areas have typically been addressed individually, often with significant isolation among the efforts. But they are all facets of the same fundamental issue: how to represent and quantify the information in the speech signal. This implies deeper understanding of the physics of speech production, the constraints that the conventions of language impose, and the mechanism for information processing in the auditory system. In ongoing research, therefore, we seek more accurate models of speech generation, better computational formulations of language, and realistic perceptual guides for speech processing, along with ways to coalesce the fundamental issues of recognition, synthesis, and coding. Successful solution will yield the long-sought dictation machine, high-quality synthesis from text, and the ultimate in low bit-rate transmission of speech. It will also open the door to language-translating telephony, where the synthetic foreign translation can be in the voice of the originating talker.
Abstract:
Postmitotic hair-cell regeneration in the inner ear of birds provides an opportunity to study the effect of renewed auditory input on auditory perception, vocal production, and vocal learning in a vertebrate. We used behavioral conditioning to test both perception and vocal production in a small Australian parrot, the budgerigar. Results show that both auditory perception and vocal production are disrupted when hair cells are damaged or lost but that these behaviors return to near normal over time. Precision in vocal production completely recovers well before recovery of full auditory function. These results may have particular relevance for understanding the relation between hearing loss and human speech production especially where there is consideration of an auditory prosthetic device. The present results show, at least for a bird, that even limited recovery of auditory input soon after deafening can support full recovery of vocal precision.
Abstract:
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.
Abstract:
Speech interface technology, which includes automatic speech recognition, synthetic speech, and natural language processing, is beginning to have a significant impact on business and personal computer use. Today, powerful and inexpensive microprocessors and improved algorithms are driving commercial applications in computer command, consumer, data entry, speech-to-text, telephone, and voice verification. Robust speaker-independent recognition systems for command and navigation in personal computers are now available; telephone-based transaction and database inquiry systems using both speech synthesis and recognition are coming into use. Large-vocabulary speech interface systems for document creation and read-aloud proofing are expanding beyond niche markets. Today's applications represent a small preview of a rich future for speech interface technology that will eventually replace keyboards with microphones and loudspeakers to give easy accessibility to increasingly intelligent machines.