6 results for Inconsistent speech errors

in Aston University Research Archive


Relevance: 40.00%

Abstract:

Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production. This is because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage, while re-syllabification affects only a minimal part of phonological representations and occurs only in some languages and in certain speech registers. Evidence for these claims comes from analyses of aphasic errors which not only respect phonotactic constraints, but also avoid transformations that move the syllabic structure of the word further away from the original structure, even when segmental complexity is equated. This is true across tasks, types of errors, and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.
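
As a rough illustration of the kind of comparison such analyses rely on, the sketch below (a toy example, not the authors' procedure) measures how far an error's CV syllable skeleton departs from the target's; the phoneme strings and the vowel inventory are invented for the example.

```python
# Minimal sketch (not the authors' method): compare the CV skeleton of a
# target word with that of an error to ask whether the error preserves or
# moves away from the target's syllable structure. The strings and the toy
# vowel inventory are illustrative assumptions.

VOWELS = set("aeiou")  # toy vowel inventory

def cv_skeleton(phonemes: str) -> str:
    """Map each phoneme to C (consonant) or V (vowel)."""
    return "".join("V" if p in VOWELS else "C" for p in phonemes)

def skeleton_distance(target: str, error: str) -> int:
    """Levenshtein distance between the two CV skeletons."""
    t, e = cv_skeleton(target), cv_skeleton(error)
    prev = list(range(len(e) + 1))
    for i, tc in enumerate(t, 1):
        curr = [i]
        for j, ec in enumerate(e, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (tc != ec)))   # substitution
        prev = curr
    return prev[-1]

# "pronto" -> "ponto" changes CCVCCV to CVCCV (distance 1),
# whereas "pronto" -> "prosto" keeps the skeleton intact (distance 0).
print(skeleton_distance("pronto", "ponto"))   # 1
print(skeleton_distance("pronto", "prosto"))  # 0
```

An error that deletes a consonant from a cluster changes the skeleton, whereas a substitution that swaps one consonant for another leaves it intact; this is the kind of contrast the error analyses exploit.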

Relevance: 30.00%

Abstract:

We present the case of two aphasic patients: one with fluent speech, MM, and one with dysfluent speech, DB. Both patients make similar proportions of phonological errors in speech production, and the errors have similar characteristics. A closer analysis, however, shows a number of differences. DB's phonological errors involve, for the most part, simplifications of syllabic structure; they affect consonants more than vowels; and, among vowels, they show effects of sonority/complexity. This error pattern may reflect articulatory difficulties. MM's errors, instead, show little effect of syllable structure, affect vowels at least as much as consonants, and affect all vowels to a similar extent. This pattern is consistent with a more central impairment involving the selection of the right phoneme among competing alternatives. We propose that, at this level, vowel selection may be more difficult than consonant selection because vowels belong to a smaller set of repeatedly activated units.
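
A minimal sketch of the kind of tally that distinguishes the two patterns is given below; the aligned target/response pairs and the vowel set are hypothetical, and a real analysis would work over transcribed phonemes rather than letters.

```python
# Minimal sketch, not the authors' analysis: given aligned (target phoneme,
# produced phoneme) pairs for a patient, tally how often errors fall on
# consonants versus vowels. The alignment and vowel set are toy assumptions.

from collections import Counter

VOWELS = set("aeiou")

def error_profile(aligned_pairs):
    """Return per-class error rates for consonant and vowel targets."""
    errors = Counter()
    targets = Counter()
    for target, produced in aligned_pairs:
        kind = "vowel" if target in VOWELS else "consonant"
        targets[kind] += 1
        if produced != target:
            errors[kind] += 1
    return {kind: errors[kind] / targets[kind] for kind in targets}

# Hypothetical aligned productions: ('a', 'o') is a vowel error, ('s', 't') a consonant error.
sample = [("k", "k"), ("a", "o"), ("s", "t"), ("a", "a"), ("o", "u")]
print(error_profile(sample))  # e.g. {'consonant': 0.5, 'vowel': 0.666...}
```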

Relevance: 30.00%

Abstract:

Single word production requires that phoneme activation is maintained while articulatory conversion is taking place. Word serial recall, connected speech and non-word production (repetition and spelling) are all assumed to involve a phonological output buffer. A crucial question is whether the same memory resources are also involved in single word production. We investigate this question by assessing length and positional effects in the single word repetition and reading of six aphasic patients. We expect a damaged buffer to result in error rates per phoneme that increase with word length, and in positional effects. Although our patients had trouble with phoneme activation (they made mainly errors of phoneme selection), they did not show the effects expected from a buffer impairment. These results show that phoneme activation cannot be automatically equated with a buffer. We hypothesize that the phonemes of existing words are kept active through permanent links to the word node. Thus, the sustained activation needed for their articulation will come from the lexicon and will have different characteristics from the activation needed for the short-term retention of an unbound set of units. We conclude that there is no need and no evidence for a phonological buffer in single word production.
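
The two signatures expected from a damaged buffer can be made concrete with a small sketch; the data format (one list of per-phoneme correctness flags per word) is an assumption made purely for illustration.

```python
# Minimal sketch, under toy assumptions: compute the two signatures a damaged
# output buffer is expected to leave, i.e. a per-phoneme error rate that grows
# with word length and an effect of within-word serial position. The data
# format is hypothetical.

from collections import defaultdict

def buffer_signatures(responses):
    """responses: list of lists of booleans, one flag per phoneme (True = correct)."""
    by_length = defaultdict(lambda: [0, 0])    # word length -> [errors, phonemes]
    by_position = defaultdict(lambda: [0, 0])  # serial position -> [errors, phonemes]
    for flags in responses:
        n = len(flags)
        for pos, correct in enumerate(flags, 1):
            by_length[n][0] += (not correct)
            by_length[n][1] += 1
            by_position[pos][0] += (not correct)
            by_position[pos][1] += 1
    rate = lambda pair: pair[0] / pair[1]
    return ({n: rate(v) for n, v in sorted(by_length.items())},
            {p: rate(v) for p, v in sorted(by_position.items())})

# Hypothetical repetition data: each inner list is one word.
data = [[True, True, True], [True, False, True, True], [True, True, False, False, True]]
length_rates, position_rates = buffer_signatures(data)
print(length_rates)    # error rate per phoneme at each word length
print(position_rates)  # error rate per phoneme at each serial position
```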

Relevance: 30.00%

Abstract:

This thesis addresses the viability of automatic speech recognition for control room systems. With careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to the possible effects of stress on the operators' speech.

It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than instructions. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstrations from experienced ASR operators.

From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied as text; the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial.

Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition, rather than by error-handling dialogues. This method of error correction is held to be non-intrusive to primary command and control operations. The thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for its reduction.
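
The recommended combination of textual recognition feedback and correction by simple repetition can be sketched as follows; the command set and the stand-in recogniser are entirely hypothetical and are only meant to show the shape of the interaction, not an implementation from the thesis.

```python
# Minimal sketch of the interaction pattern described above, with an entirely
# hypothetical recogniser: the operator sees textual recognition feedback on
# screen and corrects a misrecognition simply by repeating the command, rather
# than entering a separate error-handling dialogue.

VALID_COMMANDS = {"open valve three", "close valve three", "start pump one"}

def recognise(utterance: str) -> str:
    """Stand-in for an ASR device; a real recogniser would return its best hypothesis."""
    return utterance.lower().strip()

def command_loop(spoken_attempts, max_repeats: int = 3):
    """Accept a command once the recognition feedback matches a valid command."""
    for attempt, utterance in enumerate(spoken_attempts[:max_repeats], 1):
        hypothesis = recognise(utterance)
        print(f"Attempt {attempt}, recognised: {hypothesis!r}")  # textual recognition feedback
        if hypothesis in VALID_COMMANDS:
            print(f"Executing: {hypothesis}")  # task feedback would go to the mimic display
            return hypothesis
        print("Not understood, please repeat.")  # correction by simple repetition
    return None

# The second attempt succeeds; no error-handling dialogue is needed.
command_loop(["opne valve three", "open valve three"])
```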

Relevance: 30.00%

Abstract:

At present there is no standard assessment method for rating and comparing the quality of synthesized speech. This study assesses the suitability of Time Frequency Warping (TFW) modulation for use as a reference device for assessing synthesized speech. Time Frequency Warping modulation introduces timing errors into natural speech that produce perceptual errors similar to those found in synthetic speech. It is proposed that TFW modulation, used in conjunction with a listening effort test, would provide a standard assessment method for rating the quality of synthesized speech. This study identifies the most suitable TFW modulation variable parameter for assessing synthetic speech and assesses the results of several assessment tests that rate examples of synthesized speech in terms of the TFW variable parameter and listening effort. The study also attempts to identify the attributes of speech that differentiate synthetic, TFW-modulated and natural speech.
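
One way timing distortions of the kind TFW modulation introduces could be simulated is by resampling the signal along a slowly varying warped time axis, as in the sketch below; the sinusoidal warp shape and the depth parameter are illustrative assumptions, not the modulation used in the study.

```python
# Minimal sketch, under stated assumptions: resample a signal along a
# sinusoidally warped time axis to introduce gradual timing errors. The warp
# shape and the depth parameter are illustrative, not the study's parameters.

import numpy as np

def time_warp(signal: np.ndarray, sample_rate: int, depth: float, warp_hz: float = 2.0):
    """Resample `signal` along a warped time axis.

    depth: maximum local timing deviation in seconds (a candidate variable parameter).
    """
    n = len(signal)
    t = np.arange(n) / sample_rate
    warped_t = t + depth * np.sin(2 * np.pi * warp_hz * t)  # warped read positions
    warped_t = np.clip(warped_t, 0.0, t[-1])
    return np.interp(warped_t, t, signal)                   # linear resampling

# Example on a synthetic 440 Hz tone; with real speech the depth parameter
# would be varied and rated against listening effort.
sr = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
distorted = time_warp(tone, sr, depth=0.005)
print(distorted.shape)  # (16000,)
```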

Relevance: 30.00%

Abstract:

Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and can therefore be interpreted as a combination of deficits at the phonological and the motoric level rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors which simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties, such as a predominance of errors on consonants rather than vowels, but not with other measures, such as the rate of words reproduced correctly or the rate of lexical errors. These results indicate that sound errors cannot arise at a single phonological level because they differ across patients. Instead, the different patterns: (1) provide evidence for separate impairments and for the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; (3) support the claim that linguistic principles of complexity have an articulatory basis, since they only apply in patients with associated articulatory difficulties.
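
The grouping step behind the first result can be illustrated with a small sketch; the patient summaries are invented, and only the 10% cut-off comes from the abstract.

```python
# Minimal sketch (toy data, not the study's data): split patients by their rate
# of phonetic errors at a 10% cut-off and compare the proportion of phonological
# errors that simplify syllable structure in the two groups.

def split_by_phonetic_rate(patients, threshold: float = 0.10):
    """patients: list of dicts with 'phonetic_rate' and 'simplification_rate'."""
    high = [p for p in patients if p["phonetic_rate"] > threshold]
    low = [p for p in patients if p["phonetic_rate"] <= threshold]
    mean = lambda group: sum(p["simplification_rate"] for p in group) / len(group)
    return mean(high), mean(low)

# Hypothetical patient summaries.
patients = [
    {"phonetic_rate": 0.22, "simplification_rate": 0.61},
    {"phonetic_rate": 0.15, "simplification_rate": 0.55},
    {"phonetic_rate": 0.04, "simplification_rate": 0.32},
    {"phonetic_rate": 0.07, "simplification_rate": 0.29},
]
high_mean, low_mean = split_by_phonetic_rate(patients)
print(f"simplification, high phonetic-error group: {high_mean:.2f}")
print(f"simplification, low phonetic-error group:  {low_mean:.2f}")
```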