991 results for Inconsistent speech errors


Relevance:

100.00%

Publisher:

Abstract:

Developmental speech disorder is accounted for by theories derived from psychology, psycholinguistics, linguistics and medicine, with researchers developing assessment protocols that reflect their theoretical perspective. How theory and data analyses lead to different therapy approaches, however, is sometimes unclear. Here, we present a case management plan for a 7-year-old boy with unintelligible speech. Assessment data were analysed to address seven case management questions regarding the need for intervention, service delivery, differential diagnosis, intervention goals, generalization of therapeutic gains, discharge criteria and evaluation of efficacy. Jarrod was diagnosed as having inconsistent speech disorder that required intervention. He pronounced 88% of words differently when asked to name each word in the 25-word inconsistency test of the Diagnostic Evaluation of Articulation and Phonology three times, each trial separated by another activity. Other standardized assessments supported the diagnosis of inconsistent speech disorder, which, according to previous research, is associated with a deficit in phonological assembly. Core vocabulary intervention was chosen as the most appropriate therapy technique. Its nature and a possible protocol for its implementation are described.
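
The inconsistency measure reported here lends itself to a compact illustration. The sketch below scores a word as inconsistent when its three productions are not all identical and reports the proportion of such words over the test list; the scoring rule and the toy transcriptions are illustrative assumptions, not the DEAP's official procedure.

```python
# Minimal sketch of an inconsistency score in the style of a 25-word,
# three-trial test: a word counts as inconsistent if its productions
# are not all identical; the score is the proportion of such words.

def inconsistency_score(trials: dict[str, list[str]]) -> float:
    """trials maps each target word to its three transcribed productions."""
    inconsistent = sum(
        1 for productions in trials.values() if len(set(productions)) > 1
    )
    return inconsistent / len(trials)

# Hypothetical transcriptions for two of the targets.
sample = {
    "elephant": ["efant", "ephant", "efant"],  # two variants -> inconsistent
    "fish":     ["fis", "fis", "fis"],         # identical -> consistent
}
print(f"{inconsistency_score(sample):.0%}")  # 50% for this toy sample
```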

Relevance:

100.00%

Publisher:

Abstract:

In order to explore the impact of a degraded semantic system on the structure of language production, we analysed transcripts from autobiographical memory interviews to identify naturally occurring speech errors by eight patients with semantic dementia (SD) and eight age-matched normal speakers. Relative to controls, patients were significantly more likely to (a) substitute and omit open class words, (b) substitute (but not omit) closed class words, (c) substitute incorrect complex morphological forms and (d) produce semantically and/or syntactically anomalous sentences. Phonological errors were scarce in both groups. The study confirms previous evidence of SD patients’ problems with open class content words, which are replaced by higher-frequency, less specific terms. It presents the first evidence that SD patients have problems with closed class items and make syntactic as well as semantic speech errors, although these grammatical abnormalities are mostly subtle rather than gross. The results can be explained by the semantic deficit, which disrupts the representation of the pre-verbal message, lexical retrieval and the early stages of grammatical encoding.
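
As a rough illustration of the tallying such an analysis involves, the sketch below counts substitution and omission errors by word class from annotated tokens; the annotation scheme and labels are hypothetical, not the authors' coding system.

```python
# Tally error rates per word class from (word_class, error_type) pairs;
# the labels below are an assumed, simplified coding scheme.
from collections import Counter

annotations = [
    ("open", "substitution"), ("open", "omission"),
    ("closed", "substitution"), ("open", "correct"),
]

errors = Counter((wc, err) for wc, err in annotations if err != "correct")
totals = Counter(wc for wc, _ in annotations)
for (wc, err), n in errors.items():
    print(f"{wc} {err}: {n}/{totals[wc]} = {n / totals[wc]:.0%}")
```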

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to compare speech in subjects with cleft lip and palate in whom three methods of hard palate closure were used. One hundred and thirty-seven children (96 boys, 41 girls; mean age = 12 years, SD = 1.2) with complete unilateral cleft lip and palate (CUCLP), operated on by a single surgeon with a one-stage method, were evaluated. The management of the cleft lip and soft palate was comparable in all subjects; for hard palate repair, three different methods were used: bilateral von Langenbeck closure (b-vL group, n = 39), unilateral von Langenbeck closure (u-vL group, n = 56) and vomerplasty (v-p group, n = 42). Speech was assessed (i) perceptually for (a) hypernasality, (b) compensatory articulations (CAs), (c) audible nasal air emissions (ANE) and (d) speech intelligibility; (ii) for the presence of compensatory facial grimacing; (iii) with clinical intra-oral evaluation; and (iv) with videonasendoscopy. The total rate of hypernasality requiring pharyngoplasty was 5.1%; the total incidence of post-oral compensatory articulations was 2.2%. Overall speech intelligibility was good in 84.7% of cases. Oronasal fistulas (ONFs) occurred in 15.7% of b-vL subjects, 7.1% of u-vL subjects and 50% of v-p subjects (P < 0.001). No statistically significant intergroup differences in hypernasality, CAs or intelligibility were found (P > 0.1). In conclusion, speech after early one-stage repair of CUCLP was satisfactory. The method of hard palate repair affected the incidence of ONFs, which, however, caused relatively mild and inconsistent speech errors.
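
The intergroup comparison of ONF incidence can be illustrated with a chi-square test. In the sketch below the counts are reconstructed approximately from the reported percentages (about 6/39, 4/56 and 21/42) and are therefore an assumption, not the paper's raw data.

```python
# Chi-square test of ONF incidence across the three repair groups,
# with approximate counts back-calculated from the reported percentages.
from scipy.stats import chi2_contingency

#            ONF  no ONF
table = [[ 6, 33],   # bilateral von Langenbeck (b-vL), n = 39
         [ 4, 52],   # unilateral von Langenbeck (u-vL), n = 56
         [21, 21]]   # vomerplasty (v-p), n = 42

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")  # p << 0.001
```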

Relevance:

90.00%

Publisher:

Abstract:

The fields of Rhetoric and Communication usually assume a competent speaker who is able to speak well with conscious intent; however, what happens when intent and comprehension are intact but communicative faculties are impaired (e.g., by stroke or traumatic brain injury)? What might a focus on communicative success be able to tell us in those instances? This project considers this question by examining communication disorders, identifying and analyzing patterns of (dis)fluent speech in 10 aphasic and 10 non-aphasic adults. The analysis in this report is centered on data provided by the AphasiaBank database. The database’s collection protocol guides aphasic and non-aphasic participants through a series of language assessments, and for my re-analysis of the database’s transcripts I consider what communicative success is and how it is demonstrated during a re-telling of the Cinderella narrative. I conducted a thorough examination of a set of participant transcripts to understand the contexts in which speech errors occur and how (dis)fluencies may follow from aphasic and non-aphasic participants’ speech patterns. An inductive mixed-methods approach, informed by grounded theory and by qualitative and linguistic analyses of the transcripts, served to balance the classification of data, providing a foundation for all sampling decisions. A close examination of the transcripts and the codes of the AphasiaBank database suggests that while the coding is abundant and detailed, further levels of coding and analysis may be needed to reveal underlying similarities and differences in aphasic vs. non-aphasic linguistic behavior. Through four successive levels of increasingly detailed analysis, I found that patterns of repair by aphasics and non-aphasics differed primarily in degree rather than kind. This finding may have therapeutic impact, reassuring aphasics that they are on the right track to achieving communicative fluency.
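
AphasiaBank transcripts follow the CHAT conventions, so a first-pass disfluency profile can be computed by counting repair markers. The sketch below uses a small assumed subset of CHAT codes ([/] repetition, [//] retracing, &-prefixed fillers) and a fabricated utterance; it illustrates the idea, not the coding levels developed in this project.

```python
# Count repair-related disfluency markers in a CHAT-style transcript line.
import re
from collections import Counter

def disfluency_profile(utterance: str) -> Counter:
    return Counter({
        "repetition": utterance.count("[/]"),
        "retracing": utterance.count("[//]"),
        "filler": len(re.findall(r"&-\w+", utterance)),
    })

line = "*PAR: and then the &-um prince [//] the king [/] the king left ."
print(disfluency_profile(line))
```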

Relevance:

80.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: To describe articulatory indices for the different error types and to determine whether children with phonological disorder show a preferred error type, as a function of the presence or absence of a history of otitis media. METHODS: Twenty-one subjects aged between 5 years 2 months and 7 years 9 months, all diagnosed with phonological disorder, took part in this prospective, cross-sectional study. Subjects were grouped according to history of otitis media: experimental group 1 (EG1) comprised 14 subjects with a history of otitis media, and experimental group 2 (EG2) comprised seven subjects without such a history. The number of speech errors (distortions, omissions and substitutions) and the articulatory indices were calculated, and the data were submitted to statistical analysis. RESULTS: Groups EG1 and EG2 differed in index performance when the two phonology tasks administered were compared. In all analyses, the indices assessing substitutions identified the error type most frequently produced by children with phonological disorder. CONCLUSION: The indices were effective in identifying substitution as the most frequent error in children with phonological disorder. The higher number of speech errors observed in picture naming among children with a history of otitis media suggests that these errors are possibly associated with difficulty in phonological representation caused by the transient hearing loss these children experienced.
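
As an illustration of how such indices might be tabulated, the sketch below computes the share of each error type over all errors from a list of scored productions; the index definition and the sample data are assumptions, not the study's protocol.

```python
# Share of each error type over all errors for one child's productions.
from collections import Counter

# Hypothetical scored productions.
scored = ["correct", "substitution", "omission", "substitution",
          "distortion", "correct", "substitution"]

errors = Counter(e for e in scored if e != "correct")
total_errors = sum(errors.values())
for error_type, n in errors.most_common():
    print(f"{error_type}: {n}/{total_errors} = {n / total_errors:.0%}")
```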

Relevance:

40.00%

Publisher:

Abstract:

Current models of word production assume that words are stored as linear sequences of phonemes which are structured into syllables only at the moment of production, because syllable structure is always recoverable from the sequence of phonemes. In contrast, we present theoretical and empirical evidence that syllable structure is lexically represented. Storing syllable structure would have the advantage of making representations more stable and resistant to damage, while re-syllabification affects only a minimal part of phonological representations and occurs only in some languages, depending on speech register. Evidence for these claims comes from analyses of aphasic errors, which not only respect phonotactic constraints but also avoid transformations that move the syllabic structure of the word further away from the original structure, even when segmental complexity is equated. This is true across tasks, types of errors and, crucially, types of patients. The same syllabic effects are shown by apraxic patients and by phonological patients who have more central difficulties in retrieving phonological representations. If syllable structure were only computed after phoneme retrieval, it would have no way to influence the errors of phonological patients. Our results have implications for psycholinguistic and computational models of language as well as for clinical and educational practices.
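
The notion of distance between syllabic structures can be made concrete with a toy CV-skeleton comparison. The sketch below maps a phoneme string to consonant/vowel slots and counts mismatches between target and error; the vowel set and the position-wise alignment are simplifying assumptions, not the authors' metric.

```python
# Toy syllabic-distance measure over CV skeletons.
VOWELS = set("aeiou")

def cv_skeleton(word: str) -> str:
    return "".join("V" if ch in VOWELS else "C" for ch in word)

def structural_distance(target: str, error: str) -> int:
    s, t = cv_skeleton(target), cv_skeleton(error)
    # Pad to equal length so insertions/deletions also count as mismatches.
    n = max(len(s), len(t))
    return sum(a != b for a, b in zip(s.ljust(n, "-"), t.ljust(n, "-")))

# "kat" -> "tak" preserves the CVC skeleton (distance 0), while
# "kat" -> "ka" loses the coda (distance 1).
print(structural_distance("kat", "tak"), structural_distance("kat", "ka"))
```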

Relevance:

30.00%

Publisher:

Abstract:

Secondary tasks such as cell phone calls or interaction with automated speech dialog systems (SDSs) increase the driver’s cognitive load as well as the probability of driving errors. This study analyzes speech production variations due to cognitive load and the emotional state of drivers in real driving conditions. Speech samples were acquired from 24 female and 17 male subjects (approximately 8.5 h of data) while talking to a co-driver and communicating with two automated call centers, with emotional states (neutral, negative) and the number of necessary SDS query repetitions also labeled. A consistent shift in a number of speech production parameters (pitch, first formant center frequency, spectral center of gravity, spectral energy spread, and duration of voiced segments) was observed when comparing SDS interaction against co-driver interaction; further increases were observed when considering negative emotion segments and the number of requested SDS query repetitions. A mel-frequency cepstral coefficient (MFCC) based Gaussian mixture classifier trained on 10 male and 10 female sessions provided 91% accuracy on the open test set task of distinguishing co-driver interactions from SDS interactions, suggesting, together with the acoustic analysis, that it is possible to monitor the level of driver distraction directly from speech.
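
The classifier family described here can be sketched compactly: one Gaussian mixture model per interaction type over MFCC frames, with the class decided by average log-likelihood. The settings below (13 MFCCs, 8 diagonal-covariance components, 16 kHz audio) are illustrative assumptions, not the study's configuration.

```python
# MFCC + per-class GMM classification of co-driver vs SDS interactions.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)

def train_gmm(paths: list[str]) -> GaussianMixture:
    X = np.vstack([mfcc_frames(p) for p in paths])
    return GaussianMixture(n_components=8, covariance_type="diag").fit(X)

# With hypothetical training file lists:
# gmm_codriver, gmm_sds = train_gmm(codriver_paths), train_gmm(sds_paths)
# X = mfcc_frames(test_wav)  # classify by mean log-likelihood:
# label = "SDS" if gmm_sds.score(X) > gmm_codriver.score(X) else "co-driver"
```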

Relevance:

30.00%

Publisher:

Abstract:

Non-driving-related cognitive load and variations in emotional state may impact a driver’s capability to control a vehicle and introduce driving errors. The availability of reliable cognitive load and emotion detection in drivers would benefit the design of active safety systems and other intelligent in-vehicle interfaces. In this study, speech produced by 68 subjects while driving in urban areas is analyzed. A particular focus is on speech production differences between two secondary cognitive tasks, interactions with a co-driver and calls to automated spoken dialog systems (SDS), and between two emotional states during the SDS interactions (neutral and negative). A number of speech parameters are found to vary across the cognitive/emotion classes. The suitability of selected cepstral- and production-based features for automatic cognitive task/emotion classification is investigated. A fusion of GMM/SVM classifiers yields an accuracy of 94.3% in cognitive task and 81.3% in emotion classification.
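
One simple way to realise a GMM/SVM fusion is at the score level, averaging the two models' class posteriors; the sketch below assumes utterance-level feature vectors and this particular equal-weight fusion rule, neither of which is taken from the paper.

```python
# Score-level fusion of an SVM and per-class GMMs.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fused_predict(X_train, y_train, X_test):
    labels = np.unique(y_train)                      # sorted class labels
    svm = SVC(probability=True).fit(X_train, y_train)
    gmms = [GaussianMixture(n_components=4).fit(X_train[y_train == c])
            for c in labels]
    # Per-class likelihoods from the GMMs, normalised into posteriors.
    lik = np.column_stack([np.exp(g.score_samples(X_test)) for g in gmms])
    gmm_post = lik / lik.sum(axis=1, keepdims=True)
    # predict_proba columns follow svm.classes_, i.e. the same sorted order.
    fused = (svm.predict_proba(X_test) + gmm_post) / 2
    return labels[fused.argmax(axis=1)]
```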

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a speech coding technique developed to provide a method of digitising speech at bit rates in the range 4.8 to 8 kb/s that is insensitive to the effects of acoustic background noise and bit errors on the digital link. The main aim has been to develop a coding scheme which provides speech quality and robustness against noise and errors similar to that of a 16 kb/s continuously variable slope delta (CVSD) coder, but which operates at half its data rate or less. A desirable aim was to keep the complexity of the coding scheme within the scope of what could reasonably be handled by current signal processing chips or by a single custom integrated circuit. Application areas include mobile radio and small Satcomms terminals.
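
For reference, the CVSD benchmark the scheme is compared against is easy to sketch. The toy encoder below grows the quantiser step during slope overload (three equal bits in a row) and lets it decay otherwise; the companding constants are illustrative choices. A decoder would mirror the same state updates from the bit stream, which is part of what makes the coder robust to bit errors.

```python
# Toy CVSD (continuously variable slope delta) encoder: one bit per sample.
def cvsd_encode(samples, step_min=0.01, step_max=0.5):
    """Encode a sequence of samples in [-1, 1] into a bit stream."""
    bits, history = [], []
    estimate, step = 0.0, step_min
    for x in samples:
        bit = 1 if x > estimate else 0
        bits.append(bit)
        history = (history + [bit])[-3:]
        # Syllabic companding: grow the step during slope overload
        # (three equal bits in a row), otherwise let it decay.
        if len(history) == 3 and len(set(history)) == 1:
            step = min(step * 1.5, step_max)
        else:
            step = max(step * 0.9, step_min)
        estimate += step if bit else -step
        estimate *= 0.98  # leaky integrator aids recovery from bit errors
    return bits
```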

Relevance:

30.00%

Publisher:

Abstract:

In recognition-based user interfaces, user satisfaction is determined not only by recognition accuracy but also by the effort needed to correct recognition errors. In this paper, we introduce a crossmodal error correction technique which allows users to correct errors of Chinese handwriting recognition by speech. The focus of the paper is a multimodal fusion algorithm supporting this crossmodal error correction. By fusing handwriting and speech recognition, the algorithm can correct errors in both character extraction and recognition of handwriting. Experimental results indicate that the algorithm is effective and efficient. Moreover, the evaluation shows that the correction technique helps users correct handwriting recognition errors more efficiently than the other two error correction techniques evaluated.
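
The fusion idea can be sketched as combining the n-best candidate lists of the two recognisers, so a character ranked low by one modality can be rescued by the other. The candidates and scores below are hypothetical, and summing log scores is an assumed fusion rule, not necessarily the paper's algorithm.

```python
# Combine handwriting and speech n-best scores by summed log probability.
import math

def fuse(handwriting: dict[str, float], speech: dict[str, float]) -> str:
    candidates = handwriting.keys() & speech.keys()
    return max(candidates, key=lambda c: math.log(handwriting[c]) +
                                         math.log(speech[c]))

hw = {"未": 0.60, "末": 0.35}   # visually confusable pair
sp = {"未": 0.70, "末": 0.05}   # speech disambiguates (wei vs mo)
print(fuse(hw, sp))  # -> 未
```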

Relevance:

30.00%

Publisher:

Abstract:

Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing real or actual stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and less consistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices, so speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines, including security, telecommunications, psychology, speech science, forensics and Human Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. In order to do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author’s hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter and that many modulating factors influence the stress response process. A model is proposed to reflect the author’s hypothesis on the emotional response pathways relating to the elicitation of stress with a required speech content. Finally, the author provides guidelines and recommendations for future research on speech under stress. Further research paths are identified and a roadmap for future research in this area is defined.
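
As a hedged illustration of the skin-resistance side of this verification, the sketch below counts skin conductance responses (SCRs) as upward deflections whose per-sample rise exceeds a threshold; the threshold and the derivative-based onset rule are assumed values, not the thesis's parameters.

```python
# Count skin conductance response (SCR) onsets in a conductance trace.
import numpy as np

def count_scrs(conductance: np.ndarray, threshold: float = 0.05) -> int:
    rises = np.diff(conductance)
    above = rises > threshold
    # An onset is a sample whose rise exceeds the threshold while the
    # previous sample's rise did not.
    onsets = above[1:] & ~above[:-1]
    return int(onsets.sum())

# Hypothetical skin conductance trace (microsiemens).
trace = np.array([5.0, 5.0, 5.1, 5.3, 5.3, 5.2, 5.2, 5.4, 5.5])
print(count_scrs(trace))  # 2 rising events in this toy trace
```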