960 results for speech language pathology
Abstract:
With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system, called Mary 101. Two types of experiments were performed: a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests"), and b) gauging visual speech recognition by comparing lip-reading performance on the real and synthetic image-sequences of the same utterances ("Intelligibility tests"). Subjects who were presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. When asked to lip-read the utterances from the same image-sequences, the same subjects recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels reported in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head, but that additional effort is required to improve the animation for lip-reading applications such as rehabilitation and language learning. These two tasks can also be considered explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between the synthetic and the real image-sequences. The implicit perceptual discrimination task (b) consists of a comparison between visual recognition of speech in real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.
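The chance-level result in the explicit ("Turing test") task invites a simple statistical check. As a minimal sketch (not the analysis used in the study), assuming each trial yields a binary real/synthetic judgment, a two-sided binomial test against p = 0.5 indicates whether classification accuracy departs from chance; the counts below are invented for illustration:

```python
from scipy.stats import binomtest

# Hypothetical counts: 112 correct real/synthetic judgments out of 216 trials.
n_correct, n_trials = 112, 216

# Two-sided test against chance (p = 0.5): a large p-value is consistent with
# subjects being unable to tell synthetic from real sequences above chance.
result = binomtest(n_correct, n_trials, p=0.5)
print(f"accuracy = {n_correct / n_trials:.3f}, p = {result.pvalue:.3f}")
```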
Abstract:
Hearing is the second most important sensory mechanism after vision for obtaining information during the operation of an aircraft. It allows pilots to perceive, process, and identify the sounds of the environment around them. Pilots therefore need to hear well both in flight and on the ground, especially between 500 and 3000 Hz, the range used for the reception of spoken language and auditory signals. Objective: To determine the progressive changes over time and the auditory frequencies affected in the audiograms of military pilots of the Colombian military forces in the years 2009, 2010, and 2011. Material and Methods: This is a longitudinal cohort study characterizing the audiograms of the population of pilots of the Colombian military forces in 2009, 2010, and 2011, based on a retrospective review of those audiograms. For this purpose, the pilot population was divided into fixed-wing pilots (47) and rotary-wing pilots (155). Conclusions: The most affected frequency in the total population was 6000 Hz; in fixed-wing pilots the most affected frequencies were 4000 Hz and 6000 Hz, while in rotary-wing pilots they were 4000 Hz, 6000 Hz, and 8000 Hz, leading to the conclusion that flight exposure affects the high frequencies in pilots' audiograms. A relationship with the number of flight hours was observed: pilots with between 1000 and 4000 flight hours showed alterations at 4000 Hz, 6000 Hz, and 8000 Hz, and pilots with more than 5000 flight hours showed alterations at all frequencies in 2009, with subsequent recovery in later years whose causes this study could not determine. Rotary-wing pilots showed a sustained increase across all frequencies compared with fixed-wing pilots.
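To make the longitudinal comparison concrete, here is a minimal sketch of how such audiogram data might be aggregated by wing type, frequency, and year; the table layout, column names, and values are hypothetical, not the study's actual data:

```python
import pandas as pd

# Hypothetical layout: one row per pilot, year, and test frequency, with the
# measured hearing threshold in dB HL. All names and values are illustrative.
audiograms = pd.DataFrame({
    "pilot_id":     [1, 1, 2, 2, 3, 3],
    "wing":         ["fixed", "fixed", "rotary", "rotary", "rotary", "rotary"],
    "year":         [2009, 2010, 2009, 2010, 2009, 2010],
    "freq_hz":      [4000, 4000, 6000, 6000, 8000, 8000],
    "threshold_db": [25, 30, 35, 40, 20, 25],
})

# Mean threshold per wing type, frequency, and year exposes which frequencies
# shift over time, mirroring the study's 2009-2011 comparison.
trend = (audiograms
         .groupby(["wing", "freq_hz", "year"])["threshold_db"]
         .mean()
         .unstack("year"))
print(trend)
```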
Abstract:
This paper presents a speech discrimination test in Mandarin for hearing-impaired children.
Abstract:
This paper reviews the field of speech pathology and asks whether it is of benefit in helping profoundly retarded children develop verbal/functional language skills.
Abstract:
This paper presents a speech discrimination test in the Southern Sotho language (an African language spoken in Southern Africa).
Abstract:
Infants' responses in speech sound discrimination tasks can be nonmonotonic over time. Stager and Werker (1997) reported such data in a bimodal habituation task. In this task, 8-month-old infants were capable of discriminations that involved minimal contrast pairs, whereas 14-month-old infants were not. It was argued that the older infants' attenuated performance was linked to their processing of the stimuli for meaning. The authors suggested that these data are diagnostic of a qualitative shift in infant cognition. We describe an associative connectionist model showing a similar decrement in discrimination without any qualitative shift in processing. The model suggests that responses to phonemic contrasts may be a nonmonotonic function of experience with language. The implications of this idea are discussed. The model also provides a formal framework for studying habituation-dishabituation behaviors in infancy.
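As a rough illustration of the modeling idea (a generic associative sketch, not the authors' actual model), a linear auto-associator trained with the delta rule on a habituated pattern produces a graded dishabituation response to a minimally contrasting pattern as a function of training experience:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic associative-network sketch: a linear layer learns to reproduce a
# habituated "word" pattern; the response to a minimal-contrast pattern is
# its reconstruction error. Dimensions and learning rate are assumptions.
dim, lr = 20, 0.1
habituated = rng.normal(size=dim)
habituated /= np.linalg.norm(habituated)      # unit-norm familiar pattern
contrast = habituated.copy()
contrast[0] = -contrast[0]                    # one feature changed (minimal pair)

W = np.zeros((dim, dim))
for trial in range(1, 201):
    error = habituated - W @ habituated       # delta rule on the familiar item
    W += lr * np.outer(error, habituated)
    if trial % 40 == 0:
        # Discrimination index: extra error evoked by the contrasting item.
        novelty = np.linalg.norm(contrast - W @ contrast)
        familiar = np.linalg.norm(habituated - W @ habituated)
        print(f"trial {trial:3d}: dishabituation = {novelty - familiar:.3f}")
```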
Abstract:
This paper addresses the nature and cause of Specific Language Impairment (SLI) by reviewing recent research in sentence processing of children with SLI compared to typically developing (TD) children and research in infant speech perception. These studies have revealed that children with SLI are sensitive to syntactic, semantic, and real-world information, but do not show sensitivity to grammatical morphemes with low phonetic saliency, and they show longer reaction times than age-matched controls. TD children from the age of 4 show trace reactivation, but some children with SLI fail to show this effect, which resembles the pattern of adults and TD children with low working memory. Finally, findings from the German Language Development (GLAD) Project have revealed that a group of children at risk for SLI had a history of an auditory delay and impaired processing of prosodic information in the first months of life, a deficit that is no longer detectable later in life. Although this is a single project that needs to be replicated with a larger group of children, it provides preliminary support for accounts of SLI that make an explicit link between an early deficit in the processing of phonology and later language deficits, and for the Computational Complexity Hypothesis, which argues that the language deficit in children with SLI lies in difficulties integrating different types of information at the interfaces.
Abstract:
This paper reports pitch range and vowel duration data from a group of children with Williams syndrome (WS) in comparison with a group of typically developing children matched for chronological age (CA) and a group matched for receptive language abilities (LA). It is found that the speech of the WS group has a greater pitch range than that of the typically developing children and that its vowels tend to be longer in duration. These findings are in line with the impressionistic results reported by Reilly, Klima and Bellugi [17].
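For concreteness, the two measures reported here can be computed from an F0 track and segment boundaries roughly as follows; the values, the unvoiced-frame coding, and the semitone conversion are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

# Hypothetical F0 track in Hz, with unvoiced frames coded as 0.
f0 = np.array([0, 180, 195, 240, 310, 290, 0, 0, 175, 220])
voiced = f0[f0 > 0]
# Pitch range expressed in semitones between the F0 extremes.
pitch_range_st = 12 * np.log2(voiced.max() / voiced.min())

# Vowel duration from (invented) segment boundaries in seconds.
vowel_on, vowel_off = 0.382, 0.561
vowel_duration_ms = (vowel_off - vowel_on) * 1000
print(f"pitch range = {pitch_range_st:.1f} st, "
      f"vowel duration = {vowel_duration_ms:.0f} ms")
```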
Abstract:
Background: Several authors have highlighted areas of overlap in symptoms and impairment among children with autism spectrum disorder (ASD) and children with specific language impairment (SLI). By contrast, loss of language and broadly defined regression have been reported as relatively specific to autism. We compare the incidence of language loss and language progression of children with autism and SLI. Methods: We used two complementary studies: the Special Needs and Autism Project (SNAP) and the Manchester Language Study (MLS) involving children with SLI. This yielded a combined sample of 368 children (305 males and 63 females) assessed in late childhood for autism, history of language loss, epilepsy, language abilities and nonverbal IQ. Results: Language loss occurred in just 1% of children with SLI but in 15% of children classified as having autism or autism spectrum disorder. Loss was more common among children with autism than among those with milder ASD and was much less frequently reported when language development was delayed. For children who lost language skills before their first phrases, the phrased-speech milestone was postponed, but long-term language skills were not significantly lower than those of children with autism without loss. For the few who experienced language loss after acquiring phrased speech, subsequent cognitive performance is more uncertain. Conclusions: Language loss is highly specific to ASD. The underlying developmental abnormality may be more prevalent than raw data might suggest, its possible presence being hidden for children whose language development is delayed.
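The 1% versus 15% incidence contrast can be examined with a standard 2x2 exact test. The counts below are invented to match the reported percentages and total sample size, purely for illustration; this is not the paper's analysis:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 counts (lost language / did not), chosen so the rates
# approximate the reported ~1% (SLI) and ~15% (autism/ASD) on 368 children.
table = [[2, 148],    # SLI:  2 of 150 lost language
         [33, 185]]   # ASD: 33 of 218 lost language
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4g}")
```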
Abstract:
Models of normal word production are well specified about the effects of frequency of linguistic stimuli on lexical access, but are less clear regarding the same effects on later stages of word production, particularly word articulation. In aphasia, this lack of specificity regarding downstream frequency effects is even more noticeable because there is a relatively limited amount of data on the time course of frequency effects for this population. This study begins to fill this gap by comparing the effects of variation in word frequency (lexical, whole word) and bigram frequency (sub-lexical, within word) on word production abilities in ten normal speakers and eight individuals with mild-to-moderate aphasia. In an immediate repetition paradigm, participants repeated single monosyllabic words in which word frequency (high or low) was crossed with bigram frequency (high or low). Indices for mapping the time course of these effects included reaction time (RT) for linguistic processing and motor preparation, and word duration (WD) for speech motor performance (word articulation time). The results indicated that individuals with aphasia had significantly longer RT and WD than normal speakers. RT showed a significant main effect only for word frequency (i.e., high-frequency words had shorter RT). WD showed significant main effects of word and bigram frequency; however, contrary to our expectations, high-frequency items had longer WD. Further investigation of WD revealed that, independent of the influence of word and bigram frequency, vowel type (tense or lax) had the expected effect on WD. Moreover, individuals with aphasia differed from control speakers in their ability to implement tense vowel duration, even though they could produce an appropriate distinction between tense and lax vowels. The results highlight the importance of using temporal measures to identify subtle deficits in linguistic and speech motor processing in aphasia, the crucial role of the phonetic characteristics of the stimulus set in studying speech production, and the need for language production models to account more explicitly for word articulation.
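The crossed word-frequency x bigram-frequency design maps onto a standard two-way analysis. The sketch below simulates stand-in RT data with a main effect of word frequency only, as reported; trial counts and effect sizes are invented, and the repeated-measures group structure is omitted for brevity, so this is illustrative rather than a reproduction of the study's statistics:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)

# Simulated 2x2 repetition data; names and numbers are illustrative only.
n = 40
data = pd.DataFrame({
    "word_freq":   np.repeat(["high", "low"], 2 * n),
    "bigram_freq": np.tile(np.repeat(["high", "low"], n), 2),
})
# Build RTs with a word-frequency main effect only, as the study found.
data["rt_ms"] = (600
                 + np.where(data["word_freq"] == "low", 40, 0)
                 + rng.normal(0, 30, size=4 * n))

model = smf.ols("rt_ms ~ word_freq * bigram_freq", data=data).fit()
print(anova_lm(model, typ=2))   # main effects and interaction table
```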
Abstract:
Spoken word recognition during gating appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied in frequency (low/high) and number of phonological onset neighbors (low/high density). Adolescents with ALI required more speech input to initially identify low-frequency words with low competitor density than those with SLI and those with TLD, who did not differ. These differences may be due to less well-specified word-form representations in ALI.
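Gating performance is typically summarized by how much speech input (how many gates) a listener needs before the target is reliably identified. A minimal scoring sketch, with invented responses and names, might look like this:

```python
# Isolation point: the first gate at which the listener's response matches
# the target and never changes on later gates. Names are illustrative.
def isolation_point(responses, target):
    for gate in range(1, len(responses) + 1):
        if all(r == target for r in responses[gate - 1:]):
            return gate          # amount of speech input needed to identify
    return None                  # word never reliably identified

# One listener's responses across four successive gates of "beaker".
print(isolation_point(["bee", "beak", "beaker", "beaker"], "beaker"))  # -> 3
```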
Abstract:
The effects of background English and Welsh speech on memory for visually presented English words were contrasted amongst monolingual English speakers and bilingual Welsh-English speakers. Equivalent disruption to the English language task was observed amongst Welsh-speaking bilinguals from both English and Welsh speech, but English-speaking monolinguals displayed less disruption from the Welsh speech. An effect of the meaning of the background speech was therefore apparent amongst bilinguals even when the focal memory task was presented in a different language from the distracting speech. A second experiment tested only English-speaking monolinguals, using English as background speech, but varied the demands of the focal task. Participants were asked either to count the number of vowels in words visually presented for future recall, or to rate them for pleasantness, before subsequently being asked to recall the words. Greater disruption to recall was observed from meaningful background speech when participants initially rated the words for pleasantness than when they initially counted the vowels within the words. These results show that background speech is automatically analyzed for meaning, but whether the meaning of the background speech causes distraction is critically dependent upon the nature of the focal task. The data underscore the need to consider not only the nature of office noise, but also the demands and content of the work task when assessing the effects of office noise on work performance.
Abstract:
Background: Word deafness is a rare condition in which pathologically degraded speech perception results in impaired repetition and comprehension but otherwise intact linguistic skills. Although impaired linguistic systems in aphasias resulting from damage to the neural language system (here termed central impairments) have been consistently shown to be amenable to external influences such as linguistic or contextual information (e.g. cueing effects in naming), it is not known whether similar influences can be shown for aphasia arising from damage to a perceptual system (here termed peripheral impairments). Aims: This study aimed to investigate the extent to which pathologically degraded speech perception could be facilitated or disrupted by providing visual as well as auditory information. Methods and Procedures: In three word repetition tasks, the participant with word deafness (AB) repeated words under different conditions: words were repeated in the context of a pictorial or written target, a distractor (semantic, unrelated, rhyme or phonological neighbour) or a blank page (nothing). Accuracy and error types were analysed. Results: AB was impaired at repetition in the blank condition, confirming her degraded speech perception. Repetition was significantly facilitated when accompanied by a picture or written example of the word and significantly impaired by the presence of a written rhyme. Errors in the blank condition were primarily formal, whereas errors in the rhyme condition were primarily miscues (saying the distractor word rather than the target). Conclusions: Cross-modal input can both facilitate and further disrupt repetition in word deafness. The cognitive mechanisms behind these findings are discussed. Both top-down influence from the lexical layer on perceptual processes and intra-lexical competition within the lexical layer may play a role.
Abstract:
Background and aims: In addition to the well-known linguistic processing impairments in aphasia, oro-motor skills and the articulatory implementation of speech segments are reported to be compromised to some degree in most types of aphasia. This study aimed to identify differences in the characteristics and coordination of lip movements in the production of a bilabial closure gesture between speech-like and nonspeech tasks in individuals with aphasia and healthy control subjects. Method and procedure: Upper and lower lip movement data were collected for a speech-like and a nonspeech task using an AG 100 EMMA system from five individuals with aphasia and five age- and gender-matched control subjects. Each task was produced at two rate conditions (normal and fast), and in a familiar and a less-familiar manner. Single-articulator kinematic parameters (peak velocity, amplitude, duration, and cyclic spatio-temporal index) and multi-articulator coordination indices (average relative phase and variability of relative phase) were measured to characterize lip movements. Outcome and results: The results showed that when the two lips had similar task goals (bilabial closure), kinematic and coordination characteristics did not differ between the speech-like and nonspeech tasks. However, when changes in rate were imposed on the bilabial gesture, only the speech-like task showed functional adaptations, indicated by a greater decrease in amplitude and duration at fast rates. In terms of group differences, individuals with aphasia showed smaller amplitudes and longer movement durations for the upper lip, higher spatio-temporal variability for both lips, and higher variability in lip coordination than the control speakers. Rate was an important factor in distinguishing the two groups, and individuals with aphasia were limited in implementing the rate changes. Conclusion and implications: The findings support the notion of subtle but robust differences in motor control characteristics between individuals with aphasia and the control participants, even in the context of producing bilabial closing gestures for a relatively simple speech-like task. The findings also highlight the functional differences between speech-like and nonspeech tasks, despite a common movement coordination goal of bilabial closure.
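The coordination indices named above (average relative phase and its variability) can be illustrated on synthetic signals. The sketch below uses the Hilbert transform to obtain continuous relative phase between two lip-like signals; the signals, sample rate, and the particular variability index are assumptions for illustration, not the study's exact method:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)

# Synthetic stand-ins for upper and lower lip displacement during repeated
# bilabial closures: roughly antiphase oscillations with measurement noise.
fs, f = 500, 4.0                       # sample rate (Hz), cycle rate (Hz)
t = np.arange(0, 2, 1 / fs)
upper = np.sin(2 * np.pi * f * t) + 0.05 * rng.normal(size=t.size)
lower = np.sin(2 * np.pi * f * t + np.pi) + 0.05 * rng.normal(size=t.size)

# Continuous relative phase from the analytic (Hilbert) signals.
phase_u = np.angle(hilbert(upper))
phase_l = np.angle(hilbert(lower))
rel_phase = np.angle(np.exp(1j * (phase_u - phase_l)))   # wrapped to [-pi, pi]

mean_phase = np.angle(np.mean(np.exp(1j * rel_phase)))     # circular mean
variability = 1 - np.abs(np.mean(np.exp(1j * rel_phase)))  # 0 = perfectly stable
print(f"mean relative phase = {np.degrees(mean_phase):.1f} deg, "
      f"variability = {variability:.3f}")
```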