953 results for performativity of speech


Relevance: 100.00%

Abstract:

BACKGROUND: The aim of this retrospective study was to evaluate speech outcome and the need for a pharyngeal flap in children born with nonsyndromic Pierre Robin Sequence (nsPRS) vs syndromic Pierre Robin Sequence (sPRS). METHODS: Pierre Robin Sequence was diagnosed when the triad of microretrognathia, glossoptosis, and cleft palate was present. Children were classified at birth into 3 categories depending on respiratory and feeding problems. The Borel-Maisonny classification was used to score velopharyngeal insufficiency. RESULTS: The study was based on 38 children followed from 1985 to 2006. For the 25 nsPRS children, 9 (36%) pharyngeal flaps were performed, with improvements of the phonatory score in the 3 categories. For the 13 sPRS children, 3 (23%) pharyngeal flaps were performed, with an improvement of the phonatory scores in the 3 children. There was no statistical difference between the nsPRS and sPRS groups (P = .3), even when the children were compared across the 3 categories (P = .2). CONCLUSIONS: Children born with nsPRS did not have a better prognosis for speech outcome than children born with sPRS. Respiratory and feeding problems at birth did not seem to be correlated with speech outcome. This is important when informing parents about the prognosis of long-term therapy.

Relevance: 100.00%

Abstract:

The current study investigated the effects that barriers (both real and perceived) had on participation in and completion of speech and language programs for preschool children with communication delays. I compared 36 families of preschool children with an identified communication delay that had completed services (completers) to 13 families that had not completed services (non-completers) prescribed by speech and language professionals. The findings reported were drawn from an interview with the mother, a speech and language assessment of the child, and an extensive package of measures completed by the mother. Children ranged in age from 32 to 71 months. These data were collected as part of a project funded by the Canadian Language and Literacy Research Networks of Centres of Excellence. Findings suggest that completers and non-completers shared commonalities in a number of parenting characteristics but differed significantly in two areas. Mothers in the non-completing group were more permissive and had lower maternal education than mothers in the completing families. From a systemic standpoint, families also differed in the number of perceived barriers to treatment experienced during their time with Speech Services Niagara. Mothers in the non-completing group experienced more perceived barriers to treatment than completing mothers. Specifically, these mothers perceived more stressors and obstacles that competed with treatment, perceived more treatment demands, and perceived the relevance of treatment as less important than the completing group did. Despite this, the findings suggest that non-completing families were 100% satisfied with services. Contrary to predictions, there were no significant differences in child characteristics or economic characteristics between completers and non-completers. The findings in this study are considered exploratory and tentative due to the small sample size.

Relevance: 100.00%

Abstract:

Medical fields require fast, simple, and noninvasive diagnostic techniques. Several such methods have become possible because of the growth of technology that provides the necessary means of collecting and processing signals. The present thesis details work done in the field of voice signals. New methods of analysis, such as nonlinear dynamics, have been developed to understand the complexity of voice signals and to explore their dynamic nature. The purpose of this thesis is to characterize the complexity of pathological voice relative to healthy signals and to differentiate stuttering signals from healthy signals. The efficiency of various acoustic as well as nonlinear time-series methods is analysed. Three groups of samples are used: healthy individuals, subjects with vocal pathologies, and stuttering subjects. Individual vowels and continuous speech data for the utterance of the Malayalam sentence "iruvarum changatimaranu" (in English, "Both are good friends") were recorded using a microphone. The recorded audio was converted to digital signals and subjected to analysis. Acoustic perturbation measures such as fundamental frequency (F0), jitter, shimmer, and zero-crossing rate (ZCR) were computed, and nonlinear measures such as the maximum Lyapunov exponent (λmax), correlation dimension (D2), Kolmogorov entropy (K2), and a new entropy measure, permutation entropy (PE), were evaluated for all three groups of subjects. Permutation entropy is a nonlinear complexity measure that can efficiently distinguish the regular from the complex nature of a signal and extract information about changes in the dynamics of the process by indicating sudden changes in its value. The results show that nonlinear dynamical methods are a suitable technique for voice signal analysis, owing to the chaotic component of the human voice. Permutation entropy is well suited due to its sensitivity to uncertainties, since the pathologies are characterized by an increase in signal complexity and unpredictability. Pathological groups have higher entropy values compared to the normal group, while the stuttering signals have lower entropy values compared to the normal signals. PE is effective in characterising the level of improvement after two weeks of speech therapy in the case of stuttering subjects. PE is also effective in characterizing the dynamical difference between healthy and pathological subjects. This suggests that PE can improve and complement the voice analysis methods currently available to clinicians. The work establishes the application of the simple, inexpensive, and fast PE algorithm for diagnosis in vocal disorders and stuttering.
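The abstract does not give the thesis' implementation details, but permutation entropy itself is a standard ordinal complexity measure (Bandt and Pompe, 2002). A minimal Python sketch, with the embedding dimension m and delay tau treated as analyst-chosen assumptions rather than the thesis' actual settings:

```python
import math
from collections import Counter

import numpy as np


def permutation_entropy(signal, m=3, tau=1, normalize=True):
    """Permutation entropy of a 1-D signal (Bandt & Pompe style).

    m   : embedding dimension (order of the ordinal patterns)
    tau : embedding delay in samples
    Returns entropy in bits, optionally normalized to [0, 1] by log2(m!).
    """
    x = np.asarray(signal, dtype=float)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for the chosen m and tau")

    # Count each ordinal pattern: the ranking of m delayed samples.
    patterns = Counter(
        tuple(np.argsort(x[i:i + m * tau:tau])) for i in range(n)
    )

    # Shannon entropy of the ordinal-pattern distribution.
    probs = np.array(list(patterns.values()), dtype=float) / n
    pe = -np.sum(probs * np.log2(probs))
    return pe / math.log2(math.factorial(m)) if normalize else pe


# Example: white noise gives PE close to 1; a pure sine gives a much lower value.
rng = np.random.default_rng(0)
print(permutation_entropy(rng.normal(size=5000), m=4, tau=1))
print(permutation_entropy(np.sin(np.linspace(0, 100 * np.pi, 5000)), m=4, tau=1))
```

The contrast in the example mirrors the idea in the abstract: more irregular, unpredictable signals yield higher PE, more regular signals yield lower PE.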

Relevance: 100.00%

Abstract:

With many visual speech animation techniques now available, there is a clear need for systematic perceptual evaluation schemes. We describe here our scheme and its application to a new video-realistic (potentially indistinguishable from real recorded video) visual-speech animation system, called Mary 101. Two types of experiments were performed: a) distinguishing visually between real and synthetic image-sequences of the same utterances ("Turing tests"), and b) gauging visual speech recognition by comparing lip-reading performance on the real and synthetic image-sequences of the same utterances ("Intelligibility tests"). Subjects who were presented randomly with either real or synthetic image-sequences could not tell the synthetic from the real sequences above chance level. The same subjects, when asked to lip-read the utterances from the same image-sequences, recognized speech from real image-sequences significantly better than from synthetic ones. However, performance for both real and synthetic sequences was at levels suggested in the literature on lip-reading. We conclude from the two experiments that the animation of Mary 101 is adequate for providing the percept of a talking head, but that additional effort is required to improve the animation for lip-reading purposes such as rehabilitation and language learning. In addition, these two tasks can be considered explicit and implicit perceptual discrimination tasks. In the explicit task (a), each stimulus is classified directly as a synthetic or real image-sequence by detecting a possible difference between the synthetic and the real image-sequences. The implicit perceptual discrimination task (b) consists of a comparison between visual recognition of speech from real and synthetic image-sequences. Our results suggest that implicit perceptual discrimination is a more sensitive method for discriminating between synthetic and real image-sequences than explicit perceptual discrimination.
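The abstract does not state the statistical procedure behind "above chance level", but a comparison of this kind is commonly made with a binomial test on the number of correct real-vs-synthetic judgements. A minimal sketch under that assumption; the trial counts below are invented for illustration only:

```python
from scipy.stats import binomtest

# Hypothetical counts: 164 correct real-vs-synthetic judgements out of 320 trials.
result = binomtest(k=164, n=320, p=0.5, alternative="greater")
print(f"p-value vs. chance (50%): {result.pvalue:.3f}")  # large p => not above chance
```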

Relevance: 100.00%

Abstract:

This paper reviews a study to determine whether loss of speech discrimination is related to age in patients with audiograms showing steep high-frequency losses.

Relevance: 100.00%

Abstract:

This dissertation examines the frequency response that results in the maximum level of speech intelligibility for persons with noise-induced hearing loss.

Relevance: 100.00%

Abstract:

This paper discusses a study on postlingual cochlear implantees and the effectiveness of the CST in evaluating enhancement of speech recognition abilities.

Relevance: 100.00%

Abstract:

Speech-evoked auditory brainstem responses (ABRs) were acquired in quiet and in the presence of noise at two study sessions to investigate 1) test-retest variability and 2) subcortical representation of speech stimuli. Participants were adults with normal hearing in both ears who listened monaurally and adults with unilateral deafness. Results indicate consistency in responses across sessions and several differences between hearing groups for magnitudes of discrete components.

Relevance: 100.00%

Abstract:

While the beneficial effect of levodopa on traditional motor control tasks has been well documented over the decades, its effect on speech motor control has rarely been objectively examined and the existing literature remains inconclusive. This paper aims to examine the effect of levodopa on speech in patients with Parkinson's disease. It was hypothesized that levodopa would improve preparatory motor-set-related activity and alleviate hypophonia. Patients fasted and abstained from levodopa overnight. Motor examination and speech testing were performed the following day, pre-levodopa during their "off" state, then at hourly intervals post-medication to obtain the best "on" state. All speech stimuli showed a consistent tendency toward increased loudness and faster rate during the "on" state, but this was accompanied by a greater extent of intensity decay. Pitch and articulation remained unchanged. Levodopa effectively upscaled the overall gain setting of vocal amplitude and tempo, similar to its well-known effect on limb movement. However, unlike limb movement, this effect on the final acoustic product of speech may or may not be advantageous, depending on the existing speech profile of individual patients. (C) 2007 Movement Disorder Society.

Relevance: 100.00%

Abstract:

Two experiments examine the effect of simulating a reverberant auditory environment on an immediate recall test in which auditory distracters in the form of speech are played to the participants (the 'irrelevant sound effect'). An echo-intensive environment, simulated by the addition of reverberation to the speech, reduced the extent of 'changes in state' in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo-free environment. The results are interpreted in terms of the changing-state hypothesis, which states that the acoustic content of irrelevant sound, rather than its phonology or semantics, determines the extent of the irrelevant sound effect (ISE). Copyright (C) 2007 John Wiley & Sons, Ltd.
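The paper's own method for adding reverberation is not described in the abstract; as a rough illustration of the waveform-smoothing idea, synthetic reverberation can be approximated by convolving speech with an exponentially decaying noise impulse response. A sketch under that assumption, not the authors' procedure (rt60 is an assumed decay time):

```python
import numpy as np


def add_synthetic_reverb(speech, sample_rate, rt60=1.0, rng=None):
    """Convolve a speech signal with an exponentially decaying noise
    impulse response, a crude stand-in for a reverberant room."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_ir = int(rt60 * sample_rate)
    t = np.arange(n_ir) / sample_rate
    # Amplitude falls by 60 dB over rt60 seconds, applied to Gaussian noise.
    ir = rng.normal(size=n_ir) * 10 ** (-3.0 * t / rt60)
    wet = np.convolve(speech, ir)[: len(speech)]
    return wet / (np.max(np.abs(wet)) + 1e-12)  # normalise to avoid clipping
```

Smearing each burst of speech energy across the impulse response in this way reduces abrupt transitions between tokens, which is the "changing state" property the reverberant condition attenuates.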

Relevance: 100.00%

Abstract:

Alterations of existing neural networks during healthy aging, resulting in behavioral deficits and changes in brain activity, have been described for cognitive, motor, and sensory functions. To investigate age-related changes in the neural circuitry underlying overt non-lexical speech production, functional MRI was performed in 14 healthy younger (21–32 years) and 14 healthy older individuals (62–84 years). The experimental task involved the acoustically cued overt production of the vowel /a/ and the polysyllabic utterance /pataka/. In younger and older individuals, overt speech production was associated with the activation of a widespread articulo-phonological network, including the primary motor cortex, the supplementary motor area, the cingulate motor areas, and the posterior superior temporal cortex, similar in the /a/ and /pataka/ condition. An analysis of variance with the factors age and condition revealed a significant main effect of age. Irrespective of the experimental condition, significantly greater activation was found in the bilateral posterior superior temporal cortex, the posterior temporal plane, and the transverse temporal gyri in younger compared to older individuals. Significantly greater activation was found in the bilateral middle temporal gyri, medial frontal gyri, middle frontal gyri, and inferior frontal gyri in older vs. younger individuals. The analysis of variance did not reveal a significant main effect of condition and no significant interaction of age and condition. These results suggest a complex reorganization of neural networks dedicated to the production of speech during healthy aging.

Relevance: 100.00%

Abstract:

To investigate the neural network of overt speech production, event-related fMRI was performed in 9 young healthy adult volunteers. A clustered image acquisition technique was chosen to minimize speech-related movement artifacts. Functional images were acquired during the production of oral movements and of speech of increasing complexity (an isolated vowel as well as monosyllabic and trisyllabic utterances). This imaging technique and behavioral task enabled depiction of the articulo-phonologic network of speech production from the supplementary motor area at the cranial end to the red nucleus at the caudal end. Speaking a single vowel and performing simple oral movements involved very similar activation of the cortical and subcortical motor systems. More complex, polysyllabic utterances were associated with additional activation in the bilateral cerebellum, reflecting increased demand on speech motor control, and additional activation in the bilateral temporal cortex, reflecting the stronger involvement of phonologic processing.

Relevance: 100.00%

Abstract:

This paper describes the methodology used to compile a corpus called MorphoQuantics that contains a comprehensive set of 17,943 complex word types extracted from the spoken component of the British National Corpus (BNC). The categorisation of these complex words was derived primarily from the classification of Prefixes, Suffixes and Combining Forms proposed by Stein (2007). The MorphoQuantics corpus has been made available on a website of the same name; it lists 554 word-initial and 281 word-final morphemes in English, their etymology and meaning, and records the type and token frequencies of all the associated complex words containing these morphemes from the spoken element of the BNC, together with their Part of Speech. The results show that, although the number of word-initial affixes is nearly double that of word-final affixes, the relative number of each observed in the BNC is very similar; however, word-final affixes are more productive in that, on average, the frequency with which they attach to different bases is three times that of word-initial affixes. Finally, this paper considers how linguists, psycholinguists and psychologists may use MorphoQuantics to support their empirical work in first and second language acquisition, and clinical and educational research.
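MorphoQuantics' actual extraction pipeline is not reproduced here; as an illustration of what type and token frequencies for a word-final morpheme mean in practice, the following hedged Python sketch counts both for a given suffix string. The affix, toy word list, and naive string-matching rule are placeholders, not the project's method:

```python
from collections import Counter


def affix_type_token_counts(tokens, suffix):
    """Count type and token frequencies of words ending in a given suffix.

    tokens : iterable of word tokens from a (spoken) corpus
    suffix : word-final morpheme of interest, e.g. "ness"
    """
    matches = Counter(t.lower() for t in tokens if t.lower().endswith(suffix))
    type_freq = len(matches)            # number of distinct word types
    token_freq = sum(matches.values())  # total occurrences in the corpus
    return type_freq, token_freq


# Toy example (not BNC data): three types and four tokens end in "ness".
toy = ["kindness", "darkness", "kindness", "happiness", "speech"]
print(affix_type_token_counts(toy, "ness"))  # -> (3, 4)
```

A real morphological analysis would also have to exclude false matches (e.g. monomorphemic words that merely end in the same letter string), which is why simple suffix matching is only an approximation of the classification described above.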

Relevance: 100.00%

Abstract:

The aim of this thesis is to investigate computerized voice assessment methods to classify normal and dysarthric speech signals. In the proposed system, computerized assessment methods equipped with signal processing and artificial intelligence techniques are introduced. Sentences used for the measurement of inter-stress intervals (ISI) were read by each subject, and these sentences were analysed for comparisons between normal and impaired voices. A band-pass filter was used for preprocessing the speech samples. Speech segmentation is performed using signal energy and spectral centroid to separate voiced and unvoiced regions in the speech signal. Acoustic features are extracted from the LPC model and from the speech segments of each audio signal to find anomalies. The speech features assessed for classification are energy entropy, zero-crossing rate (ZCR), spectral centroid, mean fundamental frequency (meanF0), jitter (RAP), jitter (PPQ), and shimmer (APQ). Naïve Bayes (NB) was used for speech classification. For speech tests 1 and 2, classification accuracies of 72% and 80%, respectively, were achieved between healthy and impaired speech samples using NB. For speech test 3, 64% correct classification was achieved using NB. The results point to the possibility of classifying speech impairment in PD patients based on the clinical rating scale.
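The thesis' exact feature pipeline is not given in the abstract; the sketch below shows two of the listed features (zero-crossing rate and spectral centroid) computed per frame and fed to a Gaussian Naïve Bayes classifier. The frame length, hop size, and the commented-out training data are illustrative assumptions, not the thesis' settings:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB


def frame_features(signal, sr, frame_len=1024, hop=512):
    """Per-frame zero-crossing rate and spectral centroid for one signal."""
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        # Zero-crossing rate: fraction of adjacent samples that change sign.
        signs = np.signbit(frame).astype(np.int8)
        zcr = np.mean(np.abs(np.diff(signs)))
        # Spectral centroid: magnitude-weighted mean frequency of the frame.
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        feats.append([zcr, centroid])
    return np.array(feats)


# Illustrative training (placeholder data, not the thesis' corpus):
# X rows are per-recording mean feature vectors, y is 0 = healthy, 1 = dysarthric.
# X = np.vstack([frame_features(s, sr).mean(axis=0) for s in recordings])
# clf = GaussianNB().fit(X, y)
# print(clf.predict(X_test))
```

Naïve Bayes is a reasonable fit for such low-dimensional acoustic summaries because it needs little training data, although the abstract's accuracy figures of course depend on the full feature set and corpus described above.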