971 results for Speech Processing


Relevance: 40.00%

Publisher:

Abstract:

Speech melody, or prosody, subserves linguistic, emotional, and pragmatic functions in speech communication. Prosodic perception is based on the decoding of acoustic cues, with a predominant role of frequency-related information perceived as the speaker's pitch. Evaluation of prosodic meaning is a cognitive function implemented in cortical and subcortical networks that generate continuously updated affective or linguistic speaker impressions. Various brain-imaging methods allow delineation of the neural structures involved in prosody processing. In contrast to functional magnetic resonance imaging techniques, DC (direct current, slow) components of the EEG directly measure cortical activation without temporal delay. Activation patterns obtained with this method are highly task specific and intraindividually reproducible. The studies presented here investigated the topography of prosodic stimulus processing as a function of acoustic stimulus structure and of linguistic or affective task demands, respectively. Data obtained from measuring DC potentials demonstrated that the right hemisphere has a predominant role in processing emotions from the tone of voice, irrespective of emotional valence. However, right-hemisphere involvement is modulated by diverse speech- and language-related conditions that are associated with left-hemisphere participation in prosody processing. The degree of left-hemisphere involvement depends on several factors, such as (i) the articulatory demands on the perceiver of prosody (possibly also the poser), (ii) a relative left-hemisphere specialization in processing temporal cues mediating prosodic meaning, and (iii) the propensity of prosody to act on the segment level in order to modulate word or sentence meaning. The specific role of top-down effects, in terms of either linguistically or affectively oriented attention, on the lateralization of stimulus processing is not clear and requires further investigation.

Relevance: 40.00%

Publisher:

Abstract:

OBJECTIVES: The objectives of the present study were to investigate temporal/spectral sound-feature processing in preschool children (4 to 7 years old) with peripheral hearing loss compared with age-matched controls. The results verified the presence of statistical learning, which was diminished in children with hearing impairment (HI), and elucidated possible perceptual mediators of speech production. DESIGN: Perception and production of the syllables /ba/, /da/, /ta/, and /na/ were recorded in 13 children with normal hearing and 13 children with HI. Perception was assessed physiologically through event-related potentials (ERPs) recorded by EEG in a multifeature mismatch negativity paradigm, and behaviorally through a discrimination task. Temporal and spectral features of the ERPs during speech perception were analyzed, and speech production was quantitatively evaluated using speech motor maximum performance tasks. RESULTS: Proximal to stimulus onset, children with HI displayed a difference in map topography, indicating diminished statistical learning. In later ERP components, children with HI specifically exhibited reduced amplitudes in the N2 and in the early parts of the late discriminative negativity components, which are associated with temporal and spectral control mechanisms. Abnormalities of speech perception were only subtly reflected in speech production: the lone difference found was a mild delay in regulating speech intensity. CONCLUSIONS: In addition to the previously reported deficits in sound-feature discrimination, the present results reflect diminished statistical learning in children with HI, which plays an early and important, but so far neglected, role in phonological processing. Furthermore, the lack of corresponding behavioral abnormalities in speech production implies that impaired perceptual capacities do not necessarily translate into productive deficits.
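The mismatch negativity analysis used in the paradigm above rests on a simple computation: average the EEG epochs of each condition and subtract the standard ERP from the deviant ERP, yielding a difference wave whose negative deflection indexes discrimination. A minimal single-channel NumPy sketch; the array shapes, sampling rate, and synthetic data are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs):
    """Deviant-minus-standard ERP difference wave.
    Both inputs: baseline-corrected epochs, shape (n_trials, n_samples)."""
    erp_standard = standard_epochs.mean(axis=0)
    erp_deviant = deviant_epochs.mean(axis=0)
    return erp_deviant - erp_standard

# Synthetic example: deviants carry an extra negativity around 200 ms
fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.4, 1 / fs)              # one 400 ms epoch
rng = np.random.default_rng(0)
negativity = -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
standard = rng.normal(0.0, 1.0, (100, t.size))          # noise-only trials
deviant = rng.normal(0.0, 1.0, (100, t.size)) + negativity
mmn = mismatch_negativity(standard, deviant)            # peaks near 200 ms
```

In a real multifeature paradigm this difference wave would be computed per electrode and per deviant type before the topographic and amplitude analyses; a single channel suffices to show the computation.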

Relevance: 40.00%

Publisher:

Abstract:

Amyotrophic Lateral Sclerosis (ALS) is a severe disease that dramatically reduces the speech communication skills of patients as the disease progresses. The present study is devoted to defining accurate and objective estimates that characterize the loss of communication skills, to help clinicians and therapists monitor disease progression and decide on rehabilitation interventions. The proposed methodology is based on a perceptual (neuromorphic) definition of speech dynamics, concentrated on vowel sound character and duration. We present the results of a longitudinal study carried out on an ALS patient over one year. The discussion addresses future actions.
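A duration-based characterization of vowel sounds, as described above, presupposes a way to measure how long a sustained vowel lasts in a recording. One common, simple approach is short-time energy thresholding. The sketch below is a hypothetical illustration; the frame length, threshold, and synthetic vowel are assumptions, not the study's method:

```python
import numpy as np

def vowel_duration(signal, fs, frame_ms=10, rel_thresh=0.1):
    """Crude duration estimate: count frames whose short-time energy
    exceeds rel_thresh times the maximum frame energy."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    active = energy > rel_thresh * energy.max()
    return active.sum() * frame / fs

# Synthetic "vowel": a 150 Hz tone lasting from 0.2 s to 0.7 s
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
sig = np.where((t > 0.2) & (t < 0.7), np.sin(2 * np.pi * 150 * t), 0.0)
d = vowel_duration(sig, fs)   # approximately 0.5 s
```

Real clinical recordings would need a more robust voicing detector (noise floor estimation, hysteresis), but the frame-energy idea is the core of most duration estimators.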

Relevance: 40.00%

Publisher:

Abstract:

Thesis (M.S.)--University of Illinois at Urbana-Champaign, 1977.

Relevance: 40.00%

Publisher:

Abstract:

There has been considerable recent research into the connection between Parkinson's disease (PD) and speech impairment. Recently, a wide range of speech signal processing algorithms (dysphonia measures) aiming to predict PD symptom severity using speech signals have been introduced. In this paper, we test how accurately these novel algorithms can be used to discriminate PD subjects from healthy controls. In total, we compute 132 dysphonia measures from sustained vowels. Then, we select four parsimonious subsets of these dysphonia measures using four feature selection algorithms, and map these feature subsets to a binary classification response using two statistical classifiers: random forests and support vector machines. We use an existing database consisting of 263 samples from 43 subjects, and demonstrate that these new dysphonia measures can outperform state-of-the-art results, reaching almost 99% overall classification accuracy using only ten dysphonia features. We find that some of the recently proposed dysphonia measures complement existing algorithms in maximizing the ability of the classifiers to discriminate healthy controls from PD subjects. We see these results as an important step toward noninvasive diagnostic decision support in PD.
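Two of the classic dysphonia measures extracted from sustained vowels are local jitter (cycle-to-cycle period perturbation) and local shimmer (cycle-to-cycle amplitude perturbation). The sketch below illustrates the definitions only; the crude fixed-window peak picking and the synthetic vowel are assumptions, and the 132 measures in the paper use far more sophisticated algorithms:

```python
import numpy as np

def jitter_shimmer(signal, fs, f0_approx):
    """Local jitter  = mean |T_i - T_{i-1}| / mean T  (periods T_i)
    Local shimmer = mean |A_i - A_{i-1}| / mean A  (peak amplitudes A_i)
    Peaks are found by a naive max-per-cycle search, not a real pitch
    tracker, so this is only adequate for clean, near-periodic signals."""
    period = int(fs / f0_approx)
    n_cycles = len(signal) // period
    peaks = np.array([k * period + int(np.argmax(signal[k * period:(k + 1) * period]))
                      for k in range(n_cycles)])
    T = np.diff(peaks) / fs          # successive cycle lengths in seconds
    A = signal[peaks]                # per-cycle peak amplitudes
    jitter = np.mean(np.abs(np.diff(T))) / np.mean(T)
    shimmer = np.mean(np.abs(np.diff(A))) / np.mean(A)
    return jitter, shimmer

# Synthetic sustained vowel: ~120 Hz tone with mild amplitude perturbation
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(1)
vowel = np.sin(2 * np.pi * 120 * t) * (1 + 0.02 * rng.normal(size=t.size))
j, s = jitter_shimmer(vowel, fs, f0_approx=120)
```

Feeding such per-recording features into a feature selector and a random forest or support vector machine, as the paper does, is then a standard supervised-classification step.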

Relevance: 40.00%

Publisher:

Abstract:

Purpose: Both phonological (speech) and auditory (non-speech) stimuli have been shown to predict early reading skills. However, previous studies have failed to control for the level of processing required by the tasks administered across the two types of stimuli. For example, phonological tasks typically tap explicit awareness (e.g., phoneme deletion), while auditory tasks usually measure implicit awareness (e.g., frequency discrimination). Therefore, the stronger predictive power of speech tasks may be due to their higher processing demands rather than the nature of the stimuli. Method: The present study uses novel tasks that control for level of processing (isolation, repetition and deletion) across speech (phonemes and nonwords) and non-speech (tones) stimuli. A total of 800 beginning readers at the onset of literacy tuition (mean age 4 years and 7 months) were assessed on the above tasks, as well as on word reading and letter knowledge, in the first part of a three-time-point longitudinal study. Results: Time 1 results reveal a significantly higher association between letter-sound knowledge and all of the speech tasks compared to the non-speech tasks. Performance was better for phoneme than for tone stimuli, and worse for deletion than for isolation and repetition across all stimuli. Conclusions: The results are consistent with phonological accounts of reading and suggest that the level of processing required by the task is less important than the stimulus type in predicting the earliest stage of reading.

Relevance: 40.00%

Publisher:

Abstract:

It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines the relationships between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4- to 6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.

Relevance: 40.00%

Publisher:

Abstract:

Research on aphasia has struggled to identify apraxia of speech (AoS) as an independent deficit affecting a processing level separate from phonological assembly and motor implementation. This is because AoS is characterized by both phonological and phonetic errors and can therefore be interpreted as a combination of deficits at the phonological and the motoric levels rather than as an independent impairment. We apply novel psycholinguistic analyses to the perceptually phonological errors made by 24 Italian aphasic patients. We show that only patients with a relatively high rate (>10%) of phonetic errors make sound errors that simplify the phonology of the target. Moreover, simplifications are strongly associated with other variables indicative of articulatory difficulties, such as a predominance of errors on consonants rather than vowels, but not with other measures, such as the rate of words reproduced correctly or the rate of lexical errors. These results indicate that sound errors cannot arise at a single phonological level, because they differ across patients. Instead, the different patterns: (1) provide evidence for separate impairments and for the existence of a level of articulatory planning/programming intermediate between phonological selection and motor implementation; (2) validate AoS as an independent impairment at this level, characterized by phonetic errors and phonological simplifications; and (3) support the claim that linguistic principles of complexity have an articulatory basis, since they apply only in patients with associated articulatory difficulties.

Relevance: 40.00%

Publisher:

Abstract:

Older adults frequently report that they can hear what they have been told but cannot understand the meaning. This is particularly true in noisy conditions, where the additional challenge of suppressing irrelevant noise (e.g., a competing talker) adds another layer of difficulty to their speech understanding. Hearing aids improve speech perception in quiet, but their success in noisy environments has been modest, suggesting that peripheral hearing loss may not be the only factor in older adults' perceptual difficulties. Recent animal studies have shown that auditory synapses and cells undergo significant age-related changes that could impact the integrity of temporal processing in the central auditory system. Psychoacoustic studies carried out in humans have also shown that hearing loss can explain the decline in older adults' performance in quiet relative to younger adults, but these psychoacoustic measurements are not accurate in describing auditory deficits in noisy conditions. These results suggest that temporal auditory processing deficits could play an important role in explaining the reduced ability of older adults to process speech in noisy environments. The goals of this dissertation were to understand how age affects neural auditory mechanisms and at which level of the auditory system these changes are particularly relevant for explaining speech-in-noise problems. Specifically, we used non-invasive neuroimaging techniques to tap into the midbrain and the cortex in order to analyze how auditory stimuli are processed in younger (our reference group) and older adults. We also attempt to investigate a possible interaction between processing carried out in the midbrain and in the cortex.