25 results for Speech-Language Pathology
Abstract:
Studies in sensory neuroscience reveal the critical importance of accurate sensory perception for cognitive development. There is considerable debate concerning the possible sensory correlates of phonological processing, the primary cognitive risk factor for developmental dyslexia. Across languages, children with dyslexia have a specific difficulty with the neural representation of the phonological structure of speech. The identification of a robust sensory marker of phonological difficulties would enable early identification of risk for developmental dyslexia and early targeted intervention. Here, we explore whether phonological processing difficulties are associated with difficulties in processing acoustic cues to speech rhythm. Speech rhythm is used across languages by infants to segment the speech stream into words and syllables. Early difficulties in perceiving auditory sensory cues to speech rhythm and prosody could lead developmentally to impairments in phonology. We compared matched samples of children with and without dyslexia, learning three very different spoken and written languages: English, Spanish, and Chinese. The key sensory cue measured was rate of onset of the amplitude envelope (rise time), known to be critical for the rhythmic timing of speech. Despite phonological and orthographic differences, for each language, rise time sensitivity was a significant predictor of phonological awareness, and rise time was the only consistent predictor of reading acquisition. The data support a language-universal theory of the neural basis of developmental dyslexia based on rhythmic perception and syllable segmentation. They also suggest that novel remediation strategies based on rhythm and music may offer benefits for phonological and linguistic development.
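The abstract treats rise time, the rate of onset of the amplitude envelope, as the key sensory cue but does not spell out how it is computed. Below is a minimal sketch of one common operationalisation, assuming a Hilbert envelope smoothed with a low-pass filter and a 10-90% rise criterion; the function names, cutoff frequency, and criterion are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def amplitude_envelope(x, fs, cutoff_hz=10.0):
    """Hilbert envelope, low-pass smoothed to keep only the slow
    amplitude modulation relevant to speech rhythm."""
    env = np.abs(hilbert(x))
    b, a = butter(2, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, env)

def rise_time_10_90(env, fs):
    """10-90% rise time (seconds) of the envelope's first major onset.

    Assumed criterion: time for the envelope to climb from 10% to 90%
    of its peak, a common operationalisation of 'rise time'."""
    peak = env.max()
    i10 = np.argmax(env >= 0.1 * peak)
    i90 = i10 + np.argmax(env[i10:] >= 0.9 * peak)
    return (i90 - i10) / fs

# usage on a synthetic amplitude-modulated tone with an 80 ms linear onset
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 220 * t) * np.minimum(t / 0.08, 1.0)
env = amplitude_envelope(x, fs)
print(f"estimated rise time: {rise_time_10_90(env, fs) * 1000:.1f} ms")
```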
Abstract:
There are multiple reasons to expect that recognising the verbal content of emotional speech will be a difficult problem, and recognition rates reported in the literature are in fact low. Including information about prosody improves recognition rate for emotions simulated by actors, but its relevance to the freer patterns of spontaneous speech is unproven. This paper shows that recognition rate for spontaneous emotionally coloured speech can be improved by using a language model based on increased representation of emotional utterances. The models are derived by adapting an already existing corpus, the British National Corpus (BNC). An emotional lexicon is used to identify emotionally coloured words, and sentences containing these words are recombined with the BNC to form a corpus with a raised proportion of emotional material. Using a language model based on that technique improves recognition rate by about 20%.
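The corpus-adaptation step described above (lexicon lookup, then recombining emotional sentences with the base corpus at a raised proportion) can be sketched in a few lines. The tiny lexicon, toy sentences, oversampling factor, and bigram counts below are stand-ins for the paper's full emotional lexicon, the BNC, and a real language-model toolkit.

```python
from collections import Counter

# Hypothetical stand-in for a full emotion lexicon.
EMOTION_WORDS = {"angry", "happy", "sad", "afraid", "love", "hate"}

def is_emotional(sentence):
    """A sentence is 'emotionally coloured' if it contains a lexicon word."""
    return any(w.lower().strip(".,!?") in EMOTION_WORDS for w in sentence.split())

def raise_emotional_proportion(corpus, factor=5):
    """Recombine emotional sentences with the base corpus, oversampling
    them so each appears `factor` times (the factor is an assumption)."""
    emotional = [s for s in corpus if is_emotional(s)]
    return corpus + emotional * (factor - 1)

def bigram_counts(corpus):
    """Toy bigram counts standing in for full language-model training."""
    counts = Counter()
    for s in corpus:
        toks = ["<s>"] + s.lower().split() + ["</s>"]
        counts.update(zip(toks, toks[1:]))
    return counts

# usage on a miniature corpus
corpus = ["I am so happy today", "the meeting starts at nine",
          "I hate waiting", "turn left at the corner"]
adapted = raise_emotional_proportion(corpus, factor=3)
print(bigram_counts(adapted).most_common(5))
```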
Abstract:
Speech recognition and language analysis of spontaneous speech arising in naturally spoken conversations are becoming the subject of much research. However, there is a shortage of spontaneous speech corpora that are freely available for academics. We therefore undertook the building of a natural conversation speech database, recording over 200 hours of conversations in English by over 600 local university students. With few exceptions, the students used their own cell phones from their own rooms or homes to speak to one another, and they were permitted to speak on any topic they chose. Although they knew that they were being recorded and that they would receive a small payment, their conversations in the corpus are probably very close to being natural and spontaneous. This paper describes a detailed case study of the problems we faced and the methods we used to make the recordings and control the collection of these social science data on a limited budget.
Abstract:
This paper studies single-channel speech separation, assuming unknown, arbitrary temporal dynamics for the speech signals to be separated. A data-driven approach is described, which matches each mixed speech segment against a composite training segment to separate the underlying clean speech segments. To advance the separation accuracy, the new approach seeks and separates the longest mixed speech segments with matching composite training segments. Lengthening the mixed speech segments to match reduces the uncertainty of the constituent training segments, and hence the error of separation. For convenience, we call the new approach Composition of Longest Segments, or CLOSE. The CLOSE method includes a data-driven approach to model long-range temporal dynamics of speech signals, and a statistical approach to identify the longest mixed speech segments with matching composite training segments. Experiments are conducted on the Wall Street Journal database, for separating mixtures of two simultaneous large-vocabulary speech utterances spoken by two different speakers. The results are evaluated using various objective and subjective measures, including the challenge of large-vocabulary continuous speech recognition. It is shown that the new separation approach leads to significant improvement in all these measures.
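A heavily simplified sketch of the CLOSE idea follows: mixtures are approximated as element-wise maxima of log-spectral segments (the log-max approximation), and a brute-force, longest-first search finds the longest composite training segment that matches the mixture. The corpus sizes, match threshold, and exhaustive search are assumptions for illustration; the paper's statistical segment model is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "corpus": clean log-spectral frame sequences for two speakers
# (a real system would draw segments from large training corpora).
corpus_a = [rng.random((8, 4)) for _ in range(5)]  # 5 utterances x 8 frames x 4 bins
corpus_b = [rng.random((8, 4)) for _ in range(5)]

def composite(seg_a, seg_b):
    """Log-max approximation: a mixture's log-spectrum is roughly the
    element-wise maximum of the sources' log-spectra."""
    return np.maximum(seg_a, seg_b)

def best_longest_match(mix, threshold=0.01):
    """Return the longest pair of corpus segments whose composite matches mix.

    Brute-force, longest-first search; the threshold is an assumed match
    criterion standing in for the paper's statistical matching."""
    for length in range(len(mix), 0, -1):          # prefer longest segments
        for ua in corpus_a:
            for ub in corpus_b:
                for i in range(len(ua) - length + 1):
                    for j in range(len(ub) - length + 1):
                        a, b = ua[i:i + length], ub[j:j + length]
                        err = np.mean((composite(a, b) - mix[:length]) ** 2)
                        if err < threshold:
                            return a, b, length
    return None

# usage: build a mixture from known segments, then recover them by matching
true_a, true_b = corpus_a[2][1:6], corpus_b[4][2:7]
mix = composite(true_a, true_b)
match = best_longest_match(mix)
print("matched segment length:", match[2] if match else 0)
```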
Abstract:
Temporal dynamics and speaker characteristics are two important features of speech that distinguish speech from noise. In this paper, we propose a method to maximally extract these two features of speech for speech enhancement. We demonstrate that this can reduce the requirement for prior information about the noise, which can be difficult to estimate for fast-varying noise. Given noisy speech, the new approach estimates clean speech by recognizing long segments of the clean speech as whole units. In the recognition, clean speech sentences, taken from a speech corpus, are used as examples. Matching segments are identified between the noisy sentence and the corpus sentences. The estimate is formed by using the longest matching segments found in the corpus sentences. Longer speech segments as whole units contain more distinct dynamics and richer speaker characteristics, and can be identified more accurately from noise than shorter speech segments. Therefore, estimation based on the longest recognized segments increases the noise immunity and hence the estimation accuracy. The new approach consists of a statistical model to represent up to sentence-long temporal dynamics in the corpus speech, and an algorithm to identify the longest matching segments between the noisy sentence and the corpus sentences. The algorithm is made more robust to noise uncertainty by introducing missing-feature based noise compensation into the corpus sentences. Experiments have been conducted on the TIMIT database for speech enhancement from various types of nonstationary noise including song, music, and crosstalk speech. The new approach has shown improved performance over conventional enhancement algorithms in both objective and subjective evaluations.
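One distinctive component above is the missing-feature noise compensation applied when scoring corpus segments against noisy speech. The sketch below illustrates that idea in isolation: noise-dominated time-frequency cells are masked out of the distance computation. The 3 dB reliability margin, the log-max mixing, and the toy energies are all assumptions.

```python
import numpy as np

def estimate_reliability(noisy_frames, noise_floor):
    """Assumed rule: a time-frequency cell is reliable (speech-dominated)
    when its energy clearly exceeds the estimated noise floor."""
    return noisy_frames > noise_floor + 3.0  # 3 dB-ish margin, an assumption

def masked_distance(noisy_frames, clean_frames, reliable_mask):
    """Score a candidate corpus segment against noisy frames using only
    the reliable cells: the core of missing-feature compensation."""
    diff = (noisy_frames - clean_frames) ** 2
    return diff[reliable_mask].mean()

# usage with toy log-energy frames
rng = np.random.default_rng(1)
clean = rng.uniform(0, 30, size=(10, 8))   # a corpus segment (log energies)
noise = rng.uniform(0, 15, size=(10, 8))   # nonstationary noise
noisy = np.maximum(clean, noise)           # log-max mixing approximation
mask = estimate_reliability(noisy, noise_floor=15.0)
# under the mask, the true segment matches exactly despite heavy corruption
print("masked distance to true segment:", masked_distance(noisy, clean, mask))
```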
Abstract:
Using fMRI, we conducted two types of property-generation task involving language switching with early bilingual speakers of Korean and Chinese. The first is a more conventional task in which a single language (L1 or L2) was used within each trial, but switched randomly from trial to trial. The other is a novel experimental design in which language switching happens within each trial, alternating the direction of the L1/L2 translation required. Our findings support a recently introduced cognitive model, the 'hodological' view of language switching proposed by Moritz-Gasser and Duffau. The nodes of the distributed neural network that this model proposes are consistent with the informative regions that we extracted in this study using both GLM methods and multivariate pattern analyses: the supplementary motor area, caudate nucleus, supramarginal gyrus, and fusiform gyrus, among other cortical areas.
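For readers unfamiliar with multivariate pattern analysis, a minimal sketch of region-level decoding is given below: a linear classifier with cross-validation on synthetic voxel patterns. scikit-learn, the data shapes, and the effect size are assumptions; this shows the generic MVPA recipe, not the study's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for trial-wise activity patterns from one region of
# interest: 80 trials x 50 voxels, labelled by translation direction
# (0 = L1 -> L2, 1 = L2 -> L1).
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :5] += 0.8   # weak multivoxel signal in 5 voxels

# A region counts as "informative" when the decoder beats chance under
# cross-validation (a permutation test would be needed in practice).
clf = SVC(kernel="linear")
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```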
Abstract:
This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
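The abstract does not give the exact form of the modified cosine similarity, so the sketch below shows only the generic shape of such a measure: per-modality length normalisation before concatenation, so that the higher-dimensional or higher-energy modality cannot dominate the score. The weighting scheme is an assumption, not the paper's formula.

```python
import numpy as np

def bimodal_similarity(speech_a, face_a, speech_b, face_b, w_speech=0.5):
    """Cosine similarity over per-modality-normalised, weighted features.

    Normalising each modality to unit length before concatenation keeps
    the larger or more energetic modality from dominating; the weighting
    is an assumed design choice for illustration."""
    def unit(v):
        return v / np.linalg.norm(v)
    a = np.concatenate([w_speech * unit(speech_a), (1 - w_speech) * unit(face_a)])
    b = np.concatenate([w_speech * unit(speech_b), (1 - w_speech) * unit(face_b)])
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# usage: enrolment vs test features of very different sizes
rng = np.random.default_rng(2)
enrol_speech, enrol_face = rng.normal(size=600), rng.normal(size=64)
test_speech = enrol_speech + 0.3 * rng.normal(size=600)  # same person, corrupted
test_face = enrol_face + 0.3 * rng.normal(size=64)
score = bimodal_similarity(enrol_speech, enrol_face, test_speech, test_face)
print(f"match score: {score:.3f}")
```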
Abstract:
Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of pauses) was manipulated. In this task, participants grouped sequences of auditory chimeras formed from musical instruments. These chimeras mimic the complexity of speech without being speech. We found that, while showing the same overall grouping preferences, the German speakers showed stronger biases than the French speakers in grouping complex sequences. Sound variability reduced all participants' biases, resulting in the French group showing no grouping preference for the most variable sequences, though this reduction was attenuated by musical experience. In sum, this study demonstrates that linguistic experience, musical experience, and complexity affect rhythmic grouping of non-linguistic sounds and suggests that experience with acoustic cues in a meaningful context (language or music) is necessary for developing a robust grouping preference that survives acoustic variability.
Abstract:
Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.
Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open source language engineering framework GATE and aimed at predicting chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested on sets of 635 and 163 manually classified or annotated reports, respectively, from the Northern Ireland Cancer Registry.
Results: The best result of 99.4% accuracy – which included only one semi-structured report predicted as unstructured – was produced by the layout classifier with the k-nearest-neighbour algorithm, using the binary term occurrence word vector type with stopword filter and pruning (a simplified sketch of this configuration follows the abstract). For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports, precision ranged from 0.97 to 0.94 and recall from 0.92 to 0.83, while for unstructured reports precision ranged from 0.91 to 0.64 and recall from 0.68 to 0.41. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones.
Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
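The layout classifier described in the results (k-nearest neighbour over binary term occurrence vectors with a stopword filter) can be approximated outside RapidMiner. The sketch below uses scikit-learn as a stand-in (an assumption; the authors used RapidMiner), with hypothetical miniature reports, and omits the pruning step.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Tiny hypothetical reports standing in for the 635 annotated ones.
reports = [
    "Tumour size: 22 mm. ER status: positive. Nodes positive: 2.",
    "Specimen shows invasive carcinoma, margins difficult to assess ...",
    "Morphology: ductal. PR status: negative. Nodes positive: 0.",
    "Free-text narrative describing the specimen without headings ...",
]
layouts = ["semi-structured", "unstructured", "semi-structured", "unstructured"]

# Binary term occurrence with an English stopword filter, then k-NN,
# mirroring the configuration named in the abstract (pruning omitted).
model = make_pipeline(
    CountVectorizer(binary=True, stop_words="english"),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(reports, layouts)
print(model.predict(["ER status: positive. Tumour size: 15 mm."]))
```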