166 results for Speech disorders
Abstract:
Investigations of the factor structure of the Alcohol Use Disorders Identification Test (AUDIT) have produced conflicting results. The current study assessed the factor structure of the AUDIT for a group of Mentally Disordered Offenders (MDOs) and examined the pattern of scoring in specific subgroups. The sample comprised 2005 MDOs who completed a battery of tests including the AUDIT. Confirmatory factor analyses revealed that a two-factor solution – alcohol consumption and alcohol-related consequences – provided the best data fit for AUDIT scores. A three-factor solution provided an equally good fit, but the second and third factors were highly correlated and a measure of parsimony also favoured the two-factor solution. This study provides useful information on the factor structure of the AUDIT amongst a large MDO population, while also highlighting the difficulties associated with the presence of people with mental health problems in the criminal justice system.
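The two-factor structure reported here (items 1–3 loading on a consumption factor, items 4–10 on a consequences factor) can be expressed directly as a confirmatory factor model. The sketch below is illustrative only and is not the study's analysis: it assumes item-level data in a hypothetical CSV file `audit_items.csv` with columns `audit1`–`audit10`, and it uses the Python `semopy` package rather than whatever software the authors used.

```python
# Confirmatory factor analysis sketch for a two-factor AUDIT solution.
# Assumes a pandas DataFrame with one column per AUDIT item
# (hypothetical names audit1..audit10) and the semopy package.
import pandas as pd
import semopy

df = pd.read_csv("audit_items.csv")  # hypothetical item-level data file

# Items 1-3 load on consumption, items 4-10 on alcohol-related consequences.
two_factor = """
consumption  =~ audit1 + audit2 + audit3
consequences =~ audit4 + audit5 + audit6 + audit7 + audit8 + audit9 + audit10
"""

model = semopy.Model(two_factor)
model.fit(df)

print(model.inspect())           # factor loadings and factor covariance
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, AIC/BIC)
```

Fitting an analogous three-factor specification and comparing the fit statistics is how parsimony-adjusted indices such as AIC/BIC would favour the simpler two-factor solution described in the abstract.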
Abstract:
The issue of inherited disorders in pedigree dogs is not a recent phenomenon, and reports of suspected genetic defects associated with breeding practices date back to Charles Darwin's time. In recent years, much information on the array of inherited defects has been assimilated and the true extent of the problem has come to light. Historically, the direction of research funding in the field of canine genetic disease has been largely influenced by the potential transferability of findings to human medicine, economic benefit, and the importance of dogs for working purposes. More recently, the argument for a more canine welfare-orientated approach has been made, targeting research efforts at alleviating the greatest suffering in the greatest number of animals.
Abstract:
There are multiple reasons to expect that recognising the verbal content of emotional speech will be a difficult problem, and recognition rates reported in the literature are in fact low. Including information about prosody improves recognition rate for emotions simulated by actors, but its relevance to the freer patterns of spontaneous speech is unproven. This paper shows that recognition rate for spontaneous emotionally coloured speech can be improved by using a language model based on increased representation of emotional utterances. The models are derived by adapting an existing corpus, the British National Corpus (BNC). An emotional lexicon is used to identify emotionally coloured words, and sentences containing these words are recombined with the BNC to form a corpus with a raised proportion of emotional material. Using a language model based on this technique improves recognition rate by about 20%.
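The adaptation step described here, raising the proportion of emotionally coloured sentences before language-model training, can be sketched roughly as follows. This is a minimal illustration and not the authors' pipeline: the lexicon contents, file names, and oversampling factor are all hypothetical placeholders.

```python
# Sketch of corpus adaptation: oversample sentences containing emotional
# lexicon words so the training corpus has a raised proportion of emotional
# material. Lexicon contents, file names and the boost factor are hypothetical.

EMOTION_LEXICON = {"angry", "happy", "sad", "afraid", "love", "hate"}  # toy lexicon
BOOST = 5  # extra copies of each emotional sentence to add

def is_emotional(sentence: str) -> bool:
    """A sentence counts as 'emotionally coloured' if it contains a lexicon word."""
    tokens = sentence.lower().split()
    return any(tok.strip(".,!?") in EMOTION_LEXICON for tok in tokens)

with open("bnc_sentences.txt", encoding="utf-8") as f:  # one sentence per line
    bnc = [line.strip() for line in f if line.strip()]

emotional = [s for s in bnc if is_emotional(s)]

# Recombine: the full corpus plus extra copies of the emotional subset.
adapted_corpus = bnc + emotional * BOOST

with open("adapted_corpus.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(adapted_corpus))

# A language model (e.g. an n-gram model) trained on adapted_corpus.txt would
# then see emotional word sequences more often than one trained on the raw BNC.
```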
Abstract:
Speech recognition and language analysis of spontaneous speech arising in naturally spoken conversations are becoming the subject of much research. However, there is a shortage of spontaneous speech corpora that are freely available for academics. We therefore undertook the building of a natural conversation speech database, recording over 200 hours of conversations in English by over 600 local university students. With few exceptions, the students used their own cell phones from their own rooms or homes to speak to one another, and they were permitted to speak on any topic they chose. Although they knew that they were being recorded and that they would receive a small payment, their conversations in the corpus are probably very close to being natural and spontaneous. This paper describes a detailed case study of the problems we faced and the methods we used to make the recordings and control the collection of these social science data on a limited budget.
Abstract:
This paper studies single-channel speech separation, assuming unknown, arbitrary temporal dynamics for the speech signals to be separated. A data-driven approach is described, which matches each mixed speech segment against a composite training segment to separate the underlying clean speech segments. To advance the separation accuracy, the new approach seeks and separates the longest mixed speech segments with matching composite training segments. Lengthening the mixed speech segments to match reduces the uncertainty of the constituent training segments, and hence the error of separation. For convenience, we call the new approach Composition of Longest Segments, or CLOSE. The CLOSE method includes a data-driven approach to model long-range temporal dynamics of speech signals, and a statistical approach to identify the longest mixed speech segments with matching composite training segments. Experiments are conducted on the Wall Street Journal database, for separating mixtures of two simultaneous large-vocabulary speech utterances spoken by two different speakers. The results are evaluated using various objective and subjective measures, including the challenge of large-vocabulary continuous speech recognition. It is shown that the new separation approach leads to significant improvement in all these measures.
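The core idea of matching the longest mixed segment against a composite of training segments can be illustrated with a toy sketch. The code below is an assumption-laden illustration of that matching criterion, not the CLOSE algorithm itself: it works on small random "spectrogram" arrays, uses a brute-force search over training-segment start positions, and the maximum length and error tolerance are invented values.

```python
# Toy sketch of the "longest matching composite segment" idea: for a mixed
# magnitude-spectrogram segment, search each speaker's training frames for a
# pair of equally long segments whose sum best explains the mixture, keeping
# the longest length whose match error stays acceptable.
import numpy as np

def best_composite(mix_seg, train_a, train_b):
    """Return (start_a, start_b, error) minimising ||mix_seg - (a_seg + b_seg)||^2."""
    L = mix_seg.shape[0]
    best = (0, 0, np.inf)
    for i in range(train_a.shape[0] - L + 1):
        a = train_a[i:i + L]
        for j in range(train_b.shape[0] - L + 1):
            b = train_b[j:j + L]
            err = np.sum((mix_seg - (a + b)) ** 2)
            if err < best[2]:
                best = (i, j, err)
    return best

def separate_at(mix, pos, train_a, train_b, max_len=8, tol=1.0):
    """Grow the segment at `pos` while a composite training match stays good enough."""
    chosen = None
    for L in range(1, max_len + 1):
        if pos + L > mix.shape[0]:
            break
        i, j, err = best_composite(mix[pos:pos + L], train_a, train_b)
        if err / L > tol:   # per-frame error too high: stop lengthening
            break
        chosen = (L, train_a[i:i + L], train_b[j:j + L])
    return chosen  # (segment length, estimate for speaker A, estimate for speaker B)

# Tiny synthetic check: 10-dimensional "spectral" frames, with the mixture built
# directly from known training segments so an exact composite match exists.
rng = np.random.default_rng(0)
train_a = rng.random((50, 10))
train_b = rng.random((50, 10))
mix = train_a[5:15] + train_b[20:30]
length, est_a, est_b = separate_at(mix, 0, train_a, train_b)
print(length, np.allclose(est_a, train_a[5:5 + length]))
```

In the paper's setting the segments would be drawn from large speaker-specific training sets and the search would need a statistical model rather than exhaustive enumeration; the sketch only shows why lengthening the matched segment constrains the choice of constituent training segments.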