936 results for Humoristic speech
Abstract:
A modified comb filtering technique is proposed that can be used to reduce the framing noise generated when speech signals are transform-coded or vector-quantized. Applying this filter to 9.6 kbit/s speech in a vector transform coder has been found to improve the perceptual quality of the coded speech.
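As a rough illustration only (the paper's modified filter is not specified here), the sketch below applies a generic first-order comb filter to one decoded speech frame; the function name, pitch period, and mixing weight alpha are all illustrative assumptions.

```python
import numpy as np

def comb_filter(x, pitch_period, alpha=0.5):
    """Apply a simple first-order comb filter to a speech frame.

    y[n] = (1 - alpha) * x[n] + alpha * x[n - pitch_period]

    pitch_period and alpha are illustrative parameters; the paper's
    modified filter is not reproduced here.
    """
    y = np.copy(x).astype(float)
    y[pitch_period:] = (1 - alpha) * x[pitch_period:] + alpha * x[:-pitch_period]
    return y

# Usage: smooth one decoded frame with an assumed pitch period of 80 samples.
frame = np.random.randn(240)          # stand-in for one decoded speech frame
smoothed = comb_filter(frame, pitch_period=80, alpha=0.4)
```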
Abstract:
Research has been undertaken to investigate the use of artificial neural network (ANN) techniques to improve the performance of a low bit-rate vector transform coder. Considerable improvements in the perceptual quality of the coded speech have been obtained. New ANN-based methods for vector quantiser (VQ) design and for the adaptive updating of VQ codebooks are introduced for use in speech coding applications.
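For orientation, the sketch below shows a generic competitive-learning (Kohonen-style) codebook update, one common ANN-inspired way to adapt a VQ codebook online; it is not the specific method introduced in the paper, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def update_codebook(codebook, vector, lr=0.05):
    """One winner-take-all competitive-learning step: move the nearest
    codeword a small step toward the input vector.

    A generic ANN-inspired VQ update, not the paper's algorithm.
    """
    distances = np.linalg.norm(codebook - vector, axis=1)
    winner = int(np.argmin(distances))
    codebook[winner] += lr * (vector - codebook[winner])
    return winner

# Usage: adapt a 64-entry codebook of 10-dimensional spectral vectors.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 10))
for _ in range(1000):
    update_codebook(codebook, rng.standard_normal(10))
```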
Abstract:
There is considerable interest in creating embedded speech recognition hardware using the weighted finite state transducer (WFST) technique, but performance and memory usage remain challenging. Two system optimization techniques are presented to address this: the first improves token propagation by removing the WFST epsilon input arcs, and the second is a one-pass, adaptive pruning algorithm that dramatically reduces the number of active nodes to be computed. Memory and bandwidth results are reported for a 5,000-word vocabulary, showing better practical performance than a conventional WFST; the adaptive pruning algorithm then reduces the active nodes from 30,000 to 4,000 with only a 2 percent loss in speech recognition accuracy. Together, these optimizations lead to a simpler design with deterministic performance.
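As a hedged sketch of the pruning idea (not the paper's one-pass adaptive rule), the snippet below combines a log-score beam with a cap on the number of active tokens; the data layout, beam width, and cap are assumptions made for illustration.

```python
import heapq

def adaptive_prune(active_tokens, max_active=4000, beam=10.0):
    """Prune the active token set after each frame.

    active_tokens: dict mapping WFST state id -> best log-score at that state.
    Keep only tokens within `beam` of the best score, then cap the count at
    `max_active` (histogram-style pruning). Parameters are illustrative.
    """
    if not active_tokens:
        return active_tokens
    best = max(active_tokens.values())
    survivors = {s: v for s, v in active_tokens.items() if v >= best - beam}
    if len(survivors) > max_active:
        kept = heapq.nlargest(max_active, survivors.items(), key=lambda kv: kv[1])
        survivors = dict(kept)
    return survivors
```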
Abstract:
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subject to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to either or both of the video and audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach, and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and to the naturally fluctuating relative reliability of the modalities, even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
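The sketch below illustrates generic frame-level dynamic stream weighting in the spirit of choosing weights that maximize a combined posterior; it is not the exact MWSP formulation, and the weight grid and function names are illustrative assumptions.

```python
import numpy as np

def combine_streams(log_post_audio, log_post_video, weights=np.linspace(0, 1, 11)):
    """Frame-level dynamic stream weighting (generic sketch, not the exact
    MWSP model).

    log_post_audio, log_post_video: per-class log posteriors for one frame.
    For each candidate weight, form the weighted combination and keep the
    weight whose best class posterior is maximal.
    """
    best_weight, best_score, best_combined = None, -np.inf, None
    for w in weights:
        combined = w * log_post_audio + (1.0 - w) * log_post_video
        score = combined.max()
        if score > best_score:
            best_weight, best_score, best_combined = w, score, combined
    return best_weight, best_combined
```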
Abstract:
Temporal dynamics and speaker characteristics are two important features of speech that distinguish speech from noise. In this paper, we propose a method to maximally extract these two features of speech for speech enhancement. We demonstrate that this can reduce the requirement for prior information about the noise, which can be difficult to estimate for fast-varying noise. Given noisy speech, the new approach estimates clean speech by recognizing long segments of the clean speech as whole units. In the recognition, clean speech sentences, taken from a speech corpus, are used as examples. Matching segments are identified between the noisy sentence and the corpus sentences. The estimate is formed by using the longest matching segments found in the corpus sentences. Longer speech segments as whole units contain more distinct dynamics and richer speaker characteristics, and can be identified more accurately from noise than shorter speech segments. Therefore, estimation based on the longest recognized segments increases the noise immunity and hence the estimation accuracy. The new approach consists of a statistical model to represent up to sentence-long temporal dynamics in the corpus speech, and an algorithm to identify the longest matching segments between the noisy sentence and the corpus sentences. The algorithm is made more robust to noise uncertainty by introducing missing-feature based noise compensation into the corpus sentences. Experiments have been conducted on the TIMIT database for speech enhancement from various types of nonstationary noise including song, music, and crosstalk speech. The new approach has shown improved performance over conventional enhancement algorithms in both objective and subjective evaluations.
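As a simplified stand-in for the paper's statistical segment matching, the sketch below finds the longest contiguous matching segment between two discrete symbol sequences (e.g., VQ-quantized frame labels) with a textbook longest-common-substring dynamic program; the symbolic representation is an assumption made purely for illustration.

```python
def longest_matching_segment(noisy, corpus_sentence):
    """Longest contiguous matching segment between two symbol sequences.

    A classic longest-common-substring dynamic program, used here only to
    illustrate the idea of preferring longer matches; the paper's method is
    statistical and operates on continuous speech features.

    Returns (start_in_noisy, start_in_corpus, length).
    """
    n, m = len(noisy), len(corpus_sentence)
    prev = [0] * (m + 1)
    best_len, best_i, best_j = 0, -1, -1
    for i in range(1, n + 1):
        cur = [0] * (m + 1)
        for j in range(1, m + 1):
            if noisy[i - 1] == corpus_sentence[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_i, best_j = cur[j], i - cur[j], j - cur[j]
        prev = cur
    return best_i, best_j, best_len
```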
Abstract:
This paper considers the separation and recognition of overlapped speech sentences assuming single-channel observation. A system based on a combination of several different techniques is proposed. The system uses a missing-feature approach for improving crosstalk/noise robustness, a Wiener filter for speech enhancement, hidden Markov models for speech reconstruction, and speaker-dependent/-independent modeling for speaker and speech recognition. We develop the system on the Speech Separation Challenge database, which involves separating and recognizing two mixed sentences without assuming advance knowledge of the speakers' identities or of the signal-to-noise ratio. The paper is an extended version of a previous conference paper submitted for the challenge.
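To illustrate just the enhancement stage, the sketch below computes a textbook per-frequency Wiener gain for one frame; the missing-feature masks and HMM-based reconstruction used by the full system are not shown, and the flooring constant and function name are assumptions.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=0.1):
    """Per-frequency Wiener gain for one frame:
        G(f) = max(PSD_speech / PSD_noisy, floor)
    with PSD_speech estimated by spectral subtraction,
    max(PSD_noisy - PSD_noise, 0). A textbook rule shown only to
    illustrate the enhancement stage of the system.
    """
    speech_psd = np.maximum(noisy_psd - noise_psd, 0.0)
    gain = speech_psd / np.maximum(noisy_psd, 1e-12)
    return np.maximum(gain, floor)

# Usage: multiply the gain into the noisy STFT frame, e.g.
# enhanced_frame = wiener_gain(np.abs(Y)**2, noise_psd_estimate) * Y
```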
Abstract:
Three experiments measured the effects of age on informational masking of speech by competing speech. The experiments were designed to minimize the energetic contributions of the competing speech so that informational masking could be measured without large corrections for energetic masking. Experiment 1 used a "speech-in-speech-in-noise" design, in which the competing speech was presented in noise at a signal-to-noise ratio (SNR) of -4 dB. This ensured that the noise primarily contributed the energetic masking while the competing speech contributed the informational masking. Equal amounts of informational masking (3 dB) were observed for young and elderly listeners, although less was found for hearing-impaired listeners. Experiment 2 tested a range of SNRs in this design and showed that informational masking increased with SNR up to about -4 dB but decreased thereafter. Experiment 3 further reduced the energetic contribution of the competing speech by filtering it into frequency bands different from those of the target speech. The elderly listeners again showed approximately the same amount of informational masking (4-5 dB), although some elderly listeners had particular difficulty understanding these stimuli in any condition. On the whole, these results suggest that young and elderly listeners were equally susceptible to informational masking. © 2009 Acoustical Society of America.
Abstract:
Many of the items in the “Speech, Spatial, and Qualities of Hearing” scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85–99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflect informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.
Abstract:
In this paper, I critically assess John Rawls' repeated claim that the duty of civility is only a moral duty and should not be enforced by law. In the first part of the paper, I examine and reject the view that Rawls' position may be due to the practical difficulties that the legal enforcement of the duty of civility might entail. I thus claim that Rawls' position must be driven by deeper normative reasons grounded in a conception of free speech. In the second part of the paper, I therefore examine various arguments for free speech and critically assess whether they are consistent with Rawls' political liberalism. I first focus on the arguments from truth and self-fulfilment. Both arguments, I argue, rely on comprehensive doctrines and therefore cannot provide a freestanding political justification for free speech. Freedom of speech, I claim, can be justified instead on the basis of Rawls' political conception of the person and of the two moral powers. However, Rawls' wide view of public reason already allows scope for the kind of free speech necessary for the exercise of the two moral powers and therefore cannot explain Rawls' opposition to the legal enforcement of the duty of civility. Such opposition, I claim, can only be explained on the basis of a defence of unconstrained freedom of speech grounded in the ideas of democracy and political legitimacy. Yet, I conclude, while public reason and the duty of civility are essential to political liberalism, unconstrained freedom of speech is not. Rawls and political liberals could therefore renounce unconstrained freedom of speech, and endorse the legal enforcement of the duty of civility, while remaining faithful to political liberalism.
Abstract:
This article examines what is wrong with some expressive acts, ‘insults’. Their putative wrongfulness is distinguished from the causing of indirect harms, aggregated harms, contextual harms, and damaging misrepresentations. The article clarifies what insults are, making use of work by Neu and Austin, and argues that their wrongfulness cannot lie in the hurt that is caused to those at whom such acts are directed. Rather, it must lie in what they seek to do, namely to denigrate the other. The causing of offence is at most evidence that an insult has been communicated; it is not an independent ground for proscription or constraint. The victim of an insult may know that she has been insulted without accepting or agreeing with the insult, and thus without submitting to the insulter. Hence insults need not, as Waldron argues they do, occasion dignitary harms. They do not of themselves subvert their victims' equal moral status. The claim that hateful speech endorses inequality should not be conflated with the claim that such speech directly subverts equality.
Thus, ‘wounding words’ should not unduly trouble the liberal defender of free speech either on the grounds of preventing offence or on those of avoiding dignitary harms.
Abstract:
The comparator account holds that processes of motor prediction contribute to the sense of agency by attenuating incoming sensory information and that disruptions to this process contribute to misattributions of agency in schizophrenia. Over the last 25 years this simple and powerful model has gained widespread support not only as it relates to bodily actions but also as an account of misattributions of agency for inner speech, potentially explaining the etiology of auditory verbal hallucination (AVH). In this paper we provide a detailed analysis of the traditional comparator account for inner speech, pointing out serious problems with the specification of inner speech on which it is based and highlighting inconsistencies in the interpretation of the electrophysiological evidence commonly cited in its favor. In light of these analyses we propose a new comparator account of misattributed inner speech. The new account follows leading models of motor imagery in proposing that inner speech is not attenuated by motor prediction, but rather derived directly from it. We describe how failures of motor prediction would therefore directly affect the phenomenology of inner speech and trigger a mismatch in the comparison between motor prediction and motor intention, contributing to abnormal feelings of agency. We argue that the new account fits with the emerging phenomenological evidence that AVHs are both distinct from ordinary inner speech and heterogeneous. Finally, we explore the possibility that the new comparator account may extend to explain disruptions across a range of imagistic modalities, and outline avenues for future research.