93 results for Speech recognition


Relevance:

20.00%

Publisher:

Abstract:

This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subject to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to the video and/or audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance in comparison with another well-known dynamic stream weighting approach, and also with any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and to the naturally fluctuating relative reliability of the modalities, even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
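A minimal sketch of the frame-by-frame stream-weighting idea, based on my own reading of the abstract rather than the exact MWSP formulation: for each frame, a weight is picked from a candidate grid so as to maximise the posterior of the best-scoring state under a log-linear audio-visual combination, so no noise measurement of either stream is required.

```python
import numpy as np

def combine_stream_posteriors(audio_logp, video_logp,
                              weights=np.linspace(0.0, 1.0, 11)):
    """Hypothetical stand-in for MWSP-style dynamic stream weighting.

    audio_logp, video_logp: arrays of shape (T, S) holding per-frame,
    per-state log-likelihoods from the audio and video models.
    Returns combined log-scores of shape (T, S) and the chosen weights (T,).
    """
    T, S = audio_logp.shape
    combined = np.empty((T, S))
    chosen = np.empty(T)
    for t in range(T):
        best_score, best_w = -np.inf, 0.5
        for w in weights:
            # weighted log-linear combination of the two streams
            score = w * audio_logp[t] + (1.0 - w) * video_logp[t]
            # normalise to a state posterior before comparing weights
            post = score - np.logaddexp.reduce(score)
            if post.max() > best_score:
                best_score, best_w = post.max(), w
        chosen[t] = best_w
        combined[t] = best_w * audio_logp[t] + (1.0 - best_w) * video_logp[t]
    return combined, chosen
```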

Relevance:

20.00%

Publisher:

Abstract:

Temporal dynamics and speaker characteristics are two important features of speech that distinguish speech from noise. In this paper, we propose a method to maximally extract these two features of speech for speech enhancement. We demonstrate that this can reduce the requirement for prior information about the noise, which can be difficult to estimate for fast-varying noise. Given noisy speech, the new approach estimates clean speech by recognizing long segments of the clean speech as whole units. In the recognition, clean speech sentences, taken from a speech corpus, are used as examples. Matching segments are identified between the noisy sentence and the corpus sentences. The estimate is formed by using the longest matching segments found in the corpus sentences. Longer speech segments as whole units contain more distinct dynamics and richer speaker characteristics, and can be identified more accurately from noise than shorter speech segments. Therefore, estimation based on the longest recognized segments increases the noise immunity and hence the estimation accuracy. The new approach consists of a statistical model to represent up to sentence-long temporal dynamics in the corpus speech, and an algorithm to identify the longest matching segments between the noisy sentence and the corpus sentences. The algorithm is made more robust to noise uncertainty by introducing missing-feature based noise compensation into the corpus sentences. Experiments have been conducted on the TIMIT database for speech enhancement from various types of nonstationary noise including song, music, and crosstalk speech. The new approach has shown improved performance over conventional enhancement algorithms in both objective and subjective evaluations.
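A toy sketch of the longest-matching-segment idea under simplifying assumptions: frames are reduced to discrete labels (e.g., vector-quantised spectra) and matched exactly, whereas the paper uses a statistical model with missing-feature noise compensation. The search itself is then a classic longest-common-substring dynamic programme.

```python
import numpy as np

def longest_matching_segment(noisy_labels, corpus_labels):
    """Longest common contiguous segment between two label sequences.
    Returns (length, start_in_noisy, start_in_corpus)."""
    n, m = len(noisy_labels), len(corpus_labels)
    best = (0, 0, 0)
    prev = np.zeros(m + 1, dtype=int)
    for i in range(1, n + 1):
        cur = np.zeros(m + 1, dtype=int)
        for j in range(1, m + 1):
            if noisy_labels[i - 1] == corpus_labels[j - 1]:
                # extend the match ending at the previous frame pair
                cur[j] = prev[j - 1] + 1
                if cur[j] > best[0]:
                    best = (int(cur[j]), i - int(cur[j]), j - int(cur[j]))
        prev = cur
    return best
```

In the paper the matching is probabilistic and noise-compensated; here exact equality of quantised labels merely stands in for that step.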

Relevance:

20.00%

Publisher:

Abstract:

This paper considers the separation and recognition of overlapped speech sentences assuming single-channel observation. A system based on a combination of several different techniques is proposed. The system uses a missing-feature approach for improving crosstalk/noise robustness, a Wiener filter for speech enhancement, hidden Markov models for speech reconstruction, and speaker-dependent/-independent modeling for speaker and speech recognition. We develop the system on the Speech Separation Challenge database, involving the task of separating and recognizing two mixed sentences without assuming advance knowledge of either the identity of the speakers or the signal-to-noise ratio. The paper is an extended version of a previous conference paper submitted for the challenge.
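The Wiener-filter component named above uses the standard gain. A minimal STFT-domain sketch, assuming a noise power spectrum estimated elsewhere (the paper's other components are not shown):

```python
import numpy as np

def wiener_gain(noisy_power, noise_power, floor=1e-3):
    """Standard Wiener gain G = SNR / (1 + SNR), with the a-priori SNR
    crudely approximated by max(noisy - noise, 0) / noise."""
    snr = np.maximum(noisy_power - noise_power, 0.0) / np.maximum(noise_power, 1e-12)
    return np.maximum(snr / (1.0 + snr), floor)

def enhance_frame(noisy_spectrum, noise_power):
    """Apply the gain to one complex STFT frame."""
    gain = wiener_gain(np.abs(noisy_spectrum) ** 2, noise_power)
    return gain * noisy_spectrum
```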

Relevance:

20.00%

Publisher:

Abstract:

Three experiments measured the effects of age on informational masking of speech by competing speech. The experiments were designed to minimize the energetic contributions of the competing speech so that informational masking could be measured without large corrections for energetic masking. Experiment 1 used a "speech-in-speech-in-noise" design, in which the competing speech was presented in noise at a signal-to-noise ratio (SNR) of -4 dB. This ensured that the noise contributed primarily the energetic masking while the competing speech contributed the informational masking. Equal amounts of informational masking (3 dB) were observed for young and elderly listeners, although less was found for hearing-impaired listeners. Experiment 2 tested a range of SNRs in this design and showed that informational masking increased with SNR up to an SNR of about -4 dB, but decreased thereafter. Experiment 3 further reduced the energetic contribution of the competing speech by filtering it into frequency bands different from those of the target speech. The elderly listeners again showed approximately the same amount of informational masking as the young listeners (4-5 dB), although some elderly listeners had particular difficulty understanding these stimuli in any condition. On the whole, these results suggest that young and elderly listeners were equally susceptible to informational masking. © 2009 Acoustical Society of America.

Relevance:

20.00%

Publisher:

Abstract:

Many of the items in the "Speech, Spatial, and Qualities of Hearing" scale questionnaire [S. Gatehouse and W. Noble, Int. J. Audiol. 43, 85–99 (2004)] are concerned with speech understanding in a variety of backgrounds, both speech and nonspeech. To study whether these self-report data reflect informational masking, previously collected data on 414 people were analyzed. The lowest scores (greatest difficulties) were found for the two items in which there were two speech targets, with successively higher scores for competing speech (six items), energetic masking (one item), and no masking (three items). The results suggest significant masking by competing speech in everyday listening situations.

Relevance:

20.00%

Publisher:

Abstract:

In this paper, I critically assess John Rawls' repeated claim that the duty of civility is only a moral duty and should not be enforced by law. In the first part of the paper, I examine and reject the view that Rawls' position may be due to the practical difficulties that the legal enforcement of the duty of civility might entail. I thus claim that Rawls' position must be driven by deeper normative reasons grounded in a conception of free speech. In the second part of the paper, I therefore examine various arguments for free speech and critically assess whether they are consistent with Rawls' political liberalism. I first focus on the arguments from truth and self-fulfilment. Both arguments, I argue, rely on comprehensive doctrines and therefore cannot provide a freestanding political justification for free speech. Freedom of speech, I claim, can be justified instead on the basis of Rawls' political conception of the person and of the two moral powers. However, Rawls' wide view of public reason already allows scope for the kind of free speech necessary for the exercise of the two moral powers and therefore cannot explain Rawls' opposition to the legal enforcement of the duty of civility. Such opposition, I claim, can only be explained on the basis of a defence of unconstrained freedom of speech grounded in the ideas of democracy and political legitimacy. Yet, I conclude, while public reason and the duty of civility are essential to political liberalism, unconstrained freedom of speech is not. Rawls and political liberals could therefore renounce unconstrained freedom of speech, and endorse the legal enforcement of the duty of civility, while remaining faithful to political liberalism.

Relevance:

20.00%

Publisher:

Abstract:

This article examines what is wrong with some expressive acts, 'insults'. Their putative wrongfulness is distinguished from the causing of indirect harms, aggregated harms, contextual harms, and damaging misrepresentations. The article clarifies what insults are, making use of work by Neu and Austin, and argues that their wrongfulness cannot lie in the hurt that is caused to those at whom such acts are directed. Rather, it must lie in what they seek to do, namely to denigrate the other. The causing of offence is at most evidence that an insult has been communicated; it is not an independent ground for proscription or constraint. The victim of an insult may know that she has been insulted yet not accept or agree with the insult, and so need not thereby submit to the insulter. Hence insults need not, as Waldron argues they do, occasion dignitary harms. They do not of themselves subvert their victims' equal moral status. The claim that hateful speech endorses inequality should not be conflated with the claim that such speech directly subverts equality.

Thus, ‘wounding words’ should not unduly trouble the liberal defender of free speech either on the grounds of preventing offence or on those of avoiding dignitary harms.

Relevance:

20.00%

Publisher:

Abstract:

The comparator account holds that processes of motor prediction contribute to the sense of agency by attenuating incoming sensory information and that disruptions to this process contribute to misattributions of agency in schizophrenia. Over the last 25 years this simple and powerful model has gained widespread support not only as it relates to bodily actions but also as an account of misattributions of agency for inner speech, potentially explaining the etiology of auditory verbal hallucination (AVH). In this paper we provide a detailed analysis of the traditional comparator account for inner speech, pointing out serious problems with the specification of inner speech on which it is based and highlighting inconsistencies in the interpretation of the electrophysiological evidence commonly cited in its favor. In light of these analyses we propose a new comparator account of misattributed inner speech. The new account follows leading models of motor imagery in proposing that inner speech is not attenuated by motor prediction, but rather derived directly from it. We describe how failures of motor prediction would therefore directly affect the phenomenology of inner speech and trigger a mismatch in the comparison between motor prediction and motor intention, contributing to abnormal feelings of agency. We argue that the new account fits with the emerging phenomenological evidence that AVHs are both distinct from ordinary inner speech and heterogeneous. Finally, we explore the possibility that the new comparator account may extend to explain disruptions across a range of imagistic modalities, and outline avenues for future research.
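A toy illustration of the comparator logic discussed above, heavily simplified and not taken from the paper: a forward-model prediction is compared with the observed signal, and (on the classical account) a large mismatch is read as evidence that the signal was not self-generated. The threshold and the scalar mismatch measure are my own placeholders.

```python
import numpy as np

def comparator_attribution(prediction, observation, threshold=1.0):
    """Compare a motor/forward-model prediction with the observed sensory
    signal (arrays of equal shape). A mismatch above the arbitrary
    threshold is flagged as 'not attributed to self'."""
    mismatch = np.linalg.norm(observation - prediction) / np.sqrt(observation.size)
    return {"mismatch": float(mismatch),
            "attributed_to_self": bool(mismatch < threshold)}
```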

Relevance:

20.00%

Publisher:

Abstract:

While current speech recognisers give acceptable performance in carefully controlled environments, their performance degrades rapidly when they are applied in more realistic situations. Generally, environmental noise may be classified into two classes: wide-band noise and narrow-band noise. While the multi-band model has been shown to be capable of dealing with speech corrupted by narrow-band noise, it is ineffective against wide-band noise. In this paper, we suggest combining the frequency-filtering technique with the probabilistic union model in the multi-band approach. The new system has been tested on the TIDIGITS database corrupted by white noise, noise collected from a railway station, and narrow-band noise. The results have shown that this approach is capable of dealing with noise of either narrow-band or wide-band characteristics, assuming no knowledge about the noisy environment.
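A minimal sketch of the sub-band combination idea: a leave-one-out combination of per-band log-likelihoods, which is in the spirit of the probabilistic union model (one badly corrupted band cannot veto the score) rather than a faithful reproduction of it; the frequency-filtering front end is not shown.

```python
import numpy as np

def leave_one_out_union_loglik(band_logliks):
    """Combine per-sub-band log-likelihoods for one frame/state.

    For each band b, form the product of the likelihoods of all the
    *other* bands (a sum in the log domain), then combine these
    leave-one-out terms with a log-sum-exp. band_logliks: shape (B,)."""
    total = band_logliks.sum()
    leave_one_out = total - band_logliks          # drop one band at a time
    return np.logaddexp.reduce(leave_one_out)
```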

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a novel method of audio-visual fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new representation and a modified cosine similarity are introduced for combining and comparing bimodal features with limited training data as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal data set created from the SPIDRE and AR databases with variable noise corruption of speech and occlusion in the face images. The new method has demonstrated improved recognition accuracy.
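A simplified sketch of bimodal scoring with cosine similarity; the paper's modified similarity, feature selection, and multicondition training are not reproduced. Here the audio and facial feature vectors are simply length-normalised per modality and concatenated, which keeps one modality's scale or dimensionality from swamping the other.

```python
import numpy as np

def fused_cosine_score(audio_feat, face_feat, audio_ref, face_ref):
    """Score a test (audio, face) pair against a reference identity by
    normalising each modality separately, concatenating, and taking
    the cosine similarity of the fused vectors."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-12)
    test = np.concatenate([unit(audio_feat), unit(face_feat)])
    ref = np.concatenate([unit(audio_ref), unit(face_ref)])
    return float(np.dot(test, ref) /
                 (np.linalg.norm(test) * np.linalg.norm(ref)))
```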

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications for robust speech recognition and for improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm which updates the noise and channel estimates of the corpus data model. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.
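A skeleton of the iterative estimation loop described above, under strong simplifying assumptions: log-spectral features, the channel taken as the average log-spectral offset over matched frames, the noise as the average over unmatched frames, and the actual LMS search abstracted behind a placeholder match_fn supplied by the caller.

```python
import numpy as np

def iterative_channel_noise_update(noisy_logspec, corpus_logspec, match_fn, iters=3):
    """noisy_logspec: (T, F) log-power spectra of the noisy utterance.
    corpus_logspec: (N, F) clean corpus frames.
    match_fn(noisy, corpus) -> (idx_noisy, idx_corpus): matched frame
    indices, standing in for the longest-matching-segment search.
    Returns channel (F,) and noise (F,) log-spectral estimates."""
    channel = np.zeros(noisy_logspec.shape[1])
    noise = noisy_logspec.min(axis=0)              # crude initial noise floor
    for _ in range(iters):
        # compensate the corpus with the current channel estimate,
        # then re-run the segment matching
        idx_noisy, idx_corpus = match_fn(noisy_logspec, corpus_logspec + channel)
        matched_noisy = noisy_logspec[idx_noisy]
        matched_clean = corpus_logspec[idx_corpus]
        # channel: average offset between matched noisy and clean frames
        channel = np.mean(matched_noisy - matched_clean, axis=0)
        # noise: average of frames not covered by any match
        unmatched = np.setdiff1d(np.arange(len(noisy_logspec)), idx_noisy)
        if unmatched.size:
            noise = noisy_logspec[unmatched].mean(axis=0)
    return channel, noise
```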

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new approach to single-channel speech enhancement involving both noise and channel distortion (i.e., convolutional noise). The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. The approach adds three novel developments to our previous LMS research. First, we address the problem of channel distortion as well as additive noise. Second, we present an improved method for modeling noise. Third, we present an iterative algorithm for improved speech estimates. In experiments using speech recognition as a test with the Aurora 4 database, the use of our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In another comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.

Index Terms: corpus-based speech model, longest matching segment, speech enhancement, speech recognition
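For reference, segmental SNR (one of the two ratings mentioned) can be computed as below; the frame length and the clamping range are conventional choices, not taken from the paper.

```python
import numpy as np

def segmental_snr(clean, enhanced, frame_len=256, lo=-10.0, hi=35.0):
    """Mean of per-frame SNRs in dB between a clean reference and an
    enhanced signal, with each frame's SNR clamped to [lo, hi] dB as is
    conventional for segmental SNR."""
    n_frames = min(len(clean), len(enhanced)) // frame_len
    snrs = []
    for k in range(n_frames):
        s = clean[k * frame_len:(k + 1) * frame_len]
        e = enhanced[k * frame_len:(k + 1) * frame_len]
        num = np.sum(s ** 2)
        den = np.sum((s - e) ** 2) + 1e-12
        snrs.append(np.clip(10.0 * np.log10(num / den + 1e-12), lo, hi))
    return float(np.mean(snrs)) if snrs else 0.0
```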