979 results for Speech enhancement systems
Abstract:
Supported by Contract AT(11-1)-1018 and Contract AT(11-1)-2118 with U.S. Atomic Energy Commission.
Abstract:
Cover title.
Abstract:
We propose a novel analysis alternative for emotion recognition from speech, based on two Fourier Transforms. Fourier analysis allows different signals to be displayed and synthesized in terms of power spectral density distributions. A spectrogram of the voice signal is obtained by performing a short-time Fourier Transform with Gaussian windows; this spectrogram portrays frequency-related features, such as vocal tract resonances and quasi-periodic excitations during voiced sounds. Emotions induce such characteristics in speech, which become apparent in the spectrogram's time-frequency distribution. The time-frequency representation from the spectrogram is then treated as an image and processed through a two-dimensional Fourier Transform, performing a spatial Fourier analysis of it. Finally, features related to emotions in voiced speech are extracted and presented.
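A minimal sketch of this pipeline in Python, assuming a 16 kHz mono signal; the window length, overlap and Gaussian width below are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.signal import stft, get_window

fs = 16000                                   # assumed sampling rate
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 220 * t)              # placeholder voice signal

# Short-time Fourier Transform with a Gaussian window.
win = get_window(("gaussian", 64), 512)      # std and length are illustrative
f, tau, Z = stft(x, fs=fs, window=win, nperseg=512, noverlap=384)
spectrogram = np.abs(Z) ** 2                 # power spectral density estimate

# Treat the spectrogram as an image and apply a 2-D Fourier Transform.
spatial = np.fft.fftshift(np.fft.fft2(np.log1p(spectrogram)))
features = np.abs(spatial)                   # spatial-spectrum magnitudes
```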
Abstract:
Quasielastic excitation functions for the ¹⁶,¹⁸O + ⁶⁰Ni systems were measured at energies near and below the Coulomb barrier, at the backward angle θ(LAB) = 161°. The corresponding quasielastic barrier distributions were derived. The data were compared with predictions from coupled channel calculations using a double-folding potential as a bare potential. For the ¹⁶O-induced scattering, good agreement was obtained for the barrier distribution by using the projectile default nuclear matter diffuseness obtained from the São Paulo potential systematics, that is, 0.56 fm. However, for the ¹⁸O-induced scattering, good agreement was obtained only when the projectile nuclear matter diffuseness was changed to 0.62 fm. Therefore, in this paper we show how near-barrier quasielastic scattering can be used as a sensitive tool to derive the nuclear matter diffuseness.
Abstract:
Christmas is a season full of symbolism, in which conversations with family and friends take centre stage. There is a biblical phrase, "And the Word became flesh and dwelt among us" (John 1:1-14), that always makes me think of this, since in a religious context the word "verb" can mean precisely that: "the expression of ideas and thoughts through words" (DIEC2). More than 6,800 languages are currently spoken on our planet. There are tonal languages, such as Mandarin and Yoruba, in which the tone with which a word is pronounced affects its meaning [...].
Abstract:
The applications of Automatic Vowel Recognition (AVR), a sub-task of fundamental importance in most speech processing systems, range from automatic interpretation of spoken language to biometrics. State-of-the-art systems for AVR are based on traditional machine learning models such as Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs); however, such classifiers cannot deliver efficiency and effectiveness at the same time, leaving a gap to be explored when real-time processing is required. In this work, we present an algorithm for AVR based on the Optimum-Path Forest (OPF), an emergent pattern recognition technique recently introduced in the literature. Adopting a supervised training procedure and using speech tags from two public datasets, we observed that OPF outperformed ANNs, SVMs, and other classifiers in terms of training time and accuracy. ©2010 IEEE.
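The comparison described could be reproduced along these lines; OPF itself is not in scikit-learn, so this hedged sketch times only the ANN and SVM baselines on hypothetical vowel features (the data, feature count and classes are placeholders):

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hypothetical vowel data: rows are frames, columns are acoustic
# features (e.g. formants or MFCCs); labels are vowel classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 13))
y = rng.integers(0, 5, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Time the baseline classifiers named in the abstract (ANN, SVM);
# an OPF implementation would slot into the same loop.
for name, clf in [("ANN", MLPClassifier(max_iter=500)), ("SVM", SVC())]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    print(name, f"train={time.perf_counter() - t0:.3f}s",
          f"acc={clf.score(X_te, y_te):.3f}")
```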
Abstract:
In many science fiction movies, machines are capable of speaking with humans. However, mankind is still far from building machines of that kind, like the famous character C-3PO from Star Wars. Over the last six decades, automatic speech recognition systems have been the subject of many studies, and throughout these years many techniques were developed for use in both software and hardware applications. There are many types of automatic speech recognition system; the one used in this work is an isolated-word, speaker-independent system that uses Hidden Markov Models for recognition. The goal of this work is to design and synthesize the first two stages of the speech recognition system: speech signal acquisition and signal pre-processing. Both stages were implemented on a reprogrammable device, an FPGA, using the VHDL hardware description language, owing to the high performance of this device and the flexibility of the language. This work presents the underlying theory of digital signal processing, such as Fast Fourier Transforms and digital filters, as well as the theory of speech recognition using Hidden Markov Models and the LPC processor. It also presents the results obtained for each of the blocks synthesized and verified in hardware.
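As a rough illustration of the pre-processing stage (the thesis implements it in VHDL on an FPGA; this is only a software sketch with illustrative frame parameters): pre-emphasis, framing with a Hamming window, then an FFT per frame:

```python
import numpy as np

def preprocess(signal, frame_len=256, hop=128, alpha=0.95):
    """Speech pre-processing front end: pre-emphasis filter,
    then overlapping windowed frames, then an FFT per frame."""
    # Pre-emphasis: first-order high-pass digital filter.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Split into overlapping frames and window them.
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    frames *= np.hamming(frame_len)
    # Magnitude spectrum per frame via the FFT.
    return np.abs(np.fft.rfft(frames, axis=1))

spectra = preprocess(np.random.randn(16000))  # placeholder signal
```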
Abstract:
Current text-to-speech systems are developed using studio-recorded speech in a neutral style or based on acted emotions. However, the proliferation of media sharing sites would allow the development of a new generation of speech-based systems that can cope with spontaneous and styled speech. This paper proposes an architecture for dealing with realistic recordings and carries out experiments on unsupervised speaker diarization. In order to maximize the speaker purity of the clusters while keeping a high speaker coverage, the paper evaluates the F-measure of a diarization module, achieving high scores (>85%), especially when the clusters are longer than 30 seconds, even for the more spontaneous and expressive styles (such as talk shows or sports).
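One plausible reading of this evaluation is segment-level cluster purity, speaker coverage, and their harmonic mean (the F-measure); the sketch below uses hypothetical per-segment labels and ignores segment durations, which the paper's metric may weight:

```python
from collections import Counter

def purity_and_coverage(clusters, speakers):
    """clusters/speakers: per-segment cluster IDs and true speaker IDs
    (hypothetical segment-level labels; durations are ignored here)."""
    n = len(speakers)
    by_cluster, by_speaker = {}, {}
    for c, s in zip(clusters, speakers):
        by_cluster.setdefault(c, []).append(s)
        by_speaker.setdefault(s, []).append(c)
    # Purity: dominant speaker's share within each cluster.
    purity = sum(Counter(v).most_common(1)[0][1]
                 for v in by_cluster.values()) / n
    # Coverage: dominant cluster's share within each speaker.
    coverage = sum(Counter(v).most_common(1)[0][1]
                   for v in by_speaker.values()) / n
    return purity, coverage

p, c = purity_and_coverage([0, 0, 1, 1, 1], ["A", "A", "A", "B", "B"])
print(p, c, 2 * p * c / (p + c))  # purity, coverage, F-measure
```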
Abstract:
The objective of this project is to provide a pronunciation and vocabulary review activity in English for the Moodle platform hosted on the Integrated Language Learning Lab (ILLLab) website. The ILLLab website aims to let students at the EUIT de Telecomunicación of the UPM who have an A2 level of English, according to the Common European Framework of Reference for Languages (CEFR), work autonomously to advance towards the B2 level. The UPM requires this level of English proficiency for enrolling in the compulsory subject English for Professional and Academic Communication (EPAC), taught in the seventh semester of the Degree in Telecommunications Engineering. Likewise, the project addresses the problem of the scarcity of speaking activities in self-learning platforms devoted to language training and, more specifically, to English. To that end, it provides a tool based on speech recognition systems so that users can practice the pronunciation of English words. The first chapter introduces the application Traffic Lights, explaining its origins and what it consists of. The second chapter deals with theoretical aspects of speech recognition and discusses its main functions and the applications for which it is currently used. The third chapter offers a detailed explanation of the different programming languages used in the project, as well as of the code developed. The fourth chapter presents a user manual for the application, explaining how the application works and giving an example of use. In addition, several sections are included for the application administrator, specifying how to add new words to the database and how to change settings such as the estimated time the user has to finish a round of the game.
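A hedged sketch of such a pronunciation check in Python; the project's actual recognition back end is not specified here, so this stand-in uses the third-party SpeechRecognition package, and check_pronunciation is a hypothetical helper:

```python
import speech_recognition as sr  # assumption: SpeechRecognition package

def check_pronunciation(target_word, wav_path):
    """Return True if the recognized utterance matches the target word.
    A hypothetical helper mirroring the game's pronunciation check."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        heard = recognizer.recognize_google(audio, language="en-US")
    except sr.UnknownValueError:
        return False  # nothing intelligible was recognized
    return heard.strip().lower() == target_word.lower()
```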
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Thesis--University of Illinois at Urbana-Champaign.
Abstract:
Thesis (M.A.)--University of Illinois at Urbana-Champaign.
Abstract:
Primary objective: The aims of this preliminary study were to explore the suitability for, and benefits of, commencing dysarthria treatment for people with traumatic brain injury (TBI) while in post-traumatic amnesia (PTA). It was hypothesized that behaviours in PTA would not preclude participation and that dysarthria characteristics would improve post-treatment. Research design: A series of comprehensive case analyses. Methods and procedures: Two participants with severe TBI received dysarthria treatment focused on motor speech deficits until emergence from PTA. A checklist of neurobehavioural sequelae of TBI was rated during therapy, and perceptual and motor speech assessments were administered before and after therapy. Main outcomes and results: Results revealed that certain behaviours affected the quality of therapy but did not preclude its provision. Treatment resulted in physiological improvements in some speech sub-systems for both participants, with varying functional speech outcomes. Conclusions: These findings suggest that dysarthria treatment can begin during the late stages of PTA post-TBI and provide short-term benefits to speech production.
Abstract:
The present thesis focuses on the overall structure of the language of two types of Speech Exchange Systems (SES): Interview (INT) and Conversation (CON). The linguistic structure of INT and CON is quantitatively investigated on three different but interrelated levels of analysis: lexis, syntax and information structure. The corpus investigated for the project consists of eight sessions of pairs of conversants in carefully planned interviews, followed by unplanned, surreptitiously recorded conversational encounters of the same pairs of speakers. The data comprise approximately 15,200 words of INT talk and about 19,200 words of CON. Taking account of the debatable assumption that the language of SES might be complex on certain linguistic levels (e.g. syntax) (Halliday 1979) and simple on others (e.g. lexis) in comparison to written discourse, the thesis sets out to investigate this complexity using a statistical approach to the computation of the structures recurrent in the language of INT and CON. The findings clearly indicate the presence of linguistic complexity in both types. They also show the language of INT to be slightly more syntactically and lexically complex than that of CON. Lexical density seems to be relatively high in both types of spoken discourse. The language of INT also seems to be more complex than that of CON on the level of information structure, as manifested in the greater use of Inferable and other linguistically complex entities of discourse. Halliday's suggestion that the language of SES is syntactically complex is confirmed, but not the suggestion that the more casual the conversation, the more syntactically complex it becomes. The results of the analysis point to the general conclusion that the linguistic complexity of types of SES lies not only in the high recurrence of syntactic structures, but also in the combination of these features with each other and with other linguistic and extralinguistic features. The linguistic analysis of the language of SES can be useful in understanding and pinpointing the intricacies of spoken discourse in general and will help discourse analysts and applied linguists in exploiting it for both theoretical and pedagogical purposes.
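Lexical density, one of the measures computed, is simply the proportion of content words among all words; a toy sketch follows (the stop-word list is a stand-in for a proper part-of-speech-based definition of content words):

```python
# Function words to exclude; a real analysis would use POS tagging.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def lexical_density(text):
    """Share of content words among all tokens in the text."""
    words = text.lower().split()
    content = [w for w in words if w not in STOP_WORDS]
    return len(content) / len(words)

print(lexical_density("the language of the interview is lexically dense"))
```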
Abstract:
While humans can easily segregate and track a speaker's voice in a loud noisy environment, most modern speech recognition systems still perform poorly in loud background noise. The computational principles behind auditory source segregation in humans are not yet fully understood. In this dissertation, we develop a computational model for source segregation inspired by auditory processing in the brain. To support the key principles behind the computational model, we conduct a series of electroencephalography (EEG) experiments using both simple tone-based stimuli and more natural speech stimuli. Most source segregation algorithms utilize some form of prior information about the target speaker or use more than one simultaneous recording of the noisy speech mixtures; other methods build models of the noise characteristics. Source segregation of simultaneous speech mixtures with a single microphone recording and no knowledge of the target speaker is still a challenge. Using the principle of temporal coherence, we develop a novel computational model that exploits the difference in the temporal evolution of features belonging to different sources to perform unsupervised monaural source segregation. While using no prior information about the target speaker, the method can gracefully incorporate knowledge about the target speaker to further enhance the segregation. Through a series of EEG experiments, we collect neurological evidence to support the principle behind the model. Aside from its unusual structure and computational innovations, the proposed model provides testable hypotheses about the physiological mechanisms of the remarkable perceptual ability of humans to segregate acoustic sources, and about its psychophysical manifestations in navigating complex sensory environments. Results from the EEG experiments provide further insights into the assumptions behind the model and motivate future single-unit studies that can provide more direct evidence for the principle of temporal coherence.
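A minimal sketch of the temporal-coherence idea, not the dissertation's model: feature channels whose temporal envelopes co-vary above a threshold are grouped as one source (channel count and threshold are illustrative):

```python
import numpy as np

def coherence_groups(envelopes, threshold=0.8):
    """Group channels by temporal coherence.
    envelopes: (channels, frames) array of feature magnitudes."""
    corr = np.corrcoef(envelopes)          # pairwise temporal correlation
    groups = []
    unassigned = set(range(len(envelopes)))
    while unassigned:
        seed = unassigned.pop()
        # Channels whose envelopes track the seed's join its group.
        group = {seed} | {c for c in unassigned if corr[seed, c] > threshold}
        unassigned -= group
        groups.append(sorted(group))
    return groups

env = np.abs(np.random.randn(8, 100))      # placeholder channel envelopes
print(coherence_groups(env))
```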