983 results for Digit speech recognition
Abstract:
The objective of this project is to provide a pronunciation and vocabulary-review activity in English for the Moodle platform hosted on the Integrated Language Learning Lab (ILLLab) website. The ILLLab website enables students at the EUIT de Telecomunicación of the UPM with an A2 level of English, according to the Common European Framework of Reference for Languages (CEFR), to work autonomously toward a B2 level. The UPM requires this level of English proficiency to enroll in the compulsory subject English for Professional and Academic Communication (EPAC), taught in the seventh semester of the Degree in Telecommunications Engineering. The project also addresses the scarcity of speaking activities on self-learning platforms devoted to language training and, more specifically, to English. To this end, it provides a tool based on speech recognition systems with which users can practice the pronunciation of English words. The first chapter introduces the Traffic Lights application, explaining its origins and what it consists of. The second chapter covers theoretical aspects of speech recognition, its main functions, and the applications it is currently used for. The third chapter gives a detailed explanation of the different programming languages used in the project and of the code developed. The fourth chapter presents a user manual, showing the user how the application works together with an example of use; it also includes several sections for the application administrator, specifying how to add new words to the database and how to change settings such as the estimated time the user has to finish a game.
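As an illustration of the core check such a pronunciation activity performs, here is a minimal Python sketch, not the project's actual code: the transcript returned by the speech recognizer is compared against the vocabulary word the user was asked to say.

    def check_pronunciation(target_word, recognized_text):
        """Illustrative only: accept the attempt if the recognizer's
        transcript contains the target vocabulary word."""
        return target_word.lower() in recognized_text.lower().split()

    # Example: the user is prompted to say "yellow".
    check_pronunciation("yellow", "yellow")       # True
    check_pronunciation("yellow", "hello there")  # False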
Abstract:
We present an approach to dynamically adapt the language models (LMs) used by a speech recognizer that is part of a spoken dialogue system. We have developed a grammar generation strategy that automatically adapts the LMs using the semantic information that the user provides (represented as dialogue concepts), together with information regarding the intentions of the speaker (inferred by the dialogue manager and represented as dialogue goals). We carry out the adaptation as a linear interpolation between a background LM and one or more of the LMs associated with the dialogue elements (concepts or goals) addressed by the user. The interpolation weights between those models are automatically estimated on each dialogue turn, using measures such as the posterior probabilities of concepts and goals, estimated as part of the inference procedure that determines the actions to be carried out. We propose two approaches to handle the LMs related to concepts and goals: in the first, we estimate an LM for each of them; in the second, we apply several clustering strategies to group together the elements that share common properties and estimate an LM for each cluster. Our evaluation shows how the system can estimate a dynamic model adapted to each dialogue turn, which improves speech recognition performance (up to 14.82% relative improvement) and in turn improves both the language understanding and the dialogue management tasks.
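In equation form, the turn-level adaptation described above is a standard linear mixture of language models; a minimal formulation, with notation chosen here for illustration rather than taken from the paper:

\[
P_{\text{adapted}}(w \mid h) \;=\; \lambda_0\, P_{\text{BG}}(w \mid h) \;+\; \sum_{i=1}^{N} \lambda_i\, P_i(w \mid h), \qquad \sum_{i=0}^{N} \lambda_i = 1,
\]

where $P_{\text{BG}}$ is the background LM, each $P_i$ is the LM of a concept, goal, or cluster addressed in the current turn, and the weights $\lambda_i$ are re-estimated on each dialogue turn from the posterior probabilities of the corresponding concepts and goals.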
Design of a video game aimed at improving the teaching and learning of the English language
Abstract:
Since globalization began to affect today's society, English has established itself as the first choice for communication among large companies and, above all, in the field of business. For these reasons, knowledge of this language, whose number of speakers has kept growing over the years, has become necessary, and more and more people want to master it. Learning now starts at a very early age, which facilitates and improves the acquisition of a base of knowledge covering all the skills of the English language: reading, writing, speaking and listening. This project set out to improve the teaching and learning of English for users under 13 years of age. The proposal was a learning method that motivates users and gives them constant help as they progress in their knowledge of English. The best method devised to meet this objective was a video game fulfilling all the characteristics described above. A video game for learning English that also incorporates a technology as novel as speech recognition to improve the user's speaking skills would help this population raise its basic level of English across all skills and establish a solid base on which to build more advanced knowledge in the future.
Abstract:
This final-year project (PFC) aims to develop a home-automation (domotic) manager based on the voice dictation feature of the WhatsApp social network. The manager not only counters the harmful perception that home automation today is expensive and useless, but also offers people with disabilities an improvement in their quality of life. With a simple voice command to the WhatsApp application on their mobile phone, these users can activate or deactivate every domotic element installed in their home ("turn on lamp", "turn on oven", "open door"), all at very low cost and using open-source technologies. The main objective of this PFC is to help people with disabilities achieve a better quality of life by making them independent of household chores, since the home itself will do the work. Accessibility of the service is therefore the primary goal, and to achieve accessibility for everyone the service must be cheap and easy to learn. WhatsApp was chosen as the interface because it requires no training, being one of the most widely used applications in Spain, and because of its voice dictation capability; open-source technologies were chosen because most of them are free, with hardware as the only cost. The choice of WhatsApp is justified by its reach: in September 2015 it registered 900 million users, a figure also driven by its recent acquisition by Facebook, so it meets the first accessibility requirement of the domotic service presented here. For almost five years an unofficial WhatsApp API has been available, which the open-source community has used to build its own clients and messaging applications on top of the social network's infrastructure. The company does not openly approve of this, but the release of the API was legal and so is its use; on the other hand, the company reserves the right to block accounts that use its infrastructure fraudulently. The open-source technologies used are Linux distributions (Raspbian) and the PHP, Python and Bash scripting languages, all supported by the community, which provides maintenance and scalability. For this reason a Raspberry Pi is used as the central domotic manager. The services offered by the first version of the manager include control of general and personal electric lighting, control of all kinds of household appliances, access control for the main entrance door, and control of audiovisual media.
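To make the message-to-device flow concrete, the following is a minimal Python sketch of the command router such a manager could run on the Raspberry Pi. The WhatsApp side is deliberately omitted: we assume some unofficial client library delivers each dictated message as plain text to handle_message(), and the device names and GPIO pin numbers are invented for the example; only the RPi.GPIO calls are from a real library.

    import RPi.GPIO as GPIO

    # Hypothetical mapping from spoken device names to the GPIO pins
    # driving their relays; real assignments depend on the wiring.
    DEVICES = {"lamp": 17, "oven": 22, "door": 27}

    GPIO.setmode(GPIO.BCM)
    for pin in DEVICES.values():
        GPIO.setup(pin, GPIO.OUT)

    def handle_message(text):
        """Route a dictated command such as 'turn on lamp' or 'open door'."""
        words = text.lower().split()
        for name, pin in DEVICES.items():
            if name not in words:
                continue
            if "on" in words or "open" in words or "activate" in words:
                GPIO.output(pin, GPIO.HIGH)   # energize the relay
            elif "off" in words or "close" in words:
                GPIO.output(pin, GPIO.LOW)    # de-energize it
            return

A production version would also need to authenticate the sender and confirm each action back through the same messaging channel.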
Abstract:
The scientific bases for human-machine communication by voice are in the fields of psychology, linguistics, acoustics, signal processing, computer science, and integrated circuit technology. The purpose of this paper is to highlight the basic scientific and technological issues in human-machine communication by voice and to point out areas of future research opportunity. The discussion is organized around the following major issues in implementing human-machine voice communication systems: (i) hardware/software implementation of the system, (ii) speech synthesis for voice output, (iii) speech recognition and understanding for voice input, and (iv) usability factors related to how humans interact with machines.
Abstract:
Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent real-time speech recognition, and understanding of naturally spoken utterances with vocabularies of 1000 to 2000 words, and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.
Abstract:
The Colloquium on Human-Machine Communication by Voice highlighted the global technical community's focus on the problems and promise of voice-processing technology, particularly, speech recognition and speech synthesis. Clearly, there are many areas in both the research and development of these technologies that can be advanced significantly. However, it is also true that there are many applications of these technologies that are capable of commercialization now. Early successful commercialization of new technology is vital to ensure continuing interest in its development. This paper addresses efforts to commercialize speech technologies in two markets: telecommunications and aids for the handicapped.
Abstract:
As the telecommunications industry evolves over the next decade to provide the products and services that people will desire, several key technologies will become commonplace. Two of these, automatic speech recognition and text-to-speech synthesis, will provide users with more freedom on when, where, and how they access information. While these technologies are currently in their infancy, their capabilities are rapidly increasing and their deployment in today's telephone network is expanding. The economic impact of just one application, the automation of operator services, is well over $100 million per year. Yet there still are many technical challenges that must be resolved before these technologies can be deployed ubiquitously in products and services throughout the worldwide telephone network. These challenges include: (i) High level of accuracy. The technology must be perceived by the user as highly accurate, robust, and reliable. (ii) Easy to use. Speech is only one of several possible input/output modalities for conveying information between a human and a machine, much like a computer terminal or Touch-Tone pad on a telephone. It is not the final product. Therefore, speech technologies must be hidden from the user. That is, the burden of using the technology must be on the technology itself. (iii) Quick prototyping and development of new products and services. The technology must support the creation of new products and services based on speech in an efficient and timely fashion. In this paper I present a vision of the voice-processing industry with a focus on the areas with the broadest base of user penetration: speech recognition, text-to-speech synthesis, natural language processing, and speaker recognition technologies. The current and future applications of these technologies in the telecommunications industry will be examined in terms of their strengths, limitations, and the degree to which user needs have been or have yet to be met. Although noteworthy gains have been made in areas with potentially small user bases and in the more mature speech-coding technologies, these subjects are outside the scope of this paper.
Abstract:
This paper describes a range of opportunities for military and government applications of human-machine communication by voice, based on visits and contacts with numerous user organizations in the United States. The applications include some that appear to be feasible by careful integration of current state-of-the-art technology and others that will require a varying mix of advances in speech technology and in integration of the technology into applications environments. Applications that are described include (1) speech recognition and synthesis for mobile command and control; (2) speech processing for a portable multifunction soldier's computer; (3) speech- and language-based technology for naval combat team tactical training; (4) speech technology for command and control on a carrier flight deck; (5) control of auxiliary systems, and alert and warning generation, in fighter aircraft and helicopters; and (6) voice check-in, report entry, and communication for law enforcement agents or special forces. A phased approach for transfer of the technology into applications is advocated, where integration of applications systems is pursued in parallel with advanced research to meet future needs.
Abstract:
The deployment of systems for human-to-machine communication by voice requires overcoming a variety of obstacles that affect the speech-processing technologies. Problems encountered in the field might include variation in speaking style, acoustic noise, ambiguity of language, or confusion on the part of the speaker. The diversity of these practical problems encountered in the "real world" leads to the perceived gap between laboratory and "real-world" performance. To answer the question "What applications can speech technology support today?" the concept of the "degree of difficulty" of an application is introduced. The degree of difficulty depends not only on the demands placed on the speech recognition and speech synthesis technologies but also on the expectations of the user of the system. Experience has shown that deployment of effective speech communication systems requires an iterative process. This paper discusses general deployment principles, which are illustrated by several examples of human-machine communication systems.
Abstract:
This paper describes the state of the art in applications of voice-processing technologies. In the first part, technologies concerning the implementation of speech recognition and synthesis algorithms are described. Hardware technologies such as microprocessors and DSPs (digital signal processors) are discussed. Software development environment, which is a key technology in developing applications software, ranging from DSP software to support software also is described. In the second part, the state of the art of algorithms from the standpoint of applications is discussed. Several issues concerning evaluation of speech recognition/synthesis algorithms are covered, as well as issues concerning the robustness of algorithms in adverse conditions.
Abstract:
This talk, which was the keynote address of the NAS Colloquium on Human-Machine Communication by Voice, discusses the past, present, and future of human-machine communications, especially speech recognition and speech synthesis. Progress in these technologies is reviewed in the context of the general progress in computer and communications technologies.
Abstract:
Hearing loss in the elderly leads to difficulty in speech perception. The test commonly used in speech audiometry is the measurement of the maximum speech recognition score (IR-Max) at a single speech presentation level. A more appropriate procedure, however, would be to test at several intensities, since the percentage of correct responses depends on the speech level at the time of the test and is related to the degree and configuration of the hearing loss. Imprecision in obtaining the IR-Max may lead to an erroneous diagnostic hypothesis and to failure of the hearing-loss intervention process. Objective: To verify the influence of the speech presentation level on the speech recognition test in elderly subjects with sensorineural hearing loss of different audiometric configurations. Methods: Sixty-four elderly subjects participated, totalling 120 ears (61 female and 59 male), aged between 60 and 88 years, divided into groups: G1, 23 ears with flat configuration; G2, 55 ears with sloping configuration; G3, 42 ears with steeply sloping configuration. Inclusion criteria were: sensorineural hearing loss of mild to severe degree; non-use of a hearing aid, or use for less than two months; and absence of cognitive impairment. The following procedures were performed: measurement of the speech recognition threshold (SRT), of the speech recognition score (IRF) at several intensities, and of the most comfortable level (MCL) and uncomfortable level (UCL) for speech. Lists of 11 monosyllables were used to shorten the test. Statistical analysis comprised analysis of variance (ANOVA) and Tukey's test. Results: The sloping configuration was the most frequent. Subjects with flat configuration showed the highest mean speech recognition score. Considering the total evaluated, 27.27% of the subjects with flat configuration reached the IR-Max at the MCL, as did 38.18% of those with sloping and 26.19% of those with steeply sloping configuration. The IR-Max was found at the UCL in 40.90% of the subjects with flat configuration, 45.45% with sloping and 28.20% with steeply sloping configuration. The highest and lowest mean scores, respectively, were found at: G1, 30 and 40 dB SL; G2, 50 and 10 dB SL; G3, 45 and 10 dB SL. There is no single speech level suitable for all audiometric configurations; the sensation levels that yielded the highest mean scores were: G1, 20 to 30 dB SL; G2, 20 to 50 dB SL; G3, 45 dB SL. The MCL and UCL-5 dB for speech were not effective in determining the IR-Max. Conclusions: The presentation level influenced monosyllable speech recognition performance in elderly subjects with sensorineural hearing loss of all audiometric configurations. Moderate hearing loss and the sloping audiometric configuration were the most frequent in this population, followed by the steeply sloping and flat configurations.
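For reference, the speech recognition score reported at each presentation level is simply the percentage of test items repeated correctly, and IR-Max is its maximum across levels; in symbols (notation ours, not the thesis's):

\[
\mathrm{IRF}(L) = \frac{\text{items correct at level } L}{\text{items presented}} \times 100\%, \qquad \text{IR-Max} = \max_{L}\, \mathrm{IRF}(L),
\]

which is why testing at a single level $L$ can miss the true maximum when the score varies with intensity, as the study shows.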
Abstract:
This thesis contributes to research toward artificial intelligence using connectionist methods. Recurrent neural networks are an increasingly popular family of sequential models capable, in principle, of learning arbitrary algorithms. These models perform deep learning, a type of machine learning. Their generality and empirical success make them an interesting research subject and a promising tool for building more general artificial intelligence. The first chapter of the thesis gives a brief overview of the background topics: artificial intelligence, machine learning, deep learning and recurrent neural networks. The next three chapters cover these topics in increasing detail. Finally, we present some contributions to recurrent neural networks. Chapter \ref{arxiv1} presents our work on regularizing recurrent neural networks. Regularization aims to improve a model's generalization ability and plays a key role in the performance of several applications of recurrent neural networks, particularly in speech recognition. Our approach achieves the state of the art on TIMIT, a standard benchmark for this task. Chapter \ref{cpgp} presents a second, ongoing line of work that explores a new architecture for recurrent neural networks. Recurrent neural networks maintain a hidden state that represents their previous observations. The idea of this work is to encode certain abstract dynamics in the hidden state, giving the network a natural way of encoding coherent trends in the state of its environment. Our work builds on an existing model; we describe that model and our contributions, including a preliminary experiment.
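The hidden-state mechanism mentioned above is, in its standard form, the following recurrence (the textbook formulation, not the thesis's specific architecture):

\[
h_t = \phi\!\left(W_{hh}\, h_{t-1} + W_{xh}\, x_t + b\right),
\]

where $x_t$ is the input at step $t$, $h_t$ is the hidden state summarizing the previous observations, $\phi$ is a nonlinearity such as $\tanh$, and the weight matrices are learned; the architecture explored in Chapter \ref{cpgp} modifies how this state encodes the dynamics of the environment.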
Abstract:
This work focuses on Machine Translation (MT) and Speech-to-Speech Translation (SST), two emerging technologies that allow users to automatically translate written and spoken texts. The first part of this work provides a theoretical framework for the evaluation of Google Translate and Microsoft Translator, which is at the core of this study. Chapter one focuses on Machine Translation, providing a definition of this technology and an outline of its history; it also explains how MT works, who uses it and for what purpose, what its pros and cons are, and how machine translation quality can be defined and assessed. Chapter two deals with Speech-to-Speech Translation, focusing on its history, characteristics and operation, potential uses, and the limits deriving from the intrinsic difficulty of translating spoken language. After describing the future prospects for SST, the final part of this chapter focuses on the quality assessment of Speech-to-Speech Translation applications. The last part of this dissertation describes the evaluation test carried out on Google Translate and Microsoft Translator, two mobile translation apps that also provide a Speech-to-Speech Translation service. Chapter three illustrates the objectives, research questions, participants, methodology and design of the questionnaires used to collect data. The collected data and the results of the evaluation of the automatic speech recognition subsystem and the language translation subsystem are presented in Chapter four and analysed and compared in Chapter five, which gives a general description of the performance of the evaluated apps and possible explanations for each set of results. The final part of this work makes suggestions for future research and offers reflections on the usability and usefulness of the evaluated translation apps.