983 results for Digit speech recognition
Abstract:
Speech recognition and synthesis systems consist of language-dependent modules and, while there are many public resources for some languages (e.g., English and Japanese), resources for Brazilian Portuguese (BP) are still scarce. Another issue is that, for a large number of tasks, the error rate of current speech recognition systems is still high when compared to that achieved by humans. Hence, despite the success of hidden Markov models (HMMs), research into new methods remains necessary. This work is motivated by these two facts and is divided into two parts. The first describes the development of free resources and tools for speech recognition and synthesis in BP, consisting of audio and text corpora, a phonetic dictionary, a grapheme-to-phoneme converter, a syllabifier, and acoustic and language models. All the resources built are publicly available and, together with a proposed programming interface, have been used in the development of several new real-time applications, including a speech recognition module for the OpenOffice.org office suite. Performance tests of the developed systems are presented. The resources produced and released here ease the adoption of speech technology for BP by other research groups, developers, and industry. The second part of the work presents a new method for rescoring the output of HMM-based recognition, which is organized in a lattice data structure. More specifically, the system uses discriminative classifiers that aim to reduce the confusion between pairs of phones. For each of these binary problems, automatic feature selection techniques are used to choose the most suitable parametric representation for the problem at hand.
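The abstract describes rescoring lattice output with binary classifiers trained to separate confusable phone pairs. As a rough illustration of that idea (not the thesis' actual system), the sketch below trains one such pairwise classifier on synthetic features and uses its posterior to adjust a pair of competing HMM log-scores; the interpolation weight alpha and the toy data are assumptions.

```python
# Minimal sketch of pairwise-discriminative rescoring (illustrative only).
# Assumes competing lattice arcs reduce to confusable phone pairs, with a
# binary classifier per pair re-weighting the HMM scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy acoustic features for two confusable phones, e.g. /p/ vs /b/.
X = np.vstack([rng.normal(0.0, 1.0, (100, 13)),   # class 0: /p/
               rng.normal(0.8, 1.0, (100, 13))])  # class 1: /b/
y = np.array([0] * 100 + [1] * 100)

pair_clf = LogisticRegression(max_iter=1000).fit(X, y)

def rescore(hmm_score_p, hmm_score_b, segment_features, alpha=0.5):
    """Interpolate HMM log-scores with the discriminative posterior."""
    post_b = pair_clf.predict_proba(segment_features.reshape(1, -1))[0, 1]
    new_p = (1 - alpha) * hmm_score_p + alpha * np.log(1 - post_b + 1e-10)
    new_b = (1 - alpha) * hmm_score_b + alpha * np.log(post_b + 1e-10)
    return new_p, new_b

segment = rng.normal(0.8, 1.0, 13)        # a frame-averaged segment
print(rescore(-42.0, -43.5, segment))     # /b/ should gain after rescoring
```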
Abstract:
Automatic speech recognition has become increasingly useful and feasible. For languages such as English, excellent recognizers are available on the market. However, the situation is not the same for Brazilian Portuguese, for which the main desktop dictation recognizers that once existed have been discontinued. This dissertation is aligned with the goals of the Signal Processing Laboratory (LaPS) at the Federal University of Pará: the development of an automatic speech recognizer for Brazilian Portuguese. More specifically, the main contributions of this dissertation are the development of several resources needed to build a recognizer, such as transcribed audio corpora and an API for application development, and the development of two applications: one for desktop dictation and another for automated call-center service. Coruja is the system developed at LaPS for speech recognition in Brazilian Portuguese. Besides containing all the resources needed to provide speech recognition in Brazilian Portuguese, it offers an API for application development. The application developed for desktop dictation and text editing is SpeechOO, which enables dictation in the Writer tool of the LibreOffice suite, as well as text editing and formatting through voice commands. Another contribution of this work is the use of automatic speech recognition in call centers: Coruja was integrated with the Asterisk software, and the main application developed was a voice-enabled interactive voice response unit for a national call center that handles more than three thousand calls per day.
Abstract:
This work proposes a solution comprising a cloud-based automatic speech recognition system. Thus, no recognizer needs to run on the client machine itself, since it is available over the Internet. Besides cloud-based speech recognition, another thread of this work is high availability. This topic matters because the server environment in which the cloud recognizer is to run cannot become unavailable to the user. Among the various aspects that require robustness, such as the Internet connection itself, the scope of this work was defined as the free software tools that allow companies to increase the availability of their services. Among the results achieved, and for the simulated conditions, the cloud speech recognizer developed by the group was shown to reach performance close to Google's.
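The abstract stresses that the recognizer must stay reachable from the client's point of view. As a purely illustrative sketch of the client side of such a setup (the endpoints, the JSON response field, and the failover-by-retry policy are all assumptions, not the infrastructure evaluated in the work):

```python
# Hypothetical client that fails over between two recognition servers.
# The URLs and the response contract are assumptions for illustration.
import requests

ENDPOINTS = [
    "http://asr-primary.example.com/recognize",   # hypothetical
    "http://asr-standby.example.com/recognize",   # hypothetical
]

def recognize(wav_bytes, timeout=5.0):
    last_error = None
    for url in ENDPOINTS:
        try:
            resp = requests.post(url, data=wav_bytes,
                                 headers={"Content-Type": "audio/wav"},
                                 timeout=timeout)
            resp.raise_for_status()
            return resp.json()["transcript"]      # assumed response field
        except requests.RequestException as err:
            last_error = err                      # try the next replica
    raise RuntimeError(f"all recognizers unavailable: {last_error}")
```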
Abstract:
In many science-fiction movies, machines were capable of speaking with humans. However, mankind is still far from having such machines, like the famous character C3PO from Star Wars. Over the last six decades, automatic speech recognition systems have been the target of many studies. Throughout these years, many techniques were developed for use in both software and hardware applications. There are many types of automatic speech recognition systems; the one used in this work is an isolated-word, speaker-independent system that uses hidden Markov models for recognition. The goal of this work is to design and synthesize the first two stages of the speech recognition system: speech signal acquisition and signal pre-processing. Both stages were developed on a reprogrammable component, an FPGA, using the VHDL hardware description language, owing to the high performance of this component and the flexibility of the language. This work presents the underlying theory of digital signal processing, such as fast Fourier transforms and digital filters, as well as the theory of speech recognition using hidden Markov models and the LPC processor. It also presents the results obtained for each of the blocks synthesized and verified in hardware.
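As a point of reference for the pre-processing stage mentioned above, here is a software analogue in Python/NumPy of a typical speech front end: pre-emphasis, framing, Hamming windowing, and FFT magnitude. The hardware design itself is in VHDL; the parameter values below are common defaults, not the thesis' exact configuration.

```python
# Software analogue (NumPy) of a pre-processing front end: pre-emphasis,
# framing, Hamming window, FFT magnitude per frame.
import numpy as np

def preprocess(signal, fs=8000, frame_ms=25, hop_ms=10, pre=0.97):
    # Pre-emphasis filter: y[n] = x[n] - pre * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - pre * signal[:-1])
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    window = np.hamming(frame_len)
    spectra = []
    for i in range(n_frames):
        frame = emphasized[i * hop : i * hop + frame_len] * window
        spectra.append(np.abs(np.fft.rfft(frame)))   # FFT magnitude
    return np.array(spectra)

# Example: 1 second of synthetic noise at 8 kHz.
spectra = preprocess(np.random.randn(8000))
print(spectra.shape)   # (n_frames, frame_len // 2 + 1)
```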
Abstract:
To report the audiological outcomes of cochlear implantation in two patients with severe to profound sensorineural hearing loss secondary to superficial siderosis of the CNS and discuss some programming peculiarities that were found in these cases. Retrospective review. Data concerning clinical presentation, diagnosis and audiological assessment pre- and post-implantation were collected for two patients with superficial siderosis of the CNS. Both patients showed good hearing thresholds but variable speech perception outcomes. One patient did not achieve open-set speech recognition, but the other achieved 70% speech recognition in quiet. Electrical compound action potentials could not be elicited in either patient. Map parameters showed the need for increased charge. Electrode impedances showed high longitudinal variability. The implants were fairly beneficial in restoring hearing and improving communication abilities, although many reprogramming sessions were required. The hurdle in programming was the need for frequent adjustments due to the physiologic variations in electrical discharges and neural conduction, besides the changes in the impedances. Patients diagnosed with superficial siderosis may achieve limited results in speech perception scores due to both cochlear and retrocochlear reasons. Careful counseling about the results must be given to the patients and their families before the cochlear implantation indication.
Abstract:
Nowadays, it is critical to identify the factors that contribute to the quality of the audiologic care provided. The hearing aid fitting model proposed by the Brazilian Unified Health System (SUS) implies multidisciplinary care. This leads to some relevant and current questions. OBJECTIVE: To evaluate and compare the results of the hearing aid fitting model proposed by the SUS with a more compact and streamlined model of care. METHOD: We conducted a prospective longitudinal study with 174 participants randomly assigned to two groups: SUS Group and Streamline Group. For both groups we assessed key areas related to hearing aid fitting through the International Outcome Inventory for Hearing Aids (IOI-HA) questionnaire, in addition to evaluating the results of the Speech Recognition Index (SRI) 3 and 9 months after fitting. RESULTS: Both groups had the same improvement in speech recognition after nine months of hearing aid (AASI) use, and the IOI-HA did not show any statistically significant difference at three and nine months. CONCLUSION: The two strategies of care did not differ, from the clinical point of view, as regards the hearing aid fitting results obtained upon the evaluation of patients in the short and medium term; thus, changes in the current model of care should be considered.
Abstract:
Perceptual User Interfaces (PUIs) aim at facilitating human-computer interaction with the aid of human-like capacities (computer vision, speech recognition, etc.). In PUIs, the human face is a central element, since it conveys not only identity but also other important information, particularly with respect to the user’s mood or emotional state. This paper describes both a face detector and a smile detector for PUIs. Both are suitable for real-time interaction.
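The paper proposes its own detectors; as a generic stand-in, the sketch below shows the same face-then-smile cascade structure using OpenCV's stock Haar classifiers, which also run in real time on modest hardware.

```python
# Generic face + smile detection sketch with OpenCV's bundled Haar
# cascades; a stand-in, not the detectors proposed in the paper.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def detect_smiling_faces(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        roi = gray[y:y + h, x:x + w]          # search smiles inside faces
        smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
        results.append(((x, y, w, h), len(smiles) > 0))
    return results

# Usage: results = detect_smiling_faces(cv2.imread("frame.png"))
```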
Abstract:
The thesis centered on the game "Guess Who?" ("Indovina chi?"), in which the Nao robot identifies a character from its description. In particular, the description takes place through questions and answers. The goal of the thesis is the design of a system able to understand and process data communicated in a subset of natural language, extract the key information, and match it against previously given information. The Nao robot was therefore programmed to play a game of "Guess Who?" against a human, communicating in natural language. Extraction and categorization rules for text understanding were implemented using Cogito, a technology patented by the company Expert System. In this way the robot is able to understand the answers and reply to questions posed by the human in natural language. Google's API was used for speech recognition, and PyAudio for microphone access. The program was implemented in Python, and the characters' data are stored in a database that the robot queries and updates. The game algorithm is based on probabilistic computations of the robot's chances of winning and on choosing which questions to ask based on the answers previously received from the human (see the sketch after this abstract). The semantic rules implemented allow the player to phrase sentences in natural language; moreover, the robot is able to single out the information concerning the character to be guessed without being misled. The robot's win rate over 20 games was 50%. The database was designed so that a complete identikit of a person can be built, in addition to those of the game characters. The project can therefore be extended, beyond the game, to other purposes in the field of identification.
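The exact probabilistic scoring used by the robot is not detailed in the abstract; a common baseline for this kind of game, sketched below, greedily asks about the attribute that splits the remaining candidates most evenly. The toy character table is made up.

```python
# Greedy question selection for a "Guess Who?"-style game: ask about the
# attribute that splits the remaining candidates most evenly. This is a
# standard baseline, assumed here; the thesis' exact scoring may differ.

def best_question(candidates, attributes):
    """candidates: list of dicts; attributes: iterable of boolean keys."""
    def imbalance(attr):
        yes = sum(1 for c in candidates if c[attr])
        return abs(2 * yes - len(candidates))   # 0 = perfect 50/50 split
    return min(attributes, key=imbalance)

def apply_answer(candidates, attr, answer):
    return [c for c in candidates if c[attr] == answer]

people = [
    {"name": "Anna",  "glasses": True,  "hat": False, "beard": False},
    {"name": "Bernd", "glasses": False, "hat": True,  "beard": True},
    {"name": "Carla", "glasses": True,  "hat": True,  "beard": False},
    {"name": "David", "glasses": False, "hat": False, "beard": True},
]
q = best_question(people, ["glasses", "hat", "beard"])
print(q, [p["name"] for p in apply_answer(people, q, True)])
```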
Abstract:
The level of improvement in the audiological results of Baha® users mainly depends on the patient's preoperative hearing thresholds and the type of Baha sound processor used. This investigation shows correlations between the preoperative hearing threshold, postoperative aided thresholds, and audiological results in speech understanding in quiet of 84 Baha users with unilateral conductive hearing loss, bilateral conductive hearing loss and bilateral mixed hearing loss. Secondly, speech understanding in noise of 26 Baha users with different Baha sound processors (Compact, Divino, and BP100) is investigated. Linear regression between aided sound field thresholds and bone conduction (BC) thresholds of the better ear shows the highest correlation coefficients and the steepest slope. Differences between better BC thresholds and aided sound field thresholds are smallest for mid-frequencies (1 and 2 kHz) and become larger at 0.5 and 4 kHz. For Baha users, the gain in speech recognition in quiet can be expected to lie in the order of magnitude of the gain in their hearing threshold. Compared to its predecessor sound processors Baha® Compact and Baha® Divino, the Baha® BP100 improves speech understanding in noise significantly, by +0.9 to +4.6 dB signal-to-noise ratio, depending on the setting and the use of the directional microphone. For Baha users with unilateral and bilateral conductive hearing loss and bilateral mixed hearing loss, audiological results in aided sound field thresholds can be estimated from the better BC hearing threshold. The benefit in speech understanding in quiet can be expected to be similar to the gain in their sound field hearing threshold. The most recent generation of Baha sound processor improves speech understanding in noise by a margin that is well perceived by users and can be very useful in everyday life.
Abstract:
Audio-visual documents obtained from German TV news are classified according to the IPTC topic categorization scheme. To this end, usual text classification techniques are adapted to speech, video, and non-speech audio. For each of the three modalities, word analogues are generated: sequences of syllables for speech, “video words” based on low-level color features (color moments, color correlogram and color wavelet), and “audio words” based on low-level spectral features (spectral envelope and spectral flatness) for non-speech audio. Such audio and video words provide a means to represent the different modalities in a uniform way. The frequencies of the word analogues represent audio-visual documents: the standard bag-of-words approach. Support vector machines are used for supervised classification in a 1 vs. n setting. Classification based on speech outperforms all other single modalities. Combining speech with non-speech audio improves classification. Classification is further improved by supplementing speech and non-speech audio with video words. Optimal F-scores range between 62% and 94%, corresponding to 50%-84% above chance. The optimal combination of modalities depends on the category to be recognized. The construction of audio and video words from low-level features provides a good basis for the integration of speech, non-speech audio and video.
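A minimal sketch of the classification pipeline as described, with made-up "word analogue" tokens: each document is a bag of syllables plus quantized audio/video codewords, vectorized by token frequency and classified by a linear SVM in a one-vs-rest setting.

```python
# Sketch of the bag-of-words pipeline over "word analogues" (syllables,
# audio words, video words) with a linear SVM, one-vs-rest. The tokens
# and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Each document: space-separated word analogues from all modalities,
# e.g. syllables plus quantized audio (aNN) / video (vNN) codewords.
docs = [
    "po li tik a17 v03 v03",       # politics
    "par la ment a17 v11 v03",     # politics
    "fuss ball tor a02 v21 v21",   # sport
    "spiel tor a02 v21 v08",       # sport
]
labels = ["politics", "politics", "sport", "sport"]

model = make_pipeline(CountVectorizer(),
                      OneVsRestClassifier(LinearSVC()))
model.fit(docs, labels)
print(model.predict(["tor spiel a02 v21"]))   # -> ['sport']
```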
Abstract:
The article presents the design process of intelligent virtual human patients that are used for the enhancement of clinical skills. The description covers the development from conceptualization and character creation to technical components and the application in clinical research and training. The aim is to create believable social interactions with virtual agents that help the clinician to develop skills in symptom and ability assessment, diagnosis, interview techniques and interpersonal communication. The virtual patient fulfills the requirements of a standardized patient producing consistent, reliable and valid interactions in portraying symptoms and behaviour related to a specific clinical condition.
Abstract:
Crowdsourcing linguistic phenomena with smartphone applications is relatively new. Apps have been used to train acoustic models for automatic speech recognition (de Vries et al. 2014) and to archive endangered languages (Iwaidja Inyaman Team 2012). Leemann and Kolly (2013) developed a free app for iOS—Dialäkt Äpp (DÄ) (>78k downloads)—to document language change in Swiss German. Here, we present results of sound change based on DÄ data. DÄ predicts the users’ dialects: for 16 variables, users select their dialectal variant. DÄ then tells users which dialect they speak. Underlying this prediction are maps from the Linguistic Atlas of German-speaking Switzerland (SDS, 1962-2003), which documents the linguistic situation around 1950. If predicted wrongly, users indicate their actual dialect. With this information, the 16 variables can be assessed for language change. Results revealed robustness of phonetic variables; lexical and morphological variables were more prone to change. Phonetic variables like to lift (variants: /lupfə, lʏpfə, lipfə/) revealed SDS agreement scores of nearly 85%, i.e., little sound change. Not all phonetic variables are equally robust: ladle (variants: /xælə, xællə, xæuə, xæɫə, xæɫɫə/) exhibited significant sound change. We will illustrate the results using maps that show details of the sound changes at hand.
Abstract:
This Thesis has focused on improving the robustness of voice activity detection in adverse acoustic environments in order to enhance the behaviour of many vocal applications, for example telephony applications based on automatic speech recognition, automatic transcription systems, multichannel systems, and so on. In particular, although all types of noise have been taken into account, this research takes special interest in the study of background voices (pronunciations coming from far-field speakers), the main error source of most activity detectors today. The tasks carried out have, as their starting point, a voice activity detector based on hidden Markov models whose feature vector contains two components: normalized energy and delta energy. The key contributions of this Thesis are the following: 1) extension of the feature vector to provide spectral information, 2) adjustment of the hidden Markov models to the environment and study of different HMM topologies and, finally, 3) study and inclusion of new features, different from those in point 1, to filter out the pronunciation pulses coming from background voices. The detection results, taking the three points above into account, clearly demonstrate the progress made and are significantly better than the results obtained, under the same conditions, by other well-known reference voice activity detectors.
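A minimal sketch of the frame-level feature vector described (normalized energy and delta energy, extended with spectral information); spectral flatness is used here as the spectral cue, which is an assumption, and the HMM stage that would consume these features is omitted.

```python
# Frame features for a VAD: normalized log energy, delta energy, and one
# spectral cue (spectral flatness, assumed here). An HMM would be
# trained on these rows; that stage is omitted from the sketch.
import numpy as np

def vad_features(signal, frame_len=200, hop=80):
    feats = []
    prev_e = None
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * np.hamming(frame_len)
        energy = 10 * np.log10(np.sum(frame ** 2) + 1e-10)
        delta = 0.0 if prev_e is None else energy - prev_e
        prev_e = energy
        spec = np.abs(np.fft.rfft(frame)) + 1e-10
        flatness = np.exp(np.mean(np.log(spec))) / np.mean(spec)
        feats.append([energy, delta, flatness])
    feats = np.array(feats)
    # Per-utterance energy normalization, standing in for the thesis'
    # normalized-energy component.
    feats[:, 0] -= feats[:, 0].max()
    return feats   # one row per frame

print(vad_features(np.random.randn(8000)).shape)
```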
Abstract:
The objectives of this project are to provide the theory, exercises, and other resources that students at the EUIT de Telecomunicación with an A1 level in the Common European Framework of Reference for Languages (CEFR) need in order to reach the A2 level in English without attending face-to-face courses. The platform used to achieve this aim is Moodle, which is currently used on the ILLLab website. This online course covers the knowledge required for the optional subject Introduction to English for Professional and Academic Communication I, which starts from the B1 level. The grammar is proposed with corresponding examples and exercises, all based on adaptations of activities published in a corpus of textbooks. Resources (short readings, videos, links) considered appropriate for each topic are added. On the other hand, this project also aims to solve a shortcoming of e-learning-based language courses, which do not usually provide the student with the tools needed to practice speaking. To this end, an application based on speech recognition techniques is provided, with three activities in which the answers must be given orally and with correct pronunciation. The aim is thus to provide a base of knowledge and practical experience for future projects based on speech synthesis and recognition tools, as well as a new approach to e-learning courses for the study of languages.
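The abstract does not name the recognition toolkit behind the three spoken activities; as an illustrative stand-in, the sketch below checks a spoken answer against an expected phrase using the Python speech_recognition package with the Google Web Speech API backend.

```python
# Stand-in sketch of a spoken-exercise check; the project's actual
# recognizer is not specified in the abstract. Requires the
# speech_recognition package (plus PyAudio) and a microphone.
import speech_recognition as sr

def check_spoken_answer(expected_phrase, language="en-US"):
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        print(f"Say: '{expected_phrase}'")
        audio = recognizer.listen(source)
    try:
        heard = recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return False, ""          # nothing intelligible was recognized
    return heard.strip().lower() == expected_phrase.lower(), heard

ok, heard = check_spoken_answer("the weather is nice today")
print("correct" if ok else f"heard: {heard!r}")
```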
Abstract:
The design and development of spoken interaction systems has been a thoroughly studied research area for the last decades. The aim is to obtain systems with the ability to interact with human agents with a high degree of naturalness and efficiency, allowing them to carry out the actions they desire using speech, as it is the most natural means of communication between humans. To achieve that degree of naturalness, it is not enough to endow systems with the ability to accurately understand the user's utterances and to properly react to them, even considering the information provided by the user in his or her previous interactions. The system also has to be aware of the evolution of the conditions under which the interaction takes place, in order to act in the most coherent way possible at each moment. Consequently, one of the most important features of the system is that it has to be context-aware. This context awareness can be reflected in modifications of the system's behaviour that take into account the current situation of the interaction. For instance, the system should decide which action to carry out, or how to perform it, depending on the user that requests it, on the way the user addresses the system, on the characteristics of the environment in which the interaction takes place, and so on. In other words, the system has to adapt its behaviour to these evolving elements of the interaction. Moreover, that adaptation has to be carried out, if possible, in such a way that the user: i) does not perceive that the system has to make any additional effort, or devote interaction time, to tasks other than carrying out the requested actions, and ii) does not have to provide the system with any additional information to carry out the adaptation, which would imply a less efficient interaction, since users would have to devote several interactions only to allowing the system to adapt. In state-of-the-art spoken dialogue systems, researchers have proposed several disparate strategies to adapt the elements of the system to different conditions of the interaction (such as the acoustic characteristics of a specific user's speech, the actions previously requested, and so on). Nevertheless, to our knowledge there is no consensus on the procedures to carry out this adaptation. The approaches are to an extent unrelated to one another, in the sense that each one considers different pieces of information, and that information is treated differently depending on the adaptation carried out. In this regard, the main contributions of this Thesis are the following: Definition of a contextualization framework. We propose a unified approach that can cover any strategy to adapt the behaviour of a dialogue system to the conditions of the interaction (i.e. the context). In our theoretical definition of the contextualization framework, we consider the system's context to be all the sources of variability present at any time of the interaction, whether related to the environment in which the interaction takes place or to the human agent that addresses the system at each moment. Our proposal relies on three aspects that any contextualization approach should fulfil: plasticity (i.e. the system has to be able to modify its behaviour in the most proactive way, taking into account the conditions under which the interaction takes place), adaptivity (i.e. 
the system also has to be able to consider the most appropriate sources of information at each moment, both environmental and user- and dialogue-dependent, to effectively adapt to the aforementioned conditions), and transparency (i.e. the system has to carry out the contextualization-related tasks in such a way that the user neither perceives them nor has to make any effort to provide the system with the information it needs to perform that contextualization). Additionally, we include a generality aspect in our proposed framework: the main features of the framework should be easy to adopt in any dialogue system, regardless of the solution proposed to manage the dialogue. Once we define the theoretical basis of our contextualization framework, we propose two case studies of its application in a spoken dialogue system. We focus on two aspects of the interaction: the contextualization of the speech recognition models, and the incorporation of user-specific information into the dialogue flow. One of the modules of a dialogue system most prone to contextualization is the speech recognition system. This module makes use of several models to emit a recognition hypothesis from the user's speech signal. Generally speaking, a recognition system considers two types of models: an acoustic one (that models each of the phonemes the recognition system has to consider) and a linguistic one (that models the sequences of words that make sense for the system). In this work we contextualize the language model of the recognition system in such a way that it takes into account the information provided by the user both in his or her current utterance and in the previous ones. These utterances convey information that helps the system recognize the next utterance. The contextualization approach that we propose consists of a dynamic adaptation of the language model used by the recognition system. We carry out this adaptation by means of a linear interpolation between several models. Instead of training the best interpolation weights, we make them dependent on the conditions of the dialogue. In our approach, the system itself obtains these weights as a function of the reliability of the different elements of information available, such as the semantic concepts extracted from the user's utterance, the actions that he or she wants to carry out, the information provided in the previous interactions, and so on (see the interpolation sketch after this abstract). One of the aspects most frequently addressed in Human-Computer Interaction research is the inclusion of user-specific characteristics in the information structures managed by the system. The idea is to take into account the features that make each user different from the others in order to offer each particular user different services (or the same service, but in a different way). We can consider this approach a user-dependent contextualization of the system. In our work we propose the definition of a user model that contains all the information about each user that could potentially be useful to the system at a given moment of the interaction. In particular, we analyze the actions that each user carries out throughout his or her interactions. The objective is to determine which of these actions become the preferences of that user. We represent the specific information of each user as a feature vector, where each of the characteristics the system takes into account has an associated confidence score. 
With these elements, we propose a probabilistic definition of a user preference: the action whose likelihood of being requested by the user is greater than that of the rest of the actions (see the second sketch below). To include the user-dependent information in the dialogue flow, we modify the information structures on which the dialogue manager relies to retrieve information that may be needed to resolve the actions requested by the user. Usage preferences thus become another source of contextual information considered by the system towards a more efficient interaction, since the new information source helps reduce the system's need to ask users for additional information, thereby reducing the number of turns needed to carry out a specific action. To test the benefits of the proposed contextualization framework, we carry out an evaluation of the two aforementioned strategies. We gather several performance metrics, both objective and subjective, that allow us to compare the improvements of a contextualized system against the baseline one. We also gather the users' opinions regarding their perception of the system's behaviour and its degree of adaptation to the specific features of each interaction.
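A minimal sketch of the language-model contextualization described above: a linear interpolation of several component models whose weights are derived, per dialogue turn, from the reliability of the available information sources instead of being trained. The component probabilities and reliability scores below are illustrative.

```python
# Context-dependent linear interpolation of language models:
#   P(w | h) = sum_i lambda_i * P_i(w | h),
# with lambda_i derived from per-source reliability scores rather than
# trained. Models and scores below are made up for illustration.
def interpolate(lm_probs, reliabilities):
    """lm_probs: list of P_i(w|h); reliabilities: non-negative scores."""
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]   # lambda_i, sum to 1
    return sum(w * p for w, p in zip(weights, lm_probs))

# P(w|h) under a generic LM, a semantic-concept-biased LM, and a
# dialogue-history LM (illustrative numbers).
p_generic, p_concepts, p_history = 0.010, 0.040, 0.025

# Reliability of each source at this turn, e.g. the confidence of the
# extracted concepts; the system would compute these dynamically.
rel = [1.0, 0.6, 0.3]

print(interpolate([p_generic, p_concepts, p_history], rel))
```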
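Likewise, a minimal sketch of the user-preference definition: the preferred action is the one whose estimated probability of being requested, weighted by per-action confidence scores, is highest. The history, confidences, and action names are made up.

```python
# User preference as the action with the highest estimated probability
# of being requested, with per-action confidence scores weighting the
# evidence. All data below are illustrative.
from collections import Counter

def preference(action_history, confidences):
    """action_history: past actions; confidences: scores in [0, 1]."""
    counts = Counter(action_history)
    total = sum(counts[a] * confidences.get(a, 1.0) for a in counts)
    probs = {a: counts[a] * confidences.get(a, 1.0) / total
             for a in counts}
    return max(probs, key=probs.get), probs

history = ["play_news", "play_music", "play_news", "set_alarm",
           "play_news"]
conf = {"play_news": 0.9, "play_music": 0.8, "set_alarm": 0.5}
print(preference(history, conf))
# The dialogue manager would treat the top action as contextual
# information, e.g. to skip a clarification turn.
```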