966 results for "using voice"


Relevance: 70.00%

Abstract:

Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time speech recognition and understanding of naturally spoken utterances with vocabularies of 1,000 to 2,000 words and larger. Computer manufacturers are already building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses the potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses the information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.

Relevance: 60.00%

Abstract:

In this work we consider the use of voice activity detectors as a pre-processing stage for a time-domain blind source separation technique that employs second-order statistics to separate convolutive, determined mixtures. The algorithm was adapted to perform separation in both full-band and subband modes, considering the presence and absence of silence intervals in mixtures of speech signals. The main idea is to detect segments of the mixtures that contain voice activity, preventing the separation algorithm from running in the absence of speech and thereby improving performance and reducing computational cost.
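
To make the gating idea concrete, here is a minimal Python sketch, assuming a simple frame-energy detector (the thesis's actual voice activity detector and its second-order-statistics separation algorithm are not specified in the abstract; `separate_fn` is a placeholder for the latter):

```python
import numpy as np

def frame_energy_vad(x, frame_len=512, threshold_db=-40.0):
    """Flag each non-overlapping frame as voice-active when its energy
    exceeds a fixed threshold (relative to unit full scale)."""
    flags = []
    for start in range(0, len(x) - frame_len + 1, frame_len):
        frame = x[start:start + frame_len]
        energy_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)
        flags.append(energy_db > threshold_db)
    return np.array(flags)

def gated_separation(mixtures, separate_fn, frame_len=512):
    """Apply a separation routine only to blocks flagged as voice-active.

    `mixtures` is an (n_channels, n_samples) array; inactive blocks are
    passed through unchanged, saving the cost of separating silence."""
    flags = frame_energy_vad(mixtures[0], frame_len)
    outputs = mixtures.copy()
    for i, active in enumerate(flags):
        if active:
            block = slice(i * frame_len, (i + 1) * frame_len)
            outputs[:, block] = separate_fn(mixtures[:, block])
    return outputs
```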

Relevance: 60.00%

Abstract:

The importance of, and concern for, the autonomy and independence of elderly people and of patients suffering from some kind of disability has increased significantly over recent decades. Intelligent wheelchairs (IWs) are technologies that can help this population increase their autonomy, and they are currently a very active research area. However, the adaptation of IWs to specific patients, and experiments with real users, remain largely unexplored topics. The intelligent wheelchair developed within the IntellWheels Project is controlled at a high level through a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick. The goal of this work was the automatic adaptation of the IW to the characteristics of its potential users. A methodology capable of creating a user model was developed. The research was based on a data-collection system that acquires and stores patients' voice, facial expressions, and head and body movements. The IW can be used in different situations, in both real and simulated environments, and a serious game was developed that allows a set of tasks to be specified for users to perform. The data were analysed with knowledge-extraction methods in order to obtain the user model. Using the results produced by the classification system, a methodology was created to select the best interface and command language of the wheelchair for each user. The validation of the approach was carried out within Project FCT/RIPD/ADA/109636/2009 - "IntellWheels - Intelligent Wheelchair with Flexible Multimodal Interface". The experiments involved a broad set of individuals with different levels of disability, in close collaboration with the Escola Superior de Tecnologia de Saúde do Porto and the Associação do Porto de Paralisia Cerebral. The data collected in the IW navigation experiments were complemented by questionnaires filled in by the users. These data were analysed statistically in order to demonstrate the effectiveness and usability of matching the IW interface to the user. In a simulated environment, the results showed a system usability score of 67, based on the opinions of a sample of patients with cerebral palsy of degrees IV and V (the most severe). It was also shown statistically that the interface assigned automatically by the tool was rated higher than the one suggested by occupational therapists, demonstrating that a command language adapted to each user can be assigned automatically. Experiments with different control modes revealed the users' preference for shared control, with a level of assistance matched to the patient's level of impairment. In conclusion, this work demonstrates that an IW can be adapted automatically to its user, with clear benefits in usability and safety.
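
The final selection step can be illustrated with a small, purely hypothetical sketch: given suitability scores that a user-model classifier might assign to each interface/command-language combination, pick the best-scoring pair. All names and values below are illustrative assumptions, not the thesis's actual data structures:

```python
from typing import Dict, Tuple

def select_interface(scores: Dict[Tuple[str, str], float]) -> Tuple[str, str]:
    """Return the (input modality, command language) pair with the
    highest suitability score for this user."""
    return max(scores, key=scores.get)

# Example scores a user-model classifier might produce (made-up values).
user_scores = {
    ("voice", "short commands"): 0.62,
    ("head movements", "directional"): 0.81,
    ("joystick", "continuous"): 0.74,
}
print(select_interface(user_scores))  # -> ('head movements', 'directional')
```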

Relevance: 60.00%

Abstract:

This work falls within the area of human-computer interaction and the different paradigms that currently exist in it. Background work and possibilities related to special education are reviewed. As a case study, we present a proposed adaptation of the educational software JClic based on voice commands, intended for users/students with motor disabilities that have no, or only mild, consequences for language development. As part of this proposal, different voice recognition (VR) engines were studied, with an in-depth analysis of the Sphinx-4 VR engine. We present here part of the work carried out, along with the results and conclusions obtained after evaluating the prototype.
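
Sphinx-4 itself is a Java library; the following Python sketch illustrates only the command-mapping step that would sit behind any recognizer, with a hypothetical command vocabulary and action names:

```python
from typing import Optional

# Hypothetical Spanish command vocabulary mapped to JClic actions.
COMMANDS = {
    "siguiente": "next_activity",
    "anterior": "previous_activity",
    "ayuda": "show_help",
}

def dispatch(hypothesis: str) -> Optional[str]:
    """Map a recognizer hypothesis to an action if any word in it
    matches the command vocabulary; return None otherwise."""
    for word in hypothesis.lower().split():
        if word in COMMANDS:
            return COMMANDS[word]
    return None

print(dispatch("ayuda por favor"))  # -> 'show_help'
```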

Relevance: 60.00%

Abstract:

This demo concerns a recently developed prototype of an emotionally sensitive autonomous HiFi spoken conversational agent called NEMOHIFI. The baseline agent was developed by the Speech Technology Group (GTH) and has recently been integrated with an emotional engine called NEMO (Need-inspired Emotional Model) to enable it to adapt to the user's emotion and respond using appropriately expressive speech. NEMOHIFI controls and manages the HiFi audio system; for end users, its functions equate to a remote control, except that instead of clicking, the user interacts with the agent using voice. A pairwise comparison between the baseline (non-adaptive) agent and NEMOHIFI is also presented.

Relevance: 40.00%

Abstract:

This paper presents a method of voice activity detection (VAD) suitable for high-noise scenarios, based on the fusion of two complementary systems. The first system uses a proposed non-Gaussianity score (NGS) feature based on normal probability testing. The second system employs a histogram distance score (HDS) feature that detects changes in the signal by conducting a template-based similarity measure between adjacent frames. The decision outputs of the two systems are then merged using an open-by-reconstruction fusion stage. The accuracy of the proposed method was compared to several baseline VAD methods on a database created using real recordings of a variety of high-noise environments.
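
A rough Python sketch of the two features follows. The exact normality test, histogram distance, and thresholds are assumptions, and the paper's open-by-reconstruction fusion stage is replaced by a plain logical OR for brevity:

```python
import numpy as np
from scipy.stats import normaltest

def ngs_feature(frame):
    """Non-Gaussianity score: one minus the p-value of a normality
    test, so higher values mean the frame is less Gaussian (more
    speech-like). D'Agostino-Pearson is an assumption here."""
    _, pvalue = normaltest(frame)
    return 1.0 - pvalue

def hds_feature(prev_frame, frame, bins=32):
    """Histogram distance score: L1 distance between the normalised
    amplitude histograms of adjacent frames."""
    lo = min(prev_frame.min(), frame.min())
    hi = max(prev_frame.max(), frame.max())
    if hi <= lo:                      # guard against constant frames
        hi = lo + 1e-12
    h1, _ = np.histogram(prev_frame, bins=bins, range=(lo, hi))
    h2, _ = np.histogram(frame, bins=bins, range=(lo, hi))
    h1 = h1 / max(h1.sum(), 1)
    h2 = h2 / max(h2.sum(), 1)
    return np.abs(h1 - h2).sum()

def fused_vad(x, frame_len=512, ngs_thr=0.95, hds_thr=0.5):
    """Per-frame decisions from both features, merged with OR."""
    frames = [x[i:i + frame_len]
              for i in range(0, len(x) - frame_len + 1, frame_len)]
    return np.array([ngs_feature(cur) > ngs_thr or
                     hds_feature(prev, cur) > hds_thr
                     for prev, cur in zip(frames, frames[1:])])
```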

Relevance: 40.00%

Abstract:

This paper presents a method of voice activity detection (VAD) for high-noise scenarios, using a noise-robust voiced speech detection feature. The developed method is based on the fusion of two systems. The first system utilises the maximum peak of the normalised time-domain autocorrelation function (MaxPeak). The second uses a novel combination of the cross-correlation and zero-crossing rate of the normalised autocorrelation to approximate a measure of signal pitch and periodicity (CrossCorr) that is hypothesised to be noise robust. The score outputs of the two systems are then merged using weighted-sum fusion to create the proposed autocorrelation zero-crossing rate (AZR) VAD. The accuracy of AZR was compared to state-of-the-art and standardised VAD methods, and it was shown to outperform the best-performing baseline with an average relative improvement of 24.8% in half-total error rate (HTER) on the QUT-NOISE-TIMIT database, which was created using real recordings from high-noise environments.
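
The following sketch illustrates the two scores and their weighted-sum fusion, under stated assumptions: the lag range, the equal fusion weight, and the use of the autocorrelation's zero-crossing rate alone as the periodicity proxy (the paper's CrossCorr system also incorporates a cross-correlation term):

```python
import numpy as np

def normalised_autocorr(frame):
    """Autocorrelation of a frame, normalised so that lag 0 equals 1."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    return ac / (ac[0] + 1e-12)

def max_peak_score(frame, min_lag=20, max_lag=320):
    """MaxPeak system: largest autocorrelation value in the pitch-lag
    range (lag bounds are illustrative and depend on sample rate)."""
    return normalised_autocorr(frame)[min_lag:max_lag].max()

def periodicity_score(frame, min_lag=20, max_lag=320):
    """Periodicity proxy: a strongly periodic (voiced) frame yields a
    smoothly oscillating autocorrelation with a low zero-crossing
    rate, so we score 1 minus that rate."""
    ac = normalised_autocorr(frame)[min_lag:max_lag]
    signs = np.signbit(ac).astype(int)
    return 1.0 - np.mean(np.abs(np.diff(signs)))

def azr_score(frame, weight=0.5):
    """Weighted-sum fusion of the two scores; the equal weighting is
    an assumption, not the paper's trained value."""
    return weight * max_peak_score(frame) + (1 - weight) * periodicity_score(frame)
```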

Relevance: 40.00%

Abstract:

Visual activity detection of lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted on visual voice activity detection (VVAD) has neglected variabilities in the visual domain such as viewpoint variation. In this paper we investigate the effectiveness of visual information from the speaker's frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The experimental results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The presented results are useful for the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker's face may not always be frontal.
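
A minimal sketch of the GMM back end follows, assuming the visual front end (per-frame lip-region features) is computed upstream; the model order and decision threshold are illustrative:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_vvad(speaking_feats, silent_feats, n_components=8, seed=0):
    """Fit one GMM to visual features of speaking frames and another
    to non-speaking frames. Feature arrays are (n_frames, n_dims)."""
    gmm_speech = GaussianMixture(n_components, random_state=seed).fit(speaking_feats)
    gmm_silence = GaussianMixture(n_components, random_state=seed).fit(silent_feats)
    return gmm_speech, gmm_silence

def classify_frames(gmm_speech, gmm_silence, feats, threshold=0.0):
    """Per-frame log-likelihood ratio test: True means visual voice
    activity is detected for that frame."""
    llr = gmm_speech.score_samples(feats) - gmm_silence.score_samples(feats)
    return llr > threshold
```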

Relevance: 40.00%

Abstract:

Police in-vehicle systems include a visual-output mobile data terminal (MDT) with manual input via touch screen and keyboard. This study investigated the potential of voice-based input and output modalities for reducing the subjective workload of police officers while driving. Nineteen experienced drivers of police vehicles (one female) from New South Wales (NSW) Police completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence-number search using an emulated MDT. Three different interface output-input modalities were examined: Visual-Manual, Visual-Voice, and Audio-Voice. Following each drive, participants rated their subjective workload using the NASA Raw Task Load Index and answered questions on acceptability. A questionnaire on interface preferences was completed by participants at the end of their session. Engaging in secondary tasks while driving significantly increased subjective workload. The Visual-Manual interface resulted in higher time demand than either of the voice-based interfaces and greater physical demand than the Audio-Voice interface. The Visual-Voice and Audio-Voice interfaces were rated easier to use and more useful than the Visual-Manual interface, although they were not significantly different from each other. These findings largely echoed those derived from the analysis of the objective driving performance data. It is acknowledged that, under standard procedures, officers should not drive while performing tasks concurrently with certain in-vehicle policing systems; however, in practice this sometimes occurs. Taking action now to develop voice-based technology for police in-vehicle systems has the potential to realise visions of safer and more efficient vehicle-based police work.

Relevance: 40.00%

Abstract:

This paper proposes an automatic acoustic-phonetic method for estimating voice-onset time of stops. This method requires neither transcription of the utterance nor training of a classifier. It makes use of the plosion index for the automatic detection of burst onsets of stops. Having detected the burst onset, the onset of the voicing following the burst is detected using the epochal information and a temporal measure named the maximum weighted inner product. For validation, several experiments are carried out on the entire TIMIT database and two of the CMU Arctic corpora. The performance of the proposed method compares well with three state-of-the-art techniques. (C) 2014 Acoustical Society of America
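
A heavily simplified sketch of the burst-detection step follows; the plosion index here is a ratio of instantaneous to preceding average absolute amplitude, with illustrative window parameters rather than the paper's values, and the subsequent voicing-onset step (epoch-based maximum weighted inner product) is not reproduced:

```python
import numpy as np

def plosion_index(x, n, offset=100, window=500):
    """Plosion-index-style measure: |x[n]| divided by the mean absolute
    amplitude over a window preceding n by a small offset. The offset
    and window lengths are illustrative, not the paper's values."""
    stop = max(1, n - offset)
    start = max(0, stop - window)
    return np.abs(x[n]) / (np.mean(np.abs(x[start:stop])) + 1e-12)

def burst_onset(x, threshold=10.0, start=600):
    """Return the first sample whose plosion index exceeds a threshold;
    `start` skips initial samples where no history is available."""
    for n in range(start, len(x)):
        if plosion_index(x, n) > threshold:
            return n
    return None
```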

Relevance: 40.00%

Abstract:

A characterization of the voice source (VS) signal by the pitch synchronous (PS) discrete cosine transform (DCT) is proposed. With the integrated linear prediction residual (ILPR) as the VS estimate, the PS DCT of the ILPR is evaluated as a feature vector for speaker identification (SID). On TIMIT and YOHO databases, using a Gaussian mixture model (GMM)-based classifier, it performs on par with existing VS-based features. On the NIST 2003 database, fusion with a GMM-based classifier using MFCC features improves the identification accuracy by 12% in absolute terms, proving that the proposed characterization has good promise as a feature for SID studies. (C) 2015 Acoustical Society of America
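
A small sketch of the proposed characterization, assuming the ILPR signal and glottal epoch locations are computed upstream; the fixed cycle length and coefficient count are illustrative:

```python
import numpy as np
from scipy.fft import dct

def ps_dct_features(ilpr, epochs, cycle_len=160, n_coeffs=20):
    """Pitch-synchronous DCT of a voice-source estimate.

    `ilpr` stands in for the integrated linear prediction residual and
    `epochs` for glottal epoch sample indices; both are assumed to be
    computed upstream. Each pitch cycle is interpolated to a fixed
    length so that DCT coefficients are comparable across cycles."""
    feats = []
    for start, end in zip(epochs, epochs[1:]):
        cycle = ilpr[start:end]
        if len(cycle) < 2:
            continue
        fixed = np.interp(np.linspace(0, len(cycle) - 1, cycle_len),
                          np.arange(len(cycle)), cycle)
        feats.append(dct(fixed, norm="ortho")[:n_coeffs])
    return np.array(feats)
```

The resulting per-cycle feature vectors would then be modelled with a GMM-based classifier, as the abstract describes.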