Building autonomous sensitive artificial listeners (Extended abstract)


Author(s): Schroder, Marc; Bevacqua, Elisabetta; Cowie, Roddy; Eyben, Florian; Gunes, Hatice; Heylen, Dirk; Ter Maat, Mark; McKeown, Gary; Pammi, Sathish; Pantic, Maja; Pelachaud, Catherine; Schuller, Bjorn; De Sevin, Etienne; Valstar, Michel; Wollmer, Martin
Date(s)

02/12/2015

Abstract

This paper describes a substantial effort to build a real-time interactive multimodal dialogue system with a focus on emotional and non-verbal interaction capabilities. The work is motivated by the aim to provide technology with competences in perceiving and producing the emotional and non-verbal behaviours required to sustain a conversational dialogue. We present the Sensitive Artificial Listener (SAL) scenario as a setting that seems particularly suited to the study of emotional and non-verbal behaviour, since it requires only very limited verbal understanding on the part of the machine. This scenario allows us to concentrate on non-verbal capabilities without having to address, at the same time, the challenges of spoken language understanding, task modelling, etc. We first summarise three prototype versions of the SAL scenario, in which the behaviour of the Sensitive Artificial Listener characters was determined by a human operator. These prototypes served to verify the effectiveness of the SAL scenario and allowed us to collect the data required for building system components that analyse and synthesise the respective behaviours. We then describe the fully autonomous integrated real-time system we created, which combines incremental analysis of user behaviour, dialogue management, and synthesis of speaker and listener behaviour of a SAL character displayed as a virtual agent. We discuss principles that should underlie the evaluation of SAL-type systems. Since the system is designed for modularity and reuse, and since it is publicly available, the SAL system has potential as a joint research tool in the affective computing research community.
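The abstract describes a loop that combines incremental analysis of user behaviour, dialogue management, and synthesis of agent behaviour. A minimal illustrative sketch of such a listener loop is given below; all names and thresholds are hypothetical placeholders, not the actual SEMAINE/SAL API.

```python
# Hypothetical sketch of a SAL-style processing loop: incremental analysis of
# user behaviour feeds a dialogue manager, which drives listener/speaker
# behaviour synthesis. Names and values are illustrative only.

from dataclasses import dataclass


@dataclass
class UserState:
    valence: float   # estimated emotional valence of the user, in [-1, 1]
    speaking: bool   # whether the user currently holds the floor


def analyse(frame: dict) -> UserState:
    """Incremental analysis step: map raw audio/visual features to a user state."""
    return UserState(valence=frame.get("valence", 0.0),
                     speaking=frame.get("speaking", False))


def dialogue_manage(state: UserState) -> str:
    """Pick an agent action: back-channel while the user speaks, reply otherwise."""
    if state.speaking:
        return "nod" if state.valence >= 0 else "frown"
    return "speak"


def synthesise(action: str) -> str:
    """Render the chosen action as agent behaviour (placeholder for a virtual agent)."""
    return f"agent performs: {action}"


# One pass through the loop for a frame where the user speaks with positive affect.
print(synthesise(dialogue_manage(analyse({"valence": 0.5, "speaking": True}))))
# → agent performs: nod
```

The point of the sketch is the separation of concerns the paper emphasises: analysis, dialogue management, and synthesis are independent modules connected by small, well-defined messages, which is what makes the system modular and reusable.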

Identifier

http://pure.qub.ac.uk/portal/en/publications/building-autonomous-sensitive-artificial-listeners-extended-abstract(1186e26d-bbad-40a6-9386-241cc1957352).html

http://dx.doi.org/10.1109/ACII.2015.7344610

http://www.scopus.com/inward/record.url?scp=84964036733&partnerID=8YFLogxK

Language(s)

eng

Publisher

Institute of Electrical and Electronics Engineers Inc.

Rights

info:eu-repo/semantics/restrictedAccess

Source

Schroder, M, Bevacqua, E, Cowie, R, Eyben, F, Gunes, H, Heylen, D, Ter Maat, M, McKeown, G, Pammi, S, Pantic, M, Pelachaud, C, Schuller, B, De Sevin, E, Valstar, M & Wollmer, M 2015, 'Building autonomous sensitive artificial listeners (Extended abstract)', in 2015 International Conference on Affective Computing and Intelligent Interaction, ACII 2015, 7344610, Institute of Electrical and Electronics Engineers Inc., pp. 456-462, Xi'an, China, 21-24 September 2015. DOI: 10.1109/ACII.2015.7344610

Keywords

Artificial Intelligence (ASJC 1702); Computer Vision and Pattern Recognition (ASJC 1707); Human-Computer Interaction (ASJC 1709); Software (ASJC 1712)
Type

contributionToPeriodical