995 results for Virtual Humans
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is strongly neglected. This paper presents a user-test study demonstrating the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we compared impassive and emotional facial expression simulation for their impact on chatting; second, we examined whether people like chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions, i.e. those related to the emotional model, are used as nonverbal cues. Motion-capture animations of hand gestures, such as cleaning glasses, were triggered randomly to make the virtual human lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from text and depicts it on a virtual character [GAS11]. Second, as our goal was not the automatic generation of text, we compared the impact of nonverbal cues in conversation with a chatbot and with a human operator using a Wizard-of-Oz approach. Among the main results, the within-group study, involving 40 subjects, suggests that subtle facial expressions significantly affect not only the quality of experience but also dialog understanding.
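The pipeline the abstract describes, extracting emotional content from chat text and mapping it to a subtle facial expression, can be sketched as a toy valence classifier. The keyword lexicon, thresholds, and expression names below are invented for illustration; they are not the emotion engine of [GAS11].

```python
# Toy sketch: map chat text to a subtle facial-expression cue.
# Lexicon values and expression names are illustrative assumptions,
# not the actual model of the paper's emotion engine [GAS11].

LEXICON = {"great": 0.8, "happy": 0.7, "sad": -0.7, "awful": -0.9, "fine": 0.2}

def valence(text):
    """Average the valence of known words in the utterance."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def expression(text, threshold=0.3):
    """Pick a subtle expression from the utterance's valence."""
    v = valence(text)
    if v > threshold:
        return "slight_smile"
    if v < -threshold:
        return "slight_frown"
    return "neutral"

print(expression("That is great news!"))   # slight_smile
print(expression("The weather is awful"))  # slight_frown
```

A real engine would use a far richer affect model than a flat word list, but the shape of the mapping, text in, subtle expression label out, is the same.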
Abstract:
We introduce an innovative, semi-automatic method to transform low-resolution facial meshes into high-definition ones, based on tailoring a generic, neutral human head model, designed by an artist, to fit the facial features of a specific person. To determine these facial features, we select a set of "control points" (corners of eyes, lips, etc.) in at least two photographs of the subject's face. The neutral head mesh is then automatically reshaped according to the relation between the control points in the original subject's mesh through a set of transformation pyramids. The last step consists of merging both meshes and filling the gaps that appear in the previous process. This algorithm avoids the use of expensive and complicated technologies for obtaining depth maps, which would also need to be meshed later.
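The core idea, reshaping a generic mesh so that selected control points land on a subject's features while the remaining vertices follow smoothly, can be illustrated with a much simpler scheme than the paper's transformation pyramids. The sketch below uses plain inverse-distance weighting of control-point displacements, a simplified stand-in chosen only to show the idea.

```python
import math

# Simplified stand-in for control-point-driven mesh deformation.
# The paper tailors a generic head via "transformation pyramids";
# inverse-distance weighting (IDW) is used here instead, purely
# to illustrate the general control-point idea.

def deform(vertices, controls_src, controls_dst, power=2.0):
    """Move each vertex by an IDW blend of control-point offsets."""
    offsets = [(d[0]-s[0], d[1]-s[1], d[2]-s[2])
               for s, d in zip(controls_src, controls_dst)]
    out = []
    for v in vertices:
        wsum, ox, oy, oz = 0.0, 0.0, 0.0, 0.0
        snapped = False
        for s, o in zip(controls_src, offsets):
            dist = math.dist(v, s)
            if dist < 1e-9:            # vertex sits exactly on a control point
                out.append((v[0]+o[0], v[1]+o[1], v[2]+o[2]))
                snapped = True
                break
            w = dist ** -power
            wsum += w
            ox += w*o[0]; oy += w*o[1]; oz += w*o[2]
        if not snapped:
            out.append((v[0]+ox/wsum, v[1]+oy/wsum, v[2]+oz/wsum))
    return out

# A control point moved by (1, 0, 0) drags nearby vertices along.
print(deform([(0.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)]))
```

Vertices coincident with a control point follow it exactly; all others receive a distance-weighted blend of the control offsets, which is the smooth-interpolation property the tailoring step relies on.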
Abstract:
The article presents the design process of intelligent virtual human patients that are used for the enhancement of clinical skills. The description covers the development from conceptualization and character creation to technical components and the application in clinical research and training. The aim is to create believable social interactions with virtual agents that help the clinician to develop skills in symptom and ability assessment, diagnosis, interview techniques and interpersonal communication. The virtual patient fulfills the requirements of a standardized patient producing consistent, reliable and valid interactions in portraying symptoms and behaviour related to a specific clinical condition.
Abstract:
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as an action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the objects involved. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques only. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
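A compact, human-readable XML action description of the kind the abstract describes can be sketched as follows. The element and attribute names below are invented for illustration; they are not the actual XSAMPL3D schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML action description in the spirit of XSAMPL3D.
# Element and attribute names are made up for this sketch; the
# real XSAMPL3D schema is not reproduced here.

xml_src = """
<action type="PickAndPlace" actor="vh1">
  <object id="cup"/>
  <grasp hand="right" shape="cylindrical"/>
  <target position="0.4 0.1 0.9"/>
</action>
"""

root = ET.fromstring(xml_src)
print(root.get("type"))                  # PickAndPlace
print(root.find("object").get("id"))     # cup
print(root.find("grasp").get("shape"))   # cylindrical
```

Because the format is plain XML, the same description can be emitted automatically from a VR demonstration or written by hand for rapid prototyping, which is exactly the dual use the abstract highlights.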
Abstract:
This paper describes the basic concepts needed for Java programs to invoke libraries written in C/C++ through the JNA API. We used a C/C++ library called Glass [8], which offers a solution for viewing 3D graphics on graphics clusters, reducing the cost of visualization. The purpose of this work is to interact, through Java, with a humanoid that performs movements of the LIBRAS sign language for the deaf, rendered by Glass, so that deaf users can view the information in full-size, stereoscopic multi-view. ©2010 IEEE.
Abstract:
Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single-agent settings where observed motions are adapted to new environment conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
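The interaction-model idea, learn a mapping from one partner's low-dimensional motion coordinates to the other's, then use it to drive an agent online, can be sketched in miniature. A real system would extract multi-dimensional motion models from motion capture; here a one-dimensional linear map fitted by closed-form least squares stands in, with made-up data.

```python
# Minimal sketch of the interaction-model idea: fit a mapping from
# partner A's low-dimensional motion coordinate to partner B's
# response, then reuse it to generate agent behavior in real time.
# The 1-D linear model and the data are illustrative assumptions.

def fit_linear(xs, ys):
    """Closed-form least squares for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs)/n, sum(ys)/n
    a = sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / \
        sum((x-mx)**2 for x in xs)
    return a, my - a*mx

# Observed pairs: A's coordinate -> B's simultaneous response
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # B happens to follow y = 2x + 1

a, b = fit_linear(xs, ys)

def agent_response(x):
    """Map the partner's current coordinate to the agent's."""
    return a*x + b

print(agent_response(1.5))  # 4.0
```

At runtime only the cheap forward map is evaluated per frame, which is what makes real-time responsiveness to the partner's movements feasible.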
Abstract:
The use of serious games in education and their pedagogical benefit is widely recognized. However, effective integration of serious games in education depends on addressing two big challenges: the successful incorporation of motivation and engagement that can lead to learning, and the highly specialised skills associated with customised development to meet the required pedagogical objectives. This paper presents the Westminster Serious Games Platform (wmin-SGP), an authoring tool that allows educators and domain experts without games design and development skills to create bespoke roleplay simulations in three-dimensional scenes featuring fully embodied virtual humans capable of verbal and non-verbal interaction with users, fit for specific educational objectives. The paper presents the wmin-SGP system architecture and evaluates its effectiveness via the implementation of two roleplay simulations, one for Politics and one for Law. In addition, it presents the results of two types of evaluation addressing how successfully the wmin-SGP combines usability principles with the game core drives of the Octalysis gamification framework that lead to motivating game experiences. The evaluation results show that the wmin-SGP provides an intuitive environment and tools that support users without advanced technical skills in creating, in real time, bespoke roleplay simulations with advanced graphical interfaces; satisfies most of the usability principles; and provides balanced simulations based on the Octalysis framework core drives. The paper concludes with a discussion of future extensions of this real-time authoring tool and directions for further development of the Octalysis framework to address learning.
Abstract:
199 p.
Abstract:
Humans use three distinct sensory systems to regulate upright stance: somatosensation, the vestibular system, and the visual system. The role of vision in postural regulation remains poorly understood, notably its variability as a function of age, developmental type, and neurological insult. In this work, visually induced postural regulation was assessed in typically developing and normally aging participants aged 5-85 years, in autistic individuals (atypical development) aged 12-33 years, and in children aged 9-18 years who had sustained a mild traumatic brain injury (mTBI). To this end, participants' postural reactivity in response to a fully immersive virtual tunnel moving at three velocities was measured; control conditions, in which the tunnel was static or absent, were included. The results show that visually induced postural reactivity (i.e. instability) is highest in young children; it then decreases, reaching adult values around 16-19 years of age, increases linearly with age after 45 years, and becomes high again around 60 years. Moreover, at the highest tunnel velocity, the youngest autistic participants showed significantly less postural reactivity than their controls; this difference was not present in older participants (16-33 years). Finally, children with mild TBI who were initially moderately symptomatic showed a higher level of visually induced postural instability than controls up to 12 weeks post-trauma, even though most of them (89%) were no longer symptomatic by that point.
Overall, this suggests an important transition period in the maturation of the systems underlying the sensorimotor integration involved in postural control around age 16, and further sensorimotor changes around age 60; this visual over-dependence for postural regulation in children and seniors could guide the design of spaces and activities adjusted to individuals' age. Furthermore, the fact that postural hypo-reactivity to visual information in autistic individuals depends on the characteristics of the visual environment and on chronological age refines our understanding of the sensory anomalies specific to autism. Moreover, the fact that children with mild TBI show postural anomalies up to 3 months post-trauma, despite a significant decrease in reported symptoms, could be related to altered processing of dynamic visual information and could have implications for the clinical management of mTBI patients, since symptom resolution is currently the main criterion used for return-to-activity decisions. Finally, the results obtained in a population with atypical development (autism) and a population with a so-called transient neurological insult (mild TBI) not only contribute to a better understanding of the sensorimotor integration mechanisms underlying postural control, but could also serve as sensitive and specific markers of dysfunction in these populations. Keywords: posture, balance, vision, sensorimotor development/aging, autism, symptomatic mild TBI, virtual reality.
Abstract:
This paper outlines how the Ortelia project’s 3D virtual reality models have the capacity to assist our understanding of sites of cultural heritage. The VR investigation of such spaces can be a valuable tool in 'real world' empirical research in theatre and spatiality. Through a demonstration of two of Ortelia's VR models (an art gallery and a theatre), we suggest how we might consider interpreting cultural space and sites as contributing significantly to cultural capital. We also introduce the potential for human interaction in such venues through motion-capture to discuss the potential for assessing how humans interact in such contexts.
Abstract:
This paper reports the results of a mixed-method approach to the question: to what extent do cultural values impact e-service use in Saudi Arabia, and if so, how? The paper first introduces the importance of culture and defines the aspects of Saudi culture within our scope: the fear of a lack of Interaction with other Humans. It then describes the method used and presents the qualitative and quantitative findings related to the need for Interaction with other Humans. Much of the literature on human interaction is aimed at Information Systems design or design improvement; this differs from what is investigated in this study. One of the factors this study considers is the perceived lack of interaction with other humans, or the anxiety people may feel at missing physical interaction with other people when business interaction moves fully to the virtual world. The review of the literature indicates that the impact of this factor on the use of Information and Communication Technologies (ICT) has not been studied. This research aims to cover this gap by investigating to what extent the fear of a lack of Interaction with other Humans, as one of Saudi Arabia's cultural values, impacts e-service use in Saudi Arabia. The tested hypothesis was found consistent with its predicted outcome: the fear of a lack of Interaction with other Humans is a negative predictor of intention to use e-services in Saudi Arabia. The evidence indicates that considering the impact of these cultural values will contribute substantially to the enhancement of ICT implementation and use.
Abstract:
Humans, like other animals, alter their behavior depending on whether a threat is close or distant. We investigated spatial imminence of threat by developing an active avoidance paradigm in which volunteers were pursued through a maze by a virtual predator endowed with an ability to chase, capture, and inflict pain. Using functional magnetic resonance imaging, we found that as the virtual predator grew closer, brain activity shifted from the ventromedial prefrontal cortex to the periaqueductal gray. This shift showed maximal expression when a high degree of pain was anticipated. Moreover, imminence-driven periaqueductal gray activity correlated with increased subjective degree of dread and decreased confidence of escape. Our findings cast light on the neural dynamics of threat anticipation and have implications for the neurobiology of human anxiety-related disorders.
Abstract:
Meng, Q., & Lee, M. (2003). Adapting Home Service Robot Behaviours by Experience Reuse and Interaction with Humans (pp. 673-678). Paper presented at the IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Port Island, Kobe, Japan.
Abstract:
A multitude of tasks that we perform on a daily basis require precise information about the orientation of our limbs with respect to the environment and the objects located within it. Recent studies have suggested that the inertia tensor, a physical property whose values are time- and coordinate-independent, may be an important informational invariant used by the proprioceptive system to control the movements of our limbs (Pagano et al., Ecol. Psychol. 8 (1996) 43; Pagano and Turvey, Percept. Psychophys. 52 (1992) 617; Pagano and Turvey, J. Exp. Psychol. Hum. Percept. Perform. 21 (1995) 1070). We tested this hypothesis by recording the angular errors made by subjects when pointing to virtual targets in the dark. Close examination of the pointing errors did not show any significant effect of the inertia tensor modifications on pointing accuracy. The kinematics of the pointing movements did not indicate that any on-line adjustments were being made to compensate for the inertia tensor changes. The implications of these findings with respect to the functioning of the proprioceptive system are discussed.
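The coordinate independence that makes the inertia tensor a candidate informational invariant can be checked numerically: rotating the coordinate frame changes the tensor's individual entries but leaves its invariants, such as its trace, unchanged. The masses and positions below are made up for the demonstration.

```python
import math

# Numerical check of the coordinate independence noted above: the
# trace of the inertia tensor of a set of point masses is unchanged
# by a rotation of the coordinate frame. Masses and positions are
# arbitrary example values.

def inertia_tensor(masses, points):
    """3x3 inertia tensor of point masses about the origin."""
    I = [[0.0] * 3 for _ in range(3)]
    for m, p in zip(masses, points):
        r2 = sum(c * c for c in p)
        for i in range(3):
            for j in range(3):
                I[i][j] += m * ((r2 if i == j else 0.0) - p[i] * p[j])
    return I

def rotate_z(points, theta):
    """Rotate each point about the z-axis by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c*x - s*y, s*x + c*y, z) for x, y, z in points]

masses = [1.0, 2.0, 0.5]
points = [(0.3, 0.1, 0.4), (-0.2, 0.5, 0.1), (0.0, -0.3, 0.6)]

t1 = sum(inertia_tensor(masses, points)[i][i] for i in range(3))
t2 = sum(inertia_tensor(masses, rotate_z(points, 0.7))[i][i] for i in range(3))
print(abs(t1 - t2) < 1e-9)  # True
```

The same invariance holds for the tensor's eigenvalues, which is why quantities derived from the inertia tensor are plausible frame-independent signals for proprioception.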