3 results for Animations
in Digital Peer Publishing
Abstract:
CampusContent (CC) is a DFG-funded competence center for eLearning with its own portal. It links content and people who support the sharing and reuse of high-quality learning materials and codified pedagogical know-how, such as learning objectives, pedagogical scenarios, recommended learning activities, and learning paths. The heart of the portal is a distributed repository whose contents are linked to various other CampusContent portals. Integrated into each portal are user-friendly tools for designing reusable learning content, exercises, and templates for learning units and courses. Specialized authoring tools permit the configuration, adaptation, and automatic generation of interactive Flash animations using Adobe's Flex Builder technology. More coarse-grained content components, such as complete learning units and entire courses in which contents and materials taken from the repository are embedded, can be created with XML-based authoring tools. Open service interfaces allow the deep or shallow integration of the portal provider's preferred authoring and learning tools. The portal is built on top of the enterprise content management system Alfresco, whose social networking functionality has been adapted to accommodate collaboration, sharing, and reuse within trusted communities of practice.
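As an illustration of the XML-based authoring approach described above, the following is a minimal sketch of what a learning unit referencing repository items might look like. All element and attribute names, as well as the repository URI scheme, are hypothetical; the abstract does not specify the actual CampusContent schema.

    <!-- Hypothetical sketch only: element names, attributes, and the
         ccrepo:// URI scheme are illustrative assumptions, not the
         actual CampusContent authoring format. -->
    <learningUnit id="lu-intro-sorting" language="en">
      <objective>Explain and compare elementary sorting algorithms</objective>
      <scenario type="guided-discovery"/>
      <activity order="1">
        <!-- Fine-grained content is referenced from the repository, not copied -->
        <contentRef uri="ccrepo://animations/bubble-sort"/>
      </activity>
      <activity order="2">
        <contentRef uri="ccrepo://exercises/sorting-quiz-01"/>
      </activity>
    </learningUnit>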
Abstract:
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the objects involved. The XSAMPL3D action description can then be used for the synthesis of animations in which virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g., for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes, and other hand-object relations during grasping. Such detail would be hard to specify with manual motion-authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
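Since the abstract stresses the compact, human-readable XML format, the sketch below suggests one plausible shape of such an action description. The tag and attribute names are invented for illustration; the actual XSAMPL3D schema is defined in the paper itself, not here.

    <!-- Illustrative sketch only: tag and attribute names are assumptions,
         not the published XSAMPL3D schema. -->
    <action type="pickAndPlace" actor="virtualHuman">
      <grasp object="cup" hand="right" handShape="cylindricalPower"/>
      <move object="cup">
        <!-- Trajectory detail captured from the VR demonstration -->
        <trajectory>
          <point t="0.0" x="0.10" y="0.85" z="0.30"/>
          <point t="0.8" x="0.45" y="0.90" z="0.25"/>
        </trajectory>
      </move>
      <release object="cup" target="table"/>
    </action>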
Abstract:
Non-verbal communication (NVC) is considered to represent more than 90 percent of everyday communication. In virtual worlds, this important aspect of interaction between virtual humans (VHs) is strongly neglected. This paper presents a user-test study demonstrating the impact of automatically generated, graphics-based NVC expression on dialog quality: first, we wanted to compare impassive and emotional facial expression simulation and their impact on chatting. Second, we wanted to see whether people like chatting within a 3D graphical environment. Our model proposes only facial expressions and head movements induced from spontaneous chatting between VHs. Only subtle facial expressions, i.e., those related to the emotional model, are used as nonverbal cues. Motion-capture animations of hand gestures, such as cleaning glasses, were used at random to make the virtual human appear lively. After briefly introducing the technical architecture of the 3D-chatting system, we focus on two aspects of chatting through VHs. First, what is the influence of facial expressions that are induced from the text dialog? For this purpose, we exploited a previously developed emotion engine that extracts emotional content from a text and depicts it on a virtual character [GAS11]. Second, as our goal did not address the automatic generation of text, we used a Wizard of Oz approach to compare the impact of nonverbal cues in conversation with a chatbot versus a human operator. Among the main results, the within-group study (involving 40 subjects) suggests that subtle facial expressions have a significant impact not only on the quality of experience but also on dialog understanding.
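To make the described pipeline concrete, the sketch below shows one plausible intermediate representation: a chat message annotated with the emotion extracted by the engine and the resulting subtle facial cue. All names are hypothetical assumptions; the abstract does not specify the data format used by the [GAS11] engine.

    <!-- Hypothetical sketch: this markup is an illustrative assumption,
         not the format used by the [GAS11] emotion engine. -->
    <chatMessage speaker="VH1">
      <text>That is wonderful news, congratulations!</text>
      <!-- Emotional content extracted from the text -->
      <emotion category="joy" intensity="0.4"/>
      <nonverbal>
        <facialExpression preset="subtleSmile" duration="1.2"/>
        <headMovement type="nod" amplitude="small"/>
      </nonverbal>
    </chatMessage>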