3 results for two-body interaction

in Digital Peer Publishing


Relevance:

100.00%

Publisher:

Abstract:

Imitation learning is a promising approach for generating life-like behaviors of virtual humans and humanoid robots. So far, however, imitation learning has been mostly restricted to single-agent settings where observed motions are adapted to new environmental conditions but not to the dynamic behavior of interaction partners. In this paper, we introduce a new imitation learning approach that is based on the simultaneous motion capture of two human interaction partners. From the observed interactions, low-dimensional motion models are extracted and a mapping between these motion models is learned. This interaction model allows the real-time generation of agent behaviors that are responsive to the body movements of an interaction partner. The interaction model can be applied both to the animation of virtual characters and to behavior generation for humanoid robots.
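The pipeline sketched in the abstract (simultaneous capture of two partners, low-dimensional motion models, a learned mapping between them, real-time response generation) can be illustrated with a minimal code sketch. The snippet below assumes PCA for the motion models and ridge regression for the mapping purely as placeholder choices; it does not reproduce the paper's actual models or data.

```python
# Minimal sketch of the interaction-model idea, assuming PCA motion models
# and a ridge-regression mapping between them (placeholder choices only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

# Synthetic stand-in for simultaneously captured joint-angle trajectories
# of the two interaction partners (frames x joint dimensions).
rng = np.random.default_rng(0)
partner_a = rng.normal(size=(2000, 60))   # observed human partner
partner_b = rng.normal(size=(2000, 60))   # partner whose role the agent takes over

# 1. Extract a low-dimensional motion model for each partner.
pca_a = PCA(n_components=8).fit(partner_a)
pca_b = PCA(n_components=8).fit(partner_b)
latent_a = pca_a.transform(partner_a)
latent_b = pca_b.transform(partner_b)

# 2. Learn a mapping between the two motion models.
mapping = Ridge(alpha=1.0).fit(latent_a, latent_b)

# 3. Real-time generation: project the live partner's pose into the agent's
#    motion space and decode it back to full joint angles.
def respond(observed_pose_a: np.ndarray) -> np.ndarray:
    z_a = pca_a.transform(observed_pose_a.reshape(1, -1))
    z_b = mapping.predict(z_a)
    return pca_b.inverse_transform(z_b)[0]

agent_pose = respond(partner_a[0])        # one responsive agent posture
```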

Relevance:

40.00%

Publisher:

Abstract:

On the basis of a corpus of e-chat IRC exchanges (approximately 10,000 words in total) between Greek- and English-speaking participants, the paper establishes a typical generic structure for two-party IRC exchanges, focusing on how participants orient towards an ideal schema of phases and acts, and on how their interpersonal concerns contribute to the shaping of this schema. It is found that IRC interlocutors are primarily concerned with establishing contact with each other, while the (ideational) development of topic seems to be a less pressing need. The signaling of interpersonal relations is pervasive throughout e-chat discourse, as seen both in the range of devices developed and in the two free elements of the generic schema, that is, conversation play and channel check. It is also found that the accomplishment of the generic schema in each IRC exchange crucially depends on the acts of negotiation performed by the initiator and the responder.

Relevance:

40.00%

Publisher:

Abstract:

The full-body control of virtual characters is a promising technique for application fields such as Virtual Prototyping. However, it is important to assess to what extent the user's full-body behavior is modified when immersed in a virtual environment. In the present study we measured reach durations for two types of task (controlling a simple rigid shape vs. a virtual character) and two types of viewpoint (1st person vs. 3rd person). The paper first describes the architecture of the motion capture approach retained for the on-line full-body reach experiment. We then present reach measurements performed in a non-virtual environment. They show that the target height parameter leads to a reach duration variation of ±25% around the average duration for the highest and lowest targets. This characteristic is highly accentuated in the virtual world, as analyzed in the discussion section. In particular, the discrepancy observed for the first-person viewpoint modality suggests adopting a third-person viewpoint when controlling the posture of a virtual character in a virtual environment.
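As a small illustration of the quoted figure, the snippet below shows how a ±25% deviation around the average reach duration could be computed from per-target mean durations. The durations are hypothetical placeholders, not data from the study, and which target is faster is not specified here.

```python
# Hypothetical per-target mean reach durations (seconds), chosen only to
# illustrate a +/-25% spread around the average; not the study's data.
durations_s = {"lowest": 0.75, "mid-height": 1.00, "highest": 1.25}

average = sum(durations_s.values()) / len(durations_s)   # 1.00 s here

for target, d in durations_s.items():
    deviation_pct = 100.0 * (d - average) / average
    print(f"{target:>10}: {d:.2f} s ({deviation_pct:+.0f}% vs. average)")
```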