2 results for action level

in Digital Peer Publishing


Relevance:

30.00%

Publisher:

Abstract:

This study examines the consequences of living in segregated and mixed neighbourhoods on ingroup bias and offensive action tendencies, taking into consideration the role of intergroup experiences and perceived threat. Using adult data from a cross-sectional survey in Belfast, Northern Ireland, we tested a model that examined the relationship between living in segregated (N = 396) and mixed (N = 562) neighbourhoods and positive contact, exposure to violence, perceived threat and outgroup orientations. Our results show that living in mixed neighbourhoods was associated with lower ingroup bias and reduced offensive action tendencies. These effects were partially mediated by positive contact. However, our analysis also shows that respondents living in mixed neighbourhoods report higher exposure to political violence and higher perceived threat to physical safety. These findings demonstrate the importance of examining both social experience and threat perceptions when testing the relationship between social environment and prejudice.
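For illustration, the partial mediation reported above can be sketched as a product-of-coefficients analysis. The Python snippet below is a minimal sketch only: it uses simulated placeholder data and hypothetical variable names (mixed, contact, bias) and does not reproduce the study's actual measures, sample or model.

# Illustrative sketch only: simulated data and hypothetical variable names,
# not the study's actual measures or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 958  # roughly the combined sample size (396 segregated + 562 mixed)

# Simulated stand-ins: mixed = 1 for a mixed neighbourhood, contact = positive
# contact score, bias = ingroup bias score.
mixed = rng.integers(0, 2, n)
contact = 0.5 * mixed + rng.normal(size=n)
bias = -0.3 * mixed - 0.4 * contact + rng.normal(size=n)
df = pd.DataFrame({"mixed": mixed, "contact": contact, "bias": bias})

# Product-of-coefficients test of partial mediation:
# a: neighbourhood type -> positive contact
a = smf.ols("contact ~ mixed", df).fit().params["mixed"]
# b and c': contact and neighbourhood type -> ingroup bias
m = smf.ols("bias ~ mixed + contact", df).fit()
b, c_direct = m.params["contact"], m.params["mixed"]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_direct:.3f}")

A nonzero indirect effect alongside a remaining direct effect corresponds to the "partially mediated by positive contact" pattern described in the abstract.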

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the involved objects. The XSAMPL3D action description can then be used for the synthesis of animations in which virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
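To make the idea of a compact, human-readable XML encoding more concrete, the following Python snippet assembles a small XSAMPL3D-style action description. It is a sketch under assumptions only: the abstract does not specify the XSAMPL3D schema, so every element and attribute name used here (actionSequence, action, grasp, trajectory, keyPoint) is invented for illustration.

# Hypothetical sketch only: the real XSAMPL3D schema is not given in the
# abstract, so all element and attribute names below are invented.
import xml.etree.ElementTree as ET

def build_action_description():
    """Assemble a small XSAMPL3D-style XML document for a single
    pick-and-place manipulation demonstrated in VR."""
    root = ET.Element("actionSequence", demonstrator="user01")

    # One high-level action together with the object it is performed on.
    action = ET.SubElement(root, "action", type="pickAndPlace", object="cup_1")

    # Hand-object relation during grasping (hand shape as a named posture).
    ET.SubElement(action, "grasp", hand="right", shape="cylindricalGrip")

    # A coarse motion trajectory recorded from the VR demonstration,
    # stored as a list of key positions.
    traj = ET.SubElement(action, "trajectory")
    for x, y, z in [(0.0, 0.0, 0.0), (0.1, 0.2, 0.05), (0.3, 0.2, 0.0)]:
        ET.SubElement(traj, "keyPoint", x=str(x), y=str(y), z=str(z))

    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(build_action_description())

Such a representation keeps the high-level action type and object references readable for manual authoring, while still leaving room for the finer-grained detail (trajectories, hand shapes) that the paper derives automatically from VR demonstrations.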