3 results for compact objects

in Digital Peer Publishing


Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a model-based approach for real-time camera pose estimation in industrial scenarios. The line model which is used for tracking is generated by rendering a polygonal model and extracting contours out of the rendered scene. By un-projecting a point on the contour with the depth value stored in the z-buffer, the 3D coordinates of the contour can be calculated. For establishing 2D/3D correspondences the 3D control points on the contour are projected into the image and a perpendicular search for gradient maxima for every point on the contour is performed. Multiple hypotheses of 2D image points corresponding to a 3D control point make the pose estimation robust against ambiguous edges in the image.
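The un-projection step described above can be sketched with a standard pinhole camera model. This is a minimal illustration, not the paper's implementation: it assumes the z-buffer value has already been converted from normalized device depth to view-space depth, and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are hypothetical.

```python
def unproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with view-space depth into 3D camera
    coordinates via the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def project(x, y, z, fx, fy, cx, cy):
    """Forward projection of a camera-space point back to pixel coordinates;
    used to place 3D control points into the image for the 2D/3D search."""
    return (fx * x / z + cx, fy * y / z + cy)

# A contour point at the principal point un-projects onto the optical axis.
p = unproject(320.0, 240.0, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

Projecting the recovered 3D point with the same intrinsics returns the original pixel, which is the consistency the 2D/3D correspondence search relies on.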

Relevance:

20.00%

Publisher:

Abstract:

In this paper, from the perspective of Cognitive Grammar, we consider the question of what kind of verbs can take cognate objects (COs) and what kind of verbs cannot. We investigate the syntactic properties of COs, such as the ability to take modifiers, the passivizability of cognate object constructions (COCs), and the it-pronominalization of COs. It is our contention that a detailed classification of verbs that occur in COCs is required in order to capture the relation between the syntactic properties and the modification of COs. While classifying verbs, we focus on three conceptual factors: the force of energy of the subject, a change of state of the subject, and the objectivity of the cognate noun. The study reveals that these three parameters enable us to capture the difference in the interpretation of COs in relation to modification and syntactic tests.

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as action representation language in an imitation-based approach to character animation: First, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and involved objects. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques only. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
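To make the idea of a compact, human-readable XML action description concrete, the sketch below parses a small action sequence. The element and attribute names (`actionSequence`, `grasp`, `move`, `release`, `object`, `hand`) are invented for illustration only and are not the actual XSAMPL3D schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XSAMPL3D-style document: action types as elements,
# involved objects as attributes. The vocabulary here is assumed,
# not taken from the XSAMPL3D specification.
doc = """
<actionSequence>
  <grasp object="cup" hand="right"/>
  <move object="cup"/>
  <release object="cup" hand="right"/>
</actionSequence>
"""

root = ET.fromstring(doc)
# Recover the high-level action sequence: (action type, manipulated object).
actions = [(step.tag, step.get("object")) for step in root]
```

Because the representation is plain XML, such descriptions can be hand-authored with any text editor, which is the rapid-prototyping path the abstract mentions for animators without a VR setup.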