946 results for 3D object manipulation


Relevance: 100.00%

Abstract:

To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.
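
The distinction between the two hypotheses can be made concrete with a toy linear model of object dynamics. The sketch below is not from the study: the viscous field B, the rotation angle, and the velocity are arbitrary illustrative values. It shows what each coordinate frame predicts for the force-motion mapping after the grasped object is rotated in the hand.

```python
# Toy comparison of the two coordinate-frame hypotheses when a grasped object
# is rotated by R in the hand. B is a purely illustrative linear mapping from
# hand velocity to force, standing in for learned object dynamics.
import numpy as np

B = np.diag([5.0, 1.0])                          # anisotropic viscous field (arbitrary units)

theta = np.pi / 2                                # rotate the object 90 degrees in the hand
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

object_centered = R @ B @ R.T                    # mapping predicted to rotate with the object
arm_centered = B                                 # mapping predicted to stay tied to the arm

velocity = np.array([1.0, 0.0])                  # hand moving along x
print("object-centered predicted force:", object_centered @ velocity)
print("arm-centered predicted force:   ", arm_centered @ velocity)
```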

Relevance: 100.00%

Abstract:

Tracking methods have the potential to retrieve the spatial locations of project-related entities such as personnel and equipment on construction sites, which can facilitate several construction management tasks. Existing tracking methods are mainly based on Radio Frequency (RF) technologies and thus require manual deployment of tags. On construction sites with numerous entities, tag installation, maintenance, and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. To address these limitations, this paper proposes an alternative vision-based 3D tracking method. It operates by tracking the designated object in 2D video frames and correlating the tracking results from multiple pre-calibrated views using epipolar geometry. The methodology presented in this paper has been implemented and tested on videos taken under controlled experimental conditions. The results are compared with the actual 3D positions to validate its performance.
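
The multi-view step described above can be sketched with a minimal linear (DLT) triangulation of one tracked point from two calibrated views. This is not the paper's implementation; the camera matrices and pixel coordinates below are hypothetical values chosen only to make the example runnable.

```python
# Minimal two-view triangulation (linear DLT): lift a 2D track seen in two
# pre-calibrated cameras to a 3D position. All numbers are illustrative.
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from two 3x4 projection matrices and the
    corresponding (u, v) pixel coordinates in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # solve A X = 0 in the least-squares sense
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenize

# Hypothetical calibration: identical intrinsics, second camera shifted 1 m along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

print("estimated 3D position:", triangulate_point(P1, P2, x1=(400, 260), x2=(250, 260)))
```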

Relevance: 100.00%

Abstract:

We present the Multi Scale Shape Index (MSSI), a novel feature for 3D object recognition. Inspired by scale-space filtering theory and the Shape Index measure proposed by Koenderink & van Doorn [6], this feature associates different shape forms, such as umbilics, saddle regions, and parabolic regions, with a real-valued index. This association is useful for representing an object in terms of its constituent shape forms. We derive closed-form scale-space equations that compute a characteristic scale at each 3D point in a point cloud without an explicit mesh structure. This characteristic scale is then used to estimate the Shape Index. We quantitatively evaluate the robustness and repeatability of the MSSI feature under varying object scales and changing point cloud density. We also quantify the performance of MSSI for object category recognition on a publicly available dataset. © 2013 Springer-Verlag.
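
For reference, the underlying Shape Index maps the two principal curvatures at a surface point to a single value in [-1, 1]. The sketch below shows only that mapping; the characteristic-scale selection that makes MSSI multi-scale is not reproduced, and curvature sign conventions vary between papers.

```python
# Classical Shape Index of Koenderink & van Doorn, computed from the two
# principal curvatures; roughly +1 for domes, +0.5 for ridges, 0 for saddles,
# -0.5 for ruts, -1 for cups. The companion "curvedness" measures bending strength.
import numpy as np

def shape_index(k1, k2):
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    # arctan2 handles umbilic points (k1 == k2) gracefully; planar points
    # (k1 == k2 == 0) are formally undefined and come out as 0 here.
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

print(shape_index(1.0, 1.0))     # convex umbilic (dome): +1
print(shape_index(1.0, -1.0))    # symmetric saddle: 0
print(shape_index(-1.0, -1.0))   # concave umbilic (cup): -1
```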

Relevance: 100.00%

Abstract:

This paper describes the main features of a view-based model of object recognition. The model tries to capture general properties to be expected in a biological architecture for object recognition. The basic module is a regularization network in which each of the hidden units is broadly tuned to a specific view of the object to be recognized.
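
A regularization network of this kind can be sketched as a small Gaussian RBF model in which each hidden unit is centered on one stored view and the output is a weighted sum of the unit activations. The feature vectors and tuning width below are arbitrary choices for illustration, not parameters of the model described above.

```python
# Minimal view-based RBF network: hidden units broadly tuned to stored views.
import numpy as np

class ViewBasedRBF:
    def __init__(self, stored_views, sigma=1.0):
        self.centers = np.asarray(stored_views, dtype=float)   # one row per training view
        self.sigma = sigma
        self.weights = np.ones(len(self.centers)) / len(self.centers)

    def hidden_activations(self, x):
        # Gaussian tuning: activity falls off with distance from each stored view.
        d2 = np.sum((self.centers - np.asarray(x, dtype=float)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))

    def response(self, x):
        # Network output: linear combination of the view-tuned hidden units.
        return float(self.weights @ self.hidden_activations(x))

# Toy example: two stored "views" of an object as 3-D feature vectors.
net = ViewBasedRBF(stored_views=[[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]], sigma=0.5)
print(net.response([0.1, 0.0, 0.9]))   # close to a stored view -> high response
print(net.response([5.0, 5.0, 5.0]))   # far from all stored views -> near zero
```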

Relevance: 100.00%

Abstract:

A persistent issue of debate in the area of 3D object recognition concerns the nature of the experientially acquired object models in the primate visual system. One prominent proposal advocates object-centered models, such as representations of an object's 3D structure in a coordinate frame independent of the viewing parameters [Marr and Nishihara, 1978]. In contrast, another proposal suggests that the viewing parameters encountered during the learning phase may be inextricably linked to subsequent performance on a recognition task [Tarr and Pinker, 1989; Poggio and Edelman, 1990]. The 'object model', according to this idea, is simply a collection of the sample views encountered during training. Because object-centered recognition strategies have the attractive feature of leading to viewpoint independence, they have garnered much of the research effort in the field of computational vision. Furthermore, since human recognition performance seems remarkably robust in the face of imaging variations [Ellis et al., 1989], it has often been implicitly assumed that the visual system employs an object-centered strategy. In the present study we examine this assumption more closely. Our experimental results with a class of novel 3D structures strongly suggest the use of a view-based strategy by the human visual system even when it has the opportunity to construct and use object-centered models. In fact, for our chosen class of objects, the results seem to support a stronger claim: 3D object recognition is 2D view-based.

Relevance: 100.00%

Abstract:

Many 3D objects in the world around us are strongly constrained. For instance, not only cultural artifacts but also many natural objects are bilaterally symmetric. Theoretical arguments suggest, and psychophysical experiments confirm, that humans may be better at recognizing symmetric objects. The hypothesis of symmetry-induced virtual views, together with a network model that successfully accounts for human recognition of generic 3D objects, leads to predictions that we have verified with psychophysical experiments.
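
The virtual-view idea can be stated compactly: for an object symmetric about a plane, a view learned from direction v implies, for free, the view from the mirrored direction, up to a left-right flip of the image. The sketch below illustrates only this geometric point; the viewing directions are hypothetical.

```python
# Virtual views from bilateral symmetry: reflect a learned viewing direction
# about the object's symmetry plane (here taken to be x = 0).
import numpy as np

M = np.diag([-1.0, 1.0, 1.0])                   # reflection about the x = 0 plane

def virtual_view_direction(view_dir):
    """Mirrored viewing direction implied by bilateral symmetry."""
    return M @ np.asarray(view_dir, dtype=float)

learned_view = np.array([0.6, 0.0, 0.8])        # direction of a trained view
print(virtual_view_direction(learned_view))     # -> [-0.6  0.   0.8]
```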

Relevance: 100.00%

Abstract:

Tabletop computers featuring multi-touch input and object tracking are a common platform for research on Tangible User Interfaces (also known as Tangible Interaction). However, such systems are confined to sensing activity on the tabletop surface, disregarding the rich and relatively unexplored interaction canvas above the tabletop. This dissertation contributes tCAD, a 3D modeling tool that combines fiducial marker tracking, finger tracking, and depth sensing in a single system. It presents the technical details of how these features were integrated, attesting to their viability through the design, development, and early evaluation of the tCAD application. A key aspect of this work is a description of the interaction techniques enabled by merging tracked objects with direct user input on and above a table surface.

Relevance: 100.00%

Abstract:

Somatosensory object discrimination has been shown to involve widespread cortical and subcortical structures in both cerebral hemispheres. In this study we aimed to identify the networks involved in tactile object manipulation by applying principal component analysis (PCA) to data from individual subjects. We expected to find more than one network.
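
As a generic illustration of the analysis strategy, the sketch below applies PCA to a synthetic scans-by-regions data matrix and reads each component as a candidate "network" of covarying regions. The random data are a stand-in; the study's measurements and preprocessing are not reproduced here.

```python
# PCA on a scans x regions matrix: each principal component is a spatial
# pattern of regions whose activity covaries across scans. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((40, 20))            # 40 scans x 20 regions (synthetic)

centered = data - data.mean(axis=0)             # remove each region's mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

explained = (S ** 2) / np.sum(S ** 2)           # variance explained per component
networks = Vt                                   # each row: one spatial pattern over regions

print("variance explained by first two components:", explained[:2])
print("first candidate 'network' (region loadings):", networks[0])
```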