34 results for choreography for the camera

in Cambridge University Engineering Department Publications Database


Relevance: 90.00%

Abstract:

This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it using a hand-held camera. The technique introduced here uses only the silhouettes of the sculpture for both motion estimation and model reconstruction, and neither corner detection nor matching is necessary. This is very important as most sculptures are composed of smooth textureless surfaces, and hence their silhouettes are very often the only information available from their images. Moreover, unlike previous works, the proposed technique does not require the camera motion to be perfectly circular (e.g., turntable sequence). It employs an image rectification step before the motion estimation step to obtain a rough estimate of the camera motion which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures which cannot be rotated on a turntable, making it much more practical and flexible.
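
As an aside on the silhouette-only cue this abstract relies on, the following is a minimal Python/OpenCV sketch of extracting a silhouette from a single frame. It assumes a bright, roughly uniform background and OpenCV 4; the authors' pipeline, which also estimates the camera motion from the silhouettes, is considerably more involved.

```python
import cv2

def extract_silhouette(image_path, background_thresh=240):
    """Illustrative silhouette extraction: returns a binary mask and the
    outer contour of the largest dark region against a bright background.
    This is only the image-measurement step, not the motion estimation."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Pixels darker than the background threshold are treated as foreground.
    _, mask = cv2.threshold(gray, background_thresh, 255, cv2.THRESH_BINARY_INV)
    # Keep the largest outer contour as the sculpture's silhouette (OpenCV >= 4).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outline = max(contours, key=cv2.contourArea)
    return mask, outline
```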

Relevance: 90.00%

Abstract:

Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
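
The triangulation step can be illustrated directly with OpenCV. The sketch below assumes the calibration and pose-estimation steps have already produced the intrinsics and the relative pose of the long-baseline stereo pair; it shows how matched 2D pixel tracks are lifted to 3D metric coordinates, and is not the authors' implementation.

```python
import cv2
import numpy as np

def to_3d(pts_left, pts_right, K_left, K_right, R, t):
    """Triangulate corresponding pixel coordinates (Nx2 arrays) from a
    calibrated stereo pair. R, t give the pose of the right camera with
    respect to the left one; K_left, K_right are the 3x3 intrinsics."""
    # Projection matrices: left camera at the origin, right camera at (R, t).
    P_left = K_left @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K_right @ np.hstack([R, t.reshape(3, 1)])
    # OpenCV expects 2xN arrays and returns homogeneous 4xN points.
    X_h = cv2.triangulatePoints(P_left, P_right,
                                np.asarray(pts_left, dtype=float).T,
                                np.asarray(pts_right, dtype=float).T)
    return (X_h[:3] / X_h[3]).T  # Nx3 metric coordinates
```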

Relevance: 80.00%

Abstract:

This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast, but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow, but more accurate. The integration of the coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable. © 2006 IEEE.
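
The coarse stage is described as a text-retrieval-style vector space model. The sketch below assumes each location is summarised by a visual-word histogram and uses tf-idf weighting with cosine similarity, which is typical of such models but is an assumption rather than a detail given in the abstract.

```python
import numpy as np

def rank_locations(query_hist, location_hists):
    """Coarse localization sketch: rank candidate locations by cosine
    similarity between tf-idf weighted visual-word histograms.
    location_hists is (n_locations x n_words); query_hist is (n_words,)."""
    n_locations = location_hists.shape[0]
    # Inverse document frequency: down-weight words seen at many locations.
    df = np.count_nonzero(location_hists > 0, axis=0)
    idf = np.log(n_locations / np.maximum(df, 1))
    weighted_db = location_hists * idf
    weighted_q = query_hist * idf
    sims = weighted_db @ weighted_q / (
        np.linalg.norm(weighted_db, axis=1) * np.linalg.norm(weighted_q) + 1e-12)
    return np.argsort(-sims)  # most similar locations first
```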

Relevance: 80.00%

Abstract:

In this paper, we propose a vision-based mobile robot localization strategy. Local scale-invariant features are used as natural landmarks in unstructured and unmodified environments. The local characteristics of the features we use prove to be robust to occlusion and outliers. In addition, the invariance of the features to viewpoint change makes them suitable landmarks for mobile robot localization. Scale-invariant features detected in the first exploration are indexed into a location database. Indexing and voting enable efficient global localization. The localization result is verified by the epipolar geometry between the representative view in the database and the view to be localized, thus decreasing the probability of false localization. The localization system can recover the pose of the camera mounted on the robot by essential matrix decomposition. The position of the robot can then be computed easily. Both calibrated and uncalibrated cases are discussed, and relative position estimation based on a calibrated camera turns out to be the better choice. Experimental results show that our approach is effective and reliable in the case of illumination changes, similarity transformations and extraneous features. © 2004 IEEE.
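
For the calibrated case, pose recovery by essential matrix decomposition can be sketched with OpenCV as follows; the function calls are standard OpenCV, but the interface and RANSAC threshold here are illustrative rather than the paper's.

```python
import cv2

def relative_pose(pts_db, pts_query, K):
    """Estimate the essential matrix from matched features (Nx2 pixel
    coordinates) between the database view and the query view, then
    decompose it into a rotation R and a unit-norm translation t.
    The translation scale is unrecoverable from two views alone."""
    E, mask = cv2.findEssentialMat(pts_db, pts_query, K,
                                   method=cv2.RANSAC, threshold=1.0)
    n_inliers, R, t, _ = cv2.recoverPose(E, pts_db, pts_query, K, mask=mask)
    return R, t, n_inliers
```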

Relevance: 80.00%

Abstract:

This paper addresses the problem of recovering the 3D shape of a surface of revolution from a single uncalibrated perspective view. The algorithm introduced here makes use of the invariant properties of a surface of revolution and its silhouette to locate the image of the revolution axis, and to calibrate the focal length of the camera. The image is then normalized and rectified such that the resulting silhouette exhibits bilateral symmetry. Such a rectification leads to a simpler differential analysis of the silhouette, and yields a simple equation for depth recovery. It is shown that under a general camera configuration, there will be a 2-parameter family of solutions for the reconstruction. The first parameter corresponds to an unknown scale, whereas the second one corresponds to an unknown attitude of the object. By identifying the image of a latitude circle, the ambiguity due to the unknown attitude can be resolved. Experimental results on real images are presented, which demonstrate the quality of the reconstruction. © 2004 Elsevier B.V. All rights reserved.
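
A small sketch of the normalisation idea: once the image of the revolution axis has been located, the image can be rotated so that the axis becomes vertical, which is what makes the silhouette's bilateral symmetry explicit. The paper's rectification also uses the calibrated focal length; the code below shows only the rotation part and is illustrative.

```python
import cv2
import numpy as np

def rotate_axis_vertical(img, p0, p1):
    """Rotate the image so the line through p0 and p1 (two pixel points on
    the detected image of the revolution axis) becomes vertical."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    # Angle between the axis direction and the image y-axis.
    angle = np.degrees(np.arctan2(dx, dy))
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -angle, 1.0)
    return cv2.warpAffine(img, M, (w, h))
```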

Relevance: 80.00%

Abstract:

This Chapter presents a vision-based system for touch-free interaction with a display at a distance. A single camera is fixed on top of the screen and points towards the user. An attention mechanism allows the user to start the interaction and control a screen pointer by moving their hand in a fist pose directed at the camera. On-screen items can be chosen by a selection mechanism. Current sample applications include browsing video collections as well as viewing a gallery of 3D objects, which the user can rotate with their hand motion. We have included an up-to-date review of hand tracking methods, and comment on the merits and shortcomings of previous approaches. The proposed tracker uses multiple cues (appearance, color, and motion) for robustness. As the space of possible observation models is generally too large for exhaustive online search, we select models that are suitable for the particular tracking task at hand. During a training stage, various off-the-shelf trackers are evaluated. From these data, different methods of fusing them online are investigated, including parallel and cascaded tracker evaluation. For the case of fist tracking, combining a small number of observers in a cascade results in an efficient algorithm that is used in our gesture interface. The system has been on public display at conferences, where over a hundred users have engaged with it. © 2010 Springer-Verlag Berlin Heidelberg.
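
Cascaded tracker evaluation can be sketched as follows. The observer interface and confidence threshold are hypothetical and only illustrate the idea of running cheap observers first and falling back to more expensive ones when they fail.

```python
def cascaded_track(frame, prev_box, observers, min_confidence=0.5):
    """observers: list of (name, track_fn) ordered from cheapest to most
    expensive; each track_fn(frame, prev_box) returns (box, confidence).
    Later observers run only when earlier ones are not confident enough."""
    for name, track_fn in observers:
        box, confidence = track_fn(frame, prev_box)
        if confidence >= min_confidence:
            return box, confidence, name
    # No observer was confident enough: report the target as lost.
    return None, 0.0, None
```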

Relevance: 80.00%

Abstract:

Videogrammetry is an inexpensive and easy-to-use technology for spatial 3D scene recovery. When applied to large scale civil infrastructure scenes, only a small percentage of the collected video frames are required to achieve robust results. However, choosing the right frames requires careful consideration. Videotaping a built infrastructure scene results in large video files filled with blurry, noisy, or redundant frames. This is due to frame rate to camera speed ratios that are often higher than necessary; camera and lens imperfections and limitations that result in imaging noise; and occasional jerky motions of the camera that result in motion blur; all of which can significantly affect the performance of the videogrammetric pipeline. To tackle these issues, this paper proposes a novel method for automating the selection of an optimized number of informative, high quality frames. According to this method, as the first step, blurred frames are removed using thresholds determined by the minimum level of frame quality required to obtain robust results. Then, an optimum number of key frames are selected from the remaining frames using the selection criteria devised by the authors. Experimental results show that the proposed method outperforms existing methods in terms of improved 3D reconstruction results, while maintaining the optimum number of extracted frames needed to generate high quality 3D point clouds. © 2012 Elsevier Ltd. All rights reserved.
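
The blur-removal step can be sketched with a standard sharpness score. The variance-of-Laplacian measure and the fixed threshold below are common proxies for frame quality and are not necessarily the metric or thresholds used by the authors.

```python
import cv2

def keep_sharp_frames(frames, min_sharpness=100.0):
    """Discard blurred frames: score each frame by the variance of its
    Laplacian and keep only frames whose score exceeds the threshold."""
    kept = []
    for idx, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness >= min_sharpness:
            kept.append((idx, frame))
    return kept
```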

Relevance: 80.00%

Abstract:

The tomographic reconstruction of OH* chemiluminescence was performed on two interacting turbulent premixed bluff-body stabilized flames under steady flow conditions and acoustic excitation. These measurements elucidate the complex three-dimensional (3D) vortex-flame interactions which have previously not been accessible. The experiment was performed using a single camera and intensifier, with multiple views acquired by repositioning the camera, permitting calculation of the mean and phase-averaged volumetric OH* distributions. The reconstructed flame structure and phase-averaged dynamics are compared with OH planar laser-induced fluorescence and flame surface density measurements for the first time. The volumetric data revealed that the large-scale vortex-flame structures formed along the shear layers of each flame collide when the two flames meet, resulting in complex 3D flame structures in between the two flames. With a fairly simple experimental setup, it is shown that the tomographic reconstruction of OH* chemiluminescence in forced flames is a powerful tool that can yield important physical insights into large-scale 3D flame dynamics that are important in combustion instability. © 2013 IOP Publishing Ltd.
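
Tomographic reconstruction from a modest number of views is commonly posed as an algebraic inversion of line-of-sight-integrated intensities. The sketch below uses a relaxed algebraic reconstruction technique (ART) iteration with a non-negativity constraint; it is one possible solver and not necessarily the one used in this work.

```python
import numpy as np

def art_reconstruct(A, b, n_iters=50, relax=0.1):
    """A: (n_pixels x n_voxels) weight matrix relating voxel emission to the
    path-integrated camera intensities b. Returns the non-negative emission
    field x that approximately satisfies A x = b."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A * A, axis=1) + 1e-12
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
        np.maximum(x, 0.0, out=x)  # chemiluminescence emission cannot be negative
    return x
```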

Relevance: 40.00%

Abstract:

Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
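
Key-frame selection can be sketched by tracking feature overlap between candidate frames; the match-ratio band below is illustrative and not the paper's exact criterion.

```python
import cv2

def select_key_frames(frames, min_ratio=0.3, max_ratio=0.7):
    """Add a new key frame whenever the share of ORB features still matched
    to the last key frame falls into a target band: enough parallax for
    reliable triangulation, but enough overlap for matching."""
    orb = cv2.ORB_create(nfeatures=2000)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    def describe(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return orb.detectAndCompute(gray, None)

    key_frames = [0]
    kp_ref, des_ref = describe(frames[0])
    for i in range(1, len(frames)):
        kp, des = describe(frames[i])
        if des is None or des_ref is None:
            continue
        matches = matcher.match(des_ref, des)
        ratio = len(matches) / max(len(kp_ref), 1)
        if min_ratio <= ratio <= max_ratio:
            key_frames.append(i)
            kp_ref, des_ref = kp, des
    return key_frames
```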

Relevance: 30.00%