975 results for camera trapping


Relevance: 20.00%

Abstract:

This paper presents a novel technique for reconstructing an outdoor sculpture from an uncalibrated image sequence acquired around it using a hand-held camera. The technique introduced here uses only the silhouettes of the sculpture for both motion estimation and model reconstruction, and no corner detection or matching is necessary. This is very important as most sculptures are composed of smooth textureless surfaces, and hence their silhouettes are very often the only information available from their images. Moreover, unlike previous work, the proposed technique does not require the camera motion to be perfectly circular (e.g., a turntable sequence). It employs an image rectification step before the motion estimation step to obtain a rough estimate of the camera motion, which is only approximately circular. A refinement process is then applied to obtain the true general motion of the camera. This allows the technique to handle large outdoor sculptures that cannot be rotated on a turntable, making it much more practical and flexible.

Relevance: 20.00%

Abstract:

Super-resolution imaging techniques such as Fluorescent Photo-Activation Localisation Microscopy (FPALM) have created a powerful new toolkit for investigating living cells; however, a simple platform for growing, trapping, holding and controlling the cells is needed before the approach can become truly widespread. We present a microfluidic device formed in polydimethylsiloxane (PDMS) with a fluidic design that traps cells in a high-density array of wells and holds them very still throughout the life cycle, using hydrodynamic forces only. The device meets or exceeds all the necessary criteria for FPALM imaging of Schizosaccharomyces pombe and is designed to remain flexible, robust and easy to use. © 2011 IEEE.

Relevance: 20.00%

Abstract:

We study optical trapping of nanotubes and graphene. We extract the distribution of both centre-of-mass and angular fluctuations from three-dimensional tracking of these optically trapped carbon nanostructures. The optical force and torque constants are measured from auto- and cross-correlation of the tracking signals. We demonstrate that nanotubes enable nanometer spatial and femto-Newton force resolution in photonic force microscopy by accurately measuring the radiation pressure in a double frequency optical tweezers. Finally, we integrate optical trapping with Raman and photoluminescence spectroscopy, demonstrating the use of a Raman and photoluminescence tweezers by investigating the spectroscopy of nanotubes and graphene flakes in solution. Experimental results are compared with calculations based on electromagnetic scattering theory. © 2011 by the Author(s); licensee Accademia Peloritana dei Pericolanti, Messina, Italy.
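The correlation-based measurement of force constants can be sketched numerically. The snippet below is an illustration, not the authors' analysis code: it simulates an overdamped trapped particle as an Ornstein-Uhlenbeck process (time step, relaxation time, and sample count are made-up values) and recovers the relaxation time tau = gamma/k from the autocorrelation of the simulated tracking signal.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau, n = 0.01, 1.0, 200_000   # time step, true relaxation time, samples

# Euler-Maruyama simulation of the overdamped motion x' = -x/tau + noise,
# scaled so the stationary variance is approximately 1
x = np.empty(n)
x[0] = 0.0
noise = np.sqrt(2 * dt / tau) * rng.standard_normal(n)
for i in range(1, n):
    x[i] = x[i - 1] * (1 - dt / tau) + noise[i]

# the autocorrelation decays as C(t) = C(0) * exp(-t / tau); one lag m*dt
# then gives tau_est = -m*dt / log(C(m*dt) / C(0))
m = 100
x0 = x - x.mean()
c0 = np.dot(x0, x0) / n
cm = np.dot(x0[:-m], x0[m:]) / (n - m)
tau_est = -m * dt / np.log(cm / c0)
```

With the trap stiffness k known from an independent calibration, the drag coefficient gamma = k * tau follows from the same fit.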

Relevance: 20.00%

Abstract:

Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration of a camera system is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind, vibrations, or an unintentional push/touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally meant for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints; the intrinsic and extrinsic properties of the cameras are then acquired from this calculation. Test results are presented along with recommendations for further improvement.
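The fundamental-matrix step can be illustrated in isolation. The sketch below is not the paper's implementation; the synthetic scene and camera parameters are made-up values. It estimates F from point correspondences with the classic normalized 8-point algorithm and verifies the epipolar constraint x2^T F x1 = 0.

```python
import numpy as np

def normalize(pts):
    # translate to the centroid and scale mean distance to sqrt(2)
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]],
                  [0, s, -s * c[1]],
                  [0, 0, 1]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([n2[:, :1] * n1, n2[:, 1:2] * n1, n1])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)          # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                    # undo the normalization
    return F / np.linalg.norm(F)

# synthetic scene: 20 random 3D points seen by two slightly rotated,
# translated cameras (all values are illustrative)
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(-1, 1, (20, 2)),
                     rng.uniform(4, 6, 20),
                     np.ones(20)])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
th = 0.05
R = np.array([[np.cos(th), 0, np.sin(th)],
              [0, 1, 0],
              [-np.sin(th), 0, np.cos(th)]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, [[-0.5], [0.0], [0.0]]])

x1 = (P1 @ X.T).T; x1 = x1[:, :2] / x1[:, 2:]
x2 = (P2 @ X.T).T; x2 = x2[:, :2] / x2[:, 2:]

F = eight_point(x1, x2)
h1 = np.column_stack([x1, np.ones(20)])
h2 = np.column_stack([x2, np.ones(20)])
residual = np.abs(np.sum(h2 * (h1 @ F.T), axis=1)).max()
```

The point normalization is essential in practice: skipping it makes the linear system badly conditioned for pixel-scale coordinates.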

Relevance: 20.00%

Abstract:

Camera motion estimation is one of the most significant steps in structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" and reconstructed using an off-the-shelf camera capturing imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions were estimated from the corresponding image points by applying the aforementioned algorithms, and the results were evaluated.
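Whichever estimation algorithm is chosen, the motion itself is read off the resulting essential matrix E = [t]_x R via its SVD, which yields the four well-known candidate poses. The sketch below is illustrative only (the ground-truth rotation and translation are made-up values, not the paper's data):

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def decompose_essential(E):
    # SVD-based factorization; returns the four candidate (R, t) poses,
    # with t recovered only up to scale and sign
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# made-up ground truth: small rotation about the y axis, unit translation
th = 0.1
R_true = np.array([[np.cos(th), 0, np.sin(th)],
                   [0, 1, 0],
                   [-np.sin(th), 0, np.cos(th)]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true

# the true motion must appear among the four candidates (t up to sign)
err = min(np.linalg.norm(Rc - R_true)
          + min(np.linalg.norm(tc - t_true), np.linalg.norm(tc + t_true))
          for Rc, tc in decompose_essential(E))
```

In a full pipeline the correct candidate among the four is selected by a cheirality test, keeping the pose that places triangulated points in front of both cameras.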

Relevance: 20.00%

Abstract:

Commercial far-range (>10 m) infrastructure spatial data collection methods are not completely automated. They need a significant amount of manual post-processing work, and in some cases the equipment costs are significant. This paper presents a method that is the first step of a stereo videogrammetric framework and holds the promise of addressing these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also presented to indicate the performance differences of the proposed method in the level of automation and the accuracy of results.
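The "spatial coordinates are then computed" step is a standard two-view triangulation. A minimal DLT sketch, with made-up camera parameters rather than the paper's calibration, recovers a 3D point from its two projections:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # homogeneous DLT: each view contributes two rows of A X = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]     # dehomogenize

# illustrative calibrated pair: shared intrinsics, 0.3 m baseline
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-0.3], [0.0], [0.0]]])

# project a known 3D point into both views, then triangulate it back
X_true = np.array([0.4, -0.2, 5.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

Running this per matched feature pair produces exactly the kind of sparse 3D point cloud the framework builds on.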

Relevance: 20.00%

Abstract:

Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
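One common sharpness criterion usable for the motion-blur and key-frame-selection step (an assumption for illustration, not necessarily the paper's exact criterion) is the variance of the image Laplacian, which drops sharply for blurred frames:

```python
import numpy as np

def laplacian_variance(img):
    # 4-neighbour discrete Laplacian; higher variance means a sharper frame
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

# stand-in frames: high-frequency texture vs a horizontally blurred copy
rng = np.random.default_rng(0)
sharp = rng.uniform(0.0, 1.0, (64, 64))
# crude horizontal motion blur: average the frame with shifted copies
blurred = sum(np.roll(sharp, s, axis=1) for s in range(-2, 3)) / 5.0

score_sharp = laplacian_variance(sharp)
score_blurred = laplacian_variance(blurred)
```

A key-frame selector would then keep only frames whose score exceeds a threshold (and that overlap sufficiently with the previous key frame) before feeding them to structure from motion.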

Relevance: 20.00%

Abstract:

Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
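For the special case of a rectified stereo pair, the 2D-to-3D conversion described above reduces to the familiar disparity relation Z = fB/d. The helper below is a hypothetical illustration; the focal length, baseline, principal point, and tracked pixel are made-up values, not the site setup from the paper:

```python
import numpy as np

def pixel_to_metric(u, v, d, f=1200.0, B=2.0, cx=960.0, cy=540.0):
    """Rectified-pair back-projection: pixel (u, v) with disparity d (px)
    to metric (X, Y, Z) in the left-camera frame, given focal length f (px),
    baseline B (m) and principal point (cx, cy). All defaults are assumed."""
    Z = f * B / d              # depth from disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# a tracked entity at pixel (1100, 600) with 120 px disparity
p = pixel_to_metric(u=1100.0, v=600.0, d=120.0)
```

The long baseline the paper argues for shows up directly here: for fixed depth Z, disparity d = fB/Z grows with B, so a given pixel-level tracking error translates into a smaller metric depth error.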

Relevance: 20.00%

Abstract:

We use laser beams with radial and azimuthal polarization to optically trap carbon nanotubes. We measure force constants and trap parameters as a function of power showing improved axial trapping efficiency with respect to linearly polarized beams. The analysis of the thermal fluctuations highlights a significant change in the optical trapping potential when using cylindrical vector beams. This enables the use of polarization states to shape optical traps according to the particle geometry, as well as paving the way to nanoprobe-based photonic force microscopy with increased performance compared to a standard linearly polarized configuration. © 2012 Optical Society of America.
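A standard route from thermal fluctuations to a force constant, as in the analysis above, is the equipartition relation k = k_B T / var(x). The snippet below is a toy illustration on simulated tracking data; the temperature and fluctuation amplitude are assumptions, not measured values from the paper:

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0              # assumed room temperature, K

# stand-in tracking signal: Gaussian fluctuations with 20 nm std dev
rng = np.random.default_rng(0)
x = rng.normal(0.0, 20e-9, 100_000)    # position along one trap axis, m

# equipartition: (1/2) k <x^2> = (1/2) kB T  =>  k = kB T / var(x)
k = kB * T / x.var()                   # trap force constant, N/m
```

Applying this per axis to real tracking signals is what makes the comparison between cylindrical vector beams and linearly polarized beams quantitative: a stiffer axial trap shows up directly as a larger axial k.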

Relevance: 20.00%

Abstract:

We present a system for augmenting depth camera output using multispectral photometric stereo. The technique is demonstrated using a Kinect sensor and is able to produce geometry independently for each frame. Improved reconstruction is demonstrated using the Kinect's inbuilt RGB camera, and further improvements are achieved by introducing an additional high-resolution camera. As well as qualitative improvements in reconstruction, a quantitative reduction in temporal noise is shown. As part of the system, an approach is presented for relaxing the assumption of multispectral photometric stereo that scenes are of constant chromaticity to the assumption that scenes contain multiple piecewise-constant chromaticities.
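Photometric stereo rests on the Lambertian model I = rho * (L · n) per pixel; the multispectral variant obtains the three measurements in a single frame through differently coloured lights. A minimal single-pixel sketch with made-up light directions and albedo (not the paper's setup):

```python
import numpy as np

# three known unit light directions, one per row (illustrative values)
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])

# made-up ground truth for one pixel: unit normal n and albedo rho
n_true = np.array([0.2, -0.1, 1.0])
n_true /= np.linalg.norm(n_true)
rho = 0.7
I = rho * L @ n_true          # the three observed intensities

# photometric stereo: solve L @ g = I, where g = rho * n
g = np.linalg.solve(L, I)
rho_est = np.linalg.norm(g)   # albedo is the magnitude of g
n_est = g / rho_est           # normal is its direction
```

The constant-chromaticity assumption the paper relaxes enters here: with coloured lights, rho differs per channel, so recovering n requires the per-surface chromaticity to be known, whether globally constant or piecewise constant per region.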