958 results for 2D-3D calibration
Abstract:
Tracking methods have the potential to retrieve the spatial location of project-related entities such as personnel and equipment at construction sites, which can facilitate several construction management tasks. Existing tracking methods are mainly based on Radio Frequency (RF) technologies and thus require manual deployment of tags. On construction sites with numerous entities, tag installation, maintenance, and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. To address these limitations, this paper proposes an alternative vision-based 3D tracking method. It operates by tracking the designated object in 2D video frames and correlating the tracking results from multiple pre-calibrated views using epipolar geometry. The methodology presented in this paper has been implemented and tested on videos taken in controlled experimental conditions. Results are compared with the actual 3D positions to validate its performance.
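The multi-view correlation step this abstract describes can be sketched with standard linear (DLT) triangulation from two pre-calibrated views; the projection matrices and point below are synthetic stand-ins, not the paper's setup.

```python
# Minimal sketch: recover a tracked object's 3D position from its 2D
# locations in two calibrated views (the core geometric step the
# abstract builds on). P1, P2 and the test point are illustrative.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation from two 3x4 projection matrices
    and the matched 2D image points x1, x2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two cameras 1 m apart along x, point at (0.2, 0.1, 3).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 3.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the estimate matches the true position; in practice the 2D tracker's error propagates through this step, which is why the paper validates against known 3D positions.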
Abstract:
Most quasi-static ultrasound elastography methods image only the axial strain, derived from displacements measured in the direction of ultrasound propagation. In other directions, the beam lacks high resolution phase information and displacement estimation is therefore less precise. However, these estimates can be improved by steering the ultrasound beam through multiple angles and combining displacements measured along the different beam directions. Previously, beamsteering has only considered the 2D case to improve the lateral displacement estimates. In this paper, we extend this to 3D using a simulated 2D array to steer both laterally and elevationally in order to estimate the full 3D displacement vector over a volume. The method is tested on simulated and phantom data using a simulated 6-10 MHz array, and the precision of displacement estimation is measured with and without beamsteering. In simulations, we found a statistically significant improvement in the precision of lateral and elevational displacement estimates: lateral precision 35.69 μm unsteered, 3.70 μm steered; elevational precision 38.67 μm unsteered, 3.64 μm steered. Similar results were found in the phantom data: lateral precision 26.51 μm unsteered, 5.78 μm steered; elevational precision 28.92 μm unsteered, 11.87 μm steered. We conclude that volumetric 3D beamsteering improves the precision of lateral and elevational displacement estimates.
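The combination step described here, merging displacements measured along several steered beam directions into one 3D displacement vector, can be sketched as a small least-squares problem; the steering angles below are illustrative, not the paper's actual steering set.

```python
# Hedged sketch: each steered beam measures the projection of the tissue
# displacement onto its own axis, m_i = u_i . d. Stacking the unit beam
# directions gives an overdetermined linear system for the 3D vector d.
import numpy as np

def combine_steered(directions, projections):
    """directions: (n,3) beam direction vectors; projections: (n,)
    per-beam axial displacement measurements. Returns the 3D
    displacement d solving U d = m in the least-squares sense."""
    U = np.asarray(directions, dtype=float)
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    d, *_ = np.linalg.lstsq(U, np.asarray(projections, float), rcond=None)
    return d

# Synthetic check: one unsteered beam plus four beams steered +/-15
# degrees laterally and elevationally; displacement in micrometres.
t = np.deg2rad(15.0)
dirs = [(0, 0, 1),
        ( np.sin(t), 0, np.cos(t)), (-np.sin(t), 0, np.cos(t)),
        (0,  np.sin(t), np.cos(t)), (0, -np.sin(t), np.cos(t))]
d_true = np.array([3.0, -2.0, 10.0])
m = [np.dot(u, d_true) for u in dirs]
d_est = combine_steered(dirs, m)
```

Because every beam remains close to the axial direction, the lateral and elevational components are recovered from small differences between precise axial measurements, which is the mechanism behind the precision gains reported above.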
Abstract:
Photoluminescence spectroscopy has been used to investigate self-assembled InAs islands in InAlAs grown on InP(0 0 1) by molecular beam epitaxy, in correlation with transmission electron microscopy. The nominal deposition of 3.6 monolayers of InAs at 470 °C achieves the onset stage of coherent island formation. In addition to one strong emission around 0.74 eV, the sample displays several emission peaks at 0.87, 0.92, 0.98, and 1.04 eV. Fully developed islands that coexist with semi-finished disk islands account for the multipeak emission. These results provide strong evidence of size quantization effects in InAs islands. © 1999 Elsevier Science B.V. All rights reserved.
Abstract:
A portable 3D laser scanning system has been designed and built for robot vision. By tilting the charge coupled device (CCD) plane of the portable 3D scanning system according to the Scheimpflug condition, the depth-of-view is successfully extended from less than 40 mm to 100 mm. Based on the tilted camera model, the traditional two-step camera calibration method is modified by introducing the angle factor. Meanwhile, a novel segmental calibration approach, i.e., dividing the whole working range into two parts and calibrating each separately with its own system parameters, is proposed to effectively improve the measurement accuracy of the large depth-of-view 3D laser scanner. In the process of 3D reconstruction, different calibration parameters are used to transform the 2D coordinates into 3D coordinates according to the position of the image in the CCD plane, and a measurement accuracy of 60 μm is obtained experimentally. Finally, an experiment scanning a lamina with the large depth-of-view portable 3D laser scanner mounted on an IRB 4400 industrial robot is also employed to demonstrate the effectiveness and high measurement accuracy of our scanning system. © 2007 Elsevier Ltd. All rights reserved.
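The segmental-calibration idea, selecting a different parameter set depending on where a point falls on the CCD, can be sketched as below; the per-segment linear map H is a deliberately simplified stand-in for the paper's tilted (Scheimpflug) camera model, and all values are illustrative.

```python
# Illustrative sketch only: the working range is split into two parts,
# each with its own calibration, and the parameter set is chosen by the
# image row of the laser-stripe pixel before mapping to 3D.
import numpy as np

def pixel_to_3d(u, v, segments):
    """Map a 2D stripe pixel (u, v) to 3D using the calibration of the
    segment containing it. Each segment = (v_min, v_max, H), H being a
    3x3 homography from pixel coords to laser-plane (x, z) coords."""
    for v_min, v_max, H in segments:
        if v_min <= v < v_max:
            p = H @ np.array([u, v, 1.0])
            x, z = p[:2] / p[2]
            return np.array([x, 0.0, z])   # laser plane taken as y = 0
    raise ValueError("pixel outside calibrated range")

# Two toy segments with slightly different scale factors, as would come
# out of calibrating each half of the depth-of-view separately.
H_near = np.diag([0.10, 0.10, 1.0])
H_far  = np.diag([0.12, 0.12, 1.0])
segments = [(0, 240, H_near), (240, 480, H_far)]
p_near = pixel_to_3d(100, 120, segments)
p_far  = pixel_to_3d(100, 360, segments)
```

The point of the split is that one global parameter set cannot fit the whole extended depth-of-view accurately, while each segment can be fit tightly on its own.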
Abstract:
A fascinating 3D polycatenane-like metal-organic framework with two kinds of helical chains is reported, in which the helical chains exhibit multiple interweaving modes based on the unusual 2D → 2D parallel → 3D parallel interpenetration.
Abstract:
A polynomial-time algorithm (pruned correspondence search, PCS) with good average-case performance for solving a wide class of geometric maximal matching problems, including the problem of recognizing 3D objects from a single 2D image, is presented. Efficient verification algorithms, based on a linear representation of location constraints, are given for the case of affine transformations among vector spaces and for the case of rigid 2D and 3D transformations with scale. Some preliminary experiments suggest that PCS is a practical algorithm. Its similarity to existing correspondence-based algorithms means that a number of existing speedup techniques can be incorporated into PCS to improve its performance.
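The verification step for the affine case can be sketched as follows: fit the affine transform implied by a hypothesized set of model-image correspondences in a least-squares sense and accept only if every point lands within tolerance. Function names and data are ours, not from the paper.

```python
# Hedged sketch of affine verification via linear location constraints:
# a correspondence hypothesis is consistent iff some affine transform
# maps every model point near its hypothesized image point.
import numpy as np

def verify_affine(model_pts, image_pts, tol):
    """model_pts: (n,2) model coords; image_pts: (n,2) observed coords.
    Returns (consistent?, max_residual)."""
    M = np.asarray(model_pts, float)
    I = np.asarray(image_pts, float)
    A = np.hstack([M, np.ones((len(M), 1))])      # row [x y 1] per point
    T, *_ = np.linalg.lstsq(A, I, rcond=None)     # 3x2 affine parameters
    resid = np.linalg.norm(A @ T - I, axis=1)
    return bool(resid.max() <= tol), float(resid.max())

# Synthetic check: image points are an exact affine image of the model,
# so the hypothesis verifies with near-zero residual.
model = [(0, 0), (1, 0), (0, 1), (1, 1)]
L = np.array([[1.2, 0.3], [-0.2, 0.9]])           # linear part
t = np.array([5.0, -2.0])                         # translation
image = [tuple(L @ np.array(p) + t) for p in model]
ok, err = verify_affine(model, image, tol=1e-6)
```

Because verification is a linear least-squares solve, it is cheap enough to run inside the pruned search over candidate correspondences.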
Abstract:
Similarity measurements between 3D objects and 2D images are useful for the tasks of object recognition and classification. We distinguish between two types of similarity metrics: metrics computed in image-space (image metrics) and metrics computed in transformation-space (transformation metrics). Existing methods typically use image metrics, computed between the image and the nearest view of the object. An example of such a measure is the Euclidean distance between feature points in the image and corresponding points in the nearest view. (Computing this measure is equivalent to solving the exterior orientation calibration problem.) In this paper we introduce a different type of metric: transformation metrics. These metrics penalize the deformations applied to the object to produce the observed image. We present a transformation metric that optimally penalizes "affine deformations" under weak-perspective. A closed-form solution, together with the nearest view according to this metric, are derived. The metric is shown to be equivalent to the Euclidean image metric, in the sense that they bound each other from above and below. For the Euclidean image metric we offer a sub-optimal closed-form solution and an iterative scheme to compute the exact solution.
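The image-metric idea in this setting can be sketched as below: under an affine viewing model, the nearest view of the 3D object is the least-squares linear map plus translation onto the image points, and the metric is the residual distance. This mirrors the Euclidean image metric the abstract describes, though the paper's closed-form and iterative solutions differ in detail.

```python
# Sketch: distance between a 3D point set and a 2D image under an
# affine (weak-perspective plus affine deformation) viewing model.
import numpy as np

def image_metric(obj_pts, img_pts):
    """obj_pts: (n,3) 3D feature points; img_pts: (n,2) image points.
    Returns (distance, nearest_view) for the affine image metric."""
    O = np.hstack([np.asarray(obj_pts, float),
                   np.ones((len(obj_pts), 1))])   # row [X Y Z 1]
    I = np.asarray(img_pts, float)
    T, *_ = np.linalg.lstsq(O, I, rcond=None)     # 4x2 affine view
    nearest = O @ T
    return float(np.linalg.norm(nearest - I)), nearest

# Synthetic check: the image is an exact affine view of the object, so
# the distance is essentially zero.
obj = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
view = np.array([[1.0, 0.2, 0.0], [0.1, 0.9, 0.3]])   # 2x3 projection
img = obj @ view.T + np.array([4.0, -1.0])
dist, nearest = image_metric(obj, img)
```

A transformation metric would instead measure how far the fitted map `T` is from a valid weak-perspective view (a scaled orthographic projection), rather than measuring residuals in the image.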
Abstract:
We propose an affine framework for perspective views, captured by a single, extremely simple equation based on a viewer-centered invariant we call "relative affine structure". Via a number of corollaries of our main results we show that our framework unifies previous work, including the Euclidean, projective, and affine cases, in a natural and simple way, and introduces new, extremely simple algorithms for the tasks of reconstruction from multiple views, recognition by alignment, and certain image coding applications.
Abstract:
An approach for estimating 3D body pose from multiple, uncalibrated views is proposed. First, a mapping from image features to 2D body joint locations is computed using a statistical framework that yields a set of several body pose hypotheses. The concept of a "virtual camera" is introduced that makes this mapping invariant to translation, image-plane rotation, and scaling of the input. As a consequence, the calibration matrices (intrinsics) of the virtual cameras can be considered completely known, and their poses are known up to a single angular displacement parameter. Given pose hypotheses obtained in the multiple virtual camera views, the recovery of 3D body pose and camera relative orientations is formulated as a stochastic optimization problem. An Expectation-Maximization algorithm is derived that can obtain the locally most likely (self-consistent) combination of body pose hypotheses. Performance of the approach is evaluated with synthetic sequences as well as real video sequences of human motion.
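The "virtual camera" normalization this abstract describes, making the feature-to-joint mapping invariant to translation, image-plane rotation, and scale, can be sketched as a similarity normalization of the 2D joints; the choice of reference joints below is illustrative, not the paper's.

```python
# Hedged sketch: normalize 2D joint locations so that any translated,
# rotated (in-plane) and scaled copy of a pose maps to the same input,
# which is what lets the virtual camera intrinsics be treated as known.
import numpy as np

def normalize_joints(joints, root=0, ref=1):
    """joints: (n,2). Translate the root joint to the origin, rotate so
    the root->ref segment points along +y, and scale it to unit length."""
    J = np.asarray(joints, float)
    J = J - J[root]
    v = J[ref]
    s = np.linalg.norm(v)
    c, sn = v[1] / s, v[0] / s          # rotation taking v onto +y
    R = np.array([[c, -sn], [sn, c]])
    return (J @ R.T) / s

# Any similarity transform of the input yields the same normalized pose.
pose = np.array([[0, 0], [0, 2], [1, 1], [-1, 1.0]])
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
moved = pose @ Rz.T * 3.0 + np.array([10.0, -4.0])
n1 = normalize_joints(pose)
n2 = normalize_joints(moved)
```

After this normalization only the out-of-plane rotation remains unknown, which is why each virtual camera's pose is determined up to a single angular displacement parameter.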