8 results for Projector-Camera system
in the Cambridge University Engineering Department Publications Database
Abstract:
Calibration of a camera system is a necessary step in any stereometric process. It relates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration is the only way to calibrate camera systems in civil engineering operations that require stereometric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind, vibrations, or an unintentional push or touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all the constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally developed for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which encodes the epipolar constraints, and the intrinsic and extrinsic properties of the cameras are recovered from this calculation. Test results are presented with recommendations for further improvement.
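For illustration only, the epipolar step described in the abstract can be sketched with standard tools: the snippet below estimates the fundamental matrix from matched features between two frames and keeps the inliers that satisfy the epipolar constraint. It is a minimal sketch of that one step, not the paper's self-calibration method; the feature detector, thresholds, and function names are assumptions.

    # Sketch: estimate the fundamental matrix F from two frames and keep
    # correspondences consistent with the epipolar constraint x2^T F x1 = 0.
    # Recovering intrinsics/extrinsics from F (the paper's contribution) is
    # not reproduced here.
    import cv2
    import numpy as np

    def fundamental_from_frames(img1, img2):
        # Detect and match features (ORB is an arbitrary choice for the sketch).
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # Robust RANSAC estimate; the mask flags inlier correspondences.
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        inliers = mask.ravel() == 1
        return F, pts1[inliers], pts2[inliers]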
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
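As a concrete illustration of the final triangulation step (not code from the paper; the function and variable names are assumptions), the sketch below converts matched 2D pixel coordinates from the two views into 3D metric coordinates, assuming the calibration and pose-estimation steps have already produced an intrinsic matrix K and a pose (R, t) for each camera.

    # Sketch of the triangulation step using OpenCV.
    import cv2
    import numpy as np

    def triangulate(K1, R1, t1, K2, R2, t2, pts1, pts2):
        # Projection matrices P = K [R | t] for the two fixed cameras.
        P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])
        P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])
        # pts1, pts2: 2xN arrays of the entity's pixel coordinates in each view.
        X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous points
        return (X_h[:3] / X_h[3]).T                      # Nx3 metric coordinates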
Abstract:
Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel-path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates according to the installed camera system (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
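As an illustration of the 2D tracking step only (the paper's detector and tracker are not specified here; this is a stand-in under a simple slow-motion assumption, and all names are hypothetical), entity detections can be associated across consecutive frames of one camera view by nearest centroid:

    # Sketch: greedy nearest-centroid association of detections across frames.
    import numpy as np

    def associate(prev_centroids, new_centroids, max_dist=50.0):
        # prev_centroids: Nx2, new_centroids: Mx2 pixel coordinates.
        # Returns (previous index, new index) pairs for matched entities.
        pairs, used = [], set()
        for i, p in enumerate(prev_centroids):
            d = np.linalg.norm(new_centroids - p, axis=1)
            j = int(np.argmin(d))
            if d[j] < max_dist and j not in used:
                pairs.append((i, j))
                used.add(j)
        return pairs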
A holographic projection system with an electrically tuning and continuously adjustable optical zoom
Abstract:
A holographic projection system with optical zoom is demonstrated. By using a combination of an LC lens and an encoded Fresnel lens on the LCoS panel, we can control the zoom of a holographic projector. The magnification can be electrically adjusted by tuning the focal length of the combination of the two lenses. The zoom ratio of the holographic projection system can reach 3.7:1 with a continuous zoom function. The optical zoom function can decrease the complexity of the holographic projection system.
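As a first-order way to see the zoom relation (an assumption about the layout, not a statement of this paper's exact optical design): treating the LC lens and the encoded Fresnel lens as two thin lenses separated by a small distance d, the effective power and focal length follow the standard combination

    P_{eff} = P_{LC} + P_{Fresnel} - d \, P_{LC} P_{Fresnel}, \qquad f_{eff} = 1 / P_{eff}

and, for a Fourier-type holographic projector with pixel pitch p, the replay image width scales roughly as W \approx \lambda f_{eff} / p. Electrically tuning P_LC therefore changes f_eff and hence the magnification, and the zoom ratio is approximately f_{eff,max} / f_{eff,min}.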
Abstract:
We present a system for augmenting depth camera output using multispectral photometric stereo. The technique is demonstrated using a Kinect sensor and is able to produce geometry independently for each frame. Improved reconstruction is demonstrated using the Kinect's inbuilt RGB camera, and further improvements are achieved by introducing an additional high-resolution camera. In addition to qualitative improvements in reconstruction, a quantitative reduction in temporal noise is shown. As part of the system, an approach is presented for relaxing the usual multispectral photometric stereo assumption that scenes are of constant chromaticity to the weaker assumption that scenes contain multiple piecewise-constant chromaticities.
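The core per-pixel solve behind multispectral photometric stereo under the constant-chromaticity, Lambertian assumption mentioned in the abstract can be sketched as follows (an illustrative stand-in, not the paper's implementation; the light-direction matrix L and all names are assumptions): the three colour channels are effectively lit from three different directions, giving three linear equations per pixel.

    # Sketch: per-pixel multispectral photometric stereo under a Lambertian,
    # constant-chromaticity assumption.
    import numpy as np

    def normals_from_rgb(rgb, L):
        # rgb: HxWx3 image; L: 3x3 matrix whose rows are the effective light
        # directions for the R, G and B channels. Returns HxWx3 unit normals.
        H, W, _ = rgb.shape
        n = np.linalg.solve(L, rgb.reshape(-1, 3).T).T   # albedo absorbed into |n|
        norm = np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-8)
        return (n / norm).reshape(H, W, 3)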
Abstract:
A tunable liquid crystal (LC) lens designed for a holographic projection system is demonstrated. By using a single patterned-electrode LC lens, a solid lens and an encoded Fresnel lens on the LCoS panel, we can maintain the image size of the holographic projector with different wavelengths (λ = 674 nm, 532 nm, and 445 nm). The zoom ratio of the holographic projection system depends on the lens power of the solid lens and the tunable lens power of the LC lens. The optical zoom function can help to solve the image-size mismatch problem of the holographic projection system. © 2013 SPIE.
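One standard way to see the wavelength mismatch, under a Fourier-hologram assumption (not necessarily the exact configuration of this paper): for an LCoS panel with pixel pitch p behind optics of effective focal length f_eff, the replay image width scales as

    W(\lambda) \approx \frac{\lambda \, f_{eff}}{p}

so the red, green and blue reconstructions differ in size unless f_eff is made wavelength-dependent with f_{eff}(\lambda) \propto 1/\lambda; the tunable LC lens can supply this wavelength-dependent part of the lens power on top of the fixed solid lens.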
Abstract:
BACKGROUND: Despite the widespread use of sensors in engineering systems such as robots and automation systems, the common paradigm is a fixed sensor morphology tailored to a specific application. On the other hand, robotic systems are expected to operate in ever more uncertain environments. To cope with this challenge, it is worth noting that biological systems show the importance of suitable sensor morphology and active sensing capability for handling different kinds of sensing tasks with particular requirements. METHODOLOGY: This paper presents a robotic active sensing system that is able to adjust its sensor morphology in situ in order to sense different physical quantities with desirable sensing characteristics. The approach taken is to use a thermoplastic adhesive material, i.e. Hot Melt Adhesive (HMA). It is shown that the thermoplastic and thermoadhesive nature of HMA enables the system to repeatedly fabricate, attach and detach mechanical structures of a variety of shapes and sizes to the robot end effector for sensing purposes. Via its active sensing capability, the robotic system uses the structure to physically probe an unknown target object with suitable motion and to transduce the arising physical stimuli into information usable by a camera as its only built-in sensor. CONCLUSIONS/SIGNIFICANCE: The efficacy of the proposed system is verified on the basis of two results. First, it is confirmed that suitable sensor morphology and active sensing capability enable the system to sense different physical quantities, i.e. softness and temperature, with desirable sensing characteristics. Second, given the tasks of discriminating two visually indistinguishable objects with respect to softness and temperature, it is confirmed that the proposed robotic system is able to accomplish them autonomously. How these results motivate new research directions focusing on in situ adjustment of sensor morphology is also discussed.