Abstract:
Automating the model generation process for infrastructure can substantially reduce modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints; it is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered, and structure from motion and bundle adjustment are explored. The method is demonstrated in a case study in which the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The results indicate the applicability, efficiency, and accuracy of the proposed method.
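The key frame selection step can be sketched with a simple sharpness test. This is a minimal illustration, not the paper's actual criteria: it assumes frames arrive as grayscale NumPy arrays and uses the variance of a discrete Laplacian as the blur metric; the threshold and stride parameters are hypothetical.

```python
import numpy as np

def laplacian_variance(frame: np.ndarray) -> float:
    """Variance of the Laplacian: a common sharpness proxy (higher = sharper)."""
    # 4-neighbour discrete Laplacian computed on the frame interior.
    lap = (frame[1:-1, :-2] + frame[1:-1, 2:] +
           frame[:-2, 1:-1] + frame[2:, 1:-1] - 4.0 * frame[1:-1, 1:-1])
    return float(lap.var())

def select_key_frames(frames, blur_threshold=10.0, stride=5):
    """Keep every `stride`-th frame whose sharpness exceeds the threshold,
    discarding motion-blurred frames before reconstruction."""
    keep = []
    for i, frame in enumerate(frames):
        if i % stride == 0 and laplacian_variance(frame) > blur_threshold:
            keep.append(i)
    return keep
```

The surviving frame indices would then feed the structure-from-motion and bundle-adjustment stages.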
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
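The final triangulation step can be sketched as standard linear (DLT) triangulation. This is a generic illustration, not necessarily the algorithm the paper selects: it assumes the calibration and pose-estimation steps have already produced the 3x4 projection matrices `P1` and `P2` for the two fixed stereo cameras.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its 2D pixel coordinates (x1, x2) in two
    calibrated views with projection matrices P1, P2 (DLT triangulation)."""
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With a long baseline, as in the paper's setup, the two rays intersect at a wide angle, which makes this linear estimate well conditioned.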
Abstract:
We present a system for augmenting depth camera output using multispectral photometric stereo. The technique is demonstrated using a Kinect sensor and is able to produce geometry independently for each frame. Improved reconstruction is demonstrated using the Kinect's built-in RGB camera, and further improvements are achieved by introducing an additional high-resolution camera. In addition to qualitative improvements in reconstruction, a quantitative reduction in temporal noise is shown. As part of the system, an approach is presented for relaxing the multispectral photometric stereo assumption that scenes are of constant chromaticity to the assumption that scenes contain multiple piecewise-constant chromaticities.
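Photometric stereo rests on the Lambertian relation I = L g, where each row of L is a light direction and g is the albedo-scaled surface normal. A minimal per-pixel least-squares sketch, assuming known light directions and ignoring the chromaticity handling that is the paper's actual contribution:

```python
import numpy as np

def recover_normals(L, I):
    """Recover per-pixel surface normals by photometric stereo.
    L: (k, 3) unit light directions, k >= 3.
    I: (k, n) intensities of n pixels under each light.
    Returns (n, 3) unit normals (albedo is divided out)."""
    # Least-squares solve L @ G = I per pixel; G holds albedo * normal.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G: (3, n)
    G = G.T
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    return G / np.clip(norms, 1e-12, None)
```

In the multispectral setting the k observations come from the colour channels of a single frame rather than from sequential flashes, which is what allows per-frame geometry.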
Abstract:
Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline, by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks, but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach.
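The transition model described above can be sketched as a discrete Ornstein-Uhlenbeck step: a random walk whose drift pulls the state towards an attractor pose drawn from the learned mixture prior. This is a minimal illustration of the idea, not the authors' Rao-Blackwellised filter; `theta`, `sigma`, and the mixture parameters are hypothetical placeholders.

```python
import numpy as np

def ou_step(x, mu, theta=0.1, sigma=0.05, rng=None):
    """One discrete Ornstein-Uhlenbeck step: random-walk noise plus a
    drift towards the attractor pose `mu` at rate `theta`."""
    rng = np.random.default_rng() if rng is None else rng
    return x + theta * (mu - x) + sigma * rng.standard_normal(x.shape)

def mixture_ou_step(x, means, weights, theta=0.1, sigma=0.05, rng=None):
    """Mixture version: sample an attractor from the mixture weights
    (standing in for the GMM pose prior), then drift towards it."""
    rng = np.random.default_rng() if rng is None else rng
    k = rng.choice(len(means), p=weights)
    return ou_step(x, means[k], theta, sigma, rng)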
Abstract:
We present a novel X-ray frame camera with variable exposure time that is based on double-gated micro-channel plates (MCP). Two MCPs are connected so that their channels form a Chevron-MCP structure, and four parallel micro-strip lines (MSLs) are deposited on each surface of the Chevron-MCP. The MSLs on opposing surfaces of the Chevron-MCP are oriented normal to each other and subjected to high voltage. The MSLs on the input and output surfaces are fed high-voltage pulses to form a gating action. In forming two-dimensional images, modifying the width of the gating pulse sets the exposure time (ranging from ps to ms), and modifying the delay between gating pulses sets the capture times. This prototype provides a new tool for high-speed X-ray imaging, and this paper presents both simulations and experimental results obtained with the camera.
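The gating arithmetic described above (pulse width sets the exposure, inter-pulse delay sets the capture times) can be sketched as follows; this is a toy illustration with invented function and parameter names, not an interface to the instrument.

```python
def frame_windows(t0, pulse_width, delays):
    """Exposure windows of a gated framing camera: each gate pulse opens
    for `pulse_width` seconds; successive pulses are offset by the given
    inter-pulse delays (all times in seconds). Returns (open, close) pairs."""
    windows, start = [], t0
    for d in [0.0] + list(delays):
        start += d
        windows.append((start, start + pulse_width))
    return windows
```

For example, a 1 ns gate repeated with 5 ns delays yields three non-overlapping frames 5 ns apart.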
Abstract:
We describe our research on the employment of an infrared upconversion screen made of electron trapping material (ETM), in combination with the high sensitivity of the S-20 photocathode to visible radiation, to produce a streak camera arrangement capable of viewing and recording incident infrared pulses. The ETM-based upconversion screen converts 800-1600 nm infrared radiation to visible light, which is viewed or recorded by the S-20 photocathode. The peak values of the upconversion efficiency are located at 1165 nm for CaS:Eu,Sm and 1060 nm for CaS:Ce,Sm. The present experiment showed that the time resolution was 12.3 ps for a CaS:Eu,Sm screen and 8.4 ps for a CaS:Ce,Sm screen. The minimum detectability is 4.8 × 10⁻⁹ J/mm² (the minimum detectability of the coupled visible streak camera is 8.3 × 10⁻¹⁰ J/mm²). Other parameters, such as spatial resolution and dynamic range, have also been measured and analyzed. The results show that ETM can be used in the measurement of infrared ultrafast phenomena down to the picosecond time domain. Given the limited number of trapped electrons in ETM, the infrared-sensitive streak camera consisting of an ETM-based upconversion screen is suitable for single-shot operation. (C) 1999 American Institute of Physics. [S0034-6748(99)00112-4].