953 results for Pushbroom camera
Abstract:
Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel-path conflicts, enhance safety on the site, and monitor the project. Radio-frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining, and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates based on the installed camera configuration (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
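As a hedged illustration of the final triangulation step, the sketch below recovers a 3D position from a pair of matched 2D tracks using OpenCV. The intrinsics, baseline, and pixel coordinates are invented placeholder values; the paper's own calibration procedure is not described in the abstract.

```python
import numpy as np
import cv2

# Assumed intrinsics and a 5 m baseline between two parallel site cameras.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])  # second camera

# Matched 2D image coordinates of one tracked entity (2xN arrays, one frame).
pts1 = np.array([[820.0], [360.0]])
pts2 = np.array([[320.0], [360.0]])

# Linear triangulation returns homogeneous 4xN coordinates.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).ravel()
print("3D position (m):", X)   # ~ (1.8, 0.0, 10.0)
```

With this placeholder geometry (1000 px focal length, 5 m baseline), the 500-pixel disparity triangulates to a depth of f·B/d = 1000·5/500 = 10 m, which the script reproduces.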
Abstract:
When tracking resources in large-scale, congested, outdoor construction sites, the cost and time of purchasing, installing, and maintaining the position sensors needed to track thousands of materials and hundreds of pieces of equipment and personnel can be significant. To alleviate this problem, a novel vision-based tracking method that allows each sensor (camera) to monitor the positions of multiple entities simultaneously has been proposed. This paper presents the full-scale validation experiments for this method. The validation included testing the method under harsh conditions at a variety of mega-project construction sites. The procedure for collecting data from the sites, the testing procedure, metrics, and results are reported. Full-scale validation demonstrates that the novel vision-based tracking method provides a good solution for tracking different entities on large, congested construction sites.
Abstract:
Camera motion estimation is one of the most significant steps for structure-from-motion (SFM) with a monocular camera. The normalized 8-point, the 7-point, and the 5-point algorithms are normally adopted to perform the estimation, each of which has distinct performance characteristics. Given the unique needs and challenges associated with civil infrastructure SFM scenarios, selection of the proper algorithm directly impacts the structure reconstruction results. In this paper, a comparison study of the aforementioned algorithms is conducted to identify the most suitable algorithm, in terms of accuracy and reliability, for reconstructing civil infrastructure. The free variables tested are baseline, depth, and motion. A concrete girder bridge was selected as the "test-bed" to reconstruct, using an off-the-shelf camera to capture imagery from all possible positions that maximally capture the bridge's features and geometry. The feature points in the images were extracted and matched via the SURF descriptor. Finally, camera motions are estimated from the corresponding image points by applying the aforementioned algorithms, and the results are evaluated.
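The pairwise motion-estimation step described above can be sketched with off-the-shelf OpenCV routines. SURF is patent-encumbered and ships only in opencv-contrib, so ORB stands in for the descriptor here; the intrinsic matrix and file names are assumptions, and `cv2.findEssentialMat` implements the 5-point algorithm inside a RANSAC loop.

```python
import numpy as np
import cv2

img1 = cv2.imread("girder_view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("girder_view2.jpg", cv2.IMREAD_GRAYSCALE)

# ORB as a freely available stand-in for the SURF descriptor used in the paper.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics

# 5-point essential-matrix estimation with RANSAC, then cheirality check.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("Rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```

Note that the translation recovered from an essential matrix is only defined up to scale, which is one reason baseline is among the free variables studied.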
Abstract:
Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide spatial locations of a large number of entities, which is beneficial in large-scale, congested construction sites. However, even a few false detections can prevent its practical application. To resolve this issue, this paper presents a novel hybrid method for locating construction equipment that fuses detection and tracking algorithms. This method detects construction equipment in the video view by taking advantage of entities' motion, shape, and color distribution. Background subtraction, Haar-like features, and eigen-images are used for motion, shape, and color information, respectively. A tracking algorithm steps into the process to compensate for false detections. False detections are identified by catching drastic changes in object size and appearance, and are then replaced with tracking results. Preliminary experiments show that the combination with tracking has the potential to enhance detection performance.
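A minimal sketch of the detection/tracking fusion loop is given below. The cascade model file, motion threshold, and size-change bounds are illustrative assumptions rather than the authors' trained models or calibrated values, and KCF (from opencv-contrib) stands in for an unspecified tracker.

```python
import cv2

cap = cv2.VideoCapture("site_video.mp4")                  # hypothetical input
bg = cv2.createBackgroundSubtractorMOG2()                 # motion cue
cascade = cv2.CascadeClassifier("equipment_cascade.xml")  # hypothetical Haar model
tracker, last_area = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    moving = cv2.countNonZero(bg.apply(frame)) > 500      # enough motion to bother
    boxes = cascade.detectMultiScale(gray) if moving else ()
    if len(boxes):
        x, y, w, h = boxes[0]
        area = w * h
        # Drastic size change -> treat detection as false, fall back to tracker.
        if last_area and not (0.5 < area / last_area < 2.0) and tracker:
            ok, (x, y, w, h) = tracker.update(frame)
        else:
            tracker = cv2.TrackerKCF_create()
            tracker.init(frame, (int(x), int(y), int(w), int(h)))
            last_area = area
```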
Abstract:
Videogrammetry is an inexpensive and easy-to-use technology for spatial 3D scene recovery. When applied to large-scale civil infrastructure scenes, only a small percentage of the collected video frames are required to achieve robust results. However, choosing the right frames requires careful consideration. Videotaping a built infrastructure scene results in large video files filled with blurry, noisy, or redundant frames. This is due to frame-rate-to-camera-speed ratios that are often higher than necessary; camera and lens imperfections and limitations that result in imaging noise; and occasional jerky motions of the camera that result in motion blur; all of which can significantly affect the performance of the videogrammetric pipeline. To tackle these issues, this paper proposes a novel method for automating the selection of an optimized number of informative, high-quality frames. In this method, blurred frames are first removed using thresholds determined by the minimum level of frame quality required to obtain robust results. Then, an optimum number of key frames are selected from the remaining frames using selection criteria devised by the authors. Experimental results show that the proposed method outperforms existing methods in terms of improved 3D reconstruction results, while maintaining the optimum number of extracted frames needed to generate high-quality 3D point clouds. © 2012 Elsevier Ltd. All rights reserved.
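The blur-removal step lends itself to a compact sketch: variance of the Laplacian is a common sharpness proxy. The paper's actual quality measure and calibrated threshold are not reproduced here, so the cutoff below is an assumed value.

```python
import cv2

def sharp_frames(video_path, blur_threshold=100.0):
    """Yield (index, frame) for frames whose Laplacian variance exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Low Laplacian variance indicates few sharp edges, i.e. likely motion blur.
        if cv2.Laplacian(gray, cv2.CV_64F).var() > blur_threshold:
            yield idx, frame
        idx += 1
    cap.release()
```

Key-frame selection would then operate only on the frames this generator passes through, keeping the downstream 3D reconstruction workload small.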
Abstract:
This paper analyzes the forced response of swirl-stabilized lean-premixed flames to acoustic forcing in a laboratory-scale stratified burner. The double-swirler, double-channel annular burner was specially designed to generate acoustic velocity oscillations and radial fuel stratification at the inlet of the combustion chamber. Temporal oscillations of equivalence ratio along the axial direction are dissipated over a long distance, and therefore the effects of time-varying fuel/air ratio on the flame response are not considered. Simultaneous measurements of inlet velocity and heat release rate oscillations were made using a hot wire anemometer and photomultiplier tubes with narrowband OH*/CH* interference filters. Time-averaged CH* chemiluminescence intensities were measured using an intensified CCD camera. Results show that flame stabilization mechanisms vary depending on stratification ratio for a constant global equivalence ratio. For a uniformly premixed condition, an enveloped M-shaped flame is observed. For stratified conditions, however, a dihedral V-flame and a detached flame are developed for outer stream and inner stream fuel enrichment cases, respectively. Flame transfer function (FTF) measurement results indicate that a V-shaped flame tends to damp incident flow oscillations, while a detached flame acts as a strong amplifier relative to the uniformly premixed condition. The phase difference of FTF increases in the presence of stratification. More importantly, the dynamic characteristics obtained from the forced stratified flame measurements are well correlated with unsteady flame behavior under limit-cycle pressure oscillations. The results presented in this paper provide insight into the impact of nonuniform reactant stoichiometry on combustion instabilities, which has not been well explored to date. Copyright © 2011 by ASME.
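For reference, the flame transfer function (FTF) measured in studies of this kind is conventionally defined as the normalized heat-release-rate fluctuation over the normalized inlet velocity fluctuation (a standard definition; the reference location at the burner inlet is an assumption here):

```latex
F(\omega) \;=\; \frac{\hat{Q}'(\omega)/\bar{Q}}{\hat{u}'(\omega)/\bar{u}},
\qquad \text{gain} = |F(\omega)|, \qquad \text{phase} = \arg F(\omega)
```

In these terms, the V-shaped flame's damping corresponds to a gain below that of the premixed reference, while the detached flame's amplification corresponds to a higher gain, with the stratification-induced phase shift appearing in arg F.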
Abstract:
Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate, and robust. In this paper the 'Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to respond exclusively to chess-board vertices, is presented. The proposed method is robust against noise, poor lighting, and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration and in structured-light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors, both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.
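To make the subtraction-and-summation idea concrete, the sketch below scores a candidate pixel by sampling a ring around it: at a chess-board vertex, diagonally opposite samples agree while samples a quarter-turn apart differ. This is a simplified illustration of the principle, not the published ChESS response function.

```python
import numpy as np

def vertex_response(gray, x, y, radius=5, n=16):
    """Crude chessboard-vertex score at (x, y) from n samples on a ring."""
    angles = 2 * np.pi * np.arange(n) / n
    xs = np.clip(np.round(x + radius * np.cos(angles)).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip(np.round(y + radius * np.sin(angles)).astype(int), 0, gray.shape[0] - 1)
    s = gray[ys, xs].astype(float)
    half, quarter = n // 2, n // 4
    resp = 0.0
    for i in range(quarter):
        # Summation: diagonally opposite samples match at a vertex.
        # Subtraction: samples a quarter-turn apart differ strongly.
        resp += abs(s[i] + s[i + half] - s[i + quarter] - s[i + quarter + half])
        resp -= abs(s[i] - s[i + half]) + abs(s[i + quarter] - s[i + quarter + half])
    return resp
```

Scanning this score over the image and applying non-maximum suppression would give candidate vertices, with the score itself serving as the strength measure the abstract mentions.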
Abstract:
Optical motion capture systems suffer from marker occlusions resulting in loss of useful information. This paper addresses the problem of real-time joint localisation of legged skeletons in the presence of such missing data. The data is assumed to be labelled 3D marker positions from a motion capture system. An integrated framework is presented which predicts the occluded marker positions using a Variable Turn Model within an Unscented Kalman filter. Inferred information from neighbouring markers is used as observation states; these constraints are efficient, simple, and real-time implementable. This work also takes advantage of the common case that missing markers are still visible to a single camera, by combining predictions with under-determined positions, resulting in more accurate predictions. An Inverse Kinematics technique is then applied ensuring that the bone lengths remain constant over time; the system can thereby maintain a continuous data-flow. The marker and Centre of Rotation (CoR) positions can be calculated with high accuracy even in cases where markers are occluded for a long period of time. Our methodology is tested against some of the most popular methods for marker prediction and the results confirm that our approach outperforms these methods in estimating both marker and CoR positions. © 2012 Springer-Verlag.
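The predict-when-occluded behaviour can be illustrated with a much simpler filter than the paper's Variable Turn Model/UKF combination: below, a constant-velocity linear Kalman filter bridges gaps in one marker's 3D track. The frame rate and noise covariances are assumed values.

```python
import numpy as np
import cv2

kf = cv2.KalmanFilter(6, 3)   # state (x, y, z, vx, vy, vz); measurement (x, y, z)
dt = 1.0 / 120.0              # assumed mocap frame interval
F = np.eye(6, dtype=np.float32)
F[0, 3] = F[1, 4] = F[2, 5] = dt          # constant-velocity transition
kf.transitionMatrix = F
kf.measurementMatrix = np.hstack([np.eye(3), np.zeros((3, 3))]).astype(np.float32)
kf.processNoiseCov = np.eye(6, dtype=np.float32) * 1e-4
kf.measurementNoiseCov = np.eye(3, dtype=np.float32) * 1e-2

def next_marker_position(measurement):
    """measurement: (3,) array in metres, or None when the marker is occluded."""
    predicted = kf.predict()[:3].ravel()   # always predict forward one frame
    if measurement is not None:
        kf.correct(np.asarray(measurement, dtype=np.float32).reshape(3, 1))
    return predicted
```

During an occlusion the filter coasts on its velocity estimate; the paper additionally constrains such predictions with neighbouring-marker observations, single-camera rays, and constant bone lengths, none of which this sketch attempts.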
A Videogrammetric As-Built Data Collection Method for Digital Fabrication of Sheet Metal Roof Panels
Abstract:
A roofing contractor typically needs to acquire as-built dimensions of a roof structure several times over the course of its construction to be able to digitally fabricate sheet metal roof panels. Obtaining these measurements using existing roof surveying methods can be costly in terms of equipment, labor, and/or worker exposure to safety hazards. This paper presents a video-based surveying technology as an alternative method which is simple to use, automated, less expensive, and safe. When using this method, the contractor collects video streams with a calibrated stereo camera set. Unique visual characteristics of scenes from a roof structure are then used in the processing step to automatically extract as-built dimensions of roof planes. These dimensions are finally represented in an XML format to be loaded into sheet metal folding and cutting machines. The proposed method has been tested on a roofing project, and the preliminary results indicate its capabilities.
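As an illustration of the last step, the snippet below serializes extracted roof-plane dimensions to XML with Python's standard library. The element and attribute names are a hypothetical schema; the actual format expected by the folding and cutting machines is not given in the abstract.

```python
import xml.etree.ElementTree as ET

planes = [  # as-built plane dimensions (metres), e.g. from the stereo processing step
    {"id": "P1", "width": 6.10, "length": 9.75, "slope_deg": 22.5},
    {"id": "P2", "width": 6.10, "length": 9.70, "slope_deg": 22.5},
]

root = ET.Element("RoofPanels", project="demo-roof")
for p in planes:
    ET.SubElement(root, "Plane", id=p["id"],
                  width=f'{p["width"]:.3f}', length=f'{p["length"]:.3f}',
                  slope=f'{p["slope_deg"]:.1f}')
ET.ElementTree(root).write("roof_panels.xml", xml_declaration=True, encoding="utf-8")
```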
Abstract:
The tomographic reconstruction of OH* chemiluminescence was performed on two interacting turbulent premixed bluff-body stabilized flames under steady flow conditions and acoustic excitation. These measurements elucidate the complex three-dimensional (3D) vortex-flame interactions which have previously not been accessible. The experiment was performed using a single camera and intensifier, with multiple views acquired by repositioning the camera, permitting calculation of the mean and phase-averaged volumetric OH* distributions. The reconstructed flame structure and phase-averaged dynamics are compared with OH planar laser-induced fluorescence and flame surface density measurements for the first time. The volumetric data revealed that the large-scale vortex-flame structures formed along the shear layers of each flame collide when the two flames meet, resulting in complex 3D flame structures in between the two flames. With a fairly simple experimental setup, it is shown that the tomographic reconstruction of OH* chemiluminescence in forced flames is a powerful tool that can yield important physical insights into large-scale 3D flame dynamics that are important in combustion instability. © 2013 IOP Publishing Ltd.
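Conceptually, each camera position supplies a line-of-sight projection of the OH* emission, and the volume can be recovered slice-by-slice much like parallel-beam CT. The sketch below shows that inversion with filtered back-projection from scikit-image; the angle set and data file are placeholders, and the paper's actual reconstruction algorithm may differ.

```python
import numpy as np
from skimage.transform import iradon

# Sinogram for one axial slice: rows = radial positions, columns = view angles.
angles = np.linspace(0.0, 180.0, 12, endpoint=False)   # 12 camera positions (assumed)
sinogram = np.load("oh_projections_slice.npy")         # hypothetical file, shape (R, 12)

# Filtered back-projection recovers the 2D OH* distribution in this slice.
slice_2d = iradon(sinogram, theta=angles)

# Stacking reconstructed slices along the flame axis yields the 3D OH* field.
```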
Abstract:
Silicon is known to be a very good material for the realization of high-Q, low-volume photonic cavities, but at the same time it is usually considered a poor material for nonlinear optical functionalities such as second-harmonic generation, because its second-order nonlinear susceptibility vanishes in the dipole approximation. In this work we demonstrate that nonlinear optical effects in silicon nanocavities can be strongly enhanced and even become macroscopically observable. We employ photonic crystal nanocavities in silicon membranes that are optimized simultaneously for high quality factor and efficient coupling to an incoming beam in the far field. Using a low-power, continuous-wave laser at telecommunication wavelengths as a pump beam, we demonstrate simultaneous generation of second and third harmonics in the visible region, which can be observed with a simple camera. The results are in good agreement with a theoretical model that treats third-harmonic generation as a bulk effect in the cavity region and second-harmonic generation as a surface effect arising from the vertical hole sidewalls. Optical bistability is also observed in the silicon nanocavities, and its physical mechanisms (optical, due to two-photon generation of free carriers, as well as thermal) are investigated. © 2011 IEEE.
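For concreteness, with a pump at a typical telecom wavelength (an assumed value of 1550 nm; the abstract only states "telecommunication wavelengths"), the harmonics fall at

```latex
\lambda_{\mathrm{SH}} = \frac{\lambda_p}{2} = \frac{1550\,\mathrm{nm}}{2} = 775\,\mathrm{nm},
\qquad
\lambda_{\mathrm{TH}} = \frac{\lambda_p}{3} = \frac{1550\,\mathrm{nm}}{3} \approx 517\,\mathrm{nm},
```

i.e. at or near the visible range, consistent with the harmonics being observable with a simple camera.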
Abstract:
Time-resolved particle image velocimetry (PIV) has been performed inside the nozzle of a commercially available inkjet printhead to obtain the time-dependent velocity waveform. A printhead with a single transparent nozzle 80 μm in orifice diameter was used to eject single droplets at a speed of 5 m/s. An optical microscope was used with an ultra-high-speed camera to capture the motion of particles suspended in a transparent liquid at the center of the nozzle and above the fluid meniscus at a rate of half a million frames per second. Time-resolved velocity fields were obtained from a fluid layer approximately 200 μm thick within the nozzle for a complete jetting cycle. A Lagrangian finite-element numerical model with experimental measurements as inputs was used to predict the meniscus movement. The model predictions showed good agreement with the experimental results. This work provides the first experimental verification of physical models and numerical simulations of flows within a drop-on-demand nozzle. © 2012 Society for Imaging Science and Technology.
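The core PIV operation, estimating the displacement of a particle-laden interrogation window between consecutive frames, can be sketched with FFT-based phase correlation. The window size and inputs are illustrative assumptions rather than the study's processing parameters.

```python
import numpy as np
import cv2

def window_displacement(frame_a, frame_b, x, y, win=32):
    """Return (dx, dy) in pixels for the window of size `win` centred at (x, y)."""
    a = frame_a[y - win//2 : y + win//2, x - win//2 : x + win//2].astype(np.float32)
    b = frame_b[y - win//2 : y + win//2, x - win//2 : x + win//2].astype(np.float32)
    # Phase correlation locates the peak of the cross-correlation at sub-pixel accuracy.
    (dx, dy), _ = cv2.phaseCorrelate(a, b)
    return dx, dy
```

At the reported half-million frames per second, the frame interval is 2 μs, so a measured displacement converts to velocity as v = Δx / 2 μs once the pixel-to-metre scale of the microscope is applied.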
Abstract:
An Agat-SF linear-scan streak image-converter camera was used to record output pulses of 2.7 psec duration generated by an injection laser with an external dispersive resonator operated in the active mode-locking regime. The duration of the pulses was determined by the reciprocal of the spectral width, and the product of the duration and the spectral width was 0.30.
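The quoted figure can be read against the transform limit for the duration-bandwidth product:

```latex
\Delta\tau\,\Delta\nu \;\ge\; 0.315 \;\;(\mathrm{sech}^2\ \text{pulse}),
\qquad
\Delta\tau\,\Delta\nu \;\ge\; 0.441 \;\;(\text{Gaussian pulse}),
```

so the measured product of 0.30 is consistent, within experimental uncertainty, with nearly transform-limited pulses.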
Abstract:
This paper is about detecting bipedal motion in video sequences by using point trajectories in a classification framework. Given a number of point trajectories, we find a subset of points arising from feet in bipedal motion by analysing their spatio-temporal correlation in a pairwise fashion. To this end, we introduce probabilistic trajectories as our new features, which associate each point over a sufficiently long time period in the presence of noise. They are extracted from directed acyclic graphs whose edges represent temporal point correspondences and are weighted with their matching probability in terms of appearance and location. The benefit of the new representation is that it tolerates inherent ambiguity, for example due to occlusions. We then learn the correlation between the motion of two feet from the probabilistic trajectories using a decision forest classifier. The effectiveness of the algorithm is demonstrated in experiments on image sequences captured with a static camera, and extensions to deal with a moving camera are discussed. © 2013 Elsevier B.V. All rights reserved.
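A toy version of the trajectory-extraction idea: detections per frame become DAG nodes, temporally adjacent detections are linked by edges scored with a matching probability, and a best trajectory is read off as a maximum-weight path. The matching function, pruning threshold, and additive path score are simplifications of the paper's formulation.

```python
import networkx as nx

def best_trajectory(detections, match_prob):
    """detections: list over frames of lists of point ids; match_prob(a, b) -> (0, 1]."""
    g = nx.DiGraph()
    for t in range(len(detections) - 1):
        for a in detections[t]:
            for b in detections[t + 1]:
                p = match_prob(a, b)
                if p > 0.05:  # prune implausible correspondences (assumed threshold)
                    g.add_edge((t, a), (t + 1, b), weight=p)
    # Maximum total-score path: favours long chains of confident matches.
    return nx.dag_longest_path(g, weight="weight")
```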
Abstract:
This work addresses the challenging problem of unconstrained 3D human pose estimation (HPE) from a novel perspective. Existing approaches struggle to operate in realistic applications, mainly due to their scene-dependent priors, such as background segmentation and multi-camera networks, which restrict their use in unconstrained environments. We therefore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. Action detection offers spatiotemporal priors to 3D human pose estimation by both recognising and localising actions in space-time. Instead of holistic features, e.g. silhouettes, we leverage the flexibility of a deformable part model to detect 2D body parts as features for estimating 3D poses. A new unconstrained pose dataset has been collected to demonstrate the feasibility of our method, which yields promising results, significantly outperforming the relevant state-of-the-art methods. © 2013 IEEE.