65 results for cameras and camera accessories
in the Cambridge University Engineering Department Publications Database
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
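As a rough illustration of the triangulation step described above, the sketch below converts matched 2D pixel coordinates from two fixed, calibrated cameras into 3D metric coordinates using OpenCV. The intrinsics, relative pose, and pixel values are placeholder assumptions, not parameters from the paper.

```python
import numpy as np
import cv2

# Assumed camera parameters (placeholders): shared intrinsics and the pose of
# camera 2 relative to camera 1 (long baseline along the x-axis, in metres).
K1 = np.array([[1000.0, 0.0, 640.0],
               [0.0, 1000.0, 360.0],
               [0.0, 0.0, 1.0]])
K2 = K1.copy()
R = np.eye(3)
t = np.array([[-5.0], [0.0], [0.0]])

# Projection matrices: camera 1 at the origin, camera 2 displaced by (R, t).
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, t])

# 2D pixel coordinates of the same tracked entity in each view (placeholders),
# as 2 x N arrays so several entities can be triangulated at once.
pts1 = np.array([[710.0], [402.0]])
pts2 = np.array([[305.0], [398.0]])

# Triangulate to homogeneous coordinates, then normalise to metric 3D points.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T
print(X)   # one 3D point per tracked entity, in the calibration's metric frame
```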
Abstract:
We present a multispectral photometric stereo method for capturing the geometry of deforming surfaces. A novel photometric calibration technique allows calibration of scenes containing multiple piecewise-constant chromaticities. This method estimates per-pixel photometric properties, then uses a RANSAC-based approach to estimate the dominant chromaticities in the scene. A likelihood term is developed linking surface normal, image intensity and photometric properties, which allows the estimation of the number of chromaticities present in a scene to be framed as a model estimation problem. The Bayesian Information Criterion is applied to automatically estimate the number of chromaticities present during calibration. A two-camera stereo system provides low-resolution geometry, allowing the likelihood term to be used in segmenting new images into regions of constant chromaticity. This segmentation is carried out in a Markov Random Field framework and allows the correct photometric properties to be used at each pixel to estimate a dense normal map. Results are shown on several challenging real-world sequences, demonstrating state-of-the-art results using only two cameras and three light sources. Quantitative evaluation is provided against synthetic ground truth data. © 2011 IEEE.
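As context for the likelihood term linking surface normal, image intensity and photometric properties, the sketch below shows the basic Lambertian relation that multispectral photometric stereo builds on: with three coloured lights captured in one RGB frame, a per-pixel normal can be recovered by inverting the light-direction matrix. The light directions and pixel values are illustrative assumptions; the paper's calibration, RANSAC and MRF machinery are not reproduced.

```python
import numpy as np

# Assumed calibrated light-direction matrix: one row per light source, and with
# multispectral lighting each light maps to one colour channel of the camera.
L = np.array([[ 0.0,  0.5, 0.87],
              [ 0.5, -0.3, 0.81],
              [-0.5, -0.3, 0.81]])

# Assumed RGB measurement at a single pixel (one channel per light).
i = np.array([0.42, 0.55, 0.61])

# Lambertian model: i = (photometric scale) * L @ n, so solving the 3x3 system
# recovers the scaled normal; its norm absorbs the per-pixel photometric scale.
g = np.linalg.solve(L, i)
scale = np.linalg.norm(g)
n = g / scale
print("unit normal:", n, "  photometric scale:", scale)
```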
Abstract:
Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates on the basis of the installed camera system (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
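A minimal sketch of the per-view 2D step, assuming a recorded video from one static site camera (the file name is a placeholder): moving entities are segmented by background subtraction and reported as per-frame 2D image coordinates. The published framework's own detection and tracking components may differ.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("site_cam1.mp4")                  # placeholder video file
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
kernel = np.ones((3, 3), np.uint8)

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Segment moving entities against the learned static background.
    mask = bg.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:                     # ignore small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w / 2.0, y + h / 2.0                # 2D image coordinates
        print(frame_idx, cx, cy)
    frame_idx += 1
cap.release()
```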
Abstract:
The amount of original imaging information produced yearly has experienced tremendous growth in all industries during the last decade due to technological breakthroughs in digital imaging and electronic storage capabilities. This trend is affecting the construction industry as well, where digital cameras and image databases are gradually replacing traditional photography. Owners demand complete site photograph logs, and engineers store thousands of images for each project to use in a number of construction management tasks, such as monitoring an activity's progress and keeping evidence of the "as built" condition in case any disputes arise. So far, classification and retrieval have been performed manually, with the user responsible for classifying images according to specific rules that serve a limited number of construction management tasks. New methods that, with the guidance of the user, can automatically classify and retrieve construction site images are being developed and promise to remove the heavy burden of manually indexing images. In this paper, both the existing methods and a novel image retrieval method developed by the authors for the classification and retrieval of construction site images are described and compared. Specifically, a number of examples are presented in order to illustrate their advantages and limitations. The results of this comparison demonstrate that the content based image retrieval method developed by the authors can reduce the overall time spent on the classification and retrieval of construction images while providing the user with the flexibility to retrieve images according to different classification schemes.
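A minimal sketch of one simple content-based retrieval signature, assuming a query image and a small set of stored site photographs (file names are placeholders): images are ranked by the similarity of their colour histograms. The authors' method is richer than this basic comparison.

```python
import cv2

def hsv_histogram(path):
    # Describe an image by a normalised 2D hue/saturation histogram.
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

query = hsv_histogram("query.jpg")                       # placeholder query image
database = ["site_001.jpg", "site_002.jpg", "site_003.jpg"]   # placeholder photo log

# Rank stored site photographs by histogram correlation with the query image.
scores = [(name, cv2.compareHist(query, hsv_histogram(name), cv2.HISTCMP_CORREL))
          for name in database]
for name, score in sorted(scores, key=lambda s: s[1], reverse=True):
    print(name, round(score, 3))
```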
Abstract:
Structured Light Plethysmography (SLP) is a novel non-invasive method that uses structured light to perform pulmonary function testing that does not require physical contact with a patient. The technique produces an estimate of chest wall volume changes over time. A patient is observed continuously by two cameras and a known pattern of light (i.e. structured light) is projected onto the chest using an off-the-shelf projector. Corner features from the projected light pattern are extracted, tracked and brought into correspondence for both camera views over successive frames. A novel self calibration algorithm recovers the intrinsic and extrinsic camera parameters from these point correspondences. This information is used to reconstruct a surface approximation of the chest wall and several novel ideas for 'cleaning up' the reconstruction are used. The resulting volume and derived statistics (e.g. FVC, FEV) agree very well with data taken with a spirometer. © 2010. The copyright of this document resides with its authors.
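A minimal sketch of the corner extraction and tracking stage, assuming two consecutive grayscale frames from one camera viewing the projected pattern (file names are placeholders); the published system additionally brings corners into correspondence across the two camera views and self-calibrates before reconstructing the chest surface.

```python
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)    # placeholder frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect corner features of the structured-light pattern in the first frame.
corners = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                  qualityLevel=0.01, minDistance=5)

# Track the same corners into the next frame with pyramidal Lucas-Kanade flow.
tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, corners, None)
good_prev = corners[status.flatten() == 1].reshape(-1, 2)
good_curr = tracked[status.flatten() == 1].reshape(-1, 2)
print(len(good_curr), "corner correspondences between successive frames")
```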
Abstract:
The measurement of high-speed laser beam parameters during processing has received growing attention over the last few years as quality assurance places greater demands on the monitoring of the manufacturing process. The targets for any monitoring system are to be non-intrusive, low-cost, simple to operate, high-speed and capable of in-process operation. A new ISO-compliant system is presented, based on the integration of an imaging plate and camera located behind a proprietary mirror sampling device. The general layout of the device is presented along with the thermal and optical performance of the sampling optic. The diagnostic performance of the system is compared with industry-standard devices, demonstrating the high-quality, high-speed data generated using this system.
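As a rough illustration of the kind of quantity such a camera-based diagnostic reports, the sketch below computes the centroid and second-moment (D4-sigma) widths of a beam image, in the spirit of the ISO beam-characterization standards (e.g. ISO 11146). The image data and background handling are simplified placeholders, not the presented system's processing chain.

```python
import numpy as np

# Placeholder for a camera frame of the sampled beam; a real system would load
# the image from the sensor and apply proper background/baseline correction.
img = np.random.rand(480, 640)
img = img - np.median(img)
img[img < 0] = 0

y, x = np.indices(img.shape)
total = img.sum()
cx, cy = (img * x).sum() / total, (img * y).sum() / total     # beam centroid
sx2 = (img * (x - cx) ** 2).sum() / total                     # second moments
sy2 = (img * (y - cy) ** 2).sum() / total
d4sigma_x, d4sigma_y = 4 * np.sqrt(sx2), 4 * np.sqrt(sy2)     # widths in pixels
print(cx, cy, d4sigma_x, d4sigma_y)
```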
Abstract:
An adaptive lens, which has variable focus and is rapidly controllable with simple low-power electronics, has numerous applications in optical telecommunications devices, 3D display systems, miniature cameras and adaptive optics. The University of Durham is developing a range of adaptive liquid crystal lenses, and here we describe work on the construction of modal liquid crystal lenses. This type of lens was first described by Naumov [1] and further developed by others [24]. In this system, a spatially varying and circularly symmetric voltage profile can be generated across a liquid-crystal cell, producing a lens-like refractive index profile. Such devices are simple in design, and do not require a pixellated structure. The shape and focussing power of the lens can be controlled by the variation of applied electric field and frequency. Results show adaptive lenses operating at optical wavelengths with continuously variable focal lengths from infinity to 70 cm. Switching speeds are of the order of 1 second between focal positions. Manufacturing methods of our adaptive lenses are presented, together with the latest results on the performance of these devices.
Abstract:
This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
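A minimal sketch of the discriminative piece, assuming silhouette descriptors computed from a large set of rendered projections with known phenotype labels (all data here are synthetic placeholders): a Random Forest is trained on the projections and applied to a query descriptor. The paper's 3D shape priors, descriptors and silhouette-coherency model selection are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder training set: one descriptor per rendered projection, labelled
# with the phenotype class the projection was synthesized from.
X_train = rng.normal(size=(2000, 64))      # e.g. sampled contour / shape features
y_train = rng.integers(0, 5, size=2000)    # five assumed phenotype classes

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# Classify the descriptor of a query silhouette (placeholder values).
query_descriptor = rng.normal(size=(1, 64))
print(forest.predict(query_descriptor), forest.predict_proba(query_descriptor))
```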
Abstract:
The capability to automatically identify shapes, objects and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction and maintenance of construction projects. Examples include surface crack detection, assessment of fire-damaged mortar, fatigue evaluation of asphalt mixes, aggregate shape measurements, velocimetry, vehicle detection, pore size distribution in geotextiles, damage detection and others. This capability is a product of the technological breakthroughs in the area of Image and Video Processing that have allowed for the development of a large number of digital imaging applications in all industries, ranging from well-established medical diagnostic tools (magnetic resonance imaging, spectroscopy and nuclear medical imaging) to image searching mechanisms (image matching, content based image retrieval). Content based image retrieval techniques can also assist in the automated recognition of materials in construction site images and thus enable the development of reliable methods for image classification and retrieval. The amount of original imaging information produced yearly in the construction industry has grown tremendously during the last decade. Digital cameras and image databases are gradually replacing traditional photography, while owners demand complete site photograph logs and engineers store thousands of images for each project to use in a number of construction management tasks. However, construction companies tend to store images without following any standardized indexing protocols, making manual searching and retrieval a tedious and time-consuming effort. Alternatively, material and object identification techniques can be used for the development of an automated, content based, construction site image retrieval methodology. These methods can utilize automatic material or object based indexing to remove the user from the time-consuming and tedious manual classification process. In this paper, a novel material identification methodology is presented. This method utilizes content based image retrieval concepts to match known material samples with material clusters within the image content. The results demonstrate the suitability of this methodology for construction site image retrieval purposes and reveal the capability of existing image processing technologies to accurately identify a wealth of materials from construction site images.
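A minimal sketch of the cluster-and-match idea, assuming a site photograph and a few known material samples represented as mean colours (the file name, sample values and acceptance radius are placeholders, not the authors' material signatures or thresholds): the image content is clustered and each cluster is compared against the samples.

```python
import cv2
import numpy as np

img = cv2.imread("site_photo.jpg")                        # placeholder image
pixels = img.reshape(-1, 3).astype(np.float32)

# Cluster the image content into a handful of dominant colour regions.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centres = cv2.kmeans(pixels, 6, None, criteria, 5,
                                cv2.KMEANS_PP_CENTERS)

# Known material samples as mean BGR colours (illustrative values only).
materials = {"concrete": np.array([150, 150, 150], dtype=np.float32),
             "soil":     np.array([60, 100, 140], dtype=np.float32),
             "steel":    np.array([90, 80, 75], dtype=np.float32)}

# Label each cluster with the closest material sample, if it is close enough.
for centre in centres:
    name, dist = min(((m, np.linalg.norm(centre - c)) for m, c in materials.items()),
                     key=lambda mc: mc[1])
    if dist < 60:                                         # assumed acceptance radius
        print("cluster", centre.astype(int), "->", name)
```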
Abstract:
On-site tracking in open construction sites is often difficult because of the large number of items that are present and need to be tracked. Additionally, the many occlusions and obstructions present create a highly complex tracking environment. Existing tracking methods are based mainly on Radio Frequency technologies, including Global Positioning Systems (GPS), Radio Frequency Identification (RFID), Bluetooth, Wireless Fidelity (Wi-Fi), Ultra-Wideband (UWB), etc. These methods require considerable pre-processing time, since tags must be manually deployed and a record kept of the items they are placed on. In construction sites with numerous entities, tag installation, maintenance and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. This paper presents a novel method for open site tracking with construction cameras based on machine vision. According to this method, video feed is collected from on-site video cameras, and the user selects the entity to be tracked. The entity is tracked in each video using 2D vision tracking. Epipolar geometry is then used to calculate the depth of the marked area and provide the 3D location of the entity. This method addresses the limitations of radio frequency methods by being unobtrusive and using inexpensive, easy-to-deploy equipment. The method has been implemented in a C++ prototype, and preliminary results indicate its effectiveness.
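A minimal sketch of the depth calculation, under the simplifying assumption that the two construction cameras have been rectified so corresponding points lie on the same scanline; the focal length, baseline and matched pixel columns are placeholder values, not measurements from the prototype. Depth then follows from the rectified epipolar relation Z = f * B / d.

```python
# Assumed rectified stereo parameters (placeholders).
focal_px = 1200.0        # focal length in pixels
baseline_m = 8.0         # distance between the two cameras, in metres

# Matched column of the user-marked entity in the left and right views.
x_left, x_right = 642.0, 518.0

disparity = x_left - x_right
depth_m = focal_px * baseline_m / disparity
print(round(depth_m, 2), "metres from the camera baseline")
```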
Abstract:
Tracking applications provide real-time on-site information that can be used to detect travel path conflicts, calculate crew productivity and eliminate unnecessary processes at the site. This paper presents the validation of a novel vision-based tracking methodology at the Egnatia Odos Motorway in Thessaloniki, Greece. Egnatia Odos is a motorway that connects Turkey with Italy through Greece. Its multiple open construction sites serve as an ideal multi-site test bed for validating construction site tracking methods. The vision-based tracking methodology uses video cameras and computer algorithms to calculate the 3D position of project-related entities (e.g. personnel, materials and equipment) on construction sites. The approach provides an unobtrusive, inexpensive way of effectively identifying and tracking the 3D location of entities. The process followed in this study starts by acquiring video data from multiple synchronous cameras at several large-scale project sites of Egnatia Odos, such as tunnels, interchanges and bridges under construction. Subsequent steps include the evaluation of the collected data and, finally, the 3D tracking operations on selected entities (heavy equipment and personnel). The accuracy and precision of the method's results are evaluated by comparing them with the actual 3D position of each object, thus assessing the 3D tracking method's effectiveness.
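A minimal sketch of the accuracy and precision comparison, assuming a short sequence of tracked 3D positions and the corresponding surveyed ("actual") positions for the same entity; the coordinate values are illustrative placeholders.

```python
import numpy as np

# Tracked versus actual 3D positions (metres) over three frames (placeholders).
tracked = np.array([[10.2, 4.9, 1.6], [11.0, 5.1, 1.7], [11.9, 5.0, 1.6]])
actual  = np.array([[10.0, 5.0, 1.5], [11.1, 5.0, 1.5], [12.0, 5.2, 1.5]])

errors = np.linalg.norm(tracked - actual, axis=1)   # per-frame 3D position error
print("mean error (m):", errors.mean())             # accuracy
print("std of error (m):", errors.std())            # precision
print("RMSE (m):", np.sqrt((errors ** 2).mean()))
```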
Abstract:
The technological advancements in digital imaging, the widespread popularity of digital cameras, and the increasing demand by owners and contractors for detailed and complete site photograph logs have triggered an ever-increasing growth in the rate of construction image data collection, with thousands of images being stored for each project. However, the sheer volume of images and the difficulties in accurately and manually indexing them have generated a pressing need for methods that can index and retrieve images with minimal or no user intervention. This paper reports recent developments from research efforts in the indexing and retrieval of construction site images in architecture, engineering, construction, and facilities management image database systems. The limitations and benefits of the existing methodologies will be presented, as well as an explanation of the reasons for the development of a novel image retrieval approach that not only can recognize construction materials within the image content in order to index images, but also can be compatible with existing retrieval methods, enabling enhanced results.