39 results for planes of vision
in Cambridge University Engineering Department Publications Database
Abstract:
Vision tracking has significant potential for tracking resources on large-scale, congested construction sites, where a small number of cameras strategically placed around the site could replace hundreds of tracking tags. The correlation of vision tracking 2D positions from multiple views can provide the 3D position. However, there are many 2D vision trackers available in the literature, and little information is available on which is most effective for construction applications. In this paper, a comparative study of various vision tracker categories is carried out to identify which is most effective in tracking construction resources. Testing parameters for evaluating categories of trackers are identified, and the benefits and limitations of each category are presented. The most promising trackers are tested using a database of construction operations videos. The results indicate the effectiveness of each tracker with respect to each test parameter and identify the most suitable tracker for researching effective 3D vision tracking of construction resources.
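As a rough illustration of how 2D positions from two views can be correlated into a 3D position, the sketch below triangulates a single correspondence with OpenCV. The camera matrices, baseline, and pixel coordinates are invented placeholders, not data or parameters from the paper.

    import numpy as np
    import cv2

    # Assumed intrinsics and poses for two site cameras; in practice these
    # would come from camera calibration.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera 2 offset by 0.5 m

    # 2D tracker outputs (pixel coordinates) of the same resource in each view.
    pt1 = np.array([[320.0], [240.0]])
    pt2 = np.array([[300.0], [240.0]])

    # Linear triangulation of the corresponding 2D points into a 3D position.
    X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
    X = (X_h[:3] / X_h[3]).ravel()
    print("Estimated 3D position (m):", X)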
Abstract:
Tracking applications provide real-time on-site information that can be used to detect travel path conflicts, calculate crew productivity and eliminate unnecessary processes at the site. This paper presents the validation of a novel vision based tracking methodology at the Egnatia Odos Motorway in Thessaloniki, Greece. Egnatia Odos is a motorway that connects Turkey with Italy through Greece. Its multiple open construction sites serve as an ideal multi-site test bed for validating construction site tracking methods. The vision based tracking methodology uses video cameras and computer algorithms to calculate the 3D position of project-related entities (e.g. personnel, materials and equipment) on construction sites. The approach provides an unobtrusive, inexpensive way of effectively identifying and tracking the 3D location of entities. The process followed in this study starts by acquiring video data from multiple synchronous cameras at several large-scale project sites of Egnatia Odos, such as tunnels, interchanges and bridges under construction. Subsequent steps include the evaluation of the collected data and, finally, performing the 3D tracking operations on selected entities (heavy equipment and personnel). The accuracy and precision of the method's results are evaluated by comparing them with the actual 3D positions of the objects, thus assessing the 3D tracking method's effectiveness.
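A minimal sketch of the kind of accuracy/precision comparison described above, with invented positions standing in for the method's estimates and the surveyed ground truth:

    import numpy as np

    # Hypothetical estimated vs. actual 3D positions, in metres; real values
    # would come from the tracking method and an independent survey.
    estimated = np.array([[10.1, 5.2, 0.9], [12.3, 5.0, 1.1], [14.2, 4.8, 1.0]])
    actual    = np.array([[10.0, 5.0, 1.0], [12.5, 5.1, 1.0], [14.0, 5.0, 1.0]])

    errors = np.linalg.norm(estimated - actual, axis=1)   # Euclidean error per observation
    print("accuracy  (mean error): %.3f m" % errors.mean())
    print("precision (std of error): %.3f m" % errors.std())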
Abstract:
Vision based tracking can provide the spatial location of construction entities such as equipment, workers, and materials in large-scale, congested construction sites. It tracks entities in video streams by inferring their locations based on the entities' visual features and motion histories. To initiate the process, it is necessary to determine the pixel areas corresponding to the construction entities to be tracked in the following consecutive video frames. In order to fully automate the process, an automated way of initialization is needed. This paper presents a method for construction worker detection that can automatically recognize and localize construction workers in video frames. The method first finds the foreground areas of moving objects using a background subtraction method. Within these foreground areas, construction workers are recognized based on the histogram of oriented gradients (HOG) and the histogram of HSV colors. HOGs have proved to work effectively for the detection of people, and the histogram of HSV colors helps differentiate between pedestrians and construction workers wearing safety vests. Preliminary experiments show that the proposed method has the potential to automate the initialization process of vision based tracking.
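A hedged sketch of the background subtraction + HOG + HSV histogram pipeline the abstract outlines, using standard OpenCV components; the file name, thresholds and histogram bins are assumptions, not the paper's parameters.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("site_video.mp4")        # hypothetical input video
    bg_sub = cv2.createBackgroundSubtractorMOG2()   # background subtraction for moving-object areas
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # generic person detector

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = bg_sub.apply(frame)               # 1) foreground areas of moving objects
        rects, _ = hog.detectMultiScale(frame)      # 2) people-shaped regions via HOG
        for (x, y, w, h) in rects:
            if fg_mask[y:y + h, x:x + w].mean() < 50:   # keep only detections on moving areas
                continue
            roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            # 3) the HSV histogram would then be compared against a reference
            #    safety-vest histogram to separate workers from other pedestrians.
    cap.release()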
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant amount of computational time required impedes their adoption in real-time applications, especially video data analysis with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution to find F by combining two fast and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction site image pairs that include changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
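A sketch of a SURF + eight-point pipeline of the kind described above, not the paper's implementation. Note that cv2.xfeatures2d.SURF_create requires an opencv-contrib build with the non-free modules enabled (another fast detector could stand in otherwise); the file names and thresholds are hypothetical.

    import cv2
    import numpy as np

    img1 = cv2.imread("site_view_a.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("site_view_b.jpg", cv2.IMREAD_GRAYSCALE)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:100]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Eight-point estimate of the fundamental matrix F (OpenCV applies the
    # Hartley normalization internally for FM_8POINT).
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    print(F)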
Abstract:
We have epitaxially grown orientation-controlled monoclinic VO2 nanowires without employing catalysts by a vapor-phase transport process. Electron microscopy results reveal that single crystalline VO2 nanowires having a [100] growth direction grow laterally on the basal c plane and out of the basal r and a planes of sapphire, exhibiting triangular and rectangular cross sections, respectively. In addition, we have directly observed the structural phase transition of single crystalline VO2 nanowires between the monoclinic and tetragonal phases, which exhibit insulating and metallic properties, respectively, and clearly analyzed their corresponding relationships using in situ transmission electron microscopy.
Abstract:
A concept based upon Equal Channel Angular Extrusion (ECAE) is developed and introduced in the form of a Universal Re-usable Energy Absorption Device 'UREAD'. In impact situations the device utilises the energy required to extrude deformable materials through the shear planes of a set of intersecting channels and hence provides a means of protecting engineering structures. The impact force is absorbed through the resistance of a deformable material and the energy is dissipated over an operational stroke. This paper examines the use of this new concept under dynamic loading. The device's performance and usability during dynamic impacts are tested in a landing-frame-type experiment, where the effectiveness of the technique in reducing impact loads and energy is also examined. © (2011) Trans Tech Publications Switzerland.
Abstract:
Vision based tracking can provide the spatial location of project-related entities such as equipment, workers, and materials on a large-scale, congested construction site. It tracks entities in a video stream by inferring their motion. To initiate the process, it is necessary to determine the pixel areas of the entities to be tracked in the following consecutive video frames. To fully automate the process, this paper presents an automated way of initializing trackers using the Semantic Texton Forests (STFs) method. The STFs method simultaneously performs segmentation of the image and classification of the segments based on low-level semantic information and context information. In this paper, the STFs method is tested on wheel loader recognition. In the experiments, wheel loaders are further divided into several parts, such as wheels and body parts, to help learn the context information. The results show 79% accuracy in recognizing the pixel areas of the wheel loader. These results signify that the STFs method has the potential to automate the initialization process of vision based tracking.
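To give a flavour of per-pixel part classification, the sketch below trains a plain random forest on image patches. This is an illustrative stand-in only, not the Semantic Texton Forests implementation used in the paper; the patch size, labels and data are synthetic placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    PATCH = 15  # patch size in pixels (assumed)

    def patches_to_features(patches):
        """Flatten image patches into feature vectors."""
        return patches.reshape(len(patches), -1)

    # Synthetic training data: patches labelled as background, wheel, or body part.
    rng = np.random.default_rng(0)
    train_patches = rng.integers(0, 256, size=(300, PATCH, PATCH, 3), dtype=np.uint8)
    train_labels = rng.integers(0, 3, size=300)   # 0 = background, 1 = wheel, 2 = body

    forest = RandomForestClassifier(n_estimators=50, max_depth=10, random_state=0)
    forest.fit(patches_to_features(train_patches), train_labels)

    # At test time the patch around every pixel would be classified, giving a
    # per-pixel segmentation from which wheel-loader regions can be recovered.
    test_patches = rng.integers(0, 256, size=(5, PATCH, PATCH, 3), dtype=np.uint8)
    print(forest.predict(patches_to_features(test_patches)))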
Abstract:
Monitoring the location of resources on large-scale, congested, outdoor sites can be performed more efficiently with vision tracking, as this approach does not require any pre-tagging of resources. However, the greatest impediment to the use of vision tracking in this case is the lack of detection methods that are needed to automatically mark the resources of interest and initiate the tracking. This paper presents such a novel method for construction worker detection, which localizes construction workers in video frames. The proposed method exploits motion, shape, and color cues to narrow down the detection regions to moving objects, people, and finally construction workers, respectively. The three cues are characterized using background subtraction, the histogram of oriented gradients (HOG), and the HSV color histogram. The method has been tested on videos taken in various environments. The results demonstrate its suitability for the automatic initialization of vision trackers.
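A minimal sketch of the colour cue that distinguishes vest-wearing workers from other people: it tests whether a candidate person region contains enough high-visibility vest colour. The HSV bounds and the fraction threshold are rough assumptions, not values from the paper.

    import cv2
    import numpy as np

    def looks_like_worker(person_roi_bgr, min_vest_fraction=0.05):
        hsv = cv2.cvtColor(person_roi_bgr, cv2.COLOR_BGR2HSV)
        # Approximate range for fluorescent yellow-green vests in OpenCV HSV.
        vest_mask = cv2.inRange(hsv, (25, 120, 120), (45, 255, 255))
        fraction = cv2.countNonZero(vest_mask) / float(vest_mask.size)
        return fraction >= min_vest_fraction

    # Usage: 'roi' would be a bounding box passed on by the motion + HOG stages.
    roi = np.full((128, 64, 3), (60, 220, 230), dtype=np.uint8)  # synthetic vest-coloured patch
    print(looks_like_worker(roi))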