14 results for subtitled videos

in the Cambridge University Engineering Department Publications Database


Relevance:

20.00%

Publisher:

Abstract:

The automated detection of structural elements (e.g., columns and beams) from visual data can facilitate many construction and maintenance applications. Research in this area is still in its early stages. Existing methods rely solely on color and texture information, which makes them unable to distinguish individual structural elements when those elements are connected to each other and made of the same material. This paper presents a novel method for the automated detection of concrete columns from visual data. The method overcomes this limitation by combining the columns' boundary information with their color and texture cues. It starts by recognizing long vertical lines in an image/video frame through edge detection and the Hough transform. The bounding rectangle for each pair of lines is then constructed. When the rectangle resembles the shape of a column and the color and texture contained between the pair of lines match one of the concrete samples in the knowledge base, a concrete column surface is assumed to be located. In this way, each concrete column in the images/videos is detected. The method was tested on real images/videos, and the results are compared with manual detection results to indicate the method's validity.
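As a rough illustration of the first stage described above (finding long, near-vertical lines through edge detection and the Hough transform), the sketch below uses OpenCV; the thresholds and angle tolerance are illustrative assumptions, not the parameters used in the paper.

```python
# Hedged sketch: locate long near-vertical line segments as column-boundary
# candidates via Canny edges and a probabilistic Hough transform (OpenCV).
import cv2
import numpy as np

def find_long_vertical_lines(frame, min_length=150, angle_tol_deg=10):
    """Return near-vertical line segments (x1, y1, x2, y2) found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=min_length, maxLineGap=10)
    vertical = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if abs(angle - 90) < angle_tol_deg:   # keep segments close to vertical
                vertical.append((int(x1), int(y1), int(x2), int(y2)))
    return vertical
```

Pairs of such lines would then be grouped into candidate bounding rectangles and checked against the colour/texture knowledge base, as the abstract describes.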

Relevance:

20.00%

Publisher:

Abstract:

Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g., novel view synthesis and 3D reconstruction. Typically this information is implied through the time-stamp information embedded in the video streams. User-generated videos shot with consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. Our first contribution is a synchronization technique which establishes correspondence between feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. We evaluate performance using a number of real video recordings and show that our method is able to synchronize to within 1 second, which is significantly better than previous approaches. Our second contribution is a robust and unsupervised view-invariant activity recognition descriptor that exploits recurrence plot theory on spatial tiles. The descriptor is individually shown to characterize activities from different views under occlusion better than state-of-the-art approaches. We combine this descriptor with our proposed synchronization method and show that it can further refine the synchronization index. © 2013 ACM.
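The recurrence-plot idea underlying the second contribution can be illustrated in a few lines (this is not the paper's descriptor; the per-frame feature vectors and threshold are placeholders):

```python
# Illustration only: a binary recurrence plot over per-frame feature vectors.
# R[i, j] = 1 when the features observed at times i and j are within eps.
import numpy as np

def recurrence_plot(features, eps=0.1):
    """features: (T, D) array of per-frame feature vectors -> (T, T) binary matrix."""
    distances = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    return (distances <= eps).astype(np.uint8)
```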

Relevance:

10.00%

Publisher:

Abstract:

Human locomotion is known to be influenced by observation of another person's gait. For example, athletes often synchronize their steps in long-distance races. However, how interaction with a virtual runner affects the gait of a real runner has not been studied. We investigated this by creating an illusion of running behind a virtual model (VM), using a treadmill and a large-screen virtual environment showing a video of the VM. We looked at step synchronization between the real and virtual runner and at the role of step frequency (SF) in the real runner's perception of VM speed. We found that subjects matched the VM's SF when asked to match the VM's speed with their own (Figure 1). This indicates that step synchronization may be a strategy for speed matching or speed perception. Subjects chose higher speeds when the VM's SF was higher (though the VM's speed was 12 km/h in all videos). This effect was more pronounced when the speed estimate was given verbally while standing still (Figure 2). This may be due to correlated physical activity affecting the perception of VM speed [Jacobs et al. 2005], or to step synchronization altering the subjects' perception of self speed [Durgin et al. 2007]. Our findings indicate that third-person activity in a collaborative virtual locomotive environment can have a pronounced effect on an observer's gait and their perceptual judgments of the activity of others: the SF of others (virtual or real) can potentially influence one's perception of self speed and lead to changes in speed and SF. A better understanding of the underlying mechanisms would support the design of more compelling virtual trainers and may be instructive for competitive athletics in the real world. © 2009 ACM.

Relevance:

10.00%

Publisher:

Abstract:

We introduce a new algorithm to automatically identify the time and pixel location of foot-contact events in high-speed video of sprinters. We use this information to autonomously synchronise and overlay multiple recorded performances, providing feedback to athletes and coaches during their training sessions. The algorithm exploits the variation in speed of different parts of the body during sprinting. We use an array of foreground accumulators to identify short-term static pixels, and a temporal analysis of the associated static regions to identify foot contacts. We evaluated the technique using 13 videos of three sprinters. It successfully identified 55 of the 56 contacts, with a mean localisation error of 1.39±1.05 pixels. Some videos also produced additional, spurious contacts; we present heuristics to help identify the true contacts. © 2011 Springer-Verlag Berlin Heidelberg.
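The foreground-accumulator idea can be sketched as counting, per pixel, how many consecutive frames it has been both foreground and nearly unchanged; long runs flag candidate contact regions. The components and thresholds below (OpenCV MOG2 background subtraction, an absolute-difference test) are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: per-pixel counts of consecutive "static foreground" frames.
import cv2
import numpy as np

def static_foreground_counts(frames, diff_thresh=10):
    """Yield, per frame, how long each pixel has stayed foreground and unchanged."""
    bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    prev_gray, counts = None, None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg = bg.apply(frame) > 0
        if counts is None:
            counts = np.zeros(gray.shape, dtype=np.int32)
        else:
            unchanged = cv2.absdiff(gray, prev_gray) < diff_thresh
            counts = np.where(fg & unchanged, counts + 1, 0)   # reset where moving/background
        prev_gray = gray
        yield counts   # thresholding these counts gives candidate foot-contact pixels
```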

Relevance:

10.00%

Publisher:

Abstract:

Vision tracking has significant potential for tracking resources on large-scale, congested construction sites, where a small number of cameras strategically placed around the site could replace hundreds of tracking tags. Correlating the 2D positions obtained by vision tracking in multiple views can provide the 3D position. However, many 2D vision trackers are available in the literature, and little information exists on which is most effective for construction applications. In this paper, a comparative study of several categories of vision trackers is carried out to identify which is most effective for tracking construction resources. Testing parameters for evaluating the tracker categories are identified, and the benefits and limitations of each category are presented. The most promising trackers are tested using a database of videos of construction operations. The results indicate the effectiveness of each tracker with respect to each test parameter and identify the most suitable tracker for research towards effective 3D vision tracking of construction resources.

Relevance:

10.00%

Publisher:

Abstract:

Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel-path conflicts, enhance safety on site, and monitor the project. Radio-frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates according to the installed camera configuration (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
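As an illustration of the final step (turning matched 2D coordinates from two calibrated views into a 3D position), the sketch below assumes the 3x4 projection matrices P1 and P2 are known from the camera installation and calibration; it is a stand-in, not the framework's implementation.

```python
# Hedged sketch: two-view triangulation of one tracked entity with OpenCV.
import cv2
import numpy as np

def triangulate_entity(P1, P2, pt_view1, pt_view2):
    """Return the Euclidean (X, Y, Z) position from 2D points in two views."""
    x1 = np.asarray(pt_view1, dtype=float).reshape(2, 1)
    x2 = np.asarray(pt_view2, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # 4x1 homogeneous coordinates
    return (X_h[:3] / X_h[3]).ravel()
```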

Relevance:

10.00%

Publisher:

Abstract:

Tracking methods have the potential to retrieve the spatial location of project-related entities such as personnel and equipment at construction sites, which can facilitate several construction management tasks. Existing tracking methods are mainly based on Radio Frequency (RF) technologies and thus require manual deployment of tags. On construction sites with numerous entities, tag installation, maintenance and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. To address these limitations, this paper proposes an alternative vision-based 3D tracking method. It operates by tracking the designated object in 2D video frames and correlating the tracking results from multiple pre-calibrated views using epipolar geometry. The methodology presented in this paper has been implemented and tested on videos taken under controlled experimental conditions. The results are compared with the actual 3D positions to validate its performance.
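The epipolar-geometry correlation step can be illustrated by checking that a candidate match in the second view lies close to the epipolar line induced by the tracked point in the first view; the fundamental matrix F is assumed known from pre-calibration, and the pixel threshold is arbitrary.

```python
# Hedged sketch: epipolar consistency check between two pre-calibrated views.
import numpy as np

def satisfies_epipolar_constraint(F, pt1, pt2, max_dist_px=3.0):
    """True if pt2 lies within max_dist_px of the epipolar line of pt1 (F maps view 1 -> view 2)."""
    x1 = np.array([pt1[0], pt1[1], 1.0])
    x2 = np.array([pt2[0], pt2[1], 1.0])
    line = F @ x1                                    # epipolar line a*x + b*y + c = 0 in view 2
    return abs(x2 @ line) / np.hypot(line[0], line[1]) <= max_dist_px
```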

Relevance:

10.00%

Publisher:

Abstract:

The automated detection of structural elements (e.g. concrete columns) in visual data is useful in many construction and maintenance applications. Research in this area is still in its early stages. The authors previously presented a concrete column detection method that used boundary and color information as detection cues. However, that method is sensitive to parameter selection, which reduces its ability to robustly detect concrete columns in live videos. Compared with the previous method, the new method presented in this paper reduces the reliance on parameter settings in three main ways. First, edges are located using color information. Second, the orientation information of edge points is considered in constructing column boundaries. Third, an artificial neural network for concrete material classification is developed to replace matching against concrete samples. The method is tested using live videos, and the results are compared with those obtained with the previous method to demonstrate the new method's improvements.
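For the third change (replacing sample matching with a neural-network material classifier), the following stand-in uses scikit-learn's MLPClassifier; the paper's actual network architecture and texture features are not reproduced here, so `texture_features` and `labels` are hypothetical placeholders.

```python
# Hedged sketch: a small neural network for concrete / non-concrete classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_concrete_classifier(texture_features, labels):
    """Fit a binary classifier on per-region colour/texture feature vectors."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(np.asarray(texture_features), np.asarray(labels))
    return clf   # clf.predict(region_features) replaces matching against stored samples
```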

Relevance:

10.00%

Publisher:

Abstract:

Manual inspection is required to determine the condition of damaged buildings after an earthquake. The lack of available inspectors, combined with the large volume of inspection work, makes such inspection subjective and time-consuming. The required inspections take weeks to complete, which has adverse economic and societal impacts on the affected population. This paper proposes an automated framework for rapid post-earthquake building evaluation. Under the framework, the visible damage (cracks and buckling) inflicted on concrete columns is first detected. The damage properties are then measured in relation to the column's dimensions and orientation, so that the column's load-bearing capacity can be approximated as a damage index. The column damage index, supplemented with other building information (e.g. structural type and column arrangement), is then used to query fragility curves of similar buildings, constructed from analyses of existing and ongoing experimental data. The query estimates the probability of the building being in each of several damage states. The framework is expected to automate the collection of building damage data, provide a quantitative assessment of the building's damage state, and estimate the vulnerability of the building to collapse in the event of an aftershock. Videos and manual assessments of structures after the 2010 earthquake in Haiti are used to test parts of the framework.
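Fragility curves are commonly expressed as lognormal CDFs of a demand measure, so a query of the kind described might look like the sketch below; the damage states, medians and dispersions are placeholders, not values from the framework.

```python
# Illustration only: probability of reaching or exceeding each damage state,
# modelled as a lognormal fragility curve of the column damage index.
from math import log
from scipy.stats import norm

FRAGILITY = {                       # damage state: (median damage index, dispersion)
    "slight":    (0.10, 0.5),
    "moderate":  (0.25, 0.5),
    "extensive": (0.50, 0.5),
    "collapse":  (0.80, 0.5),
}

def exceedance_probabilities(damage_index):
    """P(damage >= state) for each state, for a positive column damage index."""
    return {state: norm.cdf(log(damage_index / median) / beta)
            for state, (median, beta) in FRAGILITY.items()}
```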

Relevance:

10.00%

Publisher:

Abstract:

Pavement condition assessment is essential when developing road network maintenance programs. In practice, pavement sensing is largely automated for highway networks. Municipal roads, however, are still predominantly surveyed manually, owing to the limited number of expensive inspection vehicles. As part of a research project that proposes an omnipresent passenger vehicle network for comprehensive and inexpensive condition surveying of municipal road networks, this paper deals with pothole recognition. Existing methods either rely on expensive, high-maintenance range sensors, or make use of acceleration data, which can only provide preliminary and rough condition surveys. In our previous work we created a pothole detection method for pavement images. In this paper we present an improved recognition method for pavement videos that incrementally updates the texture signature for intact pavement regions and uses vision tracking to track detected potholes. The method is tested and the results demonstrate its reasonable efficiency.
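The incremental texture-signature idea can be sketched as a running average of a simple per-region descriptor, with candidate pothole regions scored by their distance from that signature; the grey-level histogram and update rate below are stand-ins, not the paper's descriptor.

```python
# Hedged sketch: incrementally updated texture signature for intact pavement.
import cv2
import numpy as np

class PavementSignature:
    def __init__(self, alpha=0.05):
        self.alpha = alpha                 # update rate of the running average
        self.signature = None

    def _hist(self, region_gray):
        h = cv2.calcHist([region_gray], [0], None, [32], [0, 256]).ravel()
        return h / (h.sum() + 1e-9)

    def update(self, intact_region_gray):
        h = self._hist(intact_region_gray)
        self.signature = h if self.signature is None else \
            (1 - self.alpha) * self.signature + self.alpha * h

    def distance(self, candidate_region_gray):
        """Larger distances suggest texture unlike intact pavement (pothole-like)."""
        return float(np.linalg.norm(self.signature - self._hist(candidate_region_gray)))
```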

Relevance:

10.00%

Publisher:

Abstract:

Monitoring the location of resources on large-scale, congested, outdoor sites can be performed more efficiently with vision tracking, as this approach does not require any pre-tagging of resources. However, the greatest impediment to the use of vision tracking in this setting is the lack of detection methods needed to automatically mark the resources of interest and initiate the tracking. This paper presents such a method: a novel construction worker detection approach that localizes construction workers in video frames. The proposed method exploits motion, shape, and color cues to narrow down the detection regions to moving objects, people, and finally construction workers, respectively. The three cues are characterized using background subtraction, the histogram of oriented gradients (HOG), and the HSV color histogram. The method has been tested on videos taken in various environments, and the results demonstrate its suitability for automatic initialization of vision trackers.
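A hedged sketch of chaining the three cues with standard OpenCV components: background subtraction restricts attention to moving regions, the default HOG people detector keeps person-shaped regions, and an HSV histogram comparison against a reference worker appearance (e.g. high-visibility clothing) is the final filter. The thresholds and the reference histogram are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: motion -> shape -> colour filtering for worker detection.
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
bg = cv2.createBackgroundSubtractorMOG2()

def detect_workers(frame, reference_hsv_hist, similarity_thresh=0.5):
    motion_mask = bg.apply(frame)                               # motion cue
    rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))    # shape cue (people)
    workers = []
    for (x, y, w, h) in rects:
        if motion_mask[y:y + h, x:x + w].mean() < 10:           # skip mostly static regions
            continue
        hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if cv2.compareHist(hist, reference_hsv_hist, cv2.HISTCMP_CORREL) > similarity_thresh:
            workers.append((x, y, w, h))                        # colour cue matched
    return workers
```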

Relevance:

10.00%

Publisher:

Abstract:

Due to its importance, video segmentation has recently regained interest. However, there is no common agreement about the ingredients necessary for best performance. This work contributes a thorough analysis of various within-frame and between-frame affinities suitable for video segmentation. Our results show that a frame-based superpixel segmentation combined with a few motion- and appearance-based affinities is sufficient to obtain good video segmentation performance. A second contribution of the paper is the extension of [1] to include motion cues, which makes the algorithm globally aware of motion and thus improves its performance on video sequences. Finally, we contribute an extension of an established image segmentation benchmark [1] to videos, allowing coarse-to-fine video segmentations and multiple human annotations. Our results are tested on BMDS [2] and compared to existing methods. © 2013 Springer-Verlag.
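As a minimal illustration of combining a frame-based superpixel segmentation with an appearance affinity, the sketch below computes a Gaussian colour affinity between adjacent SLIC superpixels (scikit-image); motion affinities from optical flow would be added analogously. The affinity form and parameters are assumptions, not those evaluated in the paper.

```python
# Illustration only: colour affinities between adjacent SLIC superpixels.
import numpy as np
from skimage.segmentation import slic

def superpixel_color_affinities(frame_rgb, n_segments=300, sigma_color=20.0):
    labels = slic(frame_rgb, n_segments=n_segments, start_label=0)
    means = np.array([frame_rgb[labels == i].mean(axis=0)
                      for i in range(labels.max() + 1)])
    pairs = set()
    pairs.update(zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()))   # horizontal neighbours
    pairs.update(zip(labels[:-1, :].ravel(), labels[1:, :].ravel()))   # vertical neighbours
    affinity = {
        (int(a), int(b)): float(np.exp(-np.linalg.norm(means[a] - means[b]) ** 2
                                       / (2 * sigma_color ** 2)))
        for a, b in pairs if a != b
    }
    return labels, affinity
```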

Relevance:

10.00%

Publisher:

Abstract:

Temporal synchronization of multiple video recordings of the same dynamic event is a critical task in many computer vision applications, e.g., novel view synthesis and 3D reconstruction. Typically this information is implied, since recordings are made using the same timebase or time-stamp information is embedded in the video streams. Recordings made with consumer-grade equipment do not contain this information; hence, there is a need to temporally synchronize signals using the visual information itself. Previous work in this area has either assumed good-quality data with relatively simple dynamic content or the availability of precise camera geometry. In this paper, we propose a technique which exploits feature trajectories across views in a novel way, and specifically targets the kind of complex content found in consumer-generated sports recordings, without assuming precise knowledge of fundamental matrices or homographies. Our method automatically selects the moving feature points in the two unsynchronized videos whose 2D trajectories can be best related, thereby helping to infer the synchronization index. We evaluate performance using a number of real recordings and show that synchronization can be achieved to within 1 second, which is better than previous approaches. Copyright 2013 ACM.
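One simple way to infer a synchronization index from trajectory information, offered only as an illustration and not as the proposed method, is to reduce each video to a per-frame motion signal (e.g. aggregate trajectory displacement per frame, as a NumPy array) and cross-correlate the two signals:

```python
# Illustration only: frame-offset estimation by normalized cross-correlation
# of per-frame motion signals derived from feature trajectories.
import numpy as np

def estimate_frame_offset(signal_a, signal_b):
    """Return the lag (in frames) at which the two motion signals correlate best."""
    a = (signal_a - signal_a.mean()) / (signal_a.std() + 1e-9)
    b = (signal_b - signal_b.mean()) / (signal_b.std() + 1e-9)
    xcorr = np.correlate(a, b, mode="full")
    lags = np.arange(-(len(b) - 1), len(a))
    return int(lags[np.argmax(xcorr)])   # positive: events in signal_a occur later
```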