83 results for Video platforms


Relevance: 20.00%

Publisher:

Abstract:

Spread Transform (ST) is a quantization watermarking algorithm in which vectors of the wavelet coefficients of a host work are quantized, using one of two dithered quantizers, to embed hidden information bits; Loo had some success in applying such a scheme to still images. We extend ST to the video watermarking problem. Visibility considerations require that each spreading vector refer to corresponding pixels in each of several frames, that is, a multi-frame embedding approach. Use of the hierarchical complex wavelet transform (CWT) for a visual mask reduces computation and improves robustness to jitter and valumetric scaling. We present a method of recovering temporal synchronization at the detector, and give initial results demonstrating the robustness and capacity of the scheme.
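
For illustration, here is a minimal sketch of spread-transform dither-modulation embedding and detection of a single bit, written in Python with NumPy. The vector x stands in for a group of wavelet coefficients drawn from corresponding pixels of several frames; the step size delta, the vector length, and the choice of dither values are illustrative assumptions rather than parameters from the paper, and the perceptual masking and temporal-synchronization steps are omitted.

import numpy as np

def stdm_embed(x, u, bit, delta):
    # Spread transform: project the host vector onto the unit spreading direction,
    # quantize the projection with one of two dithered uniform quantizers (one per
    # bit value), and push the host back along that direction.
    u = u / np.linalg.norm(u)
    s = x @ u
    dither = 0.0 if bit == 0 else delta / 2.0
    q = delta * np.round((s - dither) / delta) + dither
    return x + (q - s) * u

def stdm_detect(y, u, delta):
    # Decode by checking which dithered lattice the received projection lies closer to.
    u = u / np.linalg.norm(u)
    s = y @ u
    d0 = abs(s - delta * np.round(s / delta))
    d1 = abs(s - (delta * np.round((s - delta / 2.0) / delta) + delta / 2.0))
    return 0 if d0 <= d1 else 1

rng = np.random.default_rng(0)
x = rng.normal(size=64)   # stand-in for multi-frame wavelet coefficients
u = rng.normal(size=64)   # spreading vector (in practice shaped by the visual mask)
y = stdm_embed(x, u, bit=1, delta=0.5)
assert stdm_detect(y, u, delta=0.5) == 1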

Relevance: 20.00%

Publisher:

Abstract:

Models capturing the connectivity between different domains of a design, e.g. between components and functions, can provide a tool for tracing and analysing aspects of that design. In this paper, video experiments are used to explore the role of cross-domain modelling in building up information about a design. The experiments highlight that cross-domain modelling can be a useful tool to create and structure design information. Findings suggest that consideration of multiple domains encourages discussion during modelling, helps identify design aspects that might otherwise be overlooked, and can help promote consideration of alternative design options. Copyright © 2002-2012 The Design Society. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

Very few constructed facilities today have a complete record of as-built information. Despite the growing use of Building Information Modelling and the improvement in as-built records, several more years will be required before guidelines that require as-built data modelling are implemented for the majority of constructed facilities, and this will still not address the stock of existing buildings. A technical solution for scanning buildings and compiling Building Information Models is needed. However, this is a multidisciplinary problem, requiring expertise in scanning, computer vision and videogrammetry, machine learning, and parametric object modelling. This paper outlines the technical approach proposed by a consortium of researchers that has gathered to tackle the ambitious goal of automating as-built modelling as far as possible. The top-level framework of the proposed solution is presented, and each process, input and output is explained, along with the steps needed to validate them. Preliminary experiments on the earlier stages (i.e. processes) of the proposed framework are conducted and their results are shown; work toward implementing the remainder is ongoing.

Relevance: 20.00%

Publisher:

Abstract:

The automated detection of structural elements (e.g. concrete columns) in visual data is useful in many construction and maintenance applications, but research in this area is still at an early stage. The authors previously presented a concrete column detection method that used boundary and color information as detection cues. However, that method is sensitive to parameter selection, which reduces its ability to robustly detect concrete columns in live videos. Compared with the previous method, the new method presented in this paper reduces the reliance on parameter settings in three main ways. First, edges are located using color information. Second, the orientation of edge points is considered when constructing column boundaries. Third, an artificial neural network for concrete material classification is developed to replace concrete sample matching. The method is tested on live videos, and the results are compared with those of the previous method to demonstrate the new method's improvements.
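
As a rough Python/OpenCV sketch of the first two cues, the snippet below extracts edges from color information (Canny run per channel) and then keeps only edge points whose gradient orientation indicates a roughly vertical boundary, as expected for column edges. The thresholds, the tolerance angle, and the synthetic test image are assumptions made for illustration; the artificial neural network for material classification described in the abstract is not reproduced here.

import numpy as np
import cv2

def color_edges(bgr, low=50, high=150):
    # Run Canny on each color channel and keep the union, so a boundary that is
    # visible in only one channel is not lost.
    edges = [cv2.Canny(channel, low, high) for channel in cv2.split(bgr)]
    return np.maximum.reduce(edges)

def vertical_boundary_points(bgr, edge_map, tol_deg=15):
    # Keep edge points whose gradient is roughly horizontal, i.e. whose boundary
    # runs roughly vertically, as concrete column boundaries do.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    angle = np.abs(np.degrees(np.arctan2(gy, gx)))
    horizontal_gradient = (angle < tol_deg) | (angle > 180 - tol_deg)
    return np.where((edge_map > 0) & horizontal_gradient, 255, 0).astype(np.uint8)

# Toy usage on a synthetic frame containing one bright, column-like rectangle.
frame = np.zeros((120, 160, 3), np.uint8)
cv2.rectangle(frame, (60, 10), (100, 110), (180, 180, 180), -1)
candidates = vertical_boundary_points(frame, color_edges(frame))
# 'candidates' would then be grouped into column boundaries, and the enclosed
# patches passed to a trained material classifier.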

Relevance: 20.00%

Publisher:

Abstract:

Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
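
To illustrate the key frame selection step, the Python/OpenCV sketch below discards motion-blurred frames using a variance-of-Laplacian sharpness score and accepts a new key frame only once sparse optical flow shows enough displacement (and hence enough baseline) since the previous key frame. The function names and thresholds are assumptions, not the paper's actual criteria, and the structure from motion and bundle adjustment stages themselves are left to dedicated libraries.

import numpy as np
import cv2

def sharpness(gray):
    # Variance of the Laplacian: a common proxy for motion blur (higher = sharper).
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_key_frames(frames, blur_thresh=50.0, min_flow=8.0):
    # Pick frames that are sharp enough and have moved far enough from the previous
    # key frame, measured by the median sparse optical-flow displacement.
    key_frames, prev_gray = [], None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if sharpness(gray) < blur_thresh:
            continue                      # reject motion-blurred frames
        if prev_gray is None:
            key_frames.append(frame)
            prev_gray = gray
            continue
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)
        if pts is None:
            continue
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        tracked = status.reshape(-1) == 1
        if not tracked.any():
            continue
        disp = np.linalg.norm((nxt - pts).reshape(-1, 2)[tracked], axis=1)
        if np.median(disp) >= min_flow:   # enough parallax between key frames
            key_frames.append(frame)
            prev_gray = gray
    return key_frames

# The selected key frames would then go through feature matching, relative pose
# estimation, triangulation, and bundle adjustment to produce the sparse point cloud.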

Relevance: 20.00%

Publisher:

Abstract:

Vision-based tracking can provide the spatial location of construction entities such as equipment, workers, and materials on large-scale, congested construction sites. It tracks entities in video streams by inferring their locations from the entities' visual features and motion histories. To initiate the process, it is necessary to determine the pixel areas corresponding to the construction entities to be tracked in the subsequent video frames. Fully automating the process therefore requires an automated initialization step. This paper presents a construction worker detection method that automatically recognizes and localizes construction workers in video frames. The method first finds the foreground areas of moving objects using a background subtraction method. Within these foreground areas, construction workers are recognized based on the histogram of oriented gradients (HOG) and a histogram of HSV colors. HOGs have proved effective for detecting people, and the histogram of HSV colors helps differentiate between pedestrians and construction workers wearing safety vests. Preliminary experiments show that the proposed method has the potential to automate the initialization of vision-based tracking.
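
The first two stages can be sketched in Python/OpenCV as below: a background subtractor isolates moving foreground blobs, and the stock HOG person detector is run only inside sufficiently large blobs. The blob-size threshold, padding, and stride are illustrative assumptions; a vest-color check along the lines of the sketch given after the later, related abstract would then narrow the returned person detections down to construction workers.

import cv2

# MOG2 background subtractor isolates moving foreground regions (OpenCV 4.x API).
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25, detectShadows=True)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people_in_motion(frame, min_blob_area=800, pad=16):
    # Return (x, y, w, h) person detections restricted to moving regions of the frame.
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels (value 127)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    detections = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w * h < min_blob_area:
            continue
        x0, y0 = max(x - pad, 0), max(y - pad, 0)
        roi = frame[y0:y + h + pad, x0:x + w + pad]
        if roi.shape[0] < 128 or roi.shape[1] < 64:
            continue                      # too small for the 64x128 HOG detection window
        rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
        detections.extend((x0 + rx, y0 + ry, rw, rh) for (rx, ry, rw, rh) in rects)
    return detections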

Relevance: 20.00%

Publisher:

Abstract:

Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide the spatial locations of a large number of entities, which is beneficial on large-scale, congested construction sites. However, even a few false detections can prevent its practical application. To resolve this issue, this paper presents a novel hybrid method for locating construction equipment that fuses detection and tracking algorithms. The method detects construction equipment in the video view by taking advantage of entities' motion, shape, and color distribution: background subtraction, Haar-like features, and eigen-images are used for the motion, shape, and color information, respectively. A tracking algorithm then steps into the process to compensate for false detections, which are identified by catching drastic changes in object size and appearance and are replaced with tracking results. Preliminary experiments show that this combination with tracking has the potential to enhance detection performance.
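
To illustrate how tracking can compensate for false detections, the Python/OpenCV sketch below flags a detection as false when its size or HSV color distribution changes drastically relative to the object currently being tracked, and falls back to a simple template-matching update in that case. The thresholds, the histogram binning, and the use of template matching as a stand-in tracker are assumptions made for illustration; the Haar-like feature and eigen-image detectors themselves are not reproduced.

import cv2

def appearance_hist(patch):
    # HSV color histogram used as a cheap appearance signature of the tracked object.
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_false_detection(prev_box, prev_hist, det_box, det_patch,
                       max_size_change=1.8, min_hist_corr=0.5):
    # Flag detections whose size or color distribution changes drastically from
    # the object currently being tracked.
    _, _, pw, ph = prev_box
    _, _, dw, dh = det_box
    size_ratio = (dw * dh) / float(pw * ph + 1e-6)
    if size_ratio > max_size_change or size_ratio < 1.0 / max_size_change:
        return True
    corr = cv2.compareHist(prev_hist, appearance_hist(det_patch), cv2.HISTCMP_CORREL)
    return corr < min_hist_corr

def track_by_template(frame, template):
    # Fallback "tracker": locate the last accepted appearance by template matching.
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)
    h, w = template.shape[:2]
    return (x, y, w, h)

In such a loop, every accepted detection would refresh the stored template and histogram, so the appearance model follows the equipment rather than the occasional spurious detection.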

Relevance: 20.00%

Publisher:

Abstract:

Monitoring the location of resources on large-scale, congested, outdoor sites can be performed more efficiently with vision tracking, as this approach does not require any pre-tagging of resources. However, the greatest impediment to the use of vision tracking here is the lack of detection methods needed to automatically mark the resources of interest and initiate the tracking. This paper presents such a method for construction worker detection, which localizes construction workers in video frames. The proposed method exploits motion, shape, and color cues to narrow the detection regions down to moving objects, then people, and finally construction workers. The three cues are characterized using background subtraction, the histogram of oriented gradients (HOG), and the HSV color histogram, respectively. The method has been tested on videos taken in various environments, and the results demonstrate its suitability for automatic initialization of vision trackers.
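
The color cue can be illustrated with the short Python/OpenCV sketch below, which scores a candidate person patch by the fraction of torso pixels falling within typical high-visibility vest hues. The hue and saturation ranges, the torso crop, and the acceptance threshold are assumptions, and this range check is a simplified stand-in for the HSV color histogram described in the abstract.

import numpy as np
import cv2

def vest_color_score(person_patch):
    # Fraction of torso pixels in typical high-visibility hues
    # (fluorescent yellow-green and orange); the ranges are illustrative.
    h, w = person_patch.shape[:2]
    torso = person_patch[int(0.2 * h):int(0.6 * h), int(0.2 * w):int(0.8 * w)]
    hsv = cv2.cvtColor(torso, cv2.COLOR_BGR2HSV)
    yellow_green = cv2.inRange(hsv, (25, 120, 120), (45, 255, 255))
    orange = cv2.inRange(hsv, (5, 150, 150), (20, 255, 255))
    mask = cv2.bitwise_or(yellow_green, orange)
    return float(np.count_nonzero(mask)) / mask.size

def is_construction_worker(person_patch, score_thresh=0.15):
    # Accept a person detection as a construction worker only if enough of the
    # torso matches vest colors; the threshold is a tunable assumption.
    return vest_color_score(person_patch) >= score_thresh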

Relevance: 20.00%

Publisher:

Abstract:

In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and as recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. The central contribution is an illumination invariant, which we show to be suitable for recognition from video of loosely constrained head motion. In particular, there are three contributions: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation to exploit the proposed invariant and generalize in the presence of extreme illumination changes; (ii) we introduce a video sequence re-illumination algorithm to achieve fine alignment of two video sequences; and (iii) we use the smoothness of the geodesically local appearance manifold structure and a robust same-identity likelihood to achieve robustness to unseen head poses. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 323 individuals and 1474 video sequences with extreme illumination, pose and head motion variation. Our system consistently achieved a nearly perfect recognition rate (over 99.7% on all four databases). © 2012 Elsevier Ltd. All rights reserved.
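
For context (and not as the specific invariant proposed in this work), illumination-invariant face recognition commonly starts from a Lambertian model of image formation,

I(x, y) = \rho(x, y)\,\max\big(\mathbf{n}(x, y) \cdot \mathbf{s},\, 0\big),

where \rho is the surface albedo, \mathbf{n} the surface normal and \mathbf{s} the light-source direction and intensity. Because the illumination-dependent factor varies slowly over much of the face, quotient-style quantities such as the self-quotient image

Q(x, y) = \frac{I(x, y)}{(G_\sigma * I)(x, y)},

with G_\sigma a Gaussian smoothing kernel, approximately cancel it while preserving albedo detail; combining a photometric term of this kind with a statistical model of face appearance is what permits generalization to extreme, unseen lighting, as the abstract describes.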

Relevance: 20.00%

Publisher:

Abstract:

The visual system must learn to infer the presence of objects and features in the world from the images it encounters, and as such it must, either implicitly or explicitly, model the way these elements interact to create the image. Do the response properties of cells in the mammalian visual system reflect this constraint? To address this question, we constructed a probabilistic model in which the identity and attributes of simple visual elements were represented explicitly and learnt the parameters of this model from unparsed, natural video sequences. After learning, the behaviour and grouping of variables in the probabilistic model corresponded closely to functional and anatomical properties of simple and complex cells in the primary visual cortex (V1). In particular, feature identity variables were activated in a way that resembled the activity of complex cells, while feature attribute variables responded much like simple cells. Furthermore, the grouping of the attributes within the model closely paralleled the reported anatomical grouping of simple cells in cat V1. Thus, this generative model makes explicit an interpretation of complex and simple cells as elements in the segmentation of a visual scene into basic independent features, along with a parametrisation of their moment-by-moment appearances. We speculate that such a segmentation may form the initial stage of a hierarchical system that progressively separates the identity and appearance of more articulated visual elements, culminating in view-invariant object recognition.
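
One way to make the identity/attribute factorisation concrete (in illustrative notation only, not the paper's exact formulation) is a sparse generative model of each video frame \mathbf{x}_t,

\mathbf{x}_t = \sum_{k} z_{k,t} \sum_{m} y_{k,m,t}\, \mathbf{A}_{k,m} + \boldsymbol{\epsilon}_t, \qquad \boldsymbol{\epsilon}_t \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I}),

where the binary variable z_{k,t} indicates whether visual element k is present at time t (playing the role of a complex-cell-like identity variable), and the coefficients y_{k,m,t} set its moment-by-moment appearance over a small group of basis functions \mathbf{A}_{k,m} (simple-cell-like attribute variables). Learning the basis functions and the priors on z and y from unparsed natural video would then produce the kind of functional and anatomical grouping the abstract reports.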