21 results for Dynamic texture recognition


Relevance:

30.00%

Publisher:

Abstract:

In recent years, there has been a move towards the development of indirect structural health monitoring (SHM) techniques for bridges; the low-cost vibration-based method presented in this paper is such an approach. It consists of the use of a moving vehicle fitted with accelerometers on its axles and incorporates wavelet analysis and statistical pattern recognition. The aim of the approach is to both detect and locate damage in bridges while reducing the need for direct instrumentation of the bridge. In theoretical simulations, a simplified vehicle-bridge interaction model is used to investigate the effectiveness of the approach in detecting damage in a bridge from vehicle accelerations. For this purpose, the accelerations are processed using a continuous wavelet transform, since when the axle passes over a damaged section, any discontinuity in the signal affects the wavelet coefficients. Based on these coefficients, a damage indicator is formulated which can distinguish between different damage levels. However, it is found to be difficult to quantify damage of varying levels when the vehicle's transverse position is varied between bridge crossings. In a real bridge field experiment, damage was applied artificially to a steel truss bridge to test the effectiveness of the indirect approach in practice; for this purpose, a two-axle van was driven across the bridge at constant speed. Both bridge and vehicle acceleration measurements were recorded. The dynamic properties of the test vehicle were identified initially via free vibration tests. It was found that the resulting damage indicators for the bridge and vehicle showed similar patterns; however, it was difficult to distinguish between different artificial damage scenarios.
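A minimal sketch of the wavelet-energy damage indicator described above, assuming an idealised, noise-free axle signal (the signal values, wavelet scales, and damage location are illustrative, not taken from the paper):

```python
import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") wavelet of scale a, sampled at `points` points.
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-t ** 2 / (2 * a ** 2))

def cwt(signal, scales):
    # Continuous wavelet transform: convolve the signal with the wavelet
    # at each scale; rows are scales, columns are axle positions.
    out = np.empty((len(scales), len(signal)))
    for i, a in enumerate(scales):
        w = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, w, mode="same")
    return out

# Idealised axle acceleration: flat response with a short local anomaly
# where the axle crosses the (assumed) damaged section at position 250.
accel = np.zeros(500)
accel[248:252] += 0.3

coeffs = cwt(accel, np.arange(1, 16))

# Damage indicator: total wavelet-coefficient energy at each position;
# it peaks at the location of the discontinuity.
indicator = np.sum(coeffs ** 2, axis=0)
location = int(np.argmax(indicator))
```

In this toy setting, the energy peak localises the anomaly; the paper's finding is that the indicator's magnitude, and hence damage-level quantification, is sensitive to the vehicle's transverse position.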

Relevance:

30.00%

Publisher:

Abstract:

One of the most widely used techniques in computer vision for foreground detection is to model each background pixel as a Mixture of Gaussians (MoG). While this is effective for a static camera with a fixed or a slowly varying background, it fails to handle any fast, dynamic movement in the background. In this paper, we propose a generalised framework, called region-based MoG (RMoG), that takes into consideration neighbouring pixels while generating the model of the observed scene. The model equations are derived from Expectation Maximisation theory for batch mode, and stochastic approximation is used for online mode updates. We evaluate our region-based approach against ten sequences containing dynamic backgrounds, and show that the region-based approach provides a performance improvement over the traditional single-pixel MoG. For feature and region sizes that are equal, the effect of increasing the learning rate is to reduce both true and false positives. Comparison with four state-of-the-art approaches shows that RMoG outperforms the others in reducing false positives whilst still maintaining reasonable foreground definition. Lastly, using the ChangeDetection (CDNet 2014) benchmark, we evaluated RMoG against numerous surveillance scenes and found it to be amongst the leading performers for dynamic background scenes, whilst providing comparable performance for other commonly occurring surveillance scenes.
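The single-pixel MoG baseline that RMoG generalises can be sketched as follows — a simplified Stauffer-Grimson-style online update for one pixel intensity. All parameter values are illustrative assumptions, the matched-component update is simplified (constant learning rate), and RMoG's region-based neighbourhood term is omitted:

```python
import numpy as np

class PixelMoG:
    """Per-pixel Mixture of Gaussians background model (the classic
    single-pixel baseline; parameter values are illustrative)."""

    def __init__(self, k=3, lr=0.05, var0=225.0, t_bg=0.7):
        self.w = np.full(k, 1.0 / k)   # component weights
        self.mu = np.zeros(k)          # component means
        self.var = np.full(k, var0)    # component variances
        self.lr = lr                   # online learning rate
        self.t_bg = t_bg               # background weight threshold

    def update(self, x):
        # Match the observation to the closest component within 2.5 sigma.
        d = np.abs(x - self.mu)
        match = np.argmin(d)
        matched = d[match] < 2.5 * np.sqrt(self.var[match])
        self.w *= (1 - self.lr)
        if matched:
            self.w[match] += self.lr
            # Simplification: constant update rate for mean and variance.
            self.mu[match] += self.lr * (x - self.mu[match])
            self.var[match] += self.lr * ((x - self.mu[match]) ** 2
                                          - self.var[match])
        else:
            # Replace the weakest component with a new one centred on x.
            worst = np.argmin(self.w)
            self.mu[worst], self.var[worst], self.w[worst] = x, 225.0, self.lr
        self.w /= self.w.sum()
        # Background components: highest w/sigma ratio up to the threshold.
        order = np.argsort(-self.w / np.sqrt(self.var))
        cum = np.cumsum(self.w[order])
        bg = order[: np.searchsorted(cum, self.t_bg) + 1]
        # Foreground if no background component explains the observation.
        return not (matched and match in bg)

# Feed a stable background intensity, then a sudden change.
m = PixelMoG()
for _ in range(100):
    m.update(100.0)          # background settles around intensity 100
is_fg = m.update(200.0)      # abrupt change is flagged as foreground
```

RMoG's key change is that the matching and update statistics pool evidence over a neighbourhood of pixels rather than treating each pixel independently, which is what makes it robust to fast dynamic backgrounds.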

Relevance:

30.00%

Publisher:

Abstract:

PatchCity is a new approach to the procedural generation of city models. The algorithm uses texture synthesis to create a city layout in the visual style of one or more input examples. Data is provided in vector graphic form from either real or synthetic city definitions. The paper describes the PatchCity algorithm, illustrates its use, and identifies its strengths and limitations. The technique provides a greater range of features and styles of city layout than existing generative methods, thereby achieving results that are more realistic. An open source implementation of the algorithm is available.
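Patch-based texture synthesis of the kind PatchCity builds on can be illustrated on a toy raster grid. Note that PatchCity itself operates on vector city data; the grid, patch size, and edge-matching criterion here are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy exemplar "layout" grid standing in for the input city example;
# cell values represent three hypothetical zone types.
exemplar = rng.integers(0, 3, size=(16, 16))
p = 4  # patch size

def best_patch(target_edge):
    # Scan the exemplar for the patch whose left edge best matches
    # the right edge of the previously placed patch.
    best, best_err = None, np.inf
    for i in range(exemplar.shape[0] - p + 1):
        for j in range(exemplar.shape[1] - p + 1):
            patch = exemplar[i:i + p, j:j + p]
            err = np.sum((patch[:, 0] - target_edge) ** 2)
            if err < best_err:
                best, best_err = patch, err
    return best

# Synthesise one row of the output layout, left to right, so each new
# patch is locally consistent with its neighbour in the exemplar's style.
row = [exemplar[:p, :p]]
for _ in range(3):
    row.append(best_patch(row[-1][:, -1]))
layout = np.hstack(row)
```

The output reuses only patches drawn from the exemplar, so local statistics of the input style are preserved; PatchCity applies the same principle to vector road and parcel geometry rather than raster cells.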

Relevance:

30.00%

Publisher:

Abstract:

We address the problem of 3D-assisted 2D face recognition in scenarios where the input image is subject to degradations or exhibits intra-personal variations not captured by the 3D model. The proposed solution involves a novel approach to learn a subspace spanned by perturbations caused by the missing modes of variation and image degradations, using 3D face data reconstructed from 2D images rather than 3D capture. This is accomplished by modelling the difference in the texture map of the 3D-aligned input and reference images. A training set of these texture maps then defines a perturbation space which can be represented using PCA bases. Assuming that the image perturbation subspace is orthogonal to the 3D face model space, these additive components can be recovered from an unseen input image, resulting in an improved fit of the 3D face model. The linearity of the model leads to efficient fitting. Experiments show that our method achieves very competitive face recognition performance on the Multi-PIE and AR databases. We also present baseline face recognition results on a new data set exhibiting combined pose and illumination variations as well as occlusion.
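The PCA perturbation-subspace idea can be sketched as follows, with random vectors standing in for the vectorised texture-map differences (the dimensions and data are hypothetical; the recovery step relies on the stated assumption that the perturbation subspace is orthogonal to the face-model subspace):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: each row is a vectorised texture-map difference
# between a 3D-aligned input image and its reference (hypothetical data).
n_train, dim, n_basis = 200, 64, 5
D = rng.normal(size=(n_train, dim))

# PCA basis of the perturbation space via SVD of the centred data.
mean = D.mean(axis=0)
_, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
P = Vt[:n_basis]                      # rows: orthonormal PCA basis vectors

# For an unseen input's difference map x, the additive perturbation
# component is recovered by linear projection onto the basis; linearity
# is what makes the fitting step efficient.
x = rng.normal(size=dim)
coeffs = P @ (x - mean)
perturbation = mean + P.T @ coeffs    # recovered perturbation component
residual = x - perturbation           # remainder, left to the face model
```

Because the recovery is an orthogonal projection, subtracting the estimated perturbation can only move the input closer to the perturbation-free model space, which is the sense in which the 3D model fit improves.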