81 results for MPEG video


Relevance: 20.00%

Abstract:

In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. In particular there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence "reillumination" algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature. © Springer-Verlag Berlin Heidelberg 2006.
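
As a rough illustration of the set-to-set matching idea, rather than the authors' actual algorithm, the sketch below scores two face videos, each reduced to per-frame feature vectors (e.g. PCA coefficients of cropped faces), using a trimmed nearest-neighbour distance as a stand-in for the robust same-identity likelihood; all names and parameters are illustrative.

    import numpy as np

    def same_identity_score(gallery, probe, trim=0.2):
        # gallery, probe: arrays of shape (n_frames, n_features), e.g. PCA
        # coefficients of detected face crops from two video sequences.
        # Pairwise Euclidean distances between every gallery and probe frame.
        d = np.linalg.norm(gallery[:, None, :] - probe[None, :, :], axis=2)
        # For each probe frame, distance to the closest gallery frame -- a crude
        # proxy for the distance to the gallery appearance manifold.
        nn = np.sort(d.min(axis=0))
        # Trim the largest distances for robustness to outlier frames (blur,
        # extreme pose, detection failures), then negate so that a larger
        # score means "more likely the same identity".
        keep = max(1, int(len(nn) * (1.0 - trim)))
        return -nn[:keep].mean()

    # Recognition: assign the probe video to the enrolled person with the best score,
    # e.g. max(enrolled.items(), key=lambda kv: same_identity_score(kv[1], probe))[0]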

Relevance: 20.00%

Abstract:

In this paper, we describe a video tracking application using the dual-tree polar matching algorithm. The models are specified in a probabilistic setting, and a particle filter is used to perform the sequential inference. Computer simulations demonstrate the ability of the algorithm to track a moving target in simulated video of an urban environment with complete and partial occlusions. © The Institution of Engineering and Technology.
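
For readers unfamiliar with the sequential inference step, the sketch below shows one iteration of a generic bootstrap particle filter for 2-D position tracking; the Gaussian observation likelihood is only a placeholder for the dual-tree polar matching score used in the paper, and all names and values are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, observation, proc_std=1.0, obs_std=2.0):
        # particles: (N, 2) hypothesised target positions; weights: (N,) normalised.
        n = len(particles)
        # Resample in proportion to the current weights (multinomial, for brevity).
        particles = particles[rng.choice(n, size=n, p=weights)]
        # Propagate through a simple random-walk motion model.
        particles = particles + rng.normal(scale=proc_std, size=particles.shape)
        # Re-weight with the observation likelihood (Gaussian placeholder here).
        d2 = ((particles - observation) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / obs_std ** 2)
        return particles, w / w.sum()

    # Point estimate after each step: x_hat = (particles * weights[:, None]).sum(axis=0)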

Relevance: 20.00%

Abstract:

We propose a system that can reliably track multiple cars in congested traffic environments. The core of our system is a sequential Monte Carlo algorithm, which provides robustness against the problems arising from the proximity between vehicles. By directly modelling occlusions and collisions between cars we obtain promising results on an urban traffic dataset. Extensions to this initial framework are also suggested. © 2010 IEEE.
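
One way to picture the occlusion and collision modelling is as an interaction term in the particle weights. The hypothetical sketch below down-weights joint hypotheses in which two cars' predicted bounding boxes overlap implausibly; the exact form used in the paper is not reproduced here, and the thresholds are illustrative.

    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2].
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def interaction_factor(boxes, max_iou=0.3, penalty=1e-3):
        # boxes: predicted bounding boxes for all cars in one joint particle.
        # Returns a multiplicative factor for that particle's weight, strongly
        # penalising hypotheses in which two distinct cars overlap implausibly.
        factor = 1.0
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > max_iou:
                    factor *= penalty
        return factor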

Relevance: 20.00%

Abstract:

We present a novel, implementation-friendly and occlusion-aware semi-supervised video segmentation algorithm using tree-structured graphical models, which delivers pixel labels along with their uncertainty estimates. Our motivation for employing supervision is to tackle a task-specific segmentation problem in which the semantic objects are pre-defined by the user. The video model we propose for this problem is based on a tree-structured approximation of a patch-based undirected mixture model, which includes a novel time-series model and a soft-label Random Forest classifier participating in a feedback mechanism. We demonstrate the efficacy of our model in cutting out foreground objects and in multi-class segmentation problems on lengthy and complex road scene sequences. Our results have wide applicability, including harvesting labelled video data for training discriminative models, shape/pose/articulation learning and large-scale statistical analysis to develop priors for video segmentation. © 2011 IEEE.
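
The soft-label Random Forest component can be approximated, for illustration only, with an off-the-shelf classifier that returns per-pixel class probabilities; the feature set, class list and parameters below are assumptions, and the coupling with the tree-structured temporal model is omitted.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-pixel features (e.g. colour and texture responses) for the
    # user-labelled frames, with semantic labels such as 0=road, 1=vehicle, 2=other.
    X_train = np.random.rand(5000, 8)
    y_train = np.random.randint(0, 3, size=5000)

    forest = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
    forest.fit(X_train, y_train)

    # "Soft labels": per-pixel class probabilities rather than hard decisions,
    # which a temporal video model can then refine; a simple uncertainty estimate
    # is one minus the probability of the most likely class.
    X_new = np.random.rand(10000, 8)
    soft_labels = forest.predict_proba(X_new)      # shape (n_pixels, n_classes)
    uncertainty = 1.0 - soft_labels.max(axis=1)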

Relevance: 20.00%

Abstract:

Spread Transform (ST) is a quantization watermarking algorithm in which vectors of the wavelet coefficients of a host work are quantized, using one of two dithered quantizers, to embed hidden information bits; Loo had some success in applying such a scheme to still images. We extend ST to the video watermarking problem. Visibility considerations require that each spreading vector refer to corresponding pixels in each of several frames, that is, a multi-frame embedding approach. Use of the hierarchical complex wavelet transform (CWT) for a visual mask reduces computation and improves robustness to jitter and valumetric scaling. We present a method of recovering temporal synchronization at the detector, and give initial results demonstrating the robustness and capacity of the scheme.
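
A minimal sketch of spread-transform dither modulation on a single coefficient vector is given below, assuming a unit-norm spreading vector and an illustrative step size; the visual masking, multi-frame coefficient grouping and temporal synchronisation recovery described above are not shown.

    import numpy as np

    def st_embed(coeffs, spread, bit, delta=4.0):
        # coeffs: 1-D array of host wavelet coefficients (in the multi-frame scheme,
        # corresponding locations across several frames); spread: spreading vector.
        spread = spread / np.linalg.norm(spread)
        x = coeffs @ spread                       # project onto the spreading vector
        dither = 0.0 if bit == 0 else delta / 2   # the two quantizers are offset by delta/2
        q = np.round((x - dither) / delta) * delta + dither
        return coeffs + (q - x) * spread          # adjust only along the spreading direction

    def st_detect(coeffs, spread, delta=4.0):
        spread = spread / np.linalg.norm(spread)
        x = coeffs @ spread
        # Decode the bit whose dithered quantizer lattice lies closest to the projection.
        err0 = abs(x - np.round(x / delta) * delta)
        err1 = abs(x - (np.round((x - delta / 2) / delta) * delta + delta / 2))
        return 0 if err0 <= err1 else 1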