5 results for head model
in the Cambridge University Engineering Department Publications Database
Abstract:
In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. The central contribution is an illumination invariant, which we show to be suitable for recognition from video of loosely constrained head motion. In particular, there are three contributions: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation to exploit the proposed invariant and generalize in the presence of extreme illumination changes; (ii) we introduce a video sequence re-illumination algorithm to achieve fine alignment of two video sequences; and (iii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve robustness to unseen head poses. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 323 individuals and 1474 video sequences with extreme illumination, pose and head motion variation. Our system consistently achieved a nearly perfect recognition rate (over 99.7% on all four databases). © 2012 Elsevier Ltd. All rights reserved.
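To make the set-to-set matching step concrete, the sketch below treats each video as a cloud of per-frame descriptors and scores two videos with a robust nearest-neighbour distance. This is a minimal, hypothetical stand-in for the paper's same-identity likelihood over appearance manifolds, not the authors' implementation; the names frame_descriptors and robust_set_distance, the raw-pixel descriptor, and the 10th-percentile aggregation are all illustrative assumptions.

```python
import numpy as np

def frame_descriptors(frames):
    """Flatten each face crop into a unit-norm vector.

    frames: array of shape (n_frames, height, width); any illumination-
    normalised descriptor could be substituted for the raw pixels used here.
    """
    x = frames.reshape(len(frames), -1).astype(float)
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

def robust_set_distance(query, gallery, percentile=10):
    """Robust distance between two sets of frame descriptors.

    Each query frame is matched to its nearest gallery frame; the per-frame
    distances are then aggregated with a low percentile rather than the
    minimum, so a few badly lit or badly aligned frames cannot dominate.
    """
    d = np.linalg.norm(query[:, None, :] - gallery[None, :, :], axis=2)
    return np.percentile(d.min(axis=1), percentile)

def identify(query_frames, gallery_videos):
    """Return (best_identity, all_scores) over a dict of gallery videos."""
    q = frame_descriptors(query_frames)
    scores = {name: robust_set_distance(q, frame_descriptors(v))
              for name, v in gallery_videos.items()}
    return min(scores, key=scores.get), scores
```

The low-percentile aggregation plays the role of the robustness discussed above: a handful of poorly matched frames cannot dominate the score, while the bulk of well-matched frames determines the identity decision. The paper's likelihood additionally exploits manifold smoothness and re-illumination, which this sketch omits.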
Abstract:
This paper presents a novel, three-dimensional, single-pile model, formulated in the wavenumber domain and adapted to account for boundary conditions using the superposition of loading cases. The pile is modelled as a column in axial vibration and as an Euler-Bernoulli beam in lateral vibration. The surrounding soil is treated as a viscoelastic continuum. The response of the pile is presented in terms of the stiffness and damping coefficients, as well as the magnitude and phase of the pile-head frequency-response function. Comparison with existing models shows excellent agreement between this model, a boundary-element formulation, and an elastic-continuum-type formulation. This three-dimensional model has an accuracy equivalent to a 3D boundary-element model, and a runtime similar to a 2D plane-strain analytical model. Analysis of the response of the single pile illustrates a difference in axial and lateral vibration behaviour; the displacement along the pile is relatively invariant under axial loads, but in lateral vibration the pile exhibits localised deformations. This implies that a plane-strain assumption is valid for axial loading, but only at higher frequencies for lateral loading. © 2013 Elsevier Ltd.
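For orientation, the block below sketches the textbook governing equations that correspond to this modelling choice: an axially vibrating column and a laterally vibrating Euler-Bernoulli beam, each coupled to a distributed soil reaction standing in for the viscoelastic continuum. The symbols (E_p, A_p, I_p, rho_p, f_a, f_l) are illustrative and not necessarily the paper's notation.

```latex
% Axial vibration: the pile as a column (rod); f_a is the distributed
% axial soil reaction plus any applied axial load per unit length.
\rho_p A_p \,\frac{\partial^2 u(z,t)}{\partial t^2}
  - E_p A_p \,\frac{\partial^2 u(z,t)}{\partial z^2} = f_a(z,t)

% Lateral vibration: the pile as an Euler-Bernoulli beam; f_l is the
% distributed lateral soil reaction plus any applied lateral load.
E_p I_p \,\frac{\partial^4 w(z,t)}{\partial z^4}
  + \rho_p A_p \,\frac{\partial^2 w(z,t)}{\partial t^2} = f_l(z,t)
```

Fourier-transforming these equations along the pile axis z (the wavenumber domain mentioned above) turns the spatial derivatives into algebraic factors of the wavenumber, which is consistent with the model achieving a runtime comparable to a 2D plane-strain analysis.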
Abstract:
We present a model for the self-organized formation of place cells, head-direction cells, and spatial-view cells in the hippocampal formation based on unsupervised learning on quasi-natural visual stimuli. The model comprises a hierarchy of Slow Feature Analysis (SFA) nodes, which were recently shown to reproduce many properties of complex cells in the early visual system. The system extracts a distributed grid-like representation of position and orientation, which is transcoded by sparse coding into a localized place-field, head-direction, or view representation. The type of cells that develops depends solely on the relevant input statistics, i.e., the movement pattern of the simulated animal. The numerical simulations are complemented by a mathematical analysis that allows us to accurately predict the output of the top SFA layer.
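To make the SFA stage concrete, here is a minimal linear Slow Feature Analysis step in NumPy: whiten the input signal, then keep the directions along which the whitened signal varies most slowly over time. This is a sketch of the generic SFA algorithm only; the model described above stacks such nodes hierarchically on image patches with nonlinear expansions and adds the sparse-coding readout, none of which is shown here.

```python
import numpy as np

def linear_sfa(x, n_components):
    """Minimal linear Slow Feature Analysis.

    x: array of shape (T, D), one row per time step.
    Returns the n_components slowest features of x (unit variance,
    ordered from slowest to fastest), shape (T, n_components).
    """
    x = x - x.mean(axis=0)                        # centre the signal

    # Whitening: rotate and rescale so the covariance becomes the identity.
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10                          # drop degenerate directions
    whiten = evecs[:, keep] / np.sqrt(evals[keep])
    z = x @ whiten

    # Slowness: eigenvectors of the covariance of the temporal differences,
    # taken in ascending order, give the slowest-varying directions.
    dz = np.diff(z, axis=0)
    devals, devecs = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ devecs[:, :n_components]
```

Fed with the visual input generated along a simulated trajectory, the slowest outputs of such a hierarchy are what the sparse-coding stage then converts into localized place-field, head-direction, or view responses.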
Abstract:
Creating a realistic talking head, which, given arbitrary text as input, generates a realistic-looking face speaking that text, has been a long-standing research challenge. Talking heads that cannot express emotion have been made to look very realistic using concatenative approaches [Wang et al. 2011]; however, allowing the head to express emotion is a much more challenging problem, and model-based approaches have shown promise in this area. While 2D talking heads currently look more realistic than their 3D counterparts, they are limited both in the range of poses they can express and in the lighting conditions under which they can be rendered. Previous attempts to produce videorealistic 3D expressive talking heads [Cao et al. 2005] have produced encouraging results but have not yet achieved the level of realism of their 2D counterparts.