50 results for Face recognition from video


Relevance:

100.00%

Publisher:

Abstract:

This paper discusses user target intention recognition algorithms for pointing-and-clicking tasks, aimed at reducing users' pointing time and difficulty. A prediction method that compares the bearing angles to candidate targets, proposed as one of the earliest such algorithms [1], is compared with a Kalman filter prediction algorithm. Prediction accuracy and sensitivity are used as performance criteria. The outcomes of a standard point-and-click experiment, collected from both able-bodied and impaired users, are used for the performance comparison. © 2013 Springer-Verlag Berlin Heidelberg.
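A minimal sketch of the Kalman-filter side of such a comparison (assumed constant-velocity dynamics, illustrative noise settings, and a hypothetical `predict_target` helper; not the paper's implementation):

```python
import numpy as np

# Minimal constant-velocity Kalman filter over 2D cursor samples.
# State x = [px, py, vx, vy]; dt, q, r are assumed tuning values.
def make_filter(dt=0.02, q=1e-2, r=1.0):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)
    return F, H, q * np.eye(4), r * np.eye(2)

def kalman_track(measurements, horizon=0.3, dt=0.02):
    """Filter a cursor trajectory and extrapolate its endpoint `horizon` seconds ahead."""
    F, H, Q, R = make_filter(dt)
    x = np.zeros(4)
    x[:2] = measurements[0]
    P = np.eye(4)
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the observed cursor position
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x[:2] + horizon * x[2:]   # extrapolated endpoint

def predict_target(measurements, targets):
    """Return the index of the candidate target nearest the extrapolated endpoint."""
    endpoint = kalman_track(np.asarray(measurements, float))
    d = np.linalg.norm(np.asarray(targets, float) - endpoint, axis=1)
    return int(np.argmin(d))
```

A bearing-angle variant would instead rank candidates by the angle between the current motion direction and the direction from the cursor to each target.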

Relevance:

100.00%

Publisher:

Abstract:

Statistical approaches for building non-rigid deformable models, such as the Active Appearance Model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images that can be used for building AAMs for fitting and tracking. This approach employs the idea of initially learning the correspondences between landmarks in a frontal image and a set of training images with a face in arbitrary poses. Using this learner, virtual images of unseen faces at any arbitrary pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any random pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases. © 2009 IEEE.
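The landmark-transfer step can be pictured as a regression from frontal landmark coordinates to the corresponding coordinates at a target pose, after which the frontal texture is warped to the predicted shape. The sketch below uses a per-pose ridge regressor purely as an illustrative stand-in for the paper's learner; the class name and data layout are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

class LandmarkTransfer:
    """Illustrative per-pose regressor: frontal landmarks -> landmarks at a target pose.

    `frontal` is an (N, 2*L) array of stacked landmark vectors for N training faces;
    `posed_by_pose[pose]` is the matching (N, 2*L) array for that pose.
    The subsequent texture warp (frontal image -> predicted shape) is omitted here.
    """
    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.models = {}

    def fit(self, frontal, posed_by_pose):
        for pose, posed in posed_by_pose.items():
            model = Ridge(alpha=self.alpha)
            model.fit(frontal, posed)          # multi-output linear regression
            self.models[pose] = model
        return self

    def predict(self, frontal_landmarks, pose):
        """Predict landmark locations of an unseen frontal face at `pose` as an (L, 2) array."""
        out = self.models[pose].predict(frontal_landmarks.reshape(1, -1))
        return out.reshape(-1, 2)
```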

Relevance:

100.00%

Publisher:

Abstract:

This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
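A rough sketch of the discriminative stage: binary silhouettes rendered from the 3D shape priors at sampled phenotypes, poses, and viewpoints are reduced to simple shape descriptors and fed to a random forest. The occupancy-grid descriptor and function names below are illustrative assumptions, not the paper's features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def silhouette_features(mask, grid=(16, 16)):
    """Coarse occupancy grid of a binary silhouette as a simple shape descriptor."""
    mask = np.asarray(mask, bool)
    gy, gx = grid
    h, w = mask.shape
    feats = np.zeros(grid)
    for i in range(gy):
        for j in range(gx):
            cell = mask[i * h // gy:(i + 1) * h // gy, j * w // gx:(j + 1) * w // gx]
            feats[i, j] = cell.mean() if cell.size else 0.0
    return feats.ravel()

def train_phenotype_forest(projections, labels, n_trees=100):
    """`projections`: silhouettes synthesized from the 3D priors; `labels`: phenotype classes."""
    X = np.stack([silhouette_features(m) for m in projections])
    return RandomForestClassifier(n_estimators=n_trees).fit(X, labels)

def classify_query(forest, query_mask):
    """Infer the phenotype class of a single query silhouette."""
    return forest.predict([silhouette_features(query_mask)])[0]
```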

Relevance:

100.00%

Publisher:

Abstract:

The dynamic response of end-clamped monolithic beams and sandwich beams of equal areal mass has been measured by loading the beams at mid-span with metal foam projectiles to simulate localised blast loading. The sandwich beams were made from carbon fibre laminate and comprised identical face sheets and a square-honeycomb core. The transient deflection of the beams was determined as a function of projectile momentum, and the measured response was compared with finite element simulations based upon a damage mechanics approach. A range of failure modes was observed in the sandwich beams, including core fracture, plug-type shear failure of the core, debonding of the face sheets from the core, and tensile tearing of the face sheets at the supports. In contrast, the monolithic beams failed by a combination of delamination of the plies and tensile failure at the supports. The finite element simulations of the beam response were accurate provided the carbon fibre properties were endowed with rate-sensitive damage growth. The relative performance of monolithic and sandwich beams was quantified by the maximum transverse deflection at mid-span for a given projectile momentum. It was found that the sandwich beams outperformed both monolithic composite beams and steel sandwich beams with a square-honeycomb core. However, the composite beams failed catastrophically at a lower projectile impulse than the steel beams due to the lower ductility of the composite material. © 2011 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

We develop a convex relaxation of maximum a posteriori estimation of a mixture of regression models. Although our relaxation involves a semidefinite matrix variable, we reformulate the problem to eliminate the need for general semidefinite programming. In particular, we provide two reformulations that admit fast algorithms. The first is a max-min spectral reformulation exploiting quasi-Newton descent. The second is a min-min reformulation consisting of fast alternating steps of closed-form updates. We evaluate the methods against Expectation-Maximization in a real problem of motion segmentation from video data.
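For context, the Expectation-Maximization baseline against which the proposed relaxations are evaluated can be sketched for a mixture of linear regressions as follows. This is a minimal version assuming a shared noise variance and a bias column already appended to X; it is the baseline, not the convex relaxation itself.

```python
import numpy as np

def em_mixture_regression(X, y, k=2, iters=100, seed=0):
    """EM for a k-component mixture of linear regressions (illustrative baseline)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(k, d))   # regression weights per component
    pi = np.full(k, 1.0 / k)                 # mixing proportions
    sigma2 = float(np.var(y)) + 1e-6         # shared noise variance
    for _ in range(iters):
        # E-step: responsibilities from the Gaussian likelihood of the residuals
        resid = y[None, :] - W @ X.T                          # (k, n)
        logp = (np.log(pi)[:, None]
                - 0.5 * resid ** 2 / sigma2
                - 0.5 * np.log(2 * np.pi * sigma2))
        logp -= logp.max(axis=0, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=0, keepdims=True)                     # (k, n)
        # M-step: weighted least squares per component, then pi and sigma2
        for j in range(k):
            A = X.T @ (R[j][:, None] * X) + 1e-8 * np.eye(d)
            W[j] = np.linalg.solve(A, X.T @ (R[j] * y))
        pi = R.mean(axis=1)
        sigma2 = float((R * (y[None, :] - W @ X.T) ** 2).sum() / n)
    return W, pi, sigma2
```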

Relevance:

50.00%

Publisher:

Abstract:

We propose an algorithm for semantic segmentation based on 3D point clouds derived from ego-motion. We motivate five simple cues designed to model specific patterns of motion and 3D world structure that vary with object category. We introduce features that project the 3D cues back to the 2D image plane while modeling spatial layout and context. A randomized decision forest combines many such features to achieve a coherent 2D segmentation and recognize the object categories present. Our main contribution is to show how semantic segmentation is possible based solely on motion-derived 3D world structure. Our method works well on sparse, noisy point clouds, and unlike existing approaches, does not need appearance-based descriptors. Experiments were performed on a challenging new video database containing sequences filmed from a moving car in daylight and at dusk. The results confirm that indeed, accurate segmentation and recognition are possible using only motion and 3D world structure. Further, we show that the motion-derived information complements an existing state-of-the-art appearance-based method, improving both qualitative and quantitative performance. © 2008 Springer Berlin Heidelberg.
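A schematic of the classification stage: per-pixel maps of motion-derived 3D cues projected into the image are stacked into feature vectors and classified by a randomized forest. The cue names in the comments and the use of extremely randomized trees are illustrative assumptions, not the paper's exact five cues or forest.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def stack_cues(cue_maps):
    """cue_maps: list of HxW arrays, each a motion-derived 3D cue projected into the
    image plane (e.g. height above ground, distance to the camera path, point density).
    Returns an (H*W, C) per-pixel feature matrix."""
    return np.stack([c.ravel() for c in cue_maps], axis=1)

def train_segmenter(cue_maps_per_frame, label_maps, n_trees=50):
    """Train a randomized forest on per-pixel cue features with per-pixel class labels."""
    X = np.vstack([stack_cues(cues) for cues in cue_maps_per_frame])
    y = np.concatenate([labels.ravel() for labels in label_maps])
    # Extremely-randomized trees as a stand-in for the paper's randomized decision forest.
    return ExtraTreesClassifier(n_estimators=n_trees, n_jobs=-1).fit(X, y)

def segment(forest, cue_maps):
    """Per-pixel category prediction for one frame, reshaped back to the image grid."""
    h, w = cue_maps[0].shape
    return forest.predict(stack_cues(cue_maps)).reshape(h, w)
```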

Relevance:

40.00%

Publisher:

Abstract:

In this paper, a novel cortex-inspired feed-forward hierarchical object recognition system based on complex wavelets is proposed and tested. Complex wavelets offer three key properties for object representation: shift invariance, which enables the extraction of stable local features; good directional selectivity, which simplifies the determination of image orientations; and limited redundancy, which allows for efficient signal analysis using the multi-resolution decomposition offered by complex wavelets. We find that the implementation of the HMAX model for object recognition in [1, 2] is rather over-complete and includes much redundant information and processing, and we have optimized the structure of the model to make it more efficient. Specifically, we have used the standard Caltech 5 dataset to compare against Serre's model in [2] (which employs Gabor filter banks). Results demonstrate that the complex wavelet model achieves a speed improvement of about 4 times over the Serre model and gives comparable recognition performance. © 2011 IEEE.
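A simplified front end in this spirit can be sketched with the dual-tree complex wavelet transform: oriented complex magnitudes play the role of the S1 responses and local max pooling the role of C1. The sketch assumes the third-party `dtcwt` Python package and is not the paper's optimized model.

```python
import numpy as np
import dtcwt  # assumed available: pip install dtcwt

def s1_c1_features(image, nlevels=3, pool=4):
    """Simplified S1/C1-style feature extraction using the dual-tree complex wavelet
    transform: oriented complex magnitudes (S1) followed by local max pooling (C1)."""
    transform = dtcwt.Transform2d()
    coeffs = transform.forward(image.astype(float), nlevels=nlevels)
    features = []
    for hp in coeffs.highpasses:          # one (H, W, 6) complex array per scale
        mag = np.abs(hp)                  # approximately shift-invariant oriented responses
        h, w, o = mag.shape
        h2, w2 = (h // pool) * pool, (w // pool) * pool
        pooled = (mag[:h2, :w2]
                  .reshape(h2 // pool, pool, w2 // pool, pool, o)
                  .max(axis=(1, 3)))      # local max pooling over pool x pool cells
        features.append(pooled.ravel())
    return np.concatenate(features)
```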

Relevance:

40.00%

Publisher:

Abstract:

The automated detection of structural elements (e.g., columns and beams) from visual data can facilitate many construction and maintenance applications, but research in this area is still at an early stage. Existing methods rely solely on color and texture information, which makes them unable to distinguish individual structural elements when the elements are connected to one another and made of the same material. This paper presents a novel method for the automated detection of concrete columns from visual data. The method overcomes this limitation by combining the columns' boundary information with their color and texture cues. It starts by recognizing long vertical lines in an image/video frame through edge detection and the Hough transform. The bounding rectangle for each pair of lines is then constructed. When the rectangle resembles the shape of a column, and the color and texture contained between the pair of lines match one of the concrete samples in the knowledge base, a concrete column surface is assumed to be located. In this way, each concrete column in the image/video is detected. The method was tested on real images and videos, and the results were compared with manual detection to indicate the method's validity.
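A minimal sketch of this pipeline using OpenCV (Canny edges, probabilistic Hough transform, vertical-line pairing, and a crude color check); the thresholds and the mean-HSV comparison are illustrative stand-ins for the paper's knowledge-base matching:

```python
import cv2
import numpy as np

def detect_vertical_lines(image, min_len=100, angle_tol_deg=10):
    """Find long, near-vertical line segments via Canny edges and the probabilistic Hough transform."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    vertical = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
            if abs(angle - 90) < angle_tol_deg:
                vertical.append((x1, y1, x2, y2))
    return vertical

def column_candidates(vertical, min_aspect=2.0, max_width=200):
    """Pair vertical lines into bounding rectangles whose shape resembles a column."""
    boxes = []
    for i, a in enumerate(vertical):
        for b in vertical[i + 1:]:
            xs = [a[0], a[2], b[0], b[2]]
            ys = [a[1], a[3], b[1], b[3]]
            w, h = max(xs) - min(xs), max(ys) - min(ys)
            if 0 < w < max_width and h / max(w, 1) >= min_aspect:
                boxes.append((min(xs), min(ys), w, h))
    return boxes

def looks_like_concrete(image, box, sample_hsv_mean, tol=30):
    """Crude color check of the candidate region against a concrete sample's mean HSV."""
    x, y, w, h = box
    patch = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    return bool(np.all(np.abs(patch.reshape(-1, 3).mean(axis=0) - sample_hsv_mean) < tol))
```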