106 results for Face.


Relevance:

20.00%

Abstract:

The self-quotient image is a biologically inspired representation which has been proposed as an illumination invariant feature for automatic face recognition. Owing to the lack of strong domain-specific assumptions underlying this representation, it can be readily extracted from raw images irrespective of the person's pose, facial expression, etc. What makes the self-quotient image additionally attractive is that it can be computed quickly and in a closed form using simple low-level image operations. However, it is generally accepted that the self-quotient is insufficiently robust to large illumination changes, which is why it is mainly used in applications in which low precision is an acceptable compromise for high recall (e.g. retrieval systems). Yet, in this paper we demonstrate that the performance of this representation in challenging illuminations has been greatly underestimated. We show that its error rate can be reduced by over an order of magnitude, without any changes to the representation itself. Rather, we focus on the manner in which the dissimilarity between two self-quotient images is computed. By modelling the dominant sources of noise affecting the representation, we propose and evaluate a series of different dissimilarity measures, the best of which reduces the initial error rate of 63.0% down to only 5.7% on the notoriously challenging YaleB data set.
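As an illustration of how cheaply the representation itself can be computed, below is a minimal sketch of a self-quotient image using plain Gaussian smoothing as the denominator. The smoothing scale, the epsilon guard and the placeholder L2 dissimilarity are illustrative choices, not values or measures taken from the paper, whose contribution is precisely a noise-aware dissimilarity measure.

```python
# Minimal self-quotient image (SQI) sketch: Q = I / (G_sigma * I).
import numpy as np
import cv2

def self_quotient_image(gray, sigma=2.0, eps=1e-6):
    """Return the self-quotient image of a single-channel face image."""
    gray = gray.astype(np.float64)
    smoothed = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)
    return gray / (smoothed + eps)   # eps avoids division by zero in dark regions

# Two SQIs of the same face under different lighting can then be compared with
# any dissimilarity measure; plain L2 is only a placeholder here.
def l2_dissimilarity(q1, q2):
    return float(np.linalg.norm(q1 - q2))
```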

Relevance:

20.00%

Abstract:

Over the course of the last decade, infrared (IR) and particularly thermal IR imaging based face recognition has emerged as a promising complement to conventional, visible spectrum based approaches which continue to struggle when applied in practice. While inherently insensitive to visible spectrum illumination changes, IR data introduces specific challenges of its own, most notably sensitivity to factors which affect facial heat emission patterns, e.g. emotional state, ambient temperature, and alcohol intake. In addition, facial expression and pose changes are more difficult to correct in IR images because they are less rich in high frequency detail which is an important cue for fitting any deformable model. In this paper we describe a novel method which addresses these major challenges. Specifically, when comparing two thermal IR images of faces, we mutually normalize their poses and facial expressions by using an active appearance model (AAM) to generate synthetic images of the two faces with a neutral facial expression and in the same view (the average of the two input views). This is achieved by piecewise affine warping which follows AAM fitting. A major contribution of our work is the use of an AAM ensemble in which each AAM is specialized to a particular range of poses and a particular region of the thermal IR face space. Combined with the contributions from our previous work which addressed the problem of reliable AAM fitting in the thermal IR spectrum, and the development of a person-specific representation robust to transient changes in the pattern of facial temperature emissions, the proposed ensemble framework accurately matches faces across the full range of yaw from frontal to profile, even in the presence of scale variation (e.g. due to the varying distance of a subject from the camera). The effectiveness of the proposed approach is demonstrated on the largest public database of thermal IR images of faces and a newly acquired data set of thermal IR motion videos. Our approach achieved perfect recognition performance on both data sets, significantly outperforming the current state of the art methods even when they are trained with multiple images spanning a range of head views.
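A minimal sketch of the piecewise affine warping step referred to above, assuming that source and target landmark sets (e.g. produced by AAM fitting) and a shared triangulation are already available. The data layout and the per-triangle full-frame warp are simplifications for illustration, not the paper's implementation.

```python
# Piecewise affine warp: each triangle of the mesh gets its own affine map.
import numpy as np
import cv2

def piecewise_affine_warp(img, src_pts, dst_pts, triangles, out_shape):
    """Warp `img` so that landmarks `src_pts` move to `dst_pts`, triangle by triangle."""
    out = np.zeros(out_shape, dtype=img.dtype)
    for tri in triangles:                                  # three landmark indices
        src_tri = np.float32([src_pts[i] for i in tri])
        dst_tri = np.float32([dst_pts[i] for i in tri])
        # Affine map taking the source triangle onto the destination triangle.
        M = cv2.getAffineTransform(src_tri, dst_tri)
        # Warping the full frame per triangle keeps the sketch short; real code
        # would crop to each triangle's bounding box.
        warped = cv2.warpAffine(img, M, (out_shape[1], out_shape[0]))
        # Composite only the pixels inside the destination triangle.
        mask = np.zeros(out_shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(mask, np.int32(dst_tri), 1)
        out[mask == 1] = warped[mask == 1]
    return out
```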

Relevance:

20.00%

Abstract:

Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.

Relevance:

20.00%

Abstract:

Over the course of the last decade, infrared (IR) and particularly thermal IR imaging based face recognition has emerged as a promising complement to conventional, visible spectrum based approaches which continue to struggle when applied in the real world. While inherently insensitive to visible spectrum illumination changes, IR images introduce specific challenges of their own, most notably sensitivity to factors which affect facial heat emission patterns, e.g. emotional state, ambient temperature, and alcohol intake. In addition, facial expression and pose changes are more difficult to correct in IR images because they are less rich in high frequency detail, which is an important cue for fitting any deformable model. In this paper we describe a novel method which addresses these major challenges. Specifically, to normalize for pose and facial expression changes we generate a synthetic frontal image of a face in a canonical, neutral facial expression from an image of the face in an arbitrary pose and facial expression. This is achieved by piecewise affine warping which follows active appearance model (AAM) fitting. This is the first publication which explores the use of an AAM on thermal IR images; we propose a pre-processing step which enhances detail in thermal images, making AAM convergence faster and more accurate. To overcome the problem of thermal IR image sensitivity to the exact pattern of facial temperature emissions, we describe a representation based on reliable anatomical features. In contrast to previous approaches, our representation is not binary; rather, our method accounts for the reliability of the extracted features. This makes the proposed representation much more robust both to pose and scale changes. The effectiveness of the proposed approach is demonstrated on the largest public database of thermal IR images of faces, on which it achieved a 100% identification rate, significantly outperforming previously described methods.
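The paper's exact detail-enhancing pre-processing step is not reproduced here. As a hedged stand-in, the sketch below uses generic unsharp masking to boost the kind of high-frequency content a deformable model fit relies on; the gain and smoothing parameters are illustrative.

```python
# Generic high-frequency boost (unsharp masking) for a single-channel thermal image.
import numpy as np
import cv2

def enhance_thermal_detail(thermal, sigma=3.0, gain=1.5):
    """Emphasize fine structure (edges, anatomical boundaries) before model fitting."""
    thermal = thermal.astype(np.float32)
    low_pass = cv2.GaussianBlur(thermal, (0, 0), sigma)
    high_pass = thermal - low_pass            # fine structure only
    enhanced = thermal + gain * high_pass     # boost it
    # Rescale to 8-bit range for downstream processing.
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```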

Relevance:

20.00%

Abstract:

Species that have temperature-dependent sex determination (TSD) often produce highly skewed offspring sex ratios contrary to long-standing theoretical predictions. This ecological enigma has provoked concern that climate change may induce the production of single-sex generations and hence lead to population extirpation. All species of sea turtles exhibit TSD, many are already endangered, and most already produce sex ratios skewed to the sex produced at warmer temperatures (females). We tracked male loggerhead turtles (Caretta caretta) from Zakynthos, Greece, throughout the entire interval between successive breeding seasons and identified individuals on their breeding grounds, using photoidentification, to determine breeding periodicity and operational sex ratios. Males returned to breed at least twice as frequently as females. We estimated that the hatchling sex ratio of 70:30 female to male for this rookery will translate into an overall operational sex ratio (OSR) (i.e., the ratio of the total number of males to females breeding each year) of close to 50:50 female to male. We followed three male turtles for between 10 and 12 months, during which time they all traveled back to the breeding grounds. Flipper tagging revealed that the proportions of females returning to nest after intervals of 1, 2, 3, and 4 years were 0.21, 0.38, 0.29, and 0.12, respectively (mean interval 2.3 years). A further nine male turtles were tracked for short periods to determine their departure date from the breeding grounds. These departure dates were combined with a photoidentification data set of 165 individuals identified on in-water transect surveys at the start of the breeding season to develop a statistical model of the population dynamics. This model produced a maximum likelihood estimate that males visit the breeding site 2.6 times more often than females (95% CI 2.1, 3.1), which was consistent with the data from satellite tracking and flipper tagging. Increased frequency of male breeding will help ameliorate female-biased hatchling sex ratios. Combined with the ability of males to fertilize the eggs of many females and of females to store sperm to fertilize many clutches, our results imply that the effects of climate change on the viability of sea turtle populations are likely to be less acute than previously suspected.
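A rough back-of-the-envelope check of the reported figures, under the simplifying assumption that the operational sex ratio scales with the product of hatchling proportion and relative breeding frequency; this is not the statistical model used in the study.

```python
# Back-of-the-envelope OSR check using the figures quoted in the abstract.
female_hatchlings, male_hatchlings = 0.70, 0.30
male_breeding_frequency = 2.6   # males visit the breeding site ~2.6x more often

females_breeding = female_hatchlings * 1.0
males_breeding = male_hatchlings * male_breeding_frequency
osr_female = females_breeding / (females_breeding + males_breeding)
print(f"Approximate OSR: {osr_female:.0%} female : {1 - osr_female:.0%} male")
# -> roughly 47% female : 53% male, i.e. close to the 50:50 reported above
```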

Relevance:

20.00%

Abstract:

In this chapter we focus on face appearance-based biometrics. The cheap and readily available hardware used to acquire the data, the non-invasiveness of the modality, and the ease of employing it from a distance and without the awareness of the user are just some of the reasons why face biometrics continue to be of great practical interest. However, a number of research challenges remain. Specifically, face biometrics have traditionally focused on images acquired in the visible light spectrum, and these are greatly affected by extrinsic factors such as illumination, camera angle (or, equivalently, head pose) and occlusion. In practice, the effects of changing pose are usually least problematic and can oftentimes be overcome by acquiring data over a time period, e.g., by tracking a face in a surveillance video. Consequently, image sequence or image set matching has recently gained a lot of attention in the literature [137–139] and is the paradigm adopted in this chapter as well. In other words, we assume that the training image set for each individual contains some variability in pose, but is not obtained in scripted conditions or in controlled illumination. In contrast, illumination is much more difficult to deal with: the illumination setup is in most cases not practical to control and its physics is difficult to accurately model. Thermal spectrum imagery is useful in this regard as it is virtually insensitive to illumination changes, as illustrated in Fig. 6.1. On the other hand, it lacks much of the individual, discriminating facial detail contained in visual images. In this sense, the two modalities can be seen as complementing each other. The key idea behind the system presented in this chapter is that robustness to extreme illumination changes can be achieved by fusing the two. This paradigm will further prove useful when we consider the difficulty of recognition in the presence of occlusion caused by prescription glasses.

Relevance:

20.00%

Abstract:

A problem of high practical appeal that still presents outstanding challenges, computer-based face recognition remains a topic of extensive research attention. In this paper we are specifically interested in the task of identifying a person from multiple training and query images. Thus, a novel method is proposed which advances the state of the art in set-based face recognition. Our method is based on a previously described invariant in the form of generic shape-illumination effects. The contributions include: (i) an analysis of the computational demands of the original method and a demonstration of its practical limitations, (ii) a novel representation of personal appearance in the form of linked mixture models in image and pose-signature spaces, and (iii) an efficient (in terms of storage needs and matching time) manifold re-illumination algorithm based on the aforementioned representation. An evaluation and comparison of the proposed method with the original generic shape-illumination algorithm shows that comparably high recognition rates are achieved on a large data set (1.5% error on 700 face sets containing 100 individuals and extreme illumination variation) with a dramatic improvement in matching speed (over 700 times for sets containing 1600 faces) and storage requirements (independent of the number of training images).

Relevance:

20.00%

Abstract:

Face recognition from a single image remains an important task in many practical applications and a significant research challenge. Some of the challenges are inherent to the problem, for example due to changing lighting conditions. Others, no less significant, are of a practical nature – face recognition algorithms cannot be assumed to operate on perfect data, but rather often on data that has already been subject to pre-processing errors (e.g. localization and registration errors). This paper introduces a novel method for face recognition that is both trained and queried using only a single image per subject. The key concept, motivated by abundant prior work on face appearance manifolds, is that of face part manifolds – it is shown that the appearance seen through a sliding window overlaid over an image of a face traces a trajectory over a 2D manifold embedded in the image space. We present a theoretical argument for the use of this representation and demonstrate how it can be effectively exploited in single image based recognition. It is shown that while inheriting the advantages of local feature methods, it also implicitly captures the geometric relationship between discriminative facial features and is naturally robust to face localization errors. Our theoretical arguments are verified in an experimental evaluation on the Yale Face Database.
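A minimal sketch of the face part representation described above: the set of appearances seen through a sliding window as it moves over a face image, each window position contributing one sample of the 2D manifold. The window size and stride are illustrative parameters.

```python
# Sample the face part manifold by sliding a window over the face image.
import numpy as np

def sliding_window_patches(face, win=24, stride=4):
    """Return vectorized sliding-window patches; each row is one sample of the
    2D manifold parameterized by the window's (x, y) position."""
    h, w = face.shape[:2]
    patches = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patches.append(face[y:y + win, x:x + win].ravel())
    return np.asarray(patches)   # shape: (num_positions, win * win)
```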

Relevance:

20.00%

Abstract:

Our aim in this paper is to robustly match frontal faces in the presence of extreme illumination changes, using only a single training image per person and a single probe image. In the illumination conditions we consider, which include those with the dominant light source placed behind and to the side of the user, directly above and pointing downwards, or indeed below and pointing upwards, this is a most challenging problem. The presence of sharp cast shadows, large poorly illuminated regions of the face, quantum and quantization noise, and other nuisance effects makes it difficult to extract a sufficiently discriminative yet robust representation. We introduce a representation which is based on image gradient directions near robust edges which correspond to characteristic facial features. Robust edges are extracted using a cascade of processing steps, each of which seeks to harness further discriminative information or normalize for a particular source of extra-personal appearance variability. The proposed representation was evaluated on the extremely difficult YaleB data set. Unlike most of the previous work, we include all available illuminations, perform training using a single image per person, and match these also to a single probe image. In this challenging evaluation setup, the proposed gradient edge map achieved a 0.8% error rate, demonstrating nearly perfect receiver operating characteristic (ROC) curve behaviour. This is by far the best performance reported in the literature for this setup, with the best previously proposed methods attaining error rates of approximately 6–7%.
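A minimal sketch in the spirit of the representation described above: per-pixel gradient directions retained only near strong edges. The single magnitude threshold used here stands in for the paper's cascade of processing steps, and the cosine-based comparison is only an illustrative choice.

```python
# Gradient directions restricted to strong edges, plus a simple comparison.
import numpy as np
import cv2

def gradient_direction_map(gray, mag_percentile=90):
    """Return per-pixel gradient directions and a mask of strong edges."""
    gray = gray.astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    edge_mask = magnitude >= np.percentile(magnitude, mag_percentile)
    return direction, edge_mask

def direction_similarity(d1, m1, d2, m2):
    """Mean cosine of the angular difference where both faces have reliable edges."""
    joint = m1 & m2
    return float(np.mean(np.cos(d1[joint] - d2[joint]))) if joint.any() else 0.0
```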

Relevance:

20.00%

Abstract:

Illumination invariance remains the most researched, yet the most challenging aspect of automatic face recognition. In this paper we investigate the discriminative power of colour-based invariants in the presence of large illumination changes between training and test data, when appearance changes due to cast shadows and non-Lambertian effects are significant. Specifically, there are three main contributions: (i) we employ a more sophisticated photometric model of the camera and show how its parameters can be estimated, (ii) we derive several novel colour-based face invariants, and (iii) on a large database of video sequences we examine and evaluate the largest number of colour-based representations in the literature. Our results suggest that colour invariants do have a substantial discriminative power which may increase the robustness and accuracy of recognition from low resolution images.
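For concreteness, below is one classical colour-based invariant (normalized rgb chromaticities), given only as a generic member of the family of representations evaluated in the paper; the invariants actually derived there are based on an estimated photometric camera model and are not reproduced here.

```python
# Normalized rgb chromaticities: invariant to a uniform scaling of image intensity.
import numpy as np

def normalized_chromaticity(bgr):
    """Map an 8-bit BGR image to (r, g) chromaticities."""
    rgb = bgr[..., ::-1].astype(np.float64)          # BGR -> RGB
    total = rgb.sum(axis=-1, keepdims=True) + 1e-6   # guard against black pixels
    chroma = rgb / total
    return chroma[..., :2]                           # (r, g); b = 1 - r - g is redundant
```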

Relevance:

20.00%

Abstract:

We consider the problem of matching a face in a low resolution query video sequence against a set of higher quality gallery sequences. This problem is of interest in many applications, such as law enforcement. Our main contribution is an extension of the recently proposed Generic Shape-Illumination Manifold (gSIM) framework. Specifically, (i) we show how super-resolution across pose and scale can be achieved implicitly, by off-line learning of subsampling artefacts; (ii) we use this result to propose an extension to the statistical model of the gSIM by compounding it with a hierarchy of subsampling models at multiple scales; and (iii) we describe an extensive empirical evaluation of the method on over 1300 video sequences – we first measure the degradation in performance of the original gSIM algorithm as query sequence resolution is decreased, and then show that the proposed extension reduces the mean recognition error rate by over 50%.
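A minimal sketch of the kind of off-line data generation implied by contribution (i): simulating low-resolution conditions by subsampling high-quality gallery frames at several scales, so that the resulting artefacts can be learnt. The scale factors and interpolation choices are illustrative assumptions.

```python
# Simulate low-resolution query conditions from a high-quality gallery frame.
import cv2

def simulate_subsampling(frame, scales=(0.5, 0.25, 0.125)):
    """Return versions of `frame` degraded by down- and re-upsampling at several scales."""
    h, w = frame.shape[:2]
    degraded = []
    for s in scales:
        small = cv2.resize(frame, (max(1, int(w * s)), max(1, int(h * s))),
                           interpolation=cv2.INTER_AREA)
        degraded.append(cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR))
    return degraded
```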

Relevance:

20.00%

Abstract:

In spite of over two decades of intense research, illumination and pose invariance remain prohibitively challenging aspects of face recognition for most practical applications. The objective of this work is to recognize faces using video sequences both for training and recognition input, in a realistic, unconstrained setup in which lighting, pose and user motion pattern have a wide variability and face images are of low resolution. In particular there are three areas of novelty: (i) we show how a photometric model of image formation can be combined with a statistical model of generic face appearance variation, learnt offline, to generalize in the presence of extreme illumination changes; (ii) we use the smoothness of geodesically local appearance manifold structure and a robust same-identity likelihood to achieve invariance to unseen head poses; and (iii) we introduce an accurate video sequence “reillumination” algorithm to achieve robustness to face motion patterns in video. We describe a fully automatic recognition system based on the proposed method and an extensive evaluation on 171 individuals and over 1300 video sequences with extreme illumination, pose and head motion variation. On this challenging data set our system consistently demonstrated a nearly perfect recognition rate (over 99.7%), significantly outperforming state-of-the-art commercial software and methods from the literature.

Relevance:

20.00%

Abstract:

The objective of this work is to recognize faces using sets of images in the visual and thermal spectra. This is challenging because the former is greatly affected by illumination changes, while the latter frequently contains occlusions due to eye-wear and is inherently less discriminative. Our method is based on a fusion of the two modalities. Specifically, we (i) examine the effects of preprocessing the data in each domain, (ii) examine the fusion of holistic and local facial appearance, and (iii) propose an algorithm for combining the similarity scores in the visual and thermal spectra in the presence of prescription glasses and significant pose variations, using a small number of training images (5-7). Our system achieved a high correct identification rate of 97% on a freely available test set of 29 individuals with extreme illumination changes.
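A minimal sketch of score-level fusion in the spirit of contribution (iii): the visual and thermal similarity scores for a gallery identity are combined with a weight that is reduced for the thermal channel when eye-wear is detected. The weighting rule here is an illustrative assumption, not the algorithm proposed in the paper.

```python
# Score-level fusion of visual and thermal similarity scores.
def fused_score(visual_score, thermal_score, glasses_detected=False, w_thermal=0.5):
    """Combine per-modality similarity scores into a single matching score."""
    if glasses_detected:
        w_thermal *= 0.5   # trust the occluded thermal channel less
    return (1.0 - w_thermal) * visual_score + w_thermal * thermal_score
```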

Relevance:

20.00%

Abstract:

The objective of this work is to recognize all the frontal faces of a character in the closed world of a movie or situation comedy, given a small number of query faces. This is challenging because faces in a feature-length film are relatively uncontrolled with a wide variability of scale, pose, illumination, and expressions, and also may be partially occluded. We develop a recognition method based on a cascade of processing steps that normalize for the effects of the changing imaging environment. In particular there are three areas of novelty: (i) we suppress the background surrounding the face, enabling the maximum area of the face to be retained for recognition rather than a subset; (ii) we include a pose refinement step to optimize the registration between the test image and face exemplar; and (iii) we use robust distance to a sub-space to allow for partial occlusion and expression change. The method is applied and evaluated on several feature length films. It is demonstrated that high recall rates (over 92%) can be achieved whilst maintaining good precision (over 93%).
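A minimal sketch of a robust distance-to-subspace measure in the spirit of contribution (iii): the reconstruction residual of a probe face against a PCA subspace, with large per-pixel residuals capped so that a partially occluded region cannot dominate the distance. The capping scheme and parameters are illustrative assumptions, not the paper's exact formulation.

```python
# Robust distance from a probe vector to a linear appearance subspace.
import numpy as np

def robust_subspace_distance(probe, mean, basis, cap=3.0):
    """probe: (D,) vector; mean: (D,) subspace mean; basis: (D, k) orthonormal columns."""
    centered = probe - mean
    reconstruction = basis @ (basis.T @ centered)    # projection onto the subspace
    residual = np.abs(centered - reconstruction)
    sigma = residual.std() + 1e-6
    residual = np.minimum(residual, cap * sigma)     # limit occluded pixels' influence
    return float(np.sum(residual ** 2))
```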