Largest Matching Areas for Illumination and Occlusion Robust Face Recognition


Author(s): McLaughlin, Niall; Ming, Ji; Crookes, Danny
Date(s)

29/02/2016

Abstract

In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis, and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison, and unseen data modeling. On the Extended Yale B and AR face databases for face identification, our method, using only a single training image per person, outperforms other methods using a single training image, and matches or exceeds methods which require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
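To make the "largest matching area" idea concrete, the sketch below grows a square window around a point on two aligned face images and keeps the largest window size for which the two patches still agree. This is only a minimal illustration under assumed choices (square windows, normalized correlation, a fixed threshold); the paper's actual feature extraction, comparison measure, and unseen-data modeling are not reproduced here, and the function name and parameters are hypothetical.

```python
import numpy as np

def largest_matching_area(img_a, img_b, row, col, max_half=16, min_corr=0.9):
    """Toy LMA illustration: grow a square window centred at (row, col)
    and return the largest half-width whose pixels in img_a and img_b
    remain strongly correlated. Threshold and window shape are
    assumptions for illustration, not the paper's criteria."""
    best = 0
    for half in range(1, max_half + 1):
        r0, r1 = row - half, row + half + 1
        c0, c1 = col - half, col + half + 1
        # Stop if the window would fall outside either image.
        if r0 < 0 or c0 < 0 or r1 > img_a.shape[0] or c1 > img_a.shape[1]:
            break
        a = img_a[r0:r1, c0:c1].ravel().astype(float)
        b = img_b[r0:r1, c0:c1].ravel().astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        corr = (a @ b) / denom if denom > 0 else 0.0
        if corr >= min_corr:
            best = half   # window still matches; try a larger one
        else:
            break         # match degrades (e.g. occlusion reached); stop growing
    return best
```

Under this toy scheme, points whose surrounding region is occluded in one image yield small matching areas and would contribute less to the overall comparison, which is the intuition behind de-emphasizing occluded regions.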

Format

application/pdf

Identifier

http://pure.qub.ac.uk/portal/en/publications/largest-matching-areas-for-illumination-and-occlusion-robust-face-recognition(8f499b9f-757a-47a8-98b5-3824afb08667).html

http://dx.doi.org/10.1109/TCYB.2016.2529300

http://pure.qub.ac.uk/ws/files/27152645/largest.pdf

Language(s)

eng

Rights

info:eu-repo/semantics/openAccess

Source

McLaughlin, N., Ming, J. & Crookes, D. 2016, 'Largest Matching Areas for Illumination and Occlusion Robust Face Recognition', IEEE Transactions on Cybernetics. DOI: 10.1109/TCYB.2016.2529300

Type

article