Dictionary Representation of Deep Features for Occlusion-Robust Face Recognition

Issue Date
2019-02-25
Author
Cen, Feng
Wang, Guanghui
Publisher
Institute of Electrical and Electronics Engineers
Type
Article
Article Version
Scholarly/refereed, publisher version
Rights
Copyright © 2019, IEEE.
Abstract
Deep learning has achieved exciting results in face recognition; however, accuracy remains unsatisfactory for occluded faces. To improve robustness to occlusion, this paper proposes a novel deep dictionary representation-based classification scheme, in which a convolutional neural network serves as the feature extractor and is followed by a dictionary that linearly codes the extracted deep features. The dictionary is composed of a gallery part, consisting of the deep features of the training samples, and an auxiliary part, consisting of mapping vectors acquired from subjects either inside or outside the training set and associated with the occlusion patterns of the test face samples. A squared Euclidean norm is used to regularize the coding coefficients. The proposed scheme is computationally efficient and robust to large contiguous occlusion. In addition, it is generic for both occluded and non-occluded face images and works with a single training sample per subject. Extensive experimental evaluations demonstrate the superior performance of the proposed approach over other state-of-the-art algorithms.
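The coding step described in the abstract can be illustrated with a minimal sketch. The Python listing below is a hypothetical illustration, not the authors' code: function and variable names (ddrc_classify, gallery, aux, lam) are my own. It codes a probe's deep feature over a dictionary whose columns are gallery deep features plus auxiliary occlusion vectors, applies a squared Euclidean (ridge) penalty on the coefficients as the abstract describes, and assigns the label of the subject with the smallest class-wise reconstruction residual.

import numpy as np

def ddrc_classify(y, gallery, gallery_labels, aux, lam=1e-2):
    """Sketch: l2-regularized dictionary coding of a deep feature,
    followed by minimum-residual classification.

    y              : (d,)   deep feature of the probe face (e.g. a CNN embedding)
    gallery        : (d, n) deep features of the training samples (gallery part)
    gallery_labels : (n,)   subject label of each gallery column
    aux            : (d, m) auxiliary vectors modelling occlusion patterns
    lam            : weight of the squared Euclidean penalty on the coefficients
    """
    D = np.hstack([gallery, aux])  # full dictionary: gallery + auxiliary parts

    # Ridge-regularized coding: x* = argmin ||y - D x||^2 + lam * ||x||^2
    # has the closed-form solution (D^T D + lam I)^{-1} D^T y.
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

    x_gal, x_aux = x[:gallery.shape[1]], x[gallery.shape[1]:]
    aux_part = aux @ x_aux  # occlusion component, shared across all classes

    best_label, best_res = None, np.inf
    for label in np.unique(gallery_labels):
        mask = gallery_labels == label
        # Reconstruct the probe from this subject's gallery atoms plus the auxiliary part.
        recon = gallery[:, mask] @ x_gal[mask] + aux_part
        res = np.linalg.norm(y - recon)
        if res < best_res:
            best_label, best_res = label, res
    return best_label

Because the squared Euclidean penalty admits a closed-form solution, the coding step reduces to a single linear solve, which is consistent with the computational efficiency claimed in the abstract; an l1-based sparse-coding variant would instead require an iterative solver.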
Citation
F. Cen and G. Wang, "Dictionary Representation of Deep Features for Occlusion-Robust Face Recognition," in IEEE Access, vol. 7, pp. 26595-26605, 2019. doi: 10.1109/ACCESS.2019.2901376