
Face Authentication from Masked Face Images Using Deep Learning on Periocular Biometrics

10th Annual COE Graduate Poster Presentation Competition

Student(s): Jeffrey J. Hernandez V. (MS), Rodney Dejournett (UG)

Advisor(s): Dr. Xiaohong Yuan

Cross-Disciplinary Research Area: xx

Introduction


  • Authentication is a crucial component of any security-based computing system, as it ensures that only legitimate users can access system resources
  • Implementing an authentication system can be difficult, as it requires substantial setup and there are many ways to authenticate users
  • Biometrics refers to characteristics of the human body being used, in this case, to identify or verify a person
  • A biometric-based authentication system consists of two phases: feature extraction and verification
  • During the feature extraction phase, a set of biometric features is extracted from the collected image dataset and becomes a template to be used by the system
  • In the verification phase, the biometric feature data is applied in the algorithm to verify/authenticate the label against the legitimate person
  • A large image dataset is preferable when developing any biometric-based authentication system
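The two phases above can be sketched as follows. This is a minimal illustration only: the function names (`extract_features`, `verify`), the flatten-and-normalize "extractor", and the similarity threshold are our own stand-ins, not the poster's method; a real system would extract features with a CNN.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    # Placeholder feature extractor: a real system would use a CNN.
    # Here we simply flatten the image and normalize it to a unit vector.
    v = image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def verify(template: np.ndarray, probe: np.ndarray, threshold: float = 0.9) -> bool:
    # Verification phase: cosine similarity between the enrolled
    # template and the probe features, compared against a threshold.
    similarity = float(np.dot(template, probe))
    return similarity >= threshold

# Feature extraction phase: build a template from a (toy) enrolled image.
enrolled = np.arange(16).reshape(4, 4)
template = extract_features(enrolled)

# The same image verifies; a clearly different one does not.
print(verify(template, extract_features(enrolled)))        # True
print(verify(template, extract_features(enrolled[::-1])))  # False
```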

Objectives

  • To replicate facial recognition focused only on the periocular region of the face, using a binary image classifier CNN model
  • To recognize the subject even when the individual is wearing a mask over their face
  • To detect whether the subject is part of the database of authenticated people, giving a pass or fail to the tested image(s)
Related Work/Reference

  • Chatterjee, P. (2020). Deep Convolutional Neural Networks for the Iris and Face Based Presentation Attack Mitigation (pp. 1–101). Greensboro, NC: The Graduate School of NCAT.
  • Huang, B. (2020, February 13). Real-World Masked Face Dataset (RMFD). GitHub. Retrieved from https://github.com/X-zhangyang/Real-World-Masked-Face-Dataset#real-world-masked-face-datasetrmfd
  • Rosebrock, A. (2017, April 3). Facial Landmarks with dlib, OpenCV, and Python. PyImageSearch. Retrieved from https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/
  • Computational Intelligence and Photography Lab, Yonsei University. (2019, January 14). Real and Fake Face Detection. Kaggle. Retrieved from https://www.kaggle.com/ciplab/real-and-fake-face-detection
  • Brownlee, J. (2019, June 3). How to Perform Face Detection with Deep Learning. Machine Learning Mastery. Retrieved from https://machinelearningmastery.com/how-to-perform-face-detection-with-classical-and-deep-learning-methods-in-python-with-keras/
  • Phan, B. (2020, May 30). 10 Minutes to Building a Fully-Connected Binary Image Classifier in TensorFlow. Towards Data Science. Retrieved December 13, 2021, from https://towardsdatascience.com/10-minutes-to-building-a-fully-connected-binary-image-classifier-in-tensorflow-d88062e1247f
Results

  • Training the model ran for 54 minutes and resulted in 39% loss and 81% accuracy for training, and 58% loss and 70% accuracy for validation
  • The predictions from the confusion matrix were 333 true positives, 103 false positives, 314 false negatives, and 125 true negatives
  • From these results, the accuracy of the CNN binary image classifier comes out to around 53%
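The accuracy implied by the confusion-matrix counts above can be checked directly (accuracy = correct predictions divided by all predictions):

```python
# Confusion-matrix counts reported above.
tp, fp, fn, tn = 333, 103, 314, 125

total = tp + fp + fn + tn            # 875 test predictions in all
accuracy = (tp + tn) / total         # correct / total

print(f"accuracy = {accuracy:.3f}")  # accuracy = 0.523, i.e. the reported ~53%
```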
Conclusion

  • The project continues to develop, as achieving high accuracy in the authentication of masked faces remains the focus

Future Work:

  • A baseline facial recognition model using a binary image classifier has been considered, to compare its results against those of our own model
    • This will allow us to determine what would work better for improving results: updating the periocular region dataset or reworking the CNN model
  • Switching to full, uncovered faces instead of the periocular region of masked faces, to see whether the CNN model’s facial recognition would operate better with a full-face dataset than with a dataset containing only part of someone’s face

Facial Extraction and Database

  • Using a CNN face detection model and shape predictor from dlib, a program detects facial landmarks within the masked-image dataset and extracts them into a new folder
  • To make sure the landmarks were accurate, all 68 facial landmarks were detected (or predicted as closely as the program could manage) in each face
  • The project requires only a small region of the face: the periocular region, including both eyes and the eyebrow region (facial landmarks 18–30)
  • The CNN model that was developed and used came from integrating Keras image classification, following a base model of a classification network and its many layers
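Once the 68 landmarks are available, cropping the periocular region reduces to a bounding box over the relevant points. The sketch below is our own illustration, not the poster's code: it assumes the landmarks arrive as a (68, 2) array (as a dlib shape can be converted to), takes the poster's landmark range 18–30 at face value (indices 17–29 in 0-based numbering), and uses an arbitrary padding of 5 pixels.

```python
import numpy as np

def periocular_bbox(landmarks: np.ndarray, pad: int = 5):
    # `landmarks` is a (68, 2) array of (x, y) points from a 68-point
    # shape predictor. Landmarks 18-30 in the usual 1-based numbering
    # are indices 17-29 here (0-based).
    region = landmarks[17:30]
    x_min, y_min = region.min(axis=0) - pad
    x_max, y_max = region.max(axis=0) + pad
    return int(x_min), int(y_min), int(x_max), int(y_max)

# Toy landmarks: 68 points along a diagonal, just to exercise the function.
pts = np.stack([np.arange(68), np.arange(68) * 2], axis=1)
x0, y0, x1, y1 = periocular_bbox(pts)
print(x0, y0, x1, y1)  # 12 29 34 63
```

The extracted crop would then simply be `image[y0:y1, x0:x1]`.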

Augmentation and Datasets

  • Image augmentation is necessary within this project, since it provides the CNN model with training exposure to many different variations of the subject images across the training and testing sets
  • The large, combined dataset grew to 2,005 subjects, with 10 augmented images per subject, resulting in a total of 20,045 images
  • Once augmentation was completed, the dataset needed to be split into a training set and a testing set to be used by our CNN model
  • To achieve high accuracy with our model, an 80/20 split was chosen, with 80% of the images going into the training set and 20% into the testing set
  • Within each set, the images are split 50-50 into authentic and unauthentic subsets
  • After augmentation, the training set had 16,035 images (8,018 authentic, 8,017 unauthentic) and the testing set had 4,010 images (2,010 authentic, 2,000 unauthentic)
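The 80/20, class-balanced split described above can be sketched as follows. This is a minimal illustration under our own assumptions: the function name `balanced_split`, the fixed seed, and the synthetic filename lists are stand-ins for the real augmented-image paths.

```python
import random

def balanced_split(authentic, unauthentic, train_frac=0.8, seed=42):
    # Shuffle each class separately, then take the first 80% of each
    # class for training, so both splits stay roughly 50-50 per class.
    rng = random.Random(seed)
    train, test = [], []
    for items in (list(authentic), list(unauthentic)):
        rng.shuffle(items)
        cut = int(len(items) * train_frac)
        train.extend(items[:cut])
        test.extend(items[cut:])
    return train, test

# Synthetic stand-ins for augmented image filenames.
auth = [f"auth_{i}.jpg" for i in range(100)]
unauth = [f"unauth_{i}.jpg" for i in range(100)]
train, test = balanced_split(auth, unauth)
print(len(train), len(test))  # 160 40
```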

  • Prepared the data for deep learning to identify whether an image matches an authenticated face within the database, all using a CNN binary image classifier
