FRR-Net: A Real-Time Blind Face Restoration and Relighting Network
Authors: Samira Pouyanfar, Sunando Sengupta, Mahmoud Mohammadi, Ebey Abraham, Brett Bloomquist, Lukas Dauterman, Anjali Parikh, Steve Lim, and Eric Sommerlade
Workshop: NTIRE 2023 - New Trends in Image Restoration and Enhancement Workshop and Associated Challenges
Motivations
FRR-Net contributions:
Figure: example input images captured under varying conditions with a custom webcam (e.g., panel (e): Low Light + Blur/Noise) and the corresponding enhanced versions obtained by FRR-Net.
Degradation Model
FRR-Net recovers face quality in a variety of conditions
Figure: input / target pairs for four degradation types: Low Light, Blurry, Noise, and Illumination.
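The poster does not spell out the degradation pipeline used to synthesize training pairs, but the four degradation types above suggest its ingredients. Below is a minimal sketch, assuming a standard synthetic pipeline of low-light darkening, Gaussian blur, additive noise, and a per-channel color cast; the function name `degrade_face` and all parameter ranges are illustrative assumptions, not FRR-Net's actual settings.

```python
import numpy as np
import cv2

rng = np.random.default_rng(0)

def degrade_face(img):
    """Apply a random mix of the degradations shown above to a clean
    face crop (float image in [0, 1], H x W x 3).
    Hypothetical parameter ranges; not FRR-Net's training settings."""
    out = img.astype(np.float32)

    # Low light: darken with a random gamma > 1 and a brightness scale < 1.
    if rng.random() < 0.5:
        out = np.power(out, float(rng.uniform(1.5, 3.0))) * float(rng.uniform(0.3, 0.7))

    # Blur: Gaussian blur with a random sigma (kernel size derived from sigma).
    if rng.random() < 0.5:
        out = cv2.GaussianBlur(out, (0, 0), float(rng.uniform(1.0, 3.0)))

    # Noise: additive Gaussian sensor noise.
    if rng.random() < 0.5:
        out = out + rng.normal(0.0, float(rng.uniform(0.01, 0.05)), out.shape).astype(np.float32)

    # Illumination / color cast: random per-channel gains (e.g., a bluish screen glow).
    if rng.random() < 0.5:
        out = out * rng.uniform(0.8, 1.2, size=3).astype(np.float32)

    return np.clip(out, 0.0, 1.0)
```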
Model Framework
Model compression module
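How the compression module works is not detailed on the poster; given the Width-Depth versions benchmarked on the NPU under Experiments, one plausible knob is scaling the network's channel width and block depth. The sketch below only illustrates that width/depth parameterization on a generic residual convolutional backbone in PyTorch; the block structure, names, and multipliers are assumptions, not FRR-Net's architecture.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Generic residual conv block (a stand-in for the real FRR-Net blocks)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.body(x)

def build_backbone(width_mult=1.0, depth_mult=1.0, base_channels=64, base_blocks=8):
    """Scale channel width and block depth with independent multipliers,
    one way the Width-Depth versions in the NPU table could be produced."""
    channels = max(8, int(round(base_channels * width_mult)))
    blocks = max(1, int(round(base_blocks * depth_mult)))
    layers = [nn.Conv2d(3, channels, 3, padding=1)]
    layers += [ConvBlock(channels) for _ in range(blocks)]
    layers += [nn.Conv2d(channels, 3, 3, padding=1)]
    return nn.Sequential(*layers)

# Example: a 0.5x-width, 0.75x-depth variant for a tighter latency budget.
model = build_backbone(width_mult=0.5, depth_mult=0.75)
out = model(torch.randn(1, 3, 256, 256))  # -> torch.Size([1, 3, 256, 256])
```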
ELT layout
Experiments
Comparison results on StyleGAN validation data
Inference time and computational cost comparison
Comparison results on Celeb-A validation data
Inference time on NPU for various Width-Depth versions
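As context for the inference-time comparisons above, the snippet below shows one common way to measure average per-frame latency in PyTorch; it is illustrative only and not the protocol behind the reported NPU numbers.

```python
import time
import torch

@torch.no_grad()
def average_latency_ms(model, input_size=(1, 3, 256, 256),
                       warmup=10, iters=100, device="cpu"):
    """Average per-frame latency in milliseconds (illustrative protocol only)."""
    model = model.eval().to(device)
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):          # warm-up to stabilize caches/clocks
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # make sure all GPU work has finished
    return (time.perf_counter() - start) / iters * 1000.0
```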
Figure: qualitative comparison of Input, Ground Truth, FRR-Net (ours), GFPGAN (CVPR 2021), VQFR (ECCV 2022), and PANINI (AAAI 2022).
FRR-Net improves both low-light and noise/blur degradations.
Our approach performs on par with or better than other SOTA approaches.
FRR-Net Outputs on Real Samples
FRR-Net can remove the unnatural color cast on users' faces caused by screen content or other light sources.
Figure: four input/output pairs on real samples.
Video Demo: Low Light, Far from Camera (input vs. FRR-Net output)
Video Demo: Synthetic Noise + Light + Blur (input vs. FRR-Net output)
Video Demo: All Distortions Combined (input vs. FRR-Net output)
Limitations and Future Work