TUE-AM-335
Object Detection
Credit: Alberto Rizzoli
R-CNN, Girshick et al., CVPR 2014
Fast R-CNN, Girshick, ICCV 2015
Recent Object Detectors
Faster R-CNN, Ren et al., NeurIPS 2015
YOLO, Redmon et al., CVPR 2016
Detectron, Girshick et al., 2018
https://github.com/facebookresearch/Detectron
Domain Shift
“What you saw is not what you get”
Training data: “what you saw”
Deployment: “what you get”
Domain Shifts
Source domain: Labeled samples (Cityscapes, Synthetic, Visible)
Target domain: Unlabeled samples (FoggyCityscapes, Real-world, Thermal)
How to tackle a domain shift?
Include annotated samples for the target domain:
Labor-intensive
Time-consuming
Expensive
Alternative: Domain Adaptation
Solutions
Increasing model generalization capability and robustness
Unsupervised Domain Adaptation (UDA)
UDA
[Figure: source- and target-domain feature distributions (Class A vs. Class B) before and after alignment]
Setting: labeled source-domain data and unlabeled target-domain data are both available during adaptation.
UDA Drawbacks: requires continued access to the source data during adaptation, which privacy, storage, or licensing constraints can rule out.
Source-Free Domain Adaptation (SFDA)
Setting: only a source-trained model and unlabeled target-domain data are available; the source data itself cannot be accessed.
Challenges: with no source supervision, adaptation must rely on the source model’s noisy predictions on target data.
[Figure: source-model predictions vs. ground truth on target-domain images]
Knowledge Distillation from the Target Domain into the Source Model
To effectively distill target-domain knowledge into the source-trained model, we employ a student-teacher framework.
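In a typical student-teacher setup of this kind (e.g. mean teacher), the teacher is an exponential-moving-average (EMA) copy of the student. A minimal sketch, where the momentum value is an illustrative assumption:

```python
import copy

import torch


def update_teacher_ema(student: torch.nn.Module,
                       teacher: torch.nn.Module,
                       momentum: float = 0.999) -> None:
    """EMA update: teacher <- momentum * teacher + (1 - momentum) * student."""
    with torch.no_grad():
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)


# Usage sketch: the teacher starts as a copy of the (source-trained) student
# and is updated after every student optimization step.
student = torch.nn.Linear(4, 2)
teacher = copy.deepcopy(student)
update_teacher_ema(student, teacher)
```

The slowly moving teacher produces more stable pseudo-labels on target images than the rapidly updated student would.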
Enhancing Target Domain Feature Representations
Motivation and Key Idea
Proposed Network
Instance Relation Graph Network
Loss functions
Pseudo-label Loss
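A pseudo-label loss is typically a standard detection loss computed against confident teacher predictions. A minimal sketch of the confidence filtering step; the threshold value and helper name are illustrative assumptions:

```python
def filter_pseudo_labels(boxes, scores, labels, conf_thresh=0.9):
    """Keep only confident teacher detections as pseudo-labels for the student.

    boxes:  list of [x1, y1, x2, y2]
    scores: list of detection confidences in [0, 1]
    labels: list of predicted class ids
    """
    keep = [i for i, s in enumerate(scores) if s >= conf_thresh]
    return ([boxes[i] for i in keep],
            [scores[i] for i in keep],
            [labels[i] for i in keep])
```

The student's classification and regression losses on target images are then computed against these filtered detections as if they were ground truth.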
Graph Contrastive Loss
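A graph contrastive loss can be sketched as an InfoNCE-style objective over proposal features, where pairs that the relation graph marks as related act as positives. The adjacency input and temperature value below are illustrative assumptions, not the exact formulation:

```python
import torch
import torch.nn.functional as F


def graph_contrastive_loss(features: torch.Tensor,
                           adjacency: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over N proposal features (N x D).

    `adjacency` is a hypothetical binary N x N matrix: 1 where the
    relation graph treats a proposal pair as a positive pair.
    """
    z = F.normalize(features, dim=1)               # unit-norm embeddings
    sim = z @ z.t() / temperature                  # pairwise similarities
    n = sim.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    sim = sim.masked_fill(eye, -1e9)               # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = adjacency.float().masked_fill(eye, 0.0)  # positive-pair mask
    # mean negative log-likelihood of positive pairs, per anchor
    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return loss.mean()
```

Pulling graph-related proposals together (and pushing unrelated ones apart) is one way to enhance target-domain feature representations without any labels.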
Overall Loss
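A plausible form of the overall objective, combining the two terms above (the weighting coefficient is an assumption; the exact balance is not stated here):

```latex
\mathcal{L}_{\text{total}}
  = \mathcal{L}_{\text{pl}}
  + \lambda_{\text{gc}} \, \mathcal{L}_{\text{gc}}
```

where $\mathcal{L}_{\text{pl}}$ is the pseudo-label detection loss, $\mathcal{L}_{\text{gc}}$ is the graph contrastive loss, and $\lambda_{\text{gc}}$ trades off the two.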
Quantitative Results
IRG Proposal Relation
Qualitative Results
[Figure: qualitative detections, student-teacher baseline vs. proposed method]
Acknowledgements
Vision and Image Understanding (VIU) Lab @JHU
Vibashan VS
Poojan Oza
Vishal M. Patel