CSE 5539: Domain Adaptation
Today
Recap of challenges
Problems:
Mismatch between training/test data
(Figure: example domains — ImageNet, product images, web images)
Data collection bias
Credits: Rogerio Feris, ICCV-2019 slides
Recap of challenges
Problems:
Potential solutions:
Transfer learning
(Figure: transferring from ImageNet to new classes — Anchor, Weather, Reporter)
Transfer learning
(Figure: related cat species — leopard cat, leopard, cheetah, bobcat, clouded leopard, cat, wildcat)
Transfer learning
(Figure: shared attributes across classes)
leopard cat: Felidae, small sized, black spot, ……
cat: Felidae, small sized, stripe, ……
leopard: Felidae, large sized, black spot, ……
Domain adaptation
[You et al., 2019]
Domain adaptation
Data reweighting
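Data reweighting is often done with a density-ratio trick: train a classifier to tell source from target, then weight each source sample by p_T(x)/p_S(x). A minimal NumPy sketch (the function and argument names are illustrative, not from the slides):

```python
import numpy as np

def importance_weights(p_target_given_x, n_src, n_tgt):
    """Density-ratio reweighting via a domain classifier: if p is the
    probability a trained classifier assigns to 'x comes from the target
    domain', then p_T(x)/p_S(x) is proportional to p / (1 - p), corrected
    by the domain prior ratio n_src / n_tgt."""
    p = np.clip(p_target_given_x, 1e-6, 1.0 - 1e-6)  # avoid division by zero
    return (p / (1.0 - p)) * (n_src / n_tgt)
```

Source samples the domain classifier cannot distinguish from target (p near 0.5) get weight near 1; clearly source-only samples get down-weighted.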
Domain adaptation
Feature transform
Domain adaptation
[Saenko et al., 2019]
Key words
Questions?
Today
Basic setup of domain adaptation
Credits: Hoffman 2019 ICCV tutorial
Domain adaptation (DA)
Inductive
or
Transductive
Worth thinking about
Questions?
Today
Theoretical perspective
Ben-David et al., 2010
Hypothesis class, problem difficulty
Error in the source domain
Discrepancy in distribution
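The three quantities above fit together in the Ben-David et al. (2010) generalization bound; written out (notation follows the paper):

```latex
\epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  \;+\; \lambda,
\qquad
\lambda = \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big]
```

Target error is bounded by source error, plus the distribution discrepancy, plus the difficulty term λ (the best joint error achievable in the hypothesis class).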
Today
Algorithm
Algorithm
Quantifying domain mismatch
Maximum Mean Discrepancy (MMD)
MMD: Fortet and Mourier, 1953
Let’s assume we know
We want to minimize MMD
Maximum Mean Discrepancy (MMD)
Re-weighting to “minimize” MMD
Transforms to “minimize” MMD
Why?
Recap: MMD
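The empirical (biased) estimate of squared MMD with an RBF kernel can be sketched in a few lines of NumPy (function names and the gamma default are illustrative):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2(X_src, X_tgt, gamma=1.0):
    """Biased empirical estimate of squared MMD between source and target
    samples: E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)]."""
    k_ss = rbf_kernel(X_src, X_src, gamma).mean()
    k_tt = rbf_kernel(X_tgt, X_tgt, gamma).mean()
    k_st = rbf_kernel(X_src, X_tgt, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st
```

Identical distributions give an estimate near zero; the further apart the two samples, the larger the value — which is why minimizing MMD over a re-weighting or a feature transform pulls the domains together.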
Questions?
MMD: before DNN
MMD: DNN transform
Can we train with domain loss first then classification loss?
(Deep) CORAL: Correlation Alignment
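CORAL aligns second-order statistics: it penalizes the distance between source and target feature covariances. A minimal sketch, assuming feature matrices with samples in rows (scaling follows the 1/(4d²) convention in the CORAL papers):

```python
import numpy as np

def coral_loss(X_src, X_tgt):
    """CORAL loss: squared Frobenius distance between the source and target
    feature covariance matrices, scaled by 1/(4 d^2)."""
    d = X_src.shape[1]
    cov_src = np.cov(X_src, rowvar=False)  # d x d source covariance
    cov_tgt = np.cov(X_tgt, rowvar=False)  # d x d target covariance
    return np.sum((cov_src - cov_tgt) ** 2) / (4.0 * d * d)
```

In Deep CORAL this quantity is computed on a network's feature layer and added to the classification loss, so the learned features have matching covariances across domains.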
Geodesic flow kernel (GFK)
Gong et al., CVPR 2012
Kernel machines
Questions?
Today
Domain mismatch
Adversarial learning
Binary classification: SD: +1, TD: -1
The devil is in the details!
Be aware of trivial solutions
Recap: Adversarial learning
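The core computation behind domain-adversarial training can be sketched with a logistic domain classifier (here labeling source 1 and target 0 rather than the slides' +1/-1; names and shapes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_loss_and_grads(Z_src, Z_tgt, w):
    """Logistic domain classifier on features Z with weights w.
    Returns the domain loss, its gradient w.r.t. w (the discriminator
    *descends* this), and its gradient w.r.t. the features (with a
    gradient-reversal layer, the feature extractor is updated with the
    *negated* feature gradient, making domains indistinguishable)."""
    Z = np.vstack([Z_src, Z_tgt])
    y = np.concatenate([np.ones(len(Z_src)), np.zeros(len(Z_tgt))])
    p = sigmoid(Z @ w)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad_w = Z.T @ (p - y) / len(y)            # discriminator update direction
    grad_Z = np.outer(p - y, w) / len(y)       # feature extractor uses -grad_Z
    return loss, grad_w, grad_Z
```

The two gradients with opposite signs are the whole adversarial game: the discriminator gets better at telling domains apart while the features are pushed to defeat it.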
Additional objectives
[Li et al., CVPR 2018]
Challenges
Questions?
Today
Problems
[Saito et al., CVPR 2018]
Alignment is not aware of the task-specific boundary
[Saito et al., CVPR 2018] Credits: Saenko 2019 ICCV tutorial
Solution: change the alignment objective
[Saito et al., CVPR 2018]
Maximum classifier discrepancy (MCD)
[Saito et al., CVPR 2018]
Maximum classifier discrepancy (MCD)
[Saito et al., CVPR 2018]
Maximum classifier discrepancy (MCD)
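MCD's alignment objective is the disagreement between two classifiers on target samples; a minimal sketch of that discrepancy measure (mean L1 distance between predicted class probabilities, as in Saito et al., 2018):

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def classifier_discrepancy(logits1, logits2):
    """Mean L1 distance between the class-probability outputs of two
    classifiers on the same (target) samples. MCD alternates:
    (A) maximize this w.r.t. the two classifiers, to find target samples
        near the decision boundary;
    (B) minimize it w.r.t. the feature extractor, to move those samples
        away from the boundary."""
    p1, p2 = softmax(logits1), softmax(logits2)
    return np.mean(np.abs(p1 - p2))
```

Note how this ties back to the Ben-David bound: maximizing over the two classifiers approximates the H∆H-divergence term, which the feature extractor then minimizes.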
Recap: Theoretical perspective
[Ben-David et al., 2010]
Hypothesis class, problem difficulty
Training loss
Discrepancy in distribution
Maximum classifier discrepancy (MCD)
[Saito et al., CVPR 2018]
Experimental results
Questions?
Today
[Zhu et al., ICCV 2017]
maximize w.r.t. D_Y
minimize w.r.t. G
To encourage meaningful transformation!
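The cycle-consistency term that enforces a meaningful transformation can be sketched as an L1 reconstruction loss (Zhu et al., 2017); G and F below are placeholder callables standing in for the two generators:

```python
import numpy as np

def cycle_consistency_loss(x_src, G, F):
    """CycleGAN-style cycle loss ||F(G(x)) - x||_1: mapping a source sample
    to the target domain with G and back with F should recover the
    original, which rules out degenerate mappings the adversarial loss
    alone would permit."""
    return np.mean(np.abs(F(G(x_src)) - x_src))
```

If F inverts G the loss is zero; any mapping that discards information about x is penalized.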
Questions?
Today
Domain adaptation (DA)
Self-training
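The selection step at the heart of self-training can be sketched as confidence-thresholded pseudo-labeling (the 0.9 threshold is a common but arbitrary choice, not a value from the slides):

```python
import numpy as np

def select_pseudo_labels(target_probs, threshold=0.9):
    """One self-training step: keep target samples whose maximum predicted
    class probability exceeds a confidence threshold, and use the argmax
    class as a pseudo-label for the next round of training."""
    confidence = target_probs.max(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), target_probs.argmax(axis=1)[keep]
```

The model is then retrained on source labels plus these pseudo-labeled target samples, and the loop repeats; errors in the pseudo-labels can accumulate, which is why the threshold matters.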
Results
Results
Why does pre-training work?
Questions?
Recap: Domain adaptation
Components:
Algorithms:
Deep MMD
Domain adversarial learning
Maximum classifier discrepancy (MCD)
[Saito et al., CVPR 2018]
Maximum classifier discrepancy (MCD)
[Saito et al., CVPR 2018]
Cycle consistency
[Zhu et al., ICCV 2017]
Can we mess up the correspondence between SD and TD?
Semantic correspondence
Semantic correspondence
Semantic correspondence
Machines may find unreasonable shortcuts to minimize the training loss!
[Tzeng et al., 2017]
Today
Domain adaptation for semantic segmentation