Topics (light green -> empirical; light orange -> theoretical). Under each topic, the Papers for Review are required reading for everyone, and non-presenting students should choose one of them to review; the Additional Papers for Presentation are suggested reading for the presenting team.
Adversarial Examples & Robustness Evaluation
Papers for Review:
- Towards Deep Learning Models Resistant to Adversarial Attacks
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Additional Papers for Presentation:
- On Adaptive Attacks to Adversarial Example Defenses
- Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks
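The papers in this topic revolve around one algorithm: Madry et al. approximate the inner maximization of robust training with projected gradient descent (PGD), and the other three papers are about evaluating defenses against exactly this kind of attack. A minimal PyTorch sketch of an l-inf PGD attack, assuming inputs scaled to [0, 1] and a generic classifier `model` (both assumptions for illustration, not details fixed by the papers):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-inf PGD (Madry et al. style): repeated signed-gradient ascent on the
    loss, projecting back into the eps-ball around x after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image
    return x_adv.detach()
```

Adversarial training in the Madry et al. sense simply trains on `pgd_attack` outputs; the evaluation papers (Obfuscated Gradients, Adaptive Attacks, AutoAttack) warn that a defense must be tested with attacks adapted to it, not just this vanilla loop.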
Robustness Certification Methods
Papers for Review:
- Provable Defenses against Adversarial Examples via the Convex Outer Adversarial Polytope
- Certified Adversarial Robustness via Randomized Smoothing
Additional Papers for Presentation:
- Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds
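Randomized smoothing (Cohen et al.) certifies the smoothed classifier g(x) = argmax_c P(f(x + delta) = c) with delta ~ N(0, sigma^2 I): if the top class has probability at least p_A under noise, g is provably constant within l2 radius sigma * Phi^{-1}(p_A). A Monte Carlo sketch, using a Hoeffding lower bound and a single sampling pass in place of the paper's Clopper-Pearson interval and separate selection pass (simplifications, not the paper's exact procedure):

```python
import math
import torch
from scipy.stats import norm

def certify_smoothed(model, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Certify a single image x under Gaussian smoothing (Cohen et al. style)."""
    with torch.no_grad():
        noise = torch.randn(n, *x.shape, device=x.device) * sigma
        preds = model(x.unsqueeze(0) + noise).argmax(dim=1)   # n noisy votes
    counts = torch.bincount(preds, minlength=num_classes)
    top = counts.argmax().item()
    p_hat = counts[top].item() / n
    p_lower = p_hat - math.sqrt(math.log(1 / alpha) / (2 * n))  # Hoeffding LCB
    if p_lower <= 0.5:
        return None, 0.0                       # abstain: no certificate
    radius = sigma * norm.ppf(p_lower)         # certified l2 radius
    return top, radius
```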
Robust Overfitting & Mitigation Methods
Papers for Review:
- Overfitting in Adversarially Robust Deep Learning
- Adversarial Weight Perturbation Helps Robust Generalization
Additional Papers for Presentation:
- Understanding Robust Overfitting of Adversarial Training and Beyond
- Robust Overfitting May Be Mitigated by Properly Learned Smoothening
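The practical takeaway from Rice et al. (Overfitting in Adversarially Robust Deep Learning) is that robust test accuracy peaks early and then decays, so checkpoint selection on robust validation accuracy is the baseline mitigation that the other papers try to beat. A sketch, where `train_epoch`, `model`, the loaders, and `num_epochs` are assumed stubs and `pgd_attack` is the sketch above:

```python
# Early stopping on robust validation accuracy (Rice et al.'s baseline).
best_robust_acc, best_state = 0.0, None
for epoch in range(num_epochs):
    train_epoch(model, train_loader)             # one adversarial-training epoch
    correct = total = 0
    for x, y in val_loader:
        x_adv = pgd_attack(model, x, y)          # robust accuracy on validation set
        correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    robust_acc = correct / total
    if robust_acc > best_robust_acc:             # keep the best checkpoint,
        best_robust_acc = robust_acc             # not the last one
        best_state = {k: v.clone() for k, v in model.state_dict().items()}
model.load_state_dict(best_state)
```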
Robust Generalization & Semi-Supervised Methods
Papers for Review:
- Unlabeled Data Improves Adversarial Robustness
- Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Additional Papers for Presentation:
- Adversarially Robust Generalization Requires More Data
- Fixing Data Augmentation to Improve Adversarial Robustness
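Carmon et al. (Unlabeled Data Improves Adversarial Robustness) use robust self-training: a standardly trained model pseudo-labels unlabeled data, and adversarial training then runs on the union of real and pseudo-labeled examples. A sketch, with `standard_model`, `unlabeled_loader`, and `labeled_set` as assumed stubs:

```python
import torch

# Robust self-training sketch: hard pseudo-labels from a standard model.
pseudo_x, pseudo_y = [], []
with torch.no_grad():
    for x in unlabeled_loader:                          # batches of unlabeled images
        pseudo_x.append(x)
        pseudo_y.append(standard_model(x).argmax(dim=1))
pseudo_set = torch.utils.data.TensorDataset(torch.cat(pseudo_x), torch.cat(pseudo_y))
combined = torch.utils.data.ConcatDataset([labeled_set, pseudo_set])
# ...adversarial training (e.g., with pgd_attack above) then runs on `combined`.
```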
Intrinsic Limits on Adversarial Robustness
Papers for Review:
- Theoretically Principled Trade-off between Robustness and Accuracy
- Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness
Additional Papers for Presentation:
- Understanding and Mitigating the Tradeoff Between Robustness and Accuracy
- Adversarial Examples Are Not Bugs, They Are Features
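The first review paper is the TRADES paper: it formalizes the robustness-accuracy trade-off and derives a surrogate loss that separates natural error from boundary error. A sketch of that loss, assuming `x_adv` was generated to maximize the same KL term (the paper's inner loop, omitted here):

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, x_adv, beta=6.0):
    """TRADES objective (Zhang et al.): natural cross-entropy plus a KL term
    pushing predictions on x_adv toward predictions on x; beta trades
    natural accuracy against robustness."""
    logits = model(x)
    natural = F.cross_entropy(logits, y)
    robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),   # KL(f(x) || f(x_adv))
                      F.softmax(logits, dim=1),
                      reduction="batchmean")
    return natural + beta * robust
```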
Targeted Poisoning Attacks & Certification
Papers for Review:
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks
Additional Papers for Presentation:
- On Optimal Learning Under Targeted Data Poisoning
- Lethal Dose Conjecture on Data Poisoning
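Deep Partition Aggregation trains one base classifier per disjoint training partition and predicts by plurality vote; since each poisoned sample lands in exactly one partition, it can flip at most one vote. A sketch of the resulting pointwise certificate, omitting the paper's tie-breaking refinement:

```python
import numpy as np

def dpa_certificate(votes):
    """DPA pointwise certificate: with one base model per training partition,
    each poisoned sample flips at most one vote, so the plurality prediction
    tolerates floor(margin / 2) poisoned training samples."""
    counts = np.bincount(votes, minlength=2)
    top, runner_up = np.sort(counts)[-2:][::-1]
    return counts.argmax(), (top - runner_up) // 2

# e.g. votes from 50 base classifiers:
# dpa_certificate(np.array([0]*30 + [1]*15 + [2]*5))
# -> prediction 0, certified against up to 7 poisoned samples
```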
Indiscriminate Poisoning & Backdoor Attacks
Papers for Review:
- Certified Defenses for Data Poisoning Attacks
- How To Backdoor Federated Learning
Additional Papers for Presentation:
- Subpopulation Data Poisoning Attacks
- Indiscriminate Data Poisoning Attacks on Neural Networks
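For intuition about the backdoor papers, the classic trigger construction (BadNets-style, a stand-in for the more elaborate attacks listed above) stamps a fixed patch and relabels to a target class:

```python
import torch

def add_backdoor(x, y, target_class=0, trigger_value=1.0):
    """BadNets-style poisoning sketch: stamp a small patch in the image corner
    and relabel to the attacker's target class. Poisoning a fraction of the
    training set this way makes the trigger act as a test-time switch."""
    x = x.clone()
    x[..., -4:, -4:] = trigger_value          # 4x4 patch, bottom-right corner
    y = torch.full_like(y, target_class)
    return x, y
```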
Adversarial ML Beyond Image Classification
Papers for Review:
- TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
- AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning
Additional Papers for Presentation:
- Adversarial Attacks on Neural Networks for Graph Data
- Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots
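A building block shared by word-level NLP attacks of the kind TextAttack packages up is greedy word-importance ranking, TextFooler-style: score each word by how much deleting it lowers the true-class probability, then attack words in that order. A library-free sketch, with `predict_proba` as an assumed stub mapping a string to class probabilities:

```python
def rank_word_importance(text, label, predict_proba):
    """Score each word by the drop in true-class probability when it is
    deleted; greedy substitution attacks then visit words in this order."""
    words = text.split()
    base = predict_proba(text)[label]
    scores = []
    for i in range(len(words)):
        ablated = " ".join(words[:i] + words[i + 1:])
        scores.append((base - predict_proba(ablated)[label], words[i], i))
    return sorted(scores, reverse=True)       # most important words first
```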
Black-box Adversarial Attacks (Bonus)
Papers for Review:
- Black-box Adversarial Attacks with Limited Queries and Information
- Square Attack: A Query-Efficient Black-Box Adversarial Attack via Random Search
Additional Papers for Presentation:
- Simple Black-box Adversarial Attacks
- Towards Efficient Data Free Black-box Adversarial Attack
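Simple Black-box Adversarial Attacks (SimBA) is the most compact of these: repeatedly pick an unused direction (here the pixel basis), try a step in each sign, and keep whatever lowers the true-class probability, using only black-box probability queries. A sketch for a single image x in [0, 1], with `predict_proba` again an assumed stub:

```python
import torch

def simba(predict_proba, x, y, eps=0.2, steps=1000):
    """SimBA sketch (Guo et al.): greedy descent of the true-class probability
    along random pixel-basis directions; no gradients required."""
    x_adv = x.clone()
    p = predict_proba(x_adv)[y]
    dims = torch.randperm(x_adv.numel())[:steps]      # directions, without repeats
    for d in dims:
        for sign in (eps, -eps):
            candidate = x_adv.clone()
            candidate.view(-1)[d] = (candidate.view(-1)[d] + sign).clamp(0, 1)
            q = predict_proba(candidate)[y]
            if q < p:                                 # keep only improving steps
                x_adv, p = candidate, q
                break
    return x_adv
```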
Theoretical Analysis of Adversarial Training (Bonus)
Papers for Review:
- Theoretical Analysis of Adversarial Learning: A Minimax Approach
- Convergence of Adversarial Training in Overparametrized Neural Networks
Additional Papers for Presentation:
- Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
- On Achieving Optimal Adversarial Test Error
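All four papers analyze variants of the saddle-point objective behind adversarial training; for reference, with f_theta the network, l the surrogate loss, and epsilon the perturbation budget:

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\| \le \epsilon} \ell\big(f_{\theta}(x+\delta),\, y\big) \Big]
```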
Adversarial Examples, Adversarial Training & Beyond (Bonus)
Papers for Review:
- Fast Is Better Than Free: Revisiting Adversarial Training
- Universal Adversarial Perturbations
Additional Papers for Presentation:
- Adversarial Patch
- Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
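Fast Is Better Than Free shows that single-step FGSM from a random start inside the epsilon-ball can replace multi-step PGD during training, given the right step size. A sketch of one training step, assuming the same [0, 1] input convention as the snippets above:

```python
import torch
import torch.nn.functional as F

def fgsm_rs_step(model, opt, x, y, eps=8/255, alpha=10/255):
    """One FGSM-with-random-start training step (Wong et al. style):
    a single-step attack from a random point in the eps-ball stands in
    for the multi-step PGD inner loop."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grad, = torch.autograd.grad(loss, delta)
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()  # one FGSM step
    opt.zero_grad()
    F.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()    # train on it
    opt.step()
```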
Basics of Adversarial Machine Learning
Papers for Review:
- Intriguing Properties of Neural Networks
- Explaining and Harnessing Adversarial Examples
Additional Papers for Presentation:
- Poisoning Attacks against Support Vector Machines
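The Goodfellow et al. paper introduces the fast gradient sign method (FGSM), the one-step attack everything above builds on: perturb in the direction of the sign of the loss gradient. A minimal sketch under the same conventions as the earlier snippets:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Fast Gradient Sign Method (Goodfellow et al.): a single signed-gradient
    step, motivated by the local linearity of neural networks."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```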