Each entry lists the author(s) where given, type, title, year, venue, and code links / notes.
Note: General References/tutorials are linked to via Canvas.
This is an evolving list of candidate papers for presentation and discussion.
Note: You may also pick papers from the FAccT 2022 or AIES 2022 conferences.
Explainability Papers
1. Lundberg [feature attribution]. "From Local Explanations to Global Understanding with Explainable AI for Trees", 2020. Code: https://github.com/slundberg/shap (see the usage sketch after this list). Goes with: "How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods".
2. [interpretable models]. "Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation", 2018, AAAI. Code: https://github.com/shftan/auditblackbox. Goes with: "Does knowledge distillation really work?".
3. [interpretable models]. "How interpretable and trustworthy are GAMs?", 2021, KDD. Goes with the InterpretML package paper "InterpretML: A unified framework for machine learning interpretability"; code: https://github.com/interpretml/interpret (see the usage sketch after this list).
4. Ustun and Rudin [interpretable models]. "Learning Optimized Risk Scores", 2019, JMLR.
5. Rudin [interpretable models]. "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead", 2019, NMI.
6. [counterfactuals]. "Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives", 2018, NIPS. Code: https://github.com/IBM/Contrastive-Explanation-Method.
7. Stepin et al. [counterfactuals]. "A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence", 2021, IEEE Access.
8. [counterfactuals]. "Factual and counterfactual explanations for black box decision making", 2019, IEEE Intelligent Systems.
9. Byrne [counterfactuals]. "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning", 2019, IJCAI.
9.5. JPMC [regression]. "Counterfactual Explanations for Arbitrary Regression Models", 2021.
10. [exemplar based]. "Understanding Black-Box Predictions via Influence Functions", 2017, ICML. Code: https://github.com/kohpangwei/influence-release.
11. [exemplar based]. "Interpreting black box predictions using Fisher kernels", 2019, AISTATS.
12. Mukund Sundararajan, Ankur Taly, Qiqi Yan [neural network oriented]. "Axiomatic attribution for deep networks", 2017, ICML. Goes with: "Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy" (an integrated-gradients sketch follows this list).
13. [neural network oriented]. "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications", 2021, Proc. IEEE. Huge LRP hub at http://www.heatmapping.org/.
14. Ismail et al. [neural network oriented]. "Interpretable Mixture of Experts for Structured Data", 2022.
15. Chefer et al. [neural network oriented]. "Transformer interpretability beyond attention visualization", 2021, CVPR.
16. Karimi [causal]. "Algorithmic recourse under imperfect causal knowledge: a probabilistic approach", 2020, NeurIPS.
17. Galhotra et al. [causal]. "Explaining black-box algorithms using probabilistic contrastive counterfactuals", 2021, SIGMOD.
18. Koh et al. [HAI]. "Concept bottleneck models", 2020, ICML. Code: https://github.com/yewsiang/ConceptBottleneck.
19. Q. V. Liao and K. Varshney [HAI]. "Human-Centered Explainable AI (XAI): From Algorithms to User Experiences", 2022.
20. Lakkaraju et al. [HAI]. "Rethinking Explainability as a Dialogue: A Practitioner's Perspective", 2022.
21. Bansal et al. [HCI + XAI]. "Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance", 2021, CHI.
22. Bucinca et al. [HCI + XAI]. "To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making", 2021, CSCW.
23. Tiddi [knowledge graphs]. "Knowledge Graphs as Tools for Explainable Machine Learning: A Survey", 2022, AI Journal.
24. Slack et al. [quality/reliability]. "Reliable Post hoc Explanations: Modeling Uncertainty in Explainability", 2021, NeurIPS.
25. Rojat et al. [time series]. "Explainable artificial intelligence (XAI) on time series data: A survey", 2021.
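For entry 1, a minimal usage sketch of the linked SHAP tree explainer, assuming the current shap and scikit-learn APIs; the dataset and model here are illustrative stand-ins, not the setup from the paper.

# Local-to-global tree explanations with SHAP (entry 1); illustrative setup only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)      # exact, tree-specific Shapley values
shap_values = explainer.shap_values(X)     # local attribution per feature, per row
shap.summary_plot(shap_values, X)          # aggregate the local explanations into a global view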
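For entry 3, a minimal sketch of fitting a glass-box GAM (an explainable boosting machine) with the linked interpret package; it assumes the package's current Python API and a generic binary-classification dataset, not the paper's experiments.

# Glass-box GAM with InterpretML (entry 3); generic dataset, assumed current API.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

ebm = ExplainableBoostingClassifier(random_state=0)   # interpretable additive model
ebm.fit(X, y)

show(ebm.explain_global())               # per-feature shape functions and importances
show(ebm.explain_local(X[:5], y[:5]))    # explanations for five individual predictions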
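For entry 12, a short sketch of integrated gradients: the attribution for feature i is (x_i - baseline_i) times the path integral of the model's gradient along the straight line from the baseline to the input, approximated here by a Riemann sum. It assumes a differentiable PyTorch model mapping a batch of inputs to class scores; the model and input below are hypothetical, and this is not the authors' reference implementation.

# Riemann-sum approximation of integrated gradients (entry 12); illustrative only.
import torch

def integrated_gradients(model, x, target, baseline=None, steps=64):
    """Approximate integrated gradients for a single 1-D input x."""
    if baseline is None:
        baseline = torch.zeros_like(x)            # a common (but not mandatory) baseline
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    path = baseline + alphas * (x - baseline)     # straight-line path, shape (steps, d)
    path.requires_grad_(True)
    model(path)[:, target].sum().backward()       # one backward pass covers all path points
    avg_grad = path.grad.mean(dim=0)              # average gradient along the path
    return (x - baseline) * avg_grad              # attribution for each input feature

# Hypothetical usage with a small classifier on 10-dimensional inputs:
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
x = torch.randn(10)
print(integrated_gradients(model, x, target=1))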
Fairness Papers
1. [history]. "50 Years of Test (Un)fairness: Lessons for Machine Learning", 2019, FAT'19.
2. [post-processing]. "Equality of Opportunity in Supervised Learning", 2016, NIPS. Code: https://github.com/gpleiss/equalized_odds_and_calibration (a sketch of the equalized-odds gap follows this list).
3. [post-processing]. "On Fairness and Calibration", 2017, NIPS. Code: https://github.com/gpleiss/equalized_odds_and_calibration.
4. [pre-processing]. "Optimized Pre-Processing for Discrimination Prevention", 2017, NIPS. Code: https://github.com/fair-preprocessing/nips2017.
5. Wang and Ustun [pre-processing]. "Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions", 2019, ICML.
6. [bias detection]. "Fast Threshold Tests for Detecting Discrimination", 2018, AISTATS. Code: https://github.com/5harad/fasttt. Slides: https://samcorbettdavies.files.wordpress.com/2017/11/making-fair-decisions-with-algorithms.pdf. Goes with: "A Bayesian Model of Cash Bail Decisions", FAccT'21.
7. Celis et al. [theory]. "Fair classification with noisy protected attributes: A framework with provable guarantees", 2021, ICML. Code: github.com/vijaykeswani/NoisyFair-Classification.
8. Quy et al. [data issues]. "A survey on datasets for fairness-aware machine learning", 2022, WIREs. Goes with: "Retiring Adult: New datasets for fair machine learning", NeurIPS'21.
9. [in-processing]. "Mitigating Unwanted Biases with Adversarial Learning", 2018, AAAI. See https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/.
10. [in-processing]. "Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees", 2019, ACM.
11. Black et al. [individual fairness]. "FlipTest: fairness testing via optimal transport".
12. Speicher (Gummadi's group) [individual fairness]. "A unified approach to quantifying algorithmic unfairness: Measuring individual and group unfairness via inequality indices", 2018, KDD.
13. [fairness - HCI]. "Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments", 2019, FAT'19.
14. Min Lee [HCI]. "Procedural justice in algorithmic fairness: Leveraging transparency and outcome control for fair algorithmic mediation", 2019, CSCW.
15. Lee & Singh [HCI]. "The landscape and gaps in open source fairness toolkits", 2021, CHI.
16. Wang & Joachims [ranking]. "User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets", 2021, SIGIR.
17. (Kenthapadi) [ranking]. "Fairness-aware ranking in search & recommendation systems with application to LinkedIn talent search", 2019, KDD.
18. Singh, David Kempe, Thorsten Joachims [ranking]. "Fairness in ranking under uncertainty", 2021, NIPS.
19. Ke Yang [ranking, causality]. "Causal intersectionality for fair ranking".
20. Gopalan et al. [fairness of XAI]. "Bias: Measuring the Fairness of Explanations".
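For entries 2 and 3, a minimal sketch of the equalized-odds gap that those post-processing methods aim to reduce, assuming binary labels y, binary predictions y_hat, and a binary group attribute a; it only measures the gap and does not implement the papers' post-processing step.

# Equalized-odds gap (entries 2-3): TPR and FPR differences between two groups.
import numpy as np

def equalized_odds_gap(y, y_hat, a):
    """Absolute TPR and FPR differences between the groups a == 0 and a == 1."""
    gaps = {}
    for name, y_value in (("tpr_gap", 1), ("fpr_gap", 0)):
        # P(y_hat = 1 | y = y_value, a = group) for each group
        rates = [y_hat[(y == y_value) & (a == group)].mean() for group in (0, 1)]
        gaps[name] = float(abs(rates[0] - rates[1]))
    return gaps

# Tiny illustrative example (not from any dataset in the list):
y     = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_hat = np.array([1, 0, 0, 1, 1, 0, 0, 0])
a     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(equalized_odds_gap(y, y_hat, a))   # {'tpr_gap': 0.0, 'fpr_gap': 0.5}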
Drift/Robustness Papers
0. Pick any paper from the "required" sections of https://github.com/acmi-lab/cmu-10732-robustness-adaptivity-shift/blob/main/Schedule.md, except the "mixture proportion" paper.
1. Stephan Rabanser, Stephan Günnemann, Zachary C. Lipton [drift detection]. "Failing loudly: An empirical study of methods for detecting dataset shift", 2018, NeurIPS.
2. Reis et al. [drift detection]. "Fast Unsupervised Online Drift Detection Using Incremental Kolmogorov-Smirnov Test", 2016, KDD (a two-sample drift-check sketch follows this list).
3. Lipton, Wang, Smola [drift]. "Detecting and correcting for label shift with black box predictors", 2018, ICML.
4. Jingkang Yang [drift]. "Generalized Out-of-Distribution Detection", 2021.
5. Rawal et al. [drift]. "Algorithmic recourse in the wild: Understanding the impact of data and model shifts", 2020.
6. Koh [drift benchmarking]. "WILDS: A Benchmark of in-the-Wild Distribution Shifts", 2021, ICML.
7. Rezaei et al. [drift + fairness]. "Robust Fairness Under Covariate Shift", 2021, AAAI.
8. H. Singh et al. [drift + fairness]. "Fairness violations and mitigation under covariate shift", 2021, FAccT.
9. M. Abdar et al. [uncertainty]. "A review of uncertainty quantification in deep learning: Techniques, applications and challenges", 2021, Information Fusion.
10. Bhatt et al. [uncertainty + XAI]. "Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty", 2021, AIES.
11. Ley et al. [uncertainty + XAI]. "Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates", 2022, AAAI.
12. Ilyas et al. [adversarial attacks]. "Adversarial examples are not bugs, they are features", 2019, NeurIPS.
13. F. Tramer, N. Carlini, W. Brendel, A. Madry [adversarial attacks]. "On adaptive attacks to adversarial example defenses", 2020, NeurIPS.
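For entries 1 and 2, a minimal offline drift check that compares each feature's reference and production samples with a two-sample Kolmogorov-Smirnov test and a Bonferroni correction; this illustrates the basic idea only and is neither the incremental algorithm from the KDD 2016 paper nor the full pipeline studied in "Failing loudly".

# Per-feature two-sample drift check (entries 1-2); offline illustration only.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference, current, alpha=0.05):
    """Indices of features whose marginal distribution appears to have shifted."""
    n_features = reference.shape[1]
    threshold = alpha / n_features                      # Bonferroni correction across features
    flagged = []
    for j in range(n_features):
        _, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < threshold:
            flagged.append(j)
    return flagged

# Synthetic example: feature 0 shifts in mean, feature 1 does not.
rng = np.random.default_rng(0)
reference = rng.normal(size=(2000, 2))
current = np.column_stack([rng.normal(loc=0.5, size=2000), rng.normal(size=2000)])
print(drifted_features(reference, current))             # typically prints [0]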