Maarten Buyl & Tijl De Bie
Inherent Limitations of
AI Fairness
AI Fairness*
*may not actually lead to fairness
AI Fairness: a technical approach to algorithmic bias
Fairness: a socio-technical notion of non-discrimination
The prototypical approach to AI fairness
[Diagram: information about people, including sensitive information, flows into an AI model; a fairness adjustment modifies the model's predictions so that they satisfy the desired fairness properties.]
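One common instance of this prototypical pipeline is post-processing: using the sensitive information to pick per-group decision thresholds so that the adjusted predictions satisfy a fairness property such as demographic parity. A minimal sketch, with hypothetical score distributions (none of the numbers come from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores for two groups; group B scores lower on average.
scores_a = rng.normal(0.6, 0.15, 1000)
scores_b = rng.normal(0.5, 0.15, 1000)

def positive_rate(scores, threshold):
    """Fraction of individuals receiving a positive prediction."""
    return float(np.mean(scores >= threshold))

# A single shared threshold yields different positive rates per group...
print(positive_rate(scores_a, 0.55), positive_rate(scores_b, 0.55))

# ...so a 'fairness adjustment' picks per-group thresholds that equalize
# the positive rate (demographic parity), here by matching quantiles.
target = 0.5  # desired positive rate in both groups
thr_a = np.quantile(scores_a, 1 - target)
thr_b = np.quantile(scores_b, 1 - target)
print(positive_rate(scores_a, thr_a), positive_rate(scores_b, thr_b))
```

Note that this adjustment needs the sensitive information at decision time, which is exactly where the legal tension discussed later arises.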
Lack of a ground truth
task: predict whether the defendant will commit a crime (in practice: whether they will be arrested and found guilty)
|                     | black defendants | white defendants |
|---------------------|------------------|------------------|
| false positive rate | 44.9%            | 23.5%            |
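A disparity like the one in the table is computed directly from predictions and (proxy) outcomes. A minimal sketch with made-up labels, applied per group (the underlying COMPAS data is not reproduced here):

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = fraction of actual negatives that received a positive prediction."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives] == 1))

# Toy example: y_true = re-arrested (the available proxy for 'committed a
# crime'), y_pred = flagged as high-risk by the model.
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [1, 1, 0, 0, 1, 0]
print(false_positive_rate(y_true, y_pred))  # 2 of 4 negatives flagged -> 0.5
```

Because `y_true` is itself a biased proxy (arrests, not crimes), even this simple metric inherits the lack of a ground truth.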
The EU’s proposed AI Act provides exceptions for processing sensitive data
Simpson’s Paradox highlights the need for a more granular perspective
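A made-up numeric illustration of the paradox: group B can have the higher positive rate in every subgroup, yet the lower rate in aggregate, because the two groups are distributed differently over the subgroups (all counts below are hypothetical):

```python
# Hypothetical counts: (positives, total) per (group, subgroup).
data = {
    ("A", "easy"): (80, 100),   # A: 80% positive in the 'easy' subgroup
    ("A", "hard"): (20, 100),   # A: 20% positive in the 'hard' subgroup
    ("B", "easy"): (9, 10),     # B: 90%, better than A in this subgroup
    ("B", "hard"): (55, 190),   # B: ~29%, better than A here too
}

def rate(pos, total):
    return pos / total

# Per subgroup, B outperforms A:
for sub in ("easy", "hard"):
    assert rate(*data[("B", sub)]) > rate(*data[("A", sub)])

# Yet in aggregate, A outperforms B, because B is concentrated in 'hard':
agg_a = rate(80 + 20, 100 + 100)  # 0.50
agg_b = rate(9 + 55, 10 + 190)    # 0.32
assert agg_a > agg_b
```

Which view is the "fair" one depends on whether the subgroup variable is a legitimate explanation or itself a source of bias, which is precisely why a more granular perspective is needed.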
Aggregation means losing nuance (see intersectional fairness)
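The loss of nuance has a statistical cost: every sensitive attribute added for intersectional analysis multiplies the number of subgroups and shrinks the data available per subgroup. A toy calculation (the attribute cardinalities are hypothetical):

```python
from math import prod

# Hypothetical sensitive attributes and their number of categories.
attributes = {"race": 6, "gender": 3, "age_band": 5, "disability": 2}

population = 100_000
cells = prod(attributes.values())   # number of intersectional subgroups
avg_per_cell = population / cells

print(cells, avg_per_cell)  # 180 subgroups, ~556 people each on average
```

With a few hundred people per cell on average (and far fewer in small cells), per-subgroup fairness estimates become noisy: this is the trade-off between statistical power and nuance noted in the implications below.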
Aggregation requires a discretization of identity (example: the United States Census 2020)
Aggregation requires a discretization of identity
persons with disabilities = ‘those who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others’ (The UN Convention on the Rights of Persons with Disabilities)
AI has little power over the full decision process
[Diagram: information about people flows into an AI model, which outputs predictions.]
[Diagram: the model’s technical scope (information, AI model, predictions) is only a small part of the full decision process, which also involves the people affected, human decision makers, and the environment.]
Alarming implications…
- biased real-world evaluation
- surveillance of vulnerable groups
- a trade-off between statistical power and nuance
- weaker impact in important tasks
AI fairness should account for its limitations, and not work in defiance of them
AI fairness will not solve fairness (even in AI)
Making fairness adjustments may not always be beneficial: the ‘cure’ can be worse than the ‘disease’
AI fairness will benefit from more integration with its social context
… and this social context is in desperate need of new tools and viewpoints!
Soon to be published in Communications of the ACM…
Maarten Buyl and Tijl De Bie
maarten.buyl@ugent.be