1 of 22

Maarten Buyl & Tijl De Bie

Inherent Limitations of

AI Fairness

2 of 22


AI Fairness*

*may not actually lead to fairness

3 of 22


AI Fairness: a technical approach to algorithmic bias

Fairness: a socio-technical notion of non-discrimination

4 of 22

The prototypical approach to AI fairness


[Diagram: people provide information, including sensitive information, to an AI model; a fairness adjustment modifies the model so that its predictions satisfy the desired fairness properties.]
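
To make this pipeline concrete, below is a minimal sketch in Python (not the authors' implementation; all data is synthetic, and the per-group thresholding is just one of many possible fairness adjustments). It measures a demographic-parity gap before and after a crude post-processing adjustment.

```python
# Minimal sketch of the prototypical pipeline: information (including a
# sensitive attribute) feeds an AI model, a fairness adjustment modifies its
# predictions, and fairness properties are then measured over groups of people.
# All data is synthetic; the per-group thresholds are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                        # binary sensitive attribute
scores = np.clip(rng.normal(0.45 + 0.1 * group, 0.2), 0, 1)  # model scores, group-dependent

def positive_rate(preds, group, g):
    """Share of positive predictions within group g."""
    return preds[group == g].mean()

# Unadjusted predictions: one threshold for everyone.
preds = (scores >= 0.5).astype(int)
gap = abs(positive_rate(preds, group, 0) - positive_rate(preds, group, 1))
print(f"demographic-parity gap before adjustment: {gap:.3f}")

# Crude post-processing 'fairness adjustment': per-group thresholds chosen so
# that both groups receive roughly the same overall positive rate.
target = preds.mean()
adjusted = np.empty_like(preds)
for g in (0, 1):
    thr = np.quantile(scores[group == g], 1 - target)
    adjusted[group == g] = (scores[group == g] >= thr).astype(int)

gap = abs(positive_rate(adjusted, group, 0) - positive_rate(adjusted, group, 1))
print(f"demographic-parity gap after adjustment:  {gap:.3f}")
```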

5 of 22

  1. Lack of a ground truth
  2. Need for sensitive data
  3. Categorization of groups
  4. Limited power over decisions


6 of 22

  1. Lack of a ground truth
  2. Need for sensitive data
  3. Categorization of groups
  4. Limited power over decisions


7 of 22

Lack of a ground truth


task: predict whether the defendant will commit a crime (in practice, the label records whether they will be arrested and found guilty)

false positive rate: 44.9% for black defendants vs. 23.5% for white defendants
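
For reference, a group-wise false positive rate is straightforward to compute; the sketch below uses made-up labels and predictions, not the actual COMPAS data.

```python
# Computing a group-wise false positive rate: of the people who did NOT
# re-offend (according to the proxy label), how many were predicted high-risk?
# The arrays below are made up for illustration only.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """P(predicted positive | actually negative)."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])   # proxy label: arrested and found guilty
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])   # model's high-risk prediction
group = np.array(["black", "white", "black", "black",
                  "white", "white", "black", "white"])

for g in ("black", "white"):
    mask = group == g
    print(f"{g}: FPR = {false_positive_rate(y_true[mask], y_pred[mask]):.2f}")
```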

8 of 22

  1. Lack of a ground truth
  2. Need for sensitive data
  3. Categorization of groups
  4. Limited power over decisions


9 of 22


[Diagram: the same prototypical pipeline; both the fairness adjustment and the measured fairness properties depend on access to the sensitive information.]

10 of 22

The EU’s proposed AI Act provides exceptions for processing sensitive data


11 of 22

  1. Lack of a ground truth
  2. Need for sensitive data
  3. Categorization of groups
  4. Limited power over decisions


12 of 22

Simpson’s Paradox highlights the need for a more granular perspective
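
A tiny numerical illustration of the paradox (all numbers hypothetical): group A is accepted more often than group B within every department, yet less often overall, because group A mostly applies to the more selective department.

```python
# Hypothetical illustration of Simpson's Paradox with two departments
# and two groups; all numbers are invented for illustration.
applications = {
    # department: {group: (accepted, applied)}
    "easy_dept": {"A": (9, 10), "B": (80, 100)},
    "hard_dept": {"A": (25, 100), "B": (2, 10)},
}

totals = {"A": [0, 0], "B": [0, 0]}
for dept, groups in applications.items():
    for g, (accepted, applied) in groups.items():
        totals[g][0] += accepted
        totals[g][1] += applied
        print(f"{dept}, group {g}: {accepted / applied:.0%} accepted")

for g, (accepted, applied) in totals.items():
    print(f"overall,   group {g}: {accepted / applied:.0%} accepted")
```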


13 of 22

Aggregation means losing nuance; see intersectional fairness
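
One reason aggregation persists is statistical power. The rough sketch below (with hypothetical attribute names and category counts) shows how quickly intersectional subgroups multiply and per-subgroup sample sizes shrink.

```python
# Rough sketch: intersectional subgroups multiply with every sensitive
# attribute, so the samples available per subgroup shrink quickly.
# Attribute names and category counts are hypothetical.
from math import prod

n_samples = 10_000
attributes = {"race": 6, "gender": 3, "age_bracket": 5, "disability": 2}

n_subgroups = prod(attributes.values())
print(f"intersectional subgroups: {n_subgroups}")                       # 180
print(f"average samples per subgroup: {n_samples / n_subgroups:.0f}")   # ~56
```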

14 of 22

Aggregation requires a discretization of identity

United States Census 2020

15 of 22

Aggregation requires a discretization of identity

persons with disabilities = ‘those who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others’

(The UN Convention on the Rights of Persons with Disabilities)

16 of 22

  1. Lack of a ground truth
  2. Need for sensitive data
  3. Categorization of groups
  4. Limited power over decisions


17 of 22


AI has little power over the full decision process

[Diagram: of the whole process affecting people, the AI model only covers the step from information to predictions.]

18 of 22


[Diagram: the technical scope (information → AI model → predictions) is only one part of the broader decision process, which also involves human decision makers and the wider environment acting on people.]

19 of 22

Alarming implications…

  1. Lack of a ground truth → biased real-world evaluation
  2. Need for sensitive data → surveillance on vulnerable groups
  3. Categorization of groups → trade-off between statistical power and nuance
  4. Limited power over decisions → weaker impact in important tasks

20 of 22

AI fairness should account for its limitations and not work in defiance of them

AI fairness will not solve fairness (even in AI)

Making fairness adjustments may not always be beneficial: the ‘cure’ can be worse than the ‘disease’

AI fairness will benefit from more integration with its social context

… and this social context is in desperate need of new tools and viewpoints!


21 of 22

Soon to be published in Communications of the ACM


22 of 22

Maarten Buyl and Tijl De Bie

maarten.buyl@ugent.be