1 of 56

Bias in Bios

- Aakash Srinivasan and Arvind Krishna

2 of 56

Papers

  • Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting - De-Arteaga M, Romanov A, Wallach H, Chayes J, Borgs C, Chouldechova A, Geyik S, Kenthapadi K, Kalai AT. Conference on Fairness, Accountability, and Transparency 2019 (FAT).
  • What's in a Name? Reducing Bias in Bios without Access to Protected Attributes - Romanov A, De-Arteaga M, Wallach H, Chayes J, Borgs C, Chouldechova A, Geyik S, Kenthapadi K, Rumshisky A, Kalai AT. Best thematic paper award at NAACL 2019.

3 of 56

High Level Overview

  • Biases in machine learning models for NLP: how even simple ML models for tasks like text classification are biased.
  • How these biases get compounded when such models are deployed in the real world.
  • Ways to overcome the bias: does simple modification of the corpora ensure that the learned models are unbiased? Can we add some kind of loss function to overcome bias?

4 of 56

Setting

  • Task: Occupation classification from biographies (bios). Input: a biography; Output: the occupation of the person it describes.
  • Goal: To study gender bias in occupation classification. Gender is assumed to be binary for this case study.

5 of 56

6 of 56

william henry gates iii ( born october 28 , 1955 ) is an american business magnate , investor , author , philanthropist , humanitarian , and principal founder of microsoft corporation . during his career at microsoft , gates held the positions of chairman , ceo and chief software architect , while also being the largest individual shareholder until may 2014 . in 1975 , gates and paul allen launched microsoft , which became the world 's largest pc software company . gates led the company as chief executive officer until stepping down in january 2000 , but he remained as chairman and created the position of chief software architect for himself . in june 2006 , gates announced that he would be transitioning from full-time work at microsoft to part-time work and full-time work at the bill & melinda gates foundation , which was established in 2000 .

[Diagram: the biography above is fed to the Model, which outputs the occupation label "Software Engineer".]

7 of 56

Dataset

  • Common Crawl: online biographies in English. Keep biographies that open with a pattern like "is a(n) (xxx) title", e.g., title: Software Engineer, xxx: Senior (a minimal sketch of this step follows the data description below).
  • After merging occupations (e.g., professor and economics professor) and further preprocessing: 28 distinct occupations.
  • Biographies are typically written in the third person, and because pronouns are gendered in English, a (likely) self-identified binary gender can be extracted.

Data : (X,G,Y)

X: biographies

G: Gender (M/F) - inferred from X

Y: Occupation
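A minimal sketch of this construction, assuming a pattern-matching step like the one described above; the regex, occupation list, and pronoun heuristic are illustrative stand-ins rather than the authors' exact pipeline:

```python
import re

# Illustrative occupation titles; the real pipeline uses the 28 merged occupations.
TITLES = ["software engineer", "professor", "nurse", "surgeon"]

# Match "... is a(n) (optional modifiers) <title>", e.g. "is a senior software engineer".
PATTERN = re.compile(
    r"\bis an? (?:[\w-]+ ){0,3}(" + "|".join(TITLES) + r")\b", re.IGNORECASE
)

FEMALE_PRONOUNS = {"she", "her", "hers"}
MALE_PRONOUNS = {"he", "him", "his"}

def extract_example(bio: str):
    """Return (X, G, Y) for one biography, or None if no title pattern is found."""
    match = PATTERN.search(bio)
    if match is None:
        return None
    occupation = match.group(1).lower()                      # Y
    tokens = re.findall(r"[a-z']+", bio.lower())
    f = sum(t in FEMALE_PRONOUNS for t in tokens)
    m = sum(t in MALE_PRONOUNS for t in tokens)
    gender = "F" if f > m else "M" if m > f else None        # G, inferred from pronouns
    # X: drop the first sentence, which states the occupation explicitly.
    text = bio.split(". ", 1)[-1]
    return text, gender, occupation

bio = ("Nancy Lee is a registered nurse. She graduated from Lehigh University, "
       "with honours in 1998. Nancy has years of experience in weight loss surgery.")
print(extract_example(bio))
```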

8 of 56

Dataset

Original biography:

Nancy Lee is a registered nurse. She graduated from Lehigh University, with honours in 1998. Nancy has years of experience in weight loss surgery, patient support, education, and diabetes.

Input biography (first sentence removed):

She graduated from Lehigh University, with honours in 1998. Nancy has years of experience in weight loss surgery, patient support, education, and diabetes.

Gender: Female

Occupation: Nurse

9 of 56

- ~400k bios

- Imbalanced occupation classes

- Biography length: 20-200 tokens

- Each occupation also has a gender imbalance, e.g., Surgeon: 14.6% female

10 of 56

Semantic Representations

  • Bag of Words (BOW)
  • Word Embeddings (fastText)
  • GRU with Attention

Note: The goal is not to study bias in, or to debias, the word embeddings themselves.

11 of 56

BOW

  • Good Baseline
  • Interpretable
  • Sparse

Model: one-vs-rest (OVR) logistic regression with L2 regularization
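A minimal sketch of this baseline, assuming scikit-learn; the toy bios and occupation labels are illustrative stand-ins for the real dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the real bios/occupations.
bios = [
    "she has years of experience in patient support and diabetes education",
    "he pursued fellowship training in liver transplantation",
    "he teaches graduate courses and supervises phd students",
    "she founded a startup and writes backend services in python",
]
occupations = ["nurse", "surgeon", "professor", "software_engineer"]

# Bag-of-words features + one-vs-rest logistic regression with an L2 penalty
# (scikit-learn's LogisticRegression uses L2 regularization by default).
model = make_pipeline(
    CountVectorizer(),
    OneVsRestClassifier(LogisticRegression(penalty="l2", C=1.0, max_iter=1000)),
)
model.fit(bios, occupations)
print(model.predict(["she supervises phd students and teaches courses"]))
```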

12 of 56

Word Embeddings

[Diagram: the fastText word embeddings of the tokens "It is raining heavily." are summed and averaged to produce a sentence encoding.]

OVR Logistic Regression with L2
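A minimal sketch of this encoder, using gensim's FastText trained on a toy corpus as a stand-in for pretrained fastText vectors; data and hyperparameters are illustrative:

```python
import numpy as np
from gensim.models import FastText
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Toy corpus; in the paper, pretrained fastText embeddings are used instead.
bios = [
    "she has years of experience in patient support and diabetes education",
    "he pursued fellowship training in liver transplantation",
    "he teaches graduate courses and supervises phd students",
    "she writes backend services and designs distributed systems",
]
occupations = ["nurse", "surgeon", "professor", "software_engineer"]
tokenized = [b.split() for b in bios]

# Tiny fastText model trained on the toy corpus, standing in for pretrained vectors.
ft = FastText(sentences=tokenized, vector_size=50, min_count=1, epochs=20)

def encode(tokens):
    """Sentence encoding = average of the word embeddings."""
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = np.stack([encode(t) for t in tokenized])
clf = OneVsRestClassifier(LogisticRegression(penalty="l2", max_iter=1000))
clf.fit(X, occupations)
print(clf.predict(encode("he supervises phd students".split()).reshape(1, -1)))
```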

13 of 56

GRU + Attention

Sentence Embedding
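The slides show this architecture as a figure; below is a minimal PyTorch sketch of a bidirectional GRU with additive attention that produces a sentence embedding. Layer sizes and class structure are illustrative assumptions, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class GRUAttentionClassifier(nn.Module):
    """Sketch of a GRU encoder with additive self-attention over hidden states.

    Attention produces one scalar weight per token; the sentence embedding is the
    attention-weighted sum of the (bidirectional) GRU hidden states.
    """

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=28):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)         # one score per token
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                         # (batch, seq_len)
        h, _ = self.gru(self.embed(token_ids))            # (batch, seq_len, 2*hidden)
        scores = self.attn(h).squeeze(-1)                 # (batch, seq_len)
        alpha = torch.softmax(scores, dim=-1)             # attention weights
        sentence = (alpha.unsqueeze(-1) * h).sum(dim=1)   # sentence embedding
        return self.out(sentence), alpha                  # logits + weights

model = GRUAttentionClassifier(vocab_size=5000)
logits, alpha = model(torch.randint(1, 5000, (2, 30)))    # 2 bios of 30 tokens
print(logits.shape, alpha.shape)                          # torch.Size([2, 28]) torch.Size([2, 30])
```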

14 of 56

GRU + Attention

15 of 56

Quantification of Bias

G: gender

Y: true occupation

Y_hat: predicted occupation

TPR_{g,y} = P[Y_hat = y | G = g, Y = y] (true positive rate for gender g and occupation y)

Gap_{g,y} = TPR_{g,y} - TPR_{~g,y} (the TPR gender gap for occupation y)

Ideal Scenario: Gap = 0 for all occupations
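A small sketch of this quantity computed per occupation; function and variable names are illustrative:

```python
from collections import defaultdict

def tpr_gaps(genders, y_true, y_pred):
    """Per-occupation TPR gender gap: TPR_{F,y} - TPR_{M,y},
    where TPR_{g,y} = P[Y_hat = y | G = g, Y = y]."""
    correct = defaultdict(int)  # (gender, occupation) -> # correctly classified
    total = defaultdict(int)    # (gender, occupation) -> # with that true occupation
    for g, y, y_hat in zip(genders, y_true, y_pred):
        total[(g, y)] += 1
        correct[(g, y)] += int(y_hat == y)

    gaps = {}
    for y in set(y_true):
        tpr = {g: correct[(g, y)] / total[(g, y)] for g in ("F", "M") if total[(g, y)]}
        if len(tpr) == 2:
            gaps[y] = tpr["F"] - tpr["M"]   # 0 in the ideal scenario
    return gaps

genders = ["F", "F", "M", "M", "F", "M"]
y_true  = ["nurse", "nurse", "nurse", "surgeon", "surgeon", "surgeon"]
y_pred  = ["nurse", "nurse", "surgeon", "surgeon", "nurse", "surgeon"]
print(tpr_gaps(genders, y_true, y_pred))  # {'nurse': 1.0, 'surgeon': -1.0}
```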

16 of 56

[Plot: per-occupation TPR gender gaps, ranging from "more accurate on female" to "more accurate on male".]

17 of 56

Why is this Bad?

Applying this model in a real-world scenario will not only reflect the existing gender gaps in each occupation, it will also amplify them!

"Who is offered a job today will affect the gender (im)balance in that occupation in the future"

18 of 56

19 of 56

20 of 56

Why is this Bad?

Leaky Pipeline - Classifier compounds existing imbalances

21 of 56

Why is this Bad?

She graduated from Lehigh University, with honours in 1998. Nancy has years of experience in weight loss surgery, patient support, education, and diabetes.

Nurse

He graduated from Lehigh University, with honours in 1998. Andrew has years of experience in weight loss surgery, patient support, education, and diabetes.

Surgeon

Counterfactual Analysis

Marianne Bertrand and Sendhil Mullainathan. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American economic review 94, 4 (2004), 991–1013.

22 of 56

Why does the model learn this gender bias?

23 of 56

She graduated from Lehigh University, with honours in 1998. Nancy has years of experience in weight loss surgery, patient support, education, and diabetes.

Andrew attended Johns Hopkins Medical School and trained at the Massachusetts General Hospital in Boston. He pursued fellowship training at UCLA, studying liver transplantation

She, Nancy → Nurse

He, Andrew → Surgeon

The explicit gender indicators in each biography co-occur with the occupation label, so the model picks them up as predictive features.

24 of 56

What about "scrubbing" explicit bias indicators like First names and Pronouns in the training set?

<token> graduated from Lehigh University, with honours in 1998. <token> has years of experience in weight loss surgery, patient support, education, and diabetes.

<token> attended Johns Hopkins Medical School and trained at the Massachusetts General Hospital in Boston. <token> pursued fellowship training at UCLA, studying liver transplantation

Removed tokens: he, she, her, his, him, hers, himself, herself, mr, mrs, and ms (first names are replaced as well).
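A minimal sketch of such a scrubbing step; the name list is illustrative (a real pipeline would use a much larger list of first names):

```python
import re

# Gendered tokens listed on the slide, plus a small illustrative set of first names;
# both get replaced by a placeholder token.
GENDERED = {"he", "she", "her", "his", "him", "hers", "himself", "herself",
            "mr", "mrs", "ms"}
FIRST_NAMES = {"nancy", "andrew"}   # illustrative; a real list would be much larger

def scrub(bio: str, placeholder: str = "<token>") -> str:
    def repl(match):
        word = match.group(0)
        return placeholder if word.lower() in GENDERED | FIRST_NAMES else word
    return re.sub(r"[A-Za-z]+", repl, bio)

print(scrub("She graduated from Lehigh University. Nancy has years of experience."))
# <token> graduated from Lehigh University. <token> has years of experience.
```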

25 of 56

26 of 56

27 of 56

  • "scrubbing" reduces the gender bias but doesn't eliminate it!
  • Predicting people's gender from "scrubbed" biographies using DNNs on a balanced dataset of male and female. Expected accuracy - 50%, obtained accuracy: 68%!
  • Why does this happen, when we "scrubbed" the explicit indicators?


29 of 56

Proxy Candidates

Example: women, husband, mother, woman, and female

30 of 56

Recap

  • Occupation classification, quantification of bias, and the consequences of bias in ML models for such tasks.
  • "Scrubbing" reduces gender bias but doesn't necessarily eliminate it: proxy candidates remain.

31 of 56

Desirable Properties

  • A method that can simultaneously consider and resolve multiple biases, e.g., gender, race, or combinations/intersections of them (e.g., Asian men). "Scrubbing" may not handle this efficiently.
  • A method that is easily compatible with current ML/DL models. Ideally, the framework should not change existing training mechanisms or model architectures.
  • Debiasing shouldn't reduce the accuracy of the system significantly.
  • No explicit use of protected attributes such as race or gender for reducing the model's bias:
    • They may not always be available.
    • It is not always legal to use them.
    • Sometimes we may not even know which attributes matter, e.g., culture-specific language disparities across countries.

Qn: Is there a proxy that is 1) easily available, 2) "OKAY" to use, and 3) representative of societal bias?

32 of 56

What’s in a Name?

Core Idea: Word embeddings of people's names as universal proxies!

Word embeddings encode many societal biases, including biases associated with people's names.

No need to explicitly define protected groups.

33 of 56

What’s in a Name?

Swinger, N., De-Arteaga, M., Heffernan IV, N.T., Leiserson, M.D. and Kalai, A.T., 2019, January. What are the biases in my word embedding?. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society

34 of 56

Method Formulation

1. Cluster Constrained Loss

2. Covariance Constrained Loss

35 of 56

Cluster Constrained Loss

Intuition: Name embeddings are quite indicative of the attributes that are susceptible to societal biases. Clustering name embeddings may therefore reveal latent groups, and each data point can be associated with one of them.

Data: (biography text, name, occupation).

The latent groups are obtained by k-means clustering of the name embeddings. It turns out that, with an appropriate k, the clusters are interpretable!
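A minimal sketch of this clustering step; random vectors stand in for real name embeddings (e.g., averaged fastText vectors of first and last names), and k = 12 is an illustrative choice:

```python
import numpy as np
from sklearn.cluster import KMeans

# Assumed inputs: one name embedding per training biography (names are needed
# only at training time). Random vectors stand in for real embeddings here.
rng = np.random.default_rng(0)
name_embeddings = rng.normal(size=(1000, 300))   # one 300-d vector per training bio

k = 12                                           # number of latent groups (a design choice)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(name_embeddings)

# Each training biography is now associated with a latent group; with an
# appropriate k these clusters tend to align with interpretable demographics.
cluster_of_bio = kmeans.labels_                  # shape (1000,)
print(np.bincount(cluster_of_bio))               # cluster sizes
```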

36 of 56

Cluster Constrained Loss

37 of 56

Cluster Constrained Loss

Given the "latent community" in which each biography belongs to, the task is to ensure that the pairwise GAP for each community and each occupation is as minimum as possible.

38 of 56

Intuition: Example

[Diagram, built up over several slides: latent clusters discovered from name embeddings (Asian Male, European Females, Middle-Aged Americans) alongside the occupations Professor, S/W Engineer, and Nurse. Each cluster-occupation pair carries an average predicted probability p_avg (illustrative values on the slides: 0.4 / 0.2 / 0.1, then 0.2 / 0.3 / 0.9, then 0.5 / 0.95 / 0.05); CluCL penalizes the pairwise differences between these per-cluster averages for each occupation.]

43 of 56

Any problem with CluCL?

How do we choose the number of clusters k so that the latent clusters are interpretable and capture the relevant groups?

44 of 56

Covariance Constrained Loss

Core Idea: A person's name (or rather its embedding) should not determine the predicted probability of the true occupation on the training set.

Intuition: Each latent dimension of the name embedding may correspond to some feature associated with societal bias.

We therefore minimize the covariance between each such feature and the predicted probability score of the correct occupation.
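A sketch in the spirit of this loss, again assuming a PyTorch classifier; the penalty is the norm of the batch covariance between the name-embedding dimensions and the predicted probability of the true occupation (shapes and the weight lam are illustrative):

```python
import torch
import torch.nn.functional as F

def covariance_penalty(name_emb, probs, y_true):
    """Penalize the covariance, over the training batch, between each dimension of
    the name embedding and the predicted probability of the true occupation."""
    p_true = probs[torch.arange(len(y_true)), y_true]            # (batch,)
    name_centered = name_emb - name_emb.mean(dim=0, keepdim=True)
    p_centered = p_true - p_true.mean()
    cov = (name_centered * p_centered.unsqueeze(1)).mean(dim=0)  # one value per dimension
    return cov.norm()                                            # penalize its magnitude

logits = torch.randn(8, 28, requires_grad=True)
y_true = torch.randint(0, 28, (8,))
name_emb = torch.randn(8, 300)            # name embeddings of the training bios

probs = F.softmax(logits, dim=-1)
lam = 1.0                                 # strength of the fairness penalty (illustrative)
loss = F.cross_entropy(logits, y_true) + lam * covariance_penalty(name_emb, probs, y_true)
loss.backward()
print(loss.item())
```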

45 of 56

Covariance Constrained Loss

46 of 56

Intuition: Example

[Diagram: two people, Alex and Kenya, with illustrative latent features from their name embeddings (Gender: 7, Race: 9 for Alex; Gender: 5, Race: 0 for Kenya) and the model's predictions Pr(S/W Engineer) = 0.9 for Alex versus 0.1 for Kenya. CoCL penalizes this covariance between name-embedding features and the predicted probability of the true occupation.]

47 of 56

Let's check whether it satisfies the desirable properties:

  • Handles multiple biases? Yes.
  • Compatible and generalizable? Yes.
  • Does debiasing reduce the accuracy of the system significantly? No.
  • Avoids explicit use of protected attributes such as race and gender? Yes.

48 of 56

How do we evaluate?

To quantify and evaluate bias we need access to the protected attributes; however, training and prediction are independent of them. Moreover, names are needed only for the training instances, not for the test instances.

Balanced TPR

49 of 56

How do we evaluate?

Reason for RMSE over the average: we are more interested in mitigating larger biases.

Gender GAP per occupation:

Model   | Surgeon | Rapper | Average | RMSE
Model 1 |       1 |    999 |     500 | 706.4
Model 2 |     500 |    500 |     500 | 500
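A quick check of the numbers in the table above (gap values are illustrative):

```python
import numpy as np

# Gender gaps (in arbitrary units) from the table above.
gaps = {"Model 1": [1, 999], "Model 2": [500, 500]}

for model, g in gaps.items():
    g = np.array(g, dtype=float)
    avg = g.mean()
    rmse = np.sqrt((g ** 2).mean())
    print(f"{model}: average = {avg:.1f}, RMSE = {rmse:.1f}")
# Model 1: average = 500.0, RMSE = 706.4   <- the larger bias on Rapper is penalized
# Model 2: average = 500.0, RMSE = 500.0
```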

50 of 56

Results

Note: The notions of race (r) and gender (g) are not inherent to the methodology itself.

51 of 56

52 of 56

Results

53 of 56

Interpreting the weights of the model

54 of 56

Key Takeaways

  • Gender bias in occupation classification and its quantification.
  • Compounding bias. "Scrubbing" doesn't necessarily remove bias.
  • Name embeddings are representative of demographic information, and this can be exploited to debias an ML model without access to protected attributes.
  • Cluster constrained and covariance constrained losses, and the intuition behind them.
  • These methods reduced bias but didn't eliminate it!

55 of 56

Critical analysis of proposed method

+ No need to explicitly specify the group(s) susceptible to occupational bias; latent groups are identified by clustering name embeddings.

+ The clusters can be interpretable and can simultaneously cover multiple protected attributes without explicit use of, or access to, them. For example, a domain expert based in the United States may not think of testing for caste discrimination, so biases that an embedding may have against certain Indian last names could go unnoticed.

+ Names are needed only in the training phase.

- How to choose the number of clusters is unclear, and there is no analysis of whether the clusters remain interpretable for smaller or larger k. CoCL is one way to sidestep this.

- No analysis with respect to debiased word embeddings.

56 of 56

Questions?