1 of 32

Social Futures Lab

Machine Learning & Bias

UW Allen School of Computer Science & Engineering


Waiting? Tell someone about the last time you used a voice assistant!

Spreading Awareness On the Biases Within Machine Learning Models

2 of 32

  • UW Social Futures Lab Researchers
    • Social Computing
      • Design
      • Social Science
      • Computer Science
  • Social Computing Curriculum Project
    • Involves High Schoolers
      • Learning Styles
      • Impacts of Lessons
      • Computational Learning

Who Are We?

3 of 32

  • IRB-Exempt Research Study Norms
    • Right to Information,
    • Right to Confidentiality, &
    • Right to Refuse Participation
  • Classwide Norms
    • General Respect for One Another
    • Will Follow Norms Set by Your Class

Let’s Talk Norms!

4 of 32

Machine Learning & Bias Warm-Up Link

bit.ly/uwsfl-pre

5 of 32

I Can...

  1. Identify biases that are prevalent in current machine learning technology

  2. Explain how supervised learning is used to train machine learning models

  3. Understand the problems that can arise from biases in machine learning models

6 of 32

  • Form of Artificial Intelligence
  • Allows machines to learn/adapt based on data/information
  • Software applications become more accurate at predicting outcomes with more data
  • One type: Supervised Learning

What is machine learning?

7 of 32

Where is machine learning used?

  • Used in our everyday tools
    • Autocorrect
    • Amazon Alexa, Google Home, Siri
    • Search Engines

  • Used to identify patterns and solve problems

8 of 32

  • Use of labeled datasets to train algorithms that predict outputs accurately.

  • Training time (giving the model inputs and correct outputs)

  • Test time (model predicts outputs given inputs)

Supervised Learning
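The training-time/test-time split above can be sketched with a toy supervised learner. This is a minimal illustration (a 1-nearest-neighbor classifier with invented data), not any particular production system:

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbor classifier.
# The labeled examples and feature values are invented for illustration.

def train(examples):
    """Training time: memorize the labeled (input, output) pairs."""
    return list(examples)

def predict(model, x):
    """Test time: predict the label of the closest training input."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Training data: hours of practice -> passed the test (1) or not (0)
labeled_data = [(1, 0), (2, 0), (6, 1), (8, 1)]
model = train(labeled_data)

print(predict(model, 7))    # closest training input is 6 -> predicts 1
print(predict(model, 1.5))  # closest training inputs are 1 and 2 -> predicts 0
```

Notice that the model can only be as good as its labeled data: whatever patterns (or biases) are in the training examples get reproduced at test time.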

9 of 32

  • Incorrect or unfair predictions made by a machine learning model
  • Usually stems from the unintended real-life biases of designers

What is machine bias?

Types of ML Bias

10 of 32

What are patterns in algorithmic bias?

Rapid Resume Reports

11 of 32

Observation Sheet

Resume Set

Materials

12 of 32

Activity

Goal: Grade the resume set with a letter grade (A+ to F-)

Think (5 min): Individually grade the resumes. Think about what skills and criteria are important to you!

Pair (2 min): Turn to someone next to you and talk about how you decided to grade the resumes!

Share: We will share our observations as a class afterward!

13 of 32

  • What grade did you give each resume?

  • What skills and experiences stood out and were most important to you?

  • In what ways were they similar or different?

Let’s Share!

14 of 32

Let’s see the Resume Screener Results!

James

Catherine

  • Why do you think certain resumes got the score they got?
    • In what ways did you score differently?

  • What did you think of the types of topics the resumes were being graded on?

15 of 32

  • Put “Never” in front of all of the experiences

  • Changed GPA to 0.0

  • Changed some Honors & Distinctions

  • How would you grade this resume?

James’ Anti-Resume

16 of 32

Anti-Resume Results

  • Still got a decent grade!

  • Similar skills and jobs noted
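One way to see why the anti-resume could still score well: a screener that matches keywords without reading context won't notice negations. The scorer below is hypothetical (not the actual screener from the activity), with made-up keywords:

```python
# Hypothetical keyword-counting resume scorer (NOT the actual screener
# used in the activity): it counts skill keywords but ignores context,
# so prefixing experiences with "Never" barely changes the score.

SKILL_KEYWORDS = {"python", "leadership", "teamwork", "research"}

def score(resume_text):
    """Count how many words in the resume match a skill keyword."""
    words = resume_text.lower().split()
    return sum(1 for w in words if w in SKILL_KEYWORDS)

resume = "Led teamwork projects and research using Python leadership"
anti_resume = "Never led teamwork projects and never did research using Python"

print(score(resume))       # teamwork, research, python, leadership -> 4
print(score(anti_resume))  # "never" isn't a keyword, so 3 skills still match
```

A model like this "sees" the same skill words in both resumes, which mirrors how the anti-resume kept similar skills and jobs in its results.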

17 of 32

Main Takeaways

  • Reliance: Being too confident in ML models can lead to unintended results

  • Barriers: Bias and overconfidence in ML models can cause barriers in the workforce

  • Design: Unconscious bias of designers or unrepresentative datasets can lead to bias

18 of 32

Some Types of Machine Bias

  • Algorithmic: Bias within the algorithm processing the data itself.

  • Sample: Bias within the sample of data being processed.
    • Reporting: A non-representative sample compared to the real world.

  • Social: Individual norms leading to biased perceptions of data.
    • Implicit: Biases within the individuals creating the algorithm.
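Reporting bias can be made concrete by comparing a sample's group shares against the population it is meant to represent. All proportions below are invented for illustration:

```python
# Toy illustration of reporting/sample bias: the training sample's group
# shares differ from the real-world population it should represent.
# All proportions here are invented for illustration only.

real_world = {"lighter_skin": 0.55, "darker_skin": 0.45}
training_sample = {"lighter_skin": 0.90, "darker_skin": 0.10}

def representation_gaps(sample, population):
    """How far each group's share in the sample is from its real share."""
    return {g: sample[g] - population[g] for g in population}

gaps = representation_gaps(training_sample, real_world)
for group, gap in gaps.items():
    print(f"{group}: sample share off by {gap:+.2f}")
```

A model trained on this sample sees nine times more lighter-skin examples than darker-skin ones, even though the real-world split is nearly even.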

19 of 32

Examples of Bias in ML

  • Resume Screening: In 2018, Amazon discovered its resume-screening AI was biased against women.

  • Facial Recognition: Darker skin tones face greater bias in online proctoring, Face ID, and law enforcement surveillance.

  • Translation: A 2017 example of Google translating from Turkish (which has gender-neutral pronouns) to English; since updated to reflect both masculine and feminine pronouns.

20 of 32

  • Choose the right learning model based on the dataset.
    • Have equitable representation in supervised learning.
    • Assess for biases in unsupervised learning.

  • Have more representation/diversity within the AI field and in datasets.

How do we mitigate bias?
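The "equitable representation" step above could be sketched as a simple pre-training audit. The dataset, field name, and values here are hypothetical:

```python
# Sketch of an equitable-representation check before training.
# The dataset, the "gender" field, and its values are hypothetical.
from collections import Counter

def representation_report(dataset, group_key):
    """Return each group's share of the dataset for the given field."""
    counts = Counter(row[group_key] for row in dataset)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Toy labeled dataset: resumes with a hiring label and a demographic field.
dataset = [
    {"gender": "woman", "hired": 1},
    {"gender": "man", "hired": 1},
    {"gender": "man", "hired": 0},
    {"gender": "man", "hired": 1},
]

report = representation_report(dataset, "gender")
print(report)  # women make up only 25% of this training set
```

Running a report like this before training makes skewed datasets visible, so designers can rebalance or collect more data rather than bake the skew into the model.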

21 of 32

New in Machine Learning and AI

Image Generation

22 of 32

Observation Sheet

Laptop/Phone

Materials

23 of 32

Activity

Goal: Explore generative AI and discover any underlying biases

Try one of the Observation Sheet prompts! Describe what is generated and any thoughts you have.

Try coming up with a prompt of your own! Fill out your Observation Sheet and share with your peers.

Share: We will share our observations as a class in about 10 minutes!

24 of 32

  • What did you see from the given prompts? What about the personal prompts you put in?

  • Did anything surprise you about what images came up?

Let’s Share!

25 of 32

  • What did you see from the given prompts?

  • Did anything surprise you about what images came up?

  • Any indications of bias from this prompt?

Wedding Dress Prompt

26 of 32

  • What did you see from the given prompts?

  • Did anything surprise you about what images came up?

  • Any indications of bias from this prompt?

Hairstyles Prompt

27 of 32

  • What did you see from the given prompts?

  • Did anything surprise you about what images came up?

  • Any indications of bias from this prompt?

Software Engineer Prompt

28 of 32

  • What did you see from the given prompts? What about the personal prompts you put in?

  • Did anything surprise you about what images came up?

Let’s Share!

29 of 32

  • Today, We Learned About
    • Machine Learning & Bias
      • Supervised Learning Models
      • Types of Bias in Machine Learning
      • Mitigation Strategies for Machine Bias
  • This Was Just One Topic of Social Computing, a Field Spanning:
    • Design,
    • Social Science, &
    • Computer Science!

  • Learn More: https://social.cs.washington.edu/

Recap

30 of 32

I Can...

  1. Identify biases that are prevalent in current machine learning technology

  2. Explain how supervised learning is used to train machine learning models

  3. Understand the issues that can arise from biases in machine learning models

31 of 32

Survey Link

bit.ly/uwsfl-survey

32 of 32

Social Futures Lab

Thanks for Tuning In!

UW Allen School of Computer Science & Engineering


Please Complete a Survey to Help Out With Our Research Study

Survey Link: bit.ly/uwsfl-survey