Explaining Algorithms: A day with some answers

See quiz on humanaiclass.org

Human-AI Interaction

Chinmay Kulkarni and Mary Beth Kery

Fall 2019, Human-Computer Interaction Institute, Carnegie Mellon University


Previously…

  • Different explanations are useful for different questions
    • Can you just write an explanation without knowing the question?
    • How do you know what questions people have?
  • Can we always give people an explanation?


From Week 1: Start at the End … 

  • Ask what the goals are for different stakeholders
  • Ask what questions they may have
    • What answers can we give?
    • What do we do when we can’t give answers?

[Diagram: Training Data (experience) → Machine Learning Algorithm (task) → Model; New Example → Model → ??]
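A minimal sketch of this pipeline in code (the dataset and model below are illustrative stand-ins, not from the course):

```python
# The diagram as code: training data -> learning algorithm -> model -> prediction.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)                    # training data (experience)
model = LogisticRegression(max_iter=1000).fit(X, y)  # ML algorithm (task) produces a model
new_example = X[:1]                                  # a new example arrives
print(model.predict(new_example))                    # the "??": the model's answer
```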


Today...

Are there principled approaches to creating explanations?

Are there trade-offs?

How does explainability intersect with accuracy and fairness?

Meanwhile, look for AI explanations in the products you use.


Is this graph accurate?


Why bother with explanations/interpretations?

  • Algorithms can be wrong on examples that human operators get right
  • Algorithms can be trained on outdated data (and may have no way to handle “distributional drift”)
  • Explainable algorithms can further science
  • People are happier if they feel like they know what’s going on (even if they can’t control it)


We care about the performance of the AI + the human using it.

A less performant model may yield higher overall accuracy once you include the human operator (e.g., because its mistakes are easier to spot and correct).
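A toy calculation makes this concrete; every number below is invented purely for illustration, and error_catch_rate is an assumption about how well an operator can audit each model:

```python
# Toy model of human + AI team accuracy. All numbers are invented;
# "error_catch_rate" is an assumed rate at which the human operator
# notices and fixes the model's mistakes.

def team_accuracy(model_acc, error_catch_rate):
    # Errors the operator catches get corrected, so they count as right.
    return model_acc + (1 - model_acc) * error_catch_rate

opaque = team_accuracy(0.95, error_catch_rate=0.10)       # hard to audit
explainable = team_accuracy(0.90, error_catch_rate=0.60)  # easy to audit

print(f"opaque model + human:      {opaque:.3f}")       # 0.955
print(f"explainable model + human: {explainable:.3f}")  # 0.960
```

Under these (assumed) catch rates, the standalone-worse model wins as a team.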


Ways to explain

Questions:

  1. Why did you do that?
  2. Why not something else?
  3. When do you succeed?
  4. When do you fail?
  5. When can I trust you?
  6. How do I correct an error?

Hint: map questions to combinations of answers, e.g. question 1 → [A+B]. (A code sketch of two of these answer modes follows this list.)

Answers:

  A. Simulation: “run through my decision process”
  B. Decomposition: “here’s the information I used to make the decision”
  C. Algorithmic transparency: “here’s math that proves this algorithm will work, given large enough data”
  D. Simplification: “my model is too complex to represent, but a simplified version is as follows…”
  E. Examples: “here are five other examples that are similar to this one and that I classified the same way”
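As a deliberately minimal sketch, here is roughly what answers D and E might look like in code, assuming scikit-learn; the dataset, models, and Euclidean similarity are illustrative choices, not part of any prescribed method:

```python
# Minimal sketch of two answer modes, assuming scikit-learn.
# D. Simplification: approximate a complex model with a small surrogate tree.
# E. Examples: retrieve similar cases the model classified the same way.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target
complex_model = RandomForestClassifier(random_state=0).fit(X, y)

# D. Train a depth-2 tree on the complex model's *own predictions*, so the
# tree is a human-readable approximation of the complex model's behavior.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, complex_model.predict(X))
print(export_text(surrogate, feature_names=list(data.feature_names)))

# E. "Here are five other examples similar to this one, classified the same way."
query = X[:1]
neighbors = NearestNeighbors(n_neighbors=5).fit(X)
_, idx = neighbors.kneighbors(query)
print("prediction for query:", complex_model.predict(query)[0])
print("similar training rows:", idx[0],
      "predicted as", complex_model.predict(X[idx[0]]))
```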


Combining multiple explanation modes

[Figure: an explanation combining three modes: Decomposition, Simplification, and Simulation]


Knowing the underlying algorithm is also useful


How do you correct for bad predictions?
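A hedged illustration of why the algorithm matters here: if you know the model is nearest-neighbor-based, one way to correct a bad prediction is simply to add the corrected example to the training data. The model choice and data below are invented for illustration:

```python
# Correcting a bad prediction when you know the algorithm is 1-nearest-neighbor.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = KNeighborsClassifier(n_neighbors=1).fit(X, y)

bad_point = np.array([[1.4]])
print(model.predict(bad_point))   # [0] -- suppose this is wrong

# Because we know how k-NN works, the fix is obvious: supply the corrected
# label as a new training example and refit.
X2 = np.vstack([X, bad_point])
y2 = np.append(y, 1)
model = KNeighborsClassifier(n_neighbors=1).fit(X2, y2)
print(model.predict(bad_point))   # [1] -- corrected
```

Without knowing the algorithm, the operator has no idea whether adding an example, relabeling data, or tweaking a threshold is the right lever to pull.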


Design with the mind in mind

  • Work on explanations curiously tends not to focus on how humans actually think
  • When we talk about different explanation methods, ask yourself: how does this support human cognition?
    • Answering this question helps you choose the right explanation mode


Design with the mind in mind -1: How people reason

System 1 thinking: fast, automatic, frequent:

  • determine that an object is at a greater distance than another
  • complete the phrase “war and ...”

Takes work to override. Brains are lazy.

(If you can’t remember which is System 1 vs. System 2, I can’t either -- I google it every time.)

System 2 thinking: slow, conscious, infrequent:

  • “Give me a minute to think about it”
  • park in a tight parking spot
  • write a note asking for a project extension

Takes effort, so people are unable to do it if tired, sleep-deprived, doing a repetitive task, etc.


Design with the mind in mind -2: How people learn


What kind of knowledge is this explanation providing?


What kind of reasoning do we expect post-explanation?


What kind of knowledge would be useful here?


https://www.wired.com/story/self-driving-cars-uber-crash-false-positive-negative/
