1 of 20

What do we do about ChatGPT?

Andrew Jun Lee

PhD Student in Psychology

Reasoning Lab & Computational Vision Lab

2 of 20

What do we do about ChatGPT?


Students will figure it out…?

Ban it!

3 of 20

Lesson learned: Don’t ask ChatGPT for sources!

Lawyer:

“Unaware of the possibility that ChatGPT’s content could be false”

4 of 20

The opportunity and danger of ChatGPT lie in…

Its currently strong conversational ability

What remains unknown to most everyday users, like the lawyer

5 of 20

An Ongoing Debate (Mitchell & Krakauer, 2023)

An illusory ability for intelligence

A “parrot” with no understanding that “haphazardly stitches together sequences of linguistic forms” (Bender et al., 2021)

Look, ma, it’s alive!

A bot that sounds human is not necessarily sentient. A bot that sounds intelligent may be led on to sound that way (Sejnowski, 2023)


7 of 20

False Information and Being Led Astray

Bypassing internal safety blockers by “jailbreaking” ChatGPT with “engineered prompts” like DAN (“Do Anything Now”)

8 of 20

False Information and Being Led Astray

As educators, we are committed not only to teaching students descriptive facts, but also to teaching ways of distinguishing truth from falsehood

  • A capacity whose importance is increasingly discernible against a landscape of “fake news,” “alternative facts,” and conspiratorial beliefs

Guiding students requires an informed position, or at least one as informed as we can realistically be

“Will you ever understand the enigma that I am?”

9 of 20

ChatGPT = A model trained for next-token prediction

Token = a word or sub-word that can serve as a useful semantic unit

Sally put her books in the bookcase

Sally put her books in the ________

Task: Predict the missing token
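The prediction task above can be sketched with a toy bigram model that simply counts which word follows which. This is a drastic simplification (ChatGPT uses a large transformer trained on hundreds of billions of tokens, not word counts), and the tiny corpus below is invented for illustration:

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for ChatGPT's training text
corpus = [
    "sally put her books in the bookcase",
    "sally put her keys in the drawer",
    "sally put her books on the shelf",
]

# For each token, count which tokens follow it (a bigram model)
next_counts = defaultdict(Counter)
for sentence in corpus:
    tokens = sentence.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        next_counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequently observed token after `token`."""
    return next_counts[token].most_common(1)[0][0]

print(predict_next("her"))  # "books": it follows "her" twice, "keys" only once
```

The point of the sketch is only that "prediction" falls out of statistics over seen text; scale and architecture are what separate this toy from ChatGPT.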

10 of 20

ChatGPT = A model driven by attention-like processing

Hypothesized mechanism driving prediction: “Attention” (Rogers et al., 2021)

Sally put her books in the bookcase

Each token pays different amounts of “attention” to previous tokens

11 of 20

ChatGPT = A model driven by attention-like processing

Hypothesized mechanism driving prediction: “Attention” (Rogers et al., 2021)

Sally put her books in the bookcase

Learning to attend to some tokens more and other tokens less in the right ways may constrain possible words down to bookcase

The overall pattern of attention among all words with respect to each other may approximate contextual information
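The attention idea can be sketched in miniature. The 3-dimensional word vectors below are made up for illustration; real models learn separate query/key/value projections over thousands of dimensions:

```python
import math

# Made-up toy word vectors (real embeddings are learned, not hand-written)
embeddings = {
    "Sally":    [0.9, 0.1, 0.0],
    "put":      [0.1, 0.8, 0.1],
    "books":    [0.7, 0.2, 0.6],
    "bookcase": [0.6, 0.1, 0.7],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention_weights(query_word, context_words):
    """How much `query_word` attends to each word in `context_words`."""
    q = embeddings[query_word]
    # Scaled dot-product scores, turned into a probability distribution (softmax)
    scores = [dot(q, embeddings[w]) / math.sqrt(len(q)) for w in context_words]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {w: e / total for w, e in zip(context_words, exps)}

weights = attention_weights("bookcase", ["Sally", "put", "books"])
# "bookcase" attends most to "books", whose toy vector is most similar to its own
```

The weights sum to 1, so each token distributes a fixed budget of "attention" over the preceding tokens, as the slide describes.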

12 of 20

ChatGPT = A model that learns to attend in the right ways

Sally put her books in the ________

Given a task and a measurement of prediction error, we can “train” the model

Make a guess in the first round

Calculate error of guess/prediction

Change model settings accordingly

Repeat

Comprises roughly 3% of GPT-3’s encountered sentences
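The guess/error/update loop above can be sketched in miniature. Here a single weight stands in for the billions of parameters ChatGPT adjusts via backpropagation; all numbers are illustrative:

```python
weight = 0.0          # the model's single adjustable "setting"
target = 1.0          # the ideal prediction score for the correct token
learning_rate = 0.1

for step in range(50):
    guess = weight * 1.0              # 1. make a guess
    error = guess - target            # 2. calculate the error of the guess
    weight -= learning_rate * error   # 3. change the setting to reduce error
                                      # 4. repeat

# After training, the weight has converged close to the target
print(round(weight, 3))
```

Repeating this loop over enormous amounts of text is, in essence, what "training" means on the slide; the real procedure differs only in scale and in how the error signal is propagated through many layers.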

13 of 20

ChatGPT = A model that has acquired intelligence?

The sheer amount of text ChatGPT consumes allows it to capture substantial factual information (Mahowald et al., 2023)

But does ChatGPT have rich conceptual understanding of that information?

ChatGPT may have learned complex but superficial associations between words that skim only the surface of conceptual structure, but the jury is still out (Mitchell & Krakauer, 2023)

But note: A failure to reason like we do does not mean it is a poor model of English grammar, or formal linguistic competence (Mahowald et al., 2023)

14 of 20

Consider this prompt to GPT-3 (ChatGPT’s predecessor)

If GPT-3 has the correct concepts of sofas and houses, we might say:

    • It realized houses are tall, so it needs a ladder
    • It realized sofas are heavy, so it needs someone strong

These are anthropomorphized conclusions contingent on the presence of accurate conceptual knowledge

15 of 20

Consider this prompt to GPT-3 (ChatGPT’s predecessor)

Follow-up prompts reveal that GPT-3’s concepts, if it has any, are at odds with ours:

    • Cutting a hole in the sofa won’t make it fit through a window
    • One typically shouldn’t break the windows of a house!

GPT-3’s learned content hasn’t quite captured accurate semantics

16 of 20

Of course, humans make mistakes too…

So what constitutes a non-human-like error? What is the difference between conceptual misunderstanding and superficial association?

How do we evaluate conceptual knowledge beyond the linguistic output of chatGPT? How do we avoid the limitations of reverse-inference from output to internal content?

Do we need to know what a concept is in order to ascribe its presence or absence in chatGPT? Or will an intuitive understanding do?

For educators, these distinctions need not be settled before we act

So long as chatGPT shows signs of failure in these critical ways, we should dispel any notion of clairvoyance and tread with controlled caution

ChatGPT is neither ground truth nor nonsense

17 of 20

What does this mean for educators and students?

1. It is a mistake to banish ChatGPT into the realm of fads and trends

Khan Academy uses ChatGPT for on-demand help

ChatGPT and politics

Our collective goal:

To determine with students how to deal with ChatGPT’s benefits and shortcomings

18 of 20

What does this mean for educators and students?

2. It isn’t enough for educators to talk about ChatGPT as a cheating tool

What does it mean to be a citizen of a modern era?

  • One must cope with the deluge of information
  • One must distinguish truth from falsehood
  • One must adapt to rapid change
  • One must think independently, empirically, and with a cosmopolitan outlook

19 of 20

What does this mean for educators and students?

3. Develop lesson plans exploring the limitations of ChatGPT

“I’m useful to a fault, but I won’t tell you in what ways!”

The purpose is twofold:

  1. Teach students that ChatGPT is not always right, thereby discouraging misinformed cheating (though not cheating itself)

  2. Teach students that, in general, AI is useful but should not always be trusted

20 of 20

The Takeaway

  • Blazing news headlines are everywhere, but not all of them are true

  • We live in a world of fake news and ambiguous truth

  • We need empirically minded individuals who know when data are trustworthy, insightful, and sensible

  • We need people who are ready to deal with a complex world of AI