✨The Librarian’s Dividend✨
Pathways towards Information Ethics and Literacy in the Age of Generative AI
Casey Fiesler | casey.prof
Image credit: OpenAI’s DALL-E and countless uncredited and uncompensated artists whose work helped train the model
Image credit: Lone Thomasky & Bits&Bäume / https://betterimagesofai.org / CC-BY 4.0
information disorder
is the breakdown of the information ecosystem caused by the spread of misinformation, malinformation, and disinformation
bit.ly/ai-ethics-news
artificial intelligence
is when a machine does something that typically requires human thinking
CAT
DOG
Machine learning finds patterns in and makes predictions based on training data.
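A toy sketch of that idea, not from the talk: a 1-nearest-neighbor rule labels a new animal CAT or DOG from two invented features (weight in kg, ear length in cm). All the numbers and feature choices here are made up purely for illustration.

```python
def nearest_neighbor(training_data, query):
    """Return the label of the training example whose features are closest to the query."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    label, _ = min(
        ((label, dist(features, query)) for features, label in training_data),
        key=lambda pair: pair[1],
    )
    return label

# Invented training data: (weight kg, ear length cm) -> label
training_data = [
    ((4.0, 6.0), "CAT"),
    ((3.5, 7.0), "CAT"),
    ((25.0, 10.0), "DOG"),
    ((30.0, 12.0), "DOG"),
]

print(nearest_neighbor(training_data, (5.0, 6.5)))    # prints CAT
print(nearest_neighbor(training_data, (28.0, 11.0)))  # prints DOG
```

The "pattern" the model finds is just proximity in feature space; its predictions are only as good as the training data it saw.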
Generative AI generates new data based on what’s in its training data.
A large language model is a probability distribution over sequences of words, learned from its training data.
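A minimal sketch of "a probability distribution over sequences of words" (not from the talk, and vastly simpler than a real LLM): count word bigrams in a tiny invented corpus, then estimate the probability of each next word given the previous one.

```python
from collections import Counter, defaultdict

# Tiny invented corpus, purely for illustration
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | word) from the bigram counts."""
    total = sum(counts[word].values())
    return {nxt: c / total for nxt, c in counts[word].items()}

print(next_word_probs("the"))  # prints {'cat': 0.666..., 'mat': 0.333...}
```

Note what the model optimizes: "cat" wins because it is the most frequent continuation in the training data, not because it is true, which is exactly the fluent-but-not-verified behavior the next slides describe.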
Search engines search for information.
Language models generate or make up information (language).
… and their outputs are designed to be statistically probable and linguistically fluent, not verifiably accurate.
@cfiesler
@cfiesler
the liar’s dividend
is the benefit that liars receive from existing in a world in which it is unclear what is true and what is false (and therefore one can claim anything is false)
Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. L. Rev., 107, 1753.
the librarian’s dividend
is the social and ethical benefit that comes from access to people and institutions who can help us (and help teach us how to) evaluate information
AI Literacy
Leo S. Lo. “AI Literacy: A Guide for Academic Libraries.”
Taxonomy of large language model risks:
Weidinger, Laura, et al. "Taxonomy of risks posed by language models." Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 2022.
Privacy
Data Ownership
Socioeconomic & Labor Harms
Environmental Harms
“Machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away; it stands revealed as a mere collection of procedures, each quite comprehensible.”
- Joseph Weizenbaum, 1966
Things everyone should know about AI to be able to think about responsible use:
@cfiesler
@professorcasey
caseyfiesler.com | casey.prof
Professor Casey