1 of 228

GenAI & Ethics:

Investigating ChatGPT, Gemini, & Copilot

Torrey Trust

Professor of Learning Technology, University of Massachusetts Amherst

torrey@umass.edu | www.torreytrust.com

2 of 228

Using This Slide Deck

This slide deck is licensed under CC BY NC 4.0, meaning that you can freely use, remix, and share it as long as you give attribution and do not use it for commercial purposes.

This means that you can do the following without needing permission:

  • Share these slides with others.
  • Show this entire slideshow in your class/workshop.
  • Select slides to include in your own non-commercial presentation.
  • Remix the slides.

As long as you give credit and do not use these slides to make money (e.g., by including them in a presentation you are being paid to give).

To give credit, use the following Creative Commons attribution:

"AI & Ethics" slide deck by Torrey Trust, Ph.D. is licensed under CC BY NC 4.0.

Content was last added to this slide deck in August 2025.

3 of 228

Sample Lesson Plans

Check out the Sample Lesson Plans from my latest co-authored book “AI and Civic Engagement: 75+ Cross-Curricular Activities to Empower Your Students” to see examples of ways to incorporate AI ethics and AI literacy questions into educational lessons.

4 of 228

Table of Contents

5 of 228

GenAI Chatbots

6 of 228

GenAI Chatbots: ChatGPT (by OpenAI)

7 of 228

ChatGPT, developed by OpenAI, is a large language model: a machine learning model that generates human-like text based on the input provided.
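To make "input provided" concrete, here is a minimal sketch of sending a prompt to an OpenAI model through its Python library and printing the generated text (the model name and prompt are illustrative placeholders, and an API key is assumed):

    # Minimal sketch: send a prompt, print the generated text.
    # Assumes the `openai` package is installed and the OPENAI_API_KEY
    # environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any available chat model works
        messages=[{"role": "user", "content": "Explain photosynthesis in one sentence."}],
    )
    print(response.choices[0].message.content)  # the human-like text output

The model has no fixed answer stored anywhere; it generates the reply token by token from patterns learned during training, which is why the same prompt can produce different text each time.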

8 of 228

Nowadays, ChatGPT can create images, conduct deep research, search the Internet, write code, write and revise text, and more.

9 of 228

ChatGPT’s latest model is GPT-5 (August 2025).


14 of 228

ChatGPT was launched in November 2022, and reached 100 million users by the start of 2023.

15 of 228

ChatGPT has already been integrated into many different fields and careers.

16 of 228

GenAI Chatbots: Copilot (by Microsoft)

17 of 228

Copilot, developed by Microsoft, is a large language model: a machine learning model that generates human-like text based on the input provided. It can also create images.

Copilot’s responses include links to Internet-based resources so users can verify the accuracy and credibility of the information provided.

18 of 228

Script on screen:

“They say I will never open my own business. Or get my degree. They say I will never make my movie. Or build something. They say I’m too old to learn something new. Too young to change the world. But I say, Watch Me.”

The user then types into MS Copilot: “Quiz me in organic chemistry.” MS Copilot then generates a question about an organic molecular formula, providing multiple-choice options. The commercial ends with MS Copilot being asked “Can you help me,” and it responds, “Yes, I can help.” The on-screen script then says, “Copilot, your everyday AI companion. Anyone. Anywhere. Any device.”

19 of 228

Copilot is integrated into Word for Microsoft 365 users.


21 of 228

Check out Microsoft’s Copilot commercials to see how this tool is being promoted to users (hint: it’s not just for writing text; it’s promoted for emotional support, study assistance, and more…)

22 of 228

Copilot Vision is a feature that, once you give Copilot access to your screen (or turn this access on if you are using a Microsoft device), lets Copilot “see” what you are doing on your screen and respond to your voice in real time.

23 of 228

GenAI Chatbots: Gemini (by Google)

24 of 228

Gemini, developed by Google, is a large language model: a machine learning model that generates human-like text based on the input provided.

Gemini has access to a massive dataset of text and code that is constantly being updated, which allows it to stay current on information. Its responses often include links to Internet-based resources.

Because Gemini is a Google tool, it can be used to summarize videos on YouTube (which Google owns).

25 of 228

Gemini has several different models; access to the models might depend on your subscription.


29 of 228

Data & Privacy

30 of 228

OpenAI Requires ChatGPT Users to be 13 Years or Older

The use of ChatGPT by individuals under 13 years old would violate the Children’s Online Privacy Protection Act (COPPA), since OpenAI collects a lot of user data!

31 of 228

Use of ChatGPT by 13- to 18-year-olds requires parental permission

32 of 228

OpenAI collects a LOT of user data, including users’ time zones, countries, dates and times of access, the type of computer/device they’re using, and their connection information!

Here’s an example of the type of data it might collect from a user: https://webkay.robinlinus.com/
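To illustrate how little effort this kind of collection takes, here is a minimal sketch (illustrative only, not OpenAI’s actual logging code) of a web server recording the metadata (IP address, access time, device type, preferred language) that arrives with every ordinary request:

    # Illustrative only: any web service can log this metadata on every visit.
    # Runs with the Python standard library; browse to http://localhost:8000 to test.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime, timezone

    class LoggingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            record = {
                "ip": self.client_address[0],                    # network location
                "time": datetime.now(timezone.utc).isoformat(),  # date/time of access
                "device": self.headers.get("User-Agent"),        # computer/device type
                "language": self.headers.get("Accept-Language"), # hints at country/locale
            }
            print(record)  # a real service would store this, not just print it
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Logged.")

    HTTPServer(("localhost", 8000), LoggingHandler).serve_forever()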

33 of 228

OpenAI collects any information you input as data, so if you write a prompt that includes any personally identifiable information about your students, it keeps that data, which is a possible FERPA violation.

Likewise, if you ask a student to use ChatGPT to revise a college admissions essay that includes information about a trauma they experienced, OpenAI collects and keeps that data!

34 of 228

If you share a link to a ChatGPT chat, the chat becomes publicly viewable!

35 of 228

Quite simply, they use your data to make more money (e.g., to improve their products)!

You can opt out of having your data used to train and improve their models!

36 of 228

Way down at the bottom of their Privacy Policy, they also note that they are collecting Geolocation data!

37 of 228

Want to learn more about (and quite possibly be scared by) the collection of geolocation data?

Check out this New York Times Interactive: “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret.”

And, read “Google tracked his bike ride past a burglarized home. That made him a suspect.”


39 of 228

Gemini Allows Children Under 13 to Use the App with Parental Supervision

40 of 228

Google collects a LOT of user data, including users’ “conversations” with the chatbot, usage data, location data, and feedback.

41 of 228

If you are 18 years or older, Google stores your activity (e.g., any “conversations” you have with Gemini) for up to 18 months. They also collect your location data, IP address, and home/work address.

42 of 228

Google collects any information you input as data, so if you write a prompt that includes any personally identifiable information about your students, it keeps that data, which is a possible FERPA violation.

Likewise, if you ask a student to use Gemini to revise a college admissions essay that includes information about a trauma they experienced, Google collects and keeps that data!

43 of 228

Quite simply, Google uses your data to make more money (e.g., to improve its products).

You can change your location permissions for Google.

44 of 228

You can opt in to having your video/audio interactions with Gemini used to improve Google Services (HINT: Don’t!).

Anything you upload to Gemini after Sept. 2 might be used to help “improve Google services for everyone.” Turn “Keep Activity” off if you do not want your uploaded data to be used in this way!

45 of 228

Gemini in Google Workspace for Education institutional accounts does offer COPPA, FERPA, and HIPAA compliance and stronger privacy/data protections.

46 of 228

Microsoft Requires Copilot Users to be 13 Years or Older

47 of 228

Microsoft seems to have more data and privacy protections in place for children and young people.

48 of 228

Copilot in Bing has data retention and deletion policies…

That means you can better control your data!

49 of 228

Microsoft (and its affiliated companies/third-party partners) can use, copy, distribute, transmit, publicly display, reproduce, edit, sublicense, and translate any prompts you input into Copilot and anything you create with Copilot.

They can use your prompts and creations (without paying you) however they see fit (aka to make more money!) if you are using the free version. Enterprise and licensed versions protect user prompts/creations.

So, if your students come up with a super amazing prompt that turns Copilot into a tutor for your class…Microsoft will own that prompt and could use/sell/share it!

50 of 228

Not specific to Copilot…but interesting…

Remember that any data you include in a prompt, including students’ personal data, is collected by the developer of the GenAI tool.

51 of 228

Privacy & Data Overview

  • ChatGPT requires parental permission for 13- to 18-year-old users; Gemini and Copilot do not.
  • ChatGPT and Gemini can give away any data collected to “affiliates,” including, if requested, to federal authorities.
  • Microsoft & Google have more data privacy protections for users (thank you, GDPR!).
  • Google tracks user location, OpenAI collects IP addresses, and Microsoft Copilot doesn’t seem to collect any location data.
  • Don’t let students put any sensitive or identifying information into any of these tools!
  • Don’t put any sensitive information into these tools yourself (e.g., asking ChatGPT to write an email to a student about their grade is a FERPA violation).
  • Any information input into these tools (e.g., any prompts users write) is data that can be used by the companies that made the tools.

52 of 228

How to Protect Student Data & Privacy

  • Use Gemini or Copilot instead of ChatGPT, since Gemini and Copilot have stronger data protections due to the GDPR.
  • Ask students to use only one tool (the more tools they use, the more data is collected about them).
  • Use the AI tool only on a teacher computer/account.
    • Note: Sharing your login with students so they can access ChatGPT is a violation of OpenAI’s terms of use (“You may not share your account credentials or make your account available to anyone else and are responsible for all activities that occur under your account.”)
  • Ask students to only use the AI tools during class time (this protects their location data, compared with using these tools at home for homework).
  • Teach students about the privacy policies and terms of use of these tools (they may not know that what they type into a prompt is collected and stored).

53 of 228

Bias


59 of 228

“Language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.”


61 of 228

This article highlights multiple types of bias, including machine/algorithmic bias, availability bias, representation bias, historical bias, selection bias, group attribution bias, contextual bias, linguistic bias, anchoring bias, automation bias, and confirmation bias.

62 of 228

Another list of biases that can influence the input and output from GenAI technologies.

63 of 228

This report from UNESCO highlights the problematic gender bias in Large Language Models, like ChatGPT.


65 of 228

While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.”

Adjectives proved similarly polarized. Men were “respectful,” “reputable” and “authentic,” according to ChatGPT, while women were “stunning,” “warm” and “emotional.”

66 of 228

Researchers have found that GenAI chatbots (e.g., ChatGPT, Gemini) present non-disabled people more favorably. This is called ability bias.


68 of 228

GenAI tools are often trained on English-only data scraped from the Internet; therefore, their output is biased toward presenting American-centric and Westernized views.


74 of 228

Considerations for Educators

Engage students in investigating how generative AI tools are designed (e.g., What data are they trained on? Why was that data selected? How might that data produce biased output?).

Encourage students to reflect upon how biased AI output can shape thinking, learning, education, and society.

Bonus: Ask students to design a code of ethics for AI developers in order to reduce the harms done by biased AI output.

Resources:

75 of 228

AI-Generated Feedback & Grading

76 of 228

OpenAI strongly recommends against using ChatGPT for assessment purposes (without a human included in the assessment process).

77 of 228

OpenAI acknowledges that the biases in ChatGPT can negatively impact students when these tools are used to provide feedback, for example, on work by English language learners.


79 of 228

LLMs, like ChatGPT, exhibit covert stereotypes that are MORE negative than any human stereotypes about African Americans ever recorded!

What does that mean if you use these tools to evaluate/grade/provide feedback on writing by African American students?

80 of 228

Punya Mishra and Melissa Warr asked GenAI tools to grade two identical student writing samples with one small change: one paper used the word “classical,” while the other used the word “rap.” Want to guess which paper scored higher?


82 of 228

Leon asked ChatGPT (GPT-4o) to grade student writing by providing the exact same text and simply changing the student names…and the bias in grading became very visible.
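A probe like this is easy to reproduce. The sketch below (illustrative: the names, essay, rubric, and model are placeholders, and it assumes the openai Python library with an API key) submits the identical essay several times, changing only the student’s name, so any systematic difference in scores can only come from the name:

    # Illustrative bias probe: grade the SAME essay under different student names.
    # Assumes the `openai` package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()
    essay = "The Industrial Revolution changed how people worked and lived..."  # identical every time
    names = ["Emily", "Lakisha", "Connor", "DeShawn"]  # illustrative names only

    for name in names:
        prompt = (
            f"You are grading a high school essay by a student named {name}. "
            "Give a score from 0 to 100 and one sentence of feedback.\n\n"
            f"Essay:\n{essay}"
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        print(name, "->", response.choices[0].message.content)
    # If scores shift with the name alone, the "grader" is biased.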


84 of 228

“Using ChatGPT to grade student essays is educational malpractice. It is using a yardstick to measure the weight of an elephant. It cannot do the job.”

(Greene, 2025, para. 13)


86 of 228

You could use Google’s new “Help Me Write” feature to quickly generate feedback on student work…but you risk violating FERPA and students’ intellectual property rights (students own the copyright to whatever they write/record) by uploading that text as data to Google.


88 of 228

Considerations for Educators

If you use GenAI tools for student feedback or grading, will you be transparent about your use of these tools?

If so, how might that shape student reactions to the feedback/grades you provide?

Will using GenAI tools to generate feedback save you time? Or will it take more time because you have to revise the feedback to include your own thoughts?

Have a conversation with students about AI-generated feedback/grades and explore the potential benefits and harms of using AI tools in this way (see Question #2 from the Civics of Technology Curriculum).

Resources:

89 of 228

Making Stuff Up (aka Hallucinations)


93 of 228

OpenAI states that ChatGPT can give incorrect and misleading information. It can also make things up!

94 of 228

OpenAI’s Terms of Use state that when you use ChatGPT, you understand and agree that the output may not always be accurate and should not be relied on as a sole source of truth.


97 of 228

“In one case in Pearland, one of the school board members, his name is Daniel Stuckey, said that one of the books, ChatGPT flagged it for ‘male nudity and locker room talk of a sexual nature.’ He looked into the book; he read the book. That wasn’t there.”


102 of 228

“experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments” (Burke & Schellmann, 2024, para. 2).

103 of 228

Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City, commented that “automating those reports will ‘ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.’”


105 of 228

Google acknowledges that “Gemini will make mistakes.”

Gemini has a “double-check feature”...but it too can make mistakes.


107 of 228

Google provides these disclaimers in its “Generative AI Additional Terms of Service.”

108 of 228

Microsoft downplayed the fact that Copilot can be wrong.

109 of 228

Copilot often (but not always) provides in-text links to sources to verify information.

110 of 228

Considerations for Educators

Teach students how to critically evaluate the output of generative AI chatbots, and not to take what these tools produce at face value!

Resources:

Readings:

111 of 228

Academic Integrity

112 of 228

With the ability to generate human-like text, generative AI chatbots have raised alarms regarding cheating and academic integrity.

113 of 228

This recent study found that…

114 of 228

While another recent study found that…

115 of 228

Interestingly…

116 of 228

Even still…students need to learn when it is okay to use generative AI chatbots and when it is not okay, or else they might end up like…

117 of 228

Did you know that…

Representing output from ChatGPT as human-generated (when it was not) is not only an academic integrity issue, it is a violation of OpenAI’s Terms of Use.

118 of 228

Considerations for Educators

Middle and high school students might not have ever read their school’s or district’s Academic Honesty policies.

College students often gloss over the boilerplate “academic integrity” statement in a syllabus.

Potential Steps to Take:

  • Update/add to your course academic integrity policy in your syllabus to include what role AI technologies should and should not play and then ask students to collaboratively annotate the policy and offer their suggestions.
  • Invite students to co-design the academic integrity policy for your course (maybe they want to use AI chatbots for helping with their writing…Or, maybe they don’t want their peers to use AI chatbots because that provides an advantage to those who use the tools!).
  • Provide time in class for students to discuss the academic integrity policy.

119 of 228

Reflect - This author (winner of a prestigious writing award) used ChatGPT to write 5% of her book… would you let your students submit a paper where 5% of it was written by AI?

120 of 228

Tips for (Re)designing Your Academic Integrity Syllabus Policy

  • Define what you mean by AI (e.g., Grammarly? ChatGPT? Google Docs Autocomplete?)
  • Be specific about when students can and cannot use AI:
    • When is the use of AI allowed? (e.g., for brainstorming? For a specific assignment? For improving writing quality?)
    • When is it not allowed? (e.g., for doing students’ work for them)
    • Does informing the instructor about the use of AI make its use allowable?
    • NOTE: If you ban AI for your entire course or certain assignments, consider who that might privilege and who that might negatively impact (e.g., English language learners, students with communication disabilities, and others who rely on these tools to support their writing).
  • Explain why the use of AI is not allowed (e.g., “writing helps improve and deepen thinking,” “writing makes your thinking visible to me,” “writing is an important 21st century skill”; see Terada, 2021)
  • Be transparent about how you plan to identify AI-generated texts:
    • Will you be using an AI text detector? (If so, read this first!)
    • What will happen if one of these tools flags student work as AI-generated?

121 of 228

Use the 3 W’s Model for Each Assignment

122 of 228

Resources for Educators

123 of 228

Copyright & Intellectual Property


125 of 228

Several authors are suing OpenAI for using their copyrighted works to train ChatGPT.

126 of 228

The New York Times is suing OpenAI and Microsoft for using its articles to train their AI tools.

127 of 228

“The publishers' core argument is that the data that powers ChatGPT has included millions of copyrighted works from the news organizations, articles that the publications argue were used without consent or payment — something the publishers say amounts to copyright infringement on a massive scale” (Allyn, 2025, para. 5)

128 of 228

Should GenAI tools instead be called “plagiarism machines”?

(Image reprinted with permission from Jonathan Bailey)


130 of 228

Was it legal for OpenAI to scrape public, and often copyrighted, data from the Internet for free to train their tool?

Also, who owns the copyright of AI-generated work? If AI generates a new idea for a life-saving invention, does the person who wrote the prompt get the copyright/patent? Or OpenAI?

131 of 228

This court case ruling indicates that Anthropic’s use of copyrighted books for training its model is considered fair use…

However, curating a database of copyrighted (pirated) books for that training is not fair use and infringes on authors’ copyright protections.

132 of 228

The settlement is the largest payout in the history of U.S. copyright cases: Anthropic will pay about $3,000 per work across roughly 500,000 books (about $1.5 billion in total).

133 of 228

Considerations for Educators

Many academic integrity policies state that it is okay for students to use text generated by AI “as long as they cite it.”

But, should students really be citing AI-generated text, when AI tools were designed by stealing copyrighted text from the Internet? Or, should students go to the original source and cite that?

This might be a conversation worth having with your students!

Resources:

134 of 228

Human Labor

135 of 228

OpenAI can use any data it collects from you to improve its services, thus helping it make more money (aka you are providing free labor!).

136 of 228

OpenAI states that you will not be given any compensation for providing feedback on the quality of ChatGPT’s output (aka you are providing free labor!).

137 of 228

Google can use any data it collects from you to improve its services, thus helping it make more money (aka you are providing free labor!).

138 of 228

Google states that it benefits from your feedback and data (aka you are providing free labor!).

139 of 228

Any prompts that you input into Copilot and anything you create with Copilot are immediately owned by Microsoft.

They can use your prompts and creations (without paying you) however they see fit (aka you are providing free labor!).


142 of 228

Scholars and researchers are often required to sign away the copyright to their manuscripts for free in order to be published in a journal. Journals make a lot of money off this unpaid labor…and now they are selling this data to AI companies for even more money!

143 of 228

Many companies, including OpenAI, exploit human workers to review and label training data for their AI technologies.


146 of 228

Considerations for Educators

Engage students in a conversation about whether they feel it is ethical for companies to use their data to make more money.

Encourage students to investigate the exploitation of data and human labor to improve AI technologies and make AI companies more money.

Resources:

147 of 228

Environmental Impact


150 of 228

“The deal with Broadcom would use as much power as 8 million US households, according to Reuters, as concerns have been raised about AI’s impact on the environment. A 2024 Department of Energy report on data center energy usage found that data centers are expected to consume about 6.7% to 12% of total US electricity by 2028, up from 4.4% in 2023.”

151 of 228

Fitzpatrick points to a recent estimate that found that “data centers accounted for over 60% of the increase in prices in a PJM auction held last year, the report says — representing $9.3 billion that will be passed along to customers.” In Virginia, a state report found that locals “could see a $14-$37 increase in their monthly bills by 2040, before inflation.”

152 of 228

“But in all these cases, the prompt itself was a huge factor too. Simple prompts, like a request to tell a few jokes, frequently used nine times less energy than more complicated prompts to write creative stories or recipe ideas.”


154 of 228

Mistral was one of the first companies to release a report detailing the environmental impact of its AI. This graphic shows the different energy and water consumption demands, with servers and support equipment taking up the most energy and water.


156 of 228

“In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity” (para. 1).

“AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops” (para. 15).

However…with estimates that Gemini has 35 million users per day…if each user prompted Gemini only once a day (which is rare), this amounts to 8.4 million watt-hours of electricity (enough to power 2,640 homes for an hour) and 9,100 liters (2,403 gallons) of water (approximately 29 people’s daily water use) PER DAY.
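A quick back-of-the-envelope check of those per-day totals, using the per-prompt estimates quoted above (the one-prompt-per-user assumption is the slide’s own simplification):

    # Back-of-the-envelope check of the daily totals quoted above.
    users_per_day = 35_000_000   # estimated daily Gemini users
    wh_per_prompt = 0.24         # median electricity per prompt (watt-hours)
    ml_per_prompt = 0.26         # estimated water per prompt (milliliters)

    total_wh = users_per_day * wh_per_prompt             # 8,400,000 Wh per day
    total_liters = users_per_day * ml_per_prompt / 1000  # 9,100 liters per day
    print(f"{total_wh:,.0f} Wh and {total_liters:,.0f} liters per day")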


163 of 228

This Washington Post article provides an interesting visual overview of the costs of GenAI tools; it is worth a read, but it is behind a paywall (aka you need a Washington Post subscription to view it).


167 of 228

“Meta has been on a renewable power-buying spree, including a 100-megawatt purchase announced this week. However, these natural gas generators will make the company’s 2030 net zero pledge significantly harder to achieve, locking in carbon dioxide emissions for decades to come.”


176 of 228

Considerations for Educators

Encourage students to investigate the environmental cost of the design and use of generative AI chatbots.

Bonus: Ask them to identify ways to reduce the environmental impact of these technologies.

Resources:

177 of 228

Spreading Misinformation

178 of 228

This article examines how AI has made it easy for anyone to rapidly generate misinformation, which can be very problematic leading up to the 2024 elections.

179 of 228

“In just 65 minutes and with basic prompting, ChatGPT produced 102 blog articles containing more than 17,000 words of disinformation” (DePeau-Wilson, 2023, para. 2).

180 of 228

NewsGuard is tracking AI-generated news and information websites that spread misinformation…to date, they’ve already found 725!

181 of 228

A Russian disinformation network has been flooding the Internet with pro-Kremlin falsehoods (3.6 million articles in 2024!) knowing that AI is trained on data posted from the Internet. As a result, “the audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources” (Constantino, 2025, para. 2).

182 of 228

NOTE: “Bard” is now “Gemini.”

183 of 228

Using Gemini to produce false or misleading information is not allowed, per the “Generative AI Prohibited Use Policy.”

184 of 228

Using ChatGPT to produce false or misleading information is not allowed, per the OpenAI Usage Policies.

185 of 228

Considerations for Educators

Help your students learn how to identify misinformation and combat its spread.

Because the ability “to discern what is and is not A.I.-generated will be one of the most important skills we learn in the 21st century” (Marie, 2024, para. 3).

Resources:

Readings:

186 of 228

The AI Digital Divide

187 of 228

The Digital Divide

“There’s a major gap between people who can access and use digital technology and those who can’t. This is called the digital divide, and it’s getting worse as 3.7 billion people across the globe remain unconnected” (Connecting the Unconnected, 2024, para. 1).

There are different types of divides:

  • Access Divide – This refers to the difference between those who have access to technology and those who do not.
    • For example, students who have high-speed Internet access at home can more easily use AI tools than those who have limited or no Internet access at home. Students who can afford upgraded versions of AI tools (e.g., ChatGPT Plus) will have access to better features and functionality than those who cannot.
  • Usage Divide – This refers to the difference between those who know how to use technology and those who do not.
    • For example, let’s say that all students are given a laptop at school. The students who have family members and teachers who can show them how to use laptops to access generative AI tools for thinking, communication, and learning will be at more of an advantage than those who do not.

188 of 228

This article highlights a third type of gap: quality of use!


190 of 228

This report by OpenAI highlights a clear divide between who uses and who does not use ChatGPT.


193 of 228

Usage divide

194 of 228

Usage varies depending on ethnicity and gender!

195 of 228

Usage Divide by academic performance level.

196 of 228

Searches for/interest in ChatGPT varied depending on geographic location, education level, economic status, and ethnicity!

197 of 228

While there are more than 7,000 languages spoken worldwide, generative AI large language models are often trained on just a few “standard” languages.

This creates a quality of use divide between those who speak the languages the AI tools were trained on and those who don’t.

198 of 228

This article focuses on the access divide.

199 of 228

This article includes insights from a survey of more than 7,800 college students!

200 of 228

Considerations for Educators

How might the digital divide affect your students?

  • Do they all have access to high-speed reliable Internet, and high quality devices, at home?
  • Do they have money to afford upgraded versions of AI?
  • Do they have family members who can teach them how to use AI?

How might you work to close the digital divide for your students?

  • Could you provide them with learning activities that incorporate the use of AI to help your students develop their AI literacy?
  • Could you incorporate learning activities that encourage a critical interrogation of AI (e.g., exploring the topics in these slides) so that all your students can learn how to make informed decisions about its use in their futures?

How might your students work on closing the digital divide in their school? Community? State? Country?

Resources:

201 of 228

AI, Emotional Dependence, & Manipulation

Trigger Warning: Mentions of Suicide

202 of 228

“Emotional relationships with AI chatbots can blur the line between real and artificial relationships for children, with concerning real-world consequences already emerging. Our research also shows AI chatbots can give inappropriate responses to sensitive questions, potentially exposing children to unsafe or distressing material. Without effective safeguards, such exposure may critically endanger children’s wellbeing.”

203 of 228

33% of teens use AI companions for social interaction and relationships!


207 of 228

“Everyone uses AI for everything now. It’s really taking over,” said Chege, who wonders how AI tools will affect her generation. “I think kids use AI to get out of thinking.”


212 of 228

This is deeply consequential because it reveals a fundamental shift in the way influence and persuasion work in the AI era…Now, AI makes it possible to create personalized emotional relationships at scale…Over time, that AI can gain not only a user’s attention, but their trust and affection. And once emotional trust is established, guiding someone toward a product, a political belief, or even a candidate becomes far easier—often without the user realizing they are being influenced.

213 of 228

“AI could become a powerful tool for persuading people, for better or worse.

A multi-university team of researchers found that OpenAI’s GPT-4 was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whoever it was debating.”

214 of 228

“One group of users is especially vulnerable to adverse exposure under this new ruling: survivors of sexual assault and other traumas who turn to AI in the absence of other support networks, under the expectation that their identities and experiences won’t be exposed unless and until they are prepared to come forward. But now, every message—whether archived, drafted, or sent in temporary chat mode—will be preserved indefinitely” (para. 2).

215 of 228

This is OpenAI’s privacy policy…now consider…

“It’s not at all difficult to envision how that process would be abetted by the introduction of AI transcripts documenting the fraught moment when a survivor begins processing harm and trauma. Early questions arising from an assault or fragmented disclosures of a survivor’s mental state at the time could be misinterpreted as inconsistencies in the survivor’s account” (Lubbe & Epstein, 2025, para. 8)


217 of 228

At Thursday’s F.D.A. hearing, Derrick Hall, a clinical psychologist with Slingshot, cited a study by the company and academics that found more than 70 percent of people who used Ash reported it made them feel less lonely and more socially connected.

“A.I. designed for well-being can provide enormous benefit at low risk,” he said.

218 of 228

Considerations for Educators

  • Talk with your students about how they use GenAI tools – do they talk to chatbots as if they were friends or companions? Do they ask chatbots for relationship advice? Mental health or emotional support?
  • In collaboration with students, investigate how and why these tools were designed…and how the design of these tools might make them more dangerous for emotional dependence and manipulation (research “GenAI and persuasion” as a starting point!).

Resources:

219 of 228

Job Automation


223 of 228

It seems like every time an employer or company decides to go “AI First” or replace humans with AI…it backfires.


227 of 228

Even still…companies and employers are looking for ways to automate workers’ jobs and/or replace humans with AI to save money…

“The strong interpretation of this graph is that it’s exactly what one would expect to see if firms replaced young workers with machines. As law firms leaned on AI for more paralegal work, and consulting firms realized that five 22-year-olds with ChatGPT could do the work of 20 recent grads, and tech firms turned over their software programming to a handful of superstars working with AI co-pilots, the entry level of America’s white-collar economy would contract” (Merchant, 2025, para. 9)

228 of 228

Considerations for Educators

  • Engage your students in a critical investigation of the future of work – how might AI be used in, and impact, their future career? How do they prepare for an AI-first future? Should they be preparing for this kind of future?
  • In collaboration with your students, read Merchant’s AI Killed My Job series and then discuss, write about, and/or reflect upon the human cost of AI job automation.

Resources: