1 of 220

GenAI & Ethics:

Investigating ChatGPT, Gemini, & Copilot

Torrey Trust

Professor of Learning Technology �University of Massachusetts Amherst

torrey@umass.edu | www.torreytrust.com

2 of 220

Using This Slide Deck

This slide deck is licensed under CC BY NC 4.0, meaning that you can freely use, remix, and share it as long as you give attribution and do not use it for commercial purposes.

This means that you can do the following without needing permission:

  • Share these slides with others.
  • Show this entire slideshow in your class/workshop.
  • Select slides to include in your own non-commercial presentation.
  • Remix the slides.

As long as you give credit and do not use these slides to make money (e.g., including the slides in a presentation in which you are getting paid).

To give credit, use the following Creative Commons attribution:

"AI & Ethics" slide deck by Torrey Trust, Ph.D. is licensed under CC BY NC 4.0.

Content was last added to this slide deck on August 2025.

3 of 220

Sample Lesson Plans

Check out the Sample Lesson Plans from my latest co-authored book “AI and Civic Engagement: 75+ Cross-Curricular Activities to Empower Your Students” to see examples of ways to incorporate AI ethics and AI literacy questions into educational lessons.

4 of 220

Table of Contents

5 of 220

GenAI Chatbots

6 of 220

GenAI Chatbots: ChatGPT (by OpenAI)

7 of 220

ChatGPT, a large language model developed by OpenAI, is a machine learning model that is able to generate human-like text based on the input provided.

8 of 220

Nowadays, ChatGPT can create images, conduct deep research, search the Internet, write code, write and revise text, and more.

9 of 220

ChatGPT’s latest model is GPT-5 (August 2025).

10 of 220

11 of 220

12 of 220

13 of 220

14 of 220

ChatGPT was launched in November 2022, and reached 100 million users by the start of 2023.

15 of 220

ChatGPT has already been integrated into many different fields and careers.

16 of 220

GenAI Chatbots: Copilot (by Microsoft)

17 of 220

Copilot, a large language model developed by Microsoft, is a machine learning model that is able to generate human-like text based on the input provided. It can also create images.

Copilot’s responses include links to Internet-based resources to verify the accuracy and credibility of the information provided.

18 of 220

Script on screen:

“They say I will never open my own business. Or get my degree. They say I will never make my movie. Or build something. They say I’m too old to learn something new. Too young to change the world. But I say, Watch Me.”

Writing on MS Copilot then says: “Quiz me in organic chemistry.” MS Copilot then generates a question about an organic molecular formula, providing multiple choice options. Commercial ends with MS Copilot being asked “Can you help me” and it responds “Yes, I can help.” Screen script then says “Copilot, your everyday AI companion. Anyone. Anywhere. Any device.”

19 of 220

Copilot is integrated into Word for Microsoft 365 users.

20 of 220

21 of 220

Check out Microsoft’s Copilot commercials to see how this tool is being promoted to users (hint: It’s not just for writing text; it’s for emotional support; study assistance and more…)

22 of 220

Copilot Vision is a feature that requires giving Copilot access to your screen (or if you are using a Microsoft device, turning this access on) and Copilot can “see” what you are doing on your screen and respond to your voice in real time.

23 of 220

GenAI Chatbots: Gemini (by Google)

24 of 220

Gemini, a large language model developed by Google, is a machine learning model that is able to generate human-like text based on the input provided.

Gemini has access to a massive dataset of text and code that is constantly being updated, which allows it to stay current on information. Its responses often include links to Internet-based resources.

Because Gemini is a Google tool, it can be used to summarize YouTube (owned by Google) videos.

25 of 220

Gemini has several different models; access to the models might depend on your subscription.

26 of 220

27 of 220

28 of 220

29 of 220

Data & Privacy

30 of 220

OpenAI Requires ChatGPT Users to be 13 Years or Older

The use of ChatGPT by individuals under 13 years old would violate the Children’s Online Privacy Protection Act (COPPA), since OpenAI collects a lot of user data!

31 of 220

Use of ChatGPT by 13-18 year olds requires parental permission

32 of 220

OpenAI collects a LOT of user data, including user’s time zone, country, dates and times of access, type of computer/device you’re using, computer connection!

Here’s an example of the type of data it might collect from a user: https://webkay.robinlinus.com/

33 of 220

OpenAI collects any information you input as data, so if you write a prompt including any personally identifiable information about your students, it keeps that data; and is a possible FERPA violation.

Likewise, if you ask a student to use ChatGPT to revise a college admissions essay that includes information about a trauma they experienced, OpenAI collects and keeps that data!

34 of 220

If you share a link to a ChatGPT chat this becomes publicly viewable!

35 of 220

Quite simply, they use your data to make more money (e.g., improve their products)!

You can opt out of having your data used to improve the way they train their model!

36 of 220

Way down at the bottom of their Privacy Policy, they also note that they are collecting Geolocation data!

37 of 220

Want to learn more (and quite possibly be scared about) the collection of geolocation data?

Check out this New York Times Interactive: “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret

And, read “Google tracked his bike ride past a burglarized home. That made him a suspect.

38 of 220

39 of 220

Gemini Allows Children Under 13 to Use App with Parental Supervision

40 of 220

Google collects a LOT of user data, including user’s “conversations” with the chatbot, usage data, location data, and feedback.

41 of 220

If you are 18 years or older, Google stores your activity (e.g., any “conversations” you have with Gemini) for up to 18 months. They also collect your location data, IP address, and home/work address.

42 of 220

Google collects any information you input as data, so if you write a prompt including any personally identifiable information about your students, it keeps that data; and is a possible FERPA violation.

Likewise, if you ask a student to use Gemini to revise a college admissions essay that includes information about a trauma they experienced, Google collects and keeps that data!

43 of 220

Quite simply, Google uses your data to make more money (e.g., improve their products).

You can change your location permissions for Google.

44 of 220

You can opt-in to having your video/audio interactions with Gemini used to improve Google Services (HINT: Don’t!).

Anything you upload to Gemini after Sept. 2 might be used to help “improve Google services for everyone.” Turn your “Keep Activity” off if you do not want your uploaded data to be used in this way!

45 of 220

Gemini in Google Workspace for Education institutional accounts does offer COPPA, FERPA, and HIPPA compliance and stronger privacy/data protections.

46 of 220

Microsoft Requires Copilot Users to be 13 Years or Older

47 of 220

Microsoft seems to have more data and privacy protections in place for children and young people.

48 of 220

Copilot in Bing has data retention and deletion policies…

That means you can better control your data!

49 of 220

Any prompts that you input into Copilot or anything you create with Copilot, Microsoft (and its affiliated companies/third party partners) can use, copy, distribute, transmit, publicly display, reproduce, edit, sublicense, and translate.

They can use your prompts and creations (without paying you) however they see fit (aka to make more money!) if you are using the free version. Enterprise and licensed versions protect user prompts/creations.

So, if your students come up with a super amazing prompt that turns Copilot into a tutor for your class…Microsoft will own that prompt and could use/sell/share it!

50 of 220

Not specific to Copilot…but interesting…

Remember that any data you include in a prompt, including students’ personal data, is collected by the developer of the GenAI tool.

51 of 220

Privacy & Data Overview

  • ChatGPT requires parental permission for 13-18 year old users, Gemini and Copilot do not.
  • ChatGPT and Gemini can give away any data collected to “affiliates,” including, if requested, to federal authorities.
  • Microsoft & Google have more data privacy protections for users (thank you GDPR!)
  • Google tracks user location, OpenAI collects IP address, Microsoft CoPilot doesn’t seem to collect any location data.
  • Don’t let students put in any sensitive or identifying information into any of these tools!
  • Don’t put any sensitive information in these tools (e.g., asking ChatGPT to write an email to a student about their grade - this is FERPA violation).
  • Any information input into these tools (e.g., any prompts they write) is data that can be used by the companies that made the tools.

52 of 220

How to Protect Student Data & Privacy

  • Use Gemini or Copilot instead of ChatGPT, since Gemini and Copilot have stronger data protections due to the GDPR.
  • Ask students to use only one tool (the more tools they use, the more data is collected about them).
  • Use the AI tool only on a teacher computer/account.
    • Note: Sharing your login with students so they can access ChatGPT is a violation of OpenAI’s terms of use (“You may not share your account credentials or make your account available to anyone else and are responsible for all activities that occur under your account.”)
  • Ask students to only use the AI tools during class time (e.g., this protects their location data; compared to using these tools at home for homework).
  • Teach students about the privacy policies and terms of use of these tools (they may not know that what they type into a prompt is collected and stored).

53 of 220

Bias

54 of 220

55 of 220

56 of 220

57 of 220

58 of 220

59 of 220

“Language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.”

60 of 220

61 of 220

This article highlights multiple types of bias, including machine/algorithmic bias, availability bias, representation bias, historical bias, selection bias, group attribution bias, contextual bias, linguistic bias, anchoring bias, automation bias, and confirmation bias.

62 of 220

Another list of biases that can influence the input and output from GenAI technologies.

63 of 220

This report from UNESCO highlights the problematic gender bias in Large Language Models, like ChatGPT.

64 of 220

65 of 220

While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.”

Adjectives proved similarly polarized. Men were “respectful,” “reputable” and “authentic,” according to ChatGPT, while women were “stunning,” “warm” and “emotional.”

66 of 220

Researchers have found that GenAI chatbots (e.g., ChatGPT, Gemini) present non-disabled people more favorably. This is called ability bias.

67 of 220

68 of 220

GenAI tools are often trained on English-only data scraped from the Internet; therefore, their output is biased toward presenting American-centric and Westernized views.

69 of 220

70 of 220

71 of 220

72 of 220

73 of 220

74 of 220

Considerations for Educators

Engage students in investigating how generative AI tools are designed (e.g., What data they are trained on? Why that data was selected? How will that data produce biased output?).

Encourage students to reflect upon how biased AI output can shape thinking, learning, education, and society.

Bonus: Ask students to design a code of ethics for AI developers in order to reduce the harms done by biased AI output.

Resources:

75 of 220

AI-Generated Feedback & Grading

76 of 220

OpenAI strongly recommends against using ChatGPT for assessment purposes (without a human included in the assessment process).

77 of 220

OpenAI acknowledges that the biases in ChatGPT can negatively impact students when it comes to using these tools for providing feedback; especially, for example, when using the tool to provide feedback on work by English language learners.

78 of 220

79 of 220

LLMs, like ChatGPT, exhibit covert stereotypes that are MORE negative than any human stereotypes about African Americans ever recorded!

What does that mean if you use these tools to evaluate/grade/provide feedback on writing by African American students?

80 of 220

Punya Mishra and Melissa War asked GenAI tools to grade two exact same student writing examples, except with one small change - one paper used the word “classical,” while the other used the word “rap.” Want to guess which paper scored higher?

81 of 220

82 of 220

Leon asked ChatGPT-4o to grade student writing by providing the exact same text and simply changing the student names…and the bias in grading became very visible.

83 of 220

84 of 220

“Using ChatGPT to grade student essays is educational malpractice. It is using a yardstick to measure the weight of an elephant. It cannot do the job”

~(Greene, 2025, para. 13)

85 of 220

86 of 220

You could use Google’s new “Help Me Write” feature to quickly generate feedback on student work…but you risk violating FERPA and student’s intellectual property rights (students own the copyright of whatever they write/record) by uploading that text as data to Google.

87 of 220

88 of 220

Considerations for Educators

If you use GenAI tools for student feedback or grading, will you be transparent about your use of these tools?

If so, how might that shape student reactions to the feedback/grades you provide?

Will using GenAI tools to generate feedback save you time? Or will it take more time because you have to revise the feedback to include your own thoughts?

Have a conversation with students AI-generated feedback/grades and explore the potential benefits and harms of using AI tools in this way (see Question #2 from the Civics of Technology Curriculum).

Resources:

89 of 220

Making Stuff Up (aka Hallucinations)

90 of 220

91 of 220

92 of 220

93 of 220

OpenAI states that ChatGPT can give incorrect and misleading information. It can also make up things!

94 of 220

OpenAI’s Terms of Use states that when you use ChatGPT you understand and agree that the output may not always be accurate and that it should not be relied on as a sole source of truth.

95 of 220

96 of 220

97 of 220

98 of 220

99 of 220

100 of 220

101 of 220

“experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments

(Burke & Schellmann, 2024, para. 2).

102 of 220

Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City, commented that “automating those reports will ‘ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.’”

103 of 220

104 of 220

Google acknowledges that “Gemini will make mistakes.”

Gemini has a “double-check feature”...but it too can make mistakes.

105 of 220

106 of 220

Google provides these disclaimers in its “Generative AI Additional Terms of Service.”

107 of 220

Microsoft downplayed the fact that Copilot can be wrong.

108 of 220

Copilot often (but not always) provides in-text links to sources to verify information.

109 of 220

Considerations for Educators

Teach students how to critically evaluate the output of generative AI chatbots; and not to take what these tools produce at face value!

Resources:

Readings:

110 of 220

Academic Integrity

111 of 220

With the ability to generate human-like text, generative AI chatbots have raised alarms regarding cheating and academic integrity

112 of 220

This recent study found that…

113 of 220

While another recent study found that…

114 of 220

Interestingly…

115 of 220

Even still…students need to learn when it is okay to use generative AI chatbots and when it is not okay, or else they might end up like…

116 of 220

Did you know that…

Representing output from ChatGPT as human-generated (when it was not) is not only an academic integrity issue, it is a violation of OpenAI’s Terms of Use.

117 of 220

Considerations for Educators

Middle and high school students might not have ever read their school’s or district’s Academic Honesty policies.

College students often gloss over the boilerplate “academic integrity” statement in a syllabus.

Potential Steps to Take:

  • Update/add to your course academic integrity policy in your syllabus to include what role AI technologies should and should not play and then ask students to collaboratively annotate the policy and offer their suggestions.
  • Invite students to co-design the academic integrity policy for your course (maybe they want to use AI chatbots for helping with their writing…Or, maybe they don’t want their peers to use AI chatbots because that provides an advantage to those who use the tools!).
  • Provide time in class for students to discuss the academic integrity policy.

118 of 220

Reflect - This author (winner of a prestigious writing award) used ChatGPT to write 5% of her book… would you let your students submit a paper where 5% of it was written by AI?

119 of 220

Tips for (Re)designing Your Academic Integrity Syllabus Policy

  • Define what you mean by AI (e.g., Grammarly? ChatGPT? Google Docs Autocomplete?)
  • Be specific about when students can and cannot use AI:
    • When is the use of AI allowed? (e.g., for brainstorming? For a specific assignment? For improving writing quality?)
    • When is it not allowed? (e.g., for doing students’ work for them)
    • Does informing the instructor about the use of AI make its use allowable?
    • NOTE: If you ban AI for your entire course or certain assignments, consider who that might privilege, and who that might negatively impact (e.g., English language learners, Students with communication disabilities, and others who rely on these tools to support their writing).
  • Explain why the use of AI is not allowed (e.g., “writing helps improve and deepen thinking,” “writing makes your thinking visible to me,” “writing is an important 21st century skill”; see Terada, 2021)
  • Be transparent about how you plan to identify AI-generated texts:
    • Will you be using an AI text detector? (If so, read this first!)
    • What will happen if one of these tools flags student work as AI-generated?

120 of 220

Use the 3 W’s Model for Each Assignment

121 of 220

Resources for Educators

122 of 220

Copyright & Intellectual Property

123 of 220

124 of 220

Several authors are suing OpenAI for using their copyrighted works to train ChatGPT.

125 of 220

The New York Times is suing OpenAI and Microsoft for using its articles to train their AI tools.

126 of 220

“The publishers' core argument is that the data that powers ChatGPT has included millions of copyrighted works from the news organizations, articles that the publications argue were used without consent or payment — something the publishers say amounts to copyright infringement on a massive scale” (Allyn, 2025, para. 5)

127 of 220

Should GenAI tools instead be called “plagiarism machines”??

(Image reprinted with permission from Jonathan Bailey)

128 of 220

129 of 220

Was it legal for OpenAI to scrape public, and often copyrighted, data from the Internet for free to train their tool?

Also, who owns the copyright of AI-generated work. If AI generates a new idea for a life-saving invention, does the person who wrote the prompt get the copyright/patent? Or OpenAI?

130 of 220

This court case ruling indicates that Anthropic’s use of copyrighted books for training its model is considered fair use…

However, curating a database of copyrighted (pirated) books for that training is not fair use and infringes on authors’ copyright protections.

131 of 220

The settlement is the largest payout in the history of U.S. copyright cases. Anthropic will pay $3,000 per work to 500,000 authors.

132 of 220

Considerations for Educators

Many academic integrity policies state that it is okay for students to use text generated from AI, as “long as they cite it.”

But, should students really be citing AI-generated text, when AI tools were designed by stealing copyrighted text from the Internet? Or, should students go to the original source and cite that?

This might be a conversation worth having with your students!

Resources:

133 of 220

Human Labor

134 of 220

OpenAI can use any data it collects from you to improve its services; thus helping it make more money (aka you are providing free labor!).

135 of 220

OpenAI states that you will not be given any compensation for providing feedback on the quality of ChatGPT’s output (aka you are providing free labor!).

136 of 220

Google can use any data it collects from you to improve its services; thus helping it make more money (aka you are providing free labor!).

137 of 220

Google states that it benefits from your feedback and data (aka you are providing free labor!).

138 of 220

Any prompts that you input into Copilot or anything you create with Copilot is immediately owned by Microsoft.

They can use your prompts and creations (without paying you) however they see fit (aka you are providing free labor!).

139 of 220

140 of 220

141 of 220

Scholars and researchers are often required to sign away the copyright of their manuscript for free to be published in a journal. Journals make a lot of money off this unpaid labor …and now they are selling this data to AI companies for even more money!

142 of 220

Many companies, including OpenAI, exploit human workers to review and train data for their AI technologies.

143 of 220

144 of 220

145 of 220

Considerations for Educators

Engage students in a conversation about whether they feel it is ethical for companies to use their data to make more money.

Encourage students to investigate the exploitation of data and human labor to improve AI technologies and make AI companies more money.

Resources:

146 of 220

Environmental Impact

147 of 220

148 of 220

149 of 220

“The deal with Broadcom would use as much power as 8 million US households, according to Reuters, as concerns have been raised about AI’s impact on the environment. A 2024 Department of Energy report on data center energy usage found that data centers are expected to consume about 6.7% to 12% of total US electricity by 2028, up from 4.4% in 2023.”

150 of 220

Fitzpatrick points to a recent estimate that found that “data centers accounted for over 60% of the increase in prices in a PJM auction held last year, the report says — representing $9.3 billion that will be passed along to customers.” In Virginia, a state report found that locals “could see a $14-$37 increase in their monthly bills by 2040, before inflation.”

151 of 220

“But in all these cases, the prompt itself was a huge factor too. Simple prompts, like a request to tell a few jokes, frequently used nine times less energy than more complicated prompts to write creative stories or recipe ideas.”

152 of 220

153 of 220

Mistral was one of the first companies to release a report detailing its environmental impact of AI. This graphic shows the different energy and water consumption demands with servers and support equipment taking up the most energy and water.

154 of 220

155 of 220

“In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity” (para. 1).

“AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops” (para. 15).

However…with estimates that Gemini has 35 million users per day…if each user only prompted Gemini once a day (which is rare), this amounts to 8.4 million watt-hours of electricity (enough to power 2,640 homes for an hour) and 9100 liters (2,403 gallons) of water (approximately 29 people’s daily water use) PER DAY.

156 of 220

157 of 220

158 of 220

159 of 220

160 of 220

161 of 220

162 of 220

This Washington Post article provides an interesting visual overview of the costs of GenAI tools; worth a read, but it is behind paywall (aka you need to have a Washington Post subscription to view).

163 of 220

164 of 220

165 of 220

166 of 220

“Meta has been on a renewable power-buying spree, including a 100-megawatt purchase announced this week. However, these natural gas generators will make the company’s 2030 net zero pledge significantly harder to achieve, locking in carbon dioxide emissions for decades to come.”

167 of 220

168 of 220

169 of 220

170 of 220

171 of 220

172 of 220

173 of 220

174 of 220

175 of 220

Considerations for Educators

Encourage students to investigate the environmental cost of the design and use of generative AI chatbots.

Bonus: Ask them to identify ways to reduce the environmental impact of these technologies.

Resources:

176 of 220

Spreading Misinformation

177 of 220

This article examines how AI has made it easy for anyone to rapidly generate misinformation; and this can be very problematic leading up to the 2024 elections.

178 of 220

“In just 65 minutes and with basic prompting, ChatGPT produced 102 blog articles containing more than 17,000 words of disinformation” (DePeau-Wilson, 2023, para. 2).

179 of 220

NewsGuard is tracking AI-generated news and information websites that spread misinformation…to date, they’ve already found 725!

180 of 220

A Russian disinformation network has been flooding the Internet with pro-Kremlin falsehoods (3.6 million articles in 2024!) knowing that AI is trained on data posted from the Internet. As a result, “the audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources” (Constantino, 2025, para. 2).

181 of 220

NOTE: “Bard” is now “Gemini.”

182 of 220

Using Gemini to produce false or misleading information is not allowed, per the “Generative AI Prohibited Use Policy.”

183 of 220

Using ChatGPT to produce false or misleading information is not allowed, per the OpenAI Usage Policies.

184 of 220

Considerations for Educators

Help your students learn how to identify misinformation and combat the spread of misinformation

Because, the ability “to discern what is and is not A.I.-generated will be one of the most important skills we learn in the 21st century” (Marie, 2024, para.3).

Resources:

Readings:

185 of 220

The AI Digital Divide

186 of 220

The Digital Divide

There’s a major gap between people who can access and use digital technology and those who can’t. This is called the digital divide, and it’s getting worse as 3.7 billion people across the globe remain unconnected” (Connecting the Unconnected, 2024 para. 1).

There are different types of divides:

  • Access Divide – This refers to the difference between those who have access to technology and those who do not.
    • For example, students who have high-speed Internet access at home can more easily use AI tools than those who have limited or no Internet access at home. Students who can afford upgraded versions of AI tools (e.g., ChatGPT Plus) will have access to better features and functionality than those who cannot.
  • Usage Divide – This refers to the difference between those who know how to use technology and those who do not.
    • For example, let’s say that all students are given a laptop at school. The students who have family members and teachers who can show them how to use laptops to access generative AI tools for for thinking, communication, and learning will be at more of an advantage than those who do not.

187 of 220

This article highlights a third type of gap - quality of use!

188 of 220

189 of 220

This report by OpenAI highlights a clear divide between who uses and who does not use ChatGPT.

190 of 220

191 of 220

192 of 220

Usage divide

193 of 220

Usage varies depending on ethnicity and gender!

194 of 220

Usage Divide by academic performance level.

195 of 220

Searches for/interest in ChatGPT varied depending on geographic location, education level, economic status, and ethnicity!

196 of 220

While there are more than 7,000 languages spoken worldwide, generative AI large language models are often trained on just a few “standard” languages.

This creates a quality of use divide between those who speak the languages the AI tools were trained on and those who don’t.

197 of 220

This article focuses on the access divide.

198 of 220

This article includes insights from a survey of more than 7,800 college students!

199 of 220

Considerations for Educators

How might the digital divide affect your students?

  • Do they all have access to high-speed reliable Internet, and high quality devices, at home?
  • Do they have money to afford upgraded versions of AI?
  • Do they have family members who can teach them how to use AI?

How might you work to close the digital divide for your students?

  • Could you provide them with learning activities that incorporate the use of AI to help your students develop their AI literacy?
  • Could you incorporate learning activities that encourage a critical interrogation of AI (e.g., exploring the topics in these slides) so that all your students can learn how to make informed decisions about its use in their futures?

How might your students work on closing the digital divide in their school? Community? State? Country?

Resources:

200 of 220

AI, Emotional Dependence, & Manipulation

201 of 220

“Emotional relationships with AI chatbots can blur the line between real and artificial relationships for children, with concerning real-world consequences already emerging. Our research also shows AI chatbots can give inappropriate responses to sensitive questions, potentially exposing children to unsafe or distressing material. Without effective safeguards, such exposure may critically endanger children’s wellbeing.”

202 of 220

33% of teens use AI companions for social interaction and relationships!

203 of 220

204 of 220

205 of 220

Everyone uses AI for everything now. It’s really taking over,” said Chege, who wonders how AI tools will affect her generation. “I think kids use AI to get out of thinking.”

206 of 220

207 of 220

208 of 220

This is deeply consequential because it reveals a fundamental shift in the way influence and persuasion work in the AI era…Now, AI makes it possible to create personalized emotional relationships at scale…Over time, that AI can gain not only a user’s attention, but their trust and affection. And once emotional trust is established, guiding someone toward a product, a political belief, or even a candidate becomes far easier—often without the user realizing they are being influenced.

209 of 220

“AI could become a powerful tool for persuading people, for better or worse.

A multi-university team of researchers found that OpenAI’s GPT-4 was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whoever it was debating.”

210 of 220

Considerations for Educators

  • Talk with your students about how they use GenAI tools – do they talk to chatbots as if they friends or companions? Do they ask chatbots for relationship advice? Mental health or emotional support?
  • In collaboration with students, investigate how and why these tools were designed…and how the design of these tools might make them more dangerous for emotional dependence and manipulation (research “GenAI and persuasion” as a starting point!).

Resources:

211 of 220

Job Automation

212 of 220

213 of 220

214 of 220

215 of 220

It seems like every time an employer or company decides to go “AI First” or replace humans with AI…it backfires.

216 of 220

217 of 220

218 of 220

219 of 220

Even still…companies and employers are looking for ways to automate workers’ jobs and/or replace humans with AI to save money…

“The strong interpretation of this graph is that it’s exactly what one would expect to see if firms replaced young workers with machines. As law firms leaned on AI for more paralegal work, and consulting firms realized that five 22-year-olds with ChatGPT could do the work of 20 recent grads, and tech firms turned over their software programming to a handful of superstars working with AI co-pilots, the entry level of America’s white-collar economy would contract” (Merchant, 2025, para. 9)

220 of 220

Considerations for Educators

  • Engage your students in a critical investigation of the future of work – how might AI be used in, and impact, their future career? How do they prepare for an AI-first future? Should they be preparing for this kind of future?
  • In collaboration with your students, read Merchant’s AI Killed My Job series and then discuss, write about, and/or reflect upon the human cost of AI job automation.

Resources: