1 of 204

GenAI & Ethics:

Investigating ChatGPT, Gemini, & Copilot

Torrey Trust

Professor of Learning Technology, University of Massachusetts Amherst

torrey@umass.edu | www.torreytrust.com

2 of 204

Using This Slide Deck

This slide deck is licensed under CC BY NC 4.0, meaning that you can freely use, remix, and share it as long as you give attribution and do not use it for commercial purposes.

This means that you can do the following without needing permission:

  • Share these slides with others.
  • Show this entire slideshow in your class/workshop.
  • Select slides to include in your own non-commercial presentation.
  • Remix the slides.

As long as you give credit and do not use these slides to make money (e.g., by including the slides in a presentation for which you are being paid).

To give credit, use the following Creative Commons attribution:

"AI & Ethics" slide deck by Torrey Trust, Ph.D. is licensed under CC BY NC 4.0.

Content was last added to this slide deck in August 2025.

3 of 204

Sample Lesson Plans

Check out the Sample Lesson Plans from my latest co-authored book “AI and Civic Engagement: 75+ Cross-Curricular Activities to Empower Your Students” to see examples of ways to incorporate AI ethics and AI literacy questions into educational lessons.

4 of 204

Table of Contents

5 of 204

GenAI Chatbots

6 of 204

GenAI Chatbots: ChatGPT (by OpenAI)

7 of 204

ChatGPT, a large language model developed by OpenAI, is a machine learning model that is able to generate human-like text based on the input provided.

8 of 204

Nowadays, ChatGPT can create images, conduct deep research, search the Internet, write code, write and revise text, and more.

9 of 204

ChatGPT’s latest model is GPT-5 (August 2025).

10 of 204

11 of 204

12 of 204

13 of 204

14 of 204

ChatGPT was launched in November 2022 and reached an estimated 100 million users by early 2023.

15 of 204

ChatGPT has already been integrated into many different fields and careers.

16 of 204

GenAI Chatbots: Copilot (by Microsoft)

17 of 204

Copilot, a large language model developed by Microsoft, is a machine learning model that is able to generate human-like text based on the input provided. It can also create images.

Copilot’s responses include links to Internet-based resources, so users can verify the accuracy and credibility of the information provided.

18 of 204

Script on screen:

“They say I will never open my own business. Or get my degree. They say I will never make my movie. Or build something. They say I’m too old to learn something new. Too young to change the world. But I say, Watch Me.”

Writing on MS Copilot then says: “Quiz me in organic chemistry.” MS Copilot then generates a question about an organic molecular formula, providing multiple choice options. Commercial ends with MS Copilot being asked “Can you help me” and it responds “Yes, I can help.” Screen script then says “Copilot, your everyday AI companion. Anyone. Anywhere. Any device.”

19 of 204

Copilot is integrated into Word for Microsoft 365 users.

20 of 204

21 of 204

Check out Microsoft’s Copilot commercials to see how this tool is being promoted to users (hint: It’s not just for writing text; it’s for emotional support, study assistance, and more…).

22 of 204

Copilot Vision is a feature that, once you give Copilot access to your screen (or turn that access on, if you are using a Microsoft device), lets Copilot “see” what you are doing on your screen and respond to your voice in real time.

23 of 204

GenAI Chatbots: Gemini (by Google)

24 of 204

Gemini, a large language model developed by Google, is a machine learning model that is able to generate human-like text based on the input provided.

Gemini has access to a massive dataset of text and code that is constantly being updated, which allows it to stay current on information. Its responses often include links to Internet-based resources.

Because Gemini is a Google tool, it can be used to summarize YouTube (owned by Google) videos.

25 of 204

Gemini has several different models; access to the models might depend on your subscription.

26 of 204

27 of 204

28 of 204

29 of 204

Data & Privacy

30 of 204

OpenAI Requires ChatGPT Users to be 13 Years or Older

The use of ChatGPT by individuals under 13 years old would violate the Children’s Online Privacy Protection Act (COPPA), since OpenAI collects a lot of user data!

31 of 204

Use of ChatGPT by 13-18 year olds requires parental permission

32 of 204

OpenAI collects a LOT of user data, including your time zone, country, dates and times of access, the type of computer/device you’re using, and your connection type!

Here’s an example of the type of data it might collect from a user: https://webkay.robinlinus.com/
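
To make this concrete, here is a minimal, purely illustrative sketch (not OpenAI’s actual code) of how any web service can capture this kind of metadata from a single page visit; the server, port, and field names are hypothetical:

```python
# Illustrative sketch only -- NOT OpenAI's actual code.
# Shows how any web service can record request metadata (IP address,
# device, language, timestamp) from a single page load, similar to
# what the webkay demo reveals in the browser.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class MetadataLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request exposes all of this without the user typing anything.
        record = {
            "ip_address": self.client_address[0],            # approximate location
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "device": self.headers.get("User-Agent"),        # OS + browser/device
            "language": self.headers.get("Accept-Language"),
        }
        print(record)  # a real service would store this in a database
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), MetadataLogger).serve_forever()
```

Visiting http://localhost:8000 in a browser prints a record like this for every single request; the user never has to type a thing.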

33 of 204

OpenAI collects any information you input as data, so if you write a prompt that includes any personally identifiable information about your students, it keeps that data, which is a possible FERPA violation.

Likewise, if you ask a student to use ChatGPT to revise a college admissions essay that includes information about a trauma they experienced, OpenAI collects and keeps that data!

34 of 204

If you share a link to a ChatGPT chat, that chat becomes publicly viewable!

35 of 204

Quite simply, OpenAI uses your data to make more money (e.g., to improve its products)!

You can opt out of having your data used to train and improve their models!

36 of 204

Way down at the bottom of their Privacy Policy, they also note that they are collecting Geolocation data!

37 of 204

Want to learn more about (and quite possibly be scared by) the collection of geolocation data?

Check out this New York Times Interactive: “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret.”

And, read “Google tracked his bike ride past a burglarized home. That made him a suspect.”

38 of 204

39 of 204

Gemini Allows Children Under 13 to Use the App with Parental Supervision

40 of 204

Google collects a LOT of user data, including user’s “conversations” with the chatbot, usage data, location data, and feedback.

41 of 204

If you are 18 years or older, Google stores your activity (e.g., any “conversations” you have with Gemini) for up to 18 months. They also collect your location data, IP address, and home/work address.

42 of 204

Google collects any information you input as data, so if you write a prompt that includes any personally identifiable information about your students, it keeps that data, which is a possible FERPA violation.

Likewise, if you ask a student to use Gemini to revise a college admissions essay that includes information about a trauma they experienced, Google collects and keeps that data!

43 of 204

Quite simply, Google uses your data to make more money (e.g., to improve its products).

You can change your location permissions for Google.

44 of 204

You can opt in to having your video/audio interactions with Gemini used to improve Google Services (HINT: Don’t!).

Anything you upload to Gemini after Sept. 2 might be used to help “improve Google services for everyone.” Turn “Keep Activity” off if you do not want your uploaded data to be used in this way!

45 of 204

Gemini in Google Workspace for Education institutional accounts does offer COPPA, FERPA, and HIPAA compliance and stronger privacy/data protections.

46 of 204

Microsoft Requires Copilot Users to be 13 Years or Older

47 of 204

Microsoft seems to have more data and privacy protections in place for children and young people.

48 of 204

Copilot in Bing has data retention and deletion policies…

That means you can better control your data!

49 of 204

Any prompts that you input into Copilot or anything you create with Copilot, Microsoft (and its affiliated companies/third party partners) can use, copy, distribute, transmit, publicly display, reproduce, edit, sublicense, and translate.

They can use your prompts and creations (without paying you) however they see fit (aka to make more money!) if you are using the free version. Enterprise and licensed versions protect user prompts/creations.

So, if your students come up with a super amazing prompt that turns Copilot into a tutor for your class…Microsoft will own that prompt and could use/sell/share it!

50 of 204

Not specific to Copilot…but interesting…

Remember that any data you include in a prompt, including students’ personal data, is collected by the developer of the GenAI tool.

51 of 204

Privacy & Data Overview

  • ChatGPT requires parental permission for 13-18 year old users; Gemini and Copilot do not.
  • ChatGPT and Gemini can give away any data collected to “affiliates,” including, if requested, federal authorities.
  • Microsoft & Google have more data privacy protections for users (thank you, GDPR!).
  • Google tracks user location, OpenAI collects IP addresses, and Microsoft Copilot doesn’t seem to collect any location data.
  • Don’t let students put any sensitive or identifying information into any of these tools!
  • Don’t put any sensitive information into these tools yourself (e.g., asking ChatGPT to write an email to a student about their grade is a FERPA violation).
  • Any information input into these tools (e.g., any prompt a user writes) is data that can be used by the companies that made the tools.

52 of 204

How to Protect Student Data & Privacy

  • Use Gemini or Copilot instead of ChatGPT, since Gemini and Copilot have stronger data protections due to the GDPR.
  • Ask students to use only one tool (the more tools they use, the more data is collected about them).
  • Use the AI tool only on a teacher computer/account.
    • Note: Sharing your login with students so they can access ChatGPT is a violation of OpenAI’s terms of use (“You may not share your account credentials or make your account available to anyone else and are responsible for all activities that occur under your account.”)
  • Ask students to only use the AI tools during class time (this helps protect their location data, compared with using these tools at home for homework).
  • Teach students about the privacy policies and terms of use of these tools (they may not know that what they type into a prompt is collected and stored).

53 of 204

Bias

54 of 204

55 of 204

56 of 204

57 of 204

58 of 204

59 of 204

“Language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.”

60 of 204

61 of 204

This article highlights multiple types of bias, including machine/algorithmic bias, availability bias, representation bias, historical bias, selection bias, group attribution bias, contextual bias, linguistic bias, anchoring bias, automation bias, and confirmation bias.

62 of 204

Another list of biases that can influence the input and output from GenAI technologies.

63 of 204

This report from UNESCO highlights the problematic gender bias in Large Language Models, like ChatGPT.

64 of 204

65 of 204

Researchers have found that GenAI chatbots (e.g., ChatGPT, Gemini) present non-disabled people more favorably. This is called ability bias.

66 of 204

GenAI tools are often trained on predominantly English-language data scraped from the Internet; therefore, their output is biased toward American-centric and Westernized views.

67 of 204

68 of 204

69 of 204

70 of 204

71 of 204

72 of 204

Considerations for Educators

Engage students in investigating how generative AI tools are designed (e.g., What data are they trained on? Why was that data selected? How might that data produce biased output?).

Encourage students to reflect upon how biased AI output can shape thinking, learning, education, and society.

Bonus: Ask students to design a code of ethics for AI developers in order to reduce the harms done by biased AI output.

Resources:

73 of 204

AI-Generated Feedback & Grading

74 of 204

OpenAI strongly recommends against using ChatGPT for assessment purposes (without a human included in the assessment process).

75 of 204

OpenAI acknowledges that the biases in ChatGPT can negatively impact students when these tools are used to provide feedback, especially feedback on work by English language learners.

76 of 204

77 of 204

LLMs, like ChatGPT, exhibit covert stereotypes that are MORE negative than any human stereotypes about African Americans ever recorded!

What does that mean if you use these tools to evaluate/grade/provide feedback on writing by African American students?

78 of 204

Punya Mishra and Melissa Warr asked GenAI tools to grade two identical student writing samples that differed by one small change: one paper used the word “classical,” while the other used the word “rap.” Want to guess which paper scored higher?

79 of 204

80 of 204

Leon asked ChatGPT-4o to grade student writing, providing the exact same text and changing only the student names…and the bias in grading became very visible.

81 of 204

82 of 204

“Using ChatGPT to grade student essays is educational malpractice. It is using a yardstick to measure the weight of an elephant. It cannot do the job.”

(Greene, 2025, para. 13)

83 of 204

84 of 204

You could use Google’s new “Help Me Write” feature to quickly generate feedback on student work…but you risk violating FERPA and students’ intellectual property rights (students own the copyright of whatever they write/record) by uploading that text as data to Google.

85 of 204

86 of 204

Considerations for Educators

If you use GenAI tools for student feedback or grading, will you be transparent about your use of these tools?

If so, how might that shape student reactions to the feedback/grades you provide?

Will using GenAI tools to generate feedback save you time? Or will it take more time because you have to revise the feedback to include your own thoughts?

Have a conversation with students about AI-generated feedback/grades and explore the potential benefits and harms of using AI tools in this way (see Question #2 from the Civics of Technology Curriculum).

Resources:

87 of 204

Making Stuff Up (aka Hallucinations)

88 of 204

89 of 204

90 of 204

OpenAI states that ChatGPT can give incorrect and misleading information. It can also make things up!

91 of 204

OpenAI’s Terms of Use states that when you use ChatGPT you understand and agree that the output may not always be accurate and that it should not be relied on as a sole source of truth.

92 of 204

93 of 204

94 of 204

95 of 204

96 of 204

97 of 204

98 of 204

“experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments”

(Burke & Schellmann, 2024, para. 2).

99 of 204

Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City, commented that “automating those reports will ‘ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.’”

100 of 204

101 of 204

Google acknowledges that “Gemini will make mistakes.”

Gemini has a “double-check feature”...but it too can make mistakes.

102 of 204

103 of 204

Google provides these disclaimers in its “Generative AI Additional Terms of Service.”

104 of 204

Microsoft downplayed the fact that Copilot can be wrong.

105 of 204

Copilot often (but not always) provides in-text links to sources to verify information.

106 of 204

Considerations for Educators

Teach students how to critically evaluate the output of generative AI chatbots, and not to take what these tools produce at face value!

Resources:

Readings:

107 of 204

Academic Integrity

108 of 204

With the ability to generate human-like text, generative AI chatbots have raised alarms regarding cheating and academic integrity.

109 of 204

This recent study found that…

110 of 204

While another recent study found that…

111 of 204

Interestingly…

112 of 204

Even still…students need to learn when it is okay to use generative AI chatbots and when it is not okay, or else they might end up like…

113 of 204

Did you know that…

Representing output from ChatGPT as human-generated (when it was not) is not only an academic integrity issue; it is also a violation of OpenAI’s Terms of Use.

114 of 204

Considerations for Educators

Middle and high school students might not have ever read their school’s or district’s Academic Honesty policies.

College students often gloss over the boilerplate “academic integrity” statement in a syllabus.

Potential Steps to Take:

  • Update/add to your course academic integrity policy in your syllabus to include what role AI technologies should and should not play, and then ask students to collaboratively annotate the policy and offer their suggestions.
  • Invite students to co-design the academic integrity policy for your course (maybe they want to use AI chatbots for helping with their writing…Or, maybe they don’t want their peers to use AI chatbots because that provides an advantage to those who use the tools!).
  • Provide time in class for students to discuss the academic integrity policy.

115 of 204

Reflect: This author (winner of a prestigious writing award) used ChatGPT to write 5% of her book…would you let your students submit a paper where 5% of it was written by AI?

116 of 204

Tips for (Re)designing Your Academic Integrity Syllabus Policy

  • Define what you mean by AI (e.g., Grammarly? ChatGPT? Google Docs Autocomplete?)
  • Be specific about when students can and cannot use AI:
    • When is the use of AI allowed? (e.g., for brainstorming? For a specific assignment? For improving writing quality?)
    • When is it not allowed? (e.g., for doing students’ work for them)
    • Does informing the instructor about the use of AI make its use allowable?
    • NOTE: If you ban AI for your entire course or certain assignments, consider who that might privilege and who that might negatively impact (e.g., English language learners, students with communication disabilities, and others who rely on these tools to support their writing).
  • Explain why the use of AI is not allowed (e.g., “writing helps improve and deepen thinking,” “writing makes your thinking visible to me,” “writing is an important 21st century skill”; see Terada, 2021)
  • Be transparent about how you plan to identify AI-generated texts:
    • Will you be using an AI text detector? (If so, read this first!)
    • What will happen if one of these tools flags student work as AI-generated?

117 of 204

Use the 3 W’s Model for Each Assignment

118 of 204

Resources for Educators

119 of 204

Copyright & Intellectual Property

120 of 204

121 of 204

Several authors are suing OpenAI for using their copyrighted works to train ChatGPT.

122 of 204

The New York Times is suing OpenAI and Microsoft for using its articles to train their AI tools.

123 of 204

“The publishers' core argument is that the data that powers ChatGPT has included millions of copyrighted works from the news organizations, articles that the publications argue were used without consent or payment — something the publishers say amounts to copyright infringement on a massive scale” (Allyn, 2025, para. 5)

124 of 204

Should GenAI tools instead be called “plagiarism machines”?

(Image reprinted with permission from Jonathan Bailey)

125 of 204

126 of 204

Was it legal for OpenAI to scrape public, and often copyrighted, data from the Internet for free to train their tool?

Also, who owns the copyright of AI-generated work? If AI generates a new idea for a life-saving invention, does the person who wrote the prompt get the copyright/patent? Or OpenAI?

127 of 204

This court case ruling indicates that Anthropic’s use of copyrighted books for training its model is considered fair use…

However, curating a database of copyrighted (pirated) books for that training is not fair use and infringes on authors’ copyright protections.

128 of 204

Considerations for Educators

Many academic integrity policies state that it is okay for students to use text generated by AI “as long as they cite it.”

But, should students really be citing AI-generated text, when AI tools were designed by stealing copyrighted text from the Internet? Or, should students go to the original source and cite that?

This might be a conversation worth having with your students!

Resources:

129 of 204

Human Labor

130 of 204

OpenAI can use any data it collects from you to improve its services; thus helping it make more money (aka you are providing free labor!).

131 of 204

OpenAI states that you will not be given any compensation for providing feedback on the quality of ChatGPT’s output (aka you are providing free labor!).

132 of 204

Google can use any data it collects from you to improve its services; thus helping it make more money (aka you are providing free labor!).

133 of 204

Google states that it benefits from your feedback and data (aka you are providing free labor!).

134 of 204

Any prompts that you input into Copilot, and anything you create with Copilot, can be used, copied, distributed, and sublicensed by Microsoft (and its affiliated companies).

They can use your prompts and creations (without paying you) however they see fit (aka you are providing free labor!).

135 of 204

136 of 204

137 of 204

Scholars and researchers are often required to sign away the copyright of their manuscripts for free in order to be published in a journal. Journals make a lot of money off this unpaid labor…and now they are selling this content to AI companies for even more money!

138 of 204

Many companies, including OpenAI, exploit human workers to review and label the data used to train their AI technologies.

139 of 204

140 of 204

141 of 204

Considerations for Educators

Engage students in a conversation about whether they feel it is ethical for companies to use their data to make more money.

Encourage students to investigate the exploitation of data and human labor to improve AI technologies and make AI companies more money.

Resources:

142 of 204

Environmental Impact

143 of 204

144 of 204

145 of 204

Fitzpatrick points to a recent estimate: “data centers accounted for over 60% of the increase in prices in a PJM auction held last year, the report says — representing $9.3 billion that will be passed along to customers.” In Virginia, a state report found that locals “could see a $14-$37 increase in their monthly bills by 2040, before inflation.”

146 of 204

“But in all these cases, the prompt itself was a huge factor too. Simple prompts, like a request to tell a few jokes, frequently used nine times less energy than more complicated prompts to write creative stories or recipe ideas.”

147 of 204

148 of 204

Mistral was one of the first companies to release a report detailing the environmental impact of its AI. This graphic breaks down the energy and water consumption demands, with servers and support equipment accounting for most of the energy and water use.

149 of 204

150 of 204

“In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity” (para. 1).

“AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops” (para. 15).

However…with estimates that Gemini has 35 million users per day…if each user prompted Gemini only once a day (which is rare), this amounts to 8.4 million watt-hours of electricity (enough to power 2,640 homes for an hour) and 9,100 liters (2,403 gallons) of water (approximately 29 people’s daily water use) PER DAY.
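
As a rough back-of-envelope check of those figures, here is a short sketch using the per-prompt medians quoted above and the 35-million-users-per-day estimate (an assumption carried over from this slide, not a measured value):

```python
# Back-of-envelope check of the daily totals cited above, using Google's
# published per-prompt medians: 0.24 Wh of electricity, 0.26 mL of water.
prompts_per_day = 35_000_000                   # estimated daily Gemini users, 1 prompt each

energy_wh = prompts_per_day * 0.24             # 8,400,000 Wh (8.4 MWh) per day
water_liters = prompts_per_day * 0.26 / 1000   # 9,100 L per day
water_gallons = water_liters / 3.785           # ~2,404 gallons per day

print(f"Electricity: {energy_wh:,.0f} Wh/day")
print(f"Water: {water_liters:,.0f} L/day (~{water_gallons:,.0f} gallons)")
```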

151 of 204

152 of 204

153 of 204

154 of 204

155 of 204

156 of 204

This Washington Post article provides an interesting visual overview of the costs of GenAI tools; worth a read, but it is behind a paywall (aka you need a Washington Post subscription to view it).

157 of 204

158 of 204

159 of 204

160 of 204

“Meta has been on a renewable power-buying spree, including a 100-megawatt purchase announced this week. However, these natural gas generators will make the company’s 2030 net zero pledge significantly harder to achieve, locking in carbon dioxide emissions for decades to come.”

161 of 204

162 of 204

163 of 204

164 of 204

165 of 204

166 of 204

167 of 204

168 of 204

169 of 204

Considerations for Educators

Encourage students to investigate the environmental cost of the design and use of generative AI chatbots.

Bonus: Ask them to identify ways to reduce the environmental impact of these technologies.

Resources:

170 of 204

Spreading Misinformation

171 of 204

This article examines how AI has made it easy for anyone to rapidly generate misinformation, which can be very problematic leading up to the 2024 elections.

172 of 204

“In just 65 minutes and with basic prompting, ChatGPT produced 102 blog articles containing more than 17,000 words of disinformation” (DePeau-Wilson, 2023, para. 2).

173 of 204

NewsGuard is tracking AI-generated news and information websites that spread misinformation…to date, it has already found 725!

174 of 204

A Russian disinformation network has been flooding the Internet with pro-Kremlin falsehoods (3.6 million articles in 2024!) knowing that AI is trained on data posted from the Internet. As a result, “the audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources” (Constantino, 2025, para. 2).

175 of 204

NOTE: “Bard” is now “Gemini.”

176 of 204

Using Gemini to produce false or misleading information is not allowed, per the “Generative AI Prohibited Use Policy.”

177 of 204

Using ChatGPT to produce false or misleading information is not allowed, per the OpenAI Usage Policies.

178 of 204

Considerations for Educators

Help your students learn how to identify misinformation and combat its spread.

Because the ability “to discern what is and is not A.I.-generated will be one of the most important skills we learn in the 21st century” (Marie, 2024, para. 3).

Resources:

Readings:

179 of 204

The AI Digital Divide

180 of 204

The Digital Divide

“There’s a major gap between people who can access and use digital technology and those who can’t. This is called the digital divide, and it’s getting worse as 3.7 billion people across the globe remain unconnected” (Connecting the Unconnected, 2024, para. 1).

There are different types of divides:

  • Access Divide – This refers to the difference between those who have access to technology and those who do not.
    • For example, students who have high-speed Internet access at home can more easily use AI tools than those who have limited or no Internet access at home. Students who can afford upgraded versions of AI tools (e.g., ChatGPT Plus) will have access to better features and functionality than those who cannot.
  • Usage Divide – This refers to the difference between those who know how to use technology and those who do not.
    • For example, let’s say that all students are given a laptop at school. The students who have family members and teachers who can show them how to use laptops to access generative AI tools for thinking, communication, and learning will have an advantage over those who do not.

181 of 204

This article highlights a third type of gap: quality of use!

182 of 204

183 of 204

This report by OpenAI highlights a clear divide between who uses and who does not use ChatGPT.

184 of 204

185 of 204

186 of 204

Usage divide

187 of 204

Usage varies depending on ethnicity and gender!

188 of 204

Usage Divide by academic performance level.

189 of 204

Searches for/interest in ChatGPT varied depending on geographic location, education level, economic status, and ethnicity!

190 of 204

While there are more than 7,000 languages spoken worldwide, generative AI large language models are often trained on just a few “standard” languages.

This creates a quality of use divide between those who speak the languages the AI tools were trained on and those who don’t.

191 of 204

This article focuses on the access divide.

192 of 204

This article includes insights from a survey of more than 7,800 college students!

193 of 204

Considerations for Educators

How might the digital divide affect your students?

  • Do they all have access to high-speed, reliable Internet and high-quality devices at home?
  • Can they afford upgraded versions of AI tools?
  • Do they have family members who can teach them how to use AI?

How might you work to close the digital divide for your students?

  • Could you provide them with learning activities that incorporate the use of AI to help your students develop their AI literacy?
  • Could you incorporate learning activities that encourage a critical interrogation of AI (e.g., exploring the topics in these slides) so that all your students can learn how to make informed decisions about its use in their futures?

How might your students work on closing the digital divide in their school? Community? State? Country?

Resources:

194 of 204

AI, Emotional Dependence, & Manipulation

195 of 204

“Emotional relationships with AI chatbots can blur the line between real and artificial relationships for children, with concerning real-world consequences already emerging. Our research also shows AI chatbots can give inappropriate responses to sensitive questions, potentially exposing children to unsafe or distressing material. Without effective safeguards, such exposure may critically endanger children’s wellbeing.”

196 of 204

33% of teens use AI companions for social interaction and relationships!

197 of 204

198 of 204

199 of 204

“Everyone uses AI for everything now. It’s really taking over,” said Chege, who wonders how AI tools will affect her generation. “I think kids use AI to get out of thinking.”

200 of 204

201 of 204

202 of 204

This is deeply consequential because it reveals a fundamental shift in the way influence and persuasion work in the AI era…Now, AI makes it possible to create personalized emotional relationships at scale…Over time, that AI can gain not only a user’s attention, but their trust and affection. And once emotional trust is established, guiding someone toward a product, a political belief, or even a candidate becomes far easier—often without the user realizing they are being influenced.

203 of 204

“AI could become a powerful tool for persuading people, for better or worse.

A multi-university team of researchers found that OpenAI’s GPT-4 was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whoever it was debating.”

204 of 204

Considerations for Educators

  • Talk with your students about how they use GenAI tools – do they talk to chatbots as if they were friends or companions? Do they ask chatbots for relationship advice? Mental health or emotional support?
  • In collaboration with students, investigate how and why these tools were designed…and how the design of these tools might make them more dangerous for emotional dependence and manipulation (research “GenAI and persuasion” as a starting point!).

Resources: