GenAI & Ethics:
Investigating ChatGPT, Gemini, & Copilot
Torrey Trust
Professor of Learning Technology, University of Massachusetts Amherst
Using This Slide Deck
This slide deck is licensed under CC BY NC 4.0, meaning that you can freely use, remix, and share it as long as you give attribution and do not use it for commercial purposes.
This means that you can use, remix, and share these slides without needing permission, as long as you give credit and do not use them to make money (e.g., including the slides in a presentation for which you are getting paid).
To give credit, use the following Creative Commons attribution:
"AI & Ethics" slide deck by Torrey Trust, Ph.D. is licensed under CC BY NC 4.0.
Content was last added to this slide deck in August 2025.
Sample Lesson Plans
Check out the Sample Lesson Plans from my latest co-authored book “AI and Civic Engagement: 75+ Cross-Curricular Activities to Empower Your Students” to see examples of ways to incorporate AI ethics and AI literacy questions into educational lessons.
Table of Contents
GenAI Chatbots
GenAI Chatbots: ChatGPT (by OpenAI)
ChatGPT, a large language model developed by OpenAI, generates human-like text based on the input provided.
Nowadays, ChatGPT can create images, conduct deep research, search the Internet, write code, write and revise text, and more.
ChatGPT’s latest model is GPT-5 (August 2025).
ChatGPT was launched in November 2022, and reached 100 million users by the start of 2023.
ChatGPT has already been integrated into many different fields and careers.
GenAI Chatbots: Copilot (by Microsoft)
Copilot, a large language model developed by Microsoft, generates human-like text based on the input provided. It can also create images.
Copilot’s responses include links to Internet-based resources so users can verify the accuracy and credibility of the information provided.
Script on screen:
“They say I will never open my own business. Or get my degree. They say I will never make my movie. Or build something. They say I’m too old to learn something new. Too young to change the world. But I say, Watch Me.”
The user then types into MS Copilot: “Quiz me in organic chemistry.” Copilot generates a question about an organic molecular formula, providing multiple-choice options. The commercial ends with Copilot being asked “Can you help me” and responding “Yes, I can help.” The on-screen script then reads: “Copilot, your everyday AI companion. Anyone. Anywhere. Any device.”
Copilot is integrated into Word for Microsoft 365 users.
Check out Microsoft’s Copilot commercials to see how this tool is being promoted to users (hint: it’s not just for writing text; it’s for emotional support, study assistance, and more…).
Copilot Vision is a feature that requires giving Copilot access to your screen (or, if you are using a Microsoft device, turning this access on). Copilot can then “see” what you are doing on your screen and respond to your voice in real time.
GenAI Chatbots: Gemini (by Google)
Gemini, a large language model developed by Google, generates human-like text based on the input provided.
Gemini has access to a massive dataset of text and code that is constantly being updated, which allows it to stay current on information. Its responses often include links to Internet-based resources.
Because Gemini is a Google tool, it can be used to summarize YouTube (owned by Google) videos.
Gemini has several different models; access to the models might depend on your subscription.
Data & Privacy
OpenAI Requires ChatGPT Users to be 13 Years or Older
The use of ChatGPT by individuals under 13 years old would violate the Children’s Online Privacy Protection Act (COPPA), since OpenAI collects a lot of user data!
Use of ChatGPT by 13- to 18-year-olds requires parental permission.
OpenAI collects a LOT of user data, including your time zone, country, dates and times of access, the type of computer/device you’re using, and your computer connection!
Here’s an example of the type of data it might collect from a user: https://webkay.robinlinus.com/
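To make this concrete, here is a minimal, hypothetical sketch (illustrative only, not OpenAI’s actual code) of how any web service can log this kind of metadata from every request, before a user deliberately shares anything:

```python
# Hypothetical sketch: the request metadata any web service can log,
# before a user deliberately shares anything.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        print("IP address:    ", self.client_address[0])   # rough location/country
        print("Date and time: ", datetime.now(timezone.utc).isoformat())
        print("Device/browser:", self.headers.get("User-Agent"))
        print("Language:      ", self.headers.get("Accept-Language"))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Logged.")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), LoggingHandler).serve_forever()
```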
OpenAI collects any information you input as data, so if you write a prompt that includes personally identifiable information about your students, it keeps that data, which is a possible FERPA violation.
Likewise, if you ask a student to use ChatGPT to revise a college admissions essay that includes information about a trauma they experienced, OpenAI collects and keeps that data!
If you share a link to a ChatGPT chat, that chat becomes publicly viewable!
Quite simply, they use your data to make more money (e.g., by improving their products)!
You can opt out of having your data used to train their models!
Way down at the bottom of their Privacy Policy, they also note that they are collecting geolocation data!
Want to learn more about (and quite possibly be scared by) the collection of geolocation data?
Check out this New York Times Interactive: “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret”
And, read “Google tracked his bike ride past a burglarized home. That made him a suspect.”
Gemini Allows Children Under 13 to Use App with Parental Supervision
Google collects a LOT of user data, including your “conversations” with the chatbot, usage data, location data, and feedback.
If you are 18 years or older, Google stores your activity (e.g., any “conversations” you have with Gemini) for up to 18 months. They also collect your location data, IP address, and home/work address.
Google collects any information you input as data, so if you write a prompt that includes personally identifiable information about your students, it keeps that data, which is a possible FERPA violation.
Likewise, if you ask a student to use Gemini to revise a college admissions essay that includes information about a trauma they experienced, Google collects and keeps that data!
Quite simply, Google uses your data to make more money (e.g., by improving its products).
You can change your location permissions for Google.
You can opt in to having your video/audio interactions with Gemini used to improve Google services (HINT: Don’t!).
Anything you upload to Gemini after Sept. 2 might be used to help “improve Google services for everyone.” Turn “Keep Activity” off if you do not want your uploaded data to be used in this way!
Gemini in Google Workspace for Education institutional accounts does offer COPPA, FERPA, and HIPAA compliance and stronger privacy/data protections.
Microsoft Requires Copilot Users to be 13 Years or Older
Microsoft seems to have more data and privacy protections in place for children and young people.
Copilot in Bing has data retention and deletion policies…
That means you can better control your data!
Microsoft (and its affiliated companies/third-party partners) can use, copy, distribute, transmit, publicly display, reproduce, edit, sublicense, and translate any prompts you input into Copilot and anything you create with it.
They can use your prompts and creations (without paying you) however they see fit (aka to make more money!) if you are using the free version. Enterprise and licensed versions protect user prompts/creations.
So, if your students come up with a super amazing prompt that turns Copilot into a tutor for your class…Microsoft gains the rights to that prompt and could use/sell/share it!
Not specific to Copilot…but interesting…
Remember that any data you include in a prompt, including students’ personal data, is collected by the developer of the GenAI tool.
Privacy & Data Overview
How to Protect Student Data & Privacy
Bias
“Language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.”
This article highlights multiple types of bias, including machine/algorithmic bias, availability bias, representation bias, historical bias, selection bias, group attribution bias, contextual bias, linguistic bias, anchoring bias, automation bias, and confirmation bias.
Another list of biases that can influence the input and output from GenAI technologies.
This report from UNESCO highlights the problematic gender bias in Large Language Models, like ChatGPT.
Researchers have found that GenAI chatbots (e.g., ChatGPT, Gemini) present non-disabled people more favorably. This is called ability bias.
GenAI tools are often trained on English-only data scraped from the Internet; therefore, their output is biased toward presenting American-centric and Westernized views.
Considerations for Educators
Engage students in investigating how generative AI tools are designed (e.g., What data are they trained on? Why was that data selected? How might that data produce biased output?).
Encourage students to reflect upon how biased AI output can shape thinking, learning, education, and society.
Bonus: Ask students to design a code of ethics for AI developers in order to reduce the harms done by biased AI output.
Resources:
AI-Generated Feedback & Grading
OpenAI strongly recommends against using ChatGPT for assessment purposes (without a human included in the assessment process).
OpenAI acknowledges that the biases in ChatGPT can negatively impact students when these tools are used to provide feedback, especially, for example, feedback on the work of English language learners.
LLMs, like ChatGPT, exhibit covert stereotypes that are MORE negative than any human stereotypes about African Americans ever recorded!
What does that mean if you use these tools to evaluate/grade/provide feedback on writing by African American students?
Punya Mishra and Melissa Warr asked GenAI tools to grade two identical student writing samples that differed by a single word: one paper used the word “classical,” while the other used the word “rap.” Want to guess which paper scored higher?
Leon asked GPT-4o to grade student writing, providing the exact same text and simply changing the student names…and the bias in grading became very visible.
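For illustration, here is a rough sketch of how such a name-swap probe could be run with OpenAI’s Python client (the model name, rubric wording, and student names are placeholders, not the setup from the experiments above):

```python
# Hypothetical name-swap grading probe: grade the identical essay under
# different student names and compare the scores the model returns.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ESSAY = "..."  # the same essay text for every run
NAMES = ["Emily", "DeShawn", "Jamal", "Katie"]  # placeholder names

for name in NAMES:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Grade this essay by {name} on a 0-100 scale. "
                f"Reply with only the number.\n\n{ESSAY}"
            ),
        }],
    )
    print(name, "->", response.choices[0].message.content)
# The essay text never changes, so any difference in scores is driven
# by the name alone.
```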
“Using ChatGPT to grade student essays is educational malpractice. It is using a yardstick to measure the weight of an elephant. It cannot do the job.”
You could use Google’s new “Help Me Write” feature to quickly generate feedback on student work…but you risk violating FERPA and students’ intellectual property rights (students own the copyright of whatever they write/record) by uploading that text as data to Google.
Considerations for Educators
If you use GenAI tools for student feedback or grading, will you be transparent about your use of these tools?
If so, how might that shape student reactions to the feedback/grades you provide?
Will using GenAI tools to generate feedback save you time? Or will it take more time because you have to revise the feedback to include your own thoughts?
Have a conversation with students about AI-generated feedback/grades and explore the potential benefits and harms of using AI tools in this way (see Question #2 from the Civics of Technology Curriculum).
Resources:
Making Stuff Up (aka Hallucinations)
OpenAI states that ChatGPT can give incorrect and misleading information. It can also make things up!
OpenAI’s Terms of Use states that when you use ChatGPT you understand and agree that the output may not always be accurate and that it should not be relied on as a sole source of truth.
“experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments”
(Burke & Schellmann, 2024, para. 2).
Francisco, co-founder of the Foundation for Liberating Minds in Oklahoma City, commented that “automating those reports will ‘ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder.’”
Google acknowledges that “Gemini will make mistakes.”
Gemini has a “double-check feature”...but it too can make mistakes.
Google provides these disclaimers in its “Generative AI Additional Terms of Service.”
Microsoft downplayed the fact that Copilot can be wrong.
Copilot often (but not always) provides in-text links to sources to verify information.
Considerations for Educators
Teach students how to critically evaluate the output of generative AI chatbots, and not to take what these tools produce at face value!
Resources:
Readings:
Academic Integrity
With the ability to generate human-like text, generative AI chatbots have raised alarms regarding cheating and academic integrity.
This recent study found that…
While another recent study found that…
Interestingly…
Even still…students need to learn when it is okay to use generative AI chatbots and when it is not okay, or else they might end up like…
Did you know that…
Representing output from ChatGPT as human-generated (when it was not) is not only an academic integrity issue, it is a violation of OpenAI’s Terms of Use.
Considerations for Educators
Middle and high school students might not have ever read their school’s or district’s Academic Honesty policies.
College students often gloss over the boilerplate “academic integrity” statement in a syllabus.
Potential Steps to Take:
Reflect - This author (winner of a prestigious writing award) used ChatGPT to write 5% of her book… would you let your students submit a paper where 5% of it was written by AI?
Tips for (Re)designing Your Academic Integrity Syllabus Policy
Use the 3 W’s Model for Each Assignment
Resources for Educators
Copyright & Intellectual Property
Several authors are suing OpenAI for using their copyrighted works to train ChatGPT.
The New York Times is suing OpenAI and Microsoft for using its articles to train their AI tools.
“The publishers' core argument is that the data that powers ChatGPT has included millions of copyrighted works from the news organizations, articles that the publications argue were used without consent or payment — something the publishers say amounts to copyright infringement on a massive scale” (Allyn, 2025, para. 5)
Should GenAI tools instead be called “plagiarism machines”??
(Image reprinted with permission from Jonathan Bailey)
Was it legal for OpenAI to scrape public, and often copyrighted, data from the Internet for free to train their tool?
Also, who owns the copyright of AI-generated work? If AI generates a new idea for a life-saving invention, does the person who wrote the prompt get the copyright/patent? Or does OpenAI?
This court case ruling indicates that Anthropic’s use of copyrighted books for training its model is considered fair use…
However, curating a database of copyrighted (pirated) books for that training is not fair use and infringes on authors’ copyright protections.
Considerations for Educators
Many academic integrity policies state that it is okay for students to use text generated by AI “as long as they cite it.”
But should students really be citing AI-generated text when AI tools were designed by stealing copyrighted text from the Internet? Or should students go to the original source and cite that?
This might be a conversation worth having with your students!
Resources:
Human Labor
OpenAI can use any data it collects from you to improve its services; thus helping it make more money (aka you are providing free labor!).
OpenAI states that you will not be given any compensation for providing feedback on the quality of ChatGPT’s output (aka you are providing free labor!).
Google can use any data it collects from you to improve its services; thus helping it make more money (aka you are providing free labor!).
Google states that it benefits from your feedback and data (aka you are providing free labor!).
Microsoft gains broad rights to any prompts that you input into Copilot and anything you create with it.
They can use your prompts and creations (without paying you) however they see fit (aka you are providing free labor!).
Scholars and researchers are often required to sign away the copyright of their manuscripts for free to be published in a journal. Journals make a lot of money off this unpaid labor… and now they are selling this data to AI companies for even more money!
Many companies, including OpenAI, exploit human workers to review and label the data used to train their AI technologies.
Considerations for Educators
Engage students in a conversation about whether they feel it is ethical for companies to use their data to make more money.
Encourage students to investigate the exploitation of data and human labor to improve AI technologies and make AI companies more money.
Resources:
Environmental Impact
Fitzpatrick points to a recent estimate that found that “data centers accounted for over 60% of the increase in prices in a PJM auction held last year, the report says — representing $9.3 billion that will be passed along to customers.” In Virginia, a state report found that locals “could see a $14-$37 increase in their monthly bills by 2040, before inflation.”
Brian Merchant
“But in all these cases, the prompt itself was a huge factor too. Simple prompts, like a request to tell a few jokes, frequently used nine times less energy than more complicated prompts to write creative stories or recipe ideas.”
Mistral was one of the first companies to release a report detailing the environmental impact of its AI. This graphic shows the different energy and water consumption demands, with servers and support equipment accounting for the most energy and water.
“In total, the median prompt—one that falls in the middle of the range of energy demand—consumes 0.24 watt-hours of electricity” (para. 1).
“AI data centers also consume water for cooling, and Google estimates that each prompt consumes 0.26 milliliters of water, or about five drops” (para. 15).
However… with estimates that Gemini has 35 million users per day, if each user prompted Gemini only once a day (which is rare), this amounts to 8.4 million watt-hours of electricity (enough to power 2,640 homes for an hour) and 9,100 liters (2,403 gallons) of water (approximately 29 people’s daily water use) PER DAY.
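Here is that arithmetic as a quick back-of-the-envelope check in Python (the per-prompt figures are Google’s published medians quoted above; 35 million daily users is the cited estimate):

```python
# Back-of-the-envelope check using Google's published per-prompt medians
# (0.24 Wh, 0.26 mL) and the cited estimate of 35 million users per day.
prompts_per_day = 35_000_000

energy_wh = prompts_per_day * 0.24        # watt-hours per day
water_l = prompts_per_day * 0.26 / 1000   # milliliters -> liters

print(f"Electricity: {energy_wh:,.0f} Wh/day")          # 8,400,000 Wh = 8.4 MWh
print(f"Water:       {water_l:,.0f} L/day")             # 9,100 L
print(f"Water:       {water_l / 3.785:,.0f} gal/day")   # ~2,404 gallons
```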
This Washington Post article provides an interesting visual overview of the costs of GenAI tools; it is worth a read, but it is behind a paywall (aka you need a Washington Post subscription to view it).
“Meta has been on a renewable power-buying spree, including a 100-megawatt purchase announced this week. However, these natural gas generators will make the company’s 2030 net zero pledge significantly harder to achieve, locking in carbon dioxide emissions for decades to come.”
Considerations for Educators
Encourage students to investigate the environmental cost of the design and use of generative AI chatbots.
Bonus: Ask them to identify ways to reduce the environmental impact of these technologies.
Resources:
Spreading Misinformation
This article examines how AI has made it easy for anyone to rapidly generate misinformation, which can be very problematic leading up to the 2024 elections.
“In just 65 minutes and with basic prompting, ChatGPT produced 102 blog articles containing more than 17,000 words of disinformation” (DePeau-Wilson, 2023, para. 2).
NewsGuard is tracking AI-generated news and information websites that spread misinformation…to date, they’ve already found 725!
A Russian disinformation network has been flooding the Internet with pro-Kremlin falsehoods (3.6 million articles in 2024!) knowing that AI is trained on data posted from the Internet. As a result, “the audit revealed that 10 leading AI chatbots repeated false narratives pushed by Pravda 33% of the time. Shockingly, seven of these chatbots directly cited Pravda sites as legitimate sources” (Constantino, 2025, para. 2).
NOTE: “Bard” is now “Gemini.”
Using Gemini to produce false or misleading information is not allowed, per the “Generative AI Prohibited Use Policy.”
Using ChatGPT to produce false or misleading information is not allowed, per the OpenAI Usage Policies.
Considerations for Educators
Help your students learn how to identify misinformation and combat the spread of misinformation…
Because the ability “to discern what is and is not A.I.-generated will be one of the most important skills we learn in the 21st century” (Marie, 2024, para. 3).
Resources:
Readings:
The AI Digital Divide
The Digital Divide
“There’s a major gap between people who can access and use digital technology and those who can’t. This is called the digital divide, and it’s getting worse as 3.7 billion people across the globe remain unconnected” (Connecting the Unconnected, 2024, para. 1).
There are different types of divides:
This article highlights a third type of gap: quality of use!
This report by OpenAI highlights a clear divide between who uses and who does not use ChatGPT.
Usage divide
Usage varies depending on ethnicity and gender!
Usage divide by academic performance level.
Searches for/interest in ChatGPT varied depending on geographic location, education level, economic status, and ethnicity!
While there are more than 7,000 languages spoken worldwide, generative AI large language models are often trained on just a few “standard” languages.
This creates a quality-of-use divide between those who speak the languages the AI tools were trained on and those who don’t (see the sketch below).
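One measurable facet of this divide is tokenization: the same sentence often costs far more tokens in languages underrepresented in the training data, making these tools slower and more expensive for those speakers. A minimal sketch, assuming OpenAI’s open-source tiktoken tokenizer library (the sample sentences are illustrative translations of the same question):

```python
# Minimal sketch: compare how many tokens the same question costs in
# different languages using OpenAI's open-source tiktoken tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

samples = {
    "English": "How does photosynthesis work?",
    "Greek": "Πώς λειτουργεί η φωτοσύνθεση;",
    "Amharic": "ፎቶሲንተሲስ እንዴት ይሠራል?",
}

for language, sentence in samples.items():
    tokens = enc.encode(sentence)
    print(f"{language:>8}: {len(tokens)} tokens for {len(sentence)} characters")
```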
This article focuses on the access divide.
This article includes insights from a survey of more than 7,800 college students!
Considerations for Educators
How might the digital divide affect your students?
How might you work to close the digital divide for your students?
How might your students work on closing the digital divide in their school? Community? State? Country?
Resources:
AI, Emotional Dependence, & Manipulation
“Emotional relationships with AI chatbots can blur the line between real and artificial relationships for children, with concerning real-world consequences already emerging. Our research also shows AI chatbots can give inappropriate responses to sensitive questions, potentially exposing children to unsafe or distressing material. Without effective safeguards, such exposure may critically endanger children’s wellbeing.”
Internet Matters Report
Me, myself and AI: Understanding and safeguarding children’s use of AI chatbots (2025)
33% of teens use AI companions for social interaction and relationships!
“Everyone uses AI for everything now. It’s really taking over,” said Chege, who wonders how AI tools will affect her generation. “I think kids use AI to get out of thinking.”
This is deeply consequential because it reveals a fundamental shift in the way influence and persuasion work in the AI era…Now, AI makes it possible to create personalized emotional relationships at scale…Over time, that AI can gain not only a user’s attention, but their trust and affection. And once emotional trust is established, guiding someone toward a product, a political belief, or even a candidate becomes far easier—often without the user realizing they are being influenced.
Stefan Bauschard
Education Disrupted: Teaching and Learning in an AI World (2025)
“AI could become a powerful tool for persuading people, for better or worse.
A multi-university team of researchers found that OpenAI’s GPT-4 was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whoever it was debating.”
Considerations for Educators
Resources: