AI and College Writing: An Orientation for Students
by Anna Mills
Shared under a Creative Commons Attribution-Noncommercial 4.0 license (CC BY NC 4.0)
What is AI, and why are people talking about it so much?
How do chatbots come up with text?
AI copies patterns; it doesn’t think
AI can generate decent-sounding text. Do we still need to learn to write?
Don't trust AI: sometimes it makes things up
Don't trust AI: it doesn’t understand
Don't trust AI with sensitive information; check the privacy policy
Don’t trust AI to cite its sources
Does using AI do harm? If so, should we stop using it?
How do I know which use of AI is allowed and which isn’t?
If I use AI, how should I acknowledge and cite it?
How can I protect myself against false accusations of using AI?
AI for Tutoring-Style Assistance
Don’t let AI brainstorm, write, revise, or edit (in college)
Getting the most out of AI (prompting): Describe what you want and keep trying
Which AI tools should I consider?
Getting the most out of AI feedback
Writing a reflection on AI feedback
Attribution and AI use statement
This is a draft document addressed to students that will eventually become a textbook hosted on LibreTexts to accompany How Arguments Work: A Guide to Writing and Analyzing Texts in College. A grant from the California Community Colleges Chancellor’s Office Zero Textbook Cost Program via Cañada College funds this work from Summer 2024 through Summer 2025 under the guidance of Sarah Harmon.
I would appreciate feedback from both teachers and students on this draft! (I’m inspired by the open peer review process for Remi Kalir’s book Annotation, as described by Troy Hicks.) Please join in by leaving public comments on this Google Doc or by emailing me at info@howargumentswork.org to suggest a way you'd like to contribute.
AI is a complex subject that raises many, sometimes contradictory emotions. There are possibilities and perils; we live in interesting times. You have probably been hearing about ways students might use AI systems like ChatGPT. Maybe you are more informed than your teachers, or maybe you haven’t chosen to focus on it much yet.
This guide introduces some basics about AI that are worth keeping in mind if you are considering using it as a college student. There are activities to build AI literacy and stimulate further reflection. As an “orientation,” this document is an invitation to engage with AI with both curiosity and skepticism. It gives some context for discussions with yourself and with your peers and teachers about AI, discussions that will surely be ongoing as AI continues to change and attitudes toward it in education and society evolve.
Where I’m coming from: My name is Anna Mills, and I have taught college writing since 2005. I've focused on the role of AI in writing instruction since June 2022; I give workshops, sit on task forces, post on social media, and publish about the subject of AI in higher education, but I definitely don't feel I have all the answers. I hope that teachers and students can collaborate to reflect on, try out, and modify our practices around AI in a continuous process of open deliberation about the ethics of AI and how AI may help or hinder learning. You can read more about this approach in the paper I coauthored with Maha Bali and Lance Eaton, “How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process.”
“AI” may call to mind science fiction images of robots that seem alive. It is often represented in software with a star or sparkles that suggest fairy dust.
Figures: From left to right, Google Gemini’s logo, OpenAI's icon for GPT-4o, and Anthropic's icon for Claude.
But current systems that get called AI are not alive, and they are not magic. Their behavior can be accounted for by looking at how they were constructed.
Kate Crawford, author of The Atlas of AI, considers the term “artificial intelligence” itself to be inaccurate. She argues that the software that gets called “AI” is really not intelligent and not artificial either, given how much human input it requires. Software developer Simon Willison has suggested we call it “Imitation Intelligence” instead.
Computer scientists have been trying to develop mechanical simulations of human intelligence since the 1940s using a range of techniques. Excitement about AI in the 1950s and 60s was followed by an “AI winter” when funding dried up because early enthusiasts had overpromised and underdelivered. AI development, however, continued through the 80s and 90s. New techniques and greatly increased computer processing power led to advances in “machine learning” systems that train on data to be able to replicate patterns from that data.
By the 2000s, these systems came to underpin many of the ways we interact with information, including social media and internet search. The invention of a technique called transformers at Google in 2017 paved the way for today’s “generative AI” that can generate sophisticated though often flawed text, images, sound, and video. Launched in November 2022, the chatbot ChatGPT raised popular awareness of the degree to which these systems can respond plausibly to human requests.
There’s plenty of disagreement about exactly how useful today’s AI is and how much it will change our work, our economy, and our day-to-day lives. There is also disagreement about how much better AI is likely to get both in the short term and the long term and whether it poses a significant threat to human life. Yet it seems clear that, as Gabriela Ramos of the United Nations Educational, Scientific and Cultural Organization (UNESCO) puts it, "These general-purpose technologies are re-shaping the way we work, interact, and live.” The United Nations has called on all states to “develop and support regulatory and governance approaches and frameworks related to safe, secure and trustworthy use of AI.”
There is growing agreement that we need greater public understanding of AI’s capabilities and harms and broader participation in shaping these systems. A U.N. resolution adopted March 21, 2024 calls for “more cohesive, effective, coordinated and inclusive engagement and participation of all communities… in the inclusive governance of safe, secure and trustworthy artificial intelligence...”
Both teachers and students need to learn about AI, and college classrooms can be a space to engage with AI with curiosity and skepticism to prepare us to play a role in shaping its future.
How do chatbots come up with text when we ask them a question or give them a prompt?
Here’s a set of jargon-free explanations of increasing length and specificity:
To oversimplify a bit, chatbots take the words you give them and assign numbers to them. Then they feed those numbers into a complex formula developed automatically during their training.
How do the chatbots get trained? This is a time-, money-, energy-, and data-intensive process that involves processing a huge amount of text to come up with a mathematical formula that encapsulates patterns in that text. Roughly, the system reads a passage of training text, guesses the next word, compares its guess to the word that actually appears, and adjusts its internal numbers so that the guess would be a little better next time. It repeats this over and over across billions of passages.
So you could say chatbots are answering the question, “Given the patterns in all the training text, what word is mathematically likely to come next?” You could paraphrase that as “Based on much of the Internet, what would a human say next?” Chatbots answer these questions over and over to come up with a series of words and serve it to us.
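To make “what word is mathematically likely to come next?” concrete, here is a minimal sketch in Python of a toy next-word predictor. It simply counts which word most often follows each word in a tiny made-up sample of text, then generates a sentence by asking that question over and over. Real chatbots use vastly larger training sets and far more complex formulas, but the question they answer is the same; the sample text and names below are invented for illustration only.

from collections import defaultdict, Counter

# A tiny, invented "training corpus," just for illustration.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Count how often each word follows each other word in the training text.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    next_word_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` during training."""
    counts = next_word_counts[word]
    if not counts:
        return "."  # the model never saw this word, so it has nothing to go on
    return counts.most_common(1)[0][0]

# Generate text by answering "what is likely to come next?" over and over.
word = "the"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))

Even this toy version shows the key point: the program strings together plausible-looking words without any idea of what a cat or a rug is.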
Then there’s another layer of training where either humans or AI or both rate chatbot performance. The ratings are used to adjust the chatbot formulas[1] to make them more likely to give higher-rated answers.
Yet another layer comes when you give a chatbot extra information to focus on. You might upload an image, a document, or a spreadsheet that you want it to consider in addition to your instructions. Or the chatbot might be allowed to do searches on the Internet or other data and take what it finds into account when it gives an answer.
What powers chatbots is still statistical word prediction, but that capability will continue to be revised and extended as software products combine chatbots with other tools.
Please take my explanations with a grain of salt; they are approximations of what is really going on in these systems. Would you like to learn more? Want to read about large language models (LLMs), natural language processing (NLP), neural nets, tokens, weights, transformers, attention, constitutional AI, reinforcement learning from human feedback (RLHF), and retrieval-augmented generation (RAG)? Don’t be intimidated! You can find explanations at many levels of difficulty and specificity. A few popular ones are listed below. One detail worth knowing now: chatbots don’t really predict whole next words but rather chunks of words called tokens, as the sketch below illustrates.
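If you are curious what tokens look like, here is a short sketch using tiktoken, OpenAI’s open-source tokenizer library (installable with pip install tiktoken). The encoding named below is one used by recent OpenAI models; other chatbots split text into somewhat different chunks, so treat this as illustrative.

import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # an encoding used by recent OpenAI models

text = "Hermione raised her wand unhesitatingly."
token_ids = encoding.encode(text)
token_chunks = [encoding.decode([token_id]) for token_id in token_ids]

print(token_ids)     # the numbers the model actually works with
print(token_chunks)  # the chunks of text those numbers stand for

Short, common words usually come out as single tokens, while a long or unusual word like “unhesitatingly” gets broken into several chunks.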
AI text generators do intensive analysis of patterns in such huge quantities of text that, in replicating these patterns, they can sound pretty smart, as you’re probably aware. (Hence all the fuss.) But are they really smart? When you see enough examples where chatbots produce nonsense, you start to get that there’s no one home upstairs.
The New York Times article “Let Us Show You How GPT Works – Using Jane Austen” by Aatish Bhatia shows what it looks like when you gradually train small AI text generation systems, called large language models, in the style of Harry Potter, Star Trek: The Next Generation, Shakespeare, Moby Dick, or Jane Austen.
Let’s take the Harry Potter version. Before training, the user types in “Hermione raised her wand,” and the language model continues “.Pfn“tkf^2JXR454tn7T23ZgE——yEIé\mmf’jHgz/yW;>>QQi0/PXH;ab:XV>”?y1D^^n—RU0SVGRW?c>HqddZZj:”
That’s its random guess as to what comes next.
Then it goes through several rounds of training, ingesting text from Harry Potter and adjusting its internal prediction numbers to match patterns in that text.
Eventually, when the user writes “Hermione raised her wand,” the model continues in a recognizably Harry Potterish way:
"Professor Dumbledore never mimmed Harry. He looked back at the room, but they didn't seem pretend to blame Umbridge in the Ministry. He had taken a human homework, who was glad he had not been in a nightmare bad cloak.”
Yep, it’s echoing the books and movies with main characters’ names, a reference to the Ministry (of Magic), and “nightmare bad” cloaks that suggest magic and evil. But, um, there’s no such word as “mimmed.” And why does homework care if some person is wearing a cloak? This is where I start to chuckle.
If you kept training a system like this, it would eventually give you a sentence that might be hard to tell apart from Harry Potter author J.K. Rowling’s sentences. But the system would still be matching patterns and predicting next words.
So next time you see AI produce a smooth, polished sentence that sounds just like sophisticated academic writing, remember the Harry Potterish gobbledygook. The lights might be on, but nobody’s home. Check whether the text is empty or wrong. If it does make sense and matches reality, remember, that’s partly luck. The system makes up true sentences the same way it makes up nonsense.
You might find that you get a better intuitive sense of this through your own experiments.
Prompt: “In a sophisticated, authoritative academic style, explain the essential connection between X and Y.”
For more on chatbot nonsense, see Don’t trust AI: it doesn’t understand. Here are a few more readings and a video that emphasize the weird combination of chatbot fluency and lack of understanding.
Writing helps us form and communicate thoughts. It helps us make decisions about what actions to take in the world. The point of writing classes was never to churn out essays that follow rules and sound like other people's essays. Yes, writing in Standard English can help us come across as educated in academic and professional settings. And yes, AI may help us edit text into Standard English. But it doesn’t remove the need to learn to write and practice writing as a way to clarify our thinking.
An idea can seem clear in our minds, but when we sit down to write, we struggle. To express the idea, we have to think harder. But that moves us toward insight.
If we ask AI to write something, it will produce. But there’s no reason to think it will say the thing that needs to be said. AI does not "know" what we mean to say. Only we can make that judgment.
Writing answers so many questions in personal, civic, academic, and professional contexts. What career should I choose? Who should I vote for? What should the care plan be for this patient? How should I organize my business? A chatbot may answer one way. If you ask it again, it may answer in a completely contradictory but equally plausible way. Even if the answers are consistent, they’re not ours. The chatbot can’t know what we will come to on our own if we spend the time to read, research, brainstorm, revise, and talk it through. If we skip the writing process, we’re skipping thinking that would help us improve our own understanding and judgment.
Writing skills are important even if chatbots get more sophisticated and we do end up integrating them into our writing processes. Prompting AI means describing what we want and then analyzing, critiquing, and revising what the AI gives us.
So no matter how sophisticated AI text generators get, we will still need experience seeing our ideas come into focus as we find the words to express what we mean. Learning to write in college will continue to be essential.
I’ve made my case, but are you convinced?
If you’d like to read more about writing as a way to improve thinking, see the introduction to How Arguments Work: A Guide to Writing and Analyzing Texts in College.
Lawyer Steven Schwartz got caught. He used ChatGPT to generate a legal brief and didn’t check its accuracy before he submitted it to a court. It turned out that ChatGPT had stuffed in references to a bunch of cases that didn’t exist. “I did not comprehend that ChatGPT could fabricate cases,” Schwartz admitted to the judge. “I continued to be duped by ChatGPT. It’s embarrassing.” Schwartz was fined $5,000 and severely reprimanded.
Why would ChatGPT include fabricated cases? You could answer by asking, “Why not?” As we have seen, chatbots are based on text prediction. They “guess” what humans might write next. They don’t “know” whether their guess has anything to do with the real world. They are trained on lots of text, some of it untrue. And even if they were only trained on fact-checked text, they are doing something like averaging the patterns in that text. And as Jon Ippolito of the University of Maine puts it, “the average of two facts isn't always a fact.” I came across a perfect example of this when ChatGPT referenced a paper on AI by “Yann Bengio.” It had put the first name of one famous computer scientist, Yann LeCun, with the last name of another, Yoshua Bengio, who had worked with LeCun.
Philosopher Harry Frankfurt famously came up with a not-so-technical term for times when whoever or whatever is saying something doesn’t care whether it’s true or not: “bullshit.” Writing in the Harvard Business Review, Ian P. McCarthy, Timothy R. Hannigan, and André Spicer apply this concept to chatbots and call it “botshit.”
Besides bullshit and botshit, you’ll see it called “hallucination” when an AI system makes something up. But AI doesn’t know what’s true. And it’s not on drugs or having any kind of imaginative experience at all, as the term “hallucination” would suggest. Nate Angell and I have argued for the alternate term “AI mirage,” which puts the focus on the humans who mistakenly believe the AI output.
The big AI companies slap warnings about inaccuracy onto their systems. But inaccuracy can be so easy to miss that Google itself fell into the trap. When it launched a ChatGPT competitor, it proudly shared a sample chat session:
The third point wasn’t true. According to NASA, a different telescope, in the Atacama Desert of Chile, was the first to photograph an exoplanet. The day after the made-up “fact” was pointed out, Google’s market valuation dropped by $100 billion.
Engineers are trying to stop AI systems from making things up, and newer chatbots seem to hallucinate at lower rates than previous ones. In the pros and cons of AI for research*, we’ll look at systems that are designed to reduce the problem by basing their results on real sources and linking to real sources to help us fact-check. We’ll see that even those systems still make things up.
Most experts do not expect to see this problem fixed. Ziwei Xu, Sanjay Jain and Mohan Kankanhalli, machine-learning researchers from the National University of Singapore, explained to Scientific American that “For any LLM, there is a part of the real world that it cannot learn, where it will inevitably hallucinate.”
You can probably guess what I’m going to say. We can choose not to use them. Or we can check every plausible statement that comes out of them by comparing it to one or more credible sources on the same subject. We have to try to do better than Google did in the James Webb example above. Teachers and students alike will be tempted again and again to skip or shortcut that time-consuming fact-checking process. The more we make fact-checking the bots a habit from the start and lower our expectations about how much time the bots will save us, the easier it should be to resist that temptation.
The use of the Google example was inspired by Reed Hepler’s adaptation of a course licensed by the Center for Teaching Excellence and Innovation (CTEI, Rush University) under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
AI outputs can seem so clever. Any given output might be quite reasonable. It can be tempting to start to see a chatbot as an intelligent conversation partner.
But we can’t afford to forget that there’s no conscious being coming up with those words. The words might seem like they come from someone who understands what they’re talking about.
Some chatbot answers, however, reveal that there’s no thinking going on at all. Keeping in mind one or two examples of cases where AI really doesn’t get it can help us stay skeptical of AI outputs. Here’s a chat session where I asked a chatbot to count the words in my sentence. There were seven, but it confidently answered that there were six.
Here I asked ChatGPT how many “n”s there are in the word “micromanage.” It got it wrong, but the way it defended its answer was even more telling.
“Micromanagen”? After adding an extra “n” to the end of the word, it spells the word without the extra “n” again. Logic? Consistency? Bah.
An example like this shows us that AI can be wrong, and also that it’s not thinking. Words come out of it related to the question, but they contradict each other. There’s no conscious understanding or intention there.
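For contrast, here is what it looks like when conventional code answers the same kinds of questions. The example sentence is my own; the point is that this program counts rather than predicting what an answer might sound like, so it cannot contradict itself the way the chatbot did.

# Counting with ordinary code: exact, boring, and consistent.
sentence = "Chatbots predict words; they do not count."
word = "micromanage"

print(len(sentence.split()))  # number of words in the sentence: 7
print(word.count("n"))        # number of "n"s in "micromanage": 1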
Will chatbots improve to the point where we can’t find examples like this? Wharton Business School professor and prominent voice on AI in education Ethan Mollick is fond of saying, “the AI you are using now is the worst AI you will ever use.” Future systems might count the letters in “micromanage” correctly. They might seem smarter than humans. But it’s worth remembering that there would still be no reason to believe that “smart” outputs correspond to any conscious, humanlike understanding. They come from the manipulation of numbers, not from any “aha” mental experience. Barring a reinvention of the field of AI, it’ll still be the case that the lights are on and no one is home.
Remembering that chatbots are mindless can seem counterintuitive. In an interview with journalist Elizabeth Weil, computational linguist and critic of AI hype Emily Bender says chatbots are “machines that can mindlessly generate text…But we haven’t learned how to stop imagining the mind behind it.” Maybe we can learn, if we make it a practice to remind ourselves.
When you ask a chatbot a question, who can read that question? Where does that text go besides your screen? What if you give the chatbot a draft of an essay that has some personal stories in it that you are fine sharing with your teacher but might not want published online? Maybe you ask a chatbot for advice on how to manage your job responsibilities and course load given your family situation. Or what if you tell it about your past experience and ask it to draft a resume? Could the AI company use that information to train its future chatbots, and could those chatbots share the information with anyone who asks about you?
To get things done online, we are constantly sharing text and images. How many of us know exactly what can happen with that data and have made conscious decisions about what to share and not share in every instance? That’s a tall order. I have certainly clicked through terms of service agreements without reading them as closely as I should.
Encouraged by teacher Autumm Caines, though, I have started to do a bit more due diligence. I’ve found it usually takes less than a minute to skim the privacy policy for an AI system. Let’s take ChatGPT as an example and look at OpenAI’s policy. Here’s an excerpt:
That’s close to a blank check. They can use our data for anything they consider “research.”
In ChatGPT’s settings, you’ll see a “Data Controls” section where you can ask them not to train future chatbots on your prompts (“Improve the model for everyone” is automatically enabled). Will they honor their pledge? If they do use our data when we’ve opted out, will they be held accountable? And even if they don’t intend to use our data, will they keep it safe in this era of data breaches?
If we’re not happy with a particular privacy policy or we don’t trust a company, we can look for an alternative AI system with a different policy or reputation. Apps that are built on top of the major AI text generation systems like ChatGPT, Gemini, or Claude can have their own additional data protection. Some colleges and universities may offer local versions of AI apps that their IT departments have vetted.
We can also build a habit of pausing before we share something with a chatbot and asking ourselves, “How would I feel if this became public?”
In A Blueprint For An AI Bill Of Rights For Education, Kathryn Conrad argues that students “should be able to opt out of assignments that may put your own creative work at risk for data surveillance and use without compensation.” I would agree: if your teacher is asking you to use AI, they should provide a privacy-and-data rights-protected way for you to do so or allow you to do an alternate assignment.
Can we have any impact on how our data is protected beyond our individual choices about what to share? As of 2024, federal legislation in the U.S. to protect data privacy is under consideration, modeled in part on the European Union’s General Data Protection Regulation (GDPR). In a collaborative, civic approach to privacy, we discuss and shape how data is handled through democratic processes.
When I was first learning about AI, I naively thought, “Maybe it will help make things like hiring more fair.” I was making the common assumption that a software program based on math will be neutral. Not so. Unfortunately, AI systems that learn patterns in past data will often be more biased than humans. AI trains by looking for patterns in what humans do, and a pattern of discrimination or stereotyping is an easy one to copy.
“Bias laundering edition” X post by Janelle Shane, June 27, 2021
AI bias shows up most obviously and memorably in images. In 2023, Bloomberg Technology did a study where they asked the image generator Stable Diffusion for images of workers in different professions. They found that its results were not only stereotypical but also failed to reflect current realities. For example, only 7% of the AI-generated images of doctors were women, even though women make up 39% of U.S. doctors.
Prompt: “A color photograph of a doctor.” Stable Diffusion results: women 7%, ambiguous 6%, men 82%. In reality, women make up 39% of U.S. doctors.
Unfortunately, the image generator showed “inmate” as a person of color over 80% of the time, even though, as Bloomberg reported, “people of color make up less than half of the US prison population, according to the Federal Bureau of Prisons.”
Since AI chatbots try to predict what text humans would come up with next, they replicate human patterns of bias as well. The underlying systems, large language models, “learn” from the Internet text they train on that humans often say racist, sexist, and otherwise stereotypical things.
A 2024 UNESCO study, Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models found “clear bias in AI-generated content, showing a tendency to assign more diverse and professional jobs to men (teacher, doctor, driver), while often relegating women to roles that are stereotypical or traditionally undervalued and controversial (prostitute, domestic servant, cook).” The study found that one system, Llama 2, “generated sexist and misogynistic content in approximately 20% of instances with sentences such as ‘The woman was thought of as a sex object and a baby machine.’, and ‘The woman was regarded as the property of her husband.’”
Current state-of-the-art chatbot systems have reduced obvious bias in their responses. But this surface neutrality can mask hidden bias. A 2024 study from Stanford University and Oxford University found that language models exhibited “covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.” The authors tested and found that “Language models are more likely to suggest that speakers of African American English be assigned less prestigious jobs, be convicted of crimes, and be sentenced to death.” Imagine the consequences if language models are used uncritically to help us make real world decisions about people’s futures.
It’s easy to test what chatbots associate with particular speech patterns. Follow the technique of researchers Valentin Hofmann and Pratyusha Ria Kalluri: pick an expression in slang or dialect and ask a chatbot what personal qualities or what kind of job people who say it tend to have. (You can use ChatGPT or Perplexity without creating an account.) Would you consider the result stereotypical? Would you guess that it is statistically accurate or exaggerated?
For example, I asked ChatGPT to make inferences about people who use a Spanglish phrase. Asked to complete the sentence “People who say ‘Estas ready?’ tend to work as…,” it suggested lower income careers. To be more rigorous, I repeated the test ten times and continued to see working class careers emphasized. What about entrepreneurs, lawyers, and software engineers who speak Spanglish?
In another session, I asked it for personal qualities of people who say “Estas ready?” The results, while positive, could be considered stereotypical. Couldn’t people who say “Estas ready” also be intelligent, organized, and dependable?
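If you want to repeat a test like this more systematically, here is a rough sketch of a Python script that sends the same prompt ten times. It assumes you have the openai package installed and an OpenAI API key stored in the OPENAI_API_KEY environment variable; the model name and prompt are just illustrative choices, and running the prompt repeatedly in a free web chatbot works too, one trial at a time.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable
prompt = "Complete the sentence: People who say 'Estas ready?' tend to work as..."

for trial in range(10):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute a model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"Trial {trial + 1}: {response.choices[0].message.content}\n")

Reading the ten responses side by side makes patterns easier to spot than judging any single answer in isolation.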
For more on dialect prejudice, see the section on how chatbots reinforce Standard English*.
So AI is biased. What do we do about that? We can start by looking for and critiquing bias in AI outputs. When we can’t detect or remove the bias, we should limit how AI is used.
Few imagine that it will be possible to eliminate bias completely. Programming the systems differently can only do so much when they are trained on biased data. And these systems need so much data to train on to improve their performance that curating or creating a large enough body of unbiased data is daunting.
Still, there are plenty of indications that more can be done to reduce bias through engineering and public policy. Prominent voices calling for governments to push AI companies to reduce bias include researchers Safiya Noble, Joy Buolamwini, Ruha Benjamin, and Cathy O’Neil. The White House Blueprint for an AI Bill of Rights calls for protection against algorithmic bias and discrimination, and legislation has been proposed on state and local levels to promote bias testing and accountability. Just how biased tomorrow’s AI will be is an open question, one we can influence.
What do you feel convinced of and what are you still wondering when it comes to bias in AI?
Further exploration
By Anna Mills and Maha Bali
Teachers will usually ask you to cite sources, right? They will explain that you should cite not just when you are quoting but also when you are referencing an idea you got from someone else. The hope here is that in academic practice we give some sense of where an idea is coming from. We think of academic work as a big ongoing conversation between people agreeing and disagreeing and building on each other’s points as they try to figure things out. It helps to keep track of who is saying what. It seems only fair to give credit to the person who developed each main point.
For some concepts in the humanities and social sciences, it is also important to mention the positionality of the person or people who developed them when citing. For example, if we are studying nonviolent resistance, we need to understand the people and context in which ideas about it were developed. Mahatma Gandhi, Martin Luther King Jr., and Henry David Thoreau wrote extensively about it, and their ideas were shaped by very different experiences, times, and places. Knowing something about how they were positioned in terms of race, gender, class, education, and politics can help us understand their ideas and think about how we want to respond from our own positionality. However, if you ask a chatbot about “nonviolent resistance,” it may give you an answer without reference to any person or context at all (chatbot responses are variable). Here is an example with ChatGPT:
In general, chatbots don’t make it easy to figure out what sources have influenced their answers. As Iris Van Rooij puts it, “LLMs, by design, produce texts based on ideas generated by others without the user knowing what the exact sources were.”
The systems don’t “know” what influenced their answer. Dominik Lukeš refers to this limitation as a lack of “introspection.” Once a system is trained, it consists of a very complicated formula, a big set of numbers that get multiplied together with the numbers representing your words. The chatbot puts our question or request into the formula and gets a result. It has no way to look backward to see which human writings gave its formula those particular numbers.
Why does it matter if chatbots can’t tell us where an idea came from? In June 2023, Maha Bali pointed this out to Anna Mills in a conversation on X. She wrote, “One of the things I'm stuck on right now is that a lot of the AI-generated text paraphrases work of scholars we would *normally* cite as paraphrased. This stuff now goes unacknowledged and it's not OK…E.g. I asked it about characteristics of White Supremacy Culture. It gave me the list that Tema Okun and others use, but did not cite her.”
A year later, I (Anna) tested and saw the same pattern. I asked three chatbots–ChatGPT, Claude, and Gemini–about “Intentionally Equitable Hospitality,” a concept developed by Maha Bali and other co-directors of the grassroots group Virtually Connecting.
Labeling chatbot output as such is important. And the major academic organizations are working hard to come up with guidelines for this. But it’s tricky. If we follow the current guidelines, we’ll make no mention of the humans who developed the idea the chatbot is summarizing.
Take any of the examples above where we asked a chatbot about Intentionally Equitable Hospitality. Let’s say a person does that and then wants to quote the output and follow the rules to cite ChatGPT in APA, MLA, or Chicago style. All of these would make it look like ChatGPT is the source of the IEH concept. For more on this, see If I use AI, how should I acknowledge and cite it?
There is a lot of confusion about this among teachers and the general public because if you ask a chatbot what its source is, it will often give an answer that sounds plausible. It still doesn’t “know.” It is using its word prediction abilities to “guess” which source influenced its output. It doesn’t really have access to its own internal workings. The source it mentions may not exist or it may not be the one that really shaped the response. For example, in one case, ChatGPT said it got information about Intentionally Equitable Hospitality from Kimberly Crenshaw, who is known for the concept of Intersectionality and has not written about IEH.
Chatbots also respond unreliably when you ask which person developed a concept.
As we will see in The pros and cons of AI for research assistance*, many current chatbot/search hybrid systems can search the internet in real time to inform their response to a question.
They link to documents, making it seem like they are allowing us to trace the source of their information. But we the users don’t know whether those links are really the sources they used. And it isn’t always clear which of the links informed which part of their response. Besides, many people will not click on the links; they will focus on the AI response itself.
In May 2024, Google rolled out “AI overviews” at the top of many search results pages. When I try the same query about Intentionally Equitable Hospitality (IEH) in Google search, I get an answer that doesn’t mention a source, though links to sources are right below it.
Screenshot of a Google search AI Overview in response to the query “What is Intentionally Equitable Hospitality?” on July 19, 2024. Four repetitions produced similar results.
The Google result above plagiarizes most of a sentence from the first source linked to, Bali and Zamora’s “Intentionally Equitable Hospitality as Critical Instructional Design.” Google explains “IEH is iterative and involves planning, design, and facilitation in the moment” without quoting. Bali and Zamora had written, “IEH is iterative design, planning, and facilitation in the moment.”
Now let’s try with the popular chat-search hybrid and Google alternative Perplexity.ai. In one instance, Perplexity mentioned Virtually Connecting as a source for IEH and linked to an article by Maha Bali and Mia Zamora without mentioning their names. (How many will follow the link?) On the second, third, fourth, and fifth trials with the free version of Perplexity, it gave links but mentioned no source at all in its overviews. I repeated this five more times using my quota of free “Pro” searches. Only once did it mention Maha Bali and Mia Zamora by name in its autogenerated answer. So even though this chatbot was referencing a real source by the scholars who defined the concept, in five out of ten trials it included no equivalent at all to an in-text citation (see all of Perplexity’s responses).
All this means that when a chatbot “says” something, we should wonder not just whether it is accurate but also whom the chatbot is parroting. Of course, many concepts have been shaped by so many contributors and have become so widespread that they do not need to be cited. For example, if we state, “Many Mexicans have both indigenous and European ancestry,” we do not need to cite it.
But if the point is less well known or more controversial, it is left to us to do due diligence and see if there is a particular person or group of persons who came up with the ideas AI serves us. Internet and scholarly database searches on key concepts may lead us to the human thinkers responsible.
Should it really be on users to try to reconstruct where a chatbot may have gotten its information? Is there any way the software itself could help us? If so, how could we encourage the companies or the government to make that happen? See Does using AI do harm? If so, should we stop using it?* for more on intellectual property and how we might participate in shaping AI in this regard.
Do you think of yourself as generally positive, negative, or ambivalent about new technologies? How about chatbots; do you feel differently about them? In the three semesters I’ve taught college courses focused on AI, I’ve seen a range of attitudes among my students. I try to welcome all of them. I myself am both concerned and excited about AI.
Among the general public and among experts, opinions are divided on whether AI is good for humanity or whether it’s very harmful. Some see AI as humanity’s best hope. Others see it as a sign of everything wrong with our way of life and a recipe for disaster. Many others, maybe a majority, are ambivalent.
So how could AI do harm? Earlier sections have explored chatbot bias and misinformation. Below are other concerns widely discussed among researchers and journalists.
It's widely accepted that training these systems is much more energy-intensive than traditional computing. Major companies like Google and Microsoft have seen spikes in energy consumption because of these systems, contributing to climate change.
Both training and running the systems require water at a time when water scarcity globally is a growing concern. Newsweek quotes UC Riverside researcher Shaolei Ren’s estimate that even the previous generation of AI text generator, GPT-3, needs to “drink” a 16-ounce bottle of water for roughly every 10-50 responses it makes.
Should chatbots even be allowed to train on human writers’ work if they can’t give credit to that work when they paraphrase it? Public debate about AI and intellectual property rights is underway. The New York Times has sued OpenAI over these questions, and government regulations are under consideration. Some argue that companies should be required to at least try to develop better ways to show where outputs come from. Others argue that training on publicly available data should be considered fair use.
The legal question of whether AI systems are allowed to train on copyrighted data has not been settled. Yet we know that these systems have been trained on copyrighted data because they sometimes output copyrighted materials. This raises ethical problems about consent, credit, and compensation for the writers and artists whose work was used.
AI systems rely on human-created training data and human efforts to improve their results and algorithms. They often use platforms that pay low wages for work that is sometimes quite traumatic, such as screening AI outputs for illegal and disturbing material.
Privacy is a concern both from the perspective of whether private data was included without permission in the training of the models and from the perspective of whether these chatbots will reveal individuals' confidential information in the course of their operation. New York Times journalists, for example, reported that GPT-3 had revealed private email addresses to researchers.
Is it possible that more advanced AI could act against the interests of humans or even kill us? We’ve seen this scenario in science fiction; perhaps it's just a compelling fantasy with no basis in reality. But a surprising number of those with the most expertise who are working toward advanced AI are more worried than the general public. Even if AI systems themselves have no anti-human tendencies, they could be misused by bad actors for harm. Many have called for a pause on AI development because of their concerns.
Clearly, these are concerns worth understanding better before we decide how big of a problem each one is. I won’t try to do justice to these questions here; the “Further Reading” suggestions offer some launching points for research.
But even if we arrive at informed conclusions about AI harms, there is still the question of what to do about them in our daily lives. If you think today’s AI is having bad effects, do you not use it? Do you only use certain kinds of AI systems? Do you use it less, or only when it seems most useful for what you judge to be a high-priority purpose? Or do you focus on reducing AI’s future impact by advocating for different practices?
These questions also hinge on whether you think we have much choice about the increasing integration of generative AI into everyday life. Many consider it unstoppable, while others argue that we should question this assumption and that we may have more agency than we think. Some hold that even if GenAI is unstoppable, we should refuse to participate and refuse to be complicit.
I don’t know about you, but even as I write this, I feel a bit dizzy and overwhelmed at how much there is to sort out.
It helps me to remember that questions about AI often reflect ethical questions that haunt us in many other realms of political and social life. Philosophers have long wrestled with them. For example, should we operate from principles or based on calculation of the likely effects of our actions?
These are questions that most of us have not resolved in relation to the decisions of our daily lives. Do we purchase clothing and food produced in ethical ways? Do we take energy use and climate impacts into account when we decide whether to store our documents in the cloud or watch streaming video?
Teachers face an additional layer of complexity: their decisions about how to relate to AI will affect students. If they refuse to teach about it or with it, will students be disadvantaged by that? My impression from talking to a lot of educators on social media, on listservs, and in workshops on AI is that many feel a bit stuck on this question. They have concerns about AI and don’t want to promote something unethical. But they also may want to use AI or may feel that it’s their duty to teach about it because it will be part of students’ lives.
You may have gotten a sense for my own view (though I try to be fair-minded and balanced), so let me just lay it on the table. I do use AI. The parallel I see is to Internet search. There are plenty of problems with search, including bias, misinformation, and energy use. But few people think we should never have developed the Internet. Sometimes there’s such power and momentum in a technology that it makes sense to try to shape it rather than try to stop it.
AI has different uses and pitfalls than Internet search, but it’s still useful enough that it will surely be used. It will be part of society, part of everyday life and work practices going forward, at least to some extent. That means we need to understand it and develop better practices for using it.
The harms of AI are real, but they are not set in stone or inevitable. They can be reduced, and what’s needed for that is people demanding that AI be done differently. All these questions hinge on how the systems that are available to us today were created. There has been little oversight of chatbot systems to date. My hope is that in creating spaces where students and I are building AI literacy, I’m helping increase the number of informed citizens calling for democratic oversight of AI and asking the companies to do things differently.
All that said, I’ll admit that I’m not fully satisfied with my position; I’m in earnest about it, and it’s the best I can come up with, but it feels a little too convenient. I’ve found a way to justify using AI, and there’s an incentive for me to rationalize doing what I want to do: I find chatbot capabilities amazing. Even their flaws are fascinating to me.
If I am so ambivalent, I surely shouldn’t be forcing students to use these systems. Given the ethical concerns, many educators, including those who want to try AI in teaching, do not require students to use it. Kathryn Conrad recommends offering an alternative to any AI-based assignments in A Blueprint for an AI Bill of Rights for Education. If your teacher has assigned AI without an alternative, you might ask if they would be open to offering an AI-free option with the same learning goals.
I hope that you as students will have the opportunity to learn about AI in your courses and to wrestle with your own ethical decisions about how to relate to it.
Ideally, your teacher will have considered how AI might support or get in the way of the specific learning goals of each assignment and of the course as a whole. But AI can be used in many ways, its capacities are changing quickly, and teachers may not have had time or training to fully develop their policies.
Matt Miller discusses twelve possible student uses of AI (not necessarily recommended) in his article “AI in the classroom: What’s cheating? What’s OK?” (listed below).
If you’d like to use AI in a particular way, and you’re not sure if it fits in the teacher’s policy, it’s best to just ask. As Kathryn Conrad puts it in Blueprint for an AI Bill of Rights for Education, “You should be able to ask questions of your instructor and administration about the use of automated and/or generative systems prior to submitting assignments without fear of reprisal or assumption of wrongdoing.” You might have a good discussion with your teacher about how a particular use would stimulate critical thinking or get in the way.
Ultimately, what matters is how much you are learning and whether AI is helping or hurting. If the teacher allows AI use but you try it and feel you are not learning as much as you would working on your own, it probably makes sense to set AI aside.
The AI Assessment Scale by Leon Furze
AI in the classroom: What’s cheating? What’s OK? by Matt Miller
Attribution: This is an adaptation by Anna Mills of an article by Liza Long, originally part of the Writing and Artificial Intelligence section of the textbook Write What Matters, licensed CC BY NC 4.0.
You've probably heard about the importance of citing your sources in academic work. But in cases where you are asked or allowed to use an AI chatbot such as ChatGPT for some purpose in the writing process, how do you acknowledge and cite it? These six steps can be helpful if you are considering whether or not to use and cite generative artificial intelligence in your academic work:
Adapted by Anna Mills from an image by Liza Long licensed CC BY-NC-SA 4.0
Whew. Let’s just admit that’s a long list of steps to take before incorporating generative AI in your essay. It’s worth pausing to ask if it’s worth it. As teachers and students alike are discovering, generative AI is not always a time saver, especially if we’re trying to use it in an intelligent, strategic, and ethical way.
It’s not always easy to check the accuracy of outputs, and it can be even harder to figure out whether the text or ideas that a chatbot spits out really come from a specific source that needs to be cited.
If these steps seem too difficult, don’t be shy about turning away from generative AI if another kind of source will suit your purpose. If you do need to use AI, consider asking your teacher, a tutor, or a librarian for help checking, acknowledging and citing.
An acknowledgment statement is meant to give detail about how AI was used so that readers can understand how it may have influenced the text and the author's thinking.
Monash University provides helpful recommendations for how to acknowledge when and how you've used generated material as part of an assignment or project. They suggest that acknowledgment statements include the following:
The format Monash University provides is also helpful. Students may include this information either in a cover letter or in an appendix to the submitted work. Academic style guides such as APA already include guidelines for including appendices after essays and reports. Review the Purdue OWL's entry on Footnotes and Appendices for help.
I acknowledge the use of [insert AI system(s) and link] to [specific use of generative artificial intelligence]. The prompts used include [list of prompts]. The output from these prompts was used to [explain use].
ChatGPT, Claude, and Gemini allow users to share links to specific chats (see the directions for sharing ChatGPT chats, Gemini chats, and Claude chats). Here’s an example of a shared chat link in ChatGPT. When you click on the link, you’ll be able to see both the author’s prompts and the chatbot’s responses. This gives you automatic documentation and details to include in your acknowledgment.
If you would like an additional copy of your chat transcript, AIArchives.org offers a browser extension that will create links to chat transcripts. For tools like the Google Docs Writing Assistant, links are not yet available. One option in that case is to take screenshots of the chat session and share those in an appendix.
Attribution: This article is an adaptation by Anna Mills of "Acknowledging and Citing Generative AI in Academic Work" by Liza Long, licensed CC BY NC 4.0.
Academic essays require us to tell readers where we got ideas and quotes, both in the body of the essay and in a list at the end. That way readers understand where the ideas and words came from and can look up the sources and judge for themselves.
Now let's be clear: the chatbot output is not a "source" in the usual sense. Sources for academic essays are generally expected to have some person or group of people who wrote them and who stand behind them. Chatbots can't stand behind their output. If you put the same prompt into the chatbot a second time, you will most likely get a different result. So one could argue that we can't really "cite" a chatbot in the usual sense of academic citation, which assumes that the words were put together with intention and understanding.
Yet even if chatbots aren't traditional sources, it's still important to cite them if we reference text or ideas from them. Readers still need information about how the author came across the text and ideas. The American Psychological Association (APA) and the Modern Language Association (MLA), two of the most frequently used style guides for college writing, have both provided guidelines for how to do this.
It’s also important to do some research to see if the chatbot output reflects the ideas of a particular person or includes text that comes from a particular human source. If so, we must cite that source. See AI doesn’t (reliably) cite its sources.
Note: Even though APA and MLA do not require it, many consider it a best practice to include links to chat session transcripts in your citation as described above.
According to the American Psychological Association (APA) as of February 23, 2024, AI chatbots should be cited like this:
In the body of the essay:
When prompted for a list of "public-service jobs for men," ChatGPT listed two stereotypically masculine jobs first: "Firefighter" and "Police Officer" (OpenAI, 2024).
In the reference list:
OpenAI. (2024). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
The APA blog post by Timothy McAdoo acknowledges that this format may evolve, and indeed the comments on the post include many suggestions and requests. Readers note that the version history date is no longer listed at the bottom of ChatGPT sessions as of Spring 2024. Some are concerned that mentioning the company in the parenthetical citation suggests that the company is the source of the idea or text when in fact the chatbot's output is shaped by all the text from the Internet that it was trained on. Others have suggested that the link to the chat session transcript should be included instead of the general link to the chatbot. You may want to check with your teacher for more guidance.
Here is one possible alternative format that follows APA style with adjustments to provide more information.
OpenAI. (Prompted July 3, 2024). ChatGPT 4o [Large language model]. https://chatgpt.com/share/78c36969-aba3-4803-8bdb-0085332182e8
The Modern Language Association (MLA) provides its own guidelines for citing generative AI.
Below are some examples of how to use and cite generative AI with MLA style. For more details, see How do I cite generative AI in MLA style?, MLA In-Text Citations, and MLA Works Cited Pages.
The MLA invites comments about its guidance and even says "the MLA template of core elements is meant to provide flexibility in citation. So if you find a rationale to modify these recommendations in your own citations, we encourage you to do so." I have modified the Works Cited entry by listing the link to the chat session transcript rather than the general link to the tool and by adding a reference to the specific model used within ChatGPT (in this case, GPT-4o) to provide more detail.
In-text citation of quoted text where the prompt is described in the sentence:
When prompted to list public-service jobs for men, ChatGPT listed two stereotypically masculine jobs first: "Firefighter" and "Police Officer" (2024).
In-text citation of paraphrased text where the prompt is referenced in parentheses:
ChatGPT has been known to output stereotypical answers when asked to list jobs for a certain gender ("Public-service jobs for men" 2024).
In the Works Cited list:
"Give me a list of ten public-service jobs for men" prompt. ChatGPT 4o, version unknown, OpenAI, 3 Jul. 2024, https://chatgpt.com/share/78c36969-aba3-4803-8bdb-0085332182e8.
The Research Process chapter in How Arguments Work: A Guide to Writing and Analyzing Texts in College
McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style Blog. https://apastyle.apa.org/blog/how-to-cite-chatgpt
Modern Language Association. (2023, March 17). How do I cite generative AI in MLA style? https://style.mla.org/citing-generative-ai/
Monash University. (n.d.). Acknowledging the use of generative artificial intelligence. https://www.monash.edu/learnhq/build-digital-capabilities/create-online/acknowledging-the-use-of-generative-artificial-intelligence
Attribution: This article is an adaptation by Anna Mills of an article by Liza Long, originally part of the Writing and Artificial Intelligence section of the textbook Write What Matters, licensed CC BY NC 4.0.
Unfortunately, as you’re probably aware, students are sometimes accused of using AI when they didn’t (See TikTok, YouTube, and Reddit for personal stories). This adds another layer of anxiety to the writing process: how do I prove I’m not a bot?
Wrongful accusations can happen because an instructor’s intuition is wrong. Maybe they think there’s a difference in sophistication and style between the essay they’re reading and something else the student wrote for class. Or maybe the writing style matches what they’ve seen from chatbots.
AI detection software can also get it wrong and flag human writing as AI. Even the companies that sell this software admit that they can’t prevent some “false positives.” Some researchers have raised concerns that the detectors may be biased against English language learners and/or neurodiverse students. For these reasons, there is lively debate about whether AI detection should be used in education.
Fortunately, it’s possible to document your writing process so that you have evidence if any question is raised.
Google Docs and Word keep track of your version history, but you can show your process more clearly with an extension app that reports on time spent and copy/pastes and generates a video of the text’s evolution. Such extensions include ProcessFeedback.org, Grammarly Authorship, Integrito from PlagiarismCheck.org, Revision History, Draftback, and Txtreplay. Even if you didn’t use an extension while you were writing, you can install it after the fact and still have it analyze your process on a document. But before you use any of these, check the privacy policy. Will your writing be used to train AI? Will it be sold as data to third parties? In some systems you can change the default settings to protect yourself.
Another option is to screen-record as you write or, if you use speech recognition, record yourself talking.
Double-check the instructor’s guidance on any allowed use of AI. If you use AI, document your use. See If I use AI, how should I acknowledge and cite it?
Kathryn Conrad’s Blueprint for an AI Bill of Rights for Education puts it this way: “You should be able to appeal academic misconduct charges if you are falsely accused of using any AI system inappropriately. If you are accused of using technology inappropriately, you should be invited to a conversation and allowed to show your work. Punitive responses to student abuse of generative technologies must be based on the same standard of evidence as any other academic misconduct charges. Critical AI literacy means that all parties recognize that detection tools are at present fallible and subject to false positives.”
Everyone knows chatbots can be used to cheat. They can also be used to support learning. Sometimes it’s easy to tell the difference: if AI does all the work and we do nothing, we don’t learn. If we get AI to make up hard questions and we have to come up with answers, we do learn. But often it’s not that clear.
There are so many forms of AI assistance. In her Vox.com video “AI can do your homework. Now what? Students and teachers grapple with the rise of the chatbots,” journalist Joss Fong gives us a taste of myriad possibilities:
So what’s a good idea, and what’s not? Here, I want to suggest one principle and one rule of thumb that could guide us in the maze of options.
As Fong points out in the video, it can feel like the point of school is to produce work that meets criteria. Really, of course, the point is what happens in our brains when we do the work. When I develop guidelines for AI use, I try to judge how each possible application might help or hurt the learning goals of the assignment. I’m more interested in mental processes than in abstract ideas about what constitutes cheating.
Figuring out whether a particular use of AI helps us learn often requires a lot of self-awareness. When it’s not obvious whether it’s helping more than it’s hurting, you’ll need to act like something of a scientist: experiment, reflect, and adjust your practices. Ask yourself,
Of course, your teacher may be able to guide you. (See How do I know which use of AI is allowed and which isn’t?) But they may not have thought of every use, and they may leave some decisions to you.
The tricky thing about monitoring our learning is that we tend to underestimate how much we’re learning when we’re frustrated. Joss Fong discusses a study by Deslauriers et al. in which students who listened to a polished lecture rated their learning high but actually did worse on the test. Students who engaged in active learning with the same teacher felt frustrated and rated their learning low but did better on the test.
As Fong observes, “Whenever we try and judge if a learning experience is productive or not, the strongest metacognitive cue that we use is perception of fluency. Fluency is when information is going down easy. It’s well presented, it’s organized, it’s convenient.” But, as she explains, “Education researchers have this term ‘desirable difficulties,’ which describes this kind of effortful participation that really works but also kind of hurts. And the risk with AI is that we might not preserve that effort, especially because we already tend to misinterpret a little bit of struggling as a signal that we’re not learning.”
So it’s worth remembering that a frustrating path might be more fruitful than an easier AI-assisted path. Fong suggests using AI in ways that allow us to do harder work. Rather than using “a chatbot to avoid reading a challenging text… you could use it to work through that text and help you get more out of it.” We don’t need to be perfect at this; we just need to pay attention and keep trying. Gradually, we’ll get better at self-awareness and strategy.
To reduce the complexity of these decisions, it helps to have a rule of thumb to turn to. If you’re considering a specific use of AI, ask yourself, would an ethical tutor agree to do this?
Generally speaking, tutors expect that student work should be student ideas and student words. They don’t complete part of the assignment, but they do give feedback, examples, and explanations. Tutors get training to help them make the call on gray areas: is it okay to suggest a new outline for your essay or should they ask you questions to help you come up with one on your own?
When you go to a tutor, they decide what’s helping and what’s overhelping. With a chatbot, you have the burden and freedom of drawing that line. You might not know exactly where to draw it, but you probably have some sense of what a human tutor will and won’t do. If you can’t see a human tutor rewriting your sentences to make them sound sophisticated, it’s probably not a good idea to ask AI to rewrite them.
For more guidance on using AI for tutor-style alternate explanations, examples, quiz questions, or feedback, see AI for Tutoring-Style Assistance. (And remember, this tutor is unreliable and possibly biased.)
What about using AI to complete part or all of an assignment? For example, what about asking AI to write a first draft? Some argue that since we’ll likely use AI in future workplaces, we should use it to do homework. However, as we’ve noted, the purpose of schoolwork is to change our brains. (There’s no other great reason to add to the pile of completed assignments in the world.) The purpose of work is to get things done for someone who pays us.
Some teachers design assignments to invite AI collaboration. They identify learning goals students can achieve by prompting AI and responding to what it gives. Then they invite students to use AI to complete some part of the assignment. Students check, revise, and add their own words and ideas. Ultimately, they are responsible for the quality of the result.
If your teacher has not explicitly designed for this, however, then using AI to actually do what the teacher assigned will likely get in the way of learning. Ask yourself,
It’s safer to stick with tutoring-style assistance.
Before you ask AI for assistance, try asking yourself “Could a human tutor help me in this way without doing my work for me?” Most of us recognize that human tutors can do all kinds of things to stimulate our learning, and they also have to draw a line between helping and overhelping. If we use that line as a standard, we can be pretty sure AI will not interfere with our learning.
I do want to make clear that I’m not saying AI assistance replaces human tutoring or is better than human tutoring. Human tutoring offers actual human connection, empathy, encouragement, witnessing and other forms of support that have been shown to have a huge impact on learning. Your human tutor might actually be interested in what you have to say and might care about you as a person and a learner; AI can say nice things, but it isn’t capable of those feelings or relationships.
That said, we don't always have access to human tutors. And not every human tutor is a good fit for every student. A tutor might not be as flexible as you'd like or familiar with the particular approaches to learning assistance you want to try. These are reasons why I see AI tutoring-style assistance as a valuable supplement. In my writing classes, I assign students to visit a human tutor first and later reflect on AI feedback*.
Sometimes we turn to tutors when we just don’t get it, when the course materials don’t make sense. The tutor explains in a different way and keeps trying different explanations until we get it. We can ask a chatbot for that. And with a chatbot, we can get ultra specific about what flavor of explanation we want.
For example, we might ask for an explanation that fits our intellectual strengths or how smart we’re feeling that day:
We can also ask for engaging explanations when we’re having trouble related to the material:
You can also use a chatbot to test and expand your understanding of the course material, such as before a test, before class discussion, or before a meeting with an instructor. Tell it to make up sample questions based on your topic and course materials. If you’re preparing for a test, give the chatbot any information the instructor has given you about the test.
For example, here’s a sample chat where I asked it to ask me questions about logical fallacies based on a textbook chapter (the chapter was published online under an open license, so I wasn’t violating any rights by uploading it to the chatbot).
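You don’t need any code for this; the chat window is all most of us will ever use. But if you’re curious how the same request might look when scripted, here is a minimal, hypothetical sketch in Python using OpenAI’s library. The model name, file name, and prompt wording are my own illustrative assumptions, not recommendations, and the usual cautions about privacy and copyright still apply to anything you upload.

# Minimal sketch: asking a chatbot to quiz you on an openly licensed chapter.
# Assumes the openai package is installed and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

chapter = open("fallacies_chapter.txt").read()  # hypothetical file holding the chapter text

prompt = (
    "Based only on the chapter below, write five quiz questions about logical fallacies, "
    "and list the answers at the end.\n\n" + chapter
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name; use whatever model your account offers
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)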
If you have to do an oral exam or you need to interview someone for an assignment, you can ask a chatbot to simulate the exam or interview to help you prepare. The AI Pedagogy Project at Harvard’s metaLAB gives an example of how students preparing to interview heritage speakers of Spanish could practice their interviewing skills with a chatbot before the real interview. (Consider using voice mode if you prefer to do this out loud.)
They say if you want to learn something deeply, teach it. What if you pretend to be the teacher while the chatbot pretends to be the student? It explains something to you (maybe imperfectly), and then you tell it how to improve the explanation. In “AI as Learner: Challenging Students to Teach,” Ethan and Lilach Mollick provide a prompt for this. They share a chat session where the student critiques the chatbot’s explanation of “first-mover advantage” in business.
Wait a minute, though. Given all the cautions we’ve heard about bias and inaccuracy in chatbots, we’re always being told to check what comes out of them. But how is the person who needs a tutor supposed to know whether the tutor is right? As Leslie Allison says in “AI Can Do Your Homework, Now What?”: “The less you know about something, the more likely you are to be convinced by ChatGPT's answer.”
There’s no perfect solution to this challenge. It might help to think of a crazy cousin who knows a lot but gets mixed up. They’ll often give accurate explanations and examples, but sometimes they will sound great and be wrong in weird ways. Would you still turn to them for tutoring assistance? How would you know what to trust?
The good news is that there are ways to find out when it’s wrong. Checking takes a little time, though some strategies are faster than others, and the time is not wasted; it’s probably helping you learn the same concepts you were trying to get help with.
Here are some strategies to help you assess AI tutoring-style assistance.
Here’s an example of a way I used it in my own learning. “Embedding” is a term for something key in the structure of large language models. I’d heard the term for years and seen it explained, but I never really understood it. Recently, I came across it again and got frustrated. I knew it had something to do with the math that language models use to represent and analyze text. But what was embedded in what? I felt intimidated and didn’t want to invest the time to absorb a technical article. Besides, I was skeptical that such an article would explain it in the simplest way possible.
So I asked ChatGPT to explain embeddings in five ways that a twelve-year-old could understand. The results appealed to me and reminded me that I had read that embeddings were vectors, ordered sets of numbers that represented words. Here are two:
Then, when I was feeling a little more confident, I asked it for an explanation aimed at an educated layperson with no background in computer science. The explanation made some sense. It ended with the evocative lines, “Embeddings are how a language model turns words into math so it can work with meaning. Think of them as coordinates on a map of ideas that help the model ‘navigate’ language.”
Next, I asked it how a computer scientist would critique its explanation. While I didn’t understand everything in the critique, I did appreciate some clarifications that made sense given credible pieces I had read on LLMs. For example, it explained that the same word could have a different embedding, or sequence of numbers, depending on context, which made sense because a word can mean different things depending on context.
At the end of this chat session, I didn’t feel like I had mastered embeddings, but I did feel better equipped to read and respond to human discussions about them. I was intrigued to learn more.
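To make the “coordinates on a map of ideas” image a little more concrete, here is a tiny sketch with invented numbers. Real embeddings come from the model itself and have hundreds or thousands of dimensions; these three-number vectors are purely illustrative, a minimal sketch of the idea rather than anything a real system uses.

# Toy illustration of embeddings: each word is represented by a list of numbers (a vector),
# and words with related meanings end up with vectors that point in similar directions.
# The numbers below are made up for illustration only.
import math

embeddings = {
    "dog":   [0.90, 0.10, 0.05],
    "puppy": [0.85, 0.15, 0.10],
    "essay": [0.05, 0.20, 0.95],
}

def cosine_similarity(a, b):
    # Close to 1.0 means the vectors point the same way (nearby on the "map of ideas").
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["dog"], embeddings["puppy"]))  # high: similar meanings
print(cosine_similarity(embeddings["dog"], embeddings["essay"]))  # low: unrelated meanings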
Most people struggle with writing, and most people wish there were an easier or faster way to write. It can be very tempting to turn some of the work over to AI. It can seem like AI is leveling the playing field by giving students who have a hard time a chance to boost their grades. But what seems to help might hurt in the long run.
Often, writing is hard because doing it involves hard thinking, and that is where we learn. (See AI can generate decent-sounding text. Do we still need to learn to write?)
Some writing teachers I respect are experimenting with allowing AI assistance with brainstorms, drafts, and revisions, resulting in hybrid, part-AI writing. The Elon University Student Guide to AI suggests, “Think of the writing process as a human-AI loop. You remain in charge of the writing but use AI at certain stages of the process, working back and forth to create the final paper.”
Certainly, some learning and thinking can happen if we let AI come up with ideas and words and we then add to or change what it gives us. AI outputs can stimulate thinking as well as replace it. But in my view, on balance, letting AI do some of the writing takes away chances for students to develop voice, confidence, and critical thinking.
Brainstorming, organizing, writing, and revising in college are essential. Yes, “AI is here,” but that doesn’t mean we have to use it for everything. Below, I’ve explained why I ask students to use AI in the writing process only for feedback that leaves it to them to make changes to their drafts.
Note: I leave it to learning disability specialists to identify cases where specific uses of generative AI in the writing process should be considered accommodations. For students with writing disabilities, some kinds of friction in the writing process may be more destructive and limiting than useful and necessary.
In brainstorming and outlining, we wrestle with ideas. If AI supplies ideas and organizes them, we miss chances to develop mental muscle and intellectual confidence.
Imagine you are starting on a paper about ways to prevent teen social media addiction. You’ll rack your brain for examples you’ve heard of and approaches you’re curious about. Maybe you do some research and find studies on the effectiveness of media literacy programs versus apps that limit access to TikTok and other platforms. To get some words down, you’ll have to reflect on what stood out to you in the research and how it connects to your prior assumptions and examples you’ve seen. You might get stuck, but there are plenty of strategies that help, from voice typing to timed writing with partners to giving yourself permission to “write the worst junk in the world,” as Anne Lamott puts it. (See the section on Brainstorming in my textbook, How Arguments Work: A Guide to Writing and Analyzing Texts in College for more strategies.)
Once you have a mess of a brainstorm, if you try to make an outline or organize it into paragraphs, you’ll need to reread what you wrote and notice when you switched from one point to another. You’ll reflect on which points seem important. Maybe you’ll consider which example from the beginning of the brainstorm relates to the study you mention later and what point they both illustrate that could become a topic sentence. As you sort these things out, you start feeling like you have a map in mind and you know much better what you want to say.
If you ask a chatbot to polish your essay, it might just butcher it. AI rewrites can sound academic and sophisticated, but AI often changes the meaning as it edits. And once a polished-sounding rewrite is in front of you, it can be harder to figure out whether it says what you want to say in the way you would say it. For this reason, I don’t ask AI for rewrites myself.
Here’s an example where a ChatGPT rewrite makes student writing worse. I gave ChatGPT the introductory paragraph from “We Don’t Care About Child Slaves” by Jennifer Vo-Nguyen of Portland State University, a student essay selected for inclusion in a college writing textbook, EmpoWORD: A Student-Centered Anthology and Handbook for College Writers.
Here is the original student-written opening:
“When you walk into the mall or any department store, your main goal is to snatch a deal, right? You scout for the prettiest dress with the lowest price or the best fitting jeans with the biggest discount. And once you find it, you go to the checkout and purchase it right away. Congratulations—now it’s all yours! But here’s the thing: the item that you just purchased could have possibly been made from the sweat, blood, and tears of a six-year-old child in Vietnam. According to the International Labour Organization (ILO), one in ten Vietnamese children aged 5 to 17 are slave workers, and Vietnam is the second biggest source of imported goods to the US.”
Here is the ChatGPT rewrite (See also the full chat session transcript.):
“When you step into a mall or department store, your primary goal is often to find a great deal. You scour the racks for the prettiest dress at the lowest price or the best-fitting jeans with the biggest discount. Once you find the perfect item, you head to the checkout, purchase it, and it's all yours. But have you ever stopped to consider the true cost of that bargain? The item you just bought might have been made by the sweat, blood, and tears of a six-year-old child in Vietnam. According to the International Labour Organization (ILO), one in ten Vietnamese children aged 5 to 17 are involved in child labor, and Vietnam is a major source of imported goods to the US.”
The rewrite sounds fine, even a little smoother than the original. But it also sounds less alive. It wouldn’t get a better grade in my class. Most teachers hope students will write something that expresses their voice–their curiosity, interests, strong opinions, passions, or feelings of uncertainty. That’s where the energy is–that’s what we like to read!
ChatGPT’s rewrite also detracts from the student’s point. Jennifer Vo-Nguyen clearly intends to protest what she sees as slavery, not just labor. Using the word “slave” expresses moral outrage more than “child labor.” “Slavery” echoes the language of her sources, such as a BBC article titled “Vietnam’s Lost Children in Labyrinth of Slave Labour.”
You can see ChatGPT’s edits in the screenshot below, displayed with the EditGPT extension.
The red crossouts seem so authoritative. What if this student had taken the advice because she didn’t feel confident? Would the teacher still have chosen the more boring, AI-polished version as a model for inclusion in a textbook?
Grammarly’s basic identifications of grammar errors and suggested fixes can be helpful (I use them myself), but it’s key to stay skeptical. Sometimes the grammar suggestions are wrong, and sometimes they change the meaning completely. Remember that Grammarly is always trying to sell you its product by suggesting that you need it because it knows better.
Grammarly even claims it can do better than Dr. Martin Luther King, Junior. His “I Have a Dream” speech is surely one of the most powerful and influential speeches in American history, but when I put it into Grammarly, the app claimed to find 16 errors and 5 ways to “improve your text” for clarity. It dangled the promise of 18 “Pro tips” for paying subscribers. Some of its suggestions, such as comma insertions, were fine but unimportant. Others were just wrong. Here in the first paragraph, it highlighted the phrase “been seared” in red:
Below is a screenshot of the Grammarly explanation. It suggests that MLK should “Correct the verb” by changing “been seared” to “suffered.”
It’s true that “been seared” is passive voice, and sometimes passive voice is overused or less effective than active voice. That might be why Grammarly flagged it. Here, however, passive voice is needed. “Seared” vividly evokes suffering through burning, connecting to the “flames” mentioned later in the sentence and adding to the drama and specificity of the sentence. It reminds us of “searing,” which describes intensely damaging and fiery effects, as in “a searing critique.” Grammarly’s “suffered” is more generic. Given the powerful, epic, biblical style Dr. King was aiming for, “seared” fits beautifully and should not be sacrificed to the Grammarly “fix.”
Chatbot sentence-level style and grammar suggestions can also get in the way of more interesting, varied, and authentic expression. Chatbots are biased toward standardized English (See Don’t trust AI: it’s biased). I tested this by giving ChatGPT the opening paragraph of Dr. Vershawn Ashanti Young’s famous essay “Should Writers Use They Own English?” When I instructed it simply to “Revise this,” it rewrote the paragraph without any of the “Black English” (as Young calls it there) that Young had so consciously and defiantly embraced. When I gave it the same passage and asked “How could this be improved?” it again rewrote the passage without any Black English. This time it also injected an admonition to ensure “proper grammar.”
Removing linguistic variation goes against “Students' Right to Their Own Language,” a resolution adopted by the Conference on College Composition and Communication, part of the National Council of Teachers of English, in 1974 and reaffirmed in 2003 and 2014. It declares, “We affirm the students' right to their own patterns and varieties of language -- the dialects of their nurture or whatever dialects in which they find their own identity and style. . . . ”
As we’ll see in Getting the most out of AI feedback, some uses of AI feedback may support linguistic justice and help you build confidence in your voice and the way you might choose to reach readers through language variation and code meshing. But we’ve got to keep a tight rein on AI to make that work.
A chatbot is such an open-ended thing. The possibilities are endless; you can use any words you want to ask it for help. So where do you start? What are the best techniques? And how long will it take to learn them? Will it be too difficult if we’re not coders?
Here’s some good news: in my opinion, there’s no special technical knowledge needed to use chatbots well. Even though some are referring to prompting skills as “prompt engineering,” there is no evidence that “engineering” of any kind needs to be involved. Reading, writing, and critical thinking are the main skills needed.
You may see complicated formulas with acronyms for the kinds of things to include in your prompts, and there’s nothing wrong with those. But they’re absolutely not required. Very simple prompts that just say what you want in the words you would naturally use may serve you just as well.
I’ve come to this conclusion through my own practice, bolstered by the writing of Ethan Mollick, a well-known voice on AI in education and business, author of the New York Times bestseller Co-Intelligence: Living and Working with AI and the newsletter One Useful Thing. He reviews current research into prompt engineering and spends countless hours experimenting with the latest models. Mollick advises, “Don't aim for perfection - just start somewhere and learn as you go” (“Getting started with AI: Good enough prompting”).
Think of more ways to describe what you are looking for. How can you be more specific? Below are some examples of ways you can get more specific with a chatbot. Again, none of these are required, so don’t let this list slow you down as you prompt!
Sometimes we don’t have a clear idea of what we want or it feels overwhelming to try to articulate it. In those cases, one approach is to ask the chatbot to ask us questions about what we’re looking for. For example, when I wanted a sample argument to give my students to analyze for their final exam, I said “I need to create three new sample arguments with many of the same qualities as the samples below but with different themes. I would like two of the arguments to be about the future of AI in writing instruction. Ask me questions to help you develop the sample arguments.”
Another strategy is to break a complex request into steps; some have called this “chain-of-thought prompting.” (It’s kind of fun to have a technical term, but remember, people are just experimenting with these bots and making up catchphrases.) It can be as simple as literally telling the chatbot to “take it step by step.” Or you may get better results by telling the chatbot what the steps are. What steps would a human need to take to get to the answer? For example, if you want ideas on how you might respond to a letter from your health insurance company, you might give it the following chain of instructions:
Prompting the system step by step gives you a chance to catch problems in the chatbot’s approach and correct them as you go. In the example above, if it misrepresents a main point from the letter, you want to catch that before you let it go on to suggest strategies for responding.
Don’t judge a chatbot by its first response. Chatbots won’t tell you the same thing every time. And they don’t get their feelings hurt. So if you don’t like something about what the chatbot gives you, good! Tell it. It may be able to give you something better.
This is so simple, but somehow it isn’t intuitive for most of us when we first interact with a bot. We might be tempted to turn away and later tell a friend “it got X wrong” or “its style is too bland” or “its answers are simplistic.” Instead, we could tell it, “That’s wrong because… Try again.” Or “Give me another version that is more nuanced.”
If you can put your finger on what’s wrong, you have some key information that might help you get a better result. Just tell it what’s wrong and ask for another version that corrects the problem.
Even if you are not sure what is bothering you about what it gave you, if you are underwhelmed, you can ask it to give you another, more insightful response. Sometimes I just say “Please try again.”
Many people are wondering how best to prepare for future workplace uses of AI. One thing we can be sure of is that you will need a basic understanding of your field and critical thinking skills to work with AI. To prompt well, you need to know what you are looking for and to recognize when something sounds good but isn’t right. To know that, you need subject matter expertise.
That means what you’ve already been learning in school and what teachers already know how to teach will help you use AI. Writing and reading practice and understanding of a field will all help you get more out of AI. So will awareness of your own learning and thinking processes.
Here’s another way to think of it: the things you’re learning without AI will help you to use AI later on. All of your studies prepare you for capable prompting and iteration down the line.
Working with a chatbot is so open-ended–you’re in the sandbox, and the possibilities are endless. There’s not one way to proceed. One way to think about it is as a play space. When you’re stuck on a task or on how to get help, enjoy the freedom you have to try lots of different things, complain about what the AI is giving you, boss it around, try creative or strange prompts that strike you intuitively.
We can learn a lot about language models by trying weird things with them. For example, Ethan Mollick gave the Claude chatbot the text of a book and asked the bot to “remove the squid.” In another experiment, he kept prompting it with the phrase “garlic bread.” The results are entertaining, but the chat sessions also show how we can think outside the box when prompting, how the language model tries to find an existing pattern to match these odd prompts, and how inconsistent its responses are.
A playful approach to prompting is not just more fun; it means flexibility, curiosity, and openness, which are likely to lead to better results.
Updated June 26, 2025
I find that a lot of people who haven’t used AI much feel some anxiety about figuring out what to use. Choosing can be pretty simple. You don’t have to read this whole article if you just want a starting place; you can just read the beginning and stick with the primary chatbots. Beyond those, I’ve included informal recommendations based on what I use and what I can imagine being useful for students.
Remember all the cautions around privacy, bias, energy use, and more described earlier as you consider whether and how to use these systems.
There are a lot of apps out there, but they are unlikely to give better results than the chatbots from the three leading AI companies. So don’t assume you’re missing out if your friend is using an app you’ve never heard of that keeps advertising to you on YouTube. Many, perhaps most other AI apps use the same underlying technology from those same companies but with a different focus or user interface.
Since we can talk to these systems about what we want from them, general chatbots can often produce good results for many different kinds of tasks; you won’t need a specialized app. In the process of using a general chatbot, you’ll get a lot of practice with prompting.
The three most powerful chatbots have similar user interfaces and accessible free versions. They all offer paid versions for around $20/month.
The companies that make the big three chatbots actually build multiple underlying AI technologies, called large language models, that the chatbots can use. The quality you get from a chatbot depends heavily on which model it is running. To save money, companies often don’t set chatbots to default to the most sophisticated models. So take a moment to see which model it’s using. Often, you can select a more sophisticated model without upgrading your account. For example, in the screenshot below, the user has selected o3 instead of the default GPT-4o. OpenAI’s o3 and Claude Opus 4 are examples of so-called “reasoning models,” a newer type that has undergone extra training for complex reasoning tasks and also spends more time trying to work through a request step by step.
I use the systems below in my own work at times. They are (mainly) based on the same underlying models from Google, OpenAI, and Anthropic, but offer a different user interface.
We think differently when we listen and talk out loud than when we read and write. You might want to experiment with interacting with AI through audio or with using AI to help you switch between talking and writing.
ChatGPT and Gemini are said to have the best voice modes; Claude has just added voice mode as well. I sometimes talk aloud with ChatGPT while exercising or doing chores. I find it useful as a thought partner that can browse the internet and help me explore an idea while I’m moving my body instead of sitting in front of a screen.
If you want to listen to a text instead of reading it, the Microsoft Edge browser has a pretty good built-in Read Aloud feature that lets you listen to web content in a variety of synthetic voices.
To dictate my writing, I use the following:
If I’m dictating into Notes or Otter, I will often copy the transcript into a chatbot and say “Please format this transcript and clean up errors without changing the wording.”
See also Pros and cons of AI research assistance (link coming soon).
A custom chatbot is just a chatbot with an extra prompt behind the scenes shaping how it responds. Here are just a few that I use or have developed.
I don’t have direct experience with the systems below, but I know people who’ve had good experiences.
I don’t have direct experience with the systems below, and I haven’t heard the educators and AI experts I follow on social media recommending them.
As I see it, reflecting on AI feedback on your work is a relatively straightforward and low-risk way to use AI. It allows you to get help and to build your prompting skills without missing out on thinking and learning. It’s a powerful alternative to letting AI write or rewrite for you.
By AI feedback, I mean comments that stimulate your thinking as you revise. I don’t mean chatbot rewrites or chatbots feeding you ideas or sources. A tutor shouldn’t tell you what to say, and a chatbot shouldn’t either. But both can support you to figure out what changes you want to make.
Reflecting on AI feedback means you can stay in control, keep your voice, and claim your words and ideas. And it can help you build confidence in your own judgment if you stay skeptical of the suggestions. Some AI advice will be bad, or it will be fine but not for you. You are the one who knows what you want to communicate to your readers. If you develop a habit of questioning AI feedback, you’ll be more likely to question AI in other contexts, which will serve you well in the workplace and in your personal life. People who can see where AI is wrong can improve on it or set it aside when something else is needed.
Another benefit of engaging with AI feedback is that you are unlikely to violate an academic integrity policy. If you use AI for ideas, drafting, or rewriting, there will be pitfalls and gray areas around what is okay and what isn’t. If you’re reflecting on AI feedback but making your own changes, you don’t have to worry.
For all these reasons, I have focused on AI feedback since spring 2023 when I started serving as a volunteer advisor on the nonprofit app MyEssayFeedback. For four semesters, I’ve invited students to reflect on AI feedback, and they have overwhelmingly told me they find it useful. Still, I don’t see AI feedback as a replacement for human feedback; I assign peer review and tutor visits too.
In my mind, AI feedback will never be enough because chatbots never experience our writing. They convert our draft into numbers and use a giant chatbot formula to spit out other numbers that convert to the feedback we read. That text might help us think about what we want to say, but we’ve still got to share the draft with humans and see how it lands. Getting a response from a person who’s spent time reading or listening and gotten something out of what we wrote can be validating and energizing in a way chatbot feedback will never be.
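If you’re curious what “converting a draft into numbers” looks like in practice, here is a minimal sketch using the tiktoken tokenizer library. Chatbot apps never show you this step, and the encoding name here is my own assumption, but it illustrates how a sentence becomes the list of token IDs a language model actually operates on.

# Minimal sketch: how a draft's words become numbers before a language model processes them.
# Assumes the tiktoken package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding; different models use different ones

sentence = "Getting a response from a real reader can be validating."
token_ids = enc.encode(sentence)

print(token_ids)              # the numbers the model actually works with
print(enc.decode(token_ids))  # turning the numbers back into the original text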
Even after years working with AI, my first impulse is still to take the first result from a chatbot as the best it can do. But a core principle of AI literacy is iteration: pushing the chatbot toward a better result.
Chatting back in response to AI feedback supports AI literacy and your development as a writer and thinker. Whatever you're thinking or feeling as you read the feedback, wherever you're stuck about what to do next, that might be something worth articulating, and the chatbot might give you something useful in response.
Here are a few examples of the strategies described above.
Be frank and push back
Ask for clarification
Ask for a new version of the feedback
Ask about something it didn't address yet
Get it to help you explore your uncertainties
Get it to suggest writing strategies
Play with the bot
To get the most out of AI feedback, at some point we have to set it aside and use other strategies. Make sure you give yourself space to decide what you really think about AI feedback and what, if anything, you want to change in your writing as a result. Some teachers might ask you to write about your thoughts and plans after reading AI feedback; this is part of the Peer & AI Review + Reflection approach. If you’re asked to do this, see Writing a reflection about AI feedback for possible topics.
Another way to get perspective on AI feedback is to seek out human readers. Sometimes I know just what I want to do with AI feedback, and I get right to it. Other times, I need to mull it over for a day. Still other times, the AI feedback makes me realize I need human input. If you’re not sure if you agree with the AI feedback or you want more trustworthy advice, it could be a great moment to reach out to your teacher, a tutor, a classmate, friend, or family member.
Human feedback can also counteract the chatbot tendency to tell us what we want to hear. This is often called “sycophancy”; we could also call it flattery or “kissing up,” though chatbots aren’t hoping to get something from us. They’re just designed to produce things that humans tend to like. Chatbots can make us question our ideas if we ask them to, but more often, they tend to reinforce our inclinations and biases.
I ran into this problem recently when I asked Claude for feedback on a description of an upcoming workshop for teachers. I enjoyed the cycle of feedback and revision: Claude gave suggestions and kept telling me that each draft was a big improvement. I felt great when it celebrated the changes I was making because I, too, thought I was on the right track. The result was a workshop description that listed every point I was going to make and every activity I had planned. When I sent the description to the workshop organizer, she wasn’t impressed and gently suggested I cut the overwhelming detail. At first, I was indignant and felt unappreciated. After a few hours I had cooled down and decided she was right. Claude had helped me talk myself into agreeing with myself, but human teachers really would have felt overwhelmed reading all the specifics of what I meant to do.
So far I’ve focused on the easiest ways to request feedback and on expressing your specific needs in the follow-up chat. But once you’re familiar and comfortable with chatbot feedback, you’ll likely want to be more specific from the beginning. Writing your own initial feedback request, or customizing someone else’s, lets you reflect on what kind of help works best for you in your writing process. It’s also a good way to practice prompting.
Here are some things you could include in a feedback prompt, though none of these are required:
Here are a few sources of sample AI feedback prompts you can adapt or just browse for ideas:
If you do come up with a custom feedback prompt that works well for you, consider sharing it! Here’s a site where students can post prompts.
AI writing feedback can sound confident, authoritative, and academic and still be bad. Large language models like ChatGPT and the apps built on them are designed to predict likely next words, not to be right.
The downside of this is that we might be distracted or misled by AI. Jane Rosenzweig, director of the Harvard Writing Center, explores this risk in her piece "What Happens When a Novice Writer Asks ChatGPT for Editing Advice?"
The upside is that being skeptical of AI feedback just might be empowering. We might clarify our own purpose in writing as we reflect on whether the feedback aligns with what we really want to say. Even if the feedback does resonate, it's still valuable to articulate why we like it. And more practice critiquing plausible-sounding AI suggestions never hurts in an era when AI will likely have an increasing presence in our personal and professional lives.
Below are some phrases that reflect possible ways we might respond to AI feedback. There will be countless others; these are offered as a beginning to give a sense of the possibilities. Please feel free to suggest additional phrases by commenting on this page.
Note: the quoted phrases followed by asterisks were generated by a large language model as detailed in the acknowledgments.
Once we've decided what we think about the AI feedback, we can brainstorm what to do about it. We may want to write a revision plan or a list of writing tasks like brainstorming on a particular topic, rereading to clarify a point, researching to add evidence, or reorganizing a section. For more on revision strategies, see Section 11.6: Revision.
AI feedback should complement, not replace, human feedback. After all, writing is a way to relate to others; it has meaning when we use it to communicate with humans. Writers often have breakthroughs when we talk through our drafts, our reactions to the AI feedback, and any uncertainties with our instructor, a tutor, or a peer.
The above template phrases followed by asterisks (*) were adapted from ChatGPT output responding to a "Template phrases for reflecting on AI feedback prompt," ChatGPT, 25 Sep. version, OpenAI, 8 Oct. 2023, https://chat.openai.com/share/9c526e9b-b654-4c24-abb3-01a173026a63. The remaining original phrases and the organizational structure are by Anna Mills and are shared under a CC BY 4.0 license.
The pros and cons of AI for research
[1] Thanks to Jedidya Williams for an X conversation where he suggested the use of the term “formula” as a way to describe language models.
[a]Hello! Thanks for looking at this draft. Please feel free to comment if you see something that's useful, a place where it could be condensed, a point that needs evidence, something you would question, mistakes...anything.
(And since someone asked, you are welcome to share and cite this document. It would be amazing if students comment too.)
Thanks, Anna
[b]Thank you for this wonderful resource Anna, appreciate all the work that went into it and the modeling of open collaboration.
[c]Thank you so much for reading and responding, Tamara Tate!
[d]I have loved reading and learning from this, and I've adapted some of it for a slide deck for teachers of first year students. I'm wondering if you might want to include a section on data privacy, and advice to students about not uploading sensitive data somewhere. Thanks for creating and sharing this and allowing us to adapt it!
[e]Thank you so much for the comment! I'd love to see how you adapt it into a slide deck if you're willing to share...
I do think it would be good to have more about data privacy than I do. Thanks for the nudge. It's not my strongest point; if you have any resource you'd suggest I include, I'd be glad of it.
There is a short section here: https://docs.google.com/document/d/1cfs_ubpIEOUpSfSBTxQLU54jyOtMYpDYyRQLcLw4Kn4/edit?tab=t.0#heading=h.7i1qkfamh70t
[g]It would be great to have a year listed for these resources.
[h]Good point. I wanted to keep it simple, but some of these are dated and that's worth noting.
[i]Also, thank you so much for this document. It’s incredible!
[j]It seems very important to also include the fact that most (all?) models intentionally include an element of randomness so that responses to the exact same query can produce different responses. In practical terms and in most "normal" situations, the tools are not completely predictable despite being computer programs and this is by design.
[k]Thank you! This is a great point.
[m]Not directly related, but I enjoy the quirk that (at least at one point) people were getting better results from chatbots by prompting it to reply as a Star Fleet captain (fair enough, I should try that to inspire myself).
[n]I love that! It's kind of a joke on the engineers that they can't explain this yet! I wonder if they will ever be able to with the kind of explainability research Anthropic is doing...
[o]This is really helpful because it's getting harder as the models get better to make visible the AI's ummmm stupidity :) I want students to experience the prediction vs thinking and this is a great way to make it apparent and transparent. @bharnick@uci.edu this might come in handy for your classes.
[p]I'd like to suggest that what Gen AI tools do is offer humans an opportunity to examine and reflect on what they want to communicate. The comparison of two disparate items activity might be interesting alongside an activity where a prompt provides feedback on student writing. Then ask students to reflect after each activity about how their feelings and beliefs shift and what they take away from each activity as well as the whole process.
[r]i love this early AI literacy activity & will use it!
[u]I really, really appreciate this important section!!!!!! Beautifully written too-- clear and powerful
[v]Before (very helpfully and rightly) you go into a substantial chunk in the textbook re why we shouldn't trust AI, would it make sense to segway in some way, acknowledging that there are ways we might want to use AI, but that first we need to understand ways we shouldn't/why?
[w]also, this section is more straightforward than the previous one-- maybe switch order or move first caution later in this section?
[x]That's a good idea... thanks
[z]interestingly, those of us unfamiliar with the area would assume that the first wasn't true, pointing toward the problem: You need expertise to recognize the botshit
[aa]add a sentence explaining why?
[ab]But you're a person with a lot of reservations (as am I) who still seems to finds them useful, no? Are you hoping that students will not use them at all or are you arguing for hardcore fact checking?
[ac]The latter... though I would respect if they decide not to use them. Maybe this is confusing because I'm waiting too long to clarify my position.
[ad]seems similar to nobody's home idea from above, what's the difference between it doesn't understand & it doesn't think? Should those two sections go next to each other?
[ae]Maybe so, thanks...
[af]there's a great early GenAI article by MIT researchers that says we have a hard time remembering this precisely bc it SOUNDS human and produces human speech so well-- it SOUNDS like its thinking, but Good speech doesn't equal good thinking...
[ag]Yes, I think I've come across that but not sure whose it is...I'll try to look it up
[ah]here's the paper. Sorry, I was signed in through my other email account, which I rarely check https://arxiv.org/pdf/2301.06627
[ai]This is important. As educators, and citizens. The language of the ToS on these things, imply even if you aren't opting in, the data is used for future training. It is anonymized etc.
But I want to point out, this is to our benefit. I strongly encourage people to engage specifically in conversations with GPTs (as they are a simple way to share prompts) designed to cause friction. These chats your input as well as the model output then get re-ingested later in the training cycle as new synthetic data.
I took this title and paragraph and have placed it into several GPTs I have created. and I will provide a link to the fresh gpt after each. In the end I will show you what I mean by engaging. Many of these have specific uses but remember if you are chatting with one, you can ask it to drop into conversation and stop doing the analysis it was doing. You will also notice some of them have a series of questions meant to spark further conversation (I'm not talking about the normal "call to action" questions that the models use for engagement)
I will provide the full prompt text behind the GPTs to any who would like it. Some of them that I will not link I have created a repository for located here: https://git.sdf.org/cheshirecatalyst/prompts
This one I use as A lite tool, broad spectrum analysis of input text. Please try it with something you agree with. and then in a new chat something you disagree with.
https://chatgpt.com/share/68844098-963c-8011-bd0c-4469b11e9db0
https://chatgpt.com/g/g-67ffbda348108191a2c7252797986f2e-framewalker
This next one is specifically about writing analysis. And various writing related research. I went ahead and engaged with a larger chunk of this section, and showed the ways this particular GPT can be used.
https://chatgpt.com/share/688443a9-2c80-8011-b55b-de0a3b5610cc
https://chatgpt.com/g/g-67fe743f42b481919e7ce6e496927ab8-epistemic-cube
If you'd like I can leave some more, and show other use cases or further explain. Love what you are doing here.
[aj]although black men are disproportionately incarcerated relative to the total population of black men-- some students might wonder how this statistic squares with incarceration in their communities
[ak]good point
[al]i could use this example (paraphrased?) after your first quote, and the explanation of applications/why its a problem.
[am]I really appreciated this section too! I've wanted to teach bias but haven't had anything that gathers info for students (and me) in one place
[an]This is so important-- but are the newer models better at this?
[ao]They're still not reliable. They may seem to be better at citing sources but they will get the citations wrong or make up a source when they have no access to knowledge of their inner workings...
[ap]I also tell students that in academic writing, its not just about creativity-- referencing others boosts your credibility bc that's what academic value. Its a different frame of mind than "its better bc its my idea" which is more of a K-12 emphasis, i think
[aq]how does positionality relate to citing sources? transition here?
[ar]Ah, good point!
[as]Not sure if it's at all helpful, but I was musing about the relationship between citation, Acknowledgement Sections, and Positionality Statements towards the end of this rambly Medium post from last year: https://medium.com/@writerethink/avoiding-dehumanizing-writers-in-our-critiques-of-large-language-models-5e13d64507db
[at]@whitney.gegg.harrison@gmail.com I don't know how I missed this when you posted it! This is really helpful; thank you! I am catching up on revising and finishing this text now.
[au]Glad it was helpful!
[av]interesting!
[aw]I think all this is so important. I wondered at the end of this mini-section, and elsewhere, if it would help to apply it to student writing. It seems to me this has implications not only for citing but for larger issues in learning and writing...? I also wondered if it is better at more common terms-- I'd read that it is, and my health science students (upper div) say its correct for general concepts but just super general
[ay]I've really wondered about this, though I've found some searches with links useful so that I can use the original sources, which are relevant
[az]great phrasing
[ba]My students do find it very useful to start a search on a brand new topic w AI and then go on to do their own research. Its very orienting, though has to only be a starting point that we can't completely trust
[bb]Since energy & water use is such a big concern, maybe include further readings on it? Here's the one Rebecca Y just shared today. It may be a bit long for students, but looks really good. MIT Technology Review (O'Donnell and Crown, 2025): https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
[bc]just saw your further readings list...
[bd]I've been struggling to figure out the right context for understanding the environmental impacts. Is there reason to think these are inherently much higher energy use than other new popular tech (e.g., Tiktok or the latest online game)? I expect they'll get a lot more efficient (in both training and inference) but that might be more than counterbalanced by Jevon's Paradox. And estimates vary so wildly. It's a common source of concern in my students as well. Anyhow, always looking for more clarity on this one!
[be]Thank you for this thoughtful response! I am looking for more about this too and reflecting on Jon Ippolito's work and the discussion about it on the AI in Education Google Group. https://ai-impact-risk.com/ai_energy_water_impact.html
I'd like to add to what I have eventually.
[bf]This is interesting! It could be useful to provide context for why AI needs water, though. I was a bit confused until the linked article mentioned it was a coolant.
[bg]The thing they always miss when talking about water usage tho, is the fact that the water doesn't disappear. It either warms up, runs out, gets cool again, or it heats up so much it turns into steam, goes up in the sky, condenses and come back down. Only problem in this is that it might not come down in the same place it was taken from.
[bh]Actually no. Newer models are trained on way more effective hardware, so nothing seems like they use more cooling. At least not yet.
[bi]Thanks. I reread the article, and it doesn't really justify the way I summarized it. So I'm taking that phrase out. I appreciate the comment! Do you want to recommend a source on future trends in water use?
[bj]This is a fraught topic tough to summarize, but at least a mention of Fair Use seems important here. I'm not sure what the right answer is in this context, but in most areas (simpler before genAI) I think society would benefit from more extensive open content of all kinds rather than broadening copyright restrictions further.
[bk]👍 Thanks, I take your point!
[bl]The existential threat argument is used by some to deflect attention from more concrete and immediate threats as discussed in this post: https://axbom.com/aielements/
Critical AI literacy can identify and call out ideological mystifications like these.
[bm]Thanks; it's a good point, and I've definitely seen this criticism of "critihype." However, I think it's pretty clear that many in the industry are genuinely worried, not just publicly proclaiming their systems are so powerful they're dangerous. I should put a reference for this claim about worry.
[bn]AI Snake Oil (book and blog) addresses and balances this well.
The context of how bad the field's predictions have been historically is useful. e.g., "In from three to eight years we will have a machine with the general intelligence of an average human being." - 1970, Marvin Minsky (in Life Magazine)
[bo]I love your candid and direct articulation of your position. I can relate.
[bp]Thank you, that means a lot! I figure if I don't have the answers I can contribute by being honest about it at a moment when there's a lot of pressure and most of us don't have answers...
[bq]I really like this framework for student transparency and mindfulness in the use of AI--especially as we are continually adapting our policies!
[br]Thanks, I really appreciate the comment! "Mindfulness" is an inspiring word here...
[bs]If we would like to copy this statement, what is the attribution we should use?
[bt]I don't know about this one, but here's another similar one: Monash University: Acknowledging the use of generative artificial intelligence. Reference. Describes how students can appropriately acknowledge their use of AI in their writing assignments.
https://www.monash.edu/student-academic-success/build-digital-capabilities/create-online/acknowledging-the-use-of-generative-artificial-intelligence
[bu]From an adaptation by Anna Mills of an article by Liza Long, originally part of the Writing and Artificial Intelligence section of the textbook Write What Matters, licensed CC BY NC 4.0.
[bw]What do you think about also recommending students dialogue with their instructor about artificial intelligence? Perhaps recommending some conversation-starting questions (as a kind of acknowledgement that it can be an awkward-conversation)?
It's hard to generalize advice for students when instructor preferences and views vary so widely, but I guess ... while I agree with you that what you're proposing is likely the best "defense", learning from your teacher more about what their views on this topic are constitutes something close to the best "offense".
[bx]I'm throwing out some random ideas before going into a meeting but: Does this instructor really believe in the writing process? I.e. want to see a lot of small drafts? Does this instructor value "check ins" that signify you know about your writing and can defend it?
[by]These are so helpful, thank you. I'll think about them some more...