AI Policy and Agreement
SOC 208, Fall 2025, Prof. Delehanty
AI chatbots, such as ChatGPT, aren’t actually intelligent. Instead, they are pattern identifiers. Here is how they work: trained on enormous quantities of text, they learn statistical patterns in that text and then generate whatever sequence of words is most likely, according to those patterns, to follow your prompt.
In other words, AI gives you the best approximation it can of what it determines you want, based on patterns it has been programmed to find in whatever data it was trained on.
This means that by definition, AI cannot produce original thoughts. It can only analyze existing information and then provide you with a combination of letters, numbers, etc. that aligns with patterns it identifies. For example, when you ask an AI chatbot for a list of citations, it doesn’t usually give you real citations. Instead, it produces text that it has determined looks like citations, based on the characteristics of whatever citations it finds in its training data. For this reason, AI-generated bibliographies very often contain citations to books and papers that have never been written. Recently, the Trump administration provided Congress with a “Make America Healthy Again” report containing seven citations to academic publications that do not actually exist.
To put it another way, AI is a bullshit machine. I don’t mean that everything it produces is false. As philosopher Harry Frankfurt explains, bullshitters present information without concern for whether it is true or not, in order to impress, persuade, or advance a narrative. This is what AI does. It produces information that is plausible, but truth-neutral. It does not—cannot—know or care whether the information it gives you is true. It can only determine whether its output fits the patterns it was programmed to look for in the data it has access to.
This means that when you use AI for writing, reading, summarizing texts, distilling notes, etc., you are bullshitting. Not lying, per se, but doing your work without regard to its quality or its veracity. Generating meaningless words on a page for no other purpose than to fill blank space. Letting a machine do your thinking for you. But the machine can’t actually think.
This is a problem, because thinking is how we learn. Thinking happens through repeated intentional practices such as reading, concept mapping, summarizing, memoing, comparing, and writing. Even (especially!) when it’s hard. In my view, the two most important outcomes of a Clark education are 1) learning to think for yourself, and 2) learning how to communicate that thinking to others in speech and writing. Using AI undercuts these skills because it means you aren’t actually thinking or communicating for yourself at all. And when you are not thinking, and not learning to communicate your thoughts, you are not meeting the learning objectives of this course, developing the skills you came to Clark to get, or obtaining the education that you are paying so much for.
For these reasons (as well as others that I’m happy to discuss), students in this class must agree not to use generative AI in any substantive[1] capacity for any course-related work. No asking it for summaries of texts. No using it to organize your notes. And certainly no using it to produce, revise, or edit the written work you submit to me.
This policy includes Grammarly and similar services (because by reformatting your writing for you, they stop you from learning how to communicate your thoughts yourself). I would much rather read your own imperfect prose than the robotic nonsense that Grammarly turns your work into. Only by editing and polishing your writing yourself can you develop your own voice, that is, learn to communicate effectively. Don’t worry, I don’t take points off for the kinds of errors that students typically use Grammarly to “fix,” and I don’t insist on the formal academic writing rules that students tend to worry about. Writing need not “sound smart” to be good.
For these same reasons, I also pledge to you that I won’t use AI for any substantive part of my work in this course. Every email you get from me will be written by me personally. Every assignment prompt, exam question, and piece of commentary on your work will be my own. I will not use AI to draft lectures, generate reading summaries, annotate PDFs, organize my notes, grade your work, or anything else. In an era of “AI-native universities,” I will be an AI-free professor.
Therefore, to promote a productive learning environment for all, students must agree to the following terms:[2]
Student Printed Name: _____________________________________________________
Student Signature and Date: _________________________________________________
Additional Resources: Alternatives to AI
I acknowledge that AI can be a tempting shortcut, that all your friends are using it, that you may be using it in other courses, that some professors may say it’s okay to use it, and that it’s hard to break established habits even when you want to. To help you work through these challenges, below are some alternatives to using AI. Try these out when you feel the itch to call up ChatGPT. And know that I grade work carefully and supportively. You won’t be penalized for minor imperfections in your writing or for not catching every nuance of every reading, and you’ll be rewarded for working through challenges authentically, even if that results in errors or incomplete thoughts.
For generating ideas, drafts, etc.
For summarizing challenging texts or passages
For polishing your prose (as an alternative to Grammarly)
[1] In this context, substantive means “involving thinking about the course material or your analysis thereof.” For example, I don’t mind if you use AI for formulaic tasks such as alphabetizing a list of citations, converting citations from one bibliographic style to another, or generating a table from a list of numbers. These kinds of pattern-based data entry tasks are great uses for AI. If you have questions about whether a particular use is allowed, please ask.
[2] As with any policy, I am happy to discuss questions and concerns. While agreement to the policy is required, I encourage dialogue and discussion of its purpose, implications, and enforcement throughout the semester.
[3] Exceptions will be considered for non-native speakers of English who wish to use AI translation services. Students interested in this option must meet with me first to establish the parameters. Other exceptions may be possible as well, most likely for students registered with SAS, but all will require an in-person conversation first to determine what, if any, use is acceptable. No exceptions are guaranteed.
[4] Students will always be given the chance to explain their work. If I suspect you of AI use, I’ll first ask to talk about it in a non-judgmental manner. If you can show that the work is your own, no penalty will be assessed.