Policies related to ChatGPT and other AI Tools

What you should know about AI platforms

AI writing platforms have become capable enough to write essays and emails, create apps and rubrics, help with Excel, and handle nearly any writing task that relies on linguistic patterns. You should know how to use these platforms. In this class, we’ll practice learning and thinking with them.

They will be particularly helpful in the following situations, outlined in AUA’s ChatGPT (AI) in Education guide:

  • improving equity, since more students can have access to personalized learning, tutoring strategies, and scaffolding;
  • saving time, e.g., when brainstorming or troubleshooting;
  • motivating learners when they feel stuck or unsure of how to move forward with a certain task; and
  • developing certain critical thinking skills.

But there are clear limitations. Students should be aware of the following:

  • AI platforms rely on language patterns to predict what an answer to a prompt should look like. They aren’t “thinking” about the right response the way a student would.
  • AI platforms excel at predictive text and pattern recognition but struggle with accuracy. ChatGPT will even make up information (it “hallucinates”) that sounds convincing but isn’t true. Internet-connected platforms have not solved this problem: Bing Chat (which is based on GPT-4) and Google Bard are connected to the internet and still hallucinate. If you are looking for factual information, assume every output may include fabricated details.
  • AI platforms have bias. They have been trained on datasets that contain worldviews and assumptions and will replicate those ways of thinking about the world. Critical thinking strategies are especially important when engaging with AI-generated text.  
  • Apps such as ChatGPT depersonalize your writing. Overreliance may lead to a lack of voice and distinctive style—rhetorical strategies that are crucial for effective writing.

Our Course Principles for using AI

As we learn with AI platforms, there are two principles we use to guide our class policy on AI use:

  1. Cognitive dimension: Working with AI should not reduce your ability to think clearly. We will practice using AI to facilitate—rather than hinder—learning.
  2. Ethical dimension: Students using AI should be transparent about their use and make sure it aligns with academic integrity.

With those principles in mind, here are some policies that will be enforced in our course:

  • AI Policy I: AI use is encouraged with certain tasks, especially to help with preparation and editing. Students are invited to use AI platforms to help prepare for assignments and projects, e.g., to help with brainstorming or to see what a completed essay might look like. In fact, one way to view ChatGPT is as a simulation platform: it can quickly generate a variety of outputs that are flawed but helpful for seeing things differently. I also welcome you to use AI tools to help revise and edit your work, e.g., to help identify flaws in reasoning or spot confusing or underdeveloped paragraphs.
  • AI Policy II: Major assignments (such as essays) must be at least 50% non-generated. AI platforms can be used to help with aspects of the writing process, including some early drafting. However, at least 50% of each essay (and other major assignment) must be your own work and not generated, unless specified otherwise. See AI Policy III for how to acknowledge AI use.
  • AI Policy III: AI use must be tracked and acknowledged. If you used a generative AI program such as ChatGPT, Quillbot, or Grammarly to assist with your writing beyond spell-check or grammar suggestions, you must acknowledge its use by following the guidelines provided in Monash University’s Acknowledging the Use of Generative Artificial Intelligence: specify how and where your readers can expect to see the impact, and include an Appendix for the assignment that shows what aspects were generated. ChatGPT now includes the ability to share links to conversations; you can also use extensions such as ShareGPT to share your ChatGPT conversations in the Appendix; and/or you can include screenshots. [meta note: I acknowledge using ChatGPT on March 13, 2023 to help revise this paragraph for clarity. Here’s a link to my ChatGPT conversation.]
  • AI Policy IV: Any writing, media, or other submissions not explicitly identified as AI-generated will be assumed to be original to the student. Submitting AI-generated work without identifying it as such will be considered a violation of CWI’s Standards of Conduct, 2.1.1.1: “Submitting the work of another person or entity as your own.” In such violations, students will receive a “0” until the submission has been modified to align with AI Policy II above. If I suspect a student has used generative AI without acknowledging it, I will contact them before marking down the assignment.

As AI tools become increasingly embedded in existing technologies, students will enter gray areas that don’t obviously align with the policies above. If a student is unsure of whether and how much of a submission has been AI-generated, or whether they are in violation of a certain policy, they should reach out to the instructor and ask for guidance. 

Course Policies related to ChatGPT and other AI Tools by Joel Gladd, Ph.D. is licensed under a Creative Commons Attribution 4.0 International License.
