Conversations about AI and Writing
Defining our terms: generative AI
Generative artificial intelligence is often conflated with artificial general intelligence (AGI), the human-like, seemingly sentient AI that remains the stuff of science fiction. Generative AI, by contrast, refers to computer systems that can produce, or generate, various forms of traditionally human expression as digital content, including language, images, video, and music. Large language models (LLMs) are the subset of generative AI used to produce text-based formats such as prose, poetry, and even programming code. The GPT family of LLMs currently enjoys the most public recognition, but others are available; GPT stands for generative pre-trained transformer, with each piece of that name carrying a specific technical meaning.
Defining our terms: LLMs
LLMs work by using statistics and probability to predict the next token (a short chunk of text: a word, part of a word, or a punctuation mark) in an ongoing sequence, thereby “spelling out” words, phrases, and entire sentences and paragraphs. The effect is not unlike autocomplete, only far more powerful. LLMs are trained on vast bodies of preexisting text (such as content from the Internet), which to some extent predetermines their output. The text a model generates is original in the sense that its combinations of letters and words generally have no exact match in the training documents, yet it is also unoriginal in that it is determined by patterns in the training data. The same language model may generate a variety of different sequences in response to the same prompt, and a model cannot reliably report which sources in its training data contributed to any given output.
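The prediction idea described above can be illustrated in miniature. The toy sketch below is a hypothetical character-level bigram model, not how production LLMs are actually built: it simply counts, in a small training text, which character most often follows each character, then “autocompletes” by repeatedly appending the most frequent follower. All names (train_bigram, predict_next, generate) and the sample corpus are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each character, count how often each other character follows it."""
    counts = defaultdict(Counter)
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, ch):
    """Return the statistically most likely character to follow ch."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

def generate(counts, seed, length):
    """Autocomplete: repeatedly append the most likely next character."""
    out = seed
    for _ in range(length):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out += nxt
    return out

# A tiny "training corpus"; real models are trained on vastly larger text.
corpus = "the theory of the thing is that the theme repeats"
model = train_bigram(corpus)
print(predict_next(model, "t"))   # 'h' follows 't' most often in this corpus
print(generate(model, "t", 2))    # greedy completion yields "the"
```

The sketch also shows why model output is “unoriginal” in the sense described above: the completions are wholly determined by frequency patterns in the training text, even though the generated string need not appear verbatim in it. Real LLMs replace these simple counts with learned probabilities over tokens, conditioned on far longer contexts.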
Framing Questions:
https://padlet.com/zurhellenss/wai-2024-l5ryfuvmthwb74os
Principle 1: Shift from a rules-based “honor code” approach to a values-based “academic integrity” approach.
Principle 2: Promote transparency and accountability in the use, reporting, and citing of AI. This includes stating how faculty might use AI-detection tools in assessing students’ work.
Principle 3: Be collegial and collaborative, not adversarial or combative, when talking with students about whether they used AI to complete an assignment.
Principle 4: Consider issues of access and equity if you decide students can use AI in your class.
Principle 5: Do not apply a one-size-fits-all policy for students’ use of AI across all of your courses or across academic disciplines; the appropriate use of AI can differ significantly from course to course and from discipline to discipline.
Principle 6: Evaluate the role of writing in your class when determining AI use by your students.
Principle 7: Focus on what is core to the learning process in your class when defining students’ appropriate use of AI.