Community Building and AI – Reflecting (in a Rush) on a Pedagogical Experiment


[I gave myself no more than 45 minutes to write this, so forgive me in advance if it is rough. This is a bit of a brain-dump, and I wish I had more time to polish it. But, such is life as a PhD student–running to a project meeting in 30 minutes. I wanted to write this, though, because I wanted to pay forward the empowerment I found in the blog post by my dear friend Stacey Margarita Johnson that sparked this activity for me. So, here goes a brain-dump, prepare for a bit of a bumpy ride!]

I’m back in a classroom after two years away from teaching (formally). I have been over the moon excited about it since I got the assignment–I’m teaching 30 undergraduates, mostly preservice content (not world language) teachers, and the course is an introduction to second language acquisition.

Alongside all of this happiness, though, has been an unease about a large proverbial elephant in our educational “room”–generative AI. World language colleagues will remember the many (and unfortunately ongoing) discussions about Google Translate and similar tools… and if I’m being honest, I find the discourse exhausting. I hesitate to overgeneralize, but I so often hear and see colleagues assuming the absolute worst of all students: that they don’t care, that they just cheat, that they are not intellectually curious and have no interest in learning. I am sure these assumptions come from painful, frustrating experiences, and that they are sometimes true. But I am also sure that they are not categorically true, and I for one refuse to always assume the worst in my students. I will NOT buy into a system of education where we assume people “should know what to do” and punish them when they don’t do what we want–that hidden curriculum stuff is not for me.

Anyways, back to the focus of today’s reflection: generative AI. The last time I was in a classroom, this technology wasn’t publicly available, and therefore wasn’t a concern. Now, two years later, everyone is talking about it. I have even published research on using ChatGPT for one possible assessment purpose. I am far from an expert, but I learn a lot from experts like Fred Poole, and I think I have some basic sense for (1) how generative AI / large language models (LLMs) work, (2) the major ethical concerns that surround them, and (3) their potential for good in society and education–a very complex picture, and one that has me overall feeling anxious and overwhelmed these days when people mention “AI” to me.

Today was my second day of the semester. I knew months ago that I wanted to dedicate the better part of a whole class period to building community consensus around use of AI, especially assuming that many of my students had probably been threatened and warned not to use it, maybe punished for using it, but probably not taught how to use it or how to think about the pros and cons (turns out, I was right).

So what did we do? (Again, forgive the lack of some details–this is stream of consciousness, trying to keep it to an hour here!)

First, using Perusall, students read the blog post that Stacey wrote. The image below shows the task that I left for them.

Perusall Task

For students who barely knew each other (they had had only one 80-minute class period together at that point), their interactions, depth of reflection, and learning from one another blew me away. A few of my favorite comments are below, just to give you a sense:

Student comments (sample) on Perusall

After this asynchronous discussion, which I also participated in (and I read every comment!), students came to class and I asked them to complete a retrieval practice warm-up on a sticky note in which they:

  1. Shared their biggest takeaway from the reading
  2. Listed as many classmates’ names as they could remember without asking

Here were the results of that (a lightly quantified thematic analysis):

  • 10 students: surprised by, and concerned about, the environmental impact
  • 3 students: surprised that AI is trained on copyright-protected content
  • 6 students: general unease about the continued development and complexity of the ethics surrounding AI
  • 1 student each:
      • shocked to learn that AI is not allowed in some spaces (e.g., governments, schools)
      • new thinking about the responsibilities of corporations and how, where possible, they should be held legally accountable
      • learning about how AI works (scraping data from the internet, with and without permission)
      • thinking about what kinds of tasks AI might be better and worse for

Two students didn’t remember a major takeaway from the article (but this is a natural part of retrieval practice anyway)–and every single student remembered at least two other people’s names, so I will take that as a win both for cognition AND for community!

After some other content unrelated to AI (reviewing our community norms, class logistics, etc.), we transitioned to our in-class AI discussion using an informal, anonymous survey about their experiences with generative AI.

The overwhelming majority had had very little exposure to, experience with, or instruction about it, and did not understand how it works, when it should be used, or the pros and cons. And yet… we are so ready to punish these kids, right? We don’t help them learn, but we hold them accountable to this hidden curriculum–as if the world were so simple that we could categorically ban this technology or not. Sigh. I digress.

After this, since quite a few students had wondered on Perusall how their teachers have used AI, I shared this with them:

Matt’s AI use summary

Students caught on quickly that, basically in their words, “you used AI after you had already used your own brain.” Yes. Correct.

(Trying to stay on my time target here, forgive me for being abrupt and concise!)

The next thing we did was by far my favorite activity of the class period, and one I will continue to use. First, I gave each student a blank sheet of copy paper. I asked them to fold it into quarters and write their name on it. Then, I gave them this scenario, asking them to put themselves in my shoes (the black text).

I then displayed the questions one by one, and gave them time to respond to each silently (on paper). In short, their responses were incredible. They were diverse, they were thoughtful, they were human-centered, and, perhaps unsurprisingly, they also reflected the vast array of “cheating” policies and punishments that students had probably been exposed to before, as well as a beautiful set of aspirations: to assume the best, to not punish students, to stay curious, and to give students second chances.

I got very emotional (pun intended, I think) when I went over these back in my office. (More on that below.)

Student AI reflection from my (teacher’s) perspective

The major takeaway from this segment? THERE ARE NO EASY SOLUTIONS. Even among the majority of students who said “talk to the student,” nobody could come up with the perfect answer to the questions of when, how, in what medium, and with what specific words to talk to a student in a way that did not make people feel attacked, belittled, defensive, or scared.

I just love that they got to experience this complexity, and they got to see me be vulnerable about how hard this is, and how there is no easy solution as a teacher who cares about learning, about people, and about equity and morality here.

Our collective consensus? Let’s not put each other in this position. When / if we (both students and me) decide to use AI for anything, let’s disclose it up front (see below). This way, we are on the same side, and we can be curious and open in our dialogue about what makes sense as a practice, what practices we might want to avoid going forward, and where the ambiguities lie. (There are many, many, many ambiguities.)

After students had a chance to exchange ideas with a peer, we concluded this segment with me sharing my current feelings about AI, and how I envision my approach. These were my words, and these are my commitments to this class:

Matt’s current thinking about AI

We ended with the final consensus that if I thought they had used AI and they hadn’t disclosed it, I would ask, “Tell me about your process on this assignment,” and I would trust what they told me. That is the kind of teacher and human I want to be. Period.

After class, I shared a few more resources that I didn’t have time to mention, which I hope they will engage with:

  1. This brilliant infographic which I found in this tweet, which overviews environmental concerns about AI.
  2. This brilliant infographic from ongoing work at MIT, which I found in this tweet, which is directed at teachers. I should probably send this out in my building.
  3. This paper by educational titans that I really look up to, which I think offers us a lot of food for thought.

Coming back to my office, I was again blown away by the empathy they showed in putting themselves in my emotional shoes (see word cloud below), the depth of thought they gave to the different courses of action I might take, and the caring, forgiving, learning-centered recommendations that they made to help me make a kind, humane, AND educationally appropriate decision if confronted with student AI use.

Student responses (size of word indicates frequency of occurrence)

Note: This word cloud was generated using this website, after I used Otter.ai to transcribe the list of emotions students wrote on paper (which I read aloud).

These kids are going to be alright. I am going to be alright. I am tormented and conflicted by AI. Truly I am. But I am so sure that this semester, this community, these students–we will be alright.