AI for Research Assistance: Skeptical Approaches
Anna Mills, English Instructor, College of Marin
From a keynote address to GICOIL
April 19, 2024
Licensed CC BY NC 4.0
Could AI be the best research assistant ever?
“I have basically found that it is the best research assistant I’ve ever had. So now if I’m looking up something for a column or preparing for a podcast interview, I do consult with generative AI almost every day for ideas and brainstorming.
And just things like research — make me a timeline of all the major cyber attacks in the last 10 years, or something like that. And of course I will fact-check that research before I use it in a piece, just like I would with any research assistant.”
-Kevin Roose on the Hard Fork podcast for The New York Times Opinion, April 5, 2024
What skills do you need to work with generative AI?
To do this, you need critical reading, careful fact-checking, and a healthy dose of skepticism.
Let’s cultivate skepticism as we explore AI research tools
We’ve probably heard that AI makes things up. But what about when it works with real, current sources?
Perplexity combines search and chat
Try it! You can go back to your earlier experiment with Perplexity and retry it with the default “All” focus.
It searches the Internet and appears to base its answers on the top sources it finds.
AI + real sources = good and bad news
How does this work for academic research? Let’s look at Elicit: “The AI Research Assistant”
Elicit answers your research question in its own words
Instead of searching on “teacher shortages students effects,”
then again on “educator shortages” and “instructor shortages” with “impacts,”
the student can simply ask the question once, in their own words.
Elicit lists papers and summarizes their elements
My students enjoyed Elicit’s intuitive interface and immediate response to their questions.
In one test, Elicit’s synthesis addressed a different question from the one I asked.
Question: Do language models trained partly on AI-generated text perform worse than ones trained only on human text?
Its answer was about detection of AI text and comparison of the quality of human writing and AI text, not about how training data affects performance.
Can we help students practice catching this kind of misinterpretation?
Elicit’s one-sentence summaries of papers sometimes miss key points.
Elicit’s summary of “Student Perceptions of AI-Powered Writing Tools: Towards Individualized Teaching Strategies” by Michael Burkhard: “AI-powered writing tools can be used by students for text translation, to improve spelling or for rewriting and summarizing texts.”
But the real abstract includes this other central point: “[S]tudents may need guidance from the teacher in interacting with those tools, to prevent the risk of misapplication. …Depending on the different student types, individualized teaching strategies might be helpful to promote or urge caution in the use of these tools.”
Elicit’s “main findings” column describes this better, but the user has to specifically choose that option.
A variety of AI research apps offer similar functionality to Elicit.
Consensus.AI attempts to assess the level of agreement among scholars
The “Consensus Meter”
The “Consensus Meter” looks authoritative and quantitative with its percent ratings, but it comes with warnings and would be hard to double check.
If you have one research paper, Keenious helps you find related ones
ResearchRabbit.AI, “Spotify for papers”
AI features common in these apps are now appearing within academic databases themselves. AI functionality is going mainstream in the research process.
So how do we guide students to use AI for research wisely?
Let’s be skeptical of the efficiencies promised by AI apps. Much of the thinking happens in that inefficient reading time.
SciSpace’s tag line is “Do hours worth of reading in minutes”
Elicit.org’s taglines, including “The AI Research Assistant” quoted above, make similar efficiency promises.
There are losses with such seeming efficiency. Emily Bender and Chirag Shah have raised concerns about these search-LLM combinations.
In “Situating Search,” Bender and Shah argue that “removing or reducing interactions in an effort to retrieve presumably more relevant information can be detrimental to many fundamental aspects of search, including information verification, information literacy, and serendipity.”
Proceedings of the 2022 Conference on Human Information Interaction and Retrieval, March 2022
Even the systems that search the Internet and databases will misrepresent and make things up
Example of an error in summary: After listening to Ezra Klein’s podcast, I asked Perplexity, “What does Ezra Klein think AI will do to the Internet?”
Perplexity.AI’s answer attributed the claim to Klein himself.
But no! His guest Nilay Patel said that, as the footnoted source indicates!
Let’s make sure students practice checking how AI handles information
Invite students to try out one of these systems that purport to cite their sources and/or aid with research. Ask them to find something the AI missed.
One lesson: “Fact-Checking Auto-Generated AI Hype”
I asked students to fact-check a list of claims and sources generated by ChatGPT. They commented in the margins of a chat session transcript, speaking back to and correcting ChatGPT’s handling of sources.
See this description of the assignment with materials and samples, published in TextGenEd: Teaching with Text Generation Technologies from the Writing Across the Curriculum Clearinghouse.
ChatGPT misinformation from a chat session on surprising AI facts
“AI Can Decode Ancient Scripts”: ChatGPT supported this claim with a citation to a paper supposedly authored by “Yann Bengio.”
There’s no such paper and no such author!
What happened? Yann LeCun + Yoshua Bengio = Yann Bengio?
Yann LeCun and Yoshua Bengio are computer scientists considered “godfathers” of AI who have collaborated. ChatGPT combined their names.
ChatGPT generated the claim, “AI creates original art and music.” I annotated its supposed source and shared this with students.
Students also practiced assessing ChatGPT’s explanations for why sources were credible
ChatGPT output cited the Facebook AI blog: “While a company blog might not be a traditional academic source, it's a primary source in this case because it's directly from the team that conducted the research.”
The students pushed back on the idea that a company blog is credible just because it contains internal company information.
What will we do if we ask students to use AI and the students don’t want to?
If you incorporate a language model, give students a comparable alternative in case they have privacy or data rights concerns
Further resources for ideas on teaching with and about AI
Collections of ideas and tested pedagogical practices.
The AI Pedagogy Project from Harvard's metaLAB
TextGenEd: Teaching with Text Generation Technologies
Edited by Annette Vee, Tim Laquintano & Carly Schnitzler
And published by the Writing Across the Curriculum Clearinghouse
Browse, comment, and share your own informal reflections on the Exploring AI Pedagogy site from the MLA/CCCC Task Force on Writing and AI
One more reason why we need to teach discerning, skeptical approaches to AI: We and our students can help shape the future of the information landscape and mitigate harms.
From the Ezra Klein Show interview with Nilay Patel for New York Times Opinion, April 5, 2024. Patel is the editor-in-chief of The Verge.
EZRA KLEIN: What is A.I. doing to the internet right now?
NILAY PATEL: It is flooding our distribution channels with a cannon-blast of — at best — C+ content that I think is breaking those distribution channels…. I think right now it’s higher than people think, the amount of A.I. generated noise, and it is about to go to infinity.
EZRA KLEIN: What happens when this flood of A.I. content gets better? What happens when it doesn’t feel like garbage anymore? What happens when we don’t know if there’s a person on the other end of what we’re seeing or reading or hearing?
With AI, knowing what’s true and where information comes from will keep being important, and will get more complicated. How will we as a society shape this?
The bottom line: let’s get to know AI. Our voices are needed!
Be curious, be bold.
If we work in education, we likely have critical thinking and communication skills that will help us use AI.
Our students need our guidance, and our voices are needed in the larger policy conversations around AI in society.
Questions or comments? Thank you, and feel free to get in touch!
Twitter/X: @EnglishOER
LinkedIn: anna-mills-oer
Slides open for commenting: https://bit.ly/skepticalAIresearch
This presentation is shared under a CC BY NC 4.0 license.