Help Sheet: Resisting AI Mania in Schools

K-12 educators are under increasing pressure to use—and have students use—a wide range of AI tools. (The term “AI” is used loosely here, just as it is by many purveyors and boosters.) Even those who envision benefits to schools from this fast-evolving category of tech should approach the well-funded AI-in-education campaign with skepticism and caution: many of the primary arguments for having teachers actively use AI tools and for introducing students to AI as early as kindergarten are questionable or outright fallacious. What follows are four of the most common arguments, with rebuttals and links to sources. I have not attempted balance, in part because so much pro-AI messaging is already out there and because discussion of risks and costs is so often minimized in favor of hope or resignation. -ALF

Argument: “Schools need to prepare students for the jobs of the future.”

  • The skills employers seek haven’t changed much over the decades—and include a lot of “soft skills” like initiative, problem-solving, communication, and critical thinking.
  • Early research is showing that using generative AI can degrade these key skills:
  • An MIT study showed adults using ChatGPT to help write essays “had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’” Critically, “ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”
  • A business school study found that those who used AI tools often had worse critical thinking skills, an effect “mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores.”
  • Another study revealed that learners using ChatGPT “engaged less in metacognitive activities…For instance, learners in the AI group frequently looped back to ChatGPT for feedback rather than reflecting independently. This dependency not only undermines critical thinking but also risks long-term skill stagnation.”
  • A review of the research found that student overreliance on AI had a “significant impact…on essential cognitive abilities, including decision-making, critical thinking, and analytical reasoning.”
  • Educators don’t enter the profession just to churn out workers. They are focused on the whole child in front of them and the challenges that child faces in learning a range of timeless skills. Many US districts and schools have shifted focus in recent decades toward STEM based on research showing the “fastest-growing fields will require significant math and science preparation,” a broad strategy that stays focused on essential skills.
  • We don’t know what the jobs of the future are. Educators can’t be expected to predict the specific tech skills employers will want, especially when tech is evolving so rapidly. They are “teachers, not time travellers.” Bill Gates doesn’t know (“What will jobs be like?” he asked). Companies don’t know how AI might transform their businesses and the work their employees would then do: MIT found 95% of corporations that have invested in GenAI are so far “getting zero return.”
  • Teachers should focus on their mission. Some specialists (e.g., computer science teachers) may have reasons to work with AI, but AI use works against many teachers’ core objectives. English teachers, for example, are meant to teach students how to read, think about, and discuss texts; conduct their own research; analyze rhetoric and literature; and express themselves in writing. History professor Cate Denial says, “There is nothing that I ask my students to do in my classes that benefits from being done by generative AI.”
  • Human intervention—not tech—is the difference maker in states such as Alabama that are achieving large gains in reading and math test scores. Overall, scores in the past decade-plus have disappointed despite the large sums districts have spent on tech. K-12 ed tech spending is nonetheless forecast to reach $132 billion in 2032, in good part due to AI mania. Kids deserve humanity.

Argument: “AI is a tool, just like a calculator.”

  • Calculators don’t provide factually wrong answers, but AI tools have. Last year, Google’s AI search returned, among other falsehoods, that cats have gone to the moon, that Barack Obama is Muslim, and that glue goes on pizza. Even though AI tools have improved and are expected to keep improving, children in schools shouldn’t be used as tech firms’ guinea pigs for undertested, unregulated products, especially while AI firms lobby elected officials to actively resist regulation. A new BBC study found that though chatbots had improved on news queries, “about 45% of all AI chatbot answers had at least one significant issue with accuracy, sourcing, distinguishing opinion from fact, and providing context.”
  • Calculators don’t provide dangerous, even deadly feedback. In one study, a “chatbot recommended that a user, who said they were recovering from addiction, take a ‘small hit’ of methamphetamine” because, it said, it’s “what makes you able to do your job to the best of your ability.” Users have received threatening messages from chatbots. And children are more likely than adults to see AI as sentient.
  • Calculators don’t pose mental health risks. They aren’t potentially addictive, don’t encourage repeated use, and don’t flatter, direct, or manipulate. Chatbots have been designed to do all of these things—leading to bad mental health outcomes for some users, including those profiled in a New York Times report. One in five high schoolers has had a romantic relationship with AI or knows someone who has. Alleging a chatbot urged their teen to die by suicide, parents in Florida filed a lawsuit against its maker.
  • Calculators don’t lie. Chatbots, however, have misled users. Writer Amanda Guinzburg shared interactions with one she had asked to describe several essays. It spewed out invented material, showing it hadn’t actually accessed and processed the essays. After much prodding, it “admitted” it had only acted as though it had done the requested work, spat out mea culpas—and then went on to invent, or “lie,” again. This problem is worsening as chatbots have “moved away from declining to answer questions…nonresponse rates fell from 31[%] in August 2024 to 0[%] in August 2025.”
  • Calculators can’t be used to spread propaganda. AI tools, including those meant for schools, can, and that should worry us. Law professor Eric Muller, in a back-and-forth with SchoolAI’s “Anne Frank” character, had a “helluva time trying to get her to say a bad word about Nazis.” In an era of increasing government censorship of literature and history and rising influence of tech moguls on government policy and spending, AI tools have great potential as vehicles for propaganda. Elon Musk, for example, has been open about retraining his AI chatbot Grok to reflect his extreme views.
  • Calculators can’t violate children’s privacy. Many of the AI products marketed to schools and teachers “do not sufficiently protect students’ personal data,” according to education news site Chalkbeat. The site suggests that with “robust training” for teachers, children’s privacy can be kept safe, though teacher professional development is in reality often lacking. It also notes that laws require schools to protect certain information, but that is less than reassuring when massive data breaches, including in school districts, are common.
  • Using calculators more doesn’t create worse outcomes. Schools in which teachers and students use AI more, though, often have higher levels of “exposure to data breaches, troubling interactions between students and AI and AI-generated deepfakes, or manipulated videos or photos that can be used to sexually harass and bully students,” the Center for Democracy and Technology found.
  • The work students ask a calculator to do is different from the work ChatGPT does. As John Warner, author of More Than Words, a book about AI and writing, explains, “A calculator doing long division…is not robbing students of experiences that they must do over and over again to continue to learn. The same is not true of AI like ChatGPT and writing.”
  • A far better analogue: a smartphone. More states and schools are restricting or banning phones in classrooms—once touted as an encyclopedia in a kid’s pocket—because they can be addictive and impede attention, engagement, and learning. Adults shouldn’t have to learn the same lesson twice, and so soon: don’t let a technology proliferate in schools while ignoring its risks until well after the harm is done.

Argument: “AI won’t replace teachers, but it will save them time and improve their effectiveness.”

  • Adding edtech does not necessarily save teachers time:
  • A recent study found that learning management systems sold to schools over the past decade as time-savers (e.g., Google Classroom, Canvas) tend to be burdensome and contribute to burnout.
  • In one study, software developers who used AI tools took 19% more time to do their work, yet because they expected to save time, they believed their work had been sped up by 20%. This gap suggests self-reported time savings from AI are suspect.
  • Costly AI textbooks rushed into use in South Korea increased teacher workload while upping screen time for students.
  • “Extra time” is rarely returned to teachers. Boosters tell teachers to use AI tools to grade, prep lessons, or differentiate materials so they’ll have more time with students. But there are always new initiatives, duties, or assignments—the unpaid work districts rely on—to absorb that promised time. In a culture of austerity and spending cuts, teachers are likely instead to be assigned more students; when class sizes grow, students get less attention and positions can be cut. A study by economists found workers “with higher exposure to generative AI experienced a significant increase in work hours and a decrease in leisure time.” Any productivity gains from ChatGPT did not accrue to the workers.
  • AI can’t replace what teachers do, but that doesn’t mean teachers won’t be replaced.
  • Schools are already doing it: Arizona approved a charter school in which students spend mornings working with AI and the role of the teacher is reduced to “guide.” AI-driven schools like this one are spreading across the US.
  • Ed tech expert Neil Selwyn argues that those in “industry and policy circles…hostile to the idea of expensively trained expert professional educators who have [tenure], pension rights and union protection… [welcome] AI replacement as a way of undermining the status of the professional teacher.”
  • Tech firms have been selling schools untested products for years. Technophilia has led to students being on screens for hours in school even when phones are banned. Jess Grose explains, “Companies never had to prove that devices or software, broadly speaking, helped students learn before those devices had wormed their way into America’s…schools.”  AI appears to be no different.
  • Efficiency is not effectiveness. “Speed and efficiency are not educational values…When it comes to learning, what matters is the learning. We have to be careful not to fall into the trap of privileging speedier outcomes.” Many AI tools focus on the product rather than the process, or provide shortcuts through important processes like brainstorming, outlining, and discussion. One of the few controlled studies conducted with high school students showed that students who used AI tutors did worse once the tutors were removed than students who never used them.
  • Novelty doesn’t mean effectiveness. Effective teachers possess a constellation of dispositions, knowledge, and skills; tech skills are just one. The best collaborate with colleagues and parents and “foster positive relationships with students and create a motivating and supportive classroom,” deeply human things. They use “evidence-based practices.” AI tools have not been proven effective, while much money has already been wasted on ineffective edtech. An obsession with the new hasn’t served schools well.
  • AI has the potential to de-skill teachers. It's not just students whose skills can be degraded by using AI. Research has shown this to be the case in other fields: endoscopists who regularly used AI to help in colonoscopies became less skilled at detecting polyps.  
  • AI mania is moving professional development away from helping teachers improve in areas where there is deep evidence of effective strategies. The focus of much PD now is AI proficiency, an enormous ask when, as Dan McQuillan says, engineers “admit that no-one really knows what's going on inside these models. Let that sink in for a moment: we're in the midst of a giant social experiment that pivots around a technology whose inner workings are unpredictable and opaque.”

Argument: “Students are already using AI, so we have to teach them ethical use.”

  • If schools want ethical students, teach ethics. More students are using AI tools to cheat, an age-old problem these tools make much easier. Cheating won’t be addressed by showing students how to use this minute’s AI. That argument implies students don’t know what plagiarism is (solved by teaching about plagiarism), don’t understand academic integrity (solved by teaching and enforcing its bounds), or face weak assignments with no clear purpose (not solved by attempting to redirect students motivated and able to cheat).
  • Students can be educated on the ethics of AI without encouraging use of AI tools. They can be taught, as part of media literacy and social media safety programs, about AI’s potential and applications as well as how it can enable predation, perpetuate bias, and spread disinformation. They should be taught about the risks of AI and its various social, economic, and environmental costs. Merely nodding to these issues while integrating AI throughout schools sends a strong message: the school doesn’t really care, and neither should students.
  • Children can’t be expected to use AI responsibly when adults aren’t. Many of those pushing schools to embrace AI don’t know much about it. One example: Education Secretary Linda McMahon, who said kindergartners should be taught “A1” (a steak sauce). The LA Times introduced a biased and likely politically motivated AI feature. The Chicago Sun-Times published a summer reading list that included nonexistent books—yet teachers are told to use the same tools to do similar work. Educators using AI to cut corners can strike students as hypocritical.
  • The many costs of AI call into question the possibility of ethical AI use. These include:
  • Energy - AI data centers need huge amounts of electricity as well as water for cooling, pulling these resources from surrounding communities—which tend to be lower-income—straining the grid, and raising household costs. “Wholesale electricity costs have risen as much as 267% in areas near U.S. data hubs.” AI water use is especially alarming as climate change fuels droughts and wildfires across the US.
  • Environment - Goldman Sachs calculates that “a ChatGPT query needs nearly 10 times as much electricity to process as a Google search.” Data center energy use will jump over the next few years; on behalf of AI, Google has abandoned its goal of carbon neutrality. By 2028, AI “could generate the same emissions as driving over 300 billion miles.” Emissions are only one of AI’s environmental impacts, but they are the most threatening to children’s future.
  • Health - In addition to the health effects of worsening climate change, AI is producing air pollution that causes cancer, asthma, and other diseases, as well as premature deaths. “The public health burden by 2030 is expected to…rival that of all the cars, buses and trucks in California.”
  • Equity - Children are vulnerable to the racial and gender bias found across AI tools for policing, health care, and education. Biased algorithms can present material that reinforces stereotypes. Many AI firms “falter in addressing their biased algorithms. [They] either establish policies that tend to ignore controversial topics…or otherwise water down the responses. The result is a potential rewriting of historical events in inaccurate and inappropriate ways.”
  • Labor - Workers are exploited to provide AI tools: “GenAI training tasks are also fueled by millions of underpaid workers who perform repetitive tasks under precarious labor conditions.” While AI will create new jobs, an estimated 92 million jobs will be lost, disrupting lives and communities. Job losses are expected to disproportionately harm people of color and low-wage earners. AI is being trained, without payment, on the “scraped” work of intellectuals and creatives. The beneficiaries will be the world’s richest people, worsening income inequality and weakening the power of unions and working people—including teachers and students graduating into the world of work.

Prepared by Anne Lutz Fernandez, last updated 10/23/2025