
In a September 2025 post on X, OpenAI CEO Sam Altman revived interest in “dead internet theory,” the notion that much of what we perceive as human interaction online is actually bots talking to each other in an endless feedback loop. Altman later admitted that even when he knew internet content was real, he found himself assuming it was fake.

The comments went viral, and writers were quick to note the irony. The man responsible for unleashing the “Automated Soulless Text Machine” (Landymore, 2025) on the world now recognized that online experience is starting to feel artificial, while researchers have begun to document how human language increasingly mimics AI (Ramirez, 2025).

Altman may have sparked discussion about authenticity, but the larger revelation is how easily we have learned to live with the illusion of human connection. AI’s flood of automated slop content — endless articles, posts, and replies generated by invisible systems — has become background noise we scroll through without protest. What once would have felt counterfeit now passes as ordinary. And stranger still, we have accepted an online world where the presence of a human voice no longer seems essential. In light of Altman’s post on X, the question is not whether we notice the cost of this trade, but whether we still believe it matters.

The absurd quest for efficiency

The tension around “dead internet theory” is not new. At its core, the phenomenon it describes is yet another example of market pressure to eliminate inefficiency, even when the “inefficient” has value. In his 2018 book Bullshit Jobs: A Theory, anthropologist David Graeber describes a similar cycle in which machines are built to help farm workers produce surpluses of food, followed almost immediately by rival machines engineered to consume the surplus. We recognize such a cycle as patently absurd, yet, within the system, it signals progress.

We are already seeing similar loops unfold all around us. Employers use AI to generate job postings and screen applications — many of which were themselves written by AI. No one actually gets hired (Lowrey, 2025). In the music industry, record labels roll out AI-generated performers who go viral, yet their “fans” are largely bots (Bakare, 2025). These are dead cycles, closed-loop systems running by themselves, producing the appearance of human activity while minimizing or even eliminating the human beings they were meant to serve. What happens if the same logic takes hold in schools?

I teach comparative literature and composition in a high school that, like so many others, is struggling to find a way to live with AI without eroding the relationships between students and teachers. My colleagues and I recently attended a keynote by a self-styled expert on AI in education hoping for guidance and concrete strategies. She appeared on screen not as herself, exactly, but as a Space Age avatar — spiky green hair, steampunk headphones, and the kind of hyper-polished presence I would expect more from a gaming console than a professional development workshop. She launched into her presentation with the breezy confidence of someone who had rehearsed her talking points until they gleamed.

The presenter invited us to see AI as a kind of “co-pilot” that would shoulder the judgment, struggle, and growth — for students and teachers alike — all so our work could be “streamlined” into data points for reporting and dashboards. The talking avatars, videos of AI-generated children praising their own productivity, and interactive polling with real-time data visualizations sent a clear message: Struggle was no longer necessary, friction no longer productive, and the messy work of human judgment could be streamlined into something more efficient and more manageable.

The heart of her message, though, was the suggestion that teachers should defer their decision making to the machine, that learning itself would be smoother and more consistently “engaging” once the rough edges of human toil had been filed down. My colleagues and I left the presentation with a bone-deep concern that something was seriously wrong. This was not a roadmap to deeper learning. It was a sales pitch.

Fighting the gravitational pull

The pull toward AI integration is strong on both sides of the desk. Students face relentless pressure in academics and activities, so why not let AI do the homework, draft the essay, walk through the math problem, or generate the research? In a world where achievement data and extracurriculars strongly determine a student’s future life chances, competing without AI feels especially naïve when peers are already using it. Surveys indicate that AI use among students is widespread — roughly 45–50% of U.S. high school students (Schiel, Bobek, & Schnieders, 2023) and a much higher share of college students (Flaherty, 2025) report using AI in their studies. One global survey found that about 86% of students worldwide use AI regularly (Kelly, 2024).


Teachers experience the same gravitational pull. Their job is swamped with bureaucratic layers: emails, paperwork, differentiated materials, rewrites, retakes. Heading into a weekend with 120 essays to grade is a crushing experience, so why not let an AI tool generate comments, draft lesson plans, or write rubrics? Especially when colleagues are already doing so, why keep swimming against the current?

Consider a recent study suggesting that heavy use of AI tools (like ChatGPT) is associated with lower neural engagement, weaker memory recall, and less ownership of work in controlled study settings (Kosmyna, 2025). For most of my career as a high school English teacher, my role has been to design conditions where students wrestled with ideas and grew through the process. Now, the struggle itself is in danger of being written out of the script. As John Warner (2018) notes in Why They Can’t Write, no one cares about a five-paragraph essay for its own sake; it is assigned to develop skills of reasoning, revision, and expression. But if students can bypass that struggle, producing a passable essay without the growth, what are we left with? Classrooms devoid of the human work that makes learning possible. Dead classrooms.

What are dead classrooms?

In their book In Search of Deeper Learning, researchers Jal Mehta and Sarah Fine (2019) describe the tacit contract that governs many American high school classrooms: “I pretend to teach, and you pretend to learn.” Worksheets, standardized tests, and box-checking tasks often stand in for genuine learning, producing classrooms where the appearance of learning supplants what the authors describe as “deeper learning”: mastery of skills, expression of self, and the production of something valuable beyond the classroom walls. Their book appeared two years before ChatGPT, and the arrival of large language model AI threatens to accelerate this dynamic dramatically.


In an AI-saturated learning space, Mehta and Fine’s theory takes a different form. Picture a room full of students at their district-issued devices. An embedded AI helps them polish essays, answer study questions, complete worksheets, and even generate personal “reflections.” With time freed up, they scroll social media and consume content likely created by bots. Teachers, meanwhile, use AI tools to construct whole units of study, draft lesson plans, and generate assignment packets filled with student tasks. The AI co-pilot sifts through the student responses and provides comments specific enough to appear teacher-generated, but of little use to a student actually looking to improve. And the cycle loops: An AI creates the work, an AI completes the work, and an AI evaluates it. On the surface, everything appears eerily efficient.

But obviously something vital has disappeared. Students no longer wrestle with uncertainty or push through the discomfort of trial and error. Teachers are reduced to supervisors of machine inputs and outputs rather than guides of human growth. The iterative process that builds creativity, reasoning, and critical thinking has been hollowed out, replaced by automated steps “optimized” to serve the myriad educational goals teachers are constantly working toward.

The “dead internet theory” describes a digital ecosystem where human voices are made superfluous by machine content. The “dead classroom theory” is a parallel phenomenon where teaching and learning, rooted in struggle and human connection, give way to mere simulation. The danger is not that machines will replace education but that they will imitate it so convincingly that we mistake the simulation for the real thing.

The rhetoric and the promise of AI

The promise of AI integration in the education sector is enticing. Ed tech advocates elevate AI’s ability to “personalize” learning, reduce busywork, and free teachers from drudgery. Some tech-leaning charters even frame AI as a tool to “crush academics” by liberating students from the limitations of non-screen-based learning. One school claims that a student in its AI-fueled program can advance six grade levels during a single academic year. Though somewhat dubious, the claims cast AI as the key to optimized classroom efficiency, the highest form of progress. But what does this efficiency look like in practice?

Grading student essays — a task that feels like the least efficient practice on earth — is ripe for AI intervention. So when my district piloted an AI-assisted feedback tool, I was cautiously optimistic. The app generated bulleted comments infused with rubric language, flagged instances of copy/paste, and composed narrative endnotes to inspire student reflection. It offered to give feedback on 120 essays — work that would easily take me more than 20 hours during nights and weekends. An AI tool could save a lot of time.

But the reality didn’t live up to the promise. The AI comments felt canned and rarely offered guidance specific enough to help a writer revise anything more complex than a standard five-paragraph essay. Worse, flags of AI misuse now required me to investigate what had happened, meet with the student, and involve parents and administrators. The whole thing felt like a weird simulation, a sterile writing process in place of the messy, relational, human-centered work that helps students grow.

Moving beyond efficiency

Across the country, debates around technology in schools have reached a fever pitch. Twenty states have implemented cellphone bans in schools, with many more beginning to draft their own policies. Districts are scrambling to limit screen time, and parents voice growing concern about how constant connectivity erodes attention, social development, and mental health. These concerns over phones reflect what Jonathan Haidt (2024), in his best-seller The Anxious Generation, frames as a public-health crisis.

The same reckoning around screens must extend to AI. We know that technology is never neutral; it structures not only what students do, but how they imagine and inhabit the world. The question, then, is not whether these tools will enter classrooms — they already have — but whether we will allow them to hollow out the struggle, judgment, and meaning-making that make education transformative.


AI certainly can serve as a supplement, helping teachers differentiate instruction, for instance, or supporting students in iterative drafting. But when it replaces the work of thinking, writing, or evaluating, it risks reducing school to its simulacrum: a performance of learning that offers nothing of value to the people who inhabit it. Educators must preserve spaces where students wrestle with ideas, take intellectual risks, and grow through effort — work no machine can replicate.

This position is, ultimately, less about tools than about stance. Teachers and schools can and should design assignments and assessments that resist automation and the outsourcing of process work. The call is to preserve the human labor, curiosity, and risk-taking that make classrooms thrive, shielding them from the push to trade away what matters most for the sake of efficiency.

Preserve the messiness

In his brief story “On Exactitude in Science,” Jorge Luis Borges (1946/1999) imagines an empire where cartographers grow so skilled that they produce a map as large as the territory itself. Over time, the citizens come to prefer the map: It is cleaner, neater, easier to read. Eventually the map spreads out and covers the land, until the distinction between representation and reality collapses.

Classrooms risk a similar fate. AI offers a map of learning — tidy essays, efficient lesson plans, instant feedback — so polished that it can obscure the real, uneven terrain of education. If we come to prefer the map to the territory, we also trade away the very qualities that make education meaningful for our students.

The danger of “dead classrooms,” then, is not simply that students will use AI to finish assignments more quickly. It is that we will begin to mistake simulated learning for the thing itself, that we’ll accept a world of surfaces that conceals the messy and vital human work beneath. To keep classrooms alive, we must preserve and protect the territory — the slow, uncertain, deeply human process of learning — against the seductions of its simulation.

 

References

Altman, S. [@sama]. (2025, September 3). I never took the dead internet theory that seriously, but it seems like there are really a lot of LLM-run twitter accounts now [Post]. X. https://x.com/sama/status/1963366714684707120

Bakare, L. (2025, July 14). An AI-generated band got 1m plays on Spotify. Now music insiders say listeners should be warned. The Guardian.

Borges, J. L. (1999). On exactitude in science. In Collected Fictions (A. Hurley, Trans.). Penguin Books. (Original work published 1946).

Flaherty, C. (2025, August 29). How AI is changing — not ‘killing’ — college. Inside Higher Ed.

Graeber, D. (2018). Bullshit jobs: A theory. Simon & Schuster.

Haidt, J. (2024). The anxious generation. Penguin Press.

Kelly, R. (2024, August 28). Survey: 86% of students already use AI in their studies. Campus Technology.

Kosmyna, N. (2025, June 10). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. MIT Media Lab.

Landymore, F. (2025, September 5). Sam Altman says he’s suddenly worried dead internet theory is coming true. Futurism.

Lowrey, A. (2025, September 8). The job market is hell. The Atlantic.

Mehta, J., & Fine, S. (2019). In search of deeper learning. Harvard University Press.

Ramirez, V.B. (2025, July 11). ChatGPT is changing the words we use in conversation. Scientific American.

Schiel, J., Bobek, B. L., & Schnieders, J. Z. (2023, December 5). High school students’ use and impressions of AI tools. Lumina Foundation.

Warner, J. (2018). Why they can’t write: Killing the five-paragraph essay and other necessities. Johns Hopkins University Press.


ABOUT THE AUTHOR


John D. Duffy

John D. Duffy is an English teacher at Berkley High School, Berkley, Michigan.
