Embracing new technologies that don’t advance teaching and learning is a mistake. Educators have a responsibility to ask questions — and to resist when necessary.
At a Glance
- Technology that promises to save teachers’ time so they can focus on instruction often fails to live up to that promise.
- Some technologies intended to automate aspects of teachers’ work take teachers’ time away from instruction to focus instead on managing the technology.
- In the 19th century, textile workers known as the Luddites resisted when automation reduced the need for skilled labor and lowered the quality of the products.
- Teachers can learn from the Luddites how to resist technologies — such as artificial intelligence — that threaten to replace teachers’ expertise with automation at the expense of student learning.
- The Luddite praxis of strategic playfulness, developing localized tactics, and building networks of resistance is a model for educators today.
Teaching is a lot of work. Lessons need planning, papers need grading. Labs require setup and teardown. Emails, letters of recommendation, and licensure documentation all demand attention. Then there are the meetings: department meetings, schoolwide meetings, 504 and IEP meetings, caregiver meetings, student meetings. Somewhere in the day, teachers are also supposed to find time for lunch, though a few bites of a sandwich or protein bar often must suffice before the bell rings again.
Enter the promise of automation: For over a century, entrepreneurs have peddled time-saving technologies as the solution for teachers’ overloaded to-do lists. From automated testing machines (1926) and teaching machines (1955) to Scantron forms (1972) and Khan Academy modules (2008), each successive innovation has arrived with the assurance that it will eliminate the “drudgery” of teaching so educators can focus on “what really matters” in classrooms (Watters, 2021).
Automation hasn’t lived up to this promise. Research consistently shows that efforts to address complex educational challenges with technological fixes have, at best, produced mixed results for teaching and learning (Cuban, 2003; Reich, 2020), and, at worst, they have deepened existing racialized inequities in schools (Crooks, 2024). Nevertheless, the hope persists that the solution to our pedagogical woes is one technical breakthrough away.
The latest chapter in this story stars generative AI. Generative AI is a catch-all term for technologies that mine vast troves of data to produce text, images, and video on demand. We don’t have to look far for breathless predictions about its revolutionary implications for schools. They appear in op-eds, product announcements, and the titles of bestselling books, like Sal Khan’s Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). “AI is here to stay,” we are told. “The train has left the station.” “There’s no putting the genie back in the bottle.” The message for educators is clear: AI is the future, and we must embrace it or be left behind.
Accepting this story of AI’s inevitability — or that of any technology — is a grave mistake for educators. Not only does it cede tremendous power to tech companies that are unaccountable to teachers, students, or school communities, but it also forces us into a position where we are adapting our professional practices to the latest tool rather than letting our pedagogical goals determine which technologies, and which uses of them, belong in our classrooms. From this perspective, resisting the uncritical adoption of “inevitable” technologies in schools is not just possible but necessary. It’s the only way to safeguard quality teaching and learning from being lost in the rush to innovate and optimize.
What might this resistance look like? To answer this question, we turn to what may seem an unusual source: the 19th-century textile workers known as the Luddites. Although frequently caricatured as anti-technology zealots, the original Luddites were actually skilled artisans who rejected not machines themselves but the ways particular technologies were being used to de-skill labor, concentrate power, and undermine community values. In this way, their story has instructive parallels for our present. Revisiting it offers educators today a different frame for thinking about technology — not as something we merely adopt or reject, but as something we must interrogate; negotiate; and when necessary, refuse.
Who were the Luddites?
Between 1811 and 1816, bands of clothworkers across Northern England began breaking into factories and destroying automation technologies that threatened their livelihoods. The vigilantes called themselves “Luddites” because they claimed to be operating under the direction of Ned Ludd, a mythical weaver’s apprentice who was said to have smashed a stocking frame to protest his employer’s cruelty. This legend established Ludd, like Robin Hood before him, as a regional symbol of resistance against injustice.
Calling someone a Luddite today is usually a jab at their ineptness with technology or their stubborn resistance to change. The original Luddites resembled neither of those things. Most were skilled operators of complex machinery and enthusiastic adopters of technologies that enhanced the practices of their craft. They welcomed innovations like the press, draw bench, and lathe, for example (Randall, 1998). What the Luddites opposed was how factory owners increasingly used newer automation technologies to cut costs and increase efficiency at the expense of worker safety, fair wages, and the quality of the goods produced.
The Luddites’ approach to resistance was, similarly, more sophisticated than is often remembered. What was noteworthy about their methods was not that they destroyed machines — this had been an established form of protest in the region for decades — but that they transformed this tactic into a coordinated and principled movement (Thompson, 1963). They didn’t vandalize indiscriminately. Rather, they targeted specific equipment owned by manufacturers who violated customary agreements about wages and working conditions. Moreover, they paired this direct action with public letter-writing campaigns to influence legislation and win popular sympathy — efforts that made them a household name far beyond the English textile districts. Indeed, we remember the Luddites today largely due to the success of these campaigns.
The question for the Luddites was never simply whether to accept or reject a new machine, but rather who controlled it, how it was deployed, and whose interests it served. Technological change, in their view, was not some inevitable force of nature but the result of human choices about power, work, and dignity — choices that could, and should, be contested when they don’t align with community values. These insights remain relevant for educators today.
Automation in the classroom
Factories in the 19th century embraced automated machinery to streamline production — making cloth cheaper and easier to create, even at the expense of quality and craftsmanship. Today’s education technology, or edtech, companies promise not just increased productivity but liberation from the “drudgeries” of the workday. And in both eras, the primary beneficiaries of automation are those who profit from it, not those whose work it transforms.
Take the promise of “saved time.” Historians have demonstrated that gains in efficiency don’t tend to reduce our workloads — instead, they increase expectations for productivity (Cowan, 1983). When AI handles lesson planning or essay grading, teachers don’t suddenly gain time to build relationships with students. That time gets absorbed into managing platforms, troubleshooting tech, and entering data — tasks often more tedious than those that were automated away. The skilled aspects of teaching, like crafting learning experiences and offering meaningful feedback, are replaced not with more space for reflection but with a different form of “drudgery” that is closer to the work of a data clerk than a professional educator.
This is what’s known as de-skilling, and it’s something the Luddites recognized. Tasks that require specialized knowledge and skill are broken down, automated, or otherwise simplified. Consider how teachers respond to student writing. They aren’t just identifying errors — they’re learning how students think, what they care about, where they struggle, and how they grow. And when teachers plan lessons, they aren’t just filling in templates; they’re weaving together disciplinary knowledge, pedagogical expertise, and an understanding of their students. Automating these processes doesn’t “free” teachers; it unmoors them from the core of their professional practice. The Luddites saw automation replace fine cloth with cheap fabric. When the same logic is applied to education, what we sacrifice in the name of efficiency is not the quality of textiles but the quality of our students’ learning.
Today’s automating systems also introduce dangers the Luddites never faced. Power looms didn’t surveil workers or extract their behavioral data for profit. Modern AI tools do both. They log how students write, what they click, which questions they get wrong, and how teachers interact with them and their caregivers. This data feeds into tech development pipelines, enabling companies to create ever more sophisticated tools to further automate instruction (Nichols & Monea, 2022). Teachers and students become not only de-skilled, but instrumentalized.
Moreover, these systems exacerbate inequality. AI tools trained on biased data have been shown to reproduce discriminatory patterns: mislabeling bilingual students’ writing as deficient, disproportionately flagging students of color as “at risk,” and narrowing the curriculum to exclude the knowledge and cultural practices of non-dominant communities (Benjamin, 2019; Dixon-Román, Nichols, & Nyame-Mensah, 2020). In addition, the development and maintenance of these systems have substantial ecological and human impacts — from the energy and water demands of data centers to the underpaid laborers in the Global South who annotate data to keep AI “intelligent” (Crawford, 2021; Selwyn, 2024). The marginal gains in efficiency that AI offers come at a steep price.
It is also worth highlighting that tech executives are not sending their own children to schools that offload instruction to AI but to those that limit personal devices, emphasize human connection, and maintain small class sizes with expert teachers (Bowles, 2018). Even if their marketing pitches say otherwise, they know that meaningful education cannot be automated. In the future they’re building, quality teaching becomes a luxury good — quality instruction for those who can afford it, AI-tutors for everyone else. Just as the Luddites recognized that the benefits of automation flowed upward while its considerable costs fell on workers and communities, we must recognize that educational automation today follows the same pattern — and demands a similar, principled response.
Toward a Luddite praxis in education
What might today’s educators learn from the Luddites? How can their strategies help us navigate our own moment of technological disruption? Here, we offer three approaches to Luddite praxis, inspired by the clothworkers’ resistance — not as prescriptions, but as starting points for shaping our own responses to automation’s incursions into schools.
Embrace strategic playfulness
The original Luddites understood that resistance could be both serious and playful. They wrote satirical ballads, planted mysterious graffiti messages, and sent ominous letters signed by their mythical leader, Ned Ludd. Though most were men, some Luddites dressed in women’s clothing during their raids — partly for disguise, partly to show solidarity with women who’d lost their cotton-spinning jobs to automation (Merchant, 2023).
Educators today can learn from the Luddites’ playfulness by refusing to take AI as seriously as its promoters demand. The fawning rhetoric surrounding generative AI creates an atmosphere where questioning it seems foolish or futile. But this aura of inevitability is itself a marketing strategy, and it crumbles under the slightest scrutiny. When we approach AI with irreverence rather than deference, we begin to see through the hype to the mundane reality: These are products sold by companies that profit from our belief in their importance and necessity.
Scholars Emily M. Bender and Alex Hanna model this approach in their book, The AI Con (2025), and their livestream series, Mystery AI Hype Theater 3000. With sharp wit and careful analysis, they dissect overblown claims about AI’s capabilities, revealing the gulf between its marketing promises and actual performance. They call this “ridicule as praxis” — using humor to puncture the mythology that shields AI from criticism and, in doing so, enabling us to recognize its real-world impacts and harms.
Educators can adopt a similar stance. Rather than letting anxieties about being left behind drive our approach to AI, we can recall the long line of technologies that have previously promised to revolutionize education. Of course, the failure of these innovations to reform education doesn’t guarantee that AI will follow the same path. However, it should make us wary of treating each new product as a generational turning point. This is especially so when these products come from tech companies that, time and again, promise educational transformation yet only ever seem to deliver more tools for surveillance and standardization. A playful perspective can help us see these incongruities for what they are.
Develop localized tactics
The Luddites understood that resistance can’t be one-size-fits-all. Though they shared a common cause, their tactics varied across regions and circumstances. The Midland Luddites used letter-writing campaigns more than those in Yorkshire, and stockingers targeted different machines than croppers (Deseriis, 2015). This willingness to tailor resistance to local conditions made the Luddites resilient, even as authorities worked to stamp them out.
What might such localized tactics look like today? Here we must get creative. The Luddites’ machine-breaking was itself a regional tactic, which they adapted to their moment. But we face a different situation: We have no parallel tradition of machine-breaking to draw on — and even if we did, smashing a large language model is appreciably harder than smashing a power loom. Following the Luddites’ example, then, doesn’t mean copying their methods exactly, but adopting their process of identifying forms of resistance that are suited to our own contexts. In practice, this means recognizing the opportunities for resistance around each of us.
For instance, when required to attend a mandatory professional development on AI, educators might channel their inner Luddite by raising “technoskeptical” questions (Krutka, Pleasants, & Nichols, 2023): What are the environmental costs of running these systems? Who profits if we adopt them? What privacy concerns or algorithmic biases might they introduce into our classrooms? Such questions force those in attendance to confront the real-world impacts of AI that are too often bracketed in professional settings. They also allow our positions — as school leaders, subject experts, veteran teachers, union members, friends — to lend authority to our skepticism. Even small acts of public resistance like this can signal to colleagues that their own concerns are reasonable and shared.
The power of localized tactics lies in matching methods to context. In some settings, resistance might mean writing op-eds for the local paper or demanding community input in technology-procurement processes. In others, it might mean quieter acts: choosing not to use AI grading tools or creating process-oriented assignments that center the beautiful inefficiencies of human expression. The Luddites knew that breaking a loom sent a different message than writing a letter and that both could work in tandem to achieve their goals. Similarly, educators today can calibrate our resistance to our respective situations, asking what would be most effective in this school, with these colleagues, in this moment?
Build networks of resistance
One of the Luddites’ lasting achievements was transforming machine-breaking into a mass movement through careful coordination. Meeting in tavern backrooms and moonlit fields, they shared intelligence, planned actions, and sustained each other’s resolve. Their networks crossed trade divisions and regional boundaries, creating a web of resistance so strong it took the full force of the British crown to finally bring it down. This coordination gave their individual acts collective meaning and power.
Educators today need similar forms of coordination. We need spaces to share what works and what doesn’t and to remind each other we’re not alone. The Alliance for Refusing Generative Artificial Intelligence (ARG AI) Discord server offers one model. Its 300-plus members, working across educational contexts, exchange stories and resources, celebrate small victories, and offer consolation in moments of defeat. The Civics of Technology community is another example. Here, teachers and researchers convene for virtual book clubs, monthly “tech talks,” and an annual summer conference. The group also shares member-created curricular materials and a weekly newsletter. Spaces like these serve as modern tavern backrooms — minus the ale — where a teacher’s classroom experiment becomes a blueprint for others to try and a researcher’s article becomes evidence that practitioners can use to shape school policy.
These educator-driven networks grow even stronger when linked to related movements outside education. Environmental activists protesting the expansion of data centers, for instance, understand something that edtech marketing often obscures: AI runs on massive computational infrastructure with staggering material costs. Consequently, when schools sign AI contracts, they aren’t just purchasing software; they’re participating in systems of energy use, water consumption, and carbon emissions with far-reaching consequences. As Anne Pasek demonstrates in her playful zine, Getting into Fights with Data Centres (2023), communities can learn to recognize and resist the construction of the bland, warehouse-size buildings that power AI systems and deplete regional resources. Building coalitions with such organizers and other groups focused on education-adjacent issues like data privacy (Fairplay) and algorithmic discrimination (Algorithmic Justice League) can amplify educators’ efforts and sharpen their goals — linking concerns about pedagogical autonomy with a wider vision for technologies that serve communities rather than extracting from them.
A present-tense perspective
The promise of automation has always been seductive. Who wouldn’t want machines to handle the tedious parts of our work? For educators drowning in administrative tasks and grading backlogs, the appeal is obvious. But as the Luddites understood two centuries ago, and as the historical record confirms repeatedly, automation’s promises rarely match its realities. Time is never saved, just reallocated. Work is never eliminated, just transformed — and often, degraded.
The Luddites offer a different frame for thinking about automation. In their strategic playfulness, we find permission to laugh at what we’re told to revere. In their localized tactics, we see how dissent can be an everyday practice. And in their networks of resistance, we’re reminded that small acts gain strength through coordination. Together, these approaches to Luddite praxis show us that technology is not destiny; it’s a series of choices about power, work, and dignity and what kind of world we want to make.
The historian David Noble (1995) once described the Luddites as “the last people… to perceive technology in the present tense and act upon that perception” (p. 7). Today, as edtech vendors market AI with promises of its inevitability and revolutionary potential, we need that present-tense perspective. The future they’re selling has not arrived — and perhaps it never will. But the de-skilling, surveillance, and extraction — all of that is happening now, in our classrooms, today. The Luddites would recognize this moment and act on it, and we should too.
References
Bender, E.M. & Hanna, A. (2025). The AI con: How to fight big tech’s hype and create the future we want. HarperCollins.
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.
Bowles, N. (2018, October 26). The digital gap between rich and poor kids is not what we expected. The New York Times.
Cowan, R.S. (1983). More work for mother: The ironies of household technology from the open hearth to the microwave. Basic Books.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Crooks, R. (2024). Access is capture: How edtech reproduces racial inequality. University of California Press.
Cuban, L. (2003). Oversold and underused: Computers in the classroom. Harvard University Press.
Deseriis, M. (2015). Improper names: Collective pseudonyms from the Luddites to Anonymous. University of Minnesota Press.
Dixon-Román, E., Nichols, T.P., & Nyame-Mensah, A. (2020). The racializing forces of/in AI educational technologies. Learning, Media and Technology, 45 (3), 236-250.
Krutka, D., Pleasants, J., & Nichols, T.P. (2023). Talking the technology talk. Phi Delta Kappan, 104 (7), 42-46.
Merchant, B. (2023). Blood in the machine: The origins of the rebellion against big tech. Little, Brown and Co.
Nichols, T.P. & Monea, A. (2022). De-escalating dataveillance in schools. Phi Delta Kappan, 104 (4), 23-27.
Noble, D. (1995). Progress without people: New technology, unemployment, and the message of resistance. Between the Lines.
Pasek, A. (2023). Getting into fights with data centres. Experimental Methods and Media Lab, Trent University.
Randall, A. (1998). The ‘lessons’ of Luddism. Endeavour, 22 (4), 152-155.
Reich, J. (2020). Failure to disrupt: Why technology alone can’t transform education. Harvard University Press.
Selwyn, N. (2024). Digital degrowth: Toward radically sustainable education technology. Learning, Media and Technology, 49 (2), 186-199.
Thompson, E.P. (1963). The making of the English working class. Vintage.
Watters, A. (2021). Teaching machines: The history of personalized learning. MIT Press.
This article appears in the Winter 2025 issue of Kappan, Vol. 107, No. 3-4.

ABOUT THE AUTHORS

Charles Logan
Charles Logan is a post-doctoral research fellow in the Center for Responsible Technology, Policy, and Public Dialogue at Northwestern University, Evanston, Illinois.

T. Philip Nichols
T. Philip Nichols is an associate professor in the Department of Curriculum & Instruction at Baylor University, Waco, Texas.

Antero Garcia
Antero Garcia is an assistant professor in the Graduate School of Education at Stanford University, Stanford, California.
