
When a Southern California high school teacher asked ChatGPT to co-design a mock trial for her Great Gatsby unit, she wasn’t trying to cut corners — she was trying to buy back time. “I could’ve done it myself,” she told us. “But it would have taken hours. AI helped me build the project faster, so I could focus on helping students rehearse arguments and understand character motivation.”

Across the country, a middle school science teacher asked his students to predict how many atoms in an apple become part of their bodies after digestion. Instead of turning to a textbook, they turned to ChatGPT — not for an answer, but for a starting point. “We evaluated its response together,” he said. “It wasn’t about whether AI was right or wrong. It was about using it to think better.”

These stories reflect a quiet revolution happening in classrooms: Artificial intelligence is not just shifting what gets taught or how fast — it’s reshaping what it means to learn, think, and teach.

Since ChatGPT’s public release in late 2022, generative AI tools have flooded into K-12 classrooms. Some forward-thinking school networks are building their own systems and trying to launch new approaches to professional development for teachers. Other districts, often less well-resourced, are grappling with single-issue concerns such as cheating or safety.

How teachers and students are using AI now

The uses and impact of teacher engagement with GPT technologies are not yet well documented. But one thing is clear: Teachers and students are using these technologies in both unexpected and oddly familiar ways. Teachers have enlisted ChatGPT, Gemini, and a host of other AI-driven technologies to support them in tasks such as gathering ideas for units, lesson planning, or applying rubrics to evaluate student work. Students, too, are increasingly working with AI tools — mostly at home. They are redefining their own work processes and “products,” and it’s hard to tell who is doing what in the learning process.

In our more than 60 interviews with a diverse pool of teachers who teach in middle and high schools in California, Hawaii, Michigan, New York, Pennsylvania, and Texas, we heard enough to know that AI is making noteworthy inroads into how lessons are planned, carried out, and reflected upon. Students are writing drafts, answering homework questions, creating PowerPoint presentations, and analyzing lab results, all with the help of AI-driven large language models (LLMs).

In the brave new world of AI-infused teaching and learning, these new ways of “doing school” follow familiar patterns. Sometimes the effort is creative and exploratory; other times it looks hastily done and perfunctory.

Whether they use it for lesson planning, grading student work, or communicating with parents and guardians, teachers told us that generative AI is no longer just a digital assistant. It is becoming a pedagogical actor (or co-teacher) working alongside other members of the educational ecosystem. To assist with their ever-increasing workloads, students, teachers, and administrators are using AI agents to get things done. For educators under constant time crunches, AI may indeed help them with “getting it done.” But what does its use mean for teaching and learning? By listening to what teachers are telling us — in their own words — about how AI is (and is not) changing teaching and learning in their classrooms, we learned there is as much concern as there is hope.

The need for responsible guidance

In this rapidly shifting landscape, we cannot rely solely on quick-fix policies or a proliferation of playbooks and toolkits to guide practice. Teachers need more foundational anchor concepts — ideas that help them think through how, when, and why to use AI technologies in their classrooms.

Teachers in our study asked questions like these:

  • What happens when students outsource thinking to machines?
  • How can teachers preserve space for curiosity, struggle, and uncertainty in the curriculum?
  • What does it mean to give or receive feedback in a world where AI can generate it instantly?

Wrestling with questions like these offers teachers a way to determine what responsible practice looks like in their classroom.

From our conversations, five themes emerged that we call “pillars” of possibility and concern. Each of these pillars — the 5 A’s — comes with a set of essential questions that invite reflection for learners and their teachers who want to get it right.

  • Accuracy: Is it true? How would we know? Where is the bias? Are there other sources?
  • Agency: Whose thinking is this? Did I struggle with the ideas? Or offload to others? What’s the right balance?
  • Accessibility: Who gets access to the educational materials? How? Is the level of support helpful? How often and when?
  • Assessment: Is this helping me improve? How? Am I getting genuine formative feedback and taking appropriate next steps?
  • Authenticity: Is this really my work? Is it in my voice? Does that matter? Who originally created this work or the ideas behind it? Did I note sources when appropriate?

These pillars and the associated essential questions are not technical specs for designing and engineering better prompts (a new industry in itself). Nor are they yet another add-on to the AI literacy frameworks popping up across the globe. Instead, they give educators a foundation for examining how AI tools are advancing or sidetracking the aims of ambitious teaching, authentic assessment, and deeper learning in K-12 schooling (Shepard, 2021). They serve as guidelines for self-assessment, peer feedback, and a teacher’s determination of “what’s next” for a particular student who may need to “try again” (Duckor & Holmberg, 2023).

Think of these questions as conversation starters. Essential questions, or EQs (McTighe & Wiggins, 2013), typically guide assignments, projects, and performance assessments. In the context of AI-assisted learning, the EQs within the 5 A’s can help educators and students reflect on what learning has occurred when they use AI. Using these essential questions, for and with students, can help teachers better approach AI technologies to support and guide deeper learning.

Accuracy

Is it true? How would we know? Where is the bias? Are there other sources?

In AI-enhanced classrooms, accuracy is no longer a binary matter of right or wrong. Tools like ChatGPT often produce fluent, confident responses that are factually incorrect. This phenomenon — what we call false fluency — can mislead students into accepting incorrect information simply because it sounds polished. That’s why accuracy involves teaching students to approach AI-generated text with skepticism, even when it sounds plausible.

One science teacher used AI to spark critical thinking. When students were asked to estimate what percentage of atoms in an apple become part of the human body after digestion, they used ChatGPT responses as a point of analysis. They made predictions, debated the logic, and learned to spot flawed reasoning. This approach helped reinforce students’ scientific habits of mind, not just their ability to fact-check.

In this way, the concern about accuracy becomes an active process — a form of engaged skepticism that AI can help cultivate, but only under thoughtful guidance of teachers and students committed to consulting multiple sources and evaluating their credibility. Evidence matters, as do sources of information.

Agency

Whose thinking is this? Did I struggle with the ideas? Or did I offload the struggle to others? What’s the right balance?

If learning is to remain meaningful in the AI age, students must retain their role as thinkers, creators, and decision makers. Yet AI tools are increasingly designed to make choices on students’ behalf — suggesting revisions, completing sentences, or generating ideas. Without careful instructional design, this convenience risks eroding students’ ability to come up with their own ideas.

The concern for agency in the AI age means expecting students to wrestle, revise, and sometimes fail. It means creating space for students to engage in multiple possibilities, to sit with uncertainty, and to develop intellectual stamina. AI can support these moments — but only when its use is intentional, scaffolded, and reflective. An arts teacher recommended that teachers and students “break it down, slow it down, and document who — including the bot — is doing what, when, and where,” when allowing AI tools in class.

Accessibility

Who gets access to the educational materials? How? Is the level of support helpful? How often and when?

Equity is a central concern in any educational innovation — and the introduction of AI in classrooms is no exception. While AI has the potential to provide personalized support, assistive features, and differentiated instruction, its implementation often depends on access to devices, stable internet, and costly platforms.

One teacher developed an AI-powered quiz bot that adjusted question difficulty based on student performance. It worked well — until the bill arrived. “One month I spent $377 just to run the bot for one test with 120 students,” he said. Eventually, he scaled back to only using AI for grading free-response questions. But his story underscores a broader issue: Powerful AI tools are not equitably distributed. Unless schools address this head-on, AI could exacerbate existing digital divides for low-income students and their families.

In our interviews, only teachers who felt confident with AI used it to differentiate the curriculum for learners or offer direct support to newcomers. A California teacher in a linguistically diverse middle school used new AI tools to help him communicate instructional content more effectively. He created his own AI tool that evaluates students’ handwritten paragraphs in English and gives feedback, which students can then translate back into Spanish, Arabic, Hindi or other languages as needed. “If they can’t read,” the teacher told us, “they can choose text to speech.” Among our interviewees, this sort of innovative practice was rare.

Several educators expressed a double bind in creating more access to instructional materials. They want their students — especially those from under-resourced communities — to have access to AI tools and increase their facility with generative AI in particular. But these teachers also don’t want to sacrifice meaningful human interaction for “prompt engineering 101.” One teacher put it plainly: “I don’t want to leave my students behind. But I also don’t want AI to take away the time I spend connecting, communicating, and caring for them.”

Assessment

Is this helping me improve? How? Am I getting genuine formative feedback and taking appropriate “next steps”?

Assessment is one of the areas where AI has made the biggest inroads. Automated scoring systems, AI-generated feedback, and intelligent tutoring platforms promise faster turnarounds and scalable solutions. But speed and scale are not the same as quality and direction.

Several teachers we spoke with described trying AI-powered assessment tools, only to find limitations. One educator explained how she entered student essays into an AI system aligned to her rubric. “It sounded great in theory,” she recalled. “But the AI gave students’ five-paragraph essay feedback at a college reading level. It was overwhelming — and not helpful.” She couldn’t tell the AI to prioritize just one or two points of feedback or adjust the tone. The result: more confusion, not more clarity.

Others, however, are finding new rhythms. “I used to spend hours grading at home,” one high school teacher said. “Now I prioritize circulating and giving feedback while students are working. That’s where the learning happens.” In his classroom, AI doesn’t replace teacher judgment — it helps redistribute time, enabling more formative, real-time interaction. The goal of assessment, then, should not be automation for its own sake. It should be feedback that feeds forward — grounded in human insight, tailored to student needs, and used to guide meaningful revision.

Authenticity

Is this really my work? Is it in my voice? Does that matter? Who originally created this work or the ideas behind it? Did I attribute sources when appropriate?

AI can simulate fluency, originality, and even creativity, causing educators to ask: What counts as real, authentic work? If a chatbot can generate a five-paragraph essay or write a song or conduct a lab experiment that earns an A, what distinguishes authentic student learning from machine mimicry?

“I think it takes quite a bit of intention to use AI in a way that does not simply supplant student work and learning,” said a high school math teacher. “As students — and let’s face it, teachers too — it’s just so tempting to have AI do the work for you rather than really go through the process of learning.”

Many teachers in our study described the “illusion of creativity” that AI often presents. “It feels like a creative process,” one music teacher explained. “We’re all there, watching what it produces, and it seems exciting. But it’s not the same as when a student surprises you with something truly original.” For some, it feels even worse: “They’ll write their essay because they told an AI to write it. They’ll give it to me as if it’s their writing… Then I have my AI read it… and then we all stamp each other’s things… My bot is going to be talking to their bot, and we’ll all pretend like we still have a relationship.”

Others pointed almost nostalgically to “low-tech” teaching moments as the most memorable and impactful. These are learning experiences AI technologies can’t authentically replicate. A field trip. A lab discovery. A spontaneous “aha” that occurs in real-time discussion. A moment when a student takes a risk and goes off script. “These things,” one teacher reflected, “can’t be done with AI.”

The concern for authenticity, in this framework, becomes a compass to set a new direction or a warning of potential danger. It invites educators and students to design experiences that are rooted in context, voice, surprise, and relationships — qualities that no AI can fully replicate.

What teachers are telling us

Educators are not standing still as AI tools enter their classrooms. They are adapting, experimenting, and reflecting in real time. We heard from teachers who use AI to enrich instruction and streamline feedback — and from those who express deep concern about its unintended consequences.

Many teachers described AI as a useful tool for planning creative instructional units. One teacher used it to help plan a schoolwide assembly during Black History Month. Another used it to help guide other teachers through planning project-based learning units. Yet another used AI to adjust curriculum content to meet the learning needs of the newcomers in his class.

Yet these same teachers voiced strong reservations. Many now find themselves in the role of AI detective. “I started leaving comments on student work,” one teacher said. “I’d write: These are the reasons I think this was generated by AI. Let’s talk about it.” One told us, “There’s a certain way of formatting that reveals AI use. When I saw it in a paper, I was like, oh, this kid just copied and pasted from ChatGPT … He did not proofread … there are no cars in Frankenstein … it was just like, ‘oh, you really did not read or understand.’”

Another language arts teacher described giving students she suspected of using AI for their schoolwork the option to take an AI-generated quiz about their suspected AI-written response. “If they pass, I’ll grade it; if not, it’s a zero. Or they can re-do the essay. So far, no one has taken the quiz. Most students just don’t respond.”

These perspectives suggest a profession in motion — grappling with real-world constraints, ethical dilemmas, and a powerful sense of responsibility to get this right. AI is not being ignored. But neither is it accepted uncritically. Teachers are asking: What kind of learning are we designing — and for whom?

Teaching at the human–machine crossroads

For teachers — and those who teach and support them — AI is not just another tech tool. It is a catalyst, accelerating long-standing questions about the purposes of education, the balance between human and machine agency, and the role of schools in preparing young people to think, act, and thrive in a world where automation shapes nearly every aspect of life.

The rise of generative AI is the latest chapter in a long history of technological reform in education. Like overhead projectors, interactive whiteboards, or 1:1 laptops before it, AI is being introduced unevenly. There will be early adopters and skeptics, moments of promise and missteps, breakthroughs and blind spots (Cuban, 1986, 2001; Kent & McNergney, 1999; Postman, 2011). What makes this moment different, however, is the speed and scale at which these tools are entering our classrooms — and the depth of pedagogical and ethical questions they provoke.

If schools are to remain places of reflection, dialogue, experimentation, and care, educators must be trusted — and supported — to make sense of AI in their own contexts. That means resisting both extremes: the temptation to ban or block it wholesale and the impulse to embrace it uncritically. As John Dewey (1938) reminds us, the challenge for progressive educators is not to reject change, but to shape it — to study what works, for whom, and under what conditions.

One thing is clear from our research: Teachers are already doing this work. They are improvising, probing, designing, and redesigning learning in ways that both respond to and push back against the potential of AI. Their stories reveal a profession in motion, navigating what are, in many cases, enduring tensions: between efficiency and authenticity, support and surveillance, fluency and falsehood. These are not just technical problems. They are pedagogical dilemmas that speak to the heart of what learning is — and what it should be.

At the same time, a new gap is emerging. Students, especially those with early access to AI tools outside school, are racing ahead — experimenting, creating, and sometimes short-cutting their way through assignments. Teachers, meanwhile, are left with few shared frameworks, limited time for reflection, and uneven professional development. Districts and states are scrambling to catch up, often focusing more on compliance than on deep questions of curriculum, cognition, or equity.

This is the crossroads where we now stand. The tools will continue to evolve. But so must our collective capacity to ensure that learning — in all its messy, beautiful, and deeply human complexity — remains at the center of our work.

As one teacher told us, “School may become the last place where we protect what’s sacredly human. We have to preserve that.” We couldn’t agree more.

Note: The authors’ forthcoming book, AI for Deeper Learning: Promises, Possibilities, and Evolving Practices (Harvard Education Press, in press), expands on these themes and offers practical tools for rethinking curriculum, feedback, and equity in an AI-saturated world.

References

Cuban, L. (1986). Teachers and machines: The classroom of technology since 1920. Teachers College Press.

Cuban, L. (2001). Oversold and underused: Computers in the classroom. Harvard University Press.

Dewey, J. (1938). Experience and education. Macmillan.

Duckor, B., & Holmberg, C. (2023). Feedback for continuous improvement in the classroom: New perspectives, practices, and possibilities. Corwin.

Kent, T.W., & McNergney, R.F. (1999). Will technology really change education? From blackboard to web. Corwin.

McTighe, J., & Wiggins, G. (2013). Essential questions: Opening doors to student understanding. ASCD.

Postman, N. (2011). Technopoly: The surrender of culture to technology. Vintage.

Shepard, L.A. (2021). Ambitious teaching and equitable assessment: A vision for prioritizing learning, not testing. American Educator, 45(3), 28.


ABOUT THE AUTHORS

Brent Duckor

Brent Duckor is a professor in the teacher education department at San José State University, San José, California. He is a co-author of Mastering Formative Assessment Moves: 7 High-Leverage Practices to Advance Student Learning (2017) and Feedback for Continuous Improvement in the Classroom: New Perspectives, Practices, and Possibilities (2023).

Carrie Holmberg

Carrie Holmberg is a lecturer in the teacher education department at San José State University, San José, California. She is a co-author of Mastering Formative Assessment Moves: 7 High-Leverage Practices to Advance Student Learning (2017) and Feedback for Continuous Improvement in the Classroom: New Perspectives, Practices, and Possibilities (2023).
