In 1999, “The Matrix” hit U.S. theaters, presenting moviegoers with a dystopian vision of a future where humans live inside virtual reality. In fact, in “The Matrix,” there is no reality, just computer-generated simulations that run the length of a human’s lifetime. Our stories are written for us. As we live, the machines fill in the blanks, and it’s impossible for us to distinguish human from computer-generated experiences.
What if I told you that the ideas behind “The Matrix” are coming into our schools? Well, it’s happening. In November 2022, a company called OpenAI launched a chatbot called ChatGPT. Essentially, it is a software application that produces text that mimics human conversation, drawing on patterns learned from vast amounts of human-written text. The tool is versatile, producing everything from computer code to music compositions to dramatic plays. In fact, it models human language so well that some speculate that ChatGPT could eventually be used as a therapist.
The controversy for educators comes from its ability to generate text for student essays and homework assignments, and even answers to test questions. Could this lead to widespread cheating? How will educators know if student essays are real? What happens to the education system if the machines are filling in the blanks?
The emergence of ChatGPT has major implications for K-12 education, so I decided to devote this column to offering useful ideas for school leaders. For technological expertise, I reached out to a university colleague, Suresh Venkatasubramanian, for an interview. Venkatasubramanian is a professor of data science and computer science at Brown University. He is one of the leading experts on the development of guardrails for safely engaging with automated programs. The Biden administration brought him to Washington, D.C., to serve as assistant director for Science and Justice in the White House Office of Science and Technology Policy, where he helped produce the Blueprint for an AI Bill of Rights.
In post-2020 fashion, we met via Zoom. Venkatasubramanian joined in from Utah, where he has spent time over the past decade working with organizations pushing for data privacy and computational safety.
Can you help us understand what scientists are seeking to accomplish by developing artificial intelligence (AI)?
I would say there are two very specific goals. Then there are broader, more metaphysical ambitions.
The first goal is to build automated systems that can match human performance in a task, and the task could be anything. Related to that, the second goal is to find out if we can build automated systems that can solve problems that humans cannot solve. Because automation allows us to see patterns that we can’t see on our own, can we collect enough data and see enough patterns to do something that we could never imagine? Those are the two very specific goals.
The metaphysical goal is this: By building automated systems that can mimic or do better than humans, can we better understand how humans operate? In other words, does the search for artificial intelligence tell us more about intelligence?
Helping us understand human cognition is an idea that transitions us to thinking about education. One of the ultimate goals of a functioning education system is to maximize the cognitive abilities of a society — to achieve some sort of enlightened understanding that helps us solve collective problems. However, that starts with the ability to maximize individual human capacity.
From an educational standpoint, what do you see as the primary benefits of AI, and how could it be useful for improving classroom instruction, especially at a large scale?
Artificial intelligence is machine learning, where the goal is to collect a lot of data and learn from it. When people are talking about AI, they’re really talking about machine learning, which happens because of the existence of data. Machine learning, data-driven automation, and automated tools could be useful to discover patterns of activity that might be helpful for teachers. So, if a teacher is trying to communicate some concept, the data from different teachers trying different approaches could be collected to get some sense of what’s working and what’s not.
Effective testing is one way to think about this. Because machine learning can deal with more variables in a model, maybe there’s a way to build a model to predict what’s going to be effective and then take it apart and see how the approach seemed to work.
Do you have an example of how this works in practice?
Sure. Since chemicals are expensive and experimenting with them is hard to do, my colleague built a model that used machine learning to try to predict the outcome of a chemical reaction.
However, the goal was not to make the prediction. The goal was to take apart the model and see what variables the model picked up on to make the prediction. What do the specific variables do? What chemical interaction can we now bring to light? Experts look at it and say, “Oh, the model picked up this? Maybe there’s something interesting going on here!” So, it’s almost like you ask someone to do something and you pull apart their brain and say: “Why do they do that?”
That often gets left behind when you talk about AI because we think of it as a black box that can do all these things we see in movies. The black box part is not what’s most interesting. It’s taking AI apart to see why it did what it did and to see if that gives us a new idea for what we might do. Think of it as generating intuitive hypotheses for us to test.
So, what is ChatGPT?
ChatGPT is a large language model developed by a company called OpenAI. Its purpose is to interact with text prompts and give plausible-sounding answers. Most large language models try to predict the next word in a given piece of text to make it sound plausible, and then continue predicting in that manner. ChatGPT is the latest iteration of this, and it has a lot of very clever engineering to make it extremely plausible, at least to the extent that it’s caught people’s imagination and we’re talking about it.
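The next-word prediction Venkatasubramanian describes can be sketched in miniature. The toy model below is just a bigram word counter, nothing like the scale or architecture of ChatGPT itself, and the corpus and function names are invented for illustration. It picks the most common next word seen in its training text and keeps going, which is the same basic idea writ very small.

```python
from collections import Counter, defaultdict

# Tiny training text (invented for illustration).
corpus = (
    "the teacher asks a question and the student answers "
    "and the teacher asks again"
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_text(prompt_word, steps=4):
    """Repeatedly append the most common next word, like a
    (vastly simplified) language model completing a prompt."""
    words = [prompt_word]
    for _ in range(steps):
        candidates = following.get(words[-1])
        if not candidates:  # no known continuation; stop early
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # → "the teacher asks a question"
```

Note that the sketch produces text that sounds plausible because it echoes its training data, not because it understands anything — which is exactly why, at scale, such systems can be fluent and confidently wrong at the same time.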
Should school districts move to ban ChatGPT and similar AI tools?
They can try, but they will fail. How are they going to stop people from using it? It’s just asking for trouble.
You’re one of the leading researchers on guardrails for protecting the public from some of the hazards of AI. Are there specific recommendations you have for school boards, superintendents, and school administrators?
I am very unhappy with the way things like ChatGPT have been rolled out without guardrails. I think the reason school leaders are struggling, and I have great sympathy for their struggle, is because they’re being put in an impossible situation where a piece of tech was introduced without any scrutiny. Now, schools have to deal with the mess, and you can’t put that genie back in the bottle. So, what should they do?
We must understand that this is ground zero. To have an informed citizenry who can understand how technology is playing a role in our lives, we must start in schools. Honestly, I think ChatGPT is a fantastic example of how AI presents a learning opportunity for teachers and students.
They should be talking about what it’s doing. What is AI? What can something like ChatGPT do and not do? We’ve seen on social media how people play with ChatGPT and find where it fails. It’s quite funny. But those failures also give you information about what this machine learning system actually does. Students need to understand what the technology is really doing. If they were to learn that, they would probably not use it to cheat.
Teachers and students should be having a dialogue about this. Conversation can create a pathway to more literacy about AI and technology in general. Teachers must work with the curriculum, but if they had freedom to think of this as an opportunity, a lot of wonderful things could be done. We could be getting feedback from teachers on what they’re trying, how they’re being creative, what challenges they’re facing. This information would be valuable for those seeking to understand what the guardrails need to look like.
The reason I can talk about guardrails in many areas of AI is because I understand those areas. I worked with the people who deal with those problems, and I understand their concerns. With ChatGPT, we don’t yet know what the classroom benefits and harms are. We do know that our teachers understand our students’ needs, and teachers should be given free rein to try things out and let us learn from it.
This article appears in the April 2023 issue of Kappan, Vol. 104, No. 7, pp. 60-61.
ABOUT THE AUTHOR

Jonathan E. Collins
Jonathan E. Collins is an assistant professor of political science and education at Teachers College, Columbia University, New York, the associate director of the Teachers College, Columbia University Center for Educational Equity, and the founder and director of the School Board and Youth Engagement (S-BYE) Lab.

