The COVID-19 pandemic and the growth of AI have changed the education landscape. What does this mean for standards and assessments?
The promotion and measurement of standards have been a prominent feature of Western education systems for centuries. As far back as the French Revolution, the expanding influence of the Prussian model, which included compulsory testing of students in reading, writing, and arithmetic, was observed across much of Western Europe. More recent examples include the No Child Left Behind Act, followed by the Every Student Succeeds Act (ESSA), both of which featured mandatory testing of students in reading and mathematics.
Although there have been many iterations of standards-based reform over the last two centuries, the central elements remain the same: Education systems hold schools accountable for meeting certain levels of student performance as measured by national and/or state tests in select subject areas — often in relation to the three R’s: reading, writing, and arithmetic. Because these content domains are deemed essential for future success in life, policy makers use them as the starting point for determining the effectiveness of their education systems (Schnepf et al., 2024).

Conventional standards-based reform approaches have been increasingly influenced by the unabated rise of international student assessment measures (Volante, 2018). For example, while the Program for International Student Assessment (PISA) has largely focused on three literacy domains (reading, mathematics, and science) since its initial administration in 2000, the assessments administered by the Organization for Economic Cooperation and Development (OECD) continue to expand. The adoption of additional assessments is a tacit acknowledgment by the OECD that interdisciplinary knowledge and skills, beyond the three R’s, are important in our rapidly changing world.
What students need to learn today — for life tomorrow
In many respects, the global community is at a critical juncture — particularly given the increased challenges presented by the pandemic.
Indeed, a wide body of cross-national research suggests students have suffered significant learning losses in core subject areas as a result of pandemic-related school closures (Volante et al., 2024). It is equally clear that the pandemic has had a negative impact on the development of non-cognitive characteristics and skills — such as those related to mental health, socioemotional learning, and growth mindset — which are also necessary for future success.
Additionally, the recent proliferation of generative language technologies, commonly referred to as artificial intelligence (AI), has raised concerns that current learning standards and assessments may not represent the learning students will need as these new technologies become more advanced and commonplace.
Policy makers appear to be facing a difficult conundrum: Do they double down on their efforts to catch students up in reading, writing, and mathematics performance (as many have done)? Or do they consider more holistic and multifaceted approaches to student development that focus on broader cognitive and non-cognitive outcomes, and which might also require radically different conceptions of educational standards and approaches to student assessment?
In short, we need to think through what cognitive outcomes our children will require to demonstrate their learning. To the extent that AI has already demonstrated it can readily address the first four levels of the revised Bloom’s taxonomy (remembering, understanding, applying, and analyzing), there will likely need to be a shift toward educational standards that focus on evaluating and creating.
The cognitive domain: Essential but insufficient
There is global evidence that the COVID-19 pandemic negatively affected students’ academic achievement. For example, the PISA 2022 results recorded the largest four-year decreases in mathematics and reading ever observed: 15 points and 10 points, respectively (Organization for Economic Cooperation and Development [OECD], 2023a, 2023b).
But perhaps the greatest impact of the pandemic was to further highlight the disparity in the achievement of educational outcomes for children and the educational inequities that exist across families, communities, and nations (e.g., Bennett, 2023; Betthäuser, Bach-Mortensen, & Engzell, 2023; Fahle et al., 2024). Along with the substantial drop in scores, the PISA results also highlight the ongoing socioeconomic disparity in student performance, with the difference in mathematics between students in the highest and lowest socioeconomic groups across OECD countries averaging 93 points, more than the equivalent of three grade levels (OECD, 2023a, 2023b).
It is not at all surprising that policy makers now want to focus their attention on core educational outcomes associated with literacy and numeracy. We appreciate the importance of efforts to address long-standing concerns around these content domains. However, such a focus alone will be insufficient to improve educational outcomes. Once again, the PISA 2022 results provide potential insights. The predicted widening of the performance gap related to socioeconomic status did not occur consistently across countries. The OECD has identified a number of countries, such as Finland, Japan, and Korea, whose education systems seem to better support the academic resilience of their student populations.
The non-cognitive domain
The OECD (2018) defines academic resilience as “the capacity of socio-economically disadvantaged students to achieve higher levels of performance than would be predicted by their family background” (p. 97). We previously expanded the concept of academic resilience to include physical health and mental health and well-being, as these have also been associated with positive educational outcomes (Volante et al., 2023).
Academic resilience and its components are just one domain under the broader category of non-cognitive skills and attributes students need. While there’s no single definitive definition of the non-cognitive domain, the term is widely used in the research literature to describe skills and traits that fall outside the academic content standards but that can affect student academic performance. Some of these include academic mindsets, academic perseverance, interpersonal skills, and metacognitive strategies (Farrington et al., 2012).
Collectively, we have argued for a model that balances academic, mental, and physical health outcomes to ensure that student resilience and growth are supported in a more holistic fashion that considers both cognitive and non-cognitive skills. The OECD began moving in this direction in 2012, when it implemented a program of innovative measures focusing on skills and attributes typically associated with the non-cognitive domain. These skills include Creative Problem Solving (implemented in 2012); Collaborative Problem Solving (2015); Global Competence (2018); Creative Thinking (2022); and the upcoming Learning in a Digital World (2025).
Assessment innovation with AI
Expanding the scope of what’s covered in standards and assessment is just one element of the change we need. We are at a crossroads in educational assessment. The presence of AI in schools is pressing teachers and policy makers to make critical decisions about the future of assessment.
On the one hand, reverting to paper-and-pencil assessments circumvents some challenges posed by AI, namely, the fear that students will use AI to complete their assessments. Closely supervising students during assessments also provides a level of confidence that assessments represent students’ learning rather than AI-generated results. However, such approaches provide only short-term relief regarding the potentially inappropriate use of AI. Specifically, they fail to engage AI as a powerful learning resource and assessment tool. What will it look like to leverage AI to support students’ cognitive and non-cognitive learning in a standards-based orientation to education?
Answering this question requires us to explore innovative forms of assessment that engage more authentic and alternative ways of knowing, at scale. We need to invite students to apply their learning through collaborative, real-world tasks and use these as standards-based performance measures of student learning.
Some large-scale assessments have been experimenting with scenario- and simulation-based questions to move beyond traditional formats and encourage more authentic representations of learning. Indeed, these types of assessments are already being used in a variety of higher education programs and disciplines (Volante & DeLuca, 2024). K-12 settings can draw on these examples to develop similar approaches.
At the classroom level, engaging in innovative assessment approaches may be easier. In our previous work, we outlined several strategies to leverage AI to support deep student learning and provoke assessment innovations in the classroom (Volante, DeLuca, & Klinger, 2023a, 2023b). These strategies include:
- Being explicit about learning goals and how AI connects with those goals.
- Establishing assessment criteria collaboratively with students.
- Leveraging peer and teacher feedback to drive learning forward.
- Reframing assessments as authentic performance tasks.
- Engaging students in collaborative formative assessment.
Putting students front and center in the assessment process and giving them agency to design assessments, select tasks, and engage in collaborative formative assessment helps to develop essential non-cognitive skills while ensuring assessments remain valid in relation to educational standards (Earl, 2010).
Policy and practice implications
Overall, assessment of core content domains (i.e., reading, writing, mathematics) is a necessary but insufficient metric of education quality. Both the increased emphasis on interdisciplinary skills and the rapid evolution of AI suggest that conventional standards and associated assessment methods must evolve. In some respects, we have stalled, reaching the predictable end of the “back to basics” testing movement, and we require a new vision of what constitutes literacy, numeracy, and science standards for the future. Policy makers will need to play a central role in facilitating these shifts in curriculum and assessment.
The evaluative judgments of teachers, school administrators, and even students will undoubtedly assume greater importance in systems that embrace more authentic, performance-based assessments of student achievement. Although this will impose additional costs on education systems that have historically relied heavily on multiple-choice and selected-response test questions, gauging school effectiveness using conventional testing methods no longer seems reliable, representative, or valid. Indeed, national governments and global agencies are already well on their way to reconsidering standardized testing (see OECD, 2023c), with attention to non-cognitive learning standards assessed through scenario-based and authentic tasks.
As Andreas Schleicher (2023), the director of the OECD Directorate for Education and Skills, has stated, “Tomorrow’s schools need to help students think for themselves, develop a strong sense of right and wrong, and interact in a globalised and increasingly digitalised world” (p. 65). Educational standards must expand to accommodate new literacies and competencies for an evolving, more complex future. If we are going to embrace new standards for learning, then we cannot continue to rely on historical assessment practices. New standards require new assessments. The rapid evolution of AI, the call for non-cognitive learning, and the growing interest in addressing complex interdisciplinary issues all require more authentic assessments that prepare our students to face the challenges of the future.
Note: This research is supported by the Social Sciences and Humanities Research Council of Canada.
References
Bennett, P. (2023). Pandemic fallout: Learning loss, collateral damage, and recovery in Canada’s schools. Cardus Foundation.
Betthäuser, B.A., Bach-Mortensen, A.M., & Engzell, P. (2023). A systematic review and meta-analysis of the evidence on learning during the COVID-19 pandemic. Nature Human Behaviour, 7 (3), 375-385.
Earl, L. (2010). Assessment as learning: Using classroom assessment to maximize student learning. Sage.
Fahle, E., Kane, T.J., Reardon, S.F., & Staiger, D.O. (2024). Education recovery scorecard: The first year of pandemic recovery: A district level analysis. Center for Education Policy Research at Harvard University & the Educational Opportunity Project at Stanford University.
Farrington, C.A., Roderick, M., Allensworth, E., Nagaoka, J., Keyes, T.S., Johnson, D.W., & Beechum, N.O. (2012). Teaching adolescents to become learners: The role of noncognitive factors in shaping school performance – A critical literature review. University of Chicago Consortium on School Research.
Organization for Economic Cooperation and Development. (2018). Equity in education: Breaking down barriers to social mobility. OECD Publishing.
Organization for Economic Cooperation and Development. (2023a). PISA 2022 results (Volume I): The state of learning and equity in education. OECD Publishing.
Organization for Economic Cooperation and Development. (2023b). PISA 2022 results (Volume II): Learning during — and from — disruption. OECD Publishing.
Organization for Economic Cooperation and Development. (2023c). Implementation of Ireland’s Leaving Certificate 2020-2021: Lessons from the COVID-19 pandemic. OECD Education Policy Perspectives, 73.
Schleicher, A. (2023). PISA 2022: Insights and interpretations. OECD Publishing.
Schnepf, S., Volante, L., Klinger, D.A., Giancola, O., & Salmieri, L. (Eds.). (2024). The pandemic, socioeconomic disadvantage, and learning outcomes: Cross-national impact analyses of education policy reforms. Publications Office of the European Union.
Volante, L. (Ed.). (2018). The PISA effect on global educational governance. Routledge.
Volante, L., & DeLuca, C. (2024). Large-scale testing in the face of AI. Assessment & Development Matters, 16 (1), 48-52.
Volante, L., DeLuca, C., & Klinger, D. (2023a). Forward-thinking assessment in the era of artificial intelligence: strategies to facilitate deep learning. Education Canada.
Volante, L., DeLuca, C., & Klinger, D.A. (2023b). Leveraging AI to enhance learning. Phi Delta Kappan, 105 (1), 40-45.
Volante, L., Klinger, D.A., Salmieri, L., & Giancola, O. (2024). COVID-19 and learning loss: A global perspective. In S.V. Schnepf, L. Volante, D.A. Klinger, O. Giancola, & L. Salmieri (Eds.), The pandemic, socioeconomic disadvantage, and learning outcomes: Cross-national impact analyses of education policy reforms (pp. 16-28). Publications Office of the European Union.
Volante, L., Lara, C., Klinger, D.A., & Siegel, M. (2023). Academic resilience during the COVID-19 pandemic: A triarchic analysis of education policy developments across Canada. Canadian Journal of Education, 45 (4), 1112–1140.
This article appears in the October 2024 issue of Kappan, Vol. 106, No. 2, pp. 42-46.
ABOUT THE AUTHORS

Louis Volante
Louis Volante is a distinguished professor at Brock University, St. Catharines, Ontario, Canada, and a professorial fellow at the Maastricht Graduate School of Governance, United Nations University-MERIT, Maastricht, The Netherlands. He is co-author of The Pandemic, Socioeconomic Disadvantage, and Learning Outcomes: Cross-national Impact Analyses of Education Policy Reforms (Publications Office of the European Union, 2024).

Don A. Klinger
Don A. Klinger is a professor and deputy vice-chancellor of education at Murdoch University, Perth, Australia. He is co-author of The Pandemic, Socioeconomic Disadvantage, and Learning Outcomes: Cross-national Impact Analyses of Education Policy Reforms (Publications Office of the European Union, 2024).

Christopher DeLuca
Christopher DeLuca is a professor in the Faculty of Education and associate dean of the School of Graduate Studies and Postdoctoral Affairs at Queen’s University, Kingston, Ontario, Canada. He is co-author of Learning to Assess: Cultivating Assessment Capacity in Teacher Education (Springer, 2023).

