
Given all the uncertainty they faced this past spring, school district administrators can be forgiven for behaving like mad scientists, racing to cobble together a new instructional system from whatever tools and materials were at hand. But now it’s time for them to take a more systematic approach to assessing, learning from, and improving upon their efforts to support students through COVID-19. This kind of evaluation is essential for all districts, even those with few or no research staff. And the good news is that it doesn’t require fancy statistics, just two simple steps: collecting baseline data and identifying a comparison group.

Let’s suppose, for instance, that a district is planning to offer online supplemental mathematics courses to students who fell behind after in-person classes ended in spring 2020. How can district administrators figure out whether this strategy actually helps students catch up?

First, they should collect some baseline data on students’ course-related knowledge and skills. If possible, they should do this right before the online courses begin, rather than relying on data from the pre-COVID era, since any learning gaps are likely to have widened while schools were closed. Districts need to know where students stand right now, not where they stood six months ago.

What baseline data will be most useful, and who should collect it? That depends. If the goal is to respond to students’ social and emotional needs, then it might make sense to start with data that the district already collects, focusing on attendance, behavior, and school climate, for example. In the case of the supplemental math course, however, a formative assessment of course-related knowledge and skills might be more useful — even a simple teacher-designed, classroom-based assessment can work, so long as the pre- and post-course assessments are similar in content and difficulty. And if it informs the teacher as to precisely which standards, topics, and skills students need help with, all the better.
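For districts that want to see how this plays out in data, here is a minimal sketch in Python of computing each student’s pre-to-post gain from such an assessment. The file name and the column names (student_id, pre_score, post_score) are hypothetical stand-ins for whatever the district’s data system actually uses.

```python
# A minimal sketch of computing pre-to-post gains from a simple
# teacher-designed assessment. The file name and column names
# (student_id, pre_score, post_score) are hypothetical.
import csv

with open("math_assessments.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# One gain score per student: post-course score minus baseline score.
gains = {r["student_id"]: float(r["post_score"]) - float(r["pre_score"])
         for r in rows}

average_gain = sum(gains.values()) / len(gains)
print(f"Average pre-to-post gain: {average_gain:.1f} points")
```

Keeping the pre- and post-course assessments similar in content and difficulty is what makes this subtraction meaningful in the first place.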

However, to gauge the effectiveness of the supplemental math program, it won’t be sufficient just to see how much progress students made from the pre- to the post-course assessment. After all, students’ outcomes could be influenced by all sorts of factors. Right now, for instance, many students are experiencing significant trauma related to race and policing, family finances, the health of loved ones, and more, all of which could affect their academic progress. For that matter, you can’t necessarily assume that the students taking the supplemental course are comparable to those who aren’t taking it, especially if they’ve chosen to participate. If they’ve made great progress, maybe that’s because they have parents who signed them up — and have the resources to support their learning in other ways that boosted their scores independently.

In that case, how can you gauge the quality of the course? How can you tell whether it’s an effective response to COVID-19 or whether it needs improvement?

To answer these questions, find a second point of comparison. For instance, see if you can identify a group of students who didn’t take the supplemental math course, but who are similar to those who did. Perhaps the students who got the supplemental course can be compared to a similar group of students at another school or district where the program wasn’t offered. Alternatively, you could compare the performance of the same participants on two different assessments — one related to the course content (in this case, assessing their progress in math) and one that is not (testing them in reading, say). If the students improve in math but not in reading, that would suggest that the math program made a real difference. On the other hand, if they make similar improvements in both subjects, then the supplemental course probably did little to boost the progress they would have made on their own, without the extra support.
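As a rough illustration of this comparison-group logic, here is a minimal sketch in Python that contrasts average gains for participants against a similar group of non-participants; all of the gain values below are invented for illustration. The same subtraction works for the two-assessment version: compare participants’ math gains to their own reading gains.

```python
# A minimal sketch of the comparison-group logic described above:
# compare average gains for course participants against a similar
# group of non-participants. The gain values are illustrative.

participant_gains = [8.0, 12.0, 5.0, 9.0]   # supplemental-course students
comparison_gains = [4.0, 6.0, 3.0, 5.0]     # similar non-participants

def mean(values):
    return sum(values) / len(values)

# The difference in average gains is a rough estimate of the course's
# effect, assuming the two groups really are comparable.
effect = mean(participant_gains) - mean(comparison_gains)
print(f"Estimated effect of the course: {effect:.1f} points")
```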

Or, districts could get even more sophisticated by randomly assigning some students to take the online courses and others not, then comparing their progress. Here, it’s fair to assume that the two groups of students really are similar, on average, in every way except for their participation in the program. Thus, if the students taking the course see better outcomes, then the district can be reasonably confident that it was the course itself that made the difference and not some other factor. That’s why random assignment is touted as the “gold standard” of education research. It’s much trickier to set up this kind of experiment, though, and it involves providing a support to some students while denying it to others, which many educators are loath to do.
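For districts able to run such an experiment, the assignment step itself is simple. Here is a minimal sketch in Python that shuffles a roster and splits it in half; the student IDs are illustrative.

```python
# A minimal sketch of random assignment: shuffle the eligible students,
# then split them into a course group and a comparison group. The
# student IDs are illustrative.
import random

students = [f"student_{i}" for i in range(1, 21)]
random.shuffle(students)

midpoint = len(students) // 2
course_group = students[:midpoint]
comparison_group = students[midpoint:]

# Because assignment is random, the two groups should be similar on
# average in every respect except taking the course.
print("Course group:", course_group)
print("Comparison group:", comparison_group)
```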

Why go to all this bother to assess the effectiveness of your district’s response to COVID-19, a disaster of the sort that — we can only hope — we’ll never see again? For one, because local restrictions on in-person schooling may ebb and flow, possibly over several years. The more districts learn from their initial response, the more effectively they’ll be able to respond if and when their schools are forced to close again — and to respond to snowstorms, hurricanes, or other events that, while not as severe, can also result in lengthy closures. Further, some of the innovative programs that districts create now, in response to COVID-19, may be worth continuing long after the pandemic ends. By making systematic efforts to assess, refine, and improve them, we may end up with a slew of highly effective and locally proven new programs, services, and interventions. At the same time, districts will learn a lot about how to learn.

District administrators don’t need to become statisticians to do this work, nor do they need to design the perfect research project. They just need to reserve some time now to collect data that allows them to assess progress later and improve their programs over time.

ABOUT THE AUTHORS


Nora Gordon

NORA GORDON is an associate professor at the McCourt School of Public Policy at Georgetown University, Washington, DC. She is a coauthor of Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research.


Carrie Conaway

CARRIE CONAWAY is a senior lecturer on education at the Harvard Graduate School of Education and former chief strategy and research officer for the Massachusetts Department of Elementary and Secondary Education. She is a coauthor of Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research.
