Given their resources and authority, states have a powerful role to play in making sure school improvement is informed by research. 

 

State education agencies have an outsized role to play in efforts to promote research use in education. They wield substantial influence over policy design and implementation and, therefore, over the ways in which research informs state and local decision making. They operate large-scale data collection systems that can fuel research into urgent questions about educational trends and challenges. And they can use their statewide reach to advance research use not just within the state agency, but also within districts and schools. 

However, when I was hired as the first research director for the Massachusetts Department of Elementary and Secondary Education (DESE), few would have thought of state education agencies (SEAs) as a strategic entry point for connecting research and practice. When I began that job in 2007, Massachusetts was the first SEA in the country to create a position focused solely on using research to design and execute its strategic plan. Although No Child Left Behind had added the phrase “scientifically based research” to the education lexicon, we still knew relatively little about how to bridge the research-practice gap, let alone how states could contribute.  


I couldn’t have known it at the time, but my hiring was an early milestone in a movement toward a greatly expanded role for state agencies in advancing the use of research in policy making and practice. By 2015, more than half of state agencies had at least one position dedicated to research or analysis (Schwartz, 2015). That year also marked the passage of the Every Student Succeeds Act (ESSA), which for the first time created a federal definition of “evidence-based” and required states to put far more effort into helping districts implement evidence-based practices, particularly in their lowest-performing schools. Since then, 8 of the 10 grants awarded by the Institute of Education Sciences to evaluate state- and district-level programs and policies have gone to researchers partnering with a state education agency. Clearly, those agencies have continued to strengthen their capacity to do this kind of work. 

While most states have a way to go before they can integrate research throughout their operations, they have made a lot of progress. The states that have done the most to advance research use for systems improvement — for themselves and for districts — have built strong research infrastructures, used both existing research and local data to spur improvement, and formed close partnerships with researchers.  

Building research infrastructure 

Research runs on data, and states have tons of it. Every state collects data on student enrollment, demographics, program participation, attendance, discipline, test scores, graduation rates, and more, and some states have found ways to gather data about topics that are relatively hard to measure, such as students’ perceptions of school climate or the rates at which they complete their college financial aid forms. And while the data that states collect often lack the fine-grained detail of that collected by schools or districts, they offer researchers an opportunity to look at trends for all students statewide and compare the performance of multiple districts over time.  

However, if states want data about local schools to inform their policy-making processes, they must figure out how to get their data into the hands of trusted researchers. The complexity of this task became evident to me on my first day at DESE, when my manager asked me to launch a research project with some local university-based researchers on the academic impact of charter schools. It took nearly a year to negotiate with those researchers over the terms and conditions for sharing student data. The investment was worthwhile, though: The data-sharing agreement we wrote for that first project still forms the core of the agency’s agreements today. It protects student data confidentiality, limits researchers to only approved uses of data, ensures a “no surprises” policy (requiring researchers to share an advance copy of new reports with the agency before publication), and builds in checks to ensure that approved projects actually serve state interests. 


Many agencies publish district data in tabular form on their website, but we found that it’s even more useful to create reports that show how data compare across similar school systems. My team developed an algorithm to identify comparison districts in two ways: one based on grade span, enrollment, and student demographics, and the other based on district financial conditions. We then designed data analysis tools that made it easy for districts to produce tables and graphs showing their own data over the last five years, with comparisons to the statewide trend and to trends for similar districts or schools. (Actually, the state had published this information for years but had packaged it in ways that made it difficult for local educators to access and use. When we shared the new tools with the state superintendents association, we literally got a standing ovation.) 
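To make the idea concrete, here is a minimal sketch, not DESE’s actual algorithm, of one way a comparison-district tool could work: z-score a handful of district characteristics and take each district’s nearest neighbors. The column names and data below are hypothetical.

```python
# A minimal sketch (not DESE's actual algorithm) of identifying comparison
# districts: z-score a few district characteristics, then take each
# district's nearest neighbors by Euclidean distance. All names are hypothetical.
import numpy as np
import pandas as pd

def comparison_districts(df: pd.DataFrame, features: list[str], k: int = 10) -> dict[str, list[str]]:
    """Return the k most similar districts to each district, by z-scored feature distance."""
    z = (df[features] - df[features].mean()) / df[features].std(ddof=0)
    neighbors = {}
    for district, row in z.iterrows():
        dist = np.sqrt(((z - row) ** 2).sum(axis=1))  # distance to every district
        neighbors[district] = dist.drop(district).nsmallest(k).index.tolist()
    return neighbors

# Example with made-up data: one row per district.
districts = pd.DataFrame(
    {
        "enrollment": [1200, 1350, 5400, 4900, 800],
        "pct_low_income": [0.42, 0.45, 0.18, 0.22, 0.51],
        "pct_english_learners": [0.08, 0.07, 0.03, 0.04, 0.10],
    },
    index=["District A", "District B", "District C", "District D", "District E"],
)
print(comparison_districts(districts, ["enrollment", "pct_low_income", "pct_english_learners"], k=2))
```

Run twice, once on demographic features and once on financial ones, an approach like this would yield the two sets of comparisons described above.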

Using existing research 

Because states have considerable latitude both to develop their own policies and to decide how to implement federal policies, they have plenty of room to incorporate insights from previous research into their decision making. One way to do this is to survey the existing literature when beginning new initiatives or making major shifts to an existing program, using the findings to set parameters around the new work. In the early 2010s, for example, Massachusetts created a statewide task force to redesign the state’s educator evaluation system. The task force began by surveying previous research in this area and produced a briefing book of roughly 40 research articles to guide its discussions. These previous studies, many of which described the challenges involved in measuring teacher impact on student outcomes, strongly influenced the design of the state’s new evaluation system, particularly its decision not to attach a specific percentage or weight to improvement on student test scores in educators’ evaluation ratings.  

States can also set grant and program requirements that mandate or promote research-informed practices and, as federal requirements change, they can help districts understand those changes and access relevant research. For example, changes to the federal Title II-A program under ESSA meant that districts could only use those funds to reduce class size if they did so in an evidence-based way. In response, my agency developed a policy brief summarizing the existing research on the impact of class size reduction and its applicability in Massachusetts. The brief concluded that “the impact of a broad-based reduction in class size on student outcomes is likely to be small at best and localized to particular grades and types of students. Meanwhile, small classes are an expensive intervention in terms of fiscal cost, recruitment of qualified teachers, and available building space” (Schwartz, Zabel, & Leardo, 2017). We embedded a link to the report in the Title II-A application form. The year after we made this change, most of the state’s 370 districts receiving Title II-A funding decided that the money would be better spent in other ways; just 21 chose to spend it on reducing class size.  

Using state and local data in new ways 

Educators often make good use of research conducted in other parts of the country, but nothing beats the power of local data to capture the attention of those working at the state or district level. Whether the research focuses on a local program or shows how a general research finding applies locally, the connection to one’s own context marks the research as especially relevant, prompting further discussion and action.  

Traditional accountability measures give states and districts a valuable big-picture view of how students, and subgroups of students, are performing. However, state research teams can produce more fine-grained analyses that can help educators identify the specific factors that have influenced their own students’ performance, such as inequities in resource allocation that may contribute to inequitable outcomes.  


One of my favorite examples of this sort of state-sponsored diagnostic analysis comes from the Tennessee Department of Education (2014), which sought to better understand the school-level barriers to student success on Advanced Placement tests. The state research team began by identifying potential reasons that Tennessee students might not succeed, such as inadequate academic preparation, inadequate Advanced Placement course offerings, and inequitable access to those courses. Then it analyzed state performance data to see how these factors varied across high schools. With this information in hand, it then offered targeted supports to meet each school’s particular needs.  

Similarly, in response to an extensive body of research showing how closely students’ outcomes tend to be linked to their access to effective educators, Massachusetts decided to create district- and school-level reports showing how frequently economically disadvantaged students were assigned an inexperienced or poorly rated teacher, relative to their peers from higher-income households. It also required districts to review these data as part of their Title II-A applications and, where discrepancies were particularly large, to develop a plan to address them. This work is just getting underway but is showing promise in focusing districts’ attention on inequities within their control.  
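As an illustration only, and not the state’s actual report logic, a gap of this kind could be computed from student-level assignment records along these lines; the column names and data are hypothetical.

```python
# Minimal sketch (not the state's actual report logic) of an equitable-access
# gap: within each district, compare the rate at which economically
# disadvantaged students are assigned an inexperienced teacher with the rate
# for their peers. Column names are hypothetical.
import pandas as pd

def access_gap(students: pd.DataFrame) -> pd.DataFrame:
    """Percentage-point gap in exposure to inexperienced teachers, by district."""
    rates = (
        students.groupby(["district", "econ_disadvantaged"])["teacher_inexperienced"]
        .mean()
        .unstack("econ_disadvantaged")
        .rename(columns={True: "disadvantaged_rate", False: "peer_rate"})
    )
    rates["gap_pct_points"] = 100 * (rates["disadvantaged_rate"] - rates["peer_rate"])
    return rates

# Example with made-up student-level records.
students = pd.DataFrame({
    "district": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "econ_disadvantaged": [True, True, False, False, True, True, False, False],
    "teacher_inexperienced": [1, 0, 0, 0, 1, 1, 0, 1],
})
print(access_gap(students))
```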

Another powerful way to use local data is to measure program implementation and impact. In Massachusetts, this played an important role in our school turnaround work in the 2010s. Finding little evidence that could guide our efforts, we asked ourselves how we could learn whether and how our improvement strategy was working. We then used our existing state assessment data to track academic outcomes, and we compared the strategies of low-performing schools that improved over time to those of schools that remained stuck in place (Conaway, 2018). Over time, the data allowed us to identify four practices that improving schools had in common but “stuck” schools did not: strong leadership, shared responsibility, and professional collaboration; a schoolwide focus on improving instructional practices; efforts to provide both academic and nonacademic student support whenever possible; and efforts to create a safe, orderly, and respectful school climate for students and families. Our research demonstrated that implementing those four practices well had a significant positive effect on student outcomes, so we designed our whole strategy for school turnaround around them. Having data that showed what worked and why gave us greater confidence in our approach — not to mention greater credibility with schools, since our strategy was built on what we’d learned from the experiences of schools like theirs. 
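For readers curious about the mechanics, here is a rough sketch, under assumptions that go well beyond what is described above, of how schools might be sorted into “improving” and “stuck” groups from yearly assessment averages before comparing their practices. The data, column names, and threshold are made up.

```python
# A rough sketch (one possible approach, not the agency's actual analysis) of
# sorting low-performing schools into "improving" vs. "stuck" from yearly
# assessment averages, before comparing what each group did differently.
import numpy as np
import pandas as pd

def label_trajectories(scores: pd.DataFrame, min_slope: float = 1.0) -> pd.Series:
    """Label each school by the slope of its average score across years."""
    def slope(group: pd.DataFrame) -> float:
        # Least-squares slope of mean score vs. year (points per year).
        return np.polyfit(group["year"], group["mean_score"], deg=1)[0]
    slopes = scores.groupby("school").apply(slope)
    return slopes.apply(lambda s: "improving" if s >= min_slope else "stuck")

# Example with made-up school-level assessment averages.
scores = pd.DataFrame({
    "school": ["Oak", "Oak", "Oak", "Pine", "Pine", "Pine"],
    "year":   [2015, 2016, 2017, 2015, 2016, 2017],
    "mean_score": [228, 233, 239, 231, 230, 232],
})
print(label_trajectories(scores))
```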

Creating partnerships to support research use 

My agency published all the research it produced, but that was just one small piece of our strategy for ensuring that our research actually got used. That’s because we learned over time that research use is a social activity: If researchers and practitioners never interact with one another, then research will be unlikely to have much influence on practice. So, we focused on building trusting relationships in which researchers and program staff were treated as equal partners in the design, execution, and interpretation of research.  

We began by integrating our internal research staff into the agency’s larger strategic planning process, so they could help their colleagues in program offices to define high-priority research questions for them to investigate. Similarly, when we brought in external researchers to conduct studies, we required them to meet with agency program staff throughout their work, so that staff experts had multiple opportunities to pose questions, catch misunderstandings, analyze findings, and apply them to upcoming decisions. Both of these strategies encouraged researchers and program staff to have regular conversations, increasing the likelihood that research findings would be used.  

In my last few years at the state agency, we increasingly looked to build sustained research-practice partnerships with external research experts on our highest-priority policy topics, to make sure that the researchers developed a rich understanding of our local policy context and that our agency staff built their ability to synthesize and apply research findings. One of my proudest moments as research director came as I was meeting with agency colleagues to design a new study. Near the end of the conversation, I asked them, “Would you prefer to do this study internally or externally?” They responded, “Internally. So, either us or our research partner.” Their external partner had become so integrated into their work that they now saw the partner as an “internal” member of their team.  

New directions for states 

State agencies still vary tremendously in their capacity to facilitate research use. Some are just beginning to build the infrastructure and relationships this work requires, while others are cutting a new trail for the future of research use in SEAs. But the language in ESSA defining an “evidence-based activity, strategy, or intervention” and requiring its use has forced all states to get better at using and interpreting research, especially when it comes to helping their lowest-performing schools, where these requirements are most stringent. And organizations such as Results for America and the American Youth Policy Forum have created networks of SEA research staff who are learning from one another as they expand their research portfolios. I’m optimistic that these investments of time and resources will spur innovation in states as they ramp up their use of research for systems improvement.  

I am particularly enthusiastic about the efforts I’ve seen in a few states to establish networks of districts that work together with researchers to design studies and interpret findings. In Rhode Island, for example, a group of districts has come together with the Annenberg Institute at Brown University to pilot specific research-based pandemic recovery strategies in ways that improve individual district implementation and build researchers’ and practitioners’ knowledge about the strategy over time. Tennessee has launched a similar effort, the COVID Innovation Recovery Network, which is led by Tennessee SCORE and the Tennessee Education Research Alliance. This sort of embedded, networked approach is likely to be effective in building research literacy and promoting research use in districts as well as the state agency itself.  

Most importantly, state-led efforts to connect research and practice have gained momentum, and we know more than ever before about how to do this work well. State agencies are becoming increasingly adept at building, using, and sharing research to improve their school systems, and I’m excited to see where they take this work next.   

References 

Conaway, C. (2018). Tier 4 evidence: ESSA’s hidden gem. Phi Delta Kappan, 99(8), 80. 

Schwartz, N. (2015). Making research matter for the SEA. In B. Gross & A. Jochim (Eds.), The SEA of the future: Building agency capacity for evidence-based policymaking (Vol. 5, pp. 23-39). Edvance Research, Inc., Building State Capacity and Productivity Center. 

Schwartz, A.E., Zabel, J., & Leardo, M. (2017). Class size and resource allocation: ESE Policy Brief. Massachusetts Department of Elementary and Secondary Education. 

Tennessee Department of Education, Office of Research and Policy. (2014). Advanced Placement strategy: A framework for identifying school-level barriers to AP success. Tennessee Department of Education. 

ABOUT THE AUTHOR


Carrie Conaway

CARRIE CONAWAY is a senior lecturer on education at the Harvard Graduate School of Education and former chief strategy and research officer for the Massachusetts Department of Elementary and Secondary Education. She is a coauthor of Common-Sense Evidence: The Education Leader’s Guide to Using Data and Research.