What should schools, districts, and states know about the organizations they hire to support school improvement efforts? 

Whether they’re seeking to improve instruction, revamp their curricula, or provide more social-emotional support, schools with limited staff time and resources often turn to outside experts for help. As a result, billions of dollars have been pumped into what has become a robust school improvement industry made up of companies ranging from single-person consultancies and small evaluation firms to whole-school operators and large research and development agencies (Emma, 2015; Rowan, 2002). These companies — also known as external providers — differ considerably with respect to the services they provide, how they provide them, and the extent to which they have evidence that their services are effective. Yet they have become embedded in many states’ overarching approach for improving underperforming schools. 

As the number of external providers continues to climb, educational leaders at all levels can easily get lost in the complexities of figuring out where to spend Title I and other monies. Decisions about which external providers to hire or fire get muddier and muddier, and leaders often end up having little to show for all the money spent trying to improve underperforming schools (Murphy & Bleiberg, 2019). To help educational leaders make important decisions about how to work with external providers, we have conducted a series of studies that seek to map the landscape of how state education agencies (SEAs), districts, and schools engage with these companies. Based on our research, we offer some big-picture considerations for schools, districts, and SEAs, as well as some ideas for the future. 

What research reveals about the school improvement industry 

In our initial analysis of how SEAs planned to use federal turnaround funds for low-performing schools, we sorted states into three categories: internal, hybrid, and external (VanGronigen & Meyers, 2019). Internal states administered school improvement efforts entirely within their SEA, without the help of external providers. Hybrid states used both their SEA and external providers to implement school improvement efforts. In external states, the SEA played no role in school improvement efforts; the work was outsourced entirely to external providers. Only five states (Hawaii, Iowa, Maine, Montana, and South Dakota) were classified as internal — the remaining states engaged to some degree with external providers, including eight (Alaska, Nebraska, Nevada, North Dakota, Oklahoma, Utah, Wisconsin, and Wyoming) that relied solely on external providers to improve schools. 

In a related study, we focused on how states hired and fired external providers, a process that involves four stages in which states take part to varying degrees: solicitation, vetting, monitoring, and evaluation (VanGronigen & Meyers, 2020). After reviewing publicly available documents and speaking with SEA employees, we found that 19 states solicited providers by issuing some kind of request for proposal or application, while 21 states vetted providers before approving them to work with schools and school districts (or other local education agencies). Among those 21 states that vetted, only 11 published their vetting criteria, while 13 published their lists of approved or recommended providers on their websites. Some states, such as Colorado, required districts and schools to choose from their approved list, while other states, such as New Mexico, allowed districts and schools to use a provider they found on their own as long as it met the state’s requirements. 

Seven states monitored their providers during the school year, often by having the provider submit a self-created monthly or quarterly report to the SEA. Finally, only five jurisdictions (Colorado, Texas, Virginia, West Virginia, and the District of Columbia) evaluated providers, often requiring provider-written self-assessments.  

In a third study, we analyzed lists of approved or recommended external providers to learn more about those that made it past SEA review (Meyers & VanGronigen, 2018). Providers varied considerably with respect to firm size, geographic location, expertise, intensity of services, and availability. Most of these providers offered consulting services, and only a few were what we called “comprehensive” providers — regional or national organizations, such as American Institutes for Research and WestEd, that had the bandwidth and resources to be truly responsive to local improvement needs. Of about 150 providers reviewed, only a quarter said their services were research-based, while another 30% or so said they were research-informed. So, only a little more than half of the approved providers acknowledged research or evidence in any way. Furthermore, only about 10% had evidence of impact on student outcomes, and only a portion of these provided such evidence of effectiveness in underperforming schools. In short, despite being approved or recommended by SEAs to work with underperforming schools, there appeared to be little substance behind many of these providers’ claims about what they could do. 

So how can schools, districts, and SEAs ensure that the external providers they choose are able to deliver on their promises? 

Considerations for schools and districts 

  1. Know thyself.

Before bringing in external help, schools and districts should invest time in getting to know themselves and their challenges and needs. Although this seems intuitive, research has found that many of the analyses districts undertake to get a handle on their needs are of low quality and lack depth (Duke, Carr, & Sterrett, 2013; VanGronigen & Meyers, 2020). In other words, the foundation for school improvement work — identifying what needs to be addressed, by whom, and how — appears unstable. Developing a detailed understanding of current needs and the root causes of those needs can better position schools and districts to identify which areas to focus on. 

As part of this process, gathering information from school staff and the surrounding community permits schools to devise richer understandings of challenges, specifically root causes that might not be obvious to decision makers. The more nuanced view that emerges from such investigations might change the scope of work for an already selected external provider or prompt a school or district to select a different provider. This investment of time and targeted work up front can reduce issues of fit and performance on the back end. 

The limited involvement of many SEAs in selecting and monitoring providers places great responsibility on the shoulders of school and district educators. The better that a school knows itself, the more likely it will be to choose a provider with the ability and expertise to aid the school in addressing its specific challenges and needs. 

  2. Be a critical consumer.

Signed into law in 2015, the Every Student Succeeds Act (ESSA) calls for educators to use evidence-based interventions and divides evidence into four tiers: strong, moderate, promising, and “demonstrates a rationale.” Interventions funded specifically for school improvement — many of which are needed in underperforming schools — must have evidence from the first three tiers. These evidence tiers align with the types of research studies conducted on interventions, such as randomized trials versus preliminary pilot studies. By requiring evidence and creating tiers, U.S. federal policy makers seemed to nudge SEAs, districts, and schools toward searching for and implementing “better” interventions. Thus, providers with little to no evidence should not be hired using federal monies. Yet, as our research revealed, numerous providers lack any evidence that their services are informed by, much less based in, research.  

Reasons for these circumstances likely vary — for instance, there may be too many schools labeled as underperforming and too few providers offering services. In some cases, providers may want to have their approaches evaluated using rigorous research methods but lack the capacity to do so. To combat this issue, Massachusetts, for example, devised an explicit theory of action about how it wanted to improve its schools and then approved only providers with the qualifications to address one or more aspects of that theory of action. Other states, like Arizona and Arkansas, appear to permit providers without evidence to continue working with their schools and districts. 

When looking at potential external providers, schools and districts should find out not just whether the provider’s offerings align with a school’s challenges and needs, and whether the provider’s staff has legitimate expertise in performing these services, but also how its approach and services are rooted in rigorous research and evidence. To evaluate providers’ claims, educators must enhance their understanding of what constitutes evidence by, for example, developing a working knowledge about common qualitative and quantitative research designs and considering how findings from a study conducted in one setting — say an urban elementary school — might translate to an entirely different setting — like a rural high school.  

  3. Use external providers to build internal capacity.

Ideally, external providers can increase school and district capacity by bringing additional people to the building to serve as coaches for educators and leaders of professional learning. Schools and districts should view having these providers on-site as an opportunity to build their own internal capacity around the very issues that required outside assistance in the first place. For instance, a school that contracted with a provider to establish meeting protocols and norms could observe how the provider goes about the work so, if inefficiencies with meetings arise in the future, the school can address the issue on its own. 

In this way, schools and districts can leverage the expertise of others to build their own internal capacity over time. This organizational learning can help make the best use of limited funding. Once schools and districts have learned from their providers’ example to solve problems on their own, these scarce resources can be reallocated to address newly identified issues. New Mexico, for instance, partnered with the University of Virginia’s Partnership for Leaders in Education (UVA-PLE), a two-year program with documented evidence of impact that aims to develop systems leadership at the school and district levels. After working with UVA-PLE for a few years, the state created an in-service professional development program for principals based on what it learned from the partnership.  

  4. Manage up. 

Since a number of SEAs don’t seem to have the capacity or desire to vet and evaluate external providers, it is up to schools and districts to “manage up” by proactively submitting information to SEAs about school- and district-specific needs and which providers may (or may not) be well suited to address those needs. 

This feedback is particularly important for SEAs that give schools and districts an approved or recommended list of external providers. Even when SEAs take an active role in vetting providers, they can’t anticipate every challenge and need among their schools and districts. They approve or recommend providers based on the information they have at a given time. Schools can inform districts, and districts can inform (and ideally improve) SEA decisions about which providers to recommend, by offering insights into how things are going on the ground — such as how provider services are working and how provider staff are engaging with educators. 

There is little doubt that these considerations create work for educators in the short run, but we believe that investing more time early on can reduce surprises and increase efficiency and effectiveness over time. By performing root cause analyses and reviewing external providers’ credentials and evidence, schools and districts should be able to make more informed decisions about who comes into their organizations to perform services. And by learning from those organizations and providing feedback to SEAs about their work, they’ll shore up expertise for future improvement efforts. 

Considerations for SEAs 

Traditionally, SEAs have been compliance monitors more than authentic partners in school improvement efforts. Recently, though, a number of SEAs, such as those in Kentucky, Maryland, and Ohio, have repurposed existing personnel or hired new personnel to expand their in-house capacity to assist underperforming schools. Kentucky, for instance, used SEA funds to hire education recovery specialists and assigned these staff members to specific schools around the state. Efforts like these contrast with states such as North Carolina, which formerly had personnel dedicated to underperforming schools but has since moved away from staffing improvement offices with SEA-hired personnel.  

But with educators in all kinds of schools and districts seeing their workloads increase — sometimes substantially — there are things SEAs can do to help their schools and districts make better decisions about how to spend precious monies. 

  1. Be responsive to local challenges and needs.

SEAs should be responsive to their constituents — districts and the schools within them. It’s essential that SEAs develop a more comprehensive understanding of the challenges and needs of their schools and districts, listen to the feedback they share, and be especially mindful of the needs of underperformers facing high-stakes pressures to improve. This enhanced understanding would permit SEAs to make better spending decisions down the road — and to help schools and districts do the same. 

An SEA needs to explicitly seek information and feedback from districts and then act on it in a transparent way. Some SEAs, such as those in Connecticut, Georgia, and Tennessee, have charged specific personnel with developing authentic — not compliance-oriented — relationships with educators in underperforming schools and districts. With richer connections between the organizations, SEAs can better mobilize resources so that kids, educators, and communities receive what they need to organize, carry out, and sustain critical improvement efforts. 

  2. Develop an overarching improvement approach.

An essential leadership practice for improving underperforming schools is to create a compelling, attainable vision and then devise a set of strategies to help realize that vision. SEAs should do the same for their approach to school and district improvement. In reviewing ESSA state plans, we found many school improvement approaches and strategies to be unclear, unspecific, misaligned, or shallow. A structure is only as strong as its foundation, and the lack of detail and clarity from some states offers a less-than-sturdy base for spending decisions and future improvement work. 

To address this issue, SEAs should perform an audit of their improvement approach, including systems, strategies, resources, and personnel. Looking inward, for instance, what do SEAs expect of their schools and districts? Do current SEA staff members have the expertise to complete the work needed to improve schools and districts? Looking outward, are SEAs reaching out to schools and districts for feedback? They might ask educators’ opinions about the clarity of the SEA’s vision and improvement strategies, such as the use of external providers, or the extent to which the SEA walks the walk by providing schools and districts with the necessary resources to meet state (and federal) expectations. 

SEAs should then compare their “inward” and “outward” notes. Where is there alignment? What appears unclear? What do schools and districts need that the SEA is not currently providing, but can? After carefully and critically reflecting on their findings, SEAs should refine their improvement approach so that it better aligns with the challenges and needs of their schools and districts. In its ESSA plan, for example, Washington State explicitly described taking a “whole agency approach,” which was intended to leverage the expertise of employees working in non-SEA agencies like human services. 

  3. Be more involved with external providers. 

Our final consideration brings us back to external providers. SEAs should take a more active role in identifying, soliciting, vetting, monitoring, and evaluating providers. Approximately 50% of SEAs take some responsibility for the first three activities, often publishing a list of approved or recommended providers (VanGronigen et al., 2020). Yet few SEAs offer specifics on how providers were identified and who vetted them. Moreover, less than 20% of SEAs monitored and evaluated their providers, pushing that work down to schools and districts. The SEAs that did monitor and evaluate often required providers to submit self-reported reflections and assessments. 

Real money — sometimes a lot of it — is spent paying external providers for their services. SEAs must ensure they are getting their money’s worth. Government spending scholars argue that “mature” (Rendon & Rendon, 2016, p. 754) procurement processes are measured and continuously improved. Yet, if SEAs fail to rigorously collect and analyze data on provider performance, they are not well positioned to gauge that performance and make improvements accordingly. Thus, SEAs should seek out whatever data they can on potential external providers. 

SEAs should also develop rigorous monitoring and evaluation protocols once providers are brought on board. While this requires real effort and investment on their part, there’s simply no other way for SEAs to ensure the quality of providers’ work. Given their incentive to paint a rosy picture, providers can’t be left to assess themselves. And while states may be tempted to rely on student test scores as evidence of providers’ effectiveness, those data tend to become available in the late summer or early fall, after contracting decisions have already been made for the coming school year. 

Ideally, strong monitoring and evaluation protocols will include both self-reported data from external providers and data from the schools and districts they serve. These data can then be compared to other district- and school-level data that an SEA might collect, such as educator working conditions, educator turnover, and community engagement. Through this more holistic approach to monitoring and evaluating providers, schools, districts, and SEAs can make better decisions about whom to hire, rehire, or fire. States like Colorado, for instance, are on the right track by assigning specific personnel to monitor and evaluate providers. A key next step would be to take a hard look at what data might be collected and used for evaluation and how other measures in addition to student test scores might help SEAs make more informed decisions. 

Now what? 

A previous Kappan article asked whether SEAs should “make or buy” when it comes to school and district improvement efforts (Peurach, Glazer, & Lenhoff, 2012). Our research found that more states appear to be buying than making. That being the case, it’s of paramount importance to ensure that precious dollars are being spent wisely and that schools and districts, especially underperformers, are getting the supports they need. 

Despite how common it is for schools to rely on external providers, the research base on the topic remains thin. With ESSA continuing to call for SEAs to use evidence in their improvement approaches, being an informed consumer is perhaps more important than ever. As we found, though, a number of external providers fail to demonstrate that their approaches are research-based, much less that those approaches actually have evidence of impact. There are steps that schools, districts, and SEAs can take to make these waters less murky. In the meantime, however, we’ll end with a warning for anyone engaging with external providers for some or all of their school improvement services: buyer beware.   

Note: Do you have experiences working with external providers that are offering improvement services? If so, we’d love to hear from you. Reach out to us at bvg@udel.edu. 

References 

Duke, D.L., Carr, M., & Sterrett, W. (2013). The school improvement planning handbook: Getting focused for turnaround and transition. Lanham, MD: Rowman & Littlefield. 

Emma, C. (2015, November). Here’s why $7 billion didn’t help America’s worst schools. Politico. 

Meyers, C.V. & VanGronigen, B.A. (2018). So many educational service providers, so little evidence. American Journal of Education, 125 (1), 109-139. 

Murphy, J.F. & Bleiberg, J.F. (2019). School turnaround policies and practices in the U.S.: Learning from failed school reform. Cham, Switzerland: Springer International Publishing. 

Peurach, D.J., Glazer, J.L., & Lenhoff, S.W. (2012). Make or buy? That’s really not the question. Phi Delta Kappan, 93 (7), 51-55. 

Rendon, J.M. & Rendon, R.G. (2016). Procurement fraud in the US Department of Defense: Implications for contracting processes and internal controls. Managerial Auditing Journal, 31 (6/7), 748-767. 

Rowan, B. (2002). The ecology of school improvement: Notes on the school improvement industry in the United States. Journal of Educational Change, 3 (3-4), 283-314. 

VanGronigen, B.A. & Meyers, C.V. (2019). How state education agencies are administering school turnaround efforts: 15 years after No Child Left Behind. Educational Policy, 33 (3), 423-452. 

VanGronigen, B.A. & Meyers, C.V. (2020). Short-cycle school improvement planning as a lever to launch school turnaround: A descriptive analysis of plans. Teachers College Record, 122 (5). 

VanGronigen, B.A., Meyers, C.V., Scott, C., Fantz, T., & Dunn, L.D. (2020). Soliciting, vetting, monitoring, and evaluating: A study of state education agencies’ use of external providers for school improvement efforts. Manuscript submitted for publication. 

ABOUT THE AUTHORS


Coby V. Meyers

COBY V. MEYERS is associate professor of education and chief of research for the Partnership for Leaders in Education at the University of Virginia, Charlottesville. 

Bryan A. VanGronigen

BRYAN A. VANGRONIGEN is assistant professor of education specializing in educational leadership at the University of Delaware, Newark.