School report cards can be useful to families weighing school options. But if designed poorly, they can distort perceptions of school quality.
Amid the frequent news stories about “failing” public schools, it’s easy to lose sight of the many highly successful schools. But where one’s local school falls on this spectrum can be hard to determine. In part, this is because gathering information on school quality can be difficult and time consuming, leaving many to rely on school demographics, personal observations, or word of mouth within their social networks to estimate a school’s quality.
Recognizing the need for information in public education, Congress mandated in the No Child Left Behind Act (NCLB) that all states create and disseminate annual school performance report cards. A decade after NCLB passed, school report cards are widely available for nearly every school in the country. As the amount of school performance data grows, so too has public attention. In major cities like New York and Washington, D.C., days of media coverage often follow the release of the annual report card as schools become publicly identified as high flyers or huge disappointments. Research suggests the public does pay attention to these reports, with report card grades influencing parent satisfaction and housing prices (Jacobsen, Saultz, & Snyder, 2013; Figlio & Lucas, 2004). But this responsiveness should be viewed cautiously because report card elements not related to school performance, like how data are presented, also influence satisfaction (Jacobsen, Snyder, & Saultz, in press). Thus, while school performance report cards hold potential to inform, they also can distort views about public education performance and affect parent and voter support for public education.
Why school report cards?
Providing performance report cards can enlighten the public and lead to an improved education system.
Two facets underpin the rationale for school performance report cards: their ability to enlighten and embarrass. The American public spends over $500 billion each year on public education and wants to know whether education leaders used these funds wisely. In theory, providing performance report cards can enlighten the public by allowing people to distribute political and financial rewards and punishments based on more accurate and consistent information.
School report cards also pressure schools into higher performance through embarrassment. No one likes to have his or her failures made public. Current policies that publicize performance information are based on the idea that schools, seeking to avoid this public embarrassment, will behave in ways to improve outcomes and avoid negative performance reports. Through these two mechanisms — enlightenment and embarrassment — school report cards are expected to lead to an improved education system.
But there may also be a downside to enlightenment and embarrassment. These theories largely overlook the ability of individuals and bureaucracies to ignore information that sheds a positive light on schools, and the possibility that embarrassing schools can demoralize faculty and depress performance further. These negative outcomes also must be considered as we further publicize school performance information.
Expanding an old idea
Despite NCLB’s heavy emphasis on data and dissemination, providing the public with school performance information is not a new idea. In fact, early efforts to evaluate the American education system began in the 1960s with the development of the National Assessment of Educational Progress (NAEP). NAEP, commonly referred to as the Nation’s Report Card, has publicly reported what students know and are able to do across a variety of subjects since 1969. In the 1980s, the U.S. Department of Education compared states according to student performance on the SAT and ACT in the widely distributed “wall chart.” In the 1990s, many states developed their own measures of school performance, with most states publishing some form of academic performance information even before NCLB mandated this practice. While NCLB did not initiate the publication of school performance data, it has significantly expanded the volume, reach, and sophistication of its presentation.
Although NCLB made dissemination more widespread, it left many decisions regarding the amount, type, and presentation of data up to states. Some states publish a few pages; others provide report cards exceeding 20 pages. For example, Wisconsin’s school report cards are two pages, while New York’s are 27 pages. These different lengths often stem from decisions about what data to include. While some states report academic performance — math and reading test scores — and little else, others report on teacher qualifications, school safety, numbers of classes offered, academic performance disaggregated by subpopulations, and include multiple interpretations of the same data. Additionally, while most states include an overall school performance measure, many different formats are currently in use. For example, California uses a performance index score ranging from 200-1000, Wisconsin assigns schools to performance categories such as “Exceeding Expectations,” and Louisiana gives each school an A-F letter grade.
As states continue to change and expand their performance reporting systems, they often rely upon trial and error to determine the optimal way to communicate the complex tasks of schooling. Several states recently changed their reporting formats — Maine and Oklahoma have switched to letter grades — and some states acknowledged that such changes could adversely affect public and parent perception. In Kentucky, for example, school leaders and state level officials expressed concern that parents may misinterpret the new data (Warren, 2012). Thus, challenges arise as states expand publicly available school performance data.
Challenges encountered
Predictably, states continue to encounter numerous challenges when designing and disseminating report cards. However, discussions regarding report cards and their use often don’t happen as changes occur. Here we discuss three challenges in an effort to spur conversation and deliberation.
Data interpretation
Decisions on data presentation are important because they significantly influence how the public understands the data. Our research suggests that the public is sensitive to how data are presented. Seemingly small changes can result in sharply different interpretations of how well a school is performing. In a nationally representative survey experiment, we analyzed how over 1,100 people interpreted different data formats by randomly assigning individuals a report card to view. Some viewed a school report card where the school received a letter grade such as an “A”; some viewed a report card that reported the percent of students who met educational goals; and others viewed a report card that provided performance labels such as “Advanced” or “Basic.” Then we asked study participants to judge the quality of the school they viewed.
School performance data formats aren’t simply technical decisions. They are also political decisions that must avoid unintentional erosion of public support for education.
Ostensibly, an “A,” “90% proficient,” and “Advanced” are equivalent, and thus we expected little to no difference in how individuals judged schools with these different labels. But this wasn’t the case. Instead, individuals viewing the “A” rated school reported significantly higher levels of satisfaction with the school’s performance than those who viewed the other two formats. Further, individuals were much less satisfied with a school’s performance if it received a “C” rather than being labeled as performing at the “Basic” level (Jacobsen, Snyder, & Saultz, in press). These results suggest states can shape how the public believes schools are performing simply by selecting different reporting formats. That could lead to a host of negative effects ranging from reduced willingness to pay for education to a fundamental distrust of the system. Thus, decisions about school performance data formats aren’t simply technical decisions. They are also political decisions that must avoid unintentional erosion of public support for education.
Data overload
While most policy makers agree that publicizing school performance data is important, how much is too much? Today, many states subscribe to the “more is better” approach and have expanded the amount of data made publicly available. This has created very long report cards. For example, one printed version of a school report card in Kentucky can exceed 45 pages.
To understand how parents use the report card data, we joined Kentucky parents participating in the Commonwealth Institute for Parent Leadership (CIPL) as they learned about school report cards. Consistently, parents expressed being overwhelmed and confused by the data even after receiving training on how to interpret reports. As one parent said, “I have a college degree; I work at a college; and this thing is confusing.” This sentiment was echoed by many of the parents with whom we spoke.
But beyond being overwhelming, the vast amount of data led parents to a number of different interpretations. Parents reported that different data in the report cards, as they understood them, told conflicting stories about their school’s performance. As one parent said to us: “You had this letter grade, you had this ranking number, you had a percentile and then you had the raw score, and all four of them, based on how you were looking at the data, you could’ve come up with all kinds of different interpretations.”
In fact, upon closer inspection, parents raised concerns about how small differences on one measure could imply very large differences on another. In particular, the role of cut points between two performance labels was a point of much discussion. One parent called the overall score confusing, adding, “When you have one school that’s at 61.2 and ‘needs improvement,’ and another school at 61.8 and they’re ‘proficient,’ I mean . . . that doesn’t make any sense.”
What appears as a small difference on one measure resulted in a large distinction on another. This troubled parents as they examined the data. As a result, some of the parents in the training grew skeptical of data, leading them to question the usefulness of the report cards. After discussing the data, one parent summarized how many felt: “You have to take it [the report card] with a grain of salt.”
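The discontinuity these parents describe can be sketched in a few lines of code. The 61.5 cutoff and label names below are hypothetical, chosen only to mirror the 61.2/61.8 example above; they are not Kentucky’s actual scoring rules.

```python
# Hypothetical cut points: scores at or above 61.5 earn the label
# "Proficient"; anything lower is "Needs Improvement".
CUT_POINTS = [(61.5, "Proficient"), (0.0, "Needs Improvement")]

def label(score):
    """Map a numeric school score to a performance label."""
    for cutoff, name in CUT_POINTS:
        if score >= cutoff:
            return name
    return "Needs Improvement"

# A 0.6-point gap in scores produces categorically different labels.
print(label(61.2))  # Needs Improvement
print(label(61.8))  # Proficient
```

Because the mapping is a step function, two schools separated by a fraction of a point can receive categorically different labels, which is precisely the interpretation problem the parents raised.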
Data consequences
Report cards are often implemented simultaneously with many other reform efforts. Other policy changes may significantly affect school ratings and, unless these changes are broadly understood, they may be misinterpreted. This happened recently in New York City when, in tandem with state reforms to achievement tests, the city changed how grades were assigned in an effort to spur continuous improvement. The change by the city limited the number of schools that could receive “A” and “B” grades. As a result, over 71% of elementary schools saw their report card grade decline. Unaware of this policy change, many parents likely became alarmed when they saw their schools go from an “A” to a “C” in just one year.
To understand how parents responded, we analyzed parental satisfaction changes on the annual New York City School Survey. Our research found that parents, on average, reported lower levels of satisfaction after the report card grades declined. This occurred even though actual school test score performance was mixed and, in some cases, actually improved (Jacobsen, Saultz, & Snyder, 2013).
Implications for policy and practice
There has been a steady decline in public confidence in the public education system. Will school report cards exacerbate or reverse this trend? While it’s too soon to know for sure, our research demonstrates that policy choices regarding school report cards can erode confidence in public schools without warrant. Currently, our ability to collect and disseminate data outstrips our understanding of how data are used. Thus, school leaders, policy makers, and researchers should proceed cautiously. While we present a number of challenges that commonly arise above, we do believe school leaders and state policy makers can take steps to make data more beneficial for the public.
First, data dissemination efforts must be accompanied by equally ambitious public information campaigns. District and state education leaders ought to provide public forums for parents and interested citizens to learn about and discuss school report cards. We especially encourage these efforts when changes to the report card occur. If scores dramatically rise or fall, special attention must be spent communicating with the public. Those changes could be due to test changes, introduction of new standards, new report card formats, or actual performance changes. Providing opportunities for discussion is just as important as providing the data.
Second, school leaders might identify a few dedicated parents who are interested in helping communicate the details of school report cards. Schools can work with the PTA or other associations to provide leadership opportunities for parents who are particularly interested in school performance data. In Kentucky, each school district identifies parents for the CIPL trainings (described above). Furthering these efforts will improve school-community relations, better educate the public, and open lines of communication, enabling greater engagement in public education.
Third, as school communities discuss report cards, keep in mind that even lengthy report cards can’t cover all aspects of schooling. Report card data mostly reflect academic achievement (often only reading and math test scores). Yet parents and the public want a wide range of outcomes from public education, including developing students’ social skills, cultivating healthy physical habits, and teaching critical thinking skills (Rothstein, Jacobsen, & Wilder, 2007). Communities and school leaders must also understand what is not included in school report cards. The temptation with any report is to let the data dictate what is important. While the data in school report cards can provide insights regarding school performance, no current iteration accurately captures the multiple desires the public has for its schools.
Conclusion
School report cards provide an excellent opportunity for school leaders and policy makers to communicate with parents and the public. If used appropriately, report cards can be a powerful tool — an informed and active public is one of the most valuable components of a healthy democracy. The success of school report cards should not be measured simply by how much data are available, but rather by how much discussion and engagement report cards spark.
References
Figlio, D.N. & Lucas, E. (2004). What’s in a grade? School report cards and the housing market. The American Economic Review, 94 (3), 591-604.
Jacobsen, R., Saultz, A., & Snyder, J.W. (2013). When accountability strategies collide: Do policy changes that raise accountability standards also erode public satisfaction? Educational Policy, 27 (2), 360-389.
Jacobsen, R., Snyder, J.W., & Saultz, A. (in press). Informing or shaping public opinion? The influence of school accountability data format on public perceptions of school quality. American Journal of Education.
Rothstein, R., Jacobsen, R., & Wilder, T. (2007). Grading education: Getting accountability right. New York, NY: Teachers College Press.
Warren, J. (2012, January 29). New Kentucky student assessment test: Prepare to be confused. Lexington Herald-Leader.
Citation: Jacobsen, R., Saultz, A., & Snyder, J.W. (2013). Grading school report cards. Phi Delta Kappan, 95 (2), 64-67.
R&D appears in each issue of Kappan with the assistance of the Deans’ Alliance, which is composed of the deans of the education schools/colleges at the following universities: Harvard University, Michigan State University, Northwestern University, Stanford University, Teachers College Columbia University, University of California, Berkeley, University of California, Los Angeles, University of Michigan, University of Pennsylvania, and University of Wisconsin.
ABOUT THE AUTHORS

ANDREW SAULTZ is an assistant professor of educational policy at Pacific University, Forest Grove, OR.

JEFFREY W. SNYDER is a graduate student at Michigan State University, East Lansing, Mich.

REBECCA JACOBSEN is an associate professor in the Department of Educational Administration at Michigan State University, East Lansing, Mich.
