Educators must be savvy consumers of new curricular materials and not assume that they’re a panacea for all educational ills.
Peer into any classroom, grade level, school, or state, and you’re sure to find reading curriculum materials purchased from an educational publishing company. In fact, nearly three-quarters of U.S. elementary schools and teachers use core reading programs (Education Market Research, 2010). As is often the case, curriculum materials come and go, swinging back and forth with the educational pendulum. Publishing companies work hard to keep up, often just reorganizing, relabeling, and adding in more current technology to produce what they like to tout as new editions that are “the answer to schools’ academic woes.”
Unfortunately, “live and learn” often becomes the mantra that accompanies curricular purchases as schools serve as a revolving door for new programs and materials. Districts make purchases and then play the waiting game to see how those materials work for staff and students alike, sometimes collecting formal data but most often measuring the success in informal ways. But the bottom line is this: No one has found the holy grail of reading products, one that outperforms every other instructional material on the market in terms of student achievement on a large scale across demographics and teaching situations. No product is perfect. Three experiences with buying new materials — specifically, for reading instruction and intervention — have taught me some valuable lessons.
# 1. Why it worked for some but not for others
I once worked with a large, suburban district that purchased a computer-based, teacher-supported reading curriculum for middle and high school students who struggled in reading. Teachers and interventionists were provided with guidelines that required strong fidelity to the program.
One of the most respected and knowledgeable teachers in the district began using the program and said that it often contradicted what she knew to be true about high-quality reading instruction. Much of the program focused on reading to complete skill-and-drill activities, with few authentic reading and writing experiences that allowed the reader to connect with and respond to the text in meaningful ways. Because of administrators’ strict directives about the program, she responded by going underground in her use of it — that is, she made sure students logged in enough so those watching saw it was being used with fidelity, but when she and her students were the only ones in the room, she adjusted instruction, throwing out parts she felt didn’t work and adding missing essentials.
As my work with the district began, the administrative team had just started to discuss the effectiveness of this program, which had been in use for several years. An administrator had uncovered some disturbing trends among students who had been in the program. Within the data, two distinct groups emerged: one group of students making average progress on state standardized tests and another group that had regressed considerably from one year to the next. The team was charged with finding out why one program was producing two different results.
First, we examined the factors we could control. Could the teacher account for the difference? One teacher who used the program had great results for almost all of her students, but every other teacher had some students in both categories — those who succeeded and those who regressed. The conversation quickly turned to fidelity. Was it because the one teacher used the program exactly as she was supposed to, and the others did not? Would a more forceful, top-down approach to monitoring teachers’ use of the program solve the dilemma? These were the questions we asked ourselves.
I knew this line of reasoning wasn’t right; research supports the idea that as teachers grow in both knowledge and experience, modifying programs can help them meet student needs (Kersten & Pardo, 2007). You see, the one teacher who was producing successful students was the teacher described above, who recognized the program’s weaknesses and adjusted for them. She was succeeding despite the program but was afraid to tell administrators that she wasn’t using it with fidelity.
So we continued our quest to learn why students in other classrooms either succeeded or failed terribly — failing “terribly” meant dropping 30 to 40 points from one year to the next on the state achievement tests. We began looking at the program-to-student match. We discovered that students who were significantly behind their peers generally grew when exposed to the program, but students who weren’t as far behind regressed. The data revealed that the curriculum and demands within this program were a good match for students who were more than two years behind but a bad match for borderline students. We then looked at how students had been matched to the program.
Students who were significantly behind had been automatically placed in the program, since it was designed for those more than two years behind. However, once class rosters were created with just these students, administrators found that the class sizes were too small compared with classrooms not using the program, so they placed students with borderline scores in those classes as well. Unfortunately, trying to be fair to teachers turned out to be very unfair to students who were not significantly behind.
Lessons learned:
- Evaluate schoolwide, year-to-year data in relation to curricular reading materials. It’s not enough to simply buy and implement programs. Continually ask the question, “Are the students who are using these materials growing?”
- Different teachers might be using programs differently. Teachers must feel comfortable and supported in evaluating both the strengths and weaknesses of various programs and materials and be able to adjust accordingly. Forcing teachers to use a program with fidelity rests on the false premise that the program is free from flaws.
- A program can be beneficial for some students but detrimental for others. Programs aren’t strictly good or bad; they have to be carefully and strategically matched to individual student strengths and weaknesses.
# 2. Why you can’t blame the “owner” (a program isn’t perfect)
There are usually one or two people in a district who get “matched” to a program. Often, this is an individual or a small group in the curriculum department responsible for finding the program, negotiating prices and components, and organizing teacher training. Inadvertently, they become the face of the program in the district. This often doesn’t end well because, when the program is critically evaluated, people take it personally.
For example, a curriculum specialist had become the face of the reading program in question; as we explored some of the problems that the data had raised, the specialist took it personally. Most people at the table simply wanted to uncover discrepancies in student scores and in the teachers’ use of the program, but at least one person felt under attack and wanted to stand up for the program no matter what the data revealed.
Most of this could have been avoided, but it would have required a very different stance when adopting a program. All too often, districts buy and promote programs to teachers and parents with the idea that they’re providing the absolute best product on the market, no questions asked. Most of the time, teachers are expected to use the program; any discussion of program weaknesses is viewed as an attack. This sets up the district for failure and encourages teachers to react in one of two ways: to go underground, if they’re brave enough to recognize the program’s weaknesses and do something about them, or to continue using the program with students despite their concerns.
A far better approach is to adopt a program with the idea that it’s potentially the best choice on the market at the time but to clarify that the district wishes to evaluate its use while teachers implement it. This leaves the door open to discussion about the strengths and weaknesses encountered when actually using the program with groups of students in different grade levels, of varying ability levels, and in different teaching situations. This discussion must include the people actually using the program: the teachers. Without their voices, administrators won’t know about issues that have been solved or may need to be solved when considering the ongoing use of the program. If teachers find successful solutions to the issues, they should be encouraged to share them with the staff so everyone using the program benefits.
In the end, programs are simply materials provided to the educated and skilled teacher to use during instruction in the most effective ways possible. No employee in a district “owns” the program. No one’s face needs to be attached to it. The failure or success of a program should never be traced back to the person who signed the purchase agreement; it depends on too many factors outside this person’s control. And instead of looking for tried-and-true programs that work for all students (research has shown there is no such thing), we should be looking for high-quality programs that teachers can use successfully in a variety of situations across a district, with appropriate adjustments.
Lessons learned:
- Programs are objects, not people. No one should take a program’s successes or failures personally.
- Programs should be purchased and presented to district employees as tools to critique and adjust as needed, recognizing they may or may not contribute to success, depending on many variables.
- Teachers should be encouraged to have constructive dialogue about how programs are or aren’t working for students. This creates a collaborative environment that encourages and values critical thinking.
# 3. Why teachers need to understand why
A final experience involved a computerized program used across all grade levels in a local school district. It can best be described as a supplemental instructional program and assessment, with both a language arts and a math component. Students regularly worked within the program independently during the school day as well as during after-school hours. Computerized curriculum has saturated the instructional materials and programs market, as technology has become a heavy focus in schools everywhere. However, computer-based reading materials and programs must be evaluated and analyzed in the same ways as low-tech ones; educators must understand that these materials also have strengths and weaknesses — and that they come with no guarantees.
The reading portion of the program changed considerably for students, depending on the grade and the user’s skill level. In fact, one of the biggest selling points of modern-day computerized assessments and instructional programs is that they’re individualized based on the performance of the student as he or she moves through the program.
Unfortunately, the term individualized is just a way of saying that the program uses students’ most current performance level and places them into groups of lessons that are very much standardized. For example, if the program includes 40 lessons on phonemic awareness, it determines whether the student needs any of these lessons or has already mastered all of the subskills. If the student still needs work in phonemic awareness, then he or she completes more of the lessons in the lesson bank. The actual lessons themselves are not individualized; the individualization comes when the computer decides which of those 40 lessons the student might benefit from completing. This is nothing new; teachers have been personalizing teaching for centuries. The only difference is that instead of having a variety of lessons in hard-copy form on file in the teacher’s classroom, the bank of lessons is kept online and doled out as the student sits in front of a computer.
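To make that mechanism concrete, here is a minimal sketch, in Python, of how such a lesson bank might be filtered. The bank size, the mastery threshold, and the assign_lessons function are hypothetical illustrations of the general approach, not the vendor’s actual logic.

```python
# Hypothetical sketch of lesson-bank "individualization": the program
# does not author new lessons for each student; it filters one fixed,
# standardized bank using the student's most recent subskill scores.

# Assumed: a fixed bank of 40 phonemic awareness lessons.
LESSON_BANK = [f"Phonemic awareness lesson {i}" for i in range(1, 41)]

# Assumed cutoff: scores at or above this mark a subskill as mastered.
MASTERY_THRESHOLD = 0.80

def assign_lessons(subskill_scores):
    """Return the lessons whose subskill the student has not yet mastered.

    subskill_scores maps each lesson in the bank to the student's most
    recent score on that subskill (0.0 to 1.0). Every student draws from
    the same bank; only the assigned subset differs.
    """
    return [lesson for lesson, score in subskill_scores.items()
            if score < MASTERY_THRESHOLD]

# Example: a student who has mastered most subskills is assigned only
# the two lessons still below the threshold.
scores = {lesson: 0.90 for lesson in LESSON_BANK}
scores["Phonemic awareness lesson 7"] = 0.45
scores["Phonemic awareness lesson 22"] = 0.60
print(assign_lessons(scores))
# ['Phonemic awareness lesson 7', 'Phonemic awareness lesson 22']
```

In other words, the “individualized” step is simply a filter over standardized content — a distinction worth keeping in mind when weighing such a program’s claims.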
My research determined that teachers across grade levels were using the computer program in a variety of ways. Some used it as an independent station during the reading block once or twice a week, others assigned it as homework once a week, and still others used it during an extra class period several days each week. In conversations with teachers, I found that most were indifferent toward the program, although several disliked it. In conversations with students, I found that many young students enjoyed it and said it was much like playing a computer game, but older students were much more likely to dislike the program.
What surprised me most was teachers’ lack of understanding of the program’s purpose. None of the teachers in the district seemed to know what to expect from the program or why they were using it. Little thought went into using the program within a well-organized and purposeful literacy block. It was simply being used. But used to accomplish what?
One teacher thought it was probably being implemented so students could gain exposure to reading and testing online rather than using paper and pencil. Another teacher thought maybe it was just something “extra” to help students gain additional practice in a variety of reading and writing skills. And yet another teacher thought it was likely being used as a source of data, gathered objectively and reported to administrators, in order to evaluate teachers on the academic progress of their students.
Herein lies the problem: Neither teachers nor students were sure why they were using the program. In addition, most teachers didn’t know exactly what students were doing when they logged in. No teacher could describe to me what the computer asked students to do in the literacy portion. They simply knew that students were participating in “literacy activities.” In addition, teachers rarely used data from the program to make any instructional decisions, although sometimes students received “grades” for completing tasks based on the score the computer reported.
So my questions lingered: What was the purpose of this program? What was the intended outcome of students participating in it? How would teachers know if it was working? How should teachers be using the program to get the most out of it? High-quality teachers are purposeful in almost everything they do. Yet in this instance, use of this program was more or less purposeless.
Lessons learned:
- When a district buys a program or materials, administrators should discuss openly with teachers their expectations of what this program should accomplish — the program’s purpose.
- Teachers should be encouraged and supported in their efforts to use materials in ways that match the program’s purpose.
- Districts should openly critique both low- and high-tech programs, as both have their strengths and weaknesses.
# It’s about the people
Choosing high-quality instructional materials requires much research, time, and thoughtfulness. It requires not only a plan for how teachers will use the materials but also clarity about their purpose and expected outcomes. From the start of a material adoption, school leaders should encourage teachers to assess the program within their instructional settings and share this feedback with other school personnel to keep ongoing curricular decisions well informed. Schools should treat such ongoing reflection and evaluation as natural and essential parts of using purchased materials.
Indeed, a program doesn’t create success (Bond & Dykstra, 1967) — a program is simply a mass-produced set of “things” that act as tools within a teacher’s classroom. The people who use and know the program control its success, based on how they interact with it, what they expect, how they measure success, and how strategic they are in modifying it.
# References
Bond, G.L., & Dykstra, R. (1967). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 2(4), 5-141.
Education Market Research. (2010). Elementary reading market: Teaching methods, textbooks/materials used and needed, and market size. Rockaway Park, NY: Author.
Kersten, J., & Pardo, L. (2007). Finessing and hybridizing: Innovative literacy practices in Reading First classrooms. The Reading Teacher, 61(2), 146-154.
Citation: Noll, B. (2016). Buyer – be informed. Phi Delta Kappan, 98(4), 60-65.
# About the author

BRANDI NOLL is a visiting assistant professor in the Department of Curricular and Instructional Studies, University of Akron, Ohio.
