
For the past five years, I have worked as one of three reading specialists at the Mid Valley Elementary Center, a K-6 Title I school of nearly 1,100 students in northeastern Pennsylvania. As a reading specialist and coordinator of our school’s multi-tiered system of supports (MTSS), I have focused on aligning programming and instruction with the science of reading. I have done this by adopting high-quality instructional materials and developing systemic supports for students. Our literacy program uses 95% Group Core Phonics (K-3); Wit & Wisdom (K-6); Heggerty Phonemic Awareness (K-1); 95% Group Multi-Syllable Routine Cards (3-4); and 95% Group Phonics Lesson Library (K-6). Selecting cohesive materials and implementing them properly have contributed to improved student literacy outcomes, but these successes also rely on a clear understanding of student performance and progress.

When implementing literacy interventions through MTSS, using assessments intentionally and strategically is essential. These are the assessments we have found most useful as we plan interventions:

  1. An efficient screening tool to identify students at risk of difficulties with early literacy skills (Kettler et al., 2014).
  2. A diagnostic assessment specific to the skills that the intervention will address.
  3. Progress-monitoring assessments to understand the effectiveness of instruction and drive changes to interventions (Tuttle, 2009).

Alongside intervention work, formative and summative assessments of grade-level content in the general education classroom are necessary to monitor students’ development toward grade-level standards.

Avoiding data misuse

When gathering data for interventions, it is imperative to do so with a clear intent to use it. Idle data wastes precious time and resources. So why does so much data sit unused? For one thing, educators might not receive the appropriate professional development to help them analyze the data. They also might not have protected time to analyze data together and consider trends and patterns that indicate the need for instructional changes. In addition, they might not have the instructional time to apply teaching strategies to address trends in the data. Finally, they might not trust the data if previous data efforts didn’t yield results.

Sometimes, data gets used but in a way that isn’t helpful, which also wastes precious time and energy. To use data well, teachers need data literacy, which Ellen Mandinach and Edith Gummer (2013) define as:

the ability to understand and use data effectively to inform decisions. It is composed of a specific skill set and knowledge base that enables educators to transform data into information and ultimately into actionable knowledge. (p. 30)

For the same reasons that educators fail to use data, they might not be equipped to analyze it effectively. For example, when educators gather data through a universal screener and then use only that information to group students, they might be grouping students inaccurately. Screening tools do indicate where students stand relative to grade-level expectations for specific skills, but they do not provide a detailed breakdown of students’ competencies with specific skills that teachers can address. The results might not reveal skill gaps that will prevent students from reaching their full potential.

Filling data gaps

At the Mid Valley Elementary Center, we gather Acadience benchmark data to screen students in grades K-4. Acadience benchmarks are “empirically derived, criterion-referenced target scores that represent adequate reading skill for a particular grade and time of year” (Acadience Learning Inc., 2011). Acadience benchmarks provide reading specialists with an initial overview of which students may be at risk for reading difficulties.

Screeners are helpful tools to understand the big picture of student performance. However, if we were to use only that data to create intervention groups, we would only have data for the focus areas in each benchmark category: phonological awareness and letter-sound correspondence in kindergarten and first grade; phonemic awareness and the elements of oral reading fluency (i.e., accuracy and fluency) in first and second grades; and the elements of oral reading fluency and comprehension in third and fourth grades.

The data from the screener is intended simply to give teachers a snapshot of students who might need additional intervention. These benchmark categories are too broad to accurately reflect what students know and need to learn next (Troester, Raines, & Marencin, 2022). When educators gather screening data and group students based on the information from the screener, they limit the potential of their instruction right from the start. 

In addressing the issues of unused and misused data, diagnostic data is a key link to effective instruction and student progress. Once screening data indicates that a student is at risk for challenges with literacy skills, a diagnostic assessment should be the next step to identify the specific skills to target during instruction. Diagnostic assessments offer a more specific and detailed breakdown of individual skills than broader screening categories encompass. For instance, if a student’s benchmark score shows a need for support with decoding, a phonics diagnostic assessment would enable educators to identify the specific skills that would be most appropriate for instruction based on individual student needs. 

While some might argue that formative assessments can serve a dual purpose — both to monitor student progress and to diagnose student needs — the two have different goals and take place at different points. Diagnostic assessments provide information prior to instruction, and formative assessments provide information during the unit. Diagnostic assessments empower teachers and reading specialists to be proactive in aligning instructional content and practices with specific student needs, while formative assessments enable teachers to make changes during a unit based on student response.

Diagnostic assessments can serve as a bridge between a teacher’s perception of their student and that student’s specific instructional needs. When teachers receive and understand diagnostic assessment data, they can more effectively individualize and differentiate instruction, which builds their trust in the intervention system and the effectiveness of their instructional materials.

Our process

At Mid Valley, our data collection and analysis process follows four main steps that incorporate screening data, diagnostic data, and progress-monitoring data. Since the process of learning early-literacy skills is formative until students can read at grade level, our data process does not include a summative assessment. Figure 1 shows the overall steps in our assessment cycle. 

[Figure 1: The assessment cycle for early-literacy skills. A circular diagram with six steps: (1) administer universal screener; (2) organize groups based on universal screening data; (3) administer diagnostic assessment; (4) finalize groups based on diagnostic data; (5) administer progress-monitoring assessment(s); (6) communicate student progress during data meetings.]
Step 1. Screen.

At the beginning, middle, and end of the school year, three reading specialists complete Acadience benchmark assessments for all students in grades K-4. Once we complete these assessments, we examine the data to identify students at risk of reading difficulties and place students in preliminary groups based on the information from each of the measures. We then drill down into the data to identify students in need of reading support.

To start, we examine the most advanced skill on the assessment for each grade level. For instance, in second grade, we start by listing the students in descending order based on their oral reading fluency scores. Students who score at or above benchmark on this measure are grouped into a benchmark group that will work on grade-level and enrichment content during the intervention period. However, if students are at or above benchmark on the most advanced skill but did not reach benchmark on earlier skills, we flag them to examine more closely through a diagnostic assessment. We then continue this process for other skills.
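As a rough illustration of this sort-and-flag logic, the sketch below uses invented student scores and invented benchmark cut points (not actual Acadience values) to show how students land in a benchmark group, a flagged-for-diagnostic group, or a needs-support group:

```python
# Hypothetical sketch of the sort-and-flag step. The scores and the
# benchmark cut points below are invented for illustration; they are
# not actual Acadience benchmark values.
ORF_BENCHMARK = 52            # assumed oral reading fluency cut point
EARLIER_SKILL_BENCHMARK = 25  # assumed cut point for an earlier skill

students = [
    {"name": "A", "orf": 70, "earlier": 40},
    {"name": "B", "orf": 60, "earlier": 18},  # strong ORF, gap in an earlier skill
    {"name": "C", "orf": 35, "earlier": 30},
]

# List students in descending order on the most advanced skill (ORF).
students.sort(key=lambda s: s["orf"], reverse=True)

benchmark_group, flagged, needs_support = [], [], []
for s in students:
    if s["orf"] >= ORF_BENCHMARK:
        if s["earlier"] >= EARLIER_SKILL_BENCHMARK:
            benchmark_group.append(s["name"])  # grade-level/enrichment group
        else:
            flagged.append(s["name"])          # examine with a diagnostic
    else:
        needs_support.append(s["name"])        # candidate for intervention

print(benchmark_group, flagged, needs_support)  # ['A'] ['B'] ['C']
```

The key point the sketch captures is that a strong score on the most advanced skill alone does not place a student in the benchmark group; earlier-skill gaps route the student to a diagnostic instead.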

Step 2. Diagnose.

After organizing the students according to their scores on all measures, we shift our focus from screening data to the diagnostic process. We analyze screening data, then complete the 95% Group Phonics Screener for Intervention (PSI) diagnostic assessment with students who will receive instruction in intervention groups.


The PSI is an untimed diagnostic that assesses students’ knowledge of and ability to apply phonics skills — from the earliest foundations of letter-sound correspondence to more advanced multisyllabic phonics skills. We complete the PSI with each student who scored below or well below benchmark on the screening assessment and analyze the diagnostic results to create intervention groups based on specific skill needs. Once we’ve organized the students based on their performance on the diagnostic assessment, we create specific groups based on the number of teachers available for the intervention period, knowledge of student dynamics, and additional information that would be relevant to the intervention process.
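One way to picture how diagnostic results translate into skill-based groups is to place each student at the earliest skill in the phonics sequence they have not yet mastered. The sketch below uses made-up skill names and mastery data; the PSI’s actual skill sequence and scoring differ:

```python
# Hypothetical sketch of forming intervention groups from diagnostic
# results. Skill names and mastery data are invented for illustration;
# the PSI's actual skill sequence and scoring differ.
SKILL_SEQUENCE = ["letter-sounds", "CVC words", "digraphs", "blends", "vowel teams"]

# Each student maps to the set of skills the diagnostic showed as mastered.
diagnostic_results = {
    "A": {"letter-sounds", "CVC words"},
    "B": {"letter-sounds"},
    "C": {"letter-sounds", "CVC words", "digraphs"},
}

groups = {}
for student, mastered in diagnostic_results.items():
    # Target the first skill in the sequence the student has not mastered.
    target = next(s for s in SKILL_SEQUENCE if s not in mastered)
    groups.setdefault(target, []).append(student)

print(groups)  # {'digraphs': ['A'], 'CVC words': ['B'], 'blends': ['C']}
```

In practice, these raw skill groups are then adjusted for the number of available teachers, student dynamics, and other contextual information, as described above.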

Our schoolwide approach allows us to have approximately 10 teachers supporting students during the intervention period in each grade, K-3. Six general education teachers in each grade level work with students who are at or above benchmark, while three reading specialists and one or two special education teachers work with students in need of support during the same period of the school day. A team approach allows us to provide more students with appropriate instruction for their skill needs as identified by the diagnostic assessment.

Step 3. Monitor.

Once we have provided group assignments to teachers and students, the intervention period begins. Students work on the phonics skill appropriate for their present levels as identified by the diagnostic tool.

At Mid Valley, we use the 95% Group Phonics Lesson Library, which aligns with the 95% Group Core Phonics program that our general education teachers use in the classroom. As students work through the phonics lessons and skills, we complete regular Acadience progress monitoring to analyze student growth and instructional efficacy.

Specialists complete progress monitoring every other week with students who receive Tier 2 support and weekly with students who receive Tier 3 support. The data we collect provides teachers with information about whether to adjust instructional approaches, such as pacing, the amount of direct and explicit instruction, and the nature of independent practice. After six weeks of instruction, any students who are not meeting their aim lines for expected growth begin receiving Tier 3 instruction. After mid-year benchmarks, the groups are reorganized following the same assessment process. If students have met their goals in Tier 2 instruction, they return to core instruction groups. If students from the core group show emerging needs, they enter Tier 2. The ongoing cycle of progress monitoring allows specialists to address the needs of students as they develop throughout the school year.
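The aim-line comparison can be sketched as a straight line from a baseline score to a goal score, with each monitoring point checked against the expected score for that week. The numbers below are invented, and real progress-monitoring tools compute aim lines for you; this is only a simplified illustration:

```python
# Simplified aim-line check with invented numbers. A real
# progress-monitoring tool computes the aim line for you.
def aim_line(baseline, goal, total_weeks, week):
    """Expected score at a given week on a straight line from baseline to goal."""
    return baseline + (goal - baseline) * week / total_weeks

def meeting_aim_line(scores_by_week, baseline, goal, total_weeks):
    """True if the most recent monitoring points are at or above the aim line."""
    recent = scores_by_week[-3:]  # judge on the last few data points, not one
    return all(score >= aim_line(baseline, goal, total_weeks, week)
               for week, score in recent)

# Student monitored every other week over an 18-week period: (week, score) pairs.
scores = [(2, 28), (4, 30), (6, 31)]
print(meeting_aim_line(scores, baseline=26, goal=52, total_weeks=18))  # False
```

A student whose recent scores fall below the aim line, as in this example, would be the kind of case flagged for Tier 3 instruction after six weeks.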

Step 4. Communicate.

Finding success with the assessment cycle of screening, diagnosing, and monitoring requires an investment from all parties involved with each student. To keep general education teachers aware of student progress outside the general education classroom, the reading specialists host data meetings every six weeks. Data meetings enable us to communicate with teachers about student progress and share in the analysis of student data. Further, they give teachers the chance to practice analyzing data in a supported and collaborative environment, which is helpful in improving the accuracy of data analysis as well as teacher confidence in understanding data (Means et al., 2011).


When teachers and specialists can use data to discuss students’ individual and specific phonics needs, it builds trust in the intervention system. Teachers can see students are studying the most appropriate skills for their individual levels. Before we implemented diagnostic assessments, teachers told us they knew which students were struggling with literacy skills and that screening assessments just confirmed what they already knew. The diagnostic assessment empowers specialists and teachers to move beyond discussions about which students are struggling and into discussions about which skills each student is struggling with.

Outcomes

The introduction of a diagnostic assessment into our literacy intervention routine has improved the effectiveness of our programming, built trust between teachers and specialists, and contributed to student improvements on screening measures. After five years of implementation of new core programs, MTSS, and the above assessment cycle, we have observed improvements across grade levels. Before the implementation, Acadience benchmark scores indicated that only 40% to 55% of students were meeting benchmark targets consistently across grade levels and benchmark measures. At this point, benchmark data indicates progress beyond that target. Table 1 shows screening results from the beginning of the 2024-25 school year, and Table 2 shows screening results from the beginning of the 2021-22 school year, prior to implementation of the assessment cycle.

Table 1. Students scoring at or above benchmark on literacy screening in beginning of 2024-25 school year
Table 2. Students scoring at or above benchmark on literacy screening in beginning of 2021-22 school year

Since implementing our current assessment cycle and practices, more students entering first, second, and third grade are performing at or above benchmark on early literacy skills. These data indicate that overall programming changes are working. While we continue to support more students to reach benchmark levels, we are addressing the literacy needs of more students than with prior systems. 

Challenges and potential applications

The introduction of the assessment cycle has occurred within a broader shift in curriculum and instruction, so separating the direct impact of improved assessments from those other elements is difficult. However, we believe the ability to be confident that we are providing students with instruction tailored to their individual needs based on diagnostic information has contributed to the positive outcomes.

Phonics is only one area of literacy development, and literacy is just one of many disciplines studied in school. The availability of a range of assessments (screeners, diagnostics, and formative assessments) is critical to the success of work in teaching phonics at the elementary level. Other grades and content areas lack the kinds of resources available for literacy education. However, understanding students’ present levels is critical for effective instruction in all disciplines.


Diagnostics for students in older grades and across content areas often align with standards, which can be helpful if teachers understand how to use the information provided in these diagnostic assessments to inform instruction. A diagnostic assessment may show that a student needs support with a specific standard without offering information about the skills that the student needs to learn.

For example, the Pennsylvania standards for language arts in fourth grade include the standard, “Determine a theme of a text from details in the text; summarize the text” (Pennsylvania Department of Education, 2014, p. 13). A diagnostic assessment could indicate that a student needs to work on that standard, but it does not offer information about whether the student needs to read more closely, better understand what a theme is, or develop the skills to summarize a text. Without a more specific diagnostic breakdown or analysis of student work, the process of adjusting instruction is an educated guess.

Detailed, clear, and targeted diagnostic assessments can empower educators to improve their practice and tailor it to individual student needs. Increasing the number of available diagnostic assessments that are specific to content areas and grade levels could allow teachers to use assessments to guide practice with more confidence. Models of this process exist in spaces such as early literacy, specifically phonics, so with the appropriate measures, the process of screening, diagnosing, monitoring, and communicating could expand beyond early literacy.

References

Acadience Learning Inc. (2011). Benchmarks and composite score.

Kettler, R.J., Glover, T.A., Albers, C.A., & Feeney-Kettler, K.A. (2014). An introduction to universal screening in educational settings. In R.J. Kettler, T.A. Glover, C.A. Albers, & K.A. Feeney-Kettler (Eds.), Universal screening in educational settings: Evidence-based decision making for schools (pp. 3-16). American Psychological Association.

Mandinach, E.B. & Gummer, E.S. (2013). A systemic view of implementing data literacy in educator preparation. Educational Researcher, 42(1), 30-37.

Means, B., Chen, E., DeBarger, A., & Padilla, C. (2011). Teachers’ ability to use data to inform instruction: Challenges and supports. U.S. Department of Education, Office of Planning, Evaluation and Policy Development.

Pennsylvania Department of Education. (2014). Academic standards for English language arts.

Troester, K., Raines, R., & Marencin, N. (2022, Winter). Universal screening within an RTI framework: Recommendations for classroom application. Perspectives on Language and Literacy, 21-25.

Tuttle, H. (2009). Formative assessment: Responding to your students. Eye on Education.


This article appears in the Summer 2025 issue of Kappan, Vol. 106, No. 7-8, pp. 34-38.

ABOUT THE AUTHOR

Brooke Wilkins

Brooke Wilkins is a reading specialist at the Mid Valley Elementary Center, in Throop, Pennsylvania.
