If weekly CBMs are too much, testing less frequently is also valid

Many teachers regard once- or twice-a-week progress monitoring of reading with curriculum-based measurement (CBM) as impractical.

A recent study in Exceptional Children finds that a far less frequent schedule may work just as well for estimating the learning slopes of special education students and struggling readers.

Assessments given every 3 weeks, and even as infrequently as every 9 weeks, produced slopes similar to students’ true slopes, the researchers write.

“Contemporary special education teachers generally consider daily and weekly measurement to be a ‘best practice’ beyond their reach,” the authors write.

“This is unfortunate on two levels. First, when teachers forgo progress monitoring, they omit an essential feature of an individualized (special) education. Second, teachers who are accomplished in using measurement to guide instruction obtain stronger reading growth.

“A straightforward way to ease the demands of progress monitoring is to measure less often,” the researchers write. If less frequent monitoring produced accurate and valid growth estimates, they add, teachers might use CBM more often.

Less frequent monitoring

To test different frequencies for monitoring reading progress with CBMs, researchers compared slopes for 41 special education students on 5 monitoring schedules:

  • 1 passage every week,
  • 2 passages every 2 weeks,
  • 3 passages every 3 weeks,
  • 4 passages every 4 weeks, and
  • first/last weeks only.


The students were in Grades 4-8 (only one 7th-grader and one 8th-grader) at 8 schools in and around Seattle. Researchers used standard Grade 1-6 CBM passages developed at Vanderbilt University to track progress over a 10-week period beginning in September.

To keep the number of scores consistent, each student was assigned a random sequence of 29 passages to read regardless of monitoring schedule. If a student read more than two passages at a testing session, only the first 2 scores were used. The standard, or “true,” slope was based on all 29 words-read-correctly (WRC) scores. Four passages, rather than the more typical 3, were used to develop a baseline score.
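
The study’s core comparison is between a slope fit to all of a student’s WRC scores and slopes fit to thinned subsets of those scores. The sketch below is not from the study; the scores and the choice of which weeks to keep are invented for illustration. It simply shows the arithmetic being compared: an ordinary least-squares slope over every score versus one over a thinned schedule.

```python
# Minimal sketch: compare a "true" slope fit to every WRC score with a slope
# fit to a thinned monitoring schedule. The data below are invented for
# illustration; the study used real students' scores over 10 weeks.

def ols_slope(weeks, scores):
    """Ordinary least-squares slope of WRC scores regressed on time (weeks)."""
    n = len(weeks)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Hypothetical weekly WRC scores for one student over 10 weeks
# (one score per week here; the study collected 29 scores per student).
weeks = list(range(1, 11))
scores = [62, 65, 63, 68, 70, 69, 73, 74, 77, 79]

true_slope = ols_slope(weeks, scores)  # uses every score

# Thinned schedule: keep only the scores from weeks 1, 4, 7, and 10,
# roughly analogous to measuring every 3 weeks.
kept = [0, 3, 6, 9]
thinned_slope = ols_slope([weeks[i] for i in kept], [scores[i] for i in kept])

print(f"true slope:    {true_slope:.2f} WRC per week")
print(f"thinned slope: {thinned_slope:.2f} WRC per week")
```

Whether thinned slopes stay this close to true slopes for real students is the empirical question the study set out to answer.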

None of the schedules produced slopes that differed very much from the true slope, the researchers report, but the every-3-weeks and first/last-weeks schedules came closest.

One reason the first/last-weeks schedule performed so well, the authors hypothesize, is that it concentrates scores in the first and last weeks. Taking multiple measures on the first and last measurement occasions is a recognized strategy for significantly increasing the reliability of growth measures, the study says.
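
To see why endpoint weighting helps, consider a small simulation. It is not part of the study; the starting score, growth rate, noise level, and passage counts are all assumed for illustration. Averaging several passages at the first and last occasions shrinks the measurement error in each endpoint, which in turn shrinks the error in the slope computed from them.

```python
# Illustrative simulation (assumed parameters, not study data): estimate how
# much averaging multiple passages at the first and last occasions reduces
# the variability of a two-point growth estimate.
import random

random.seed(0)

TRUE_START, TRUE_SLOPE = 60.0, 1.5   # hypothetical WRC intercept and weekly growth
NOISE_SD = 8.0                       # assumed passage-to-passage measurement noise
WEEKS_APART = 9                      # first and last occasions 9 weeks apart

def observed(week, n_passages):
    """Mean of n_passages noisy WRC scores taken in the given week."""
    return sum(
        TRUE_START + TRUE_SLOPE * week + random.gauss(0, NOISE_SD)
        for _ in range(n_passages)
    ) / n_passages

def simulated_slopes(n_passages, trials=5000):
    """Two-point slopes: (last occasion - first occasion) / weeks apart."""
    return [
        (observed(WEEKS_APART, n_passages) - observed(0, n_passages)) / WEEKS_APART
        for _ in range(trials)
    ]

for n in (1, 4):
    slopes = simulated_slopes(n)
    mean = sum(slopes) / len(slopes)
    sd = (sum((s - mean) ** 2 for s in slopes) / len(slopes)) ** 0.5
    print(f"{n} passage(s) per occasion: slope SD ~ {sd:.2f} WRC per week")
```

Under these assumed numbers, going from 1 to 4 passages per occasion roughly halves the spread of the slope estimate, which is the kind of reliability gain the authors describe.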

Number of passages affects accuracy

The number of passages read at each testing session, however, did affect accuracy, the researchers report.

“Together, these results suggest that teachers may be able to thin the monitoring schedule as long as they assess reading skill with multiple passages at baseline and other monitoring points.”

Based on their experience developing baselines, the researchers recommend giving students a “practice passage” that is not counted in the baseline calculation.

“Although this research focused on the reading growth of special education students, the results have implications for progress monitoring of struggling readers in general,” they write. The study suggests that teachers and response-to-intervention (RTI) managers could monitor students’ reading intermittently as long as they obtain multiple scores at each measurement point.

One note of caution, however, is that with less frequent monitoring, educators may be slower to detect inadequate growth.

“Estimating Reading Growth Using Intermittent CBM Progress Monitoring,” by Joseph Jenkins et al., Exceptional Children, Volume 75, Number 2, pp. 151-163.
