Curriculum-based measurement helps evaluate writing progress before high-stakes tests

At least 41 states require students to take a writing test. In many of these states, students must pass the test to graduate from high school.

How do teachers know their students, especially their students with disabilities, are on track? How do they know their writing interventions are working when writing, as many administrators know, is notoriously difficult to “measure”?

A new study published in Exceptional Children found that a curriculum-based measurement (CBM) of writing predicted how high school students performed on the Minnesota Basic Standards Test/Minnesota Comprehensive Assessments (MBST/MCA) in written expression. The measure is useful for lower-performing high school students and for English Language Learners (ELLs) who are fairly proficient in English and higher performing, the authors write.

Previous studies of CBM writing measures have mostly been conducted with middle school students, and none have examined their use to predict performance on state standards tests for high school students or for ELLs, the researchers write.

“Our study is the first step in the development of a data-based decision-making system for teachers to use to monitor student progress in written expression,” the researchers say.

“Results reveal that high school students need to write samples that are 5 to 7 min in length and that are scored for CIWS (correct minus incorrect word sequences),” the authors write.

In this study of 183 10th-grade students at two large, urban high schools, students produced two writing samples in response to narrative prompts during their English classes. The narrative prompts were "It was a dark and stormy night…" and "I stepped into the time machine and…." Students were given 30 seconds to think and 10 minutes to write each story. At 3, 5, and 7 minutes, students were instructed to make a slash mark on their papers to indicate how far they had written at each time point.

Writing samples were scored four different ways:

  • words written (WW);
  • words written correctly (WWC);
  • correct word sequences (CWS); and
  • correct minus incorrect word sequences (CIWS).

Separate scores were calculated for the three-, five-, seven-, and 10-minute samples of writing performance. The correlation between the two writing sample scores determined alternate-form reliability for the CBM measure. Predictive validity was calculated using the mean score for the two writing samples.

WW is the total number of word units written in the sample, regardless of spelling or usage. WWC is the total number of correctly spelled words in the sample. CWS is the number of correct sequences between two adjacent, correctly spelled words, and CIWS is the number of correct word sequences minus the number of incorrect word sequences. Writing samples were scored by three graduate students who were trained in scoring in a 3-hour session. Previous studies have suggested that word measures were more effective for younger students and that word-sequence measures were needed for older students.
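The four scoring methods can be sketched in code. This is a simplified illustration, not the study's scoring protocol: real CBM scoring relies on trained raters judging spelling, grammar, and usage, while this toy version approximates "correctness" with a hypothetical word list and treats a word sequence as any adjacent pair of words.

```python
# Hypothetical sketch of the four CBM writing scores described above.
# Real scoring requires trained human judgment; "correctness" here is
# approximated with an illustrative word list (an assumption, not the
# study's method).

CORRECT_WORDS = {"it", "was", "a", "dark", "and", "stormy", "night"}

def score_sample(text):
    words = text.lower().split()
    ww = len(words)                                   # words written (WW)
    wwc = sum(w in CORRECT_WORDS for w in words)      # words written correctly (WWC)
    # Correct word sequences: adjacent pairs in which both words are
    # "correct". (Trained scorers also judge grammar and usage, and count
    # sentence-boundary sequences; both are omitted in this toy version.)
    pairs = list(zip(words, words[1:]))
    cws = sum(a in CORRECT_WORDS and b in CORRECT_WORDS for a, b in pairs)
    iws = len(pairs) - cws                            # incorrect word sequences
    ciws = cws - iws                                  # correct minus incorrect (CIWS)
    return {"WW": ww, "WWC": wwc, "CWS": cws, "CIWS": ciws}
```

Note how CIWS penalizes errors rather than merely ignoring them, which is one plausible reason it correlated more strongly with the state test than the simple word counts.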

“Viewed within the context of the technical adequacy of writing measures in general, the criterion-related validity coefficients we obtained in our study of .56 to .60 for CIWS are quite respectable,” the authors write. “Correlations for WW and WWC ranged from .23 to .31, for CWS from .43 to .48, and for CIWS from .56 to .60.”

One simple method for making the data from this measure user-friendly is to report a cut-off score: the score that best predicts passing the state standards test. An alternative is to create a Table of Probable Success that shows the probability of passing a state standards test along the entire continuum of CBM scores.

“Thus, teachers can view the probability of passing the state standards test associated with a CBM score of 5, 20, or 30,” the researchers say.

Data from a large, representative sample of students from a school, district, or state are needed to create a good table, the researchers say. The accuracy of such a table depends on the strength of the relationship between the CBM predictor and the state test.

One limitation of the study, the authors write, is that ELL students in their sample had high levels of English language proficiency and were functioning in mainstream classrooms. Their results do not address the validity and reliability of writing measures for ELL students who are at lower levels of language proficiency.

Curriculum-Based Measurement in Writing: Predicting the Success of High-School Students on State Standards Tests, Exceptional Children, Vol. 74, No. 2, Winter 2008, pp. 174-193.

Published in ERN February 2008 Volume 21 Number 2
