Although students spend the bulk of their time reading silently, classroom teachers still mostly assess fluency through oral reading because it is more easily observable.
A recent study in Psychology in the Schools describes a silent reading fluency assessment called “underlining,” administered on a tablet PC. Students read a passage and underline each word with a stylus. Fluency is measured by the speed at which students underline words. The educator also receives detailed information about pauses and regressions, because students are instructed to stop underlining when they pause in their reading and to re-underline any words they return to when they regress to earlier text.
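The rate calculation implied by this design is simple. The sketch below is a hypothetical illustration, not the study’s actual scoring procedure: the event format, field names, and the choice to count each re-underlined word as a regression are all assumptions.

```python
# Hypothetical sketch of scoring underlining data.
# Event format and regression-counting rule are assumptions, not from the study.

def reading_rate(events):
    """events: list of (word_index, timestamp_seconds) in the order underlined.

    Returns (words_per_minute, regression_count)."""
    if len(events) < 2:
        return 0.0, 0
    elapsed_min = (events[-1][1] - events[0][1]) / 60.0
    # Count a regression whenever a word at or before the furthest point
    # reached so far is underlined again.
    furthest = -1
    regressions = 0
    for idx, _t in events:
        if idx <= furthest:
            regressions += 1
        else:
            furthest = idx
    unique_words = len({i for i, _ in events})
    return unique_words / elapsed_min, regressions

# Example: four distinct words underlined over 3 seconds; the student
# regresses and re-underlines words 1 and 2 before resuming.
demo = [(0, 0.0), (1, 0.6), (2, 1.2), (1, 1.8), (2, 2.4), (3, 3.0)]
wpm, regs = reading_rate(demo)
```

Here the elapsed time is 3 seconds (0.05 minutes) over four distinct words, giving a rate of 80 words per minute with two re-underlined words counted as regressions.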
“A need exists for a sound silent reading fluency measure that is appropriate for school settings,” the researchers write.
“Although the similarities between oral and silent reading cannot be discounted, several researchers have suggested that overarching dependence on oral reading provides an incomplete picture of both reading and reading development.”
This small study of 59 students in 4th and 6th grades found the underlining method to be a valid and reliable measure of silent reading: its results corroborated those from other established reading measures, including other measures of silent reading.
Children in the study took a battery of assessments covering oral reading fluency, silent reading fluency, reading comprehension, and vocabulary. Nine reading passages for each grade level were carefully chosen from the DIBELS collection for the oral and silent reading fluency assessments. During the silent reading portion of the battery, each student read two passages for each of the silent reading tests.
Existing measures of silent reading
Current methods of assessing silent reading include eye-tracking methodologies, “moving windows” and paper-and-pencil inventories. The existing methods have several disadvantages, write the researchers.
Eye-tracking methodologies are expensive and allow only one student to be tested at a time, with close monitoring by the administrator. Eye-tracking assessments yield high-precision reading data, but children must remain relatively motionless, a difficult task for many students.
In “moving windows,” children push buttons to expose additional text once they finish reading words, sentences, paragraphs or sections of text. This assessment provides more limited reading information (e.g., no information regarding regressions and pauses) and is an awkward, unnatural process. In paper-and-pencil inventories, students read silently until an administrator tells them to circle the last word they’ve read. A major disadvantage of this method is that it is a form of self-reporting. As well as raising issues about accuracy and reliability, it provides little information about reading behaviors such as pauses and regressions.
The underlining measure avoids many of these pitfalls and could easily lend itself to testing groups of students in the classroom to monitor progress in an RtI model. If a classroom has several tablet PCs, data could be collected simultaneously from multiple students via a wireless router throughout the day with minimal disruption to instruction, the authors write.
The measure was significantly correlated with the standardized, norm-referenced reading comprehension and vocabulary tasks. It demonstrated a strong association with the maze task, a commonly used screener for reading comprehension within RtI models.
Not only was the study small, but it was also composed of relatively skilled readers, so the underlining method requires more evidence, according to the researchers. They hope the positive results from this study will prompt deeper investigation of the issues surrounding silent reading, specifically the need for stronger curriculum-based measures that can be used with ease in the classroom.
“Silent Reading Fluency Using Underlining: Evidence for an Alternative Method of Assessment,” by Katherine W. Price, Elizabeth B. Meisinger, Max Louwerse, and Sidney K. D’Mello, Psychology in the Schools, 2012, Volume 49(6), pp. 606–618.