There is no consensus on the best method for identifying students with learning disabilities, report Briley Proctor and Frances Prevatt of Florida State University. Proctor and Prevatt compared four models used to diagnose learning disabilities in a sample of 170 clinic-referred students. Because being identified as having a learning disability has profound consequences in most educational settings, it is important to understand the effects of choosing one model over another. Very few studies have compared the populations identified under competing eligibility models; the present study measured the level of agreement among four of them.
To compare the models, the researchers considered several issues, including what is meant by achievement and intellectual ability and how each should be measured. They also evaluated what constitutes a significant discrepancy or a severe disability. With this population of students, they found little agreement among the models about which students were identified as learning disabled.
The first model used a simple discrepancy: the full-scale IQ score on the Wechsler Adult Intelligence Scale was compared with achievement scores in reading, math, written language, and oral language from the Woodcock-Johnson test battery. A difference of one standard deviation (15 points) in any of the four achievement areas was considered a severe discrepancy.
The second model tested intra-individual differences – the discrepancies among an individual’s cognitive and achievement cluster scores on the Woodcock-Johnson battery. When any of the seven cluster scores was significantly lower (by 1.3 standard deviations) than the average of the other scores, an intra-individual weakness was indicated.
The third model analyzed intellectual ability–achievement discrepancies. A student’s general intellectual ability score on the cognitive portion of the Woodcock-Johnson was compared with each of his or her four achievement-area scores. A severe discrepancy was indicated when the difference was 1.3 standard deviations or more.
The fourth model measured underachievement in the four subject areas of the Woodcock-Johnson battery. A score in any one area falling at or below the 16th percentile was coded as a significant weakness.
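Each of the four criteria described above reduces to a simple threshold comparison. As a rough sketch in Python (assuming standard scores with a mean of 100 and a standard deviation of 15; the function names, data layouts, and score values below are hypothetical illustrations, not taken from the study):

```python
# Illustrative decision rules for the four models summarized above.
# Assumption: ability and achievement scores are standard scores (SD = 15);
# percentiles are on a 0-100 scale.

SD = 15

def simple_discrepancy(fsiq, achievement):
    """Model 1: full-scale IQ exceeds any achievement area by >= 1 SD (15 pts)."""
    return any(fsiq - score >= 1.0 * SD for score in achievement.values())

def intra_individual(clusters):
    """Model 2: any cluster falls >= 1.3 SD below the mean of the other clusters."""
    for name, score in clusters.items():
        others = [s for n, s in clusters.items() if n != name]
        if sum(others) / len(others) - score >= 1.3 * SD:
            return True
    return False

def ability_achievement(gia, achievement):
    """Model 3: general intellectual ability exceeds any area by >= 1.3 SD."""
    return any(gia - score >= 1.3 * SD for score in achievement.values())

def low_achievement(percentiles):
    """Model 4: any achievement area at or below the 16th percentile."""
    return any(p <= 16 for p in percentiles.values())
```

For example, a student with a full-scale IQ of 100 and a reading standard score of 82 would be flagged by the simple discrepancy rule (a gap of 18 points) but not by the 1.3-SD ability-achievement rule (which requires a gap of 19.5 points) – one concrete way the same scores can produce different diagnoses under different models.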
Diagnostic models identify different students
Results revealed that the simple discrepancy model identified significantly more students as having learning disabilities than any of the other three models, while the intellectual ability-achievement model identified the fewest. There was little agreement between the results of the different models: although three of the models identified similar numbers of students as learning disabled, they identified different students. Proctor and Prevatt conclude that these diagnostic models are not interchangeable; in fact, there was very little agreement among the four in terms of diagnosis. These results suggest that switching diagnostic models could lead to very different populations of students being identified as having learning disabilities.
These findings seem to support the assertion that the diagnosis of learning disabilities is often arbitrary. Clearly, the choice of model has a large impact on who is diagnosed and, therefore, who is eligible to receive special education services. Proctor and Prevatt do not advocate one model over another. The choice of model depends on one’s theoretical beliefs about learning disabilities, but the implications of different choices should be understood by those making the decision. Educators should study both the theoretical differences and real-life consequences of each model before making a decision. Diagnosis without such consideration implies that the various models are interchangeable or at least that there is a reasonable amount of agreement between them. The results of this study show that this is not the case.
“Agreement Among Four Models Used for Diagnosing Learning Disabilities,” Journal of Learning Disabilities, Volume 36, Number 5, October 2003, pp. 459–466.
Published in ERN December/January 2004 Volume 17 Number 1