The National Assessment Governing Board recently stated that the “National Assessment of Educational Progress may not be taken seriously enough by students to enlist their best efforts,” and that, therefore, results “may understate achievement.” For this reason, Steve M. Brown, Northeastern Illinois University, and Herbert J. Walberg, University of Illinois/Chicago, studied the effect of changing motivational conditions on elementary students’ scores on a standardized mathematics test.
Brown and Walberg believe that questions concerning motivation can be raised about most standardized, commercial tests given to U.S. students. They suggest that many students are not motivated to do their best because the content of these tests is often unrelated to topics recently studied in school, because performance on such tests does not affect their grades or their college and job prospects, and because most students will never know how well or poorly they have done.
Brown and Walberg tested 409 students, mostly from Hispanic and African-American families, in three working-class Chicago-area schools. These researchers chose the Mathematics Concepts subtest of the Iowa Test of Basic Skills because it is both widely used and highly reliable. Pairs of heterogeneous classes at the third, fourth, sixth, seventh, and eighth grades were randomly selected. Each class was given the standard instructions for the subtest as described in the test manual. In addition, teachers of the experimental classes told their students: “It is really important that you do as WELL as you can on this test. The test score you receive will let others see just how well I am doing in teaching you math this year. Your scores will be compared to students in other grades here at this school, as well as to those in other schools in Chicago. That is why it is extremely important to do the VERY BEST that you can. Do it for YOURSELF, YOUR PARENTS and ME.”
Students’ scores on the Mathematics Concepts test showed that, on average, students in the experimental classrooms who were read the motivational script scored significantly higher than students in the control classrooms. The increase was moderately large: the average student’s score rose from the 50th to the 62nd percentile.
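The study reports the gain in percentile terms. As an illustrative sketch only (not part of the authors’ analysis), and assuming test scores are approximately normally distributed, a shift from the 50th to the 62nd percentile can be translated into a standardized effect size:

```python
# Illustrative only: convert the reported percentile gain (50th -> 62nd)
# into an approximate standardized effect size, assuming normally
# distributed test scores. This calculation is not from the study itself.
from statistics import NormalDist

control_percentile = 0.50       # average control student (50th percentile)
experimental_percentile = 0.62  # average experimental student (reported)

# Difference between the z-scores of the two percentiles under a
# standard normal distribution gives the gain in standard-deviation units.
effect_size = (NormalDist().inv_cdf(experimental_percentile)
               - NormalDist().inv_cdf(control_percentile))

print(f"Approximate effect size: {effect_size:.2f} standard deviations")
```

Under that normality assumption, the 12-percentile gain corresponds to roughly three-tenths of a standard deviation, which is consistent with the authors’ description of the increase as moderately large.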
The motivational effect was the same for boys and girls and across grade levels. However, the size of the effect varied significantly from school to school. These differences may have reflected teachers’ or students’ attitudes toward testing, cultural differences, or variations in testing conditions. Given the limits of the data from this study, the researchers could not ascertain the reason for these interschool differences.
An average gain of 12 percentile points, as seen in this study, would bring American students much closer in performance to students in other economically advantaged countries. However, it cannot be concluded from this study alone that motivation is the cause of this country’s poor standing in international comparisons, because poor motivation may be a factor in other countries as well.
Nevertheless, the results of this study indicate that even a very simple attempt to increase motivation can make a substantial difference in test scores. Brown and Walberg suggest that highly motivating pretest instructions might increase the scores of students across the country. However, they add that further research is needed to determine how such attempts to increase motivation affect different populations of students.
“Motivational Effects on Test Scores of Elementary Students,” Journal of Educational Research, Volume 86, Number 3, pp. 133–136.
Published in ERN May/June 1993, Volume 6, Number 3