Among others, Ed Week reported on findings from a recent study showing the beneficial impact of having adults provide reading tutoring to young children. Under the headline “Volunteer Tutors Found to Help Poor Readers,” Catherine Gewertz wrote, “A program that uses older volunteers as tutors has significantly improved the reading skills of students in the early grades, according to a study released today [10 April 2009].”
The study is an evaluation conducted by Nancy Morrow-Howell and colleagues at Washington University in St. Louis (MO, US) in collaboration with Mathematica Policy Research. In brief, the study compared the reading outcomes of 825 1st, 2nd, and 3rd graders on a suite of school measures (including decoding, comprehension, vocabulary, and teacher assessments), along with other measures such as teachers’ endorsement of the program. About half of the students received tutoring roughly once per week for a year. The data revealed that the tutored students made statistically greater gains on some measures than the students in the control group.
The study deserves both accolades and scrutiny. Although it has multiple strengths (e.g., students were assigned randomly; pre-tests showed equivalent levels of competence; numbers were fairly large), there are problems, too. Not the least of these is that the report depends upon gain scores. Because the design fits classical experimental procedures, wouldn’t it be appropriate simply to examine the outcomes after the one year of tutoring? Also, what was the intervention?
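The gain-score worry can be made concrete with a toy simulation (hypothetical numbers, not the study’s data): under random assignment with equivalent pretests, a posttest-only comparison and a gain-score comparison estimate the same treatment effect, so reporting gains rather than posttest differences is a choice about precision, not about what is being estimated.

```python
import random

random.seed(42)

def run(n_per_group=50_000, effect=2.0):
    """Simulate a randomized tutoring study with a known treatment effect.

    Pretests are drawn from the same distribution for both groups
    (random assignment); posttests add normal growth plus, for the
    treated group, the true effect. All numbers are illustrative.
    """
    pre_t = [random.gauss(100, 15) for _ in range(n_per_group)]
    pre_c = [random.gauss(100, 15) for _ in range(n_per_group)]
    post_t = [p + random.gauss(5, 5) + effect for p in pre_t]
    post_c = [p + random.gauss(5, 5) for p in pre_c]

    mean = lambda xs: sum(xs) / len(xs)
    # Posttest-only comparison (classical experiment analysis).
    posttest_diff = mean(post_t) - mean(post_c)
    # Gain-score comparison (what the report relied on).
    gain_diff = (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))
    return posttest_diff, gain_diff

posttest_diff, gain_diff = run()
print(round(posttest_diff, 2), round(gain_diff, 2))  # both near 2.0
```

Both estimates converge on the true effect; gain scores merely change the standard error (they help when pre-post correlation is high), so a posttest-only analysis would have been a legitimate, and simpler, way to report the result.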
| Measure | Effect size (d) |
|---|---|
| WJ Word Attack | 0.079 |
| WJ Passage Comprehension | 0.090 |
| Grade-Specific Skills | 0.136 |
One way to think about benefits for students is to examine effect sizes. Notably, in the document by Morrow-Howell and colleagues, the reported effect sizes are based on the gain scores and they were actually pretty small (0.13 to 0.17); to get these effect sizes, they—understandably—used only students who received at least 35 sessions of tutoring. Using the data in Table 3 (p. 12) and comparing the means for the experimental and control groups at posttest (a la a classical experiment), I get the even-smaller effect sizes shown in the table above. (For the technically inclined, I calculated simple d using the control SD; the authors used Hedges’ g.)
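The arithmetic behind these two effect-size conventions is simple enough to sketch. Below is a minimal Python version of both the simple d used here (mean difference scaled by the control group’s SD, sometimes called Glass’s delta) and Hedges’ g (pooled SD with a small-sample correction); the means, SDs, and ns are illustrative placeholders, not the study’s Table 3 values.

```python
import math

def glass_delta(mean_exp, mean_ctrl, sd_ctrl):
    """Simple d: mean difference scaled by the control group's SD."""
    return (mean_exp - mean_ctrl) / sd_ctrl

def hedges_g(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Hedges' g: pooled-SD effect size with a small-sample correction."""
    pooled = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2)
                       / (n_exp + n_ctrl - 2))
    g = (mean_exp - mean_ctrl) / pooled
    correction = 1 - 3 / (4 * (n_exp + n_ctrl) - 9)
    return g * correction

# Illustrative numbers only -- not the study's actual values.
print(round(glass_delta(105.0, 103.8, 15.0), 3))
print(round(hedges_g(105.0, 103.8, 14.0, 15.0, 400, 400), 3))
```

With groups of similar size and similar SDs, the two statistics land close together, which is why the choice between them rarely changes the substantive story: effects of this magnitude are small either way.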
Tutoring has well-documented benefits, but small-group instruction is equally effective and clearly more efficient (Elbaum et al., 2000). So, tutoring might not be a bad thing, but could the Experience Corps get more bang for their proverbial $$ if they had tutors take groups of, say, three? What is more, we don’t really know what happened to the students in the control condition. Did they get any supplemental help? If not, how did the study control for the possibility that Reading Instruction Plus Something More is simply better than Reading Instruction? Mayhaps answers to these questions are provided in a more detailed report than the one I used.
Of course, volunteers would need coaching and they’d have to learn to execute pretty specific lessons, which raises the related question: What did the tutors do with the students? I quickly scoured the Experience Corps Web site looking for a curriculum or set of guiding practices, but I came up empty-handed. I’ll need help with this, and perhaps a kind reader can provide it in the comments.
Now, if the tutoring program was an adaptation of a model, such as the one tested by Wallach and Wallach years ago, that would be a good thing. Something predicated on 100 Easy Lessons might be even better.
| | Tutored | Grouped |
|---|---|---|
| Method A | A Tutored | A Grouped |
| Method B | B Tutored | B Grouped |
| Control C | C Tutored | C Grouped |
If it wasn’t, though, then we need a new study comparing the tutoring methods employed in the sites in this study (call them Method A) to tutoring based on some known-to-be-powerful method (call it Method B) and to simple extra time on reading (which may be what the students in the current study got!). Ideally, these conditions should be crossed with small-group supplements, something like the table above.
To be sure, it’s always easier to critique studies than it is to run them. I’m just fearful that the press coverage of this one is going to make more of it than it merits. It’s not a bad study, but those effect sizes are dwarfed by the effects of powerful instructional procedures. And, yes, I know I’m ignoring the social validity of teacher satisfaction (would anyone actually expect teachers to disparage getting extra help for the children in their charge?), but it’s students’ outcomes that matter.
“Students in urban schools get big boost from pioneering tutor program:
Comprehension and other critical skills improve dramatically with one-on-one help from Experience Corps’ volunteers, a new study shows”—Christian Science Monitor.
“Study finds students with Experience Corps tutors make 60% more progress in critical reading skills than students without tutors”—Washington University News and Information Office; see also “Students With Experience Corps Tutors Make 60% More Progress In Critical Reading Skills Than Students Without Tutors”—Medical News Today; NewsGuide.us; BioMedicine; and others.
Elbaum, B., Vaughn, S., Hughes, M. T., & Moody, S. W. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605-619.
Wallach, M. A., & Wallach, L. (1976). Teaching all children to read. Chicago: University of Chicago Press.