
A feel-good program that works

In these tough economic times (or maybe before, for all I know), Oakland school board members are quick to tell us that they can’t make decisions based on what “feels good.” 

Well, a study released today by researchers from Washington University in St. Louis tells us that one program used in Oakland – Experience Corps, a school tutoring program with volunteers older than 55 – does more than that.


Tribune file photo by Diana Diroy

The study, which spanned two years and included more than 800 kids, found that children served by Experience Corps volunteers, on average, made 60 percent more progress on sounding out words and in reading comprehension skills than other academically struggling students who didn’t have the tutors.

I wrote a story about the program two years ago (although it describes a math lesson, and this report analyzed reading). Experience Corps is at six Oakland elementary schools: Civicorps, Cleveland, Emerson, Piedmont Avenue, Santa Fe and Monarch.

Katy Murphy

Education reporter for the Oakland Tribune. Contact me at kmurphy@bayareanewsgroup.com.

  • Jim Mordecai

    Katy:

    Tutoring 800 kids sounds like a wonderful program and opportunity for students, and it must be rewarding for the adults. My mother participated in a program of tutoring students near her work in San Francisco years ago and found it most rewarding.

    The following comment is not to put down the program but to understand the basis for the 60% claim.

    To how many students does the 60% claim apply, and is that an average score for one year or for more than one year?

    Where does the 60% statistic mentioned in your story come from? Is it from a sub-test of the CST in reading, or from a test administered by the program? And if administered by the program, what is the name of the test and the publisher?

    Students tested on a sub-test may show improved scores when the same sub-test is given a second time, with or without instructional intervention. However, the issue is not whether repeating a test improves a sub-test score but whether the identified improvement transfers to a major skill such as reading comprehension.

    Jim Mordecai

  • Katy Murphy

    The Experience Corps program is actually much larger than 800 kids. In fact, there are more than 2,000 tutors around the country. Eight hundred was just the number of children included in the two-year study.

    The 60 percent refers to the average difference in “progress” between the two groups of kids. I linked to the study (just click on the phrase “study released today”) if you want to check out the research methodology and other findings.

  • Jim Mordecai

    I was unable to find in the study any reference to a 60 percent greater effect compared with the group not participating in the program. However, there were some very impressive results in support of the program.

    By the way, the program was evaluated in three cities outside of California.

    The program’s effectiveness was greater than that of some other often-discussed programs whose effectiveness has been researched:

    “To understand the impact of the EC program, we can compare these effect sizes to those of other and various types of reading interventions. Reading Recovery® (RR) is a one-to-one intensive tutoring program, employing certified teachers specifically trained in the intervention. The What Works Clearinghouse (Institute of Education Sciences, 2007) reports effect sizes around .80. The Tennessee STAR program reduced class size to improve academic achievement, and the effect size associated with change in reading scores was .26 (Nye, Hedges, & Konstantopoulos, 2000; Mosteller, 1995). Reading First, a national initiative that promotes instructional practices, did not produce a statistically significant impact on reading comprehension for students in 1st through 3rd grades (Gamse, Jacob, Horst, Boulay, & Unlu, 2008). In this context, the magnitudes of the reading improvements associated with the EC program are substantial, given that the intervention is delivered by trained volunteers.
    The EC program succeeded in delivering the intervention to a large number of the students. About half of the EC students received between 30 and 49 sessions, and 76% received over 35 sessions. Although program effects were detected in the full sample, including students who received very few EC sessions, program effects were stronger for the subset of EC students who received 35 or more sessions (.13, .17, .17). These findings suggest that the EC program would be strengthened by attempts to ensure that all students participate in the program at the intended level.”

    Although I could not find the 60% reference, the .13 and .17 effect sizes are impressive.

    Jim Mordecai

  • Katy Murphy

    OK. I’ve followed up on Jim’s question and found some answers — some of which, truthfully, lead to more questions about the takeaway message of the study and the use of statistics in general.

    As Jim observed, the “60 percent more progress” figure was not explicitly listed in the study, although a public relations summary sent to reporters described it as a “central finding.” (I know, I should never rely on a summary that wasn’t written by the researchers themselves.) The 60 percent figure referred to a small difference in average progress between the control group and the Experience Corps group over the course of one year. Those numbers are in Table 3.

    The control group — kids without Experience Corps — scored an average of 2.47 points higher at the end of the year than at the beginning on sounding out words (“word attack”). By comparison, the Experience Corps group improved by an average of 3.78 points. The starting score for both groups, to give you an idea of scale, was 91.89.

    On the reading comprehension tests (starting scores: 83.99), the control group improved by 2.46 points during the year, on average, and the Experience Corps group moved up by 4.41.

    Work out the percentage differences, and the Experience Corps group made 53 percent more progress on “word attack” and 79 percent more progress in reading comprehension; the average of the two is about 66 percent. I think that’s how they arrived at “over 60 percent.”
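
    For anyone who wants to check the arithmetic, here is the calculation spelled out as a short Python sketch (mine, not the study’s), using only the Table 3 gain scores quoted above:

    ```python
    # Gain scores over one school year, from Table 3 of the study.
    gains = {
        "word attack":   {"control": 2.47, "experience_corps": 3.78},
        "comprehension": {"control": 2.46, "experience_corps": 4.41},
    }

    for measure, g in gains.items():
        extra = (g["experience_corps"] - g["control"]) / g["control"]
        print(f"{measure}: {extra:.0%} more progress")
    # word attack: 53% more progress
    # comprehension: 79% more progress
    # The average of the two percentages is about 66 percent.
    ```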

    Whew!

    Here’s my question: Based on the minuscule number of “progress” points we’re talking about here — less than 5 points for either group out of 80- and 90-some points, total — is it overselling the study’s findings to tout the 60 percent figure?

    Yes, this is a statistically significant difference, but for practical purposes, pretty small.

  • Jim Mordecai

    What was reported in comparing the two groups (the group that was tutored and the group that did not receive tutoring from Experience Corps tutors, though it might have received tutoring or some other intervention from another source) was a t-test for independent means.

    The question being asked is whether the difference between the two groups is statistically significant. The caution (I have read) is not to confuse statistical significance with practical significance. In other words, a study might yield a difference between the experimental group and the control group, but that difference may have no practical application.

    What is being tested is the difference between the means of the experimental group and the control group. Depending on the size of both groups, and with a large sample population, even small differences can be statistically significant.
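
    To illustrate the point, here is a minimal sketch (the numbers are made up for illustration, not taken from the study): a gap of 1.6 points on tests whose scores spread by roughly 10 points is small in practical terms, yet with large enough groups an independent-means t-test flags it as highly significant.

    ```python
    # Statistical vs. practical significance, computed from hypothetical
    # summary statistics (not the study's data).
    from scipy import stats

    t_stat, p_value = stats.ttest_ind_from_stats(
        mean1=96.0, std1=10.0, nobs1=1000,  # hypothetical tutored group
        mean2=94.4, std2=10.0, nobs2=1000,  # hypothetical control group
    )
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # t ≈ 3.58, p ≈ 0.0003
    # The 1.6-point gap is "significant," yet it amounts to only 0.16
    # standard deviations, roughly the size of the effects in the study.
    ```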

    I am impressed that, in the notes on the study, the researchers compared their results to famous research studies, some of which yielded statistically significant differences less impressive than the differences in sub-test score means between the experimental group receiving the tutoring treatment and the students who did not.

    Pointing to percentage differences on given sub-tests is overselling and misleading.

    However, it appears that this research has some rigor and should be given attention, though it is not easy to explain why it deserves that attention.

    In Katy’s response to my concern over using percentages, she discusses mean scores and slips into mentioning percentage differences because they are easier to communicate than statistical significance. But percentages and statistical significance are two things that I believe should not be mixed in reporting on educational research.

    Beyond challenging the misuse of percentages, I don’t have any idea how to communicate this area and keep it simple. But it is an area where reporters, teachers, and parents alike can get statistically snookered.

    Jim Mordecai

  • Katy Murphy

    Thanks, Jim.

    Just to clarify one point: I didn’t exactly “slip into mentioning percentage difference” in my response to you. I was merely responding to your question about the percentage (from the study’s summary, which I cited in the blog post). You wondered where it came from, and I made those calculations, as well as a few phone calls, to figure it out.

  • Jim Mordecai

    Katy:

    Thanks for following up on my question about the report’s use of percentages, and I take your distinction between responding to my inquiry and slipping into mentioning percentages.

    And I agree it is not fair to say that your response to my inquiry amounted to supporting the misuse of percentages.

    Setting aside my concerns about the research design of the two-year study of the efficacy of Experience Corps, carried out in three cities on the other side of the United States, I would like to know more about how the program is being implemented in the Oakland schools.

    For example, does the program cost the District money, and if so, what is the funding source? Can other elementary schools that are interested in the program take part, or has the program’s capacity been reached?

    One of the findings of the Experience Corps research study was that children who participated fully in the tutoring improved to a greater extent than children with lower attendance.

    Are there strategies being employed in Oakland to maintain high attendance in the program, and if so, what are they?

    A challenge I witnessed in the years I taught in Oakland elementary schools was that the struggling students were least likely to consistently show up for tutoring.

    Jim Mordecai

  • David Moren (http://www.experiencecorps.org)

    Thank you both for the thoughtful dialogue. To answer some of the questions raised about Experience Corps in Oakland: we are currently in seven Oakland elementary schools, and while we have limited resources to bring Experience Corps to as many schools as we’d like, yes, elementary schools that are interested in the program can contact us about taking part. For further questions, we can be reached at (510) 495-4966 or at dmoren@aspiranet.org.

    Thank you,
    David Moren
    Associate Director
    Experience Corps Oakland

  • Nancy Morrow-Howell

    As lead researcher for the Washington University evaluation of the Experience Corps program, I’d like to clarify a few points raised in this discussion.

    When we say that students with Experience Corps (EC) tutors made 60 percent more progress in several critical reading skills than students not served by the program, we provide a simple, accessible way to describe one aspect of the study’s results. The number is a hybrid. Here are the calculations for the Word Attack measure: From Table 3, presenting the actual gain scores of both the EC group and the controls, we know that the control group gained 2.47 points over the school year. From Table 4, our estimate of the impact of the EC program is 1.59 (calculated while accounting for the effects of many other covariates). Thus, we add 2.47 to the estimated impact of 1.59 to get 4.06 — an estimate of the gain the control group would have experienced if they had been in the EC program. Next, compute the percent gain attributable to EC: (4.06 − 2.47)/2.47 = .64. A similar calculation on the passage comprehension measure yields 62%. Both of the gains are statistically significant, as indicated by the statistical tests associated with the estimated impacts (1.59 in our example) in Table 4.
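
    Spelled out as a short calculation (a sketch using only the Table 3 and Table 4 figures cited above), the Word Attack arithmetic looks like this:

    ```python
    # The "hybrid" 60 percent calculation for the Word Attack measure,
    # using the figures quoted above from Tables 3 and 4 of the report.
    control_gain = 2.47       # Table 3: control group's gain over the school year
    estimated_impact = 1.59   # Table 4: covariate-adjusted impact of the EC program

    ec_equivalent_gain = control_gain + estimated_impact           # 4.06
    pct_gain = (ec_equivalent_gain - control_gain) / control_gain
    print(f"{pct_gain:.0%}")  # 64%; the same method yields 62% for
                              # passage comprehension
    ```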

    If you focus on the findings stated in the report to describe the strength of the study’s results, the effect sizes are the most informative. The effect sizes associated with the statistically significant gains on three of the four reading measures were .10, .13, and .16. Although program effects were detected in the full sample, including students who received very few EC sessions, program effects were stronger for the subset of EC students who received 35 or more sessions (.13, .17, .17). These effect sizes are the differences in adjusted post-test means between the two groups, expressed in standard deviation units (Table 4).
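
    In formula form (the notation here is mine, not the report’s), each effect size is simply

    $$\mathrm{ES} \;=\; \frac{\bar{Y}^{\,\mathrm{adj}}_{\mathrm{EC}} - \bar{Y}^{\,\mathrm{adj}}_{\mathrm{control}}}{SD},$$

    where the $\bar{Y}^{\,\mathrm{adj}}$ terms are the covariate-adjusted post-test means of the two groups and $SD$ is the standard deviation used to standardize the difference.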

    One way to interpret effect sizes is to compare them to those produced by other studies on other programs. The Tennessee STAR program reduced class size to improve academic achievement, and the effect size associated with change in reading scores was .26 (Nye, Hedges, & Konstantopoulos, 2000; Mosteller, 1995). Reading First, a national initiative that promotes research-based instructional practices for early literacy development, did not produce a statistically significant impact on reading comprehension for students in 1st through 3rd grades (Gamse, Jacob, Horst, Boulay, & Unlu, 2008). In this context, the magnitudes of the reading improvements associated with the EC program are substantial, especially given that the intervention is delivered by trained volunteers.

    Thanks to all who are paying close attention to this important research. A full copy of the report is available at http://csd.wustl.edu/Publications/Documents/RP09-01.pdf. I encourage those who have more questions about Oakland’s Experience Corps to call the local program at (510) 495-4966 or to email David Moren at dmoren@aspiranet.org.