As soon as tomorrow, 12 news agencies could have the names and effectiveness ratings of 12,000 teachers from New York City’s public schools, the Washington Post reports.
The United Federation of Teachers plans to file a lawsuit in state court to block the release of the data. The ratings in question are based on the average progress a teacher’s students made on standardized tests over the course of a school year. In edu-speak, they’re called “value-added” assessments.
It’s not clear from the Post story how many years of data were included in the New York City school district’s analysis. As researchers and statisticians note, these effectiveness ratings tend to vary wildly from year to year because the sample size — especially for elementary school teachers — is so small (20 to 30-some students).
To reduce that variability, the L.A. Times, in its controversial report Grading the Teachers, did not give ratings to teachers who had scores for fewer than 60 students. The average number of student scores per teacher was 110, according to Jason Felch, one of the reporters.
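To see why sample size matters so much here, consider a toy simulation (my own illustration, not the model any district actually uses): suppose each student’s test-score gain is the teacher’s true effect plus a large dose of student-level noise. The spread of a teacher’s rating from year to year shrinks roughly with the square root of the number of students, so a rating based on 25 scores bounces around far more than one based on 110. The effect size and noise level below are made-up numbers chosen only to show the pattern.

```python
import random
import statistics

random.seed(42)

# Hypothetical parameters for illustration only.
TRUE_EFFECT = 5.0    # assumed true teacher effect, in score points
STUDENT_SD = 15.0    # assumed student-to-student noise in score gains
TRIALS = 2000        # simulated "years" per class size

def estimated_effect(n_students):
    """One year's rating: the average gain across a class of n_students."""
    gains = [random.gauss(TRUE_EFFECT, STUDENT_SD) for _ in range(n_students)]
    return statistics.mean(gains)

for n in (25, 60, 110):
    estimates = [estimated_effect(n) for _ in range(TRIALS)]
    spread = statistics.stdev(estimates)
    print(f"class size {n:>3}: year-to-year spread of the rating ~ {spread:.2f}")
```

With these assumptions, the spread for a class of 25 is roughly twice that for 110 students, which is the statistical logic behind the Times’ 60-student cutoff.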
At a UC Berkeley forum I covered last month, Sophia Rabe-Hesketh, a UC Berkeley statistician, said the current models don’t separate teacher effect from other variables in a child’s education, such as school leadership, curricula and materials. Others, including Stanford economist Eric Hanushek, say these “value-added models,” for all of their flaws, are far superior to the current, often perfunctory, way that teachers are evaluated.
But it’s one thing to use such metrics internally, and another to publish the ratings with all of the teachers’ names attached. Do you think the public has the right to know this information?