Tough Times at the L.A. Times: Standing Behind Incorrect Teacher Ratings

The L.A. Times has not been simply reporting on teacher evaluations or ratings. It has been creating them and publicizing them.
This post was published on the now-closed HuffPost Contributor platform.

The newspaper business can't be much fun these days. Editors and reporters are desperate to find ways to hold on to readers. Such desperation, however, can never justify misleading readers, publishing factual errors, and then doubling down on those mistakes when confronted with the truth. Yet that's where the Los Angeles Times now finds itself.

It's been three weeks since the National Education Policy Center (NEPC) released a reanalysis of the research underlying the August 2010 Times story that rated teachers in the Los Angeles Unified School District (LAUSD) based on an attempt to estimate the growth of their students' test scores. The new NEPC study concluded that the research on which the Times teacher effectiveness ratings were based was not capable of producing valid ratings of individual teachers.

On February 7, 2011, the Times covered the NEPC research and gave the article the inconceivable title, "Separate study confirms many Los Angeles Times findings on teacher effectiveness." (The subtitle: "A University of Colorado review of Los Angeles Unified teacher effectiveness also raises some questions about the precision of ratings as reported in The Times."). True to the headline, the story incorrectly characterized the re-analysis as "confirm[ing] the broad conclusions of a Times' analysis."

In fact, the NEPC study, conducted and authored by Derek Briggs and Ben Domingue and titled Due Diligence and the Evaluation of Teachers, confirms very few of the Times' conclusions -- and none of the key ones.

The Times story was written by Jason Felch, the reporter who also wrote the August 2010 Times story that relied on the problematic research. The article was apparently assigned to Mr. Felch by assistant managing editor David Lauter, who I am led to believe oversaw the August project. That is, Messrs. Felch and Lauter teamed up for the original coverage and then, when the foundation of that work was critiqued, they teamed up again to misrepresent the critique.

In response to the Times article, NEPC posted a "Fact Sheet" on its website, walking readers through the article's most misleading and false statements. Others joined in, expressing outrage that "the facts reported in the study [were] studiously ignored" in the Times coverage and wondering "how one could reasonably draw such a conclusion [the Times' headline] from the highly readable 32-page research report written by Briggs/Domingue."

Faced with this second set of criticisms, the Times again chose to deny and mislead. On February 14th, the paper published a post by its "readers' representative" along with a separate unsigned statement from management, each defending the original reporting, the accuracy of the teacher ratings, and the coverage of the NEPC report, while making demonstrably false claims about what was and was not included in that report.

A point-by-point response to the defense presented by the Times is provided on the NEPC website. Readers of that response will see that what is at stake here is not a battle over semantics or arcane statistical details. The Times contends that the teacher effectiveness ratings it published online were built on sound research, offering a fair and reliable assessment of the relative quality of individual LAUSD teachers. Parents are encouraged to rely on its searchable database in order to make choices about who will teach their children.

The NEPC report explains that the model used to construct the Times database of individual teacher effectiveness ratings is not adequate to that task. Using a stronger, alternative model, 53.6 percent of the teachers in the database -- more than half -- would fall into a different effectiveness category for reading than the one assigned by the Times. While the NEPC researchers explain why they think a stronger model is preferable, that's not really the point. Instead, the point is this: because two reasonable models reach such different results, the Times' decision to publish ratings based on its preferred model is reckless.

The Times has not been simply reporting on teacher evaluations or ratings. It has been creating them and publicizing them. This unusual position confers upon the Times a profound obligation to ensure that any ratings it publishes are both valid and reliable. It is incumbent on the paper's reporters and editors to report cautiously on the effort's weaknesses.

This ethical obligation is amplified when the Times is presented with a critique of the social science work that the paper had commissioned and used. Yet, inexplicably, the story about the critique was assigned to the same reporter who wrote and has repeatedly defended the original story, and the assignment was apparently made by the same editor who worked on that story. The result, not surprisingly, was an attempt to mislead readers and whitewash the critique. It's been enlightening but chilling to watch a desperate newspaper determined to make its own reality -- behaving in ways I've come to expect of politicians, not journalists.

I am nowhere near a neutral observer of this morality play, but I still hold out hope that the protagonists at the Times will be reached by the researchers, teachers, and others who have tried again and again to shine a light on the truth.

Professor Alex Molnar, NEPC's publications director, joined in the drafting of this post.
