The Real-Life Values Behind the LA Times Value-Added Teacher Controversy

The current LA Times controversy regarding their value-added statistical analysis of teacher performance is a tricky one. By conducting a rigorous value-added analysis of teacher and student data, the Times has brought to light a cause-and-effect relationship between teachers and the standardized test scores of their students. That's a big deal, and they should be commended. But just because the data is there to assess who does a better job teaching kids to perform well on standardized tests doesn't mean there is enough data to conclude who the best teachers are. Standardized test scores are an important predictor of a student's success in later life, but so are social and emotional life skills, which have also been reliably measured and quantified. And unfortunately, the data available to the LA Times reflects teacher performance only as it relates to standardized test scores, not as it relates to the development of social and emotional competencies.
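For readers wondering what "value-added" means in practice, the core idea can be sketched in a few lines of code. This is my own toy illustration, not the Times' actual model (real value-added models are fit from district-wide data and control for demographics, prior years, and more); the names and numbers here are hypothetical:

```python
# Toy illustration of the value-added idea: a teacher's rating is the average
# gap between students' actual scores and the scores predicted for them.

def predict(prior_score):
    # Illustrative placeholder prediction: assume a student is expected to
    # repeat last year's score. A real model estimates this statistically.
    return prior_score

def value_added(students):
    """students: list of (prior_score, current_score) pairs for one teacher's class."""
    gaps = [current - predict(prior) for prior, current in students]
    return sum(gaps) / len(gaps)

# A teacher whose students beat their predicted scores gets a positive rating;
# one whose students fall short of predictions gets a negative rating.
classroom = [(60, 68), (75, 80), (50, 51)]
print(value_added(classroom))  # average points gained beyond prediction
```

The appeal of the approach is that it measures growth rather than raw scores, so teachers of initially low-scoring students aren't automatically penalized. But note what the sketch also makes plain: the only input is test scores, which is exactly the limitation discussed below.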

Recently, the Los Angeles Times analyzed data collected by the LAUSD to assess teachers' performance based on standardized test scores. The LA Times analysis yielded several important findings, not the least of which is that "Highly effective teachers, the ones who consistently and dramatically raise their students' test scores, are fairly evenly distributed among schools and across different levels of experience and education." But the LA Times didn't stop there. It also cited by name one teacher who scored poorly and one who scored well on the value-added analysis, and it plans to publish the names and scores of all 6,000 teachers assessed.

Without a doubt, students' scores on standardized tests are a significant factor in assessing the effectiveness of their teachers. The LA Times' reporting is important and commendable, as is the fact that the LAUSD collected the data in the first place and made it available to the public. But here's the problem: standardized test scores are just one factor in rating a teacher's job performance. When teachers' ratings with respect to how their students perform on standardized tests are the only performance indicator readily available to the public, it presents a lopsided picture that can be taken out of context and used against individual teachers in ways that are tough to predict and could damage reputations. (For a more in-depth discussion of the method of analysis itself, and its implications for assessing teacher performance, check out Charles Kirchner's HuffPo piece from last week.)

The focus of this controversy -- whether or not to publish teachers' names and associated test score data -- has received national attention even though it is in some respects a red herring. An argument can be made that the major public policy problem here lies not in the publication of the value-added analysis but in the fact that it is the only data readily available. The reporters and the editorial board of the LA Times have acknowledged many times that this data should not be the only factor used to assess teacher performance. But it is the best data the LA Times (or parents, or schools, or anyone else who is interested in teacher performance) has to work with.

And why is that? Perhaps it is because there is a serious misunderstanding among schools, media, parents and policy makers that factors other than standardized test scores cannot be measured accurately. Just take a look at this excerpt from an LA Times op-ed written by Sue Horton:

So much of learning, and of excellent teaching, involves intangible things that happen during the school day, interactions between students and teachers that defy quantification. But if the data we have show us that some teachers in a public school system are far better than others at helping kids master the essentials, the public should know that, and the system should study what those teachers are doing right.

And then, once we realize how much such assessments can tell us, perhaps we'll develop a way to measure the equally important but less tangible kind of learning that comes from a tea ceremony.

I appreciate and agree with Ms. Horton's sentiment that there is real value in the quality of learning inherent in a tea ceremony (and those familiar with my work will know I'm not kidding). I also agree that the value-added test score data should be studied to learn what some teachers are doing right. But here's my issue: the "less tangible kind of learning" that she points to is not only measurable but has been systematically measured, yet that data is usually not given its due.

Life skills programs teach the less tangible qualities Sue Horton refers to in a variety of well-established ways. A recent meta-analysis of good science (controlled outcome studies of school-based youth programs by reliable research institutions) by the Collaborative for Academic, Social, and Emotional Learning (CASEL) linked life skills programs for kids to the following student gains: improved social-emotional skills; improved attitudes about self; improved classroom behavior; and, get this, an 11-point gain on standardized achievement tests. The CASEL meta-analysis also showed that students in these programs were at reduced risk for conduct problems, aggressive behavior and emotional distress.

By publishing some data about individual teachers' performance, with no way for parents to easily collect other equally reliable data, the LA Times runs a significant risk of not only damaging the reputations of good teachers but of providing yet another disincentive for teachers to cultivate anything other than standardized test performance in their charges. I doubt that naming names would be as effective a means of making institutional change as taking the long view, staying with the story, and keeping a light on all those involved to see how they use this data for the benefit of the entire system (schools, families and the community).