Though researchers have repeatedly warned of the instability and high error rates of value-added measures (VAM) of teacher performance, and have cautioned against their use for high-stakes purposes, "reformers" of a certain kind remain infatuated with the practice -- as does much of the media. I've written about VAM in the past, to object to its unearned popularity and to the manner in which teachers' scores were published in Los Angeles (and soon, perhaps, in New York City). Having failed to single-handedly thwart its popularity in the press, I also advocated for its use in all professions, particularly medicine -- with similarly faint success.
I can't say I'm surprised. The value-added campaign is backed by some serious money and clout, especially in the form of the Gates Foundation and its Measures of Effective Teaching Project. According to its overview, the Project:
is based on two simple premises: First, a teacher's evaluation should depend to a significant extent on his/her students' achievement gains; second, any additional components of the evaluation (e.g., classroom observations) should be valid predictors of student achievement gains.
In other words, the Project's work rests on the as-yet-unproven assumption that current methods of using test score gains to assess teacher performance are stable and robust enough to judge teachers and determine the validity of other ways of measuring teacher effectiveness. (So much for using multiple measures of performance so that the weaknesses of one kind can be offset by the strengths of another...)
As this discussion continues, and as growing numbers of people who have little background knowledge in education or research exercise ever greater influence over major policy decisions, it's becoming clear to me that what America really needs, perhaps even more than a Measures of Effective Teaching Project, is a Measures of Effective Reporting Project.
The traditional role of the news media is to provide accurate, useful information to the public, so that we can make sound decisions in various aspects of public and private life. Ideally, journalists should be well-informed and independent from powerful social institutions, so that the information we receive from them is as complete and impartial as possible.
That's clearly not happening. The coverage (sometimes fawning) of the first findings from the MET Project illustrates how severe the problem is. In this case, it's clear that many news outlets paid more (or even exclusive) attention to the policy brief than the full research report. And unfortunately, many of the claims made in the brief are only weakly supported by the data from the study.
It's worth noting that a few educators registered their concerns about the report soon after its December release. For example, California teacher Larry Ferlazzo shared his concern that the Project might be cheapening two powerful feedback tools -- video and student surveys -- by reducing their value to how well they correlate with faulty measures of performance. And Seattle principal Justin Baeder's critique anticipated a few of the concerns raised in a research review released today, namely that the data do not support the conclusions the report's authors reach by way of what he terms "logical gymnastics and implication avoidance."
In a review released this morning by the National Education Policy Center, Berkeley professor and economist Jesse Rothstein argues that though the MET Project has positive potential, problems with the study's assumptions (the Project's above-quoted premises) and interpretations of the study's initial findings threaten its usefulness and credibility.
The MET study is an unprecedented opportunity to learn about what makes an effective teacher. However there are troubling indications that the Project's conclusions were predetermined... The results presented in the report do not support the conclusions drawn from them. This is especially troubling because the Gates Foundation has widely circulated a stand-alone policy brief (with the same title as the research report) that omits the full analysis, so even careful readers will be unaware of the weak evidentiary basis for its conclusions.
According to Rothstein, the researchers' mistakes include:
• Inappropriately implying causation where no evidence yet exists. The random-assignment part of the study, which attempts to distinguish teacher influence from other factors, is not yet complete. It's also not clear that the researchers made any attempt to look for signs of potential bias in the available data.
• Inappropriately using results from a low-stakes study environment to support the use of VAM in the real world, a high-stakes environment.
• Overstating the strength and real-world significance of positive correlations among different measures of performance. Though teachers with high ratings and scores in one area "tend to" have high ratings in others, he finds that the tendency is "shockingly weak." For example, in math, "over forty percent of the teachers whose actually-available state exam scores place them in the bottom quarter are in the top half on the alternative assessment" (emphasis mine) that assesses students' broader understanding of the concepts taught.
In other words, there is a mismatch between performance measured by commonly used tests of short-term knowledge and performance measured by more conceptually demanding ones, even in this no-stakes environment. Moreover, a teacher's value-added score from the state tests is only slightly better than chance at predicting her value-added with regard to broader conceptual knowledge. It's therefore still reasonable to assume that in a high-stakes environment, teachers will face a(nother) perverse incentive to teach to low-level tests in order to boost their own scores, at the expense of higher-level thinking skills. Concerns about misidentifying good and bad teachers also remain well-founded...
...as do concerns about our inability to hold powerful foundations accountable, in an environment where the media is less watchdog than lapdog.
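For readers who find that "forty percent" figure hard to believe, it's worth seeing how easily a weak correlation produces exactly that kind of quartile mismatch. The little simulation below is purely illustrative -- the correlation of 0.2 is my own assumption for the sake of the example, not a number from the MET report -- but it shows that when two noisy measures of the same teachers are only weakly related, roughly four in ten "bottom-quartile" teachers on one measure land in the top half on the other:

```python
import random
import statistics

# Illustrative only: simulate two value-added measures per teacher with a
# weak correlation. The value r = 0.2 is a hypothetical choice, not a
# figure taken from the MET study.
random.seed(0)
n = 100_000
r = 0.2

state = [random.gauss(0, 1) for _ in range(n)]
alt = [r * s + (1 - r**2) ** 0.5 * random.gauss(0, 1) for s in state]

q1_cutoff = statistics.quantiles(state, n=4)[0]  # 25th percentile on the state test
alt_median = statistics.median(alt)

bottom_quartile = [i for i in range(n) if state[i] < q1_cutoff]
frac = sum(alt[i] > alt_median for i in bottom_quartile) / len(bottom_quartile)
print(f"{frac:.0%} of bottom-quartile (state-test) teachers rank "
      f"in the top half on the other measure")
```

In this toy setup the fraction comes out close to 40 percent -- in the same ballpark as what Rothstein reports -- which is simply what weak correlation looks like when you sort people into performance categories.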
If major news outlets feel it is all right to publish previously confidential (and highly flawed) data about teachers, perhaps it's time for teachers -- and everyone else who cares about useful reporting -- to launch our own project, to identify and scale up the kind of journalism required by a free and complex society. Who's in?