Last week, readers of The Huffington Post were invited to share their stories of inspirational teachers. I had many great teachers throughout my public school career -- and when I tried to pick just one, or even a couple, I realized it would be faster to list those who didn't make a difference for me than to list those who did.
As I think about what made my teachers great, and listen to others' reflections, I notice that for almost all of us, what resonates is how those teachers made us feel -- more confident, more capable, inspired or loved -- and the life lessons and values they taught us.
The one thing I don't hear people saying is, "I loved Miss Such-and-Such because she helped me get better test scores!" It seems that what matters most to us can't be easily measured.
And yet, one of the big trends in education right now is value-added measurement (VAM), in which a teacher's effectiveness is calculated by measuring the growth in his or her students' test scores. On its face, the idea makes sense. If standardized tests measure what students know and can do, and teachers are solely responsible for increasing what students know and can do, then we should keep track of how much students' scores rise while in their teachers' classrooms. That way, we can reward the best teachers and disseminate their practices, and identify the worst teachers for extra help or dismissal. Simple as Simon, easy as pie, right?
Not really. First, standardized tests offer only partial information about what students know and can do, and only in a limited range of subjects. Second, teachers aren't the only ones who teach students. Family members, friends, other teachers, librarians, tutors, counselors, and the student him- or herself (imagine that!) all influence what is learned over time. Trying to devise a way to calculate the influence of one particular teacher amidst all those others? Remember, each teacher's relative influence will be different for each learner. Good luck!
Other problems known to plague value-added measures for individual teachers include:
- High error rates. While VAM may be useful for assessing schools or districts, there is a one-in-five chance that an individual teacher's effectiveness will be mischaracterized when only three years' worth of student data are used. That rate rises to 35 percent (over one in three) when just one year of data is used.
With a large enough sample size and random assignment of students, some of these issues could be minimized. But that would require school leaders to randomly assign students to teachers, and wait several years for teachers to accumulate enough data points (formerly known as children) to be sure that any inferences drawn from the calculations are reasonably accurate.
And of course, those problems are stacked on top of the other problems already associated with high-stakes standardized testing. These include, but aren't limited to: bias toward certain kinds of learners, pressure on schools to narrow the curriculum, the temptation to game the system and/or cheat, and so on. That is why groups like the National Academy of Sciences and the Economic Policy Institute agree that VAM should not be used to make high-stakes decisions about teachers.
If we truly respect teachers, and are serious about ensuring that all children are taught by high-quality teachers, why redesign teacher evaluation systems around measures that most researchers agree are unstable and misleading? (So much for being data-driven...) Why would any school district use a tool they don't actually understand to inform high-stakes decisions? Under these circumstances, it is irresponsible to use VAM for any significant proportion of an individual teacher's evaluation -- especially if the other components include a "drive-by" principal observation and little else.
Why not tap into those human connections we all understand and value, and build a better evaluation system from there? It could include examples of student work along with evidence (lesson plans, videotaped lessons) of how the teacher responded to students' needs; student and parent input (which could take the form of surveys and other testimonials on how a given teacher "worked" -- or didn't work -- for them); or ongoing observation by instructional leaders and peers. Such a system could actually support teacher improvement by giving meaningful and timely feedback. That's a stark contrast to a value-added score that offers only an opaque, relative measurement of a teacher's performance, and isn't available until well after a particular school year has ended. (It's also less expensive than spending money on the tests, data-tracking systems, and specialists VAM requires.)
If that sounds pie-in-the-sky, it shouldn't. Some schools and districts already have such a system in place (or at least the beginnings of one). The trick is making sure that school personnel actually have the time and support to implement it consistently and well. Many school leaders don't, because of the numerous and often competing demands of district, state and federal mandates. Likewise, onerous reporting requirements consume time and energy that teachers and principals could otherwise spend building working relationships with parents.
If you have questions about the quality of your child's teachers, get involved! Visit the school, observe in your child's classroom, and get to know the teachers -- don't assume that a number or ranking published in the paper will tell you what you need to know. Teachers, prioritize your work, and consider ignoring some of the less important stuff you're asked to do so you can spend that time reaching out to parents and students instead.
Here's hoping the current generation of students can look back on their schooling and remember caring, creative teachers, instead of stressed-out test-prep technicians!