I've been in favor of results-based accountability pretty much forever. And for good reason: before the era of academic standards, tests, and consequences, all manner of well-intended reforms failed to gain traction in the classroom. New curricula came and went; states and districts injected additional professional development into the schools; commission after commission called for more "time on task." Yet nothing changed; achievement flat-lined. And it was impossible to know which schools were doing better than others, or at what.
Then came the shock of consequential accountability, and student test scores (on the National Assessment of Educational Progress and on state exams, too) started to take off. For some subgroups of students, math and reading skills have improved by two or three grade levels since just the mid-1990s.
Yet we all know the downsides of the narrow focus on reading and math scores in grades three through eight and once in high school. This regimen puts enormous pressure on schools to ignore or exclude other important subjects (art, music, history, even science). It penalizes schools with an educational strategy that succeeds in the long term but doesn't produce sky-high scores now. (I'm thinking of Waldorf schools, for instance, such as the preschool my son attends.) And it undervalues other important contributions that schools make, such as to students' character development and social skills.
When it comes to evaluating teachers, there is wide agreement that we need to look at student achievement results -- but not exclusively. Teaching is a very human act; evaluating good teaching takes human judgment -- and the teacher's role in the school's life, and in her students' lives, goes beyond measurable academic gains. Thus the interest in regular observations by principals and/or master teachers. These folks can pick up on nuances missed by the value-added data -- and can provide actionable feedback to instructors so that they can improve their craft. (Harrison School District Two in Colorado has one of the best plans in this regard.)
So why do we assume, when it comes to evaluating schools, that we must look at numbers alone? Sure, there have been calls to build additional indicators, beyond test scores, into school grading systems. These might include graduation rates, student or teacher attendance rates, results from student surveys, AP course-taking or exam-passing rates, etc. Our own recent paper on model state accountability systems offers quite a few ideas along these lines. This is all well and good.
But it's not enough. It still assumes that we can take discrete bits of data and spit out a credible assessment of organizations as complex as schools. That's not the way it works in business, famous for its "bottom lines." Fund managers don't just look at the profit and loss statements of the companies in which they invest. They send analysts to visit the team, hear about its strategy, kick the tires, talk to insiders, and find out what's really going on. Their assessment starts with the numbers, but it doesn't end there.
So it should be with school accountability systems. The best ones today take various data points and turn them into user-friendly letter grades, easily understandable by educators, parents, and taxpayers alike. So far so good. Why not add a human component to the process, via school inspectors like those in England? (See this excellent Education Sector paper, by my friend Craig Jerald, for background on how that works.)
Imagine: At least once a year (more would be better) a group of inspectors visits a school. (These would be professionals on contract with the state department of education -- typically retired teachers and principals. In the case of charter schools, authorizers would be involved, too.) They would mostly look for two things:
- Evidence that the school is achieving important outcomes that may not be captured by the state accountability system. For example, the school's administrators might show them test score data from a computer adaptive exam like NWEA's that demonstrates progress for individual kids (especially those well above or below grade level) that isn't picked up by the less-sensitive state test. Or perhaps a high school has compelling data about its graduates' college matriculation and graduation rates that put its mediocre test scores in a different light.
- Evidence of unhealthy curricular narrowing, such as art, music, history, or science being squeezed out of the school day in order to maximize scores in reading and math.
So here's how it would work: The state would develop school grades based on a variety of indicators, as it does now. Then those grades could be raised or lowered based on the findings of the school inspectors (generally by just one letter grade, though sometimes more). Grades would go up because of evidence of strong outcomes not captured by the state accountability system; grades would go down because of evidence of unhealthy curricular narrowing.
Such a system would remain imperfect. Human judgment would introduce subjectivity and error into the process. Inspectors might face pressure (maybe even bribes) to raise schools' grades. And it would be expensive -- at least as compared to the testing-and-accountability systems we have now. These issues would need to be addressed.
Still, it's worth it. To the extent that school grades (and consequences linked to them) drive policy and behavior, we ought to make sure that those grades are informed by more than just numbers. The correct response to the unintended consequences of accountability isn't to end accountability, but to make it work better. That could have positive consequences for many years to come.
Originally published on the Fordham Institute's Flypaper blog.