This point came up in passing a few days ago when I was reviewing some writing by Mark Garrison, but it is worth hammering home all by itself.
We have been told repeatedly that we need to take the Big Standardized Tests so that we can hold schools accountable and tell whether our teachers are succeeding or not. "Of course we need accountability systems," the policy makers say. "Don't you want to know how well we're doing?"
And then we rank schools and teachers and students. But ranking is not measuring.
Would you rather be operated on by a top-ranking surgeon or one who was the bottom of his class? What if the former is the top graduate of Bob's Backyard School of Surgical Stuff and the latter is the bottom of Harvard Medical School? Would you like homework help from the dumbest person in MENSA or the smartest person in a sixth grade remedial class? And does that prompt you to ask what we even mean by "dumb" or "smart"?
"But hey," you may reply. "If I'm going to rank people by a particular quality, I have to measure that quality, don't I?"
Of course not. You can find the tallest student in a classroom without measuring any of them. You can find the heaviest box of rocks by using a scale that doesn't ever tell you how much they weigh. Ranking requires no actual measurement at all.
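The point above is the difference between ordinal comparison and cardinal measurement, and it can be made concrete in a few lines of code. This is a purely illustrative sketch (box names and weights invented for the example): the "balance scale" function is the only thing that ever sees a weight, while the ranking logic itself learns nothing but which box is heavier.

```python
import functools

def balance_scale(box_a, box_b):
    """A balance scale: reports which box is heavier, never how heavy."""
    return box_a if box_a["weight"] > box_b["weight"] else box_b

# The true weights are hidden inside the boxes; the ranking step below
# only asks the scale for comparisons and never reads a weight itself.
boxes = [
    {"name": "granite", "weight": 41},
    {"name": "gravel", "weight": 17},
    {"name": "pebbles", "weight": 5},
]

heaviest = functools.reduce(balance_scale, boxes)
print(heaviest["name"])  # the heaviest box, found with zero measurements
```

At the end we know the full ranking of the boxes and still have no idea what any of them weighs.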
Not only that, but when we are forced to measure, ranking encourages us to do it badly. Many qualities or characteristics would best be described or measured with a many-dimensional matrix with a dozen different axes. But to rank, we have to reduce complex multidimensional measurement to something that can be measured with a single-edged stick.
Who is most attractive -- Jon Hamm, Ryan Gosling or George Clooney? It's an impossible question because it involves so many factors, from hair style to age to wry wit vs. full-on silliness all piled on top of, "Attractive to whom, exactly?" We can reduce all of those factors and measure each one independently, and that might create some sort of qualitative measure of attractiveness, but it would be so complicated that we'd have to chart it on some sort of multi-matrix omni-dimensional graphy thing, and THAT would make it impossible to rank the three gentlemen. No, in order to rank them we would either have to settle on some single measurement that we use as a proxy for all the rest, or some bastard offspring created by mashing all the measures together. This results in a ranking that doesn't reflect any kind of real measurement of anything, ultimately both meaningless and unconvincing (the ladies of the George Clooney fan club will not change allegiance because some data-driven list contradicts what they already know in their hearts).
In fact, when we create the bastardized mashup measurement, we're really creating a completely new quality. We can call it the Handsomeness Quotient, but we might as well call it Shmerglishness.
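To see how arbitrary the mashup is, here is a toy sketch (all dimensions and scores invented for the example): three hypothetical candidates scored on several independent axes, mashed into a single shmerglishness number by a weighted sum. Change the equally arbitrary weights and the "objective" ranking flips.

```python
# Purely illustrative: invented candidates and invented scores.
candidates = {
    "A": {"wit": 9, "hair": 6, "charm": 8},
    "B": {"wit": 6, "hair": 9, "charm": 7},
    "C": {"wit": 8, "hair": 7, "charm": 9},
}

def rank(weights):
    """Mash the dimensions into one shmerglishness number and sort by it."""
    def score(name):
        return sum(weights[k] * candidates[name][k] for k in weights)
    return sorted(candidates, key=score, reverse=True)

# Two equally defensible weightings, two contradictory rankings.
print(rank({"wit": 3, "hair": 1, "charm": 1}))  # wit-heavy weighting
print(rank({"wit": 1, "hair": 3, "charm": 1}))  # hair-heavy weighting
```

Neither ranking is wrong by its own rules, and neither measures anything real; each just reports back the weights someone chose going in.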
So let's go back to "smart," a word that is both as universally used and as thoroughly vague as "good" or "stuff." Smartitude is a complex of factors, some of which exist not as qualities but as relationships between the smart-holder and the immediate environment (I'm pretty smart in a library, average under a car hood, and stupid on a basketball court). Measuring smart is complicated and difficult and multi-dimensional.
But then in the ed biz we're going to fold that quality into a broader domain that we'll call "student achievement," and now we are talking about describing the constellation of skills and abilities and aptitudes and knowledge of an individual human being -- and to rank that, we have to reduce it all to a single-axis shmerglishness number.
We could go on and on about the many examples of how complex systems cannot be reduced to simple measures, but I want to go back and underline that main idea --
Ranking is not measuring. In fact, ranking often works directly against measuring. As long as our accountability systems focus on ranking students, teachers and schools, they will not tell us how well the education system is actually working.
Originally posted at Curmudgucation