Before the New Hampshire Primary results were announced, I was getting ready to follow up that Iowa Coin Toss piece with another post that highlighted the folly of giving numerical information - this time information derived from polls - more respect than it deserved.
But as it turned out, the pre-primary polls did a reasonable job predicting who would win (Trump and Sanders) and which Republicans would be strong runners-up. So should we consider the failure of the pollsters in Iowa an outlier and return to salivating over the next bits of data they feed us? Or maybe we should just treat New Hampshire as a lucky guess by quantitative quacks destined to fail three times out of four between now and Election Day.
Before embracing either extreme, however, it might be good to explore this issue through a quasi-statistical technique that predates modern polling by over two thousand years: Aristotle's Golden Mean.
If Critical Voter has a hero, it would have to be that 4th-century BCE Athenian man of letters who bequeathed us such reasoning tools as the rules of logic and the Modes of Persuasion.
The Golden Mean was introduced in Aristotle's work on ethics and is meant to describe what constitutes virtue. For example, courage is not a distinct quality someone possesses. Rather, it is a way to describe someone who correctly balances willingness to face danger (like a soldier standing his or her ground against attack) with the ability to avoid rash behavior (such as that same soldier needlessly placing his or her life in danger).
But if wisdom is also a virtue, perhaps we can determine how to wisely treat polling data by defining extremes and placing our opinion at the correct location between them.
For example, one extreme would be to treat pollsters as scientific oracles whose methods can effectively predict the future if they are just given enough time and enough numerical data to work with. This type of folly plays to the human tendency to treat quantitative information as intrinsically superior to qualitative information, a problem I described last time when discussing futile attempts to find the "correct" (numerical) answer to which Democrat won in Iowa.
But another extreme would be to look at the many failures of the pollsters and conclude that the entire field consists of nothing but BS artists skilled at passing off guesswork as science.
The problem with this skeptical extreme is that the tools pollsters use (such as survey techniques and statistical methods of analyzing data collected through such surveys) have been extraordinarily useful, underlying as they do over a century of successful social-scientific research. We need some way to gather information about subjects more complex than balls rolling down inclined planes (such as human beings), and asking those human beings carefully worded questions, then subjecting the collected results to mathematical analysis, has delivered valuable insights for decades.
The trouble is that these methods have their limits, which pollsters themselves usually communicate by qualifying their statements and predictions. For example, nearly every poll has a built-in margin of error meant to indicate the uncertainty that arises from making claims about a large population based on input from just a small sample of that population. Polls are also subject to practical sources of error, such as the difficulty of getting people holding one opinion to pick up the phone and talk to a pollster.
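To make the sampling side of that uncertainty concrete, here is a minimal sketch of the standard margin-of-error calculation for a sample proportion (the textbook normal-approximation formula, not anything specific to the polls discussed here; the 500-person poll is a made-up example):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    p: the proportion reported by the poll (e.g. 0.50 for 50%)
    n: the number of respondents sampled
    z: critical value for the confidence level (1.96 ~ 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 500 people in which 50% back a candidate:
moe = margin_of_error(0.50, 500)
print(f"+/- {moe:.1%}")  # roughly +/- 4.4%
```

Note what the formula does and does not capture: the margin shrinks only as the square root of the sample size, and it says nothing about the practical problems (like who refuses to answer the phone) mentioned above.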
In many cases, even large error bars are not make-or-break. For example, a finding that 75% of people prefer one brand of ice cream over another is still telling, even with a margin of error of over ten percent. But when distinctions start to fall within the range of error, poll results begin to border on the meaningless.
For example, when the Republican field consisted of 17 candidates, was there really a distinction between Bobby Jindal's 4% and Carly Fiorina's 6%? Not if both numbers fell within an overlapping margin of error (which they usually did). Despite such uncertainty, those numbers were still used to determine who got to participate in the "main event" vs. "kids' table" debates, which had an enormous impact on who got traction during the Republican primary race. And at least one candidate seems to have predicated his entire candidacy on his ability to lead in a race where over a dozen candidates were dividing a mere one hundred percentage points.
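The overlap problem above can be sketched in a few lines. The sample size of 400 is an assumption (the actual polls' sizes varied), and the confidence intervals use the same textbook normal approximation; the point is only that a 4%-versus-6% gap can sit entirely inside the statistical noise:

```python
import math

def conf_interval(p, n, z=1.96):
    """Approximate 95% confidence interval for a sample proportion."""
    moe = z * math.sqrt(p * (1 - p) / n)
    return (p - moe, p + moe)

def intervals_overlap(a, b):
    """True if two (low, high) intervals share any ground."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical 400-person poll: 4% for one candidate, 6% for another.
low_candidate = conf_interval(0.04, 400)
high_candidate = conf_interval(0.06, 400)
print(intervals_overlap(low_candidate, high_candidate))  # True
```

Since the two intervals overlap, the poll cannot really say which candidate is ahead, even though the raw numbers were treated as a meaningful ranking.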
And let's not forget that polls only tell you what people are thinking at a particular point in time. In a fast-moving presidential campaign, a lot can happen between when a poll is taken and when its results are analyzed and announced. That makes polls, at best, snapshots of opinion that may already be outdated.
So perhaps, in the case of polling, the Golden Mean would involve understanding and appreciating polls for what they are: one more piece of information that needs to be checked for quality (especially accuracy and timeliness) just as you should be checking any claim being presented to you as true or meaningful.