WASHINGTON -- While the election's biggest winner was President Barack Obama, the other victory on Tuesday night went to the careful application of reason, data and, yes, to the science of modern survey research.
The losers were the amateur poll mavens who sought to "unskew" the polls and the pundits who saw what they wanted to see.
"Nobody knows anything," Wall Street Journal columnist Peggy Noonan wrote on Monday, casting aside the notion that "the weighting of the polls and the assumptions as to who will vote" could possibly provide accurate guidance on the coming election.
Yet Noonan was certain that Romney would win. How? Intuition, mostly. She was sure that "independents are breaking for Romney." Also, "there’s the thing about the yard signs." On a recent trip to Florida, she had seen more Romney than Obama signs.
Others had grand rationalizations for ignoring the polls. "Romney will win by a very large margin, a landslide if you will," former pollster and Clinton adviser turned conservative pundit Dick Morris told Fox News on Monday. Why? Morris was somehow certain that most polls had sampled too many Democrats and that the undecided vote, "which always goes against the incumbent," would break to Romney. Needless to say, it didn't.
On the other side were the quants and the modelers, like our own Simon Jackman and others like Drew Linzer, Sam Wang and, yes, Nate Silver, who set aside hunch and folklore, gathered the hard data from a wide variety of public opinion polls and combined their findings into remarkably consistent predictions of the election outcome.
But if the models were successful and largely consistent, it was because they all worked with roughly the same underlying polling data. "It was the pollsters that called it right thus far," tweeted Daily Kos founder Markos Moulitsas.
We believe the success of the poll tracking model that Jackman designed for HuffPost Pollster -- which predicted the winner of all 50 states plus the District of Columbia -- owes in part to relying on the polls alone, with relatively little additional data or processing (save for the use of past voting data to help combine national and state-level polls, and statistical corrections to reduce the distortions produced by consistent pollster "house effects").
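The house-effect idea can be illustrated with a toy calculation. This is only a minimal sketch of the general technique, not Jackman's actual model; the pollster names and numbers below are hypothetical, and a real tracking model estimates house effects jointly with a latent trend over time rather than against a simple grand mean.

```python
# Toy illustration of house-effect correction in poll averaging.
# All pollster names and percentages are made up for illustration.

polls = [
    ("Pollster A", 50.0), ("Pollster A", 51.0),
    ("Pollster B", 47.0), ("Pollster B", 48.0),
    ("Pollster C", 49.5), ("Pollster C", 50.5),
]

# Grand mean across every poll in the collection.
overall = sum(v for _, v in polls) / len(polls)

# Estimate each firm's "house effect" as its average deviation
# from the grand mean: a persistent lean toward one candidate.
house = {}
for name in {n for n, _ in polls}:
    vals = [v for n, v in polls if n == name]
    house[name] = sum(vals) / len(vals) - overall

# Subtract each firm's house effect before averaging, so no single
# prolific pollster can drag the estimate toward its own lean.
adjusted = [v - house[n] for n, v in polls]
estimate = sum(adjusted) / len(adjusted)
```

In this static example the corrected average simply recenters every firm on the grand mean; the correction matters in practice because real models fit house effects alongside a time trend, so a pollster that releases many leaning surveys in the final week no longer skews the trajectory.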
The most important lesson, however, comes from the approaches used by the pollsters, whose pre-election surveys were collectively accurate. Their approach is the opposite of the impressionistic "gut hunches" that dominate so much of conventional punditry. While their methods often differed, most worked to design a standardized approach to gathering representative samples of voters and taking accurate measurements of their preferences.
The challenges were greater than ever. Record low response rates, record high early voting and a rapidly rising number of cell-phone-only households have all worked to disrupt conventional survey methodologies. Yet for the third presidential election in a row, the pollsters produced results that forecast the correct winner.
Not all was rosy. The poll estimates in many of the battleground states actually understated Obama's ultimate margins of victory, a pattern likely resulting from a handful of polls that deviated widely from industry averages. Likely voter models used by some pollsters appeared to produce rapid and often implausible changes in the composition of the likely electorate.
But it was their common approach that won the day. The pollsters relied not on folklore or anecdotal impression, but on tens of thousands of interviews that systematically measured, rather than assumed, how independents were leaning or how undecided voters were breaking.
Yes, the art of politics is about more than winning, but if we are going to cover the horse race, we ought to get it right.