Why Can't Pre-Election Polls Just Get It Right?

As we move through primary season this summer, we've seen some shocking and unexpected outcomes. Why didn't we know that House Majority Leader Eric Cantor was going to lose? Why did Sen. Thad Cochran hang on in Mississippi when the polls generally showed an edge for challenger Chris McDaniel? Why can't the polls just get elections right?

With all the different pollsters and methods of pre-election polling, you'd think we could call the winners. Yet for a field that thrives on numbers and measures of certainty, the answers to why polling doesn't work -- when it doesn't -- get muddy and uncertain very quickly.

What follows are not the definitive answers to these questions -- because that would take books. It is a look at the challenges facing pollsters and those who want to understand and analyze polling data.

In an ideal world, public opinion is measured using probability-based samples guided by statistical principles of randomness. The theory is simple: Every person in the population you want to sample has the same chance of being interviewed, which means that if you are surveying American adults, every single person in the U.S. over age 18 has the same probability of being included in your sample.
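
For readers who think in code, here is a minimal sketch of that equal-probability principle, using a made-up list of IDs in place of a real sampling frame of phone numbers or addresses:

```python
import random

# Minimal sketch: every unit in the frame has the same chance of selection.
# The "frame" here is a made-up list of IDs standing in for a real list of
# phone numbers or addresses covering the whole population of interest.
frame = [f"adult_{i}" for i in range(10_000)]

SAMPLE_SIZE = 1_000
sample = random.sample(frame, SAMPLE_SIZE)  # equal probability, no replacement

# Each unit's chance of ending up in the sample is SAMPLE_SIZE / len(frame).
print(f"Inclusion probability: {SAMPLE_SIZE / len(frame):.2%}")
```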

In practice, it's much more complicated, since you need a way to identify everyone in your population plus a way to contact them. Every method of identifying and contacting people used by professional pollsters has its challenges. But if it's done correctly, the survey or poll will produce an accurate representation of the population.

Pre-election polls, however, are not just trying to generate an accurate representation of a particular population. First, they need to define a very changeable group: those who will actually vote in an election. And then they need to get an accurate measure of only those people who will vote.

The pre-election pollster also faces the reality that the poll's numbers will inevitably be compared to the actual vote results, even though we all know opinions and intentions to vote can change. Unless the poll is taken right at election time, it's just a snapshot of the current situation. That said, the snapshot argument is used as often to justify bad polls as it is to note real movement in opinion over time, and every pollster wants to get as close as possible to the actual result in order to claim accuracy and avoid the snapshot debate.

Given these crucial distinctions between pre-election polls and regular opinion surveys, sometimes different techniques are necessary. The methods we use for more traditional survey research aren't always the same as those we need to get the best snapshot of likely voters.

There are nearly as many ways to define a "likely voter" as there are pollsters polling any given election. Sometimes "likely voter" status is based on whether a person turned out to vote before. Sometimes that vote history is self-reported; sometimes it comes from official election records. Sometimes a "likely voter" is simply someone who says he or she plans to vote. Often it's a complex combination of demographic adjustments ("weighting") and responses to questions about past voting and future voting intent.
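
To make that concrete, here is a toy sketch of how one hypothetical screen might combine those pieces. The questions, point values and cutoff below are invented for illustration, not any actual pollster's model:

```python
# Toy likely-voter screen: combine stated intent with vote history into a
# score and apply a cutoff. All point values and the cutoff are invented.

def likely_voter_score(respondent: dict) -> int:
    score = 0
    if respondent.get("says_will_vote"):          # stated intention to vote
        score += 2
    if respondent.get("voted_last_midterm"):      # self-reported or from voter file
        score += 2
    if respondent.get("voted_last_presidential"):
        score += 1
    if respondent.get("knows_polling_place"):     # a screening question some polls use
        score += 1
    return score

def is_likely_voter(respondent: dict, cutoff: int = 4) -> bool:
    return likely_voter_score(respondent) >= cutoff

respondent = {"says_will_vote": True, "voted_last_midterm": True,
              "voted_last_presidential": False, "knows_polling_place": False}
print(is_likely_voter(respondent))  # True under this invented cutoff
```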

After this much data processing, the representative sample may not be so purely representative anymore. Yet sometimes the less-representative sample produces the more accurate prediction. So maybe achieving a truly representative sample doesn't matter if the real goal is getting the election outcome right. The latter seems to be the only metric by which pre-election polls are evaluated in the mainstream media. And, as we all know, positive media attention leads to more work, money and fame for a pollster.

However, we can only assess pre-election polls in this way after the election occurs. It's easy to November-morning quarterback a pre-election poll, less so to analyze the poll at the time of its release. Those of us who follow polls and want to help the public understand them need some evaluation criteria to use in the abyss before the election returns come in.

We can only really assess quality by the things pollsters disclose about their processes -- including methodology, weighting and sampling details -- and their results -- including demographic summaries and individual question responses by demographic variables (the "cross tabs"). Unfortunately, many pollsters do not provide adequate public information for others to understand how their methodology is working.

Disclosure varies widely even among high-quality pollsters. Some provide long paragraphs detailing the entire process of data collection and processing. Others state in one sentence how the data were obtained, if the process is mentioned at all.

Even when the descriptions are fairly detailed, it's not always enough. For example, poll data are often "weighted to U.S. Census parameters." But there are many ways to weight to Census parameters -- different characteristics to use in the weighting, different software packages, and different ways of calculating the weights. Then the weights might be trimmed, meaning no single weight is allowed to be larger or smaller than set bounds. Moreover, the complete Census, which provides the most accurate demographic information on Americans we have, is conducted only every 10 years.
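
Here is a toy sketch of one common approach to this step, raking (iterative proportional fitting) on two demographic margins followed by weight trimming. The sample, the target shares and the trim bounds are all invented for illustration:

```python
import pandas as pd

# Toy example: rake survey weights toward two hypothetical population margins
# (sex and age), then trim the weights so none gets too large or too small.
sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age": ["18-44", "45+", "18-44", "45+", "45+", "18-44", "18-44", "45+"],
})
sample["weight"] = 1.0

# Hypothetical targets of the kind one might draw from Census figures.
targets = {
    "sex": {"F": 0.51, "M": 0.49},
    "age": {"18-44": 0.47, "45+": 0.53},
}

for _ in range(20):  # a handful of raking passes is usually enough to converge
    for var, shares in targets.items():
        current = sample.groupby(var)["weight"].sum() / sample["weight"].sum()
        for category, target_share in shares.items():
            sample.loc[sample[var] == category, "weight"] *= target_share / current[category]

# Trim: cap how small or large any single weight can be (bounds invented).
sample["weight"] = sample["weight"].clip(lower=0.3, upper=3.0)

print(sample.groupby("sex")["weight"].sum() / sample["weight"].sum())
```

Two pollsters could run this same kind of procedure with different margins, different trim bounds or different software and land on noticeably different numbers from the same raw interviews.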

And that's just one small example. The same in-depth questioning could be applied to how phone numbers are selected, how the vendor supplying the phone numbers (or any other list) compiles it, and every other aspect of the process. In short, it is extremely rare that we know enough about a poll to fully assess its quality. In pre-election polling, we're short on enforceable standards.

There have been some efforts to improve disclosure. The American Association for Public Opinion Research and the National Council on Public Polls have long had standards for methodological disclosure. AAPOR is even launching a Transparency Initiative to certify organizations that comply with its standards.

Until all pollsters disclose more about their methods -- until we know what they're doing -- we can't know why they didn't get it right.
