What Went Wrong With Last Year's Election Surveys? Pollsters Have Some Answers.

Here's what they say the public should know before the next election.

2016 wasn’t a banner year for the polling industry.

National polls, which largely predicted a modest win for Hillary Clinton in the popular vote, weren’t too far off. But state polling missed the mark significantly, and often uniformly, leaving much of the public feeling utterly blindsided by Donald Trump’s victory.

“The day after the election, there was a palpable mix of surprise and outrage directed toward the polling community, as many felt that the industry had seriously misled the country about who would win,” the American Association for Public Opinion Research notes in a report issued Thursday.

At fault was a perfect storm of small but compounding problems, concludes the report, which was written by a committee of prominent polling professionals.

A late shift in key swing states and a failure to correct for the underrepresentation of less-educated voters played out against the backdrop of a close race that saw different winners in the Electoral College and the popular vote. The resulting errors were then compounded by pollsters, forecasters and aggregators who were either overconfident in their results or unable to convey a proper level of uncertainty to the public.

“I think the bottom line of what happened was that polling was just wrong enough nationally and at the state level that everyone got the wrong impression about what was going to happen,” SurveyMonkey’s Mark Blumenthal, a member of the committee and former polling editor at HuffPost, said during a panel discussion Thursday.

Several main issues contributed to underestimating Trump’s support, according to the report:

Pollsters failed to capture a significant late shift toward Trump in key states. The percentage of voters who were undecided late in the campaign was unusually high last year, introducing additional uncertainty into the polls. And in several states, including the Midwestern bloc that supposedly formed Clinton’s “blue wall” of support, her fortunes took a last-minute turn for the worse.

In Florida, Michigan, Pennsylvania, and Wisconsin, between 11 percent and 15 percent of voters said they made up their minds in the final week of the campaign. Late deciders in those states, several of which saw a last-minute surge in attention from the campaigns, broke heavily for Trump, a trend that didn’t extend as strongly to the nation as a whole.

In Wisconsin, for instance, Trump led by 29 points among late deciders, according to exit polls, while he lagged behind by 2 points among those who made up their minds earlier. In Florida, he led by 17 points among voters who chose their candidate in the last week, but trailed Clinton by 1 point among earlier deciders.

(Chart: AAPOR)

College graduates were overrepresented in many polls. Americans’ level of education, unlike other demographic factors such as race or age, hasn’t always been a major electoral fault line. Last year, however, better-educated voters were more likely to support Clinton. That presented a problem for some pollsters, because highly educated voters are also more likely to answer polls.

(Chart: AAPOR)

“Survey researchers have known for years that adults in the U.S. with higher levels of formal education are more likely to participate in surveys than adults with lower levels of education,” Courtney Kennedy, director of survey research at the Pew Research Center, explained. While many pollsters weight their surveys to correctly reflect Americans’ educational backgrounds, some, especially at the state level, did not. “In 2016, that mattered,” Kennedy said.
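The adjustment Kennedy describes is essentially post-stratification weighting: respondents from groups that are overrepresented in the raw sample are weighted down, and underrepresented groups are weighted up, so the sample’s education mix matches the electorate’s. Here is a minimal sketch of the idea; the sample and population shares are made up for illustration and are not figures from the report.

```python
# Illustrative post-stratification by education level.
# The sample and the population shares below are hypothetical,
# not numbers from the AAPOR report or any real poll.

sample = [
    # (candidate_choice, education_level)
    ("Clinton", "college"), ("Clinton", "college"), ("Trump", "college"),
    ("Trump", "no_college"), ("Clinton", "no_college"), ("Trump", "no_college"),
]

# Assumed share of the electorate in each education group.
population_share = {"college": 0.35, "no_college": 0.65}

# Share of the unweighted sample in each group.
n = len(sample)
sample_share = {
    group: sum(1 for _, edu in sample if edu == group) / n
    for group in population_share
}

# Each respondent's weight: population share divided by sample share.
weights = [population_share[edu] / sample_share[edu] for _, edu in sample]

def support(candidate, weighted=True):
    """Weighted or unweighted share of respondents backing a candidate."""
    total = sum(weights) if weighted else n
    tally = sum(
        (w if weighted else 1)
        for (choice, _), w in zip(sample, weights)
        if choice == candidate
    )
    return tally / total

for cand in ("Clinton", "Trump"):
    print(cand, round(support(cand, weighted=False), 2), "->",
          round(support(cand, weighted=True), 2))
```

With these toy numbers, weighting moves the topline a few points toward Trump, the same direction as the miss the report describes in state polls that skipped the step.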

Some Trump voters “did not reveal themselves as Trump voters” until after the election. This could, in theory, support the “shy Trump voter” hypothesis, which holds that some Trump supporters were reluctant to express their views to pollsters. But, as the authors note, many tests aiming to capture whether Trump voters misreported their support by lying to pollsters “yielded no evidence.” Instead, many may simply not have made up their minds for Trump until after the final polls were taken.

The impact of other potential factors is less clear, according to the report. The report sheds no light, for example, on the effect of FBI Director James Comey’s letter to Congress. “The evidence was really mixed on that,” Kennedy said. “We couldn’t come to a firm conclusion.”

Similarly, while likely voter models ― pollsters’ efforts to determine which of the people they talk to will actually show up on Election Day ― presumably played a role, much of the data that would help measure the exact effect isn’t yet available. “Turnout probably was one of the two or three things that introduced error into the process. It’s just very difficult for us to quantify it,” Blumenthal said.
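Likely voter models differ from firm to firm, but many follow the same basic logic: score each respondent on self-reported intention and past voting behavior, then count only the highest-scoring share that matches an assumed turnout rate. The sketch below is a toy version of that cutoff approach, with hypothetical respondents and an assumed 60 percent turnout; it illustrates the general technique, not any particular pollster’s model.

```python
# Toy likely-voter screen: score respondents, keep the top share
# matching an assumed turnout rate. Purely illustrative.

respondents = [
    # (candidate_choice, self_reported_likelihood_0_to_10, voted_last_election)
    ("Clinton", 10, True), ("Trump", 9, True), ("Clinton", 5, False),
    ("Trump", 10, False), ("Clinton", 8, True), ("Trump", 4, False),
]

expected_turnout = 0.6  # assumed share of respondents who will actually vote

def score(likelihood, voted_before):
    # Simple additive score: stated intent plus a bonus for past voting.
    return likelihood + (5 if voted_before else 0)

ranked = sorted(respondents, key=lambda r: score(r[1], r[2]), reverse=True)
cutoff = int(len(ranked) * expected_turnout)
likely_voters = ranked[:cutoff]

for cand in ("Clinton", "Trump"):
    share = sum(1 for choice, *_ in likely_voters if choice == cand) / len(likely_voters)
    print(cand, f"{share:.0%}")
```

The point of the toy example is that both the scoring rule and the assumed turnout rate are judgment calls, which is why the committee could say turnout modeling probably introduced error without being able to quantify it.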

The study found no evidence of a consistent bias toward one party in recent polling. While Trump was underestimated last year, Democrats Barack Obama and Al Gore also saw their standing underestimated in election polling. “The trend lines for both national polls and state-level polls show that ― for any given election ― whether the polls tend to miss in the Republican direction or the Democratic direction is tantamount to a coin flip,” the report’s authors write.

The report found no catastrophic failures in last year’s campaign polls, instead concluding that “[s]ome polls, indeed, had large, problematic errors, but many polls did not.” State polls had a “historically bad year,” the report says, but national polls were “generally correct and accurate by historical standards.”

Nor does the report offer silver-bullet solutions for preventing future surprises. Pollsters who didn’t already account for educational levels can take more care to do so. But there’s no reason why future elections won’t also see last-minute shifts in swing states, or an Electoral College result that doesn’t reflect the popular vote. As one chart shows, the average error in last year’s national polls was far from unusual.

(Chart: AAPOR)

The report offers one big suggestion: that “well-resourced survey organizations might have enough common interest in financing some high quality state-level polls so as to reduce the likelihood of another black eye for the profession.” But with survey costs rising, and many media sponsors paring back, it’s unclear which organizations would take the lead in funding such an effort.

And the solution isn’t as simple as finding the “right” polls to rely on, either. The type of polling traditionally thought of as the gold standard ― telephone surveys using live interviewers to reach both landlines and cellphones, and a method known as “random digit dialing” to select respondents ― didn’t outperform other types of surveys last year.

“This report would have been much easier if polls that on paper looked more rigorous than other polls performed better,” Kennedy said. “It would be very easy to say, ‘Well, you should really hang your hat on these polls that have these characteristics.’ We didn’t really find that .... For 2016, I don’t know that you can look at the data and say, ‘Here are some easy tips in terms of which polls to trust or not trust.’”

More immediately, panelists on Thursday suggested that pollsters, aggregators and journalists should spend more time emphasizing that surveys represent only a snapshot of public opinion at the time that they’re taken, and that the uncertainty surrounding polling goes far beyond the stated margin of error.
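One way to see why the stated margin of error understates the real uncertainty: the figure usually reported is the textbook sampling-error margin for a single proportion, which depends only on the sample size and assumes a simple random sample. It says nothing about nonresponse bias, weighting choices, late shifts or turnout modeling. A back-of-the-envelope sketch, assuming a 1,000-person sample and 95 percent confidence:

```python
import math

# Textbook margin of error for a single proportion at 95% confidence,
# assuming a simple random sample (an assumption real polls rarely meet).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
moe = margin_of_error(n)
print(f"Stated MOE for n={n}: +/- {moe:.1%}")        # about +/- 3.1 points

# The margin on the *gap* between two candidates is roughly twice that,
# and none of it covers nonresponse bias, weighting error, late shifts,
# or likely-voter modeling -- the factors the report highlights.
print(f"Approximate MOE on the gap: +/- {2 * moe:.1%}")
```

In other words, even a poll that reports a plus-or-minus 3 point margin can miss a 2 point race for reasons the margin of error never claimed to capture.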

“Could this happen again?” Blumenthal asked. “Hopefully, if it happens again, we will all do a better job warning everyone that it’s possible.”

Disclaimer: The author is a member of the American Association for Public Opinion Research, but was not involved in the preparation of the report.
