There’s little question right now that former Vice President Joe Biden has the polling advantage over President Donald Trump. That hasn’t stopped many people from questioning the polls. In recent weeks, both gun-shy Democrats and defensive Republicans have resurfaced an oft-heard theory from the last presidential election: that Trump’s backers are disproportionately likely to conceal their support from pollsters only to turn around and vote for him on Election Day.
In a recent poll of Pennsylvanians, 57% of voters said they believed that there were at least some “secret” Trump supporters in their community.
It’s hard to prove a negative or to gauge in advance how accurate current polls will be at predicting an election that hasn’t yet happened. What we do know, however, is that researchers spent a lot of time in the aftermath of the 2016 presidential vote looking for evidence of a cadre of “shy Trump voters” big enough to swing an election. Most didn’t come away with much evidence to show for the idea.
The potential effect of such voters was a major focus of a postelection postmortem conducted by the American Association for Public Opinion Research, a professional organization for pollsters, as well as of a few additional experiments. Voters who made up their minds in the last week before the 2016 election, the AAPOR report concludes, broke heavily toward Trump, especially in the key states of Michigan, Wisconsin, Pennsylvania and Florida. In another survey, when voters were asked both before and after the election whom they supported, those who gave inconsistent answers disproportionately moved toward Trump. Theoretically, that could indicate that many of those supposed late deciders were instead Trump supporters concealing their stance until the last minute.
But a number of other tests conducted to assess that possibility, the report found, “yielded no evidence to support it.”
Were Trump Voters Embarrassed To Tell Interviewers Whom They Supported?
The most common version of the shy Trump voter theory rests on the idea of social desirability bias ― that a significant bloc of Trump voters was uncomfortable enough about sharing opinions with pollsters to either feign indecision or to lie about their preferences.
From looking at past campaigns, we know that this isn’t a systematic issue for all polling on Republican candidates ― in 2012, for instance, polling overestimated support for Mitt Romney. If support for Trump were uniquely stigmatized, you’d expect him to outperform his polls more than other Republican candidates did theirs. An analysis of 24 live-caller surveys in the AAPOR report, however, found Trump outperformed his estimates by an average 1.4 percentage points in 2016, compared with a virtually identical 1.3 points for GOP Senate candidates.
Under the shy Trump voter theory, you’d also expect to see a pattern in how Trump’s numbers were affected by the mode in which a poll was conducted. Specifically, his support should have been consistently lower in polls that used live interviewers (in which Trump voters would need to admit their support out loud to another person) and higher in those conducted online or using automated phone calls (in which Trump voters merely had to click a button). This, however, didn’t happen. Nor was there any evidence of a relationship in the polls between support for Trump and the rates of voters saying they were undecided or were refusing to answer.
Voters might also have been less inclined to divulge their support for Trump to female or non-white interviewers than to white male ones. There was no evidence for this, either. As the report notes, none of this is conclusive evidence against the shy Trump voter hypothesis, but it is “inconsistent with expectations” of the theory.
Pollsters from Politico/Morning Consult and Pew Research tested the effects of survey mode through more direct experiments. Both divided respondents into two groups, assigning half to be interviewed by another person and the rest to take the same poll online.
The Politico/Morning Consult survey, taken just before the 2016 election, found that Trump performed 1 percentage point better among the online respondents, a result that was “statistically insignificant and within the poll’s margin of error.” (It did, however, find that the difference was “especially pronounced among people with a college degree or people who make more than $50,000.”) In effect, those pollsters concluded, while a sliver of shy Trump voters may have existed in the general election, they weren’t numerous enough to swing an election.
In the Pew poll, conducted after Trump took office in 2017, people giving their answers to a live person were likelier to claim they were very satisfied with their family life and to deny facing financial troubles. But on four questions about Trump ― his approval rating, his favorability rating and whether he was trustworthy or a strong leader ― there were no significant differences.
It’s possible that many Trump voters were, contrary to expectations, equally unwilling to express their true opinions online. But another online survey, from political scientist Alexander Coppock, used a technique called a list experiment that allowed respondents to express their support for theoretically controversial statements without having to endorse them outright. That approach didn’t yield evidence of a shy Trump voter either.
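To make the mechanics of a list experiment concrete, here is a small simulation sketch. The design works like this: a control group sees a list of innocuous items, a treatment group sees the same list plus the sensitive item, and everyone reports only *how many* items they agree with ― so no individual ever reveals their answer on the sensitive item, yet the difference in average counts between the groups estimates its prevalence. Every number below (sample size, item counts, the “true” support rate) is invented for illustration and has nothing to do with Coppock’s actual study:

```python
import random

random.seed(42)

# Invented parameters for illustration only.
N = 10_000                 # total simulated respondents
TRUE_SUPPORT = 0.45        # assumed "hidden" rate of the sensitive opinion
BASELINE_ITEMS = 3         # innocuous items shown to everyone
BASELINE_AGREE_RATE = 0.5  # chance a respondent agrees with each innocuous item

def respond(treated: bool) -> int:
    """Return only a COUNT of agreed-with items; no single item is revealed."""
    count = sum(random.random() < BASELINE_AGREE_RATE
                for _ in range(BASELINE_ITEMS))
    if treated and random.random() < TRUE_SUPPORT:
        count += 1  # the treatment list also contains the sensitive item
    return count

control = [respond(False) for _ in range(N // 2)]
treatment = [respond(True) for _ in range(N // 2)]

# The difference in mean counts estimates prevalence of the sensitive opinion.
estimate = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"estimated support for the sensitive item: {estimate:.3f}")
```

Because respondents only ever report a total, the technique removes the social pressure of endorsing the sensitive item directly ― which is exactly why it’s useful as a check on whether people are shading their answers in ordinary polls.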
In contrast to all this, the research supporting the existence of a significant shy Trump vote is far more limited. One set of 2016 surveys from Cornell University did find that voters who reported themselves as undecided, when pushed to weigh in on which candidate was “more truthful,” disproportionately sided with Trump ― a finding that the authors argue suggests they were “hesitant to voice their support for Trump.”
Were Trump Voters Just Not Taking Polls At All?
There’s another reason polls might underestimate Trump voters: rather than being prone to answering polls untruthfully, these voters might simply have been less likely to answer polls at all. For an industry battling ever-decreasing response rates, this is, as the AAPOR report puts it, an “alarming possibility.”
Non-response bias, which occurs when some types of people are more likely than others to answer surveys, is a known hazard in polling. Ideally, that’s corrected through weighting the results. But the fact that college-educated Americans were disproportionately likely both to respond to polls and to dislike Trump ― and that polls weren’t always weighted to address that ― was, in fact, likely a serious factor in why those polls missed his 2016 victory in key states.
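A toy example shows how that kind of weighting works. Suppose college graduates answer a poll more often than their share of the population and also support a candidate less; the raw average is then biased until each respondent is weighted so the sample’s education mix matches a population benchmark. All the numbers here are invented for illustration ― they are not from the AAPOR report or any real poll:

```python
# Assumed population shares (in a real poll, from census-style benchmarks).
POP_SHARE = {"college": 0.35, "non_college": 0.65}

# A hypothetical raw sample in which college grads are overrepresented:
# 1,000 college respondents (40% support) vs. 500 non-college (55% support).
sample = (
    [("college", v) for v in [1] * 400 + [0] * 600]
    + [("non_college", v) for v in [1] * 275 + [0] * 225]
)

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in POP_SHARE}

# Weight each group so the weighted sample matches the population mix.
weights = {g: POP_SHARE[g] / sample_share[g] for g in POP_SHARE}

unweighted = sum(v for _, v in sample) / n
weighted = (sum(weights[g] * v for g, v in sample)
            / sum(weights[g] for g, _ in sample))

print(f"unweighted support: {unweighted:.3f}")  # biased by who responded
print(f"weighted support:   {weighted:.3f}")    # matches the population mix
```

The catch, as 2016 showed, is that weighting only fixes the problem if pollsters weight on the right variable ― surveys that didn’t adjust for education left the bias in place.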
Still, the AAPOR report concluded that “staunchly pro-Trump areas,” at least, were not underrepresented in polls. A subsequent Pew study on non-response, using actual voting data, concluded that Democratic-leaning adults weren’t any likelier to respond to a survey than were Republican-leaning ones. Research from CBS returned similar findings.
More broadly, although non-response to telephone polls has risen sharply over the past few decades, election polls haven’t seen a correspondingly steep decrease in accuracy.
Can Polls Be Trusted?
With the 2016 election now a few years in the rearview mirror, we have a better idea of what did happen ― as we wrote earlier, “a perfect storm of small but compounding problems” that led polls to miss the mark, significantly, in key states. Some of those problems, like weighting issues, are fixable. Others, like late-deciding voters breaking hard in one direction, are simply a part of the uncertainty inherent in even good surveys.
Since then, many pollsters have implemented a number of changes, both in their methodology and in the way they characterize results. During the 2018 midterms, polling had, by historical standards, a pretty strong year. And though the current race is far from over, Biden’s current lead is considerably wider than Hillary Clinton’s was in 2016 ― meaning that, if it holds, a more substantial polling error would be needed to overcome it.
Polls are neither a guaranteed forecast of what’s going to happen nor a pinpoint-precise measurement of what people think now. They are, however, a pretty good gauge for the basic state of an election at the time they’re taken. And though all surveys are inherently subject to some degree of uncertainty and error, there’s not a lot of evidence to suggest that Trump voters lying to pollsters is a major source of it.
(The author is a member of the American Association for Public Opinion Research but was not involved in the preparation of the report.)