Why Polls Can Be ‘All Over The Place’ On Trump Approval

When we nudge more uncertain respondents to offer an opinion, the results we get appear to reflect real attitudes and not random responses.
This post was published on the now-closed HuffPost Contributor platform. Contributors control their own work and posted freely to our site.

Google the word “polls” and the phrase “all over the place” and you get quite a few results. That’s not surprising, since public opinion polling often yields varying numbers, even when asking identical questions.

Scan the charts for the approval rating of President Donald Trump at HuffPost Pollster or FiveThirtyEight, and you will find a band of variation for individual surveys about 8 to 10 percentage points wide, with more than a few “outliers” falling outside that range.

Some of this variation is purely random – the product of estimates made from samples rather than interviewing the entire population – but not all.

Observers have asked why SurveyMonkey’s tracking polls show an approval percentage for Trump that is typically a few points higher than surveys conducted by telephone. The explanation is fairly straightforward: Both our approval and disapproval ratings tend to be higher than other polls, because the format of our question nudges more respondents to express an opinion rather than saying they are “unsure” or “don’t know.”

That pattern is evident when comparing our trend lines for Trump’s approval and disapproval since February to the multi-poll, aggregated trend lines produced by HuffPost Pollster and FiveThirtyEight. The lines are very close and reveal the same movement over time, but SurveyMonkey’s line is typically a few points higher for both approve and disapprove percentages.

Similarly, a comparison of the polls conducted during July from individual pollsters (averaging where pollsters had multiple releases) shows that SurveyMonkey’s results for the month are slightly higher than the overall average for both approval (42 percent, +2) and disapproval (56 percent, +1). The percentages are higher because we get a much smaller percentage who do not answer the question and thus neither approve nor disapprove.

But why does SurveyMonkey get a smaller “don’t know” for Trump approval? Answering that question requires a bit more explanation and a review of the other reasons why presidential job approval ratings can differ among pollsters.

No interviewer?

When the differences among polls were especially wide in the first few weeks of the Trump administration, some observers noted the gap was biggest between telephone polls conducted with live interviewers and self-administered surveys conducted either by phone or online. Some speculated about a “shy Trump” phenomenon – perhaps Trump’s supporters were less likely to want to share their support with a stranger on the telephone.

Findings from the Pew Research Center, published in late March, quashed much of the speculation. They conducted an experiment that held all other aspects of the survey constant – including sampling, weighting and questionnaire – but randomly varied whether interviews were conducted by telephone with a live interviewer or online and self-administered. They found no significant differences in Trump’s overall job and favorable ratings between the same survey administered over the phone or on the web.

Measuring ‘Unsure’

If not a “shy Trump” phenomenon, then why the different results from surveys conducted without live interviews? Much of the answer comes from how pollsters choose to handle uncertain respondents, and how those choices differ with and without interviewers.

Traditional telephone polls instruct their live interviewers to read only the desired answer categories (“approve” or “disapprove”) but also to record when respondents volunteer that they “don’t know” or are otherwise unable to answer.

Some pollsters and call centers are more strict on this issue than others, requiring interviewers to push uncertain respondents harder for an answer, usually by repeating the answer choices. Other pollsters allow their interviewers to accept “don’t know” as an answer more readily. Whenever live interviewers are involved, however, the underlying procedure is essentially the same: There is no prompt for don’t know or unsure. Responses other than “approve” or “disapprove” have to be volunteered.

In self-administered online and automated phone surveys, the pollster has to decide whether to offer respondents a no-opinion option. Offer “don’t know” as a choice, and a much higher percentage of respondents end up in that category than would be obtained by a typical telephone poll. Withhold it, and all respondents are forced to choose.

At SurveyMonkey, we opt for a middle ground. In the introduction to our surveys, we tell respondents they can “skip ahead to the next question” if they “do not have an opinion on a specific question.” So while we do not provide explicit “don’t know” or “unsure” choices on the presidential approval question, respondents have the option to skip the question. We track and report the percentage who skip the question as providing “no answer.”

We believe this practice most closely replicates the procedures used by live interviewers: allowing for some uncertainty, but not explicitly prompting for it. In practice, however, this gentle “nudge” makes for a smaller don’t-know percentage for Trump approval (1 to 3 percent) than what phone pollsters have been getting (typically 5 to 7 percent). Since our no-answer response is smaller, our approval and disapproval percentages are slightly higher.

Four categories or two?

Our approval question differs from those used on most telephone polls in another way: Where most surveys prompt for just two responses (approve or disapprove), our initial question prompts for four (strongly approve, somewhat approve, somewhat disapprove or strongly disapprove).

In 2006, automated telephone pollster Rasmussen Reports reported that internal tests showed their use of a four-category approval question yielded “results about 3–4 points higher than if we simply ask people if they approve or disapprove.” Rasmussen speculated that “some people who are a bit uncomfortable saying they ‘Approve’ are willing to say they ‘Somewhat Approve.’”

We ran a similar experiment in May, but found a negligible difference. We divided 5,567 interviews conducted between May 16–22 into random half samples. While the unweighted percentage coded as “no answer” was slightly higher for the two-category question (1.5 percent) than for the four-category question (1.1 percent), the difference was not statistically significant.
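That significance check can be reproduced with a standard pooled two-proportion z-test. The sketch below assumes an even split of the 5,567 interviews into half samples (the exact per-condition counts were not published); the “no answer” rates are the ones reported above.

```python
from math import sqrt

# Assumed half-sample sizes (even split of 5,567 interviews)
n1, n2 = 2784, 2783
# Unweighted "no answer" rates reported for the two conditions
p1, p2 = 0.015, 0.011  # two-category vs. four-category question

# Pooled two-proportion z-test
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(round(z, 2))  # → 1.32, below the 1.96 threshold for p < .05
```

A z-statistic that small is exactly what “not statistically significant” means here: a 0.4-point gap is well within what random half-sampling alone would produce.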

It is possible, of course, that the effect is real, just too small to be detected given our sample size, but either way, a difference of that size should not make a noticeable difference in our results.

Populations matter

As the Pew Research Center noted in February, Trump’s approval numbers are typically worst among polls that survey the general population of adults, better among those based on respondents who say they are registered to vote and better still among those deemed likely to vote. FiveThirtyEight’s Nate Silver and Natalie Jackson, formerly of the Huffington Post, examined larger numbers of Trump approval polls and found very similar patterns.

These findings often come from comparisons of different polls, which vary in many ways, not just the population polled. SurveyMonkey’s data allow for a comparison that holds other aspects of methodology constant.

In surveys conducted in July, for example, Trump had a 43 percent approval rating among all adults, but his rating was a point better (44 percent) among those who say they are registered to vote and two points better (45 percent) among those who say they voted in 2016 (these are statistically significant results given the massive sample sizes).

What if we prompted for ‘unsure?’

We conducted an experiment to delve deeper into the issue of prompting for uncertainty about Trump’s job performance. We split the interviews conducted between July 18 and July 27 into two random samples, with 6,738 respondents getting our standard four-category approval question and 6,659 respondents getting the same question with an added prompted category for “unsure.” All respondents also had the option to skip the job approval question, which we coded as “no answer.”

As expected, prompting for “unsure” made a big difference, with 9.5 percent selecting that category and another 1.3 percent skipping the question (for a total of 10.8 percent). By comparison, just 2.4 percent skipped our standard question.

The higher undecided percentage made for lower overall ratings of job approval (35 versus 40 percent) and disapproval (54 versus 58 percent). Not surprisingly, the differences were greatest in the “somewhat” categories, suggesting that those with less certain opinions were the most likely to opt for “unsure” when prompted. Note that the difference was slightly bigger for somewhat approve (4 points) than for somewhat disapprove (3 points).
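A quick back-of-the-envelope check, using only the toplines reported above, shows that the points gained by the prompted “unsure”/no-answer category roughly equal the points lost by approve and disapprove combined (the small mismatch is rounding in the published percentages):

```python
# Toplines from the July prompting experiment, as reported above.
standard = {"approve": 40.0, "disapprove": 58.0, "no_answer": 2.4}
prompted = {"approve": 35.0, "disapprove": 54.0, "no_answer": 10.8}

# Points each substantive category lost when "unsure" was prompted,
# and points the non-answer category gained.
lost = {k: standard[k] - prompted[k] for k in ("approve", "disapprove")}
gained = prompted["no_answer"] - standard["no_answer"]
print(lost, round(gained, 1))  # → {'approve': 5.0, 'disapprove': 4.0} 8.4
```

The redistribution is nearly one-for-one: prompting for “unsure” mostly moves people out of the substantive categories rather than changing the approve/disapprove balance among those who answer.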

A follow-up question – probing whether respondents have made up their minds about Trump – yields similar findings. As expected, those who opted for the prompted “unsure” option on the experimental sample were less certain about Trump. Far more said they could change their minds “if he enacts policies I agree with” (65 percent) than said their minds were “totally made up about Trump” (23 percent).

The results to this question also suggest that prompting for undecided has the most impact on respondents who might otherwise choose “somewhat approve.”

As expected, those with strong opinions about Trump were more likely to report that their minds are made up, with very consistent results across the two versions of the question. However, on the experimental version, the movement of some respondents to “unsure” only appears to make a difference in the certainty of those remaining in the somewhat approve category.

Not surprisingly, the respondents who opt for “unsure” when prompted are overwhelmingly less engaged in politics than other Americans and, presumably, less comfortable rating the job the President is doing. Specifically:

  • 65% say they follow politics not too closely or not at all
  • 51% report they did not vote in 2016
  • 63% have no college education
  • 52% are age 34 or younger
  • 57% report earning less than $30,000 annually

While these respondents may feel less confident making judgments about Trump’s performance as president, they do have opinions on him. On a subsequent question without a prompted uncertain category, 33 percent say their impression of Trump is favorable, and 57 percent say it is unfavorable – only 10 percent skip the question.

More important, their offered opinions of Trump on multiple questions are largely consistent. The Democrats among those “unsure” about Trump’s performance are overwhelmingly unfavorable about him personally, and the Republicans are mostly favorable. Those who give Trump a favorable rating are twice as likely as those who give an unfavorable rating to select positive traits as applying to the current President (such as “stands up for what he believes in” or “tough enough for the job”).

Put another way, when we nudge more uncertain respondents to offer an opinion, the results we get appear to reflect real attitudes and not random responses. Our approval (and disapproval) ratings may be a tad higher than other polls, but we believe the resulting data is more robust.

Methodology: Results for the approval question experiment are from a SurveyMonkey Tracking poll conducted online from July 18 through July 27, 2017 among a national sample of 13,397 adults ages 18 and up. Respondents for this survey were selected from the nearly 3 million people who take surveys on the SurveyMonkey platform each day. Data for this survey have been weighted for age, race, sex, education, and geography using the Census Bureau’s American Community Survey to reflect the demographic composition of the United States. The modeled error estimate for this survey is plus or minus 1.5 percentage points. This article is cross-posted to the SurveyMonkey blog.
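The demographic weighting described in the methodology note is typically done by raking (iterative proportional fitting): weights are adjusted one variable at a time until the weighted sample matches each population margin. The sketch below is illustrative only – the respondents, targets and variable list are invented, not SurveyMonkey’s actual procedure:

```python
# Illustrative raking (iterative proportional fitting). All data here
# are made up for the example; real surveys rake on age, race, sex,
# education and geography against Census targets.
respondents = [
    {"sex": "F", "age": "18-34"},
    {"sex": "F", "age": "35+"},
    {"sex": "M", "age": "35+"},
    {"sex": "M", "age": "35+"},
]
targets = {
    "sex": {"F": 0.52, "M": 0.48},
    "age": {"18-34": 0.30, "35+": 0.70},
}

weights = [1.0] * len(respondents)
for _ in range(100):  # iterate until the margins converge
    for var, margin in targets.items():
        # Current weighted total in each category of this variable.
        totals = {cat: 0.0 for cat in margin}
        for w, r in zip(weights, respondents):
            totals[r[var]] += w
        grand = sum(weights)
        # Scale each respondent so this margin matches its target.
        weights = [w * margin[r[var]] * grand / totals[r[var]]
                   for w, r in zip(weights, respondents)]

f_share = sum(w for w, r in zip(weights, respondents) if r["sex"] == "F")
print(round(f_share / sum(weights), 2))  # → 0.52
```

After convergence, every weighted margin matches its target simultaneously, which is why weighted toplines from different pollsters can agree on demographics yet still differ on question wording and no-opinion handling, as discussed above.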
