Once More, With Feeling: What Polls Miss About Trump (And Everything Else)

When you hit the high note just right

Donald Trump entered office with the lowest approval ratings ever recorded for a president-elect, and they have only fallen further since. It is, however, a standard rule of media that for every political narrative there must be an opposite (though not necessarily equal) counter-narrative. And so it is perhaps natural that a POLITICO piece should heave into view flying the banner “Donald Trump Might Be More Popular Than You Think” – and, in so doing, reveal a fundamental flaw in the way we think about the electorate.

The essence of Steven Shepard’s piece is that Trump’s popularity, and that of some of his early Executive Orders, is lowest in polls conducted via live telephone interview and marginally higher in automated telephone polls or surveys conducted online – a pattern that suggests a shy-Trump-supporter phenomenon, in which respondents are reluctant to voice support to a live interviewer.

It’s entirely possible that Trump and his early decisions are slightly more popular than the more dire of the publicly available polls suggest. It is also entirely reasonable to examine polls with a sharper eye than we might have since 2012 or so. Since then, the general polling consensus has largely missed the mark on many key races in the 2014 US midterms, the 2015 UK General Election, the Brexit referendum, and, of course, the rise and eventual victory of Trump.

Critiques of pollsters and polling tend to focus on the numbers – why they were wrong, and how they – or the system – can be fixed. This is fair enough, and good pollsters do indeed interrogate their methodology after each election. There is, however, a structural problem with polling, one that has to do not with the accuracy of what it measures, but with what it doesn’t measure at all.

Polling is an inherently cognitive act: it requires the respondent to evaluate options and articulate an opinion, and the more complex the question, the more cognitive the response. This is not a bad way to find out what someone thinks. The problem is that what a person actually does is determined only in part by what he or she thinks, and in substantial part by emotions and instincts – in essence, by what that person feels.

This set of reactions – the feel – can be partly captured in polls via questions that ask respondents to plot the intensity of their feeling on a spectrum (“on a scale of 1 to 5, with five being the most, how much ennui does the present state of politics fill you with?”), but the act of assigning a numerical value to something as personal as a feeling is deeply subjective, and the resulting data are of limited value.
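To see the limitation concretely, consider two hypothetical groups of respondents who return an identical average score on that 1-to-5 intensity question, even though one group is uniformly lukewarm and the other sharply divided. A minimal sketch, with invented responses:

```python
# Two invented sets of 1-5 intensity responses. Both produce the same
# mean score, but the feelings behind the numbers are very different.
lukewarm = [3, 3, 3, 3, 3]    # mild, uniform ennui
polarized = [1, 1, 3, 5, 5]   # strong feeling, pulling in both directions

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):  # standard deviation: where the "feel" hides
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(mean(lukewarm), mean(polarized))                          # 3.0 3.0
print(round(spread(lukewarm), 2), round(spread(polarized), 2))  # 0.0 1.79
```

The topline – the mean – reports the think; the feel lives in the shape of the distribution, which rarely makes it into the coverage.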

The primacy of think over feel is not limited to polling, of course; it appears throughout politics, and it is a limitation to which progressives can be particularly susceptible. In my own experience, left-of-center candidates and activists are considerably more likely than their conservative counterparts to bring facts to a feelings-fight. The think/feel imbalance is particularly dysfunctional in polling for two reasons.

The first reason is that, despite the providers’ usual health-and-safety warnings that a poll is a snapshot rather than a prediction, polls are nonetheless used as important variables in models that predict the behavior of the electorate. When strong currents of feeling in a segment or segments of the electorate go undetected, or at least underrepresented, the models built on that poll are skewed, producing flawed estimates of the likely outcome that can throw off an entire strategy.
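As a toy illustration – the numbers below are invented, not drawn from any real poll – imagine two candidates tied at 50% in vote intention, but whose supporters differ sharply in intensity of feeling, and therefore in their likelihood of actually turning out:

```python
import random

random.seed(42)

# Hypothetical electorate: the poll shows a 50/50 tie in vote intention.
# What it doesn't show: B's supporters feel far more strongly, and in
# this toy model intensity of feeling drives the probability of voting.
N = 100_000
TURNOUT = {"A": 0.40,   # tepid supporters
           "B": 0.70}   # intense supporters

ballots = []
for _ in range(N):
    choice = random.choice(["A", "B"])     # the poll's 50/50 snapshot
    if random.random() < TURNOUT[choice]:  # the feel the poll missed
        ballots.append(choice)

share_b = ballots.count("B") / len(ballots)
print("poll-based prediction for B: 50.0%")
print(f"simulated result for B:     {share_b:.1%}")   # roughly 64%
```

A model fed only the 50/50 topline calls the race a coin flip; the undetected current of feeling turns it into a comfortable win for B.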

Even when polls are taken with appropriate context and are reasonably accurate, however, they carry a second limitation as a result of the think/feel imbalance – they can tell you what voters think, but they often provide little guidance on what to do about it. A poll on the issues most important to voters might reveal that “jobs” is the electorate’s top concern; what it does not tell you is what “jobs” means to one segment of the electorate, what it means to another, or how to talk about “jobs” to either. Work is the stuff of security to some voters, of constriction and alienation to others, and of opportunity and community to yet more. All of those segments might be open to a candidate or campaign willing to honestly connect with them, and simply talking more about “jobs” in the abstract, or releasing a catch-all “jobs plan”, will build that connection with none of them. In this way, polling is like the stat sheet of a basketball game – it’s helpful to know you’re losing on rebounds in the third quarter, but you can’t get more rebounds by talking about them more.

Traditionally, focus groups have been the preferred method for adding some feel to a poll’s think, and they can certainly yield useful results. They are, however, quite expensive, and by their very design are subject to biases – groupthink, dominant voices, the urge to please the moderator – that limit the value of the data they gather.

A few solutions to the think/feel problem are emerging. From the world of Big Data, firms like Cambridge Analytica are gathering and deploying psychometrics – essentially, a wealth of consumer data points used to build psychological models of individual voters. And consumer-research firms are employing technologies and behavioral-science methodologies that may make deep qualitative research accessible within the constrained timetables and budgets of political campaigns.
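For the flavor of the psychometric approach – the signals and weights below are entirely invented, and emphatically not Cambridge Analytica’s actual model – a trait score is, at bottom, a weighted combination of behavioral data points mapped onto a psychological dimension:

```python
from dataclasses import dataclass

# Hypothetical consumer signals combined into a toy "openness" score
# (one of the Big Five personality traits) for message targeting.
@dataclass
class Voter:
    buys_travel_books: bool
    follows_art_pages: bool
    brand_loyal: bool  # strong brand loyalty stands in, here, for low openness

def openness(v: Voter) -> float:
    """Toy 0-1 trait score built from made-up signal weights."""
    score = 0.5
    score += 0.2 if v.buys_travel_books else 0.0
    score += 0.2 if v.follows_art_pages else 0.0
    score -= 0.2 if v.brand_loyal else 0.0
    return max(0.0, min(1.0, round(score, 2)))

print(openness(Voter(True, True, False)))    # 0.9 -- pitch novelty and change
print(openness(Voter(False, False, True)))   # 0.3 -- pitch stability and tradition
```

The real systems presumably use thousands of signals and fitted rather than hand-set weights, but the principle – scoring the feel, not just the think – is the same.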

Until a reliable mechanism for integrating feel with think emerges, however, expect more ambiguity in polling – and more surprises.
