The UK elections give polling a black eye. Is it time to start ignoring polls? And the Huffington Post joins AAPOR's Transparency Initiative. This is HuffPollster for Wednesday, May 13, 2015.
HUFFPOST JOINS AAPOR'S TRANSPARENCY INITIATIVE - On Wednesday, the American Association for Public Opinion Research (AAPOR) welcomed the Huffington Post as the newest member of its Transparency Initiative.
Like the other 43 charter members of the initiative so far, the Huffington Post has committed to routinely disclose the methodological details associated with the opinion surveys we produce and publish in partnership with the research firm YouGov.
Formally launched in 2014, AAPOR's Transparency Initiative is the organization's response to the many changes roiling the survey profession. Five years ago, when AAPOR first announced its plans for the Initiative, then AAPOR president Peter V. Miller spoke of a new "sea of undocumented data" which "is probably, in many cases, untrustworthy." In the new environment, he said, "people can simply fabricate data and put it out there and claim that it's real."
Thus, the goals of the AAPOR Initiative include enabling members of the news media, and in turn the general public, to better differentiate between transparent and non-transparent research, and providing the information necessary for the independent evaluation of survey quality.
So far the organizations that have joined the Initiative are mostly academic and non-profit survey centers, including the Pew Research Center, Gallup, Langer Research Associates, Quinnipiac University and Marist College.
So far, however, the Huffington Post is only the second commercial news media organization -- after the Washington Post -- to join the list of charter members. HuffPollster hopes many more of our colleagues across the media spectrum will soon follow.
Editor's Note: The news comes on the eve of AAPOR's 70th Annual Conference, which will be held later this week. We are publishing this week's newsletter on Wednesday to allow HuffPollster to attend.
A BAD DAY FOR UK POLLING - The polling misfires have grown a little too frequent. Britain's meltdown last week came on the heels of understatements of support for the Likud Party in Israel in March, for many Republican U.S. Senate candidates in the November midterm elections and for the "no" vote in Scotland's independence referendum in September.
In their final national surveys, the UK pollsters were nearly unanimous in forecasting a near tie in vote preference between Prime Minister David Cameron's Conservatives and Ed Miliband's Labour Party, but the final results gave the Tories a victory of more than 6 percentage points. The Conservatives won 331 of the 650 seats in the UK Parliament, 42 seats more than even the most optimistic pre-election polling-based forecast.
On election night, FiveThirtyEight's Nate Silver added some important context: this year's six-point-plus miss in the UK was "far from unprecedented," with several "almost-identical cases in the past." He cited an analysis by his colleague Harry Enten that found errors of nearly 6 percentage points on the margin separating the top two parties in 2001, 1997 and 1974, and an even larger 9-point error in 1992. Given that precedent, Silver conceded, election forecasters "almost certainly ought to have accounted for a greater possibility of an outcome like the one we saw."
Still, as longtime UK poll watcher Anthony Wells put it in assessing the polls' performance, "there is something genuinely wrong here."
On Friday, the British Polling Council launched an independent inquiry similar to the investigation it conducted after the 1992 polling meltdown. "The fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation," the Council said in a statement.
It is too soon to come to any firm conclusions about the UK polling error of 2015, but pollsters and poll watchers were quick to offer theories. Here is a sampling:
'Shy Tories' - The repeated past understatement of support for the Conservative Party has convinced British pollsters of the existence of a phenomenon they dubbed the 'Shy Tory' effect: some Conservative voters are reluctant to divulge their true preference to a stranger on the telephone. In theory, the problem should be more pronounced in live-interviewer polls than in self-administered surveys conducted online. YouGov's Peter Kellner, who considers the Shy Tory effect "the likeliest explanation" for the 2015 error, admits that it also creates "difficulties for online companies" like his own, since online polls missed the Conservative margin in the final round as badly as those using live interviewers did.
Biased samples - A somewhat different problem, as Kellner put it, is whether the polls have "screwed up their sampling methods," as he believes they did in 1992. Former ICM polling head Nick Sparrow was similarly blunt: "Somewhere near the top of the list" for those investigating this year's meltdown "must come some questioning of sampling methods." The potential for these problems, he wrote, comes from the many factors that make it "more difficult to achieve a sample that is really representative of all voters," particularly the long-term decline in response rates.
Online surveys - More specifically, some questioned the UK pollsters' heavy reliance on internet-based surveys drawn from panels of respondents who had volunteered to take part; two-thirds of the final UK polls were conducted online. U.S. pollster Mark Mellman concluded that such "technical explanations...don't hold much water," because both online and telephone modes showed a near tie on their final surveys, on average. Yet the Conservatives had performed better in telephone polls earlier in the campaign, and the differences persisted until as late as a week before the election.
Herding - FiveThirtyEight's Harry Enten noticed that national polls "seemed to converge rapidly in the final days of the campaign." Virtually all of the final UK polls showed a one-point or even margin between Labour and the Tories (the one notable exception was an online poll conducted by SurveyMonkey, which correctly forecast the size of the Tory win). Enten calculated a standard deviation to measure the variation across the final polls in their estimates of the margin between the top two parties. His findings show the UK poll results converging to a greater degree this year than in any election since 1979. Perhaps confirming the trend, one pollster claimed they "chickened out" and failed to publish results giving the Tories a wider lead because "the results seemed so 'out of line' with all the polling conducted by ourselves and our peers."
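Enten's herding measure is essentially the spread of the pollsters' final published margins. A minimal sketch of the idea, using hypothetical margins rather than the actual 2015 figures, might look like this:

```python
import statistics

# Hypothetical final-poll margins (Conservative minus Labour, in points).
# These illustrative values are NOT the actual 2015 numbers.
final_margins = [0, 1, -1, 0, 1, 0, -1, 1, 0]

# A low standard deviation across pollsters' final margins -- lower than
# ordinary sampling error alone would produce -- is a telltale sign of herding.
spread = statistics.stdev(final_margins)
print(round(spread, 2))  # prints 0.78
```

For comparison, independent polls of a genuinely tied race would be expected to scatter by a few points on the margin because of sampling error alone, so a spread this tight across many pollsters is suspicious.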
The vote preference question - The unique challenges of measuring voter preferences in the UK's parliamentary election add another potential source of error. The results are not based on national vote totals for each party, but rather on winner-take-all elections held in 650 separate Parliamentary constituencies. "It's tricky," Democratic pollster Nick Gourevitch explained to Politico, "because it’s like you’re trying to project every House race in the country based on the generic ballot."
Strategic voting - Washington Post pollster Scott Clement raised the related issue of strategic voting, "where voters that may support one party could end up changing their support if they think that voting for their second-favorite party will give them better representation if they win." The problem, Clement noted, also occurs "in American primaries, where supporters of a long-shot candidate switch to one who appears more viable and likely to win, even if that's not their favorite candidate." Separately, Daily Kos contributor Daniel Donner also noted that errors on the scale of those in the UK are not unusual in the U.S. in general election races where a third party candidate receives 5 percent or better.
Identifying likely voters - The task of narrowing surveys of the electorate to representative samples of actual voters is a well-known problem with election surveys in the U.S., especially when pollsters can only rely on voters' self-reports on whether they are likely to vote. "[S]ome survey respondents undoubtedly overstate their likelihood to vote," Britain's ComRes political team wrote this week. In the U.S., campaign pollsters are increasingly turning to samples drawn from voter registration lists in order to obtain the actual turnout history of individual voters. The myriad methods used by UK pollsters to deal with this issue are certain to come under greater scrutiny.
While it is too soon to reach conclusions about specific problems, if this polling misfire is like those that have been closely investigated in the past -- such as the New Hampshire primary polls in 2008, the U.S. exit polls in 2004 and the UK polls in 1992 -- the culprit is likely a combination of some or all of the theories listed above. "Every survey has errors," exit pollster Joe Lenski told us two years ago, referring to the many design decisions that go into a poll, any of which can cause small, typically random errors in one direction or another.
"It's just a matter of, are they small and do they cancel each other out," Lenski explained. "When they're small and they're all in the wrong direction, they make you look bad."
The latest polling failure has renewed calls for politicians and the media to stop obsessing over horse-race polls. In the UK, some have already called for a ban on pre-election polls. HuffPollster agrees that political coverage should delve deeper than the horse race, but we do not expect political junkies to abandon their obsession with pre-election polls any time soon. The news media continue to cover polls because so many of our most informed and politically engaged readers and viewers continue to demand it.
"There isn't a 'world polling problem'," online editor Peter Feld wrote on Twitter. "There's a news media fixation on predictions problem." In such an environment, our obligation is to report polls skeptically -- to highlight their shortcomings and limitations, their methodological challenges and the uncertainty inherent when we treat poll snapshots as predictions.
A link round-up on the UK polling misfire:
-Additional assessments from poll watchers and forecasters include Ben Lauderdale (here, here and here), Anthony Wells, Number Cruncher Politics, Scott Clement, Steven Shepard, Alberto Nardelli, Mark Mellman and Stephanie Slade.
MORE OF THIS WEEK'S NATIONAL POLLS
-Americans are feeling pessimistic about their options for 2016. [GWU]
-Americans doubt the Supreme Court can be impartial when considering the latest challenge to the Affordable Care Act. [AP]
-White evangelicals are the only major religious group who consider recent police killings of black men to be isolated incidents. [PRRI]
-Bloomberg's Consumer Confidence Index falls for the fourth straight week. [Bloomberg]
-Gallup's U.S. Economic Confidence Index rebounds after a sharp drop the previous week, but remains lower than the recent highs recorded earlier this year. [Gallup]
-Reuters/Ipsos finds Republicans covered by Obamacare are generally satisfied with their health insurance. [Reuters]
-The share of Americans who identify as Christian dropped 8 percentage points between 2007 and 2014, while the share of the religiously unaffiliated grew by 6 points. [Pew]
HUFFPOLLSTER VIA EMAIL! - You can receive this weekly update every Friday morning via email! Just click here, enter your email address, and click "sign up." That's all there is to it (and you can unsubscribe anytime).
THIS WEEK'S 'OUTLIERS' - Links to the best of news at the intersection of polling, politics and political data:
-Long-time USA Today polling editor Jim Norman is retiring. [@usatoday_polls]
-Nate Silver says Democrats have no "blue wall." 
-Josh Putnam explains how the presidential nomination process works. [WashPost]
-A federal agency abandons telephone research for snail mail surveys. [MRA]
-The Census Bureau backtracks on a plan to get rid of questions about marriage. [Pew]
-A push poll bill in Maine gets fixed. [MRA]