WASHINGTON -- On Tuesday, Gallup will unveil new details of an "extensive review" of its 2012 polling at a press briefing in Washington. While the investigation is not yet complete, the explanation of why the pollster consistently understated President Barack Obama's support during the 2012 campaign is likely to be complex.
After the election, Gallup's editor-in-chief, Frank Newport, announced that the company would conduct an internal review of its election polling methods, led by Michael Traugott, a renowned political scientist and survey methodologist at the University of Michigan. Gallup has provided few hints of its specific findings so far, but given the scope of the investigation and the nature of similar polling misfires in the past, here's what to expect:
A deep dive into survey methodology: At a briefing in May attended by The Huffington Post and a handful of other pollsters and academic researchers, Newport and Traugott shared a long list of issues being investigated (reproduced at the end of this article). It covers virtually every aspect of the telephone survey process, from drawing samples and interviewing voters to weighting data and selecting the likely electorate.
The company has done more than simply scrutinize the procedures used and data collected during 2012. Gallup has also conducted what Traugott described as "a series of experiments going forward ... involving various aspects of data collection," with the goal of determining if alternative procedures would have produced different results.
The outline of topics is evocative of a similarly extensive investigation conducted by the American Association for Public Opinion Research (AAPOR) of polling failures during the 2008 primary elections. The resemblance is not a coincidence; Traugott also led that investigation and was the primary author of its final report.
This is just part one: At the May briefing, Traugott and Newport cautioned that many of their experiments, particularly those relating to the selection and modeling of likely voters, remain ongoing. So while this week's briefing will likely present in-depth findings about the way Gallup selects and interviews its adult samples, those hoping for an in-depth dissection of Gallup's likely voter model on Tuesday may be disappointed.
Traugott explained in May that because experiments on likely voter procedures only make sense "in the context of a campaign," Gallup decided to undertake "a major experiment in conjunction with one or both of the 2013 gubernatorial elections. Virginia almost certainly, and possibly New Jersey." Analysis of these efforts will be released after the November elections.
Don't expect one big thing: If the past is a guide, the news from Tuesday's briefing may be tough for reporters to summarize. Investigations of some of the most infamous polling failures of the last two decades -- including exit poll problems in Florida in 2000 and nationwide in 2004, and errors in surveys conducted before the New Hampshire primary in 2008 -- found not one big culprit, but a series of small errors all creating statistical bias in the same direction.
"Every survey has errors," said exit pollster Joe Lenski, referring to the many choices that pollsters make about how they draw samples, select respondents, ask questions and identify likely voters. Any of these choices can create small, typically random errors in one direction or another.
"It's just a matter of, are they small and do they cancel each other out," Lenski explained. "When they're small and they're all in the wrong direction, they make you look bad."
Several "small things" are already known: Gallup's problems in 2012 did not begin with the fall campaign. A Huffington Post investigation found three factors that appeared to contribute to a Gallup "house effect" that lowered President Obama's job approval rating. These involved the questions Gallup asked to ascertain the race of its respondents, the targets used to weight the racial composition of its adult samples and a consistent underweighting of non-white adults. In October, Gallup announced methodological changes that appeared to address the weighting issues. The firm also changed its race questions early this year, dropping the format criticized by HuffPost's investigation.
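To see how an off-target weighting scheme can shift a topline number, consider a minimal sketch of post-stratification weighting. The code below is purely illustrative: the group shares, sample sizes, and support rates are invented for the example and are not Gallup's figures, and Gallup's actual weighting uses many more variables than race alone.

```python
# Illustrative sketch of post-stratification weighting (hypothetical
# numbers, not Gallup's). Respondents are weighted so the sample's
# racial composition matches population targets; if the targets
# understate a group's share, that group is underweighted and the
# overall estimate shifts.

def poststratify(sample_counts, targets):
    """Return a weight per group: target share / sample share."""
    n = sum(sample_counts.values())
    return {g: targets[g] / (sample_counts[g] / n) for g in sample_counts}

# A hypothetical sample of 1,000 adults.
sample = {"white": 720, "nonwhite": 280}

# Weighting to an outdated target that understates non-white adults...
old_targets = {"white": 0.74, "nonwhite": 0.26}
# ...versus a more current target.
new_targets = {"white": 0.70, "nonwhite": 0.30}

w_old = poststratify(sample, old_targets)
w_new = poststratify(sample, new_targets)

# Suppose 40% of white and 80% of non-white respondents back a candidate.
support = {"white": 0.40, "nonwhite": 0.80}

def topline(weights):
    """Weighted overall support for the candidate."""
    total = sum(weights[g] * sample[g] for g in sample)
    return sum(weights[g] * sample[g] * support[g] for g in sample) / total

print(round(topline(w_old), 3))  # 0.504 under the outdated targets
print(round(topline(w_new), 3))  # 0.52 under the current targets
```

In this toy example, a four-point error in the racial targets moves the candidate's estimated support by 1.6 points, the kind of "small thing" that, combined with others pointing the same way, can produce a visible house effect.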
A return to transparency? Gallup has long been a leader in pushing pollsters to be transparent about their methods. The company founder, George Gallup, was the first to propose a "national standards group for polling" that became the National Council on Public Polls, and he played a leading role in establishing the Roper Center Public Opinion Archives, where Gallup and other public pollsters have long deposited their raw data for use by scholars.
Though Gallup has been on the defensive over its 2012 misfire, it appears poised to reset the curve for transparency with its ongoing investigation. The topics that Newport and Traugott are reviewing include aspects of the survey process that pollsters are often hesitant to discuss. And, as Traugott explained in May, Gallup "has agreed to make all of this information publicly available." He stressed that both he and Newport, as former presidents of AAPOR, "are firmly committed to transparency."
One unresolved issue is whether Tuesday's event will include a release of raw respondent-level data that would allow outside scholars and researchers to test their own theories. Newport said in May that although "final decisions" had not been made, "we think we'll try to make the data available as well."
So while Gallup's Tuesday review may not answer all questions, it promises to give polling aficionados much to consider.
Outline of Topics Covered by Gallup's Ongoing Investigation
A. Survey Design Factors
1. Tracking design
2. Call design, 3 call versus 5 call
3. Listed landline versus RDD [random digit dialing]
4. Respondent selection procedure
5. Cell and landline quotas
6. Gender quotas
7. Spanish language interviews
B. Survey Field Process
8. Gallup name
9. Gender composition of interviewers
10. Race composition of interviewers
11. Interviewer effects
12. Distribution of interviews by geography and density within regional quotas
13. Distribution of interviews by local time of interview
14. Interviewer probing of DKs [don't knows] and Refs [refusals]
C. Post Field Handling of Data
16. Ballot wording and placement within survey
17. Candidate order on the ballot
18. Handling of third party candidates
19. Process of screening for registered voters
20. Process of screening, adjusting for estimated likely electorate
D. Continuing research, including November 2013 VA and NJ Gubernatorial Elections