Gallup Poll's Race Issues Should Be Singled Out

Over the weekend, The Huffington Post ran an in-depth analysis of the way the Gallup Poll handles race in its survey samples. (Go read it if you haven't yet.) Now, Jay Cost at The Weekly Standard has taken me to task, particularly for "looking at Gallup in isolation." But Cost misses the main point of the article.

Cost says my "bottom line" is that Gallup "tends to place the president's job approval rating about 2.5 points below the average of similar polls, and that a portion of this can be chalked up to under-sampling non-whites."

All pollsters, not just Gallup, typically under-sample non-whites in their raw data. My bottom line is that when Gallup weights those samples to account for the under-sampling, its weighted data still under-represents non-whites. The U.S. Census Bureau's Current Population Survey (CPS), the benchmark Gallup uses to weight its samples, finds that 25.5 percent of American adults describe themselves as Hispanic or black alone (without offering another race). Yet in the seven USA Today/Gallup surveys conducted from January to March 2012 that I examined, the combined percentage of adults who were Hispanic or black alone was just 21.4 percent, a difference of 4.1 percentage points.
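For readers less familiar with how demographic weighting works, the sketch below uses hypothetical numbers (these are not Gallup's figures, and Gallup's actual procedure is more elaborate) to show the basic mechanics: respondents in an under-sampled group receive a weight equal to the benchmark share divided by the raw share, so the weighted composition should, in principle, land on the benchmark. My point is that Gallup's weighted composition still falls short of it.

```python
# A minimal sketch of post-stratification weighting by race/ethnicity.
# All shares below are hypothetical, not taken from the surveys discussed here.

# Hypothetical raw (unweighted) sample composition
raw_shares = {"hispanic_or_black": 0.18, "other": 0.82}

# Benchmark composition the pollster weights toward (e.g., a CPS-style target)
target_shares = {"hispanic_or_black": 0.255, "other": 0.745}

# Each respondent in a group gets weight = target share / raw share
weights = {g: target_shares[g] / raw_shares[g] for g in raw_shares}

# After weighting, the sample's composition should match the benchmark
weighted_shares = {g: raw_shares[g] * weights[g] for g in raw_shares}

print(weights)          # hispanic_or_black ≈ 1.42, other ≈ 0.91
print(weighted_shares)  # ≈ 25.5% and 74.5%, matching the benchmark
```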

I agree with Cost that this difference, in and of itself, does not make Gallup dishonest or untrustworthy. I also agree with Nate Cohn of The New Republic that Gallup's commitment to transparency "enhances the credibility of their polling."

But the difference does make Gallup wrong. Yes, it results from a series of judgment calls about various methodologies and, yes, one can argue that each decision is "defensible," at least in isolation. Yet the net effect of those decisions on the racial composition of Gallup's samples is plainly an error.

More importantly, this error has a larger consequence. The lower non-white composition of Gallup's weighted adult samples appears to explain why Obama's approval ratings are slightly more negative on Gallup's surveys than on those conducted by other pollsters using similar methodologies.

Cost wishes I had examined Gallup's methodological choices in relation to those made by other pollsters. In fact, the article explains in great detail how Gallup differs from other pollsters in the way it asks about and weights by race.

The article also provides a head-to-head comparison on this subject with the Pew Research Center, whose methodological choices are otherwise very similar to Gallup's. On three national surveys conducted from January to March 2012, Pew Research ended up with a combined percentage of Hispanic and black adults of 24.8 percent -- just 0.7 percentage points under the CPS estimate. So to argue, as Cost does, that the Pew Research surveys "have similar issues" misses the point.

Cost imagines that by "under-sampling non-whites," Gallup may be avoiding a worse outcome: "achieving a proper balance among non-whites [that] creates imbalances in other aspects of the poll, which Gallup judged to be more harmful to its accuracy." Other than the trade-offs involved in trimming large weights, which my analysis addresses, I am not sure what benefit accrues from introducing a skewed racial composition into a polling sample. If under-weighting non-whites helps avoid some otherwise mysterious "imbalance," Gallup did not offer that explanation in nearly two months of responding to my queries.
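To make that trade-off concrete, here is a hypothetical sketch (the cap and the shares are invented for illustration, not drawn from Gallup's methodology) of how trimming, or capping, unusually large weights can pull an under-sampled group back below its benchmark even after weighting.

```python
# Hypothetical raw sample and benchmark shares; not Gallup's actual figures.
raw_shares = {"hispanic_or_black": 0.15, "other": 0.85}
target_shares = {"hispanic_or_black": 0.255, "other": 0.745}

# Untrimmed post-stratification weights (target share / raw share)
weights = {g: target_shares[g] / raw_shares[g] for g in raw_shares}

# Trim (cap) any weight above a hypothetical maximum
CAP = 1.5
trimmed = {g: min(w, CAP) for g, w in weights.items()}

# Re-normalize so the weighted shares still sum to 1 across the sample
total = sum(raw_shares[g] * trimmed[g] for g in raw_shares)
final_shares = {g: raw_shares[g] * trimmed[g] / total for g in raw_shares}

print(weights)       # hispanic_or_black weight is 1.7, above the 1.5 cap
print(final_shares)  # hispanic_or_black ends up near 23%, below the 25.5% benchmark
```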

Yes, as Cost argues, "house effects are endemic to polling," but their sources are typically less mysterious. Take, for example, his observation that the Pew Research surveys find lower disapproval percentages for Obama relative to other polls.

He leaves out that Pew Research's Obama approval percentages also tend to run slightly lower than those of other pollsters using similar methods (although not by as much as Gallup's). The chart below is identical to the one that appeared in my original story, except that the Pew Research surveys are highlighted. Most fall slightly below the overall trend.

[Chart: Obama job approval by survey, with Pew Research surveys highlighted]

Why would Pew Research slightly understate both approval and disapproval as compared to other pollsters using similar methods? Because it typically reports a slightly higher "don't know" percentage, one of the best-known of pollster house effects.

No such tendency explains away the Gallup effect. In preparing the article, I also broke the other pollsters with comparable methods into two categories based on the size of their "don't know" category on presidential approval: higher than average (Pew Research and CBS News) and lower than average (ABC/Washington Post, AP/GfK, CNN/ORC, NBC/Wall Street Journal, Quinnipiac University and Reuters/Ipsos). Obama's average job approval rating over the past year was 46.4 percent among the former, 47.5 percent among the latter, and just 44.4 percent for Gallup.

Finally, it is fair to question, as Cost and others do, whether sampling all adults is the most appropriate way to try to predict the outcome of an election. It is not. Nevertheless, most national media polls, Gallup included, select their subgroups of registered or likely voters from larger samples of adults that have been weighted to Census demographics. If the weighting process under-represents non-whites among all adults, the likely-voter subgroup probably inherits the same statistical skew.
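To illustrate, again with invented numbers rather than anything from the surveys discussed above, here is roughly why a shortfall among weighted adults tends to carry over into the likely-voter subgroup drawn from them.

```python
# Hypothetical weighted adult composition that falls short of a 25.5% benchmark
weighted_adult_share = {"hispanic_or_black": 0.214, "other": 0.786}

# Hypothetical rates at which each group is classified as a likely voter
likely_voter_rate = {"hispanic_or_black": 0.60, "other": 0.65}

# Composition of the resulting likely-voter subgroup
lv_total = sum(weighted_adult_share[g] * likely_voter_rate[g]
               for g in weighted_adult_share)
lv_share = {g: weighted_adult_share[g] * likely_voter_rate[g] / lv_total
            for g in weighted_adult_share}

print(lv_share)  # hispanic_or_black remains under-represented, at roughly 20%
```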

So when it comes to weighting a poll, race matters. That's the bottom line.
