The reliability of so-called "internal polls" released by political campaigns and parties is the subject of ongoing debate. Nate Silver has cautioned that such polls should not be taken at face value since they are "almost always designed to drive a media narrative." Silver notes that topline results can be manipulated by several tricks of the polling trade, such as asking leading questions or applying implausible likely voter models or demographic weightings. From his analysis of such surveys, Silver has devised a rule of thumb under which he subtracts six points from the topline result of the reporting candidate or party. Thus, a poll commissioned by a Democratic candidate showing a race tied translates, under Silver's treatment, into a six-point lead for his or her Republican opponent.
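Silver's rule of thumb amounts to simple arithmetic on the sponsor's reported margin. A minimal sketch (the function name is ours, not Silver's):

```python
def silver_adjusted_margin(topline_margin: float) -> float:
    """Apply Silver's rule of thumb: subtract six points from the
    sponsoring candidate's reported margin (positive = sponsor ahead)."""
    return topline_margin - 6.0

# A Democratic internal showing a tied race (margin 0) implies, under this
# treatment, a six-point deficit for the sponsor -- that is, a six-point
# lead for the Republican opponent.
print(silver_adjusted_margin(0.0))  # -6.0
print(silver_adjusted_margin(4.0))  # -2.0
```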
In a similar vein, Mark Blumenthal has stated that "[i]t is always sensible to treat sponsored, internal surveys with extra skepticism when they are publicly released." In support of this view, Blumenthal cites studies by political scientists finding that partisan surveys show an average bias of two to four percentage points favoring the sponsoring party. Blumenthal identifies an additional reason for partisan bias in published internal polls, namely, selective release: campaigns typically share only those polls showing good news for their candidate, so most internals never see the light of day.
Other political luminaries view internal polls less skeptically. Stuart Rothenberg recently stated that he often places greater weight on partisan polls than nonpartisan ones. Why? "Their numbers drive campaign strategy, with victory or defeat of their candidate hanging in the balance.... Partisan pollsters also ... spend more time making certain their samples reflect the actual electorate, even if it means incurring additional costs."
Charlie Cook sides with Rothenberg. He calls "mistaken" the belief that polls conducted independently from candidates and parties are inherently better or more reliable than campaign polls, noting that the latter are often conducted by superior polling outfits and use more rigorous methodologies.
A look at 136 internal Senate and House polls conducted in the final 50 days of the 2010 midterm campaign gives commentators on both sides of the internal poll debate some new talking points. On the one hand, taking these polls in the aggregate and comparing each topline result with the actual returns, the data reflect a mean partisan bias of 4.5% and a median partisan bias of 5% in favor of the candidate or party who released the poll. This is broadly consistent with the two-to-four-point average partisan skew found in the academic studies cited by Blumenthal, and not far removed from Silver's six-point rule. Moreover, a full 70% of these 2010 internals showed the sponsoring party's candidate winning by more points (or losing by fewer points) than he or she actually did on election day. On the other hand, a more granular review of the data reveals that internal polls taken by certain pollsters were largely or entirely free from bias, lending at least qualified support to Cook and Rothenberg's thesis about the high reliability of campaign polls.
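The bias figures above follow from a straightforward computation: for each poll, subtract the sponsor's actual election-day margin from the margin shown in the released poll, then take the mean and median across polls. A sketch using hypothetical numbers (not the 2010 data):

```python
from statistics import mean, median

def partisan_bias(polled_margin: float, actual_margin: float) -> float:
    """Bias in the sponsor's favor: how much better the sponsor fared in
    the released poll than in the actual returns (margins in points,
    positive = sponsor ahead)."""
    return polled_margin - actual_margin

# Three hypothetical internals: (sponsor's polled margin, actual margin)
pairs = [(2, -3), (5, 1), (0, -4)]
biases = [partisan_bias(p, a) for p, a in pairs]  # [5, 4, 4]

print(mean(biases), median(biases))  # mean ~4.33, median 4

# Share of polls that overstated the sponsor's standing (bias > 0)
share_overstated = sum(b > 0 for b in biases) / len(biases)
```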
A follow-up question then arises: Who were the unbiased campaign pollsters in 2010?
As it happens, they were mainly GOP-affiliated. Released internal polls conducted by the Republican firms OnMessage and Public Opinion Strategies exhibited no discernible partisan bias, while internal surveys taken by the GOP pollster The Tarrance Group were actually somewhat too charitable to Democratic candidates. (It also bears mentioning that Public Opinion Strategies conducted a late September poll for the Retail Association of Nevada--which, not being a campaign poll, is excluded from the present survey of internals--that predicted Harry Reid would win the Nevada Senate contest by five points. This poll was widely seen as a pro-Reid outlier at the time, but turned out to be very accurate.)
Overall, Democratic internal polls showed a more partisan slant than Republican ones. The 60 released internal polls conducted for Democratic candidates showed a mean partisan bias of 8.3% and median partisan bias of 8%, as compared with a mean and median partisan bias of 1.5% and 1%, respectively, evident in 76 released internals taken for GOP campaigns. Moreover, the Democratic internal polls showed the sponsoring candidate outperforming his or her electoral result 90% of the time, as opposed to only 55% in the GOP internals.
Parsing the data further, however, reveals that the GOP pollsters benefited from the fact that the lion's share of released internals came from House races, where they outperformed their Democratic counterparts in objectivity, rather than from Senate contests, where the Democratic firms fared better. Among the 115 House internals, the GOP releases showed a mean and median partisan bias of just 1.0%, as compared with mean and median partisan skews of 9.7% and 9% exhibited in the Democratic offerings. Yet in the much smaller subset of 21 Senate internals, the Democratic polls displayed less bias than those coming from GOP sources. The Democratic internals released in Senate races showed a mean and median partisan bias of 3.9% and 3.5%, respectively, which compares favorably with the 6.0% and 7% mean and median partisan biases evident in GOP Senate internals.
Considering only the internal polls conducted in the last 20 days of the campaign does not significantly alter the above findings. Among the 44 late-breaking campaign surveys, the 16 conducted for Democratic interests showed a mean and median partisan bias of 7.3% and 8.5%, respectively, whereas the 28 taken for the GOP showed a mean and median partisan bias of 2.6% and 2%.
Why Democratic House internal polls, and to a lesser extent GOP Senate internal polls, were infected with unusually high partisan bias in the 2010 cycle is an open question. Perhaps the strong partisan tilt evident in Democratic House internals can be attributed at least in part to the fact that these polls were conducted disproportionately in GOP-leaning districts occupied by Democratic incumbents. These districts, so the theory goes, may have returned to their traditional partisan roots and swung against these incumbents in the closing days and hours of the campaign, too late to be picked up by released internal polling. A similar "late swing" argument might be made to explain the partisan skew evident in Republican Senate internals, many of which were conducted in blue states.
More details are provided in the table below, wherein Senate and House internals are combined. Breakouts are provided for individual pollsters that conducted five or more internal polls in the final 50 days of the campaign. Where a single pollster released multiple polls in the same race during that window, only the poll with the most recent field dates was included.