The following is an excerpt from a chapter on the state of the polling industry in The Surge: 2014's Big GOP Win and What It Means for the Next Presidential Election, published by Rowman & Littlefield.
Democrats lost big in the 2014 elections, but in the aftermath, polling also took a hit for broadly understating the size of Republicans’ win. While most polls and poll-based models correctly forecast that Republicans would win a Senate majority, they often suggested closer contests and generally understated GOP margins.
“The results were another black eye for pollsters, in what are already some tough times,” Politico’s Steven Shepard wrote immediately after the election. “As Americans become even harder to reach by phone—and emerging methodologies, such as Internet polling, remain unproven—the poor performance of pollsters this year casts serious doubt on the reliability of surveys during the 2016 presidential race.”
How wrong were the polls, really?
At first glance, the final polling averages on 2014 Senate contests appeared to show more error than they had in recent midterm elections in estimating the margins separating the candidates. “The polls really were worse than usual,” Kyle Kondik and Geoffrey Skelley of Sabato’s Crystal Ball at the University of Virginia Center for Politics concluded in examining the average misses by two polling aggregators—the polling averages of RealClearPolitics and our own estimates at HuffPost Pollster. Their chart showed that the average misses in competitive Senate races were higher in 2014 than in the elections held from 2006 to 2012. As Kondik and Skelley noted, these differences were driven in part by some unusually large misses in individual races, including double-digit underestimates of Republican margins in the Arkansas and Virginia Senate races, and misses nearly as large in Kansas and Kentucky.
The analyses of the National Council on Public Polls provide another, broader benchmark, based on statewide contests for Senate and governor. Between 2002 and 2010, their computation of “candidate error”—a measure of the average error for the top two candidates on individual polls conducted in the last week of the race—ranged between 2.0 and 2.3 percentage points in the midterm elections. In 2014, our computation of the NCPP statistics finds a rate of error in final-week polls (2.2 percentage points) slightly higher than in 2006 or 2010, but slightly lower than in 2002.
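The NCPP-style measure can be sketched in a few lines. The numbers below are hypothetical, chosen only to illustrate the arithmetic, not the council’s actual data: for each final-week poll, take the absolute gap between each of the top two candidates’ polled share and their actual vote share, then average those gaps.

```python
def candidate_error(polls):
    """Average absolute error, in points, for the top two candidates.

    polls: list of tuples (poll_a, poll_b, actual_a, actual_b),
    where a and b are the top two candidates in a race.
    """
    errors = []
    for poll_a, poll_b, actual_a, actual_b in polls:
        errors.append(abs(poll_a - actual_a))
        errors.append(abs(poll_b - actual_b))
    return sum(errors) / len(errors)

# Hypothetical final-week polls from two races (poll vs. result)
polls = [
    (46.0, 48.0, 44.5, 50.5),  # errors of 1.5 and 2.5 points
    (41.0, 51.0, 39.0, 53.0),  # errors of 2.0 and 2.0 points
]
print(candidate_error(polls))  # → 2.0
```

Because the measure averages over both candidates, a poll that shifts support from one contender to the other counts twice, which is why candidate error runs smaller than error on the margin.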
The polling misfires of 2014 have greater precedent if we broaden the comparison to include different measures and elections held in the 1990s and earlier. For example, Nate Silver’s initial assessment of average bias—whether polls collectively favored one party or the other—in 2014 found an average four-percentage-point understatement of Republican Senate margins. However, this calculation also revealed an equally large bias against the Republicans (4.0) in 2002 and an even bigger bias against the Democrats (4.9) in 1998.
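Unlike candidate error, which is always positive, bias is a signed quantity: errors in opposite directions cancel. A minimal sketch of the idea, again with hypothetical race margins rather than Silver’s actual figures:

```python
def average_bias(races):
    """Average signed polling bias, in points.

    races: list of (polled_margin, actual_margin), each expressed as
    Republican minus Democrat. A negative result means the polls
    collectively understated Republican margins.
    """
    return sum(polled - actual for polled, actual in races) / len(races)

# Hypothetical margins: polls showed GOP +2, Dem +1, GOP +5;
# results were GOP +7, GOP +2, GOP +11.
races = [
    (2.0, 7.0),
    (-1.0, 2.0),
    (5.0, 11.0),
]
print(round(average_bias(races), 2))  # → -4.67
```

The sign convention is arbitrary; what matters is that a bias near zero can coexist with large individual misses, so bias and accuracy have to be judged separately.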
A somewhat different assessment, from Eric McGhee, finds that the greater volume of polling in elections since 2004 has made poll-based forecasts (based on aggregating all polls) more accurate. McGhee used a statistical measure called the Brier score, which evaluates how often and how confidently a forecast predicts the correct winner (regardless of the margin of victory predicted). When applied to forecasts produced by the probabilistic model he created for the Washington Post’s Election Lab, McGhee found that the average miss has declined since the volume of polling increased substantially, roughly ten years ago. While the average miss “was a little higher (worse) this year than in some other recent years,” according to McGhee, “we are still clearly in a world of better accuracy” in his models as compared to 2002 and earlier.
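The Brier score itself is simple to compute, even though McGhee’s forecasting model is not. The sketch below uses made-up forecasts, not Election Lab output: for each race, square the difference between the forecast probability and the outcome (1 if that candidate won, 0 if not), then average. Lower is better; a confident correct call scores near 0, while a confident wrong call scores near 1.

```python
def brier_score(forecasts):
    """Mean squared error of probabilistic win forecasts.

    forecasts: list of (prob, won), where prob is the forecast
    probability that a candidate wins and won is 1 or 0.
    """
    return sum((prob - won) ** 2 for prob, won in forecasts) / len(forecasts)

# Hypothetical forecasts for three races
forecasts = [
    (0.90, 1),  # confident and correct: small penalty (0.01)
    (0.55, 0),  # near-tossup call, wrong: moderate penalty (0.3025)
    (0.99, 1),  # very confident and correct: tiny penalty (0.0001)
]
print(round(brier_score(forecasts), 4))  # → 0.1042
```

Because the score ignores margins entirely, a model can post a good Brier score in a year when the polls it aggregates all missed the margin in the same direction, which is consistent with McGhee’s reading of 2014.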
While polling averages in Senate races appeared to miss by more than usual, broader measures of polling error were roughly in line with previous midterm elections, and the 2014 Senate polls collectively predicted the correct winners in all but one race. However, these findings may not settle the nerves of a profession already reeling from changes in technology and communications habits that threaten to reshape it significantly in the future.