Exit polls are a valuable source of data. But they’re also complex projects with the potential to be misleading early on election night.
What exit polls are:
For decades, news organizations’ exit polls have been conducted by Edison Research for the National Election Pool, a partnership of ABC, CBS, CNN, Fox, NBC and The Associated Press. Those interviews ― a massive, nationwide undertaking ― are largely conducted outside polling places. But for the past dozen or so years, in an attempt to account for early voters, exit pollsters have also used telephone polls conducted over the final weekend of the campaign.
This year, there will be more than the usual set of numbers. AP and Fox News split from the consortium to launch an alternative called VoteCast, relying on interviews conducted just before the election ― an approach AP says “reflects how Americans vote today: not only in person, but also increasingly early, absentee and by mail.” The remaining members of the National Election Pool and their pollster, Edison Research, have announced major changes of their own, including stationing pollsters at early voting centers.
What exit polls should, and shouldn’t, be used for:
Exit poll numbers serve a couple of different purposes. News organizations use them to help call the results of races. And in the days to come, they’ll help give everyone a better sense of how and when midterm voters made up their minds.
Exit polls are not, however, designed to give the American public a sneak peek at who’s likely to win, especially before the election is actually over. Fundamentally, exit polls are just surveys, and are subject to many of the same sources of error as any other poll. And, like any poll, they need to be properly weighted to represent the population they’re supposed to be measuring. In this case, that means recalibrating the numbers to match the actual results of the election, which we won’t fully know until the end of the night. Before that happens, exit polls can present a misleading picture of the race.
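To see why unweighted numbers mislead, here is a minimal sketch of the idea behind that recalibration, using invented figures (the group shares and support levels below are illustrative assumptions, not real exit poll data):

```python
# Hypothetical sketch of post-stratification weighting: the raw sample
# over-represents college graduates, so each group's responses are rescaled
# to match the electorate's actual composition. All numbers are invented.

# Share of each education group among raw respondents vs. the true electorate
raw_share = {"college": 0.50, "no_college": 0.50}   # unweighted sample mix
true_share = {"college": 0.37, "no_college": 0.63}  # known once results are in

# Candidate A's support within each group, from the raw interviews
support_a = {"college": 0.58, "no_college": 0.44}

# Unweighted estimate averages by the sample's own composition
unweighted = sum(raw_share[g] * support_a[g] for g in raw_share)

# Weighted estimate rescales each group to its true share of the electorate
weighted = sum(true_share[g] * support_a[g] for g in true_share)

print(f"unweighted: {unweighted:.3f}")  # 0.510 - looks like A is ahead
print(f"weighted:   {weighted:.3f}")    # 0.492 - A actually trails
```

With these made-up numbers, the raw sample shows the candidate winning while the reweighted estimate shows the candidate losing, which is exactly the kind of early-evening head fake the exit poll consortiums warn about.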
Why even final exit polling data isn’t perfect:
Exit polls are a lot more useful ― to those of us not calling races, anyway ― once the results are in. But even then, they’re not foolproof, especially at describing who voted.
“The problem with them is that most analysts and readers treat them as if they’re infallible,” The New York Times’ Nate Cohn wrote in 2014. “[J]ournalists might write entire posts that assume that the black share of the electorate was 15 percent in Ohio. In reality, the exit polls just aren’t precise enough to justify making distinctions between an electorate that’s 15 percent black and, say, 13 percent black.”
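A quick back-of-the-envelope calculation shows why a two-point gap can be statistical noise. The sample size and design effect below are illustrative assumptions, not the actual parameters of any exit poll:

```python
import math

# Why 15% vs. 13% may be indistinguishable: with a clustered sample of
# precincts, the margin of error on a subgroup share is wider than a
# simple random sample would suggest. All parameters here are assumed.
n = 2000    # assumed statewide exit poll sample size
deff = 1.8  # assumed design effect from sampling whole precincts
p = 0.15    # estimated share of the electorate in the subgroup

# 95% margin of error, inflated for the clustered design
moe = 1.96 * math.sqrt(deff * p * (1 - p) / n)
print(f"±{moe:.3f}")  # about ±0.021, so 13% falls inside the interval
```

Under these assumptions the estimate of 15 percent carries a margin of error of roughly two points either way, so an electorate that is actually 13 percent black is entirely consistent with the same data.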
The National Election Pool is taking steps to address one of the biggest issues from 2016: the exit polls overestimated the share of voters with higher education levels, a group that, in the presidential election, was significantly more Democratic than voters without college degrees. Exit pollsters this year are changing how they ask about education, and adjusting their samples to account for it.