O'Donnell Closes Gap, or Coons up Big? The Dirt in Poll Details

The media, the public, and elected officials need to understand a very important premise: it's not the numbers that matter, it's the inference you make from the numbers.

Over the past 48 hours, two competing polls in Delaware have provided confusing information to the state's electorate and the nation in general. One telephone poll of 797 likely Delaware voters, conducted by Fairleigh Dickinson University (FDU), October 20 - 26, shows Democrat Chris Coons with a 21-point lead over Republican Christine O'Donnell, 57% to 36% (plus or minus 3.5%). However, another telephone poll of 1,171 likely voters in the state, conducted by Monmouth University's Polling Institute (MU), October 25 - 27, shows Coons with only a 10-point lead, 51% to 41% (plus or minus 2.9%). What are we to make of the competing numbers?

Unfortunately for the public, the two polls are NOT comparable, and without more details it's tough to say for sure which is more accurate. Among the many issues: the interviews were conducted over different time periods; the polls have different target populations and define "likely voters" differently; the questionnaires are different; and, most notably, while both polls were done by telephone, live interviewers conducted the FDU poll, while the MU poll used a combination of live (n=402) and automated (n=769) interviews. These very basic features can lead to drastically different results, and the lesson is "care MORE about the details and LESS about the headlines."

Before I say which poll you should trust, I'll explain some of their key differences, and you can draw your own conclusions before reading mine. Of course, explaining the differences requires digesting some of the dirt in the details.

Geek Warning: Read on if you dare ... (cue Halloween thunder and devious laughter)

When you call and who you call makes a difference

First, the survey designs were different. The FDU poll was conducted over six days and targeted both landline and cell phone numbers, while the MU poll took three days and targeted only landline phones. Different field periods mean respondents may be reacting to different information when making their judgments; a new, effective attack ad or a public relations gaffe, for example, can move voters mid-stream. Also, MU's exclusion of cell phones has important consequences for sampling and coverage error. It's estimated that about 25% of all households are now cell-phone-only, and research suggests landline-only surveys may over-estimate Republican support.

Who's doing the calling makes a difference

Second, the FDU interviews were conducted by live interviewers (by Opinion America Inc.), while the MU poll was conducted using both live interviewers (by Braun Research Inc.) and a computerized interviewer (by SurveyUSA). Different modes produce different interviewing experiences and, sometimes, different levels of bias. While automated interviews are thought to reduce social discomfort about revealing private information, they are also more likely to have interview break-offs (people quitting the interview). Also, since there is no interviewer to confirm the intended interview target, there is always uncertainty about who is actually being polled in automated surveys. Finally, automated interviews are typically shorter, and tend to use shorter questions and response categories than live interviews.

The representativeness of the data makes a difference

Third, the FDU data are weighted -- brought into statistical alignment with the actual population proportions -- by age and gender. The MU data are probably weighted too, but MU did not report by what variables (e.g., age, sex, race, county). Data are usually weighted when the sample proportions do not match the population proportions, but survey weights can sometimes inflate (or deflate) numbers, leading to inaccurate results. The only way for the public to gauge the effects of weighting is to look at the demographic numbers. FDU did not report its descriptive demographic statistics, but thankfully, MU did.
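To make the weighting idea concrete, here is a minimal sketch of the basic post-stratification adjustment pollsters use. The age categories and shares below are hypothetical, purely for illustration, and are not figures from either poll:

```python
# Minimal post-stratification sketch (hypothetical shares, illustration only):
# a respondent's weight is their demographic cell's population share divided
# by that cell's share of the sample.
sample_share = {"18-34": 0.15, "35-54": 0.40, "55+": 0.45}      # who answered
population_share = {"18-34": 0.25, "35-54": 0.40, "55+": 0.35}  # known target

weights = {cell: population_share[cell] / sample_share[cell] for cell in sample_share}
print(weights)  # approx. {'18-34': 1.67, '35-54': 1.0, '55+': 0.78}
# Under-represented groups (young respondents here) get weights above 1.0,
# which is exactly how weighting can inflate -- or deflate -- a candidate's numbers.
```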

There is something odd about the MU results on partisanship and vote choice. In early October, Coons held an 83-point lead among Democrats (91% to 8%), a 10-point lead among Independents (55% to 45%), and 26% support among Republicans. But the most recent poll shows Coons with a 75-point lead among Democrats (85% to 10%), down 8 points; he is now down 5 points among Independents (42% to 47%); and he has 19% Republican support.

This movement is very unlikely in the month before the general election. Numbers simply don't move that much at this stage of the election season, and especially not that much across all party identifiers in a state where almost half (47%) of all registered voters are Democrats, compared to under a third (29%) Republicans. Couple this with the fact that Coons has his strongest support in the largest county in the state -- New Castle County, where he has been elected to consecutive terms as County Executive -- which is heavily Democratic. Also, O'Donnell has been the more controversial candidate over the past two weeks. The evidence thus points to something potentially off with MU's estimates: at the very least, they seem to over-estimate Republican responses and under-estimate Democrats'.
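A back-of-envelope check illustrates the point. If we assume, purely for illustration, that likely voters split by party the same way registered voters do (actual turnout will of course differ), combining MU's within-party crosstabs with the registration mix implies a Coons topline several points above the 51% MU reports:

```python
# Back-of-envelope consistency check (my arithmetic, not either pollster's):
# combine MU's within-party Coons support with Delaware's registration mix,
# ASSUMING likely voters split by party the same way registrants do.
party_share = {"Dem": 0.47, "Rep": 0.29, "Ind": 0.24}    # registration mix
coons_support = {"Dem": 0.85, "Rep": 0.19, "Ind": 0.42}  # MU crosstabs

implied = sum(party_share[p] * coons_support[p] for p in party_share)
print(f"Implied Coons topline: {implied:.1%}")  # ~55.5%, vs. MU's reported 51%
```

The gap between the implied ~55.5% and the reported 51% is consistent with a likely-voter sample that leans more Republican than the registration rolls.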

Well defined populations are essential

Fourth, the two polls use different definitions of "likely voters." FDU's respondents were selected using random digit dialing (RDD) procedures, but it is unclear what questions were used to filter and define a likely voter. The MU poll identified likely voters from a "list of households with voters who cast ballots in at least two of the last four general elections" who say they are either "certain" or "likely" to vote on election day. That list (i.e., the sampling frame) was obtained from Aristotle Inc. While the Aristotle list may contain past voters, that doesn't necessarily make it representative of the state's current electorate. Questions abound regarding how "voters" are identified and how the eligible population of voters (e.g., new voters) has changed over the past four elections. In the end, we simply don't know enough about the likely voter populations for either of the polls to judge their accuracy with a high level of certainty.

Question ordering makes a difference

Fifth, based on the released topline reports and questionnaires, it appears that FDU asked respondents questions about President Obama (favorability rating), the economy (right or wrong track), and the two Senate candidates (favorability) before it asked respondents their preferred Senate candidate. By contrast, the MU poll appears to have asked the Senate candidate preference question first -- or at least much earlier in the survey than FDU did. Sometimes the topline reports do not mirror the questionnaire, but for now they are all we have to go by.

Pollsters are well aware of "question order" effects, but such effects are hard to estimate across surveys using different designs. In the MU poll, it's uncertain whether the computerized interviews used the exact same question order and wording as the human-conducted interviews. There's nothing wrong with using mixed modes of data collection; however, since MU did not provide the results for the two modes separately, we can't evaluate the quality of their data.

Reporting standards matter

One more problem with MU's methodology report: based on sample size alone, the margin of error for the live-interviewer data (n=402) is larger than that for the automated interviews (n=769), yet MU reports a single margin of error for the entire study. This is misleading, and MU should clarify these numbers in accordance with professional standards.
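To see why, here is a quick check using the standard 95% confidence formula, MOE = 1.96 x sqrt(p(1-p)/n) at p = 0.5. This assumes simple random sampling, so design effects would widen these figures further:

```python
# Margins of error implied by sample size alone (95% level, p = 0.5).
from math import sqrt

def moe(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error at the 95% confidence level."""
    return z * sqrt(p * (1 - p) / n)

for label, n in [("MU live", 402), ("MU automated", 769), ("MU combined", 1171)]:
    print(f"{label}: n={n}, MOE = +/-{moe(n):.1%}")
# MU live: +/-4.9%, MU automated: +/-3.5%, MU combined: +/-2.9%
```

The blended plus-or-minus 2.9% matches what MU reports, but it hides the fact that the live-interviewer subsample alone carries a margin closer to plus or minus 4.9%.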

Consistency in polling methods makes a difference

Sixth, both surveys make a statement about the changes that have occurred since their last report; however, only one of them can actually make a fair comparison. In the last MU poll of 790 likely voters, conducted October 8 - 11 (plus or minus 3.5%), Coons held a 19-point lead, 57% to 38%. That poll was conducted entirely with automated interviewing, but the more recent poll uses both live and automated interviews. In the last FDU poll of 801 likely voters, conducted September 27 - October 3 (plus or minus 3.5%), Coons held a 17-point lead, 53% to 36%. Like the current poll, the older one was done entirely with human interviewers. Thus, FDU has a consistent method and can more appropriately estimate change, while MU is comparing apples to pomegranates.

And the Winner Is ...

Given FDU's reliance on the same methodology as its earlier polling; the fact that its findings are consistent with other live-interviewer telephone surveys of Delaware voters (CNN's 10/12 poll shows Coons with a 19-point lead); the fact that MU employed a questionable design and reported misleading statistical information; and the fact that some of the estimates from MU's data seem to be off, I would put my money on the FDU poll results.

But, the important lesson is ...

In the end, poll statistics are only as accurate as the methods used to gather the data behind the estimates. More importantly, the media, the public, and elected officials should understand a very important premise: it's not the numbers that matter, it's the inference you make from the numbers. More accurate numbers are good, but they're only confirmed on election day. Both polls come to the same conclusion -- Coons would win if the election were held "today" (which was a few days ago) -- so for now just lead with that headline.
