Never Let a Good Election Go to Waste

Each day, more than 2 million people around the world take a SurveyMonkey survey - at schools, at community and government organizations, and at companies big and small - for a great variety of purposes. Two months ago, we started tapping this vast pool of respondents to help us with another project: predicting the congressional midterms.

Mark Blumenthal gives you a clear appraisal of how our main estimates performed. What I want to share here is what we did - and what we are starting to learn a little less than a year after forming SurveyMonkey's survey research team.

Our company mission is to help people make better decisions. Our team's goal is to increase the quality of surveys, moving past the stale - but still pernicious - debates over whether online or telephone polls are better.

Starting Oct. 3, we recruited a random sample of people taking a SurveyMonkey survey to then take this experimental survey. This is the same way we recruit panelists for SurveyMonkey Audience, a uniquely recruited and incentivized respondent panel. We collected data on political attitudes - and yes, campaigns - in all 50 states and Washington, D.C.

We heard from an average of more than 10,000 respondents a day over the final days of the campaign, for a total of about 200,000 interviews over the field period. We continued to interview voters for several additional days; those results are reported here as part of a new partnership with NBC News.

We did no serious modeling with the raw data - only applying basic demographic weighting at the state level. These straightforward adjustments mean the results shed light on the important roles that scale (large numbers of interviews) and heterogeneity (diversity of respondents and sources) play in poll accuracy. More broadly, we're using the pre-election data as a great opportunity to better understand the attributes of our approach, as well as any deficiencies - ideally fixable ones. This effort dovetails with our commitment to more fully explore the relative merits of non-probability polling.
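
To make that concrete, here is a minimal sketch of what state-level demographic weighting can look like in practice. The demographic cells, target shares, and the poststratify helper below are invented for illustration; they are not our actual categories or weighting targets.

# A minimal sketch of state-level demographic weighting (cell weighting /
# post-stratification). Category names and target shares are illustrative
# assumptions, not SurveyMonkey's actual weighting targets.
import pandas as pd

def poststratify(df, targets, cell_cols):
    """Weight respondents so the joint distribution of cell_cols
    matches the target population shares for one state."""
    observed = df.groupby(cell_cols).size() / len(df)   # sample share per cell
    weights = (targets / observed).rename("weight")     # target share / sample share
    return df.join(weights, on=cell_cols)

# Illustrative respondents from a single state.
respondents = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M"],
    "age": ["18-44", "45+", "18-44", "45+", "45+", "18-44"],
})

# Illustrative census-style targets: adult population share in each sex x age cell.
targets = pd.Series({
    ("F", "18-44"): 0.26, ("F", "45+"): 0.26,
    ("M", "18-44"): 0.25, ("M", "45+"): 0.23,
}).rename_axis(["sex", "age"])

print(poststratify(respondents, targets, ["sex", "age"]))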

On one level, the results speak for themselves at a time when there is such tumult about proper approaches to pre-election polling. Using all data from the 14 days leading up to the election as our estimates, we got the direction right in all 36 U.S. Senate races, with an average bias on the margin of zero. The data also correctly identified the winner in 33 of 36 gubernatorial contests, and in the D.C. mayor's race. These are the data we shared with industry experts before the election, including The Huffington Post.
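
For the curious, those two summaries - the number of races called in the right direction and the average signed error on the margin - reduce to a few lines of arithmetic. The margins below are invented purely to show the calculation, not our actual estimates or results.

# A back-of-the-envelope sketch of the two accuracy summaries described above.
# All numbers are made up for illustration.
races = [
    # (estimated margin, actual margin), each as Dem% minus Rep%
    (+4.0, +6.5),
    (-3.0, -1.0),
    (+1.5, +0.5),
]

correct_direction = sum(1 for est, actual in races if (est > 0) == (actual > 0))
avg_margin_bias = sum(est - actual for est, actual in races) / len(races)

print(f"Directionally correct: {correct_direction} of {len(races)}")
print(f"Average bias on the margin: {avg_margin_bias:+.1f} points")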

Across the 51 jurisdictions, between 7 and 14 percent of people clicked on the opportunity to answer our survey. Future research will feature our A/B testing of the pages inviting people to participate (we spent a good deal of time in October wondering why our data were so much more Republican than public polling data), our experiments with short and long questionnaires, and the use and varied success of "likely voter" screens.
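
To give a flavor of what the invitation-page experiments involve, here is a generic sketch of comparing click-through rates between two versions of an invitation with a two-proportion z-test. The counts and the two_proportion_z helper are invented for illustration; this is not drawn from our data or our actual analysis.

# A minimal sketch of an A/B comparison of invitation-page click-through rates
# using a standard two-proportion z-test. All counts are invented.
from math import sqrt

def two_proportion_z(clicks_a, shown_a, clicks_b, shown_b):
    p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
    pooled = (clicks_a + clicks_b) / (shown_a + shown_b)            # pooled click rate
    se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))  # standard error of the difference
    return (p_a - p_b) / se

z = two_proportion_z(clicks_a=1400, shown_a=10000, clicks_b=1250, shown_b=10000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a difference unlikely to be chance at the 5% level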

We also asked substantive questions in our surveys, including which issues voters cared about and why they were voting the way they did. But our primary effort was on methodological innovation, not joining the scrum of pre-election polls.

This is an exciting time in public opinion research, and we will continue to experiment, test and share what we learn. We'll be reporting all manner of methodological detail at upcoming conferences. See you at AAPOR, and elsewhere!
