Likely Voters: How Statewide Pollsters Choose Them

WASHINGTON -- Six years ago, I spent much of late October writing about how pollsters screen, model and otherwise select the "likely voters" they interview in their pre-election surveys (I posted a shorter version here last month). After spending weeks querying pollsters about their methods, I posted a "users' guide" to the likely voter procedures used by 24 pollsters. Last week -- arguably a bit too late -- I decided to try to update that post with information about the most prolific pollsters at the state level this election cycle. As of this afternoon, I have heard back from all but two, so I am posting their responses here in full.

The information provided ranges from cursory to in-depth and detailed. Readers will likely not find magic answers to the questions of why some polls produce divergent results, but the information disclosed here may help us better understand what happened with this year's polls once full vote returns are available.

I asked pollsters essentially the same questions as in 2004:

(1) Do you use screen questions to select likely voters, a Gallup-style index/cut-off model or something else?

(2) If you use an index/cut-off model, what is the cutoff percentage, i.e. what percent of adults do you qualify as likely voters?

(3) Regardless of the type of model, what questions do you ask to define or model likely voters?

(4) Does your likely voter model rely at all on voter lists and individual-level vote history?

(5) Do you weight by party identification? (Again, you can learn more about why I asked these questions here).

Last Thursday, I sent these questions to representatives of
CNN/Time, Reuters/Ipsos, Mason-Dixon Polling & Research, Public Policy Polling (PPP), the Quinnipiac University Polling Institute, Rasmussen Reports (& Fox News/Pulse Opinion Research) and SurveyUSA. As of this hour, these organizations collectively account for 70% of the now more than 900 polls we have logged on U.S. Senate races this year.

As of this writing, I have heard back from all except CNN/Time and Mason-Dixon. Responses from the rest follow in alphabetical order. I will update this post with any additional answers that I receive (update: CNN/Time added at 5:15 EST).

CNN/Time

Response from CNN polling director Keating Holland:


1) Do you use screen questions to select likely voters, a Gallup-style index/cut-off or something else?
2) If an index/cut-off model, what's the cutoff percentage, i.e. what percent of adults do you qualify as likely voters?

I run a 50-point scale and use a cutoff that comes closest to my estimate of the actual turnout. So if we're in a state with an estimated 40% turnout, I cut off the likely voters at the point where 40% of my weighted sample is included in the likely voter group.
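The turnout-matched cutoff Holland describes can be sketched as follows. This is a hypothetical illustration, not CNN's actual code; the 50-point composite scores and weights are made-up inputs.

```python
# Hypothetical sketch of a turnout-matched cutoff (not CNN's actual code).
# Each respondent is a (likelihood_score, weight) pair, e.g. on a 50-point scale.

def select_likely_voters(respondents, turnout_estimate):
    """Include the highest-scoring respondents until their share of the
    total weighted sample reaches the estimated turnout rate."""
    total_weight = sum(w for _, w in respondents)
    # Rank by composite likelihood score, highest first.
    ranked = sorted(respondents, key=lambda r: r[0], reverse=True)
    likely, cum = [], 0.0
    for score, weight in ranked:
        if cum / total_weight >= turnout_estimate:
            break
        likely.append((score, weight))
        cum += weight
    return likely

# With an estimated 40% turnout, roughly 40% of the weighted sample qualifies.
sample = [(48, 1.0), (45, 1.0), (30, 1.0), (22, 1.0), (10, 1.0)]
lv = select_likely_voters(sample, 0.40)
```

The key point is that the cutoff score itself is not fixed; it floats so that the qualifying share of the weighted sample matches the turnout estimate for that state.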

3) Regardless of the type of model, what questions do you ask to define or model likely voters?

10-point scale on likelihood of voting; 10-point scale on interest in campaign; past vote asked in a way to create a 10-point scale from past behavior.

4) Does your likely voter model rely at all on voter lists and individual level vote history?

No.

5) Do you weight by party?

Not normally, but I monitor both party ID and party registration (in states that have it).

Ipsos/Reuters

The Ipsos/Reuters surveys in 2010 used live interviewers. Their response:


1) Do you use screen questions to select likely voters, a Gallup-style index/cut-off or something else?

Our state polls are polls of registered voters. They are administered via Random Digit Dialing, and participants are screened on a RV question (Q1 below). Most of the data in our polls is reported on for registered voters, except ballot questions (voting questions for the races plus any amendments/propositions we might ask about). Likely voters are defined as individuals currently registered to vote, who voted in the 2008 Presidential election, are a 7-10 on a 10-point likelihood to vote scale, and are interested in following news about the campaign 'a great deal' or 'quite a bit.' Individuals who did not vote in the 2008 Presidential election qualify as likely voters if they are registered to vote, are an 8-10 on a 10-point likelihood to vote scale, and are interested in following news about the campaign 'a great deal' or 'quite a bit.' Data are weighted to ensure that the sample's composition reflects that of the state's registered voter population according to Census figures.
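The filter described above amounts to a simple boolean rule. A minimal sketch, with illustrative field names that are not Ipsos's actual survey variables:

```python
# Hypothetical sketch of the Ipsos/Reuters likely-voter filter as described
# above (argument names are illustrative, not Ipsos's actual variables).

def is_likely_voter(registered, voted_2008, likelihood_1_to_10, interest):
    """Registered voters qualify at 7-10 on the likelihood scale if they
    voted in the 2008 presidential election, and at 8-10 if they did not;
    all must follow campaign news 'a great deal' or 'quite a bit'."""
    if not registered:
        return False
    if interest not in ("a great deal", "quite a bit"):
        return False
    # 2008 voters face a slightly lower bar than 2008 non-voters.
    threshold = 7 if voted_2008 else 8
    return likelihood_1_to_10 >= threshold
```

Note that the only difference between past voters and non-voters is the one-point shift in the likelihood threshold.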

2) If an index/cut-off model, what's the cutoff percentage, i.e. what percent of adults do you qualify as likely voters?

We do not have a target -- we apply our LV filter across the board. The % who qualify as LVs depends on the state, and has ranged from 60%-75%. In each state we compare our figures to the official turnout stats from CPS (conducted every November after the election) to ensure we're not too far off the mark from the last midterms. The data is available on all our toplines (i.e. the total sample size and likely voter base) so it can be calculated for each state poll.

3) Regardless of the type of model, what questions do you ask to define or model likely voters?

We use a series of 5 questions (see above explanation of how they're used), the first of which is a filter question for survey participation. Data below is from our last state poll, carried out in Pennsylvania [complete results available here]:

  1. Are you currently registered to vote, or not?
  2. Sometimes things come up and people are not able to vote. In the 2008 election for President, did you happen to vote?
  3. Why not? BASE = All who did not vote at 2008 Presidential Election (30)
  4. On November 2nd, midterm elections will be held. Pennsylvania voters will elect a Senator, Members of Congress, Governor, and other state-level positions. Using a 1-to-10 scale, where 10 means you are completely certain you will vote and 1 means you are completely certain you will NOT vote, how likely are you to vote in the upcoming elections? You can use any number between 1 and 10, to indicate how strongly you feel about your likelihood to vote.
  5. How much interest do you have in following news about the campaigns for the midterm elections in Pennsylvania [a great deal, quite a bit, only some, very little or no interest at all]?

4) Does your likely voter model rely at all on voter lists and individual level vote history?

We do not use voter lists at all because they are notoriously inaccurate and are not updated regularly... studies show that they overestimate turnout by up to 30%. We rely on vote history only to the extent that we ask people to self-report if they voted in the most recent election as per above.

5) Do you weight by party?

We don't weight by party as a general rule. There have been a few occasions over the past few years where we have utilized party weights due to a clear problem with the figures -- but where this has happened, we have made it clear in the topline document.

Public Policy Polling (PPP)

PPP interviews voters using an automated, recorded voice methodology. Their response:


1) Do you use screen questions to select likely voters, a Gallup-style index/cut off or something else?

PPP's voter model relies on a combination of vote history from voter files used for sampling, demographic weighting and an introduction to their polls that asks those who do not plan to vote to hang up.

2) If an index/cut-off model, what's the cutoff percentage, i.e. what percent of adults do you qualify as likely voters?

Not applicable

3) Regardless of the type of model, what questions do you ask to define or model likely voters?

We read the following introduction at the beginning of each call: "This is a short survey about this fall's election for Governor and Senate in _. If you don't plan to vote in the November election, please hang up now."

4) Does your likely voter model rely at all on voter lists and individual level vote history?

Our samples are usually based on the voter registration database of a given state. For most states right now we are calling people who voted in either the 2004, 2006, or 2008 general elections.

5) Do you weight by party?

No. We weight by gender, race, and age and those weights are determined by a combination of census numbers and who identifies themselves as a likely voter and answers the poll. Usually we will weight close to census numbers on voters but if we're polling a state that is say 20% black and only 8% of poll respondents are black we're not going to weight blacks any higher than 16%. It really is done on a case by case basis though.
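One plausible reading of the capped weighting PPP describes (inferred from their 20%/8%/16% example, in which the target tops out at twice the observed share) can be sketched as:

```python
# Hypothetical sketch of a capped weighting target, inferred from PPP's
# 20%/8%/16% example; the 2x-observed cap is an assumption, not PPP's
# stated formula, and they note it is handled case by case.

def weighting_target(census_share, observed_share, cap_factor=2.0):
    """Weight toward the census share, but never to more than
    cap_factor times the share actually observed in the sample."""
    return min(census_share, cap_factor * observed_share)

# A state that is 20% black where only 8% of respondents are black
# gets weighted to at most 16%, not the full census figure.
target = weighting_target(0.20, 0.08)
```

The cap guards against giving a small, possibly unrepresentative subsample an outsized weight in the final numbers.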

The Quinnipiac University Polling Institute

The Quinnipiac Polling Institute uses live interviewers. They opted to reply to my questions with an update of their response from 2004:


Mechanics - Quinnipiac uses a couple of screening questions that measure intention to vote and interest in politics to determine who is a "likely voter."

Party ID - Quinnipiac does not weight by party ID.

Rasmussen Reports & Fox News/Pulse Opinion Research

Rasmussen Reports interviews voters using an automated, recorded voice methodology. Their Pulse Opinion Research subsidiary fielded surveys for Fox News this year. Scott Rasmussen confirmed in September that:


Pulse Opinion Research does all the field work and processing for Rasmussen Reports polling. They do the same for other clients using the system that I developed over many years. So, in practical terms, polling done by Pulse for any client, including Fox News, will be processed in exactly the same manner. In a Rasmussen Reports poll, Rasmussen Reports provides the questions to Pulse. In a Fox News poll, Fox News provides the questions for their own surveys.

In replying to my questions, Rasmussen confirmed that with one exception, his likely voter methodology is the same as he described it to me in 2004:


Mechanics - Rasmussen uses screening questions to select the sample, and questions about likelihood of voting, past voting and interest in the election (closer to election day) to "further refine" their likely voters ("i.e.-you might get through the screening questions but still be considered a low probability voter"). Rasmussen does not aim for a specific cut off percentage.

The one change since 2004:


In states with early voting, we ask people if they have already voted. This is done before any screening. We begin this as soon as early voting begins in the state.

Nationally, Rasmussen weights its surveys with a "dynamic weighting" procedure they first adopted in 2006:


[We] set our partisan affiliation weighting targets based upon survey results obtained during the previous three months. These shift only modestly month-to-month, but the change could be significant over a long period of time.

Until a few months ago, Rasmussen disclosed these party identification results, weighted by demographics only, on its website. They are now available to subscribers only.

In 2006, I also asked about how they adapt their party weighting system to likely voters in individual states. This was Scott Rasmussen's answer:


The question of states is more challenging and one we continue to work on. Our initial targets are set by playing off the national numbers. We note changes from 2004 and/or 2006 and make comparable changes to the state targets from our polling in those years. Broadly speaking, if the number of Democrats is up 5 nationally compared to an earlier period, then the state numbers would be up five too. Due to demographic differences, not every state moves completely in synch with the national numbers, but they are close in our targeting formula.

Then, we monitor the state by state results as we conduct state polls and are in the process of making some modest mid-year adjustments now (in most states, we have at least 3,000 state specific political interviews to draw from, plus our national political tracking data, plus our baseline numbers from the other poll). Realistically, though, the current adjustments are very small. To this point, the national shifts appear to provide a good indicator. As we head to the fall, we will poll every competitive state at least weekly and do larger samples. This will enable our dynamic weighting process to draw upon up to 10,000 state-specific interviews or more to set the targets for key states.
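The national-shift adjustment Rasmussen describes can be sketched as a simple point-for-point translation. All figures here are illustrative; this is not Rasmussen's actual formula, which also blends in state-specific interviews.

```python
# Hypothetical sketch of the "dynamic weighting" adjustment described above:
# shift each state's baseline party-ID target by the point change observed
# in the national numbers (all figures illustrative).

def adjust_state_targets(state_baseline, national_then, national_now):
    """Apply each party's national point change to the state's baseline
    party-ID targets from the earlier election cycle."""
    return {party: state_baseline[party] + (national_now[party] - national_then[party])
            for party in state_baseline}

# If Democrats are up 5 points nationally since the baseline period,
# the state target moves up 5 points as well.
baseline = {"Dem": 38.0, "Rep": 35.0, "Ind": 27.0}
then = {"Dem": 36.0, "Rep": 33.0, "Ind": 31.0}
now = {"Dem": 41.0, "Rep": 32.0, "Ind": 27.0}
targets = adjust_state_targets(baseline, then, now)
```

As the quote notes, this national translation is only the starting point; state-level interviews then nudge the targets as polling in each state accumulates.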

In September, he confirmed that the Fox News/Pulse surveys would use the same targets for weighting, including weights applied for partisan identification:


The process for selecting Likely Voter targets is based upon partisan trends identified nationally (and reported monthly). In an oversimplified example, if the national trends move one point in favor of the Democrats, the targets for state samples will do the same. As Election Day draws near, the targets are also based upon specific results from all polling done in that state. In competitive states, Pulse can draw upon a large number of interviews to help estimate the partisan mix.

SurveyUSA

Response from SurveyUSA CEO Jay Leve:


SurveyUSA has experimented with as few as 3 and as many as 8 screens for likely voters over the years. In addition to asking the obvious question, "Are you registered?", and verifying a respondent's age, we have experimented with many different variations on the direct "How likely are you to vote" question, including running side-by-side tests on a number of polls experimenting with different scales. We have in past years, but not in 2010, asked people where they vote, what time they will vote, whether they will go out of their way to vote or just happen to vote. In some years, we have asked respondents whether and how they voted in a previous year.

In 2010, prior to the start of early voting, we ask people their likelihood to vote on a 1-to-10 scale, where 10 means certain to vote, and 1 means certain not to vote. Depending on a number of factors, either [7,8,9,10] is accepted as likely, [8,9,10] is accepted as likely, or [9,10] is accepted as likely. There is no pre-determined, or on-the-fly, "percentage" cutoff. For each release, LV as a percentage of RV is reported with the data.

Once early voting begins, in jurisdictions where there is early voting, there is a logic-branch in the instrument: registered voters are asked if they have already ["voted," if precinct based] ["returned a ballot," if vote by mail]. Those who have already voted are asked how they voted (past tense); those who have not yet voted hear the 1-to-10 scale likely question, and those who are included as likely voters are then asked how they will vote (future tense). Those who have already voted are crosstabbed alongside those who have not yet voted, for comparison.

SurveyUSA finds no simple relationship between the number of screening questions and the accuracy of our election poll results. We will almost certainly fine-tune the screening process that we are using in 2010, after the election returns are in, and we have had a chance to learn something new.

There is a continuum of data collection that SurveyUSA does, from work over which we have almost complete methodological control, to work over which the client, which may be another research company, or a university, has methodological control. On the election polls we control, and which carry our name, we typically do not force-weight to Party ID. (Naturally, we poll in a cross-section of geographies, some of which do not have party registration, but in general and across geographies, we typically do not force-weight to party registration). Some clients receive weighted data from SurveyUSA and may ask us to adjust a party composition, and if a client asks us to model a different party composition, we will do so. Other clients (typically academic institutions and other research companies) receive from SurveyUSA unweighted data and those clients weight the data using their own best practices.
