The HuffPost Pollster Senate prediction model has three parts: averaging public polls, calculating probabilities race by race and calculating the overall Senate forecast. Here, for the interested, are the technical details of how we are predicting this year's election.

**POLL AVERAGING**

HuffPost Pollster begins by collecting every publicly released poll that asks Senate horse race questions. We run a poll averaging model, originally developed by Stanford University professor Simon Jackman, to estimate the trend in support for each candidate based on all the survey data. For the 2014 forecast, we made several important modifications to the original model, but the mechanics still essentially work as outlined by Jackman. (Click here to read Jackman's article with no subscription required.)

Briefly, the underlying averaging procedure is a Bayesian model, similar to a hidden Markov model, since we have some information about the state of the race (the polls), but the actual outcome of the race is “hidden.” Our setup has the model calculate an average for each day based on the polls available prior to that date and 100,000 simulations of the polls in what are called Markov chains.

To run such a model, there are several things we need beyond the poll results. The Markov chains require starting values, or “initial” points. They also need information that tells the simulations how to work -- there are many different shapes that data can take, called “distributions,” and we have to specify what shape the data in our simulations should take. These initial points and distributional shapes are the “priors” for any Bayesian model, and they are often not discussed in great detail despite being important parts of the model.

Many Bayesian models use “uninformed” priors, random numbers that don’t mean anything in substantive terms. This practice is acceptable, since it creates neutral starting points for adding more information -- the polling and polling simulation data. However, a big advantage of Bayesian models is the ability to incorporate prior information, and we have a lot of it based on previous elections. So one of the changes we made in our 2014 Senate model is to use “informed” priors to determine where the model should start. The model is predicting vote share proportions for each candidate, so we need information on how vote proportions have been distributed in the past that is based on some criteria that we can apply to this year’s races. Conveniently, the Cook Political Report has been rating races for many years and usually does a late-summer rating of races in the election year -- a perfect combination of information to inform our priors.

The values for our 2014 priors are based on an analysis of Cook Senate race ratings issued in July or August of the election years 2004 through 2012. For example, we pooled all Senate races rated “tossup” from 2004 to 2012 and calculated the average and standard deviation of the actual vote proportions for each candidate. Clearly, final averages for vote proportions will mostly hover around 48 to 50 percent, but the standard deviation controls the width of the distribution, which is where the differences are evident. For tossup states, the variance in vote proportions is fairly low, since the two major-party candidates will have close to the same proportion. For "lean Republican/Democratic" and "likely" rated contests, the standard deviations will be a bit larger, since there is a larger margin between the candidates. And the "solid" states will have the largest standard deviations. Overall, there will be fairly broad distributions for solid Democratic or Republican races, in which we would expect the margin between the candidates to be large, and fairly narrow distributions for the tossup races, in which we would expect little difference between the candidates. Republican- and Democratic-leaning ratings were considered together, since the concern is not which side won but the precision with which the Cook rating describes the outcome.
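As a rough illustration, the pooling step can be sketched in a few lines of Python. The vote shares below are made-up placeholders, not the actual 2004-2012 data:

```python
from statistics import mean, stdev

# Hypothetical historical records: (cook_rating, vote_share) pairs for
# Senate races 2004-2012. These numbers are illustrative only.
history = [
    ("tossup", 0.51), ("tossup", 0.49), ("tossup", 0.50), ("tossup", 0.48),
    ("lean",   0.47), ("lean",   0.45), ("lean",   0.53), ("lean",   0.44),
    ("likely", 0.42), ("likely", 0.58), ("likely", 0.40), ("likely", 0.43),
    ("solid",  0.35), ("solid",  0.66), ("solid",  0.30), ("solid",  0.38),
]

def prior_for(rating):
    """Pool all races with this Cook rating and return the (mean, sd)
    of the vote proportions -- the informed prior for a 2014 race."""
    shares = [s for r, s in history if r == rating]
    return mean(shares), stdev(shares)

for rating in ("tossup", "lean", "likely", "solid"):
    mu, sigma = prior_for(rating)
    print(f"{rating:>7}: mean={mu:.3f}, sd={sigma:.3f}")
```

Even with placeholder data, the pattern described above emerges: the standard deviation grows as the rating moves from "tossup" out to "solid."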

The model uses these priors to begin running the simulations with the Markov chain Monte Carlo (MCMC) method. Based on the averages of these simulations, the model produces a percentage estimate of support for each candidate on each date, in addition to estimates of undecided voter proportions and the margin between the candidates. The lead probability -- the model's certainty that the leading candidate is actually ahead, which we use to calculate the win probabilities in the second part of the model -- is determined by the margin between the candidates. The model incorporates the polls that were available for each day, pulling in additional relevant surveys as it continues toward the current date -- at which time all of the polls are being considered. More recent polls are more influential in the average than older polls. We don’t stop the model at the current date, however. It predicts out to Election Day based on the available polling data. The lead probability calculated pre-election slowly decreases as the model moves toward a predicted win probability on Election Day. The decay rate is relative to how stable the polling data are and how frequent the polls have been prior to that date. This accounts for the uncertainty inherent in the passage of time: We don’t know what will happen between any given day and the election, so we can’t be as certain of a win on a future Election Day as we are of a lead today.

Based on these simulations, the model also calculates “house effects,” the amount by which any given pollster’s estimates differ from the polling average. We’ve added a couple of steps beyond the house effect correction that this model used in 2012 and the HuffPost Pollster charts used up through August 2014.

First, to reduce the effects of one-time polls that were conducted with a specific bias and whose results could be outliers, the model groups partisan pollsters who survey only once in a state into two sets of Republican or Democratic pollsters. Instead of having an effect on the poll averages independently, these two sets of surveys are treated as if they came from one Republican pollster and one Democratic pollster.

Second, after the model has run and calculated the house effects, we direct it to a certain set of pollsters -- those who are nonpartisan and did not vary, on average, more than one percentage point from the trend line generated by the HuffPost 2012 model after it was adjusted to the actual election results. This process is also described here. We find the average house effects for this set of pollsters and then subtract that value from all other pollsters and reprocess the output. This procedure calibrates the entire model, the point estimates, the lead probabilities and the house effects to the average of these selected pollsters.
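A minimal sketch of that calibration step, using hypothetical pollster names and house effects (in percentage points):

```python
# Hypothetical house effects estimated by the model, in percentage points.
house_effects = {
    "Pollster A": 1.8, "Pollster B": -0.4, "Pollster C": 0.2,
    "Partisan R": 3.1, "Partisan D": -2.9,
}

# Benchmark set: nonpartisan pollsters within one point of the
# results-adjusted 2012 trend. These names are placeholders.
benchmark = ["Pollster B", "Pollster C"]

# Average house effect among the benchmark pollsters...
offset = sum(house_effects[p] for p in benchmark) / len(benchmark)

# ...is subtracted from every pollster, recentering the whole model
# (point estimates, lead probabilities, house effects) on that set.
calibrated = {p: effect - offset for p, effect in house_effects.items()}

print(offset)
print(calibrated)
```

After the subtraction, the benchmark pollsters' house effects average zero by construction, which is what anchors the trend line to them.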

**RACE-BY-RACE WIN PROBABILITIES**

Once we have the output from the model, we make two slight adjustments to the predicted lead probability on Election Day before finalizing it as a win probability: We deal with uncertainty due to (1) undecided voters in the polls and (2) unknown events that could significantly alter election outcomes.

*Undecided voters:*

The proportion of undecided voters matters most in the context of the margin between the two candidates. If the proportion of undecided poll respondents is smaller than the margin that separates the candidates, the effect the undecideds have on the outcome probability should be very small. For example, if the average undecided proportion is 7.9 percent, but candidate A is ahead of candidate B by 20 percentage points, the 7.9 percent undecided would not change the outcome of the race even if they all ultimately voted for candidate B. The most they could do would be to reduce the margin by which candidate A is ahead to 12.1 percentage points. However, if the undecided proportion is larger than the margin separating the candidates -- say, there are 7.9 percent undecided, but candidate A leads candidate B by only 3 percentage points -- those undecideds have the potential to change the outcome of the race.

To quantify this relationship, the model simply calculates the average proportion of undecided respondents relative to the margin between the candidates. This produces a measure of uncertainty that is then subtracted from the poll prediction probability.

The smaller the margin between candidates, the more the undecided will matter. In extreme cases, where margins are exceptionally small, we can generate some problematic results with our calculation. Imagine 7.9 percent undecided and a 0.5 percent margin. That gives us: 7.9/0.5 = 15.8. If the margin is already that small, the unadjusted win probability could easily be less than 65 percent; subtracting 15.8 percent would cause the probability to dip below 50 and in effect “flip” the race in favor of the other candidate. Because we do not know that any given proportion of the undecideds will vote for one candidate or the other, it would not be good practice to allow our adjustment to change the predicted outcome of the race. There is also a chance of an extremely small margin in a race with a large proportion of undecideds, leading to a nonsensical value: For example, 7.9 percent undecided with only a 0.07 margin would lead to an adjustment of about 113 percentage points. To avoid both of these scenarios, the undecided adjustment is capped at 10 percentage points, and the probability of the favored candidate winning is not allowed to fall below 50 percent due to this adjustment.

Another feature of this adjustment is that as Election Day nears and the proportion of undecideds in the polls decreases, this adjustment will decrease as well. If 7.9 percent undecided in a Senate race in early September decreases to 3.5 percent by the end of October, the adjustment will decrease to reflect the lower impact of undecideds on the certainty of the predicted outcome: 3.5/20 = 0.175, rather than the earlier 7.9/20 = 0.395; 3.5/3 = 1.167, rather than 7.9/3 = 2.633; 3.5/0.5 = 7, rather than 7.9/0.5 = 15.8 (which would be capped at 10).
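The adjustment, the 10-point cap, and the 50 percent floor can be expressed compactly. This is a sketch of the logic as described above, not the model's actual code:

```python
def undecided_adjustment(undecided, margin):
    """Undecided share relative to the margin (both in percentage
    points), capped at 10 points to avoid nonsensical values."""
    return min(undecided / margin, 10.0)

def adjusted_win_probability(lead_prob, undecided, margin):
    """Subtract the undecided adjustment from the favored candidate's
    probability, but never let it fall below 50 percent."""
    return max(lead_prob - undecided_adjustment(undecided, margin), 50.0)

print(undecided_adjustment(7.9, 20))             # 0.395
print(undecided_adjustment(7.9, 0.5))            # 15.8, capped to 10.0
print(adjusted_win_probability(58.0, 7.9, 0.5))  # floored at 50.0
```

Note that the floor means the adjustment can shrink a lead's certainty but never hand the lead to the trailing candidate.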

*Unknown events:*

To account for the possibility that some event late in the race could affect the outcome or simply that the polls could be wrong, we add random noise to the calculation of the win probabilities to decrease our certainty of the outcome. A version of the HuffPost model produced by Simon Jackman in 2012 also included this type of adjustment.

We do this with a simple function that loosens the model’s constraint on the amount of random variation and thereby allows some random noise into the model. The function controls how much variation we let in and what distributional shape that variation takes. Since we don’t know in which direction or otherwise how any disruptions would affect the election, we use a normal distribution for the shape -- meaning that the random noise is distributed in a bell curve defined by a mean and a standard deviation. The mean and standard deviation control how much noise is introduced, and here we have some guidance from history. In both 2006 and 2010, using the same model, the final polling margins for Senate races were, on average, 4 percentage points different from the actual electoral margins, with a standard deviation of 3.7 in 2006 and 2.7 in 2010. Assuming that 2014’s performance will be somewhat close to that of prior years, the mean in the model for 2014 is set to the equivalent of 4 percentage points and the standard deviation to the equivalent of 3 percentage points.

Now this produces a very large range of values, some of which are substantial enough to “flip” the race over to the other candidate in close contests. Additionally, if there are relatively few polls in a race or the polls are inconsistent, the random noise will affect the win probability more.

We did not want this noise to have such a large effect, but we still needed to maintain the functional form specified by our tests on 2006 and 2010. So once the win probability is calculated with the extra noise, the model forces it to stay above 50 percent. We do this by calculating how much the noise changed the probability relative to the probability without the noise, multiplying that ratio by the difference between the unaltered probability and 50 percent, and adding 50 percent back in. Thus for a win probability of 48.4 percent, which was 52.3 prior to adding the random noise, the calculation finds the ratio of 48.4/52.3, which is 0.93. Since 52.3, the original probability, is 2.3 above 50, we multiply 2.3 by 0.93 to get 2.1. This number is added to 50, for a probability of 52.1 percent. This calculation ensures the random noise doesn’t make inappropriately large changes to the win probabilities.
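The rescaling step can be written out directly, using the worked example above. This sketch assumes the normally distributed noise has already been applied to produce the "noisy" probability:

```python
def rescale_noisy_probability(noisy, original):
    """Keep the favored candidate above 50 percent: scale the original
    lead over 50 by the ratio of the noisy probability to the original,
    then add 50 back in."""
    ratio = noisy / original
    return 50.0 + (original - 50.0) * ratio

# The worked example from the text: 52.3 percent fell to 48.4 after noise.
print(round(rescale_noisy_probability(48.4, 52.3), 1))  # 52.1
```

Because the ratio is at most slightly below 1 in these cases, the rescaled probability lands just under the pre-noise value instead of flipping the race.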

After these two adjustments for undecided voters and unknown events, we consider the win probabilities on those Senate races final.

But that still leaves us with figuring out win probabilities for those Senate races without any polls. To calculate probabilities for contests with zero -- or, actually, fewer than five -- polls, we turn back to our historical analysis of the Cook Political Report ratings. Again, all the Senate races from 2004 through 2012 were pooled according to their corresponding late-summer Cook ratings. This time each race was coded 0 if the Cook rating had been wrong -- for example, the race was rated “lean Democratic” and the Republican candidate won -- or 1 if the rating had been correct. The average for all races with a particular Cook rating constitutes the probability that the Cook rating was correct. For solid Democratic or Republican states, only one race was called incorrectly in the years analyzed, so the probability of a win given a "solid" rating is 99 percent. Most of the states without polls fall into this category.
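That lookup amounts to a simple "fraction correct" table. The counts below are placeholders, except that the one-miss "solid" figure comes from the analysis described above:

```python
# 1 = the late-summer Cook rating called the winner, 0 = it did not.
# Counts are illustrative, except "solid": one miss across 2004-2012.
calls = {
    "solid":  [1] * 99 + [0],
    "likely": [1] * 17 + [0] * 2,   # placeholder counts
    "lean":   [1] * 15 + [0] * 4,   # placeholder counts
    "tossup": [1] * 11 + [0] * 9,   # placeholder counts
}

# The share of correct calls for a rating becomes the win probability
# assigned to the favored candidate in a race with fewer than five polls.
rating_win_prob = {r: sum(v) / len(v) for r, v in calls.items()}
print(rating_win_prob["solid"])  # 0.99
```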

**THE OVERALL SENATE FORECAST**

Once we have the final probabilities for each state, we calculate the probability of Republicans winning 51 or more seats in the Senate using a Monte Carlo simulation.

It works like this. For each race, the computer picks a random number between 1 and 100. It compares that number to the probability of the Republican candidate winning, and if the number is lower than or the same as the probability, that "spin" counts as a Republican win. If it is higher, chalk one up for Democrats. For example, if the Republican candidate in a given race has a 35 percent chance of winning according to the model, a random number from 1 to 35 would count as a Republican win, but a number from 36 to 100 would be a Democratic win.

We repeat this process for every race and count the number of Republican-won seats. If it's 51 or more once we add in the 30 Senate seats held by Republicans not up for re-election, the simulation counts as a Republican win.

That whole process repeats a million times, simulating a million different elections. The probability of a Republican takeover of the Senate that HuffPost reports is the proportion of times that the Republicans win 51 or more seats in these simulations.
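Putting the last three paragraphs together, the takeover simulation looks roughly like this (the race probabilities here are invented for illustration, and a smaller simulation count is used for speed):

```python
import random

def simulate_takeover(race_probs, gop_holdovers=30, n_sims=100_000, seed=2014):
    """Monte Carlo estimate of the chance Republicans reach 51+ seats.

    race_probs: a Republican win probability (0-1) for each contested
    race. Each simulated election draws one uniform random number per
    race; a draw below the race's probability counts as a GOP win."""
    rng = random.Random(seed)
    takeovers = 0
    for _ in range(n_sims):
        gop_seats = gop_holdovers + sum(rng.random() < p for p in race_probs)
        if gop_seats >= 51:
            takeovers += 1
    return takeovers / n_sims

# 36 invented race probabilities, not the model's actual output:
example = [0.95] * 15 + [0.65, 0.60, 0.55, 0.50, 0.45, 0.35] + [0.05] * 15
print(simulate_takeover(example))
```

The reported takeover probability is simply the fraction of simulated elections in which the seat count reaches 51.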

Recently, the situation in the Kansas Senate race threw a wrench into this part of the process. As we explained earlier, if independent candidate Greg Orman wins on Nov. 4, three outcomes are possible in the battle for control of the Senate: The Republicans might win enough Senate seats to gain a majority, Democrats might retain majority control or an Orman victory could trigger a stalemate in which he decides which party holds the majority.

In the rare scenario in which Orman wins and the chamber is split with 49 Democrats (including two other independents caucusing with them) and 50 Republicans, our model assumes that Orman has a 50 percent chance of caucusing with the Democrats and a 50 percent chance of caucusing with the Republicans, so that the overall probabilities of each party winning the majority still add to 100 percent. The forecast we publish also notes the probability of this situation, which we call “the Orman factor,” occurring.

The race-by-race win probabilities and the overall chances of Republicans taking over the Senate, along with other features, can be found here. The code for the model will be posted on GitHub.