My Confident Predictions for the Oscars

Despite my concern that no nominee should be that confident of victory, I have no choice but to stick to the data and models. My data and models have proven correct over and over, while hunches and gut checks are prone to failure.
Photo: Marc Friedland, designer of the Academy Awards winners envelope and invitations, previews the 85th Academy Awards Governors Ball in Hollywood, California, January 22, 2013. (Kevork Djansezian/Getty Images)

I am stunned at the confidence of my predictions for the Oscars, seen in real time here and here. Of the 24 categories in which the Academy of Motion Picture Arts and Sciences will present Oscars live this Sunday, the favorite in eight of them is 95 percent or more likely to win. Yet despite my concern that no nominee should be that confident of victory, I have no choice but to stick to the data and models. My data and models have proven correct over and over, while hunches and gut checks are prone to failure.

I created and tested these models using historical data and then released them to run prior to the Oscar nominations; I do not make any tweaks to the models once they are live, because I do not want to inadvertently bias my results with considerations of the current predictions. The Oscar predictions rely mainly on de-biased, aggregated prediction markets. This method has proven not just accurate, but has the added benefits of updating in real time and of being scalable enough that I can provide predictions in all 24 categories. Further, I incorporate user-generated data to help determine the correlations between categories within each movie, which I use to create predictions of the number of Oscars each movie will win.
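To make the idea concrete, here is a minimal sketch of what de-biasing and aggregating market prices can look like. The power-transform exponent, the nominee names, and the prices are illustrative assumptions for this post, not the actual parameters or data behind the PredictWise forecasts.

```python
# Minimal sketch: de-bias raw prediction-market prices and aggregate them
# across markets within one category. All numbers here are illustrative
# assumptions, not the actual model parameters.

def debias(price, k=1.64):
    """Push a raw market price toward the extremes to counteract
    favorite-longshot bias (k > 1 is an assumed, illustrative value)."""
    return price ** k / (price ** k + (1 - price) ** k)

def aggregate_category(market_prices):
    """market_prices: dict mapping nominee -> list of prices from
    different markets. Returns probabilities that sum to 1."""
    debiased = {
        nominee: sum(debias(p) for p in prices) / len(prices)
        for nominee, prices in market_prices.items()
    }
    total = sum(debiased.values())  # raw prices rarely sum to exactly 1
    return {nominee: value / total for nominee, value in debiased.items()}

# Hypothetical category with prices from two markets
example_category = {
    "Nominee A": [0.92, 0.95],
    "Nominee B": [0.05, 0.04],
    "Nominee C": [0.04, 0.03],
}
print(aggregate_category(example_category))
```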

The biggest errors in my 2012 election forecasting were painfully obvious to me, even as I published my forecasts in mid-February, but I stuck to them. The errors came from state-by-state predictions of vote share for Massachusetts and Utah. Despite Rick Santorum dominating the nation's polling for the Republican nomination at the time, I was extremely confident of a Mitt Romney nomination. The model demanded the home state of the Republican nominee, and I provided Romney's official home state of Massachusetts. "Everyone" knew that he would get a home-state bump in Utah, where he has religious roots and was instrumental in saving the 2002 Winter Games, and not in Massachusetts. But there was no objective data to justify swapping out the official home state. Making arbitrary model or data changes is bad science and costly; I design my models to be easily scalable to new questions and categories of questions, and I do not want to manually review each individual prediction for extra data. So, I proudly overestimated Romney's vote share in Massachusetts (although I still had him losing!) and underestimated his vote share in Utah (although I still had him winning!). While that particular hunch was correct, over time science is much more reliable than hunches. So I am sticking to my increasingly confident predictions going into Oscar night.

Further, while the eight strong predictions are very salient, the average prediction is at its exact historical level. The average likelihood of victory for the favorite nominee across the 24 categories is 75 percent. In five categories the favorite nominee is not even 50 percent likely to win! Thus, if my model is properly calibrated, I should expect to get only about 18 of the 24 categories correct (i.e., 75 percent of them). This is the same average likelihood that the market-based forecasts produced in 2011 and 2012, and in both years roughly 3 of every 4 categories landed correctly.
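As a rough sketch of that calibration arithmetic: if the forecasts are well calibrated, the expected number of correct calls is simply the sum of the favorites' win probabilities. The probabilities below are illustrative placeholders chosen to match the averages described above, not the actual 2013 forecasts.

```python
# Calibration arithmetic sketch: expected number of correct calls is the
# sum of the favorites' win probabilities. These values are placeholders
# that roughly reproduce the 75 percent average, not the real forecasts.

favorite_probs = [0.96] * 8 + [0.73] * 11 + [0.46] * 5  # 24 categories

expected_correct = sum(favorite_probs)
average_prob = expected_correct / len(favorite_probs)

print(f"Average favorite probability: {average_prob:.2f}")            # ~0.75
print(f"Expected categories called correctly: {expected_correct:.1f} of 24")  # ~18
```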

I leave you with a question: is there any particular likelihood, in any of the 24 categories, against which you would place a large wager? What looks too high and what looks too low to you? I invite you to go to my website, PredictWise, and prove me wrong and yourself right!

This column syndicates with my personal website: www.PredictWise.com.
