Is Rotten Tomatoes a Reliable Source for Movie Reviews?

Has Rotten Tomatoes become less reliable in recent years? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Mark Hughes, Screenwriter, Film Critic, on Quora:

I’m a critic featured on Rotten Tomatoes (RT), and I do think it serves a purpose similar to Siskel and Ebert giving “thumbs up/thumbs down” verdicts on films: you don’t get the nuances or the degree of approval, but you at least get a quick snapshot of whether a majority of reviewers are calling a film good or bad. There are flaws in the counting, and there are problems with how people interpret the data, but as an aggregate, the site offers a limited way to see whether a film is mostly accepted or mostly rejected by film reviewers. And if you use the Top Critics option, you can narrow the pool to the reviewers generally considered more reliable, who offer more detailed assessments, and then check out the specifics behind their “Fresh” or “Rotten” scoring of the film.

I realize that most folks don’t know the site actually offers reviewers additional options: we don’t just pick “Fresh” or “Rotten,” we can also attach a star rating or a point rating to each film, and the site surfaces that data so users can see more nuanced information, such as a film’s average score and average star rating. Which means, if you look at the overall RT score, then the stars and points, then the average user rating, you actually get a broader set of information. You can then narrow it down to Top Critics for what are at least considered the more reliable and reputable reviewers, read the pull-quotes summarizing their opinions of the film, and even click through to their full reviews.

That’s why I’m less critical of the site itself and what it really tries to accomplish, and far more critical of the reviewers (sadly, particularly fan-site bloggers, who seem more inclined toward hyperbole and clickbait motivations) and users who are increasingly making RT less helpful and less reliable. How do critics and users adversely affect the reliability and reputation of the site? Let me explain four key ways this happens:

  1. Some critics on RT literally don’t know how to enter their data into it. At least one critic wasn’t aware he was supposed to select “Fresh” or “Rotten” from the dropdown menu and was just adding his pull-quote and a star rating; in the absence of an explicit choice, the site apparently compared his rating to the average scores for that film and selected “Fresh” or “Rotten” on his behalf. He complained that the site’s choice sometimes didn’t match the intent of his review.
  2. Some critics have intentionally posted “Fresh” or “Rotten” scores before even seeing the film. Most famously, a Top Critic on the site once bragged about adding a negative review/rating of a film just to anger fans of the director (Christopher Nolan) and to ruin the film’s perfect 100% rating at the time. Many people complained, and I personally reported him to the site for rating a film without seeing it (he admitted he hadn’t seen it yet while bragging about ruining its score), and he was demoted from RT for his behavior.
  3. Some critics give films “Fresh” or “Rotten” scores based not on their actual feelings, but simply to stand out from what other critics are saying, in order to get more clicks on their articles. Some professional reviewers admit to having mixed feelings about a movie and therefore waiting to see how most other critics rate it before deciding whether to give it a positive or negative review. Some have written a positive review but with enough complaints sprinkled throughout to justify a “Rotten” score, because negative reviews tend to get more readership than positive ones. This is far more common than you might suspect, despite many film critics insisting they aren’t influenced by anything except their artistic assessment of a film. There’s no reason to believe that, while every other form of journalism is influenced by ulterior motives, agendas, personal animosity or support, and so on, entertainment journalism is magically and singularly immune to such untoward influences.
  4. A large portion of RT users utterly fail to understand what the ratings even mean, and think the score is an average of the overall critical grading of the film. I’ve had extended, often heated debates with people (including actual film critics) who fail to grasp how the score actually works, or who understand the basic mechanics but still make absurd arguments insisting the number accurately represents the film’s level of quality, as a “percentage of quality.”

The score is equivalent to asking 100 people to raise their right hand if they thought a film was good, and then counting the hands. The number doesn’t tell you how much anyone liked it. It’s skewed by some people accidentally raising their left hand instead of their right, and by a few people who liked the film but refused to raise their hand because they wanted to change the outcome of the vote. And then everyone watching the vote assumes the final count is a percentage equivalent to a letter grade in school (where 50% is an F, 75% is a C, and 90% is an A, reflecting the literal quality of the film).
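To make that distinction concrete, here is a minimal sketch in Python, using made-up scores and an assumed 6/10 cutoff for a “Fresh” vote (the cutoff and the data are my assumptions, purely for illustration), showing why a 90% Tomatometer score is not the same thing as a 9/10 film:

```python
# Hypothetical example: ten critics score a film on a 0-10 scale.
scores = [6.0, 6.5, 7.0, 6.0, 6.5, 7.0, 6.0, 6.5, 9.0, 2.0]

# The Tomatometer counts each review as simply Fresh (positive) or
# Rotten (negative), then reports the share of Fresh reviews.
# It is a head count, not an average grade.
FRESH_CUTOFF = 6.0  # assumed threshold, for illustration only
fresh_votes = [s for s in scores if s >= FRESH_CUTOFF]
tomatometer = 100 * len(fresh_votes) / len(scores)

# The average rating is a separate, more nuanced number.
average_rating = sum(scores) / len(scores)

print(f"Tomatometer: {tomatometer:.0f}%")          # 90% -- nearly everyone approved
print(f"Average rating: {average_rating:.2f}/10")  # 6.25 -- they approved only mildly
```

Nine of the ten raised their hand, so the film reads as 90% “Fresh” even though the typical reviewer thought it was merely decent.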

So at the end of the day, regardless of the site’s intentions, the final outcome is inevitably driven by the critics and users. That’s why the problems above, which are increasingly common at RT, are making the overall value of the scores increasingly dubious for anything that falls outside the 90%-and-above range or the below-45% range.

I think the fix to this is relatively easy: RT should require reviewers not just to pick “Fresh” or “Rotten,” but to use a little sliding scale (literally, a horizontal line with a little tomato icon we can slide back and forth, left and right), where we move it into “Fresh” or “Rotten” territory and control how far into either territory we set it. This sliding scale would replace stars, grades, and so on from reviewers, and would be the single metric for measuring “freshness” and “rottenness.” Then, for a film’s overall score, the site can show the final overall RT average percentage, with an image of the rotten or fresh tomato sitting on an image of a sliding scale (the tomato positioned above the average percentage from all reviews). It’s a cool visual that quickly conveys both the final verdict (positive or negative) and the degree of positivity or negativity (i.e., an equivalent “grade”).
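To spell out the mechanics I’m imagining, here is a minimal sketch, assuming a 0-100 slider where anything above 50 counts as “Fresh” territory (the scale, the cutoff, and the sample numbers are my assumptions, not anything RT has announced):

```python
# Sketch of the proposed sliding-scale Tomatometer (assumptions mine).
from statistics import mean

def verdict(position: float) -> str:
    """Fresh/Rotten verdict for a single slider position (0-100)."""
    return "Fresh" if position > 50 else "Rotten"

def aggregate(positions: list[float]) -> tuple[str, float]:
    """Overall verdict plus where the tomato icon would sit:
    the average slider position across all reviews."""
    avg = mean(positions)
    return verdict(avg), avg

# Hypothetical slider positions submitted by eight reviewers.
positions = [62, 55, 70, 48, 58, 65, 52, 40]
icon, avg = aggregate(positions)
print(f"{icon} tomato sitting at {avg:.0f} on the scale")  # Fresh tomato sitting at 56
```

The single number would carry both pieces of information: which side of the line the tomato lands on gives the verdict, and how far from the line it sits gives the degree.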

Let me demonstrate with some images…

[Image: rough iPhone doodle of the sliding-scale concept]

In the above image (my bad iPhone doodle rendering), the black blob near the top-middle is supposed to be a little tomato icon; you’d click on it and slide it left or right (“Rotten” or “Fresh”), and it would change into either a ripe tomato or a green splat depending on which percentage mark you place it over. Reviewers would use this little sliding scale on the same page where we add our quote blurb and the link to our full review, and then we’d submit it once everything is where we want it.

Then, the main page for a film would change the current “Fresh” or “Rotten” Tomatometer images (here’s a sample of one)…

[Image: sample of the current “Fresh”/“Rotten” Tomatometer display]

The Tomatometer would still show the “Fresh” or “Rotten” icon and the corresponding final percentage of reviewers who approve of the film, with the “Reviews Counted” figure and the breakdown of fresh versus rotten reviews directly below it. The “Average Rating” portion (currently sitting right below the icon and percentage) would move to the bottom, below the “Reviews Counted” breakdown, and instead of a set of numbers, the average rating would be shown with a small sliding-scale icon (like the one in my awful doodle above), with the little tomato image sitting above the average percentage rating from all reviewers.

Alternately, maybe the very best option is to use the sliding-scale concept as the new single Tomatometer and do away completely with the current system of “how many reviewers said it’s good, how many said it’s bad.” After all, you still get that basic info, just in a form that requires doing the math yourself to recover the pure “what percentage of people said it’s good” number, via the “Reviews Counted” data provided under the Tomatometer icon. But I suspect people like the idea of a simple “do most people say it’s good” measurement instead of a “how good do most people say it is” measurement. The funny thing is, whichever of those questions you ask first, most folks will inevitably follow up with the other for more clarity. Thus I believe it’s worthwhile to have a simple visual representation for both concepts, so that at a glance you learn (a) did most people like it or hate it, and (b) how much did most people like it or hate it?
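(Recovering the old “percentage who said it’s good” number from the “Reviews Counted” breakdown really is a single line of arithmetic; a hypothetical example:)

```python
# Hypothetical "Reviews Counted" breakdown for a film.
fresh_count, rotten_count = 180, 60

# The classic Tomatometer is simply the share of positive reviews.
percent_fresh = 100 * fresh_count / (fresh_count + rotten_count)
print(f"{percent_fresh:.0f}% of counted reviews were Fresh")  # 75%
```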

Moving to that method of scoring and visual representation would, I feel, instantly solve a lot of these problems, not just in how we interpret and use the data, but also in curbing some reviewers’ clickbaiting: they’d have to actively choose how far into negativity or positivity they’re willing to go, and since that can affect their reputation, many would be inclined to avoid hyperbolic use of the sliding scale, setting it barely into one territory or the other (or dead-center, to avoid taking a firm position either way and thus justify writing a review that’s mixed and not overtly positive or negative, as they often do). It wouldn’t be perfect, but it would be simple, easy to use and understand, and would make things much better, in my opinion.

This question originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world. You can follow Quora on Twitter, Facebook, and Google+.
