
Earlier this month, Forbes released its annual America's Top Colleges rankings, starting a season of college reviews that also sees numbers from US News and World Report, Newsweek, and The Princeton Review. Most college applicants will look at rankings of some sort, and over one in six consider the rankings "very important" in selecting a school. Using these rankings can be tempting because they take an abstract question -- what are the best colleges? -- and answer it with an easy-to-use numerical list. But while people may know if the college they're applying to, the one their child is applying to, or their alma mater has a high ranking, few understand the methodology behind these numbers. As it turns out, the highly subjective nature of college rankings is reason to use them with caution.

Most college rankings fail to consider direct feedback from students, the people who are actually in the classrooms (The Princeton Review stands as a notable exception). Instead, Forbes bases 17.5 percent of its rankings on professor reviews from ratemyprofessors.com, a third-party site whose sample skews toward people with extremely positive or negative experiences who want to make their opinions public. Forbes' use of ratemyprofessors.com also counts reviews of professors who no longer teach at an institution, and the majority of the reviews it draws on are more than a few years old. Similarly, Newsweek uses third-party data from College Prowler for its rankings. US News bypasses student feedback altogether, instead basing up to one quarter of a school's ranking on its reputation among administrators at other colleges and, to a lesser degree, high school counselors. Given that reputations take time to change, this proxy for quality may not reflect how much a college has improved or, to the contrary, fallen behind in recent years.

Though not every criterion is handled as poorly as the faculty reviews and reputation data in Forbes and US News, college rankings often incorporate other data with little context. US News and Forbes both present a single ranked list without breaking it down by the criteria behind it, making it hard to tell whether a school is strong in one area, such as alumni success, but weak in another, such as retention rate. In addition, Forbes gives weight to alumni salaries without considering their fields: an excellent college that prepares a lot of K-12 teachers may be at a disadvantage to a mediocre one with a lot of engineering majors, simply because most engineers make more money than even the best teachers. US News and Newsweek reward colleges whose admitted students have high SAT and ACT scores, but do not consider how factors such as socioeconomic class can affect those scores, ultimately penalizing colleges that aim to build a diverse student body. The Princeton Review and Newsweek list the top 20 or 25 colleges for a given attribute, such as best professors or happiest students, but go no further, making it impossible to know whether a school that didn't make the cut would have been #26 or #326.

Finally, college rankings perpetuate themselves. Forbes, US News, and Newsweek base part of their rankings on admit rates; a 2011 Harvard Business School report found a 1 percent increase in applicants for every one-rank improvement on US News' best colleges list. Assuming a school doesn't admit more students, more applicants means a lower admit rate, which in turn means a higher ranking. In addition, high rankings in previous years can bolster a college's reputation, leading to even more points in US News' methodology. Other factors influence admit rates, too, such as how aggressively a college recruits students who have little chance of getting in, and its location. After all, it doesn't matter how good the University of Alaska Fairbanks is; there are a lot of people who would never apply there.

The problem isn't just methodology, but how the ranking organizations present their lists. While it may be that a school in the top 100 for US News or Forbes is better than school #358, the 'one, two, three' nature of rankings suggests that school #29 is objectively better than school #31, a conclusion I'd be wary of given how the rankings are compiled. Additionally, the rankings fail to consider two important factors: how good a college is for a person's area of interest (as it turns out, the University of Alaska Fairbanks is a wonderful place for someone who wants to study arctic biology) and overall fit. I don't think any ranking organization would deny this, but neither do they go out of their way to make it clear.

To be sure, rankings are not without merit. They do look at important factors such as retention and graduation rates, availability of financial aid, class sizes, and faculty resources -- all legitimate means of evaluating a school. Nonetheless, rankings are only one piece of the puzzle -- certainly no substitute for a campus visit or talking with current students -- and can be hard to interpret when they aren't broken down by the criteria that go into them. It's not a bad thing to give a ranking some consideration -- especially its general position rather than its exact number. But if some say to take rankings with a grain of salt, I would recommend giving the salt shaker a few more shakes.
