Reimagining University Rankings

Ten years ago this month, Colin Diver, the President of Reed College, penned an article explaining the single most liberating decision an administrator at an institution of higher learning could make: Withdrawing from the U.S. News & World Report college rankings.

Reed had announced a decade earlier that it would no longer return U.S. News questionnaires, decrying how the one-size-fits-all nature of the ranking scheme promotes homogeneity, how the emphasis on factors like graduation rates creates perverse incentives to make a student's educational experience easier rather than more challenging, and how the rankings -- which depend heavily on unaudited, self-reported data -- leave room for inaccuracies if not outright cheating.

While the article inspired plenty of envy from Diver's fellow college presidents, it did not inspire significant changes to the U.S. News methodology or to public perceptions. The flawed practice of rank-ordering diverse colleges and universities according to simple inputs like faculty resources, student retention, and alumni giving continues to have an outsized impact on how students select a school, how employers weigh recent graduates, and how funders and policymakers assess the worthiness of institutions of higher ed. As New York Times columnist Frank Bruni put it, there's a "paradox of pervasive contempt, and yet widespread obeisance" in the relationship between universities and their rankers.

It doesn't have to be this way. In an age in which universities matter more than ever -- as economic engines that cannot be outsourced overseas, as sources of solutions to mounting technical and social challenges, and as powerful vehicles for social mobility amid widening inequality -- we need to measure institutions of higher learning according to their real contributions to society. The best way to reimagine university rankings is to move from measuring mere inputs (like wealth and selectivity) toward measuring real-world outcomes (like innovation and educational enrichment). This is too big a task for any single crew of magazine editors. But it's a task that government, businesses, and philanthropies can and should collectively accomplish.

When Raymond Hughes, then the president of Miami University in Ohio, hatched the idea of formally ranking universities ninety years ago, he was on to something important. His 1925 survey of university reputations and the expanded studies that emerged over the following decades were helpful in catalyzing healthy competition among universities to build names for themselves by undertaking important research and contributing to the public good.

So what went wrong?

The rankings have remained reductionist even as they've become enormously influential. Because it's incredibly difficult to measure all the subtle and subjective factors that make an institution worthy or not, U.S. News -- since the release of its first "America's Best Colleges" list in 1983 -- has used a changing set of proxies for quality: academic reputation (now weighted at 22.5%), student retention (also 22.5%), faculty resources (20%), student selectivity (12.5%), financial resources (10%), graduation rate (7.5%), and alumni giving (5%). These proxies have been widely reviled for creating perverse incentives (to ramp up spending, make coursework easier, inflate grades, and waste resources on advertising). The overall enterprise has fueled anxiety and implied a false sense of authoritative precision and rigor. U.S. News is hardly the only culprit: The Times Higher Education Supplement functions as an international equivalent, Princeton Review rankings that focus on factors like campus housing create similar spending pressures, and the space is now filled with dozens of competitors, each with a specific niche but similar pitfalls.
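The reductionism is easy to see in miniature. The sketch below uses the category weights cited above, but the per-category scores and the two schools are invented for illustration; it simply shows how a weighted sum collapses very different institutions into a single ordinal number, rewarding whichever school rates highest on the proxies rather than whichever adds the most value.

```python
# Hypothetical illustration of a weighted-proxy composite score.
# Category weights mirror those cited in the text; everything else
# (the 0-100 category scores, the two example schools) is invented.

WEIGHTS = {
    "academic_reputation": 0.225,
    "student_retention":   0.225,
    "faculty_resources":   0.200,
    "student_selectivity": 0.125,
    "financial_resources": 0.100,
    "graduation_rate":     0.075,
    "alumni_giving":       0.050,
}

def composite_score(scores):
    """Collapse seven 0-100 category scores into one weighted number."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# An invented wealthy, selective school...
wealthy = {"academic_reputation": 95, "student_retention": 97,
           "faculty_resources": 90, "student_selectivity": 98,
           "financial_resources": 95, "graduation_rate": 96,
           "alumni_giving": 60}

# ...and an invented school that takes less-prepared students and adds
# enormous value. The formula sees only the input proxies, so the
# wealthy school scores higher regardless of value added.
value_add = {"academic_reputation": 70, "student_retention": 85,
             "faculty_resources": 60, "student_selectivity": 40,
             "financial_resources": 50, "graduation_rate": 80,
             "alumni_giving": 30}

print(composite_score(wealthy), composite_score(value_add))
```

Nothing in the formula measures how much either school changed its students, which is precisely the critique that follows.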

The rankings have ultimately missed the mark by confusing inputs with outcomes. In 2013, Bill Gates remarked that "there is a perverse metric rating system for U.S. colleges. The problem is that it gives credit to schools that attract the best students rather than schools that take poorly prepared students and help them get ready for the next stage." Norman Augustine, a former top federal official and CEO of Lockheed Martin, put the problem simply: "The youth who get into Harvard probably would succeed because of what it took to get into Harvard, not because of what he or she learned at Harvard."

In an age when we need universities to be focused on advancing public priorities like economic growth, social mobility, and innovation, there's an even bigger critique of university rankings: They encourage self-serving inputs like building fancier dining halls rather than societal outcomes like tackling climate change or antibiotic resistance.

So what might outcome-oriented college ratings look like?

Rather than measuring the wealth, prestige, and attractions of the school, they might focus on quantifying the change between when students enter and when they leave, and how the culture at a school prepares students to tackle real-world challenges. This means capturing not just traditional outcome metrics like post-graduation salaries, but also richer data on how the educational experience leads to positive personal and societal impact. For instance, this could mean soliciting alumni input that goes beyond the usual university experience ratings, probing the degree to which institutions prepared graduates to achieve their goals through their early and mid-careers. It could also mean carefully curating data from employers on the effectiveness of graduates from different universities -- again as they progress through their careers.

At major research universities, outcome-oriented ratings could assess innovation and effectiveness by measuring outcomes like new drugs, products, business spinoffs, and demonstrable impacts on governmental decision-making. These kinds of assessments will not only better serve students as reliable indicators of university quality; they will also push universities to recognize their inventive potential, to capitalize on their own strengths, to create private-sector jobs, to stimulate economic growth, and to build a better factual basis for public-sector choices.

These kinds of data are far harder to collect and synthesize than the metrics used in current ranking systems, but the insights they provide could actually shine light on how universities are really serving society.

Washington Monthly has already taken a step in this direction by developing new socially oriented ratings that "ask not what a college can do for you but what the college can do for the country." But while its measures assess important factors like research spending, the percentage of faculty in the National Academies, and the percentage of students receiving need-based financial aid, the focus is still squarely on inputs. And while President Obama's new proposal for official ratings based on access, affordability, and value includes outcomes like post-graduation salary, it still places no emphasis on a university's broader social impact or contributions toward achieving graduates' personal aspirations.

We get what we measure. And, right now, we're getting schools primed to attract and graduate students but not necessarily to create transformative change in their lives or in the society in which they live. While it's tempting to follow Reed's example and flee from the flawed ratings game, there's a better option: Let's create more nuanced ratings that measure inspiration, service, and enrichment.