A Better Way to Measure the Impact of Higher Education


Declining state appropriations, rising student debt, parent and student dissatisfaction with the increasing cost of degrees, and skepticism among employers about graduates' job readiness have raised troubling questions about the value and impact of U.S. higher education. Especially difficult to gauge is the performance of individual colleges and universities.

The Obama administration waded into this tangle two years ago, announcing in August 2013 plans to "measure college performance through a new ratings system" that would enable students and families to select the schools that provide the best value.

The administration eventually backed away from the idea of a government rating system. Just days ago, however, it rolled out its online "College Scorecard," which the White House says "provides the first comprehensive data on costs and student outcomes at nearly all post-secondary institutions in the United States." The site, it says, will help students "make informed decisions about enrolling in higher education and choosing the best college for their needs."

Not so fast. The Scorecard, while perhaps a good idea in theory, has serious shortcomings: it does not account for regional differences, fails to break down the data by field of study, and ignores the impact of transfer students and dropouts.

The publication of the College Scorecard, and the ongoing discussion around it, have made one thing clear: America's colleges and universities need to increase their transparency and develop clear performance standards that demonstrate value received per dollar spent.

Various organizations, including U.S. News & World Report, The Princeton Review, and Forbes, already rate and compare U.S. colleges and universities. Britain's Times Higher Education magazine provides global rankings. But these rankings and lists often rest on misleading or uninformed data and on hidden biases that reflect the creators' notions of quality, which may be at odds with other measures of success.

For example, U.S. News & World Report's "Best Colleges" rankings, the latest edition of which was also just released, give disproportionate weight (22.5 percent) to a school's "reputation," a criterion based on opinion surveys that may themselves have been influenced by previous rankings. In effect, respondents are asked to rate the undergraduate programs of hundreds of universities without any real first-hand knowledge. "Selectivity" is another criterion commonly used in rankings.

Neither measure tells us anything about how well students are learning in the classroom and engaging beyond the classroom. Indeed, what they may tell us is that a school excels at marketing and is able to attract a large pool of applicants - most of whom it intends to reject in order to increase its selectivity score.

Certainly, there must be a better way to determine the value and impact of higher education and how well our colleges and universities are serving the needs of their students, communities, and other stakeholders.

The first step in this process is to recognize that U.S. colleges and universities are all different, with distinctive identities, cultures, student populations, and missions - teaching, research, lifelong learning, community engagement, to name just a few. So a one-size-fits-all approach won't do.

We need an evaluation system that is flexible and nuanced and that considers each institution's characteristics, such as its size and mission. The University of North Carolina at Pembroke, for example, is a mid-sized rural state university. New York University is a large private urban research university with a commitment to globalization. Hamilton College is a small rural liberal arts school. Miami Dade College is the largest community college in the U.S., serving more than 160,000 students. An intellectually honest evaluation system will take into account their profound differences.

How can we assess how well such varied institutions are preparing students for personal and professional success and addressing the needs of their wider communities by providing resources and expertise - if this is part of their mission?

It's not by lumping them all together and rating them based on which school has the largest endowment, turns down the most applicants or is best known.

You determine quality and value by comparing schools with similar characteristics and missions and determining which are doing the best job of fulfilling that mission. If the mission of a state university system is to provide a quality education to state residents at a reasonable price, as the North Carolina state constitution mandates, then you make affordability part of the equation. If job preparedness is paramount, you look at the percentage of students who have job offers in their field of study within six months of graduation; you look at earnings and job satisfaction one, five, and ten years after graduation; you look at job advancement data.

While most college raters in the United States focus heavily on inputs, such as reputation and selectivity, the Higher Education Quality Council of Ontario (HEQCO) in Canada relies primarily on outcome-based metrics, such as "value to students" (including affordability and learning outcomes) and "value to society" (including job creation, innovation, and citizen engagement). The premise of this approach is that what comes out at the end matters more than what goes in at the beginning. That makes sense.

Educators and administrators need to get out in front on this. They can't hide in the ivory tower and claim that "academic freedom" provides them with immunity from the kinds of reality checks that parents, students, donors, employers and politicians are now demanding.

Given the contentious debate about rankings that the Obama administration's proposal triggered, faculty, administrators, and trustees need to work with other interested parties to identify the outcomes that best demonstrate success and to develop assessment systems that reliably measure those outcomes.

This is tricky business, so they shouldn't try to do everything at once. Start with voluntary pilot programs appropriate to each sector. Learn. Then expand.

Scott Cowen is President Emeritus of Tulane University and a Senior Advisor to The Boston Consulting Group (BCG). J. Puckett is a Senior Partner of The Boston Consulting Group and Global Leader of its Education practice.
