At a well-publicized speech in Buffalo, New York on August 22, 2013, President Barack Obama announced that he would create a plan for sweeping reform of higher education. The details were sketchy, but the projected plan would institute a rating system of colleges and universities nationwide. The stated purpose of the plan is to hold institutions of higher education accountable to consumers, especially in the areas of access, affordability, and outcomes.
The president promised to solicit ample input from all interested constituencies before unveiling the actual plan, and he charged the Department of Education with holding public forums and working out the plan's details.
The outline of the plan originally presented by the president drew substantial criticism from university leaders, some public officials, and numerous professional organizations. Nineteen of the nation's most important and influential professional organizations representing higher education (such as the American Council on Education, the Association of Public and Land-Grant Universities, and the National Association of Independent Colleges and Universities) cosigned a letter to the administration expressing serious reservations about the plan.
Despite receiving a huge volume of input on the proposed plan -- the overwhelming majority of it negative -- Department of Education officials apparently have made little change to it, judging from the "framework" of the final plan released by the administration just this month.
The plan still proposes to rate colleges using metrics that are problematic, if not seriously flawed. On the surface, several of the metrics sound reasonable, but a close examination reveals a system with very little nuance or awareness of context.
Take completion rates, for example. As I point out in an earlier post, while we all want more college students to earn their degrees and graduate, a simple statistic of how many of an institution's full-time students graduate in six -- or, worse, four -- years does not account for how the demographics of the college student population have changed over the last several decades.
It would be entirely reasonable to expect a high percentage of a college's students to proceed smoothly through four or so years of school and graduate "on time" if that population were composed primarily of students who came from relatively affluent families and who were not the first in their families to attend college -- as was the case 30 or 40 years ago. Such students enjoy significant family and financial support and don't typically need to work while attending college.
Today, however, many of our students are the first in their families to attend college; many are people of color, or citizens of other countries; some are single parents, or were raised by one; females outnumber males; and a large number of students are older than traditional age. Given these demographics, many students -- perhaps the majority -- have no choice but to work while they are attending college. Needless to say, holding down a job -- even a part-time job -- while attending college full time significantly decreases a person's likelihood of sailing straight through four years of college to graduation.
The recent large influx of military veterans into the ranks of college attendees also makes today's student population nothing like that of the Father Knows Best years. Vets often bring their own challenges to college life, including problems with sustained concentration and struggles with post-traumatic stress, so these students need extra support to succeed -- and to do so in a timely fashion.
As currently formulated, the proposed "reform" plan will use a stark completion percentage devoid of any accounting for personal, institutional, or demographic context. As such, it is hardly a useful statistic for rating colleges -- not to mention that it could become an incentive for some colleges to lower their standards in order to boost their completion rates.
Another misguided metric still included in the plan despite widespread criticism is "labor market success," which would track the salaries of just-graduated students and compare an institution's average to those of other institutions. The recently published "framework" admits that this metric drew the most criticism and states that the final plan will attempt to devise a "substantial employment" metric instead, but even this misses the point.
Tracking how much students earn is misleading and will contribute to a skewed "rating" of institutions. The students graduating from an institution heavily oriented toward the arts and humanities, for example, will likely be entering relatively low-paying professions. In contrast, students graduating from a college with a heavy orientation toward health care professions or engineering will be earning considerably more.
Many of the metrics in the proposed reform plan are equally misguided, but I will not address them here. My larger point is that the architects of the plan need to pay closer attention to the input they receive. The proposed plan at its best will be grossly misleading, and at its worst it will be actively destructive, to the extent that it incentivizes colleges to make poor choices for the wrong reasons.
Better yet, perhaps the administration should scrap the proposed plan altogether and instead adopt a version of an already existing system operated by the National Association of Independent Colleges and Universities called the University and College Accountability Network (U-CAN), a website designed to offer prospective students and their parents "concise, consumer-friendly information about the nation's private, nonprofit colleges and universities in a common format." Why reinvent the wheel?