In a tale worthy of O. Henry's combs and watch fob, just as colleges and universities across the country have begun to get the hang of good practices in learning outcomes assessment, in large part because of the accountability pressure the U.S. Department of Education (USDE) has imposed on accrediting agencies, USDE has now declared learning outcomes impossible to consider in its framework for rating the quality and value of institutions of higher education. Instead, reverting to the outmoded academic practice of judging quality by inputs, the Department's metrics are heavy on data about the economic profile of students (highlighting the percentage of Pell Grantees in the student body is like counting books in the library) but short on true learning outcomes (the seriously flawed federal graduation rate, which measures seat time in one place, is a surrogate for the question of whether students have actually learned anything).
By dismissing the primary work of colleges and universities -- teaching and learning -- as impossible to measure across institutions, the proposed framework exposes the utter folly of the Administration's plan to impose some kind of generic rating system on the thousands of disparate institutions of higher education in this nation.
The inputs metrics set forth in the ratings framework emphasize low-income students while glossing over other students and important student characteristics. In its quixotic effort to socially engineer the economic profile of the student bodies of some elite colleges and universities, the Obama Administration appears to dismiss as unimportant such other fundamental inputs as student abilities, K-12 preparation, academic interests, and the kinds of personal characteristics that lead hundreds of thousands of students to choose Catholic and other religiously affiliated colleges, HBCUs and Minority Serving Institutions, women's colleges, Hispanic-serving and Tribal colleges, and other institutions that align mission with student interests and characteristics quite successfully.
Access to higher education is surely a worthy social goal, and many colleges and universities like my own, Trinity in Washington, do focus on broad access for all students as part of our mission. But rather than providing more support for those of us who do the very hard work of broad access for marginalized students, the inputs metrics of the proposed ratings framework are perversely aimed at elite institutions that have different missions and different student bodies. For quite some time, the Obama Administration has promoted the idea that elite institutions should enroll more low-income students on the theory that such students will suffer opportunity loss if they choose more modest colleges that don't spend a lot of money on climbing walls and presidential expense accounts. The theory actually reinforces elitism, an irony that seems lost on its advocates.
Nothing in the framework reveals what really counts for student success in college: the level of student academic preparation for collegiate level work, the ability of the faculty to teach to a range of learning styles in one classroom, the college's support network and range of engagement strategies, the student's socialization to collegiate culture, and personal factors such as health, family obligations, friends and personal motivation to persist when the going gets rough.
Faculty are absent from the ratings framework as if their work does not count at all. Student learning in college, particularly for at-risk students, is a product of focused, consistent and somewhat relentless faculty engagement with students, who must invest considerable time, struggle, passion and pride in mastering general education goals and major learning objectives. The fact that faculty are completely absent from the framework is probably a good thing, considering the damage that the Department of Education has done to K-12 teachers. Nevertheless, the framework's inability to account for teaching and learning guts the whole idea of a quality rating for an academic institution.
Graduation rates in their current form and future measurements of post-graduate income are not appropriate substitutes for assessments of academic quality. Such academic outcomes assessments do exist in accreditation reports and program reviews, among other internal sources. The assessment data across many academic programs and degree levels simply cannot be reduced to a single factoid, a truth the Department of Education acknowledges as a reason for dismissing consideration of true learning outcomes data entirely, rather than admitting that the idea of a single rating cannot work.
USDE should do the right thing and step back from this expensive, time-consuming and ultimately misguided effort to create a master algorithm that will produce a single rating for the very complex processes that occur in every college and university across a broad range of programs and degree levels. Instead, if the Department truly believes that the metrics it has identified are important for the public to know, then it should select and present the data points on a chart with clear identification of what they measure and what they mean. The White House College Scorecard already exists for this purpose, though its utility to consumers is debatable. Nonetheless, expand the scorecard if the data points in the ratings framework are that important for the public to know. Colleges don't shrink from data; we simply object to its misuse for political purposes.
If access, affordability and quality are the goals, then align each metric to each goal in a sensible way and explain why that is important. But mixing up all of the data into one big pile of fudge that becomes a singular rating unrelated to the real work of higher education in teaching, learning and research is a grave disservice to colleges and universities, our students and faculties, and the remarkable national asset that is American higher education.