"Measure what can be measured, and make measurable what cannot be measured."
I recently gave a lecture on "Vision Setting and Strategy" to a mix of middle managers and faculty leaders at the university where I am president. I commented that while setting strategy, it is critical to develop metrics simultaneously, as one cannot improve, at least not in a planned and deliberate manner, that which cannot be measured.
In short order, I was challenged by a member of our humanities faculty, who argued that we can improve conditions without being able to measure them. He gave critical thinking and civic engagement as examples.
Taking up the challenge, I suggested there are many examples of parameters that initially appear unmeasurable, but with purposeful reflection are found to be measurable ... including those in the liberal arts.
The conversation reflects a debate brewing nationally about the measurement of student outcomes in higher education, a debate that is heating up at Purdue University. President Mitch Daniels called on faculty to find or develop an assessment tool to measure student progress in skills such as critical thinking and problem solving. Such assessments are in part an answer to research showing that many students do not gain in these areas and in part a response to U.S. businesses that report today's graduates lack such skills.
While Daniels says he is confident the results of any such assessment will show Purdue students are, in fact, learning these skills, his plan has been met with strong and, unfortunately, predictable resistance among faculty and other university stakeholders.
Learning From the Health Care Industry
As a clinical-translational scientist for the past 35 years, I have frequently had to figure out how to measure what was previously considered unmeasurable.
But more relevant are my experiences in academic medicine. As a faculty physician in a clinical setting, I have lived through a 20-year period in which we learned to measure the quality of the care we provide in the most intricate of manners, a goal that has been strongly applauded by consumers of health care services ... including many of the very faculty who resist measuring the quality of their own services.
The fact is that the impact of education on the individual, and on humanity overall, is no less critical than the impact of health care, and so it is no less worthy of measurement.
When pressures started accumulating on hospitals, nurses, and physicians to provide measures of quality and outcomes, we were shocked, flabbergasted ... and angry. How do you measure such a thing when every patient is different, when resources vary greatly, when there are multiple factors that impact the outcome, when medicine is often more art than science, and when, regardless of what we do, none of us lives forever?
Furthermore, we physicians had been in practice for over 2,000 years, and we could easily prove that our efforts helped make the world a better (at least healthier) place than it was two millennia ago.
But measure we did, and there is now unassailable evidence that many of these measures (not all, of course) have helped to rapidly improve outcomes, including reducing medical errors.
And we learned a few things in the process that our higher education colleagues in other disciplines may want to consider. For example, metrics should be:
- Relevant: What are we trying to achieve? Better critical thinking skills, greater knowledge in specific subjects, higher employability, enhanced creativity, problem-solving skills, etc.? We should carefully delineate the universe of relevant desired endpoints first, without consideration of their measurability. And in service sectors like ours, remember that others get to weigh in, e.g., students, their families, employers, taxpayers, legislative bodies, etc.
- Measurable: Since many of the listed endpoints seem unmeasurable on their face, we must deliberately strive to make them measurable, understanding that this will be an iterative process. We may need to begin by assessing parameters that measure endpoints indirectly or tangentially, e.g., progression towards graduation or degrees awarded. However, we should beware of proxies that have little to do with the endpoint of relevance, just because they are easy to obtain. Indeed, employer complaints that earning the degree too frequently does not reflect the attainment of problem solving or critical thinking skills are partly driving the call for better metrics in the first place.
We also learned that we must continuously reassess the metrics we choose, so that we gradually improve their approximation to the ultimate endpoint(s) and minimize the chance that, consciously or subconsciously, we will be tempted to game the system. It takes time to develop a system of value.
And we should be very careful about applying the results of measures of global or group outcomes to the individual, recognizing the heterogeneity in faculty and student circumstances.
Finally, the most important lesson learned was the need to take charge, be proactive, and avoid burying our heads in the sand, hoping this might be only a passing fancy. The measurement of educational outcomes is here to stay, not least because it is the right thing to do.
So it is best to get involved early, and make sure our voices are heard, not in obstruction but in construction. Better to have metrics designed with our input, since we know the intricacies of the field better than anybody, than to have an external body design and mandate them without our input.
We scholars and scientists have been working on measuring the unmeasurable for millennia. Let's use this power to assess the outcome of what we do today.