Higher Education Metrics: Caveat Emptor, Caveat Venditor... Oh, Heck: Caveat Omnis

"There are three kinds of lies: lies, damned lies, and statistics" is a quote that Mark Twain attributed (possibly erroneously) to British politician Benjamin Disraeli. Regardless of the quote's origin, it's often used when someone is attempting to discredit a counter-view that relies on data and the interpretation of those data.

As an information, data, and statistics junkie, I rarely find statistics per se to be deceptive -- incorrectly applied, yes, but deceptive, no. What can be deceptive in statistics, however, are the underlying assumptions and the full description of the data. As an example of full description of data, at Oregon Tech we are justifiably proud of our post-graduate success, which was affected very little even during the recent post-2008 recession. We boast that from 2008 to now, 90 to 98 percent of Oregon Tech's graduates were either in a career-track job or in an advanced graduate or professional program within six months of graduation. Obviously, we can only make these claims based on data from the graduates who respond to our post-graduation inquiries. Our goal is a response rate of at least 50 percent, and during my time at Oregon Tech we have regularly done better than that (60 percent, 58 percent, 67 percent, and 59 percent for 2008 through 2011, respectively). This is more of a "truth in advertising" issue for us, so we occasionally add the caveat "...of those who responded to our survey" as a way of reminding people that not every graduating student returns our survey.
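
To make that caveat concrete, here is a minimal sketch in Python -- the counts are invented for illustration, not our actual survey numbers -- of how much room non-response leaves between the rate we can report for respondents and the rate for the whole graduating class.

```python
# Minimal sketch: how survey non-response bounds a reported placement rate.
# All counts below are invented for illustration.

def placement_bounds(graduates, respondents, placed_among_respondents):
    """Return (reported_rate, worst_case, best_case) as fractions."""
    reported = placed_among_respondents / respondents
    non_respondents = graduates - respondents
    # Worst case: no non-respondent is placed; best case: all of them are.
    worst = placed_among_respondents / graduates
    best = (placed_among_respondents + non_respondents) / graduates
    return reported, worst, best

reported, worst, best = placement_bounds(graduates=500, respondents=300,
                                         placed_among_respondents=285)
print(f"Reported (respondents only): {reported:.0%}")              # 95%
print(f"True rate could range from {worst:.0%} to {best:.0%}")     # 57% to 97%
```

Both extremes are unrealistic, but the spread between them is exactly why the "...of those who responded" caveat is worth repeating.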

Another example of full description of data in higher education is the definitions of "faculty" and "administration." One would think that such straightforward categories of employment in higher education would be standard and used consistently across the panoply of our post-secondary landscape. Au contraire! Depending on what higher-education system you are in, "faculty" may include: part-time faculty, adjunct faculty, clinical faculty, extension-office faculty (in some land-grant universities), coaches, and contract faculty who teach only distance-education classes, as well as the more conventional full-time tenured and tenure-track faculty. Similarly, administration has no obvious cutoff in rank name or job category, and may include ranks from department chairs and assistant directors to presidents and chancellors, with myriad administrative ranks modified by vice, associate, assistant, interim, acting, etc. So when various higher-education entities report faculty-to-administration ratios, those ratios can have dramatically different values depending on the underlying definitions of faculty and administration, including the number of people employed by a university or college who are not considered to be either faculty or administration (e.g., hourly employees).
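
As a rough illustration -- the job categories and headcounts below are invented, not drawn from any particular institution -- a few lines of Python show how the same workforce can yield very different faculty-to-administration ratios depending on where the definitional lines are drawn.

```python
# Minimal sketch: the same workforce yields very different faculty-to-
# administration ratios depending on which job categories count as "faculty"
# and which as "administration." Headcounts are invented for illustration.

headcount = {
    "tenure_track_faculty": 180, "adjunct_faculty": 220, "clinical_faculty": 40,
    "coaches": 25, "department_chairs": 30, "assistant_directors": 45,
    "deans_and_above": 20, "hourly_staff": 400,
}

def ratio(faculty_keys, admin_keys):
    faculty = sum(headcount[k] for k in faculty_keys)
    admin = sum(headcount[k] for k in admin_keys)
    return faculty / admin

# Narrow definitions: only tenure-track faculty vs. senior administrators.
narrow = ratio(["tenure_track_faculty"], ["deans_and_above"])
# Broad definitions: everyone who teaches vs. anyone with an administrative title.
broad = ratio(
    ["tenure_track_faculty", "adjunct_faculty", "clinical_faculty", "coaches"],
    ["department_chairs", "assistant_directors", "deans_and_above"],
)
print(f"Narrow definitions: {narrow:.1f} faculty per administrator")  # 9.0
print(f"Broad definitions:  {broad:.1f} faculty per administrator")   # 4.9
```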

So, can universities and colleges manipulate rankings by how data are reported, defined, or otherwise portrayed? Absolutely. Can subsequent rankings of universities and colleges that rely on those data be manipulated a lot? That depends. Rankings that use several different types of data are unlikely to change much from year to year, whereas rankings that rely primarily on one or very few data sources may show dramatic differences from year to year. Large-scale changes in ranking also can occur from year to year if a fairly large number of colleges and universities are separated by very small differences in values (e.g., 10 colleges tied for No. 20, followed by 15 colleges tied for No. 30, followed by eight more colleges tied for No. 45, with each category separated by a small percentage of the overall possible value range).
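
A small Python sketch -- with invented scores -- illustrates the point: when dozens of schools sit within a fraction of a point of one another, a tiny change in one school's score can move it many places in the ranking.

```python
# Minimal sketch: tightly clustered scores make rankings volatile.
# Scores are invented for illustration.

scores = {f"College {i:02d}": 80.0 - 0.06 * i for i in range(40)}  # tight cluster

def rank_of(name, table):
    ordered = sorted(table, key=table.get, reverse=True)
    return ordered.index(name) + 1

before = rank_of("College 35", scores)
scores["College 35"] += 1.0          # a 1-point bump on a 100-point scale
after = rank_of("College 35", scores)
print(f"College 35 moved from #{before} to #{after}")  # from #36 to #20
```

The underlying scores barely changed, but the ordinal position -- the number everyone quotes -- changed dramatically.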

In my opinion, the more relevant question is whether colleges and universities actually do manipulate rankings or outcomes through their reported data. There are some fairly well-known examples of this from recent rankings, ranging from defensible judgment calls about operational definitions to totally unacceptable practices, as documented in various media outlets (e.g., USA Today, Forbes, Chicago Tribune, Chronicle of Higher Education, Time).

It is not just rankings that draw controversy and are affected by underlying data and operational interpretations. More states and education systems are moving toward outcome-based funding, which increases the temptation to manipulate both the measures themselves and the data used to underpin those measures and outcomes in order to maximize rewards under the funding structure. Higher-education professionals and the various boards and legislatures that oversee them have to keep a laser focus on one fundamental question: Are the metrics proposed for measuring and reporting appropriately driving a higher-education system towards desired outcomes? If a state's higher-education goal is simply more graduates, then yes, absolutely assess and count the number of degrees awarded. But if the goal is more in-state graduates, more STEM graduates, or more graduates from underrepresented groups, then counting those qualified degrees becomes a bit trickier.
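
To see why, consider a minimal sketch -- the records below are invented -- in which the same graduating cohort produces different counts depending on which qualifiers the funding formula attaches to a "degree."

```python
# Minimal sketch: counting "qualified" degrees depends entirely on which
# filters a state's goal implies. Records are invented for illustration.

degrees = [
    {"resident": True,  "stem": True,  "underrepresented": False},
    {"resident": True,  "stem": False, "underrepresented": True},
    {"resident": False, "stem": True,  "underrepresented": True},
    {"resident": True,  "stem": True,  "underrepresented": True},
]

total = len(degrees)
in_state = sum(d["resident"] for d in degrees)
stem = sum(d["stem"] for d in degrees)
in_state_stem = sum(d["resident"] and d["stem"] for d in degrees)

print(f"All degrees: {total}")            # 4
print(f"In-state: {in_state}")            # 3
print(f"STEM: {stem}")                    # 3
print(f"In-state STEM: {in_state_stem}")  # 2
```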

The ability to produce consistent data, together with well-understood and correctly applied definitions, is a critical step in making sure that assessments are done fairly and evenly. I did not use the term "correctly" to describe how assessments (outcomes) are done, because what is correct for one set of preferred outcomes based on the underlying student demographics may not be correct for another set of preferred outcomes based on different student demographics. The most obvious of these differences is time to degree. Although time to degree is a valuable measure for assessing the progress of full-time, entering-freshman students who do not transfer to another university, it's an almost useless metric for transfer students, part-time students, and students who stop and start their degrees based on financial need.
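
A minimal sketch -- with invented dates -- shows how the same transfer student can produce very different time-to-degree numbers depending on which start date the metric uses.

```python
# Minimal sketch: the same transfer student yields very different "time to
# degree" figures depending on the start date used. Dates are invented.

from datetime import date

community_college_start = date(2008, 9, 22)   # first enrolled anywhere
university_start = date(2011, 9, 26)          # transferred in
graduation = date(2014, 6, 14)

years_from_first_enrollment = (graduation - community_college_start).days / 365.25
years_at_degree_granting_school = (graduation - university_start).days / 365.25

print(f"Measured from first enrollment: {years_from_first_enrollment:.1f} years")  # ~5.7
print(f"Measured from transfer: {years_at_degree_granting_school:.1f} years")      # ~2.7
```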

In using rankings to assess universities, know what you want, know what is important to you, and closely examine the ranking methodologies to find the universities that fare well in those areas. The same advice applies to outcome-based assessments -- states and boards need to know what they want, know what is important, and make absolutely certain to tailor outcomes to meet those needs and expectations. It really is the only way to use statistics to distinguish lies and damned lies from truths and partial truths. Caveat omnis!
