“Higher education too can make a fetish out of ‘objectivity’ and ‘rationality,’” observes John Warner in “The Pitfalls of ‘Objectivity,’” confronting specifically the teaching of composition.
Warner’s argument, however, is a subset of the larger problem with the white men of academia, as I have examined recently. Concepts and terms such as “objectivity,” “scientific,” “valid,” “reliable,” and “rationality” prove extremely powerful in academia and scholarship, yet the great irony of that power is that these concepts and terms are a veneer for maintaining white male power ― inequity grounded in the racism and sexism that academics are prone to refute in their rhetoric while maintaining in their practices.
“Objectivity,” for example, frames a white male subjectivity as the norm (thus “objective”), rendering racialized (non-white) and genderized (non-male) subjectivity as the “other,” as lacking credibility.
Explore the history of which research paradigms count, and you confront the bias in favor of quantitative (experimental and quasi-experimental) research (paradigms created by and maintained by men) over qualitative research (paradigms championed by women and racial minorities) ― the former is “hard,” “scientific,” and “objective” while the latter is “soft,” “personal,” and “(merely) anecdotal.”
In an institution created by white men ― academia ― where the claim is that everything is based on empirical evidence filtered through the rarefied lens of objectivity, we run into a real conflict when unpacking that evidence. Consider, for example, the annual or biannual self-evaluation process linked to promotion, tenure, and merit, as well as its embedded key element: student evaluations of faculty.
As a tenured full professor, I am currently drafting my biannual self-evaluation, a process I have been using as a political document to confront the inherent inequity in both the faculty evaluation process and the traditional use of student opinion surveys.
The self-evaluation is flush with traditional norms about what counts as excellence—peer-reviewed publications, for example, but not public intellectual work. And as is the case at many colleges and universities, student evaluations are central evidence in the entire faculty evaluation process.
We are directed in our self-evaluations to “include numerical results from student opinion survey forms” ― the double whammy of quantitative data and the ubiquitous student evaluations. The narrative at my small selective liberal arts university is that of the three areas of evaluation ― teaching, scholarship, and service ― teaching remains primary; therefore, I make my strongest advocacy case (nearly equaled by my argument for valuing my public intellectual writing) about how we determine faculty teaching quality.
My self-evaluation of teaching effectiveness begins, then:
My teaching effectiveness has exceeded [our] high standards for teaching over the past two academic years. In the evidence below, I do not refer to student opinion surveys because they have been shown to be biased (against women and faculty of color) and to be poor indicators of student learning. Since I strongly support concerns raised in our Gender Equity study and FU’s Diversity and Inclusion initiative, I believe use of the student opinion surveys is contradictory to those goals.
If, I argue, our university has gender equity and diversity/inclusion initiatives, then using student opinion surveys contradicts those goals because the evidence is overwhelming that these student surveys are gender and race biased; they serve the interests of white male academics.
Here, then, are some key readings and research to support rejecting and resisting student evaluations of faculty:
- Boring, A., Ottoboni, K., & Stark, P.B. (2016, January 7). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research.
- Uttl, B., White, C.A., & Gonzalez, D.W. (2017, September). Meta-analysis of faculty’s teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42.
- MacNell, L., Driscoll, A., & Hunt, A.N. (2015). What’s in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291–303. doi:10.1007/s10755-014-9313-4
- Student evaluations of teaching are not only unreliable, they are significantly biased against female instructors, Anne Boring, Kellie Ottoboni, and Philip B. Stark, LSE Impact Blog
- How Student Evaluations Are Skewed against Women and Minority Professors
- New study could be another nail in the coffin for the validity of student evaluations of teaching
- New analysis offers more evidence against student evaluations of teaching
- Study finds gender perception affects evaluations
As a white man in academia, if my claimed scholarly focus and agenda are grounded in equity, social justice, and not only naming but also dismantling racism, sexism, and white privilege, then I am beholden to both word and action in those pursuits.
In the ideal, yes, academia has the potential to be a model for equity and justice; in reality, academia is more often than not a white man’s world with garnishes of elevated rhetoric.
As Warner concludes in his interrogation of “objectivity,” students deserve a commitment to their “agency that allows them to make space for their ideas in the world,” adding what we can and must extrapolate to all of academia:
In this context, “objectivity” is not a value, but a pose, and one that’s usually sussed out by students as phony. They easily recognize it as a confidence game because it’s a game they’d previously been trying to practice, and during that practice they knew it was a pose.
Too often this pose in higher education is that of a gatekeeper, a position garnered through privilege but flaunted as merit.
If it were only phony, maybe we could brush it aside, but this pose of white male academics is determinant—it shapes, defines, and controls the careers and lives of everyone.
Changing academia in the pursuit of equity must be the work of white men, and two ways to begin that shift are to reimagine faculty evaluations and to end the use of student evaluations of faculty in that process.
The footnote I provide includes the bulleted research and links above.