Faculty reviews and teaching evaluations have become a real issue at my school lately. Our student ratings are incorporated into our annual reviews, so the numbers count toward the administration's overall evaluation of us. Now, on the one hand, this doesn't actually mean much, since there's no material reward for a good review these days and no one's job is threatened by an average one.
On the other hand, my happiness is still tied to them. Ideally, they measure whether or not your teaching is effective. Frankly, though, teaching evaluations are political. Commitment to a certain type of pedagogy, for example active learning versus "old school" lecturing, is political and cultural, so evaluations generally reflect commitment to a pedagogy. If the people reading them do not share your personal pedagogy, then the numbers are meaningless to you as a teacher.
Let's set that issue aside and move on to the problem of what's actually being evaluated. In my experience, students give you high scores on everything or low scores on everything, depending on how they feel about you. There's probably research somewhere confirming or disputing this; my own scores bear it out. Very rarely do the responses vary from question to question. They vary across students, yes, but not within a single student's form. Someone who gives me a 4 on the first question pretty much gives me 4's all the way down, maybe with one or two variations up or down.
This pattern is most apparent when students rate things that are not within the instructor's control, such as the quality of the textbook in multi-section courses. Logically, the answers to that question should be relatively consistent across all sections. Either the textbook sucks, or it doesn't. How the textbook is presented will vary, yes, but the quality of the book itself is a constant. Yet looking across three sections of the same class, the responses to the textbook question closely parallel the responses overall, which makes the whole point of teaching evaluations seem moot. If students like a teacher, they like the book. If they don't like a teacher, they don't like the book. This suggests that teaching evaluations might as well ask just one question: How much do you like your professor?
As Stanley Fish has written recently, teaching evaluations reward pretty packaging. I’ve always called this the “happy meal.” The happy meal is the marketable teaching package that sneaks some nutritional value into the whole thing. It’s a hard balance to find and often my students walk away with the cheap plastic toy and nothing more. Also, honestly, how much nutrition is in a happy meal? Not much except an apple slice to appease a parent’s conscience. As Fish points out, though, teaching evaluations encourage fast food teaching (not to belabor the whole paradigm shift to consumer-driven education). Teaching evaluations sometimes seem like comment cards at the fast food restaurant. Customers fill them out when they’re really happy or really angry, and their answers are knee-jerk, non-critical, hormonal responses. Management in major universities doesn’t genuinely care about the evaluations unless there’s a glaring problem. The increased push to measure quality and to reward or punish based on teaching evaluations shows how much of an actual shift has occurred in higher education.
Reflecting on all this raises the question of whether students are evaluating what we want them to evaluate. For instance, one question asks whether the instructor connects assignments to the learning outcomes of the class. This is a good question on the surface, and probably a common one. Still, I can't picture my students being able to state the class learning outcomes even by the end of the semester. They could probably say something like, "I'm supposed to learn how to give a speech," or "I'm supposed to learn how to have better relationships." Beyond that, students can barely remember the details without prompting. Given specific questions, like "Tell me what you learned about listening," students could respond. But they are not thinking about objectives A through G listed on the syllabus, or whether the assignments missed objective C. They might have an overall sense of whether they learned what they were supposed to, what the course advertised they would learn, or what they wanted to learn when they enrolled. So what exactly is the question measuring, if the responses are knee-jerk, glandular reactions based on whether or not the students liked the class?
Of course, my response to students' evaluations of me is to remember what I tell my students about their grades: they are a snapshot of one particular moment that might or might not reflect my overall performance and capability. I totally bombed in one class because the chemistry was off; I totally sucked in a given semester because of life problems; I was on a teaching high because I found some exciting new approach; I got to teach a topic or class that I love.
The irony of this position is that, like the administration, I invest meaning and feeling into the evaluations even though I know that their merit is only what I invest in them. Evaluations do offer information, but only with careful interpretation, and not the information that we think they do.