Wednesday, July 7, 2010

Student Ratings of Teaching: Part 1.5

Unlike Lee, I am not irritated that articles about student evaluations of teaching lean so heavily on the idea that students are incompetent judges of professors' teaching ability.

As you may recall, in SRT Part 1, I agreed with fellow curmudgeonly critic Olivares that “to think that students, who have no training in evaluation, are not content experts, and possess myriad idiosyncratic tendencies, would not be susceptible to errors in judgment is specious.” Beyond potential problems with students' ability to rate teachers accurately, there is also the question of how motivated they are to do so. Many, perhaps most, students do want their ratings to reflect their true perception of their professor's teaching effectiveness, but others will clearly have other agendas. Especially in a small class, it would not take many students dissatisfied with their expected grade to tank a professor's ratings. This is not to say that SRTs have no informational value; I agree with the SRT supporter Olivares quotes as saying, "Student ratings provide information on how well students like a course." But that is a pretty limited amount of information, and it's unclear how much weight we would want it to carry in important decisions like tenure.

This article describes some higher education reforms of extremely dubious quality that are being considered in Texas public universities, with TAMU at the vanguard. I admit that I have a bias here: in the many years I spent working for a Texas state agency, I saw zero instances of gubernatorial or legislative intervention in the internal operation of departments that made any sense whatsoever and a zillion examples of idiotic and/or ill-intentioned meddling. (My god, I never blogged the lawnmower incident? The Survey Monkey incident? The "why don't we replace state parks with privately-funded amusement parks" incident? What self-control I had.) Thus, on the basis of source credibility alone, I am inclined to view with severe skepticism any plan that comes from a conservative Texas think tank with the full endorsement of Governor Goodhair Emptysuit.

Indicator of Idiocy #1: Their goal to "create a 'simple tool' to measure faculty efficiency." Oh by all means, let's make sure it's simple. We wouldn't want to get bogged down in any of the complexities of measuring this construct. Note that they are suggesting the following data be collected: "salary and benefit cost [huh?], number of students taught over the last year, average 'student satisfaction rating' and 'average percentage' of As and Bs given." If that last metric isn't enough to make you weep until you laugh at the obvious strategy that profs can immediately employ without improving their teaching at all...
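To make that grade-inflation loophole concrete, here is a minimal, hypothetical sketch (in Python) of the kind of "simple tool" one could build from exactly the data fields listed above. The formula, field names, and numbers are my own invention, not anything specified in the Texas proposal; the point is only that any score rewarding the percentage of As and Bs goes up when grading gets easier, with no change in teaching.

# Toy "faculty efficiency" score -- the field names, weighting, and numbers
# below are my own guesses for illustration, not the proposal's.
def naive_efficiency_score(salary_and_benefits, students_taught,
                           avg_satisfaction, pct_a_or_b):
    """Students served per dollar, scaled by average satisfaction
    (out of 5) and by the share of As and Bs given."""
    cost_per_student = salary_and_benefits / students_taught
    return (avg_satisfaction / 5.0) * pct_a_or_b / cost_per_student

# Same professor, same salary, same teaching; only the grading curve changes.
before = naive_efficiency_score(90_000, 120, avg_satisfaction=4.0, pct_a_or_b=0.55)
after = naive_efficiency_score(90_000, 120, avg_satisfaction=4.0, pct_a_or_b=0.90)
print(f"before: {before:.6f}  after: {after:.6f}  gain: {after / before - 1:.0%}")
# A ~64% gain in measured "efficiency" from easier grading alone.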

Indicator of Ill-intentions #1: Their goal to split teaching and research budgets, ostensibly for the sake of transparency. OK, there's the fact that any graduate program is going to find it impossible to distinguish between teaching and research (since grad students spend so much of their time being taught by faculty to do research), but let's ignore that picayune detail for now. Is it credible, for even a moment, that the goal of this proposal is not to reduce research budgets? When they talk about "cost containment," what they mean is "eliminate all that useless research that those lazy, self-indulgent professors are doing on obscure topics like, you know, how to define and measure teacher effectiveness."

Actually, I guess I should be relieved that they have not (yet) suggested turning the state universities of Texas into for-profit companies providing coursework 100% over the Internet. And if it's over the Internet, they could outsource all teaching to the Philippines or wherever labor is cheap these days...though it doesn't get much cheaper than using grad students and adjuncts, so there may be no savings there after all.

[UPDATE: And lest you think my opposition to putting a lot of weight on SRTs stems from my getting bad evals, I received a 4.6 / 5.0 this past semester, which would have put me on track for the $10,000 bonus at TAMU.]

1 comment:

Tam said...

In my experience (which has a small sample size and measures a dubious quality, i.e., my own satisfaction), professors with very high ratings are usually excellent (though occasionally merely very entertaining), and professors with very low ratings are usually not very good, but professors with middling ratings might be mediocre, or good but difficult, or just not so friendly. I enjoyed reading this and re-reading Part 1, which was barely familiar to me by now.

At my (last) undergrad place, I was able to see the numbers for professors' past courses, and I did look, which is where the impression above comes from. I also often looked on ratemyprofessors.com to see what comments people had. The comments, even on that site, were far more illuminating than the numbers that the school collected. So my entirely unscientific opinion is that they should go with a comments-only system. Even though students will write all kinds of nonsense, I suspect some of the comments are useful to the professors, and a discerning reader can also tell what to ignore.

But it's probably borked no matter what.