Are Student Biases Skewing Higher-Ed Teaching?

Graham breathes a sigh of relief…

because ten years ago he was headed for a teaching career, maybe. Then the Great Recession scrambled those plans. Perhaps it’s just as well. There’s increasing evidence that student bias plays a much bigger role in student evaluations of teachers than was believed, and that it is starting to skew higher education, even turning it away from its teaching mission. The likely first effects?

“A growing number of faculty lawsuits against employers,” predicts Peter F. Lake, director of the Center for Excellence in Higher Education Law and Policy at Stetson University.

With society much more polarized on matters such as sex discrimination, and colleges having become more likely to deny faculty members promotions or contract renewals based on considerations such as student feedback, “it does not surprise me to see these issues coming forward,” he says.

In reviewing students’ anonymous feedback on instructors, Mr. Lake says, it can be “hard to interpret whether what you are reading is hateful or helpful.”

Students can be harsh critics, without the knowledge or restraint to back up the critique

The practice of using students’ anonymous course evaluations to judge faculty performance has long been controversial, with a growing body of research finding that students’ assessments are biased by race, gender, age, and factors such as personal attractiveness.

About half of colleges have shifted from asking students to submit written course evaluations to asking them to fill out such forms online, and the online questionnaires have much lower response rates and are more likely to convey strongly negative or positive opinions, the American Association of University Professors found in a 2014 survey that asked faculty members about their teaching evaluations. Many of the survey’s more than 9,000 respondents reported that students filling out the anonymous, online questionnaires had begun using an “abusive and bullying tone,” an AAUP summary of the survey’s findings says.

“I have read student evaluations that use language that I would not repeat to anybody unless I was forced to in a court of law. It’s gross,” says Linda B. Nilson, director emeritus of Clemson University’s Office of Teaching Effectiveness and Innovation and a researcher of the use of such instruments.

Read All About It

And as a teacher commented in an earlier posting of this article, it is hard for an instructor to alter teaching methods or class style when anonymous students will not explain what the issue is. The standard evaluation question, “Would you recommend this instructor?”, could mean the student failed the course, found the work hard, or simply did not enjoy the lecture, the material, or the teaching style.

Quite how that question can help the teacher, when all of its possible nuances are collapsed into a numerical score from 1 to 10, is difficult to understand. And being in a job where you’re unsure what’s required, and what constitutes an effective performance, is a classic workplace nightmare.

John wryly remembers …

… his student days when everything he said or did was generally regarded as wrong. Talk about a 180!

He also thinks there is a whole new business to be had in helping the educators. In business, everyone thinks they can write a survey. And they can. But the answers that come back are rarely meaningful, let alone helpful. It seems that, despite being educators, they too do not understand how to put together a survey.

Think I’m wrong? Consider how bad even professional survey designers are at asking questions and getting the right answers. So how does a simple teacher get it right?

Graham goes on …

Survey design is indeed a difficult discipline. But it’s not the teacher designing these surveys; it’s the institution’s own assessors. Which in a sense makes it even worse: they are paid to know this stuff, and to understand the need to tailor questions, yet they still revert to boilerplate.

John comments …

… yes … Graham does indeed go on …

… and on and on!