Student ratings

We’ve developed a way to filter student feedback to ensure…

(MENAFN – The Conversation)

This week, many Australian universities will be sending academics the results of first semester student assessment surveys.

For some, this will be a worrying and unpleasant time. The comments university students make anonymously in their teaching evaluations can leave academics feeling fearful, upset and demoralized.

And with good reason. As a 2021 survey of Australian academics and their experiences of student feedback found:

Hurtful or abusive comments may remain permanently recorded as a measure of performance. These files may have an impact on requests for promotion or job retention.

The authors of the 2021 survey, led by Richard Lakeman of Southern Cross University, are among those calling for the removal of anonymous online surveys. Some academics, burned by their experience of student feedback, say they no longer open or engage with student evaluation reports. They said the risk of harm outweighed the benefits.

Read more: ‘Lose weight’, ‘old fool’: Universities should no longer ask students for anonymous comments about their professors

In the Netflix series The Chair, a memorable scene shows Professor Joan Hambling burning her students' evaluations. Clearly, a less drastic solution is needed.

Student feedback can be genuinely helpful in improving teaching standards, and it is important that students have their say.

We have developed a filtering system using machine learning (where software improves its performance by "learning" from data) that allows students to report on their experiences while protecting academics from unacceptable comments.

Read more: Read the student survey responses shared by academics and you'll see why Professor Hambling is right to burn her own

Why a new approach is needed

University codes of conduct remind students of their general obligation to refrain from abusive or discriminatory behavior, but not specifically with regard to student evaluations.

Instead, universities rely on self-regulation or on others reporting incidents. Some institutions use profanity blockers to filter comments, but even these often fail to detect emerging abusive terms in online discourse.

In developing our screening system, we wanted to:

  • promote the well-being of staff and students
  • improve the reliability and validity of student feedback
  • improve confidence in the integrity of survey results.

We have developed a method using machine learning and a dictionary of terms to filter out unacceptable student comments. The dictionary was created by QUT based on historically identified unacceptable comments and incorporating previous research on abusive and discriminatory terms.
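The dictionary component described above can be illustrated with a minimal sketch. The term list and helper function below are purely hypothetical stand-ins (the actual dictionary developed by QUT is not public), showing only the general idea of matching comments against a curated list of unacceptable terms:

```python
# Hypothetical sketch of dictionary-based comment screening.
# UNACCEPTABLE_TERMS is an illustrative placeholder, not the real QUT dictionary.
UNACCEPTABLE_TERMS = {"idiot", "useless", "pathetic"}

def flag_comment(comment: str) -> bool:
    """Return True if the comment contains a term from the dictionary."""
    # Normalise: lower-case and strip surrounding punctuation from each word.
    words = {w.strip(".,!?\"'()").lower() for w in comment.split()}
    return not words.isdisjoint(UNACCEPTABLE_TERMS)
```

In practice a dictionary match alone is too blunt, which is why the method pairs it with a machine-learning model that can catch abusive comments phrased without any listed term.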

Our ‘Screenomatic’ solution

There is little published work on detecting unacceptable or abusive comments in student evaluation surveys, so our team adapted previous research on detecting misogynistic tweets. This transferred well because the student comments we examined were often similar in length to the 280-character limit of a tweet.

Our approach, which we call "Screenomatic", automatically reviewed over 100,000 student comments in 2021 and identified those that appeared to be abusive. Trained evaluation staff manually reviewed approximately 7,000 flagged comments, and the machine-learning model is updated after each semester. Each update improves the accuracy of auto-detection.
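The flag-review-retrain loop can be sketched as follows. The class name echoes the article's "Screenomatic", but every internal detail is an assumption for illustration: the real system uses a trained machine-learning classifier, whereas this toy "model" merely collects words seen only in reviewer-confirmed abusive comments.

```python
from dataclasses import dataclass, field

@dataclass
class Screenomatic:
    """Toy human-in-the-loop screening sketch (internals are assumptions)."""
    labelled: list = field(default_factory=list)   # (comment, is_abusive) pairs
    abusive_vocab: set = field(default_factory=set)

    def review(self, comment: str, reviewer_says_abusive: bool) -> None:
        # A human reviewer's decision becomes training data.
        self.labelled.append((comment, reviewer_says_abusive))

    def retrain(self) -> None:
        # Stand-in for retraining: keep words that appear only in abusive comments.
        ok_words, bad_words = set(), set()
        for text, is_abusive in self.labelled:
            (bad_words if is_abusive else ok_words).update(text.lower().split())
        self.abusive_vocab = bad_words - ok_words

    def flag(self, comment: str) -> bool:
        # Auto-detection step: flag comments containing learned abusive words.
        return any(w in self.abusive_vocab for w in comment.lower().split())
```

The design point is the feedback cycle: flagged comments go to trained staff, their decisions are fed back as labelled data, and each semester's retraining sharpens the next round of auto-detection.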

Read more: Gender bias in student surveys of teaching has increased with remote learning. What can universities do to ensure fair treatment of female staff?

In the end, 100 comments were removed before the results were released to educators and supervisors. University policy allows comments to be re-identified in cases of potential misconduct, and the central evaluation team contacted these students to remind them of their obligations under the code of conduct.

The Screenomatic model can help protect both teachers and students. Staff are shielded from abuse, and at-risk students (those whose comments indicate they need mental health support, contain allegations of bullying or harassment, or threaten staff or other students) can be offered help. Universities can share data to train the model and keep it current.

Importantly, the process allows universities to act ethically, harnessing student voices while protecting people's well-being.

Helpful feedback, no abuse

The number of educators who receive abusive comments may be relatively small. However, it is still unacceptable that universities continue to expose their staff to offensive comments in full knowledge of their potential impact.

Read more: Our university professors were already among the most stressed in the world. COVID and student feedback only made it worse

With last year's High Court ruling on liability for defamatory messages and efforts to improve online safety, there is growing recognition that people should not be able to post anonymous, harmful messages with impunity.

After all, the cost of screening responses pales in comparison to the cost to individuals (including mental health or career consequences). And that’s ignoring the potential costs of litigation and legal damages.

Ultimately, anonymous comments are read by real people, as one tweet in response to Lakeman's findings pointed out.

The Screenomatic model goes a long way in enabling “tons of useful feedback” to achieve its intended purpose while ensuring that people are not harmed in the process.
