Fairness, accountability, and transparency in algorithmic comment moderation

Moderating reader responses under news articles is very labour-intensive. With the help of artificial intelligence, moderation becomes possible at a reasonable cost. Since any application of AI must be fair and transparent, it is important to explore how media can comply with these values.

Objective

This PhD project focuses on the fairness, accountability, and transparency of algorithmic comment moderation systems. It provides a theoretical framework as well as actionable measures that will support news organizations in complying with recent policy making towards value-driven implementation of AI. As more and more news media enter the age of AI, they must embed fairness, accountability, and transparency in their use of algorithms.

Results

Although algorithmic moderation is very attractive from an economic point of view, news media should know how to mitigate inaccuracy and bias (fairness), disclose the workings of their algorithms (accountability), and let their users understand how algorithmic decisions are made (transparency). This dissertation advances knowledge on these topics.
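The combination described above — automating confident decisions, routing uncertain cases to human moderators, logging every outcome, and attaching a plain-language reason — can be sketched in a few lines of Python. Everything below (the `moderate` function, the thresholds, the hypothetical toxicity scores) is illustrative and not part of the dissertation itself:

```python
# Illustrative sketch: a moderation pipeline that routes low-confidence
# decisions to human review (fairness), records every decision in an audit
# log (accountability), and attaches a user-facing reason (transparency).
# The toxicity scores here are hypothetical stand-ins for a real model.

from dataclasses import dataclass

@dataclass
class Decision:
    comment: str
    score: float          # model's toxicity estimate, 0.0-1.0
    action: str           # "publish", "reject", or "human_review"
    explanation: str      # user-facing reason for the decision

def moderate(comment: str, score: float,
             reject_above: float = 0.9, publish_below: float = 0.3) -> Decision:
    """Three-way routing: only confident predictions are automated."""
    if score >= reject_above:
        return Decision(comment, score, "reject",
                        "Rejected automatically: flagged as likely abusive.")
    if score <= publish_below:
        return Decision(comment, score, "publish",
                        "Published automatically: no issues detected.")
    # Uncertain middle band: defer to a human moderator to limit the
    # impact of model inaccuracy and bias.
    return Decision(comment, score, "human_review",
                    "Held for review by a human moderator.")

# Accountability: keep a record of every algorithmic decision.
audit_log = []
for text, score in [("Great article!", 0.05),
                    ("You are an idiot", 0.95),
                    ("Hmm, a dubious claim", 0.50)]:
    decision = moderate(text, score)
    audit_log.append(decision)
    print(decision.action, "-", decision.explanation)
```

The key design choice is the uncertain middle band: rather than forcing every comment into an automated publish/reject decision, the system keeps humans in the loop exactly where the model is least reliable.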

Duration

01 February 2022 - 01 February 2025

Approach

The main research question of this dissertation is: how can and should news media embed fairness, accountability, and transparency in their use of comment moderation algorithms? To address this question, the doctoral research is divided into four sub-questions.

  • How do news media use algorithms for comment moderation?
  • What can news media do to mitigate inaccuracy and bias in algorithmic comment moderation?
  • What should news media disclose about their use of algorithmic comment moderation?
  • What makes explanations of algorithmic comment moderation understandable for users of different levels of digital competence?