P6

ComAI Bots for Conflict Prevention: From reactive to proactive moderation in political discourse on social media

PhD project of Patrick Frey

The ability to automatically prevent online discourse from derailing into destructive interpersonal conflict, indicated by phenomena such as toxicity, is highly beneficial for both users and social media platforms. As previous research highlights, this approach could enable timely, scalable, and cost-effective moderation, reducing harm both to users and to human moderators. In contrast, reactive moderation, which still represents the predominant practice in automated online moderation, functions as a form of “post-hoc damage control”. Consequently, “[e]ven the best reactive moderation can only come after damage has already been done — possibly exposing any number of users to incivility and distracting or preventing productive discussions” (Schluger et al., 2022, p. 2).

Although communicative AI (ComAI) opens new possibilities for making proactive approaches feasible, such moderation tools still remain scarce on social media. Thus, the objective of my dissertation is to advance this field of research by going beyond reactive conflict detection and intervention towards an automated prediction as well as proactive mitigation and, ideally, prevention of destructive interpersonal conflict online.


In doing so, my planned research expands existing a-posteriori approaches and addresses the limitations of reactive communicative interventions in asynchronous social media conversations while adhering to strict ethical standards and mitigating biases. The interventions will follow a transparent and autonomy-preserving approach, which is based on the strategies of human moderators and mediators. This means that interventions are not intended to restrict freedom of expression. Instead, they offer users voluntary de-escalation options, which will first be tested in a controlled experiment with human-in-the-loop monitoring.


However, this research paradigm entails a wide range of risks and ethical challenges, which raise further research questions to be addressed. Thus, a reflective ethical framework underpins and informs my entire research process. For instance, I will critically examine implications concerning issues such as freedom of expression, unintended manipulation of users, transparency, and potential biases both in automated prediction and intervention. This also includes the crucial distinction between constructive and destructive conflict, as constructive social conflicts can be drivers of societal change and progress.
Addressing these considerations requires an integrated research approach. Accordingly, my dissertation
examines the following:

  1. Building on existing ethical frameworks to develop a coherent framework for responsible automated
    ComAI conflict prediction and intervention,
  2. Mapping the state of the art in automated conflict prediction and proactive prevention to identify key
    research gaps (e.g., LLM biases in prediction and intervention),
  3. Designing and benchmarking LLM-based and other forms of conflict prediction approaches,
  4. Developing and testing (semi-)automated proactive ComAI interventions,
  5. Analysing user perceptions, (dis-)appropriation processes, and (hybrid) practices.

Contact

Funded by the DFG (German Research Foundation) and the FWF (Austrian Science Fund)


Prof. Dr. Andreas Hepp
ZeMKI, Center for Media, Communication and Information Research University of Bremen

Phone: +49 421 218-67620
Assistant (Mrs. Schober): +49 421 218-67603
E-mail: andreas.hepp@uni-bremen.de

Partner institutions: University of Bremen (ZeMKI), Leibniz Institute for Media Research | Hans Bredow Institute, University of Graz, University of Vienna