P4 | Governance: Private ordering of ComAI through corporate communication and policies
How does content moderation for Communicative AI work? What are the issues and controversies that tech companies need to respond to? This project examines content moderation and private ordering — the regulation of behavior through rules set by private companies rather than the state — as a key dimension of how new media such as Communicative AI (ComAI) come to be defined and shaped in practice.
The emergence of new technologies is not only a technical process, but also a normative one: companies make consequential decisions about what their products should and should not do. These decisions can become part of broader public controversies. How should systems like ChatGPT handle political content and potential misinformation? Where is the appropriate line between creative inspiration and copyright infringement? Such questions have substantial implications for what ComAI applications become in practice — and for the societies that use them.
Against this background, we ask two related questions: First, what are the rules and norms under which ComAI systems operate? Second, do public controversies and regulation effectively challenge or reshape corporate decisions?
We investigate these questions across four cases — Alphabet’s Gemini, OpenAI’s ChatGPT, Amazon’s Alexa, and Mistral AI’s Le Chat — selected to capture variation across company size, geographic origin (US vs. EU), and application type. We use three empirical approaches: (1) the analysis of key controversies in media coverage, (2) the systematic examination of acceptable use policies and platform guidelines, and (3) the study of how companies position themselves within controversies through corporate communication.
As part of this project, we are also expanding the Platform Governance Archive to include a dedicated collection documenting and archiving the policies of ComAI and GenAI products over time. This will create a valuable open resource for the research community studying platform governance and the emergence of AI products.
PUBLICATIONS:
- Bareis, J., & Katzenbach, C. (2021). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values, 47(5), 855–881. doi:10.1177/01622439211030007
- Dergacheva, D. & Katzenbach, C. (2023a). Mandate to overblock? Understanding the impact of European Union’s Article 17 on copyright content moderation on YouTube. Policy & Internet. doi:10.1002/poi3.379
- Dergacheva, D. & Katzenbach, C. (2023b). “We learn through mistakes”: perspectives of social media creators on copyright moderation in the European Union. Social Media + Society. doi:10.1177/20563051231220329
- Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 1–15. doi:10.1177/2053951719897945
- Hepp, A., Loosen, W., Dreyer, S., Jarke, J., Kannengießer, S., Katzenbach, C., Malaka, R., Pfadenhauer, M. P., Puschmann, C., & Schulz, W. (2023). ChatGPT, LaMDA, and the Hype Around Communicative AI: The Automation of Communication as a Field of Research in Media and Communication Studies. Human-Machine Communication, 6, 41–63. doi:10.30658/hmc.6.4
- Hofmann, J., Katzenbach, C., & Gollatz, K. (2017). Between coordination and regulation: Finding the governance in Internet governance. New Media & Society, 19(9), 1406–1423. doi:10.1177/1461444816639975
- Katzenbach, C. (2017). Die Regeln digitaler Kommunikation. Governance zwischen Norm, Diskurs und Technik. Springer VS. doi:10.1007/978-3-658-19337-9
- Katzenbach, C. (2018). There Is Always More Than Law! From Low IP Regimes To A Governance Perspective In Copyright Research. Journal of Technology Law and Policy, 22(2), 99–122. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-55704-7
- Katzenbach, C. (2021a). “AI will fix this” – The Technical, Discursive, and Political Turn to AI in Governing Communication. Big Data & Society, 8(2). doi:10.1177/20539517211046182
- Katzenbach, C., Kopps, A., Magalhães, J. C., Redeker, D., Sühr, T., & Wunderlich, L. (2023). The Platform Governance Archive v1 – A longitudinal dataset to study the governance of communication and interactions by platforms and the historical evolution of platform policies [Data Paper]. doi:10.26092/ELIB/2331
- Katzenbach, C., & Ulbricht, L. (2019). Algorithmic governance. Internet Policy Review, 8(4). doi:10.14763/2019.4.1424
- Mager, A., & Katzenbach, C. (2021). Future imaginaries in the making and governing of digital technology: Multiple, contested, commodified. New Media & Society, 23(2), 223–236. doi:10.1177/1461444820929321
- Richter, V., Katzenbach, C., & Schäfer, M. S. (2023). Imaginaries of artificial intelligence. In S. Lindgren (Ed.), Handbook of Critical Studies of Artificial Intelligence (pp. 209–223). Edward Elgar Publishing. doi:10.4337/9781803928562.00024
PRESENTATIONS:
Ermler, K., Katzenbach, C. (2026): Platform Governance Archive: Research Data on Rules in Social Media and Generative AI. DGPuK Jahrestagung, Institut für Journalistik der Technischen Universität Dortmund.
Katzenbach, C., Ermler, K., Runge, L. (2025). To Ghibli or not to Ghibli? How Tech Companies Set Normative Standards on the Use of Generative Artificial Intelligence for Creative Practices. GenAI & Creative Practices: Past, Present, and Future, University of Amsterdam, Amsterdam.
Katzenbach, C., Ermler, K., Runge, L. (2025). Content Moderation, next Level? Emergence of Platform Governance from Social Media to Generative AI. PlatGovNet 2025, Online.
Contact:
Prof. Dr. Andreas Hepp
ZeMKI, Center for Media, Communication and Information Research
University of Bremen
Phone: +49 421 218-67620
Assistant Mrs. Schober: +49 421 218-67603
E-mail: andreas.hepp@uni-bremen.de