New preprint: Value-driven AI Governance
Rebecca Scharlach, CJ Reynolds, Vasilisa Kuznetsova, Blake Hallinan and Christian Katzenbach have written a paper on value-driven AI governance. The preprint is available here.
27 January 2026

About the paper:
Values are omnipresent in AI regulation. State actors and AI companies alike emphasize commitments to values such as fairness and safety. Despite this seeming agreement, we know little about how normative principles are interpreted and operationalized, and how responsibilities for them are assigned, within particular contexts.
In this study, we compare the values articulated in the EU AI Act with those in the generative AI policies of OpenAI, Anthropic, Google AI, Meta AI, and Mistral AI.
Using a combination of frequency analysis and inductive keyword-in-context analysis (for which we manually coded over 1,000 paragraphs), we show that public and private actors largely invoke the same values: accuracy, authenticity, control, improvement, privacy, safety, and security. However, their specifications of these values often diverge.
While values like privacy, security, and safety typically carry shared meanings across these policy discourses, we found stark differences in understandings of improvement, tensions between technical and normative operationalizations of values, and shifts of responsibility for upholding values from one stakeholder to another.
These differences matter. We argue that value specifications surface the politics of values in AI governance, exposing how private actors employ polysemy to claim alignment with the public interest while avoiding substantial accountability.
Contact:
Prof. Dr. Andreas Hepp
ZeMKI, Center for Media, Communication and Information Research
University of Bremen
Phone: +49 421 218-67620
Assistant Ms. Schober: +49 421 218-67603
E-mail: andreas.hepp@uni-bremen.de