Pioneer Communities: Imagining ComAI and its possible futures (Prof. Dr. Andreas Hepp)
Taking historical developments at Stanford University and MIT as well as today’s developments at OpenAI (GPT-4) and Aleph Alpha (Luminous) as examples, P1 focuses on ComAI’s pioneer communities: groups who create “social horizons” for future development through their imaginative and experimental practices.
P1 combines a historical perspective on earlier pioneer communities and tech movements as their contextual figurations, a perspective on the current influences of both, and a perspective on pioneer communities’ contribution to the spread of ComAI. The analysis is guided by four research questions:
- How did tech movements and pioneer communities prefigure today’s ComAI?
- What characterizes their imaginaries of ComAI and their influence on current ComAI developments?
- What do pioneer communities contribute to the spread of ComAI?
- What role do pioneer communities play in the sociomaterial constitution of ComAI?
To answer these research questions, P1 uses a mixed-methods design analyzing historical sources, media discourses, interviews, observations, and online networks in Germany, the UK, and the US.
Interfaces: Implementing user-centered ComAI (Prof. Dr. Rainer Malaka)
With the rise of Large Language Models (LLMs) and their adaptation to human-computer communication based on human feedback, users are increasingly expecting human-like interaction from ComAIs.
However, these models designed specifically for human-computer communication face two main problems: First, they rely solely on the data they are trained on, which is often biased and insufficient.
Second, they are built only to produce text and responses; the truthfulness of the generated output is not validated during training. Therefore, when designing and building ComAIs, it is important that users know that such problems are inherent to LLMs. Interfaces are crucial in helping users to identify problematic information and evaluate the quality and reliability of data sources.
Against this background, P2 uses the example of conversational bots to examine the design and implementation of ComAI interfaces as a dimension of their sociomaterial constitution. Investigating both the conversational and paralinguistic features of interfaces, we research which implementation features of the interface design influence user-alignment and how.
Law: The Juridification of ComAI (Prof. Dr. Wolfgang Schulz)
In P4 we trace the juridification of the field of ComAI. We focus on the legal frameworks for conversational bots (specifically ChatGPT) and social bots (specifically on X/Twitter and Facebook), first from the perspective of communications law and second from the perspective of emerging AI regulation.
P4 centers around the legal situation in Germany, reconstructing basic concepts of media law like “personhood”, “opinion” and “expression”. The project will also cover the current and soon-to-be-enacted EU legislation—namely the “AI Act”, on which a political agreement was reached in December 2023 based on the EU Commission’s proposal—to include the constructions underlying regulation of ComAI.
It will undertake a functional comparison with UK, Austrian, and US legal contexts to include further approaches to the ongoing juridification. Our focus lies on how legal definitions and concepts are part of the sociomaterial constitution of ComAI and on which elements and connections of hybrid figurations are legally significant. In these ways, P4 addresses the challenges of hybrid forms of agency from a legal perspective.
Governance: Private ordering of ComAI through corporate communication and policies (Prof. Dr. Christian Katzenbach)
In P5 we investigate private ordering as one dimension of ComAI’s sociomaterial constitution with regard to corporate communication and policies in the context of public controversies, focusing on Germany, the UK, and the US.
P5 thus investigates the ways in which corporate strategies and product policies of companies such as Alphabet, Amazon, and OpenAI as well as public controversies contribute to and negotiate what ComAI products are and how they are governed.
- How is the ordering of ComAI portrayed and politicized in public controversies?
- How do companies position ComAI as a product? What are the policies and terms of services that industries enforce for using them?
- What is the role of private ordering in the sociomaterial constitution of ComAI?
- And how is ComAI’s agency negotiated and attributed in all this?
These questions will be investigated across four conversational bots and artificial companions (Alphabet’s Bard, OpenAI’s ChatGPT, Amazon’s Alexa, and a further case yet to be determined) by using both qualitative and quantitative (computational) content analyses of public material as well as interviews with company representatives.
Journalism: Automating the news and journalistic autonomy (Prof. Dr. Wiebke Loosen)
P6 investigates ComAI’s involvement in journalism by analyzing the challenge of journalistic autonomy at the interactional, organizational, and societal levels.
We assume that journalism is particularly concerned with relationships between humans and machines within societal communication, a relation that is also relevant to self-reflection and the appropriation of ComAI in the journalistic field. Our research is guided by four questions:
- How do journalists and other professional domain actors interact with ComAI and what agency do they construct in relation to it?
- What patterns exist within ComAI’s organizational embeddings and its related forms of hybrid agency?
- How does ComAI relate to conceptions of news and objectivity, journalistic roles, audience relationships, and imaginaries of ComAI’s futures?
- How is ComAI appropriated in journalism, possibly challenging journalistic autonomy?
To answer these questions, a mixed-methods design is applied, consisting of ethnographies in three different types of media organizations in Germany, Austria and the UK as well as interviews, group discussions, and ethnographies at events and conferences.
Political discourse: ComAI and deliberative quality (Prof. Dr. Cornelius Puschmann / Dr. Gregor Wiedemann)
Political deliberation on the internet is widely seen as potentially vital to the larger public debate about fundamental societal challenges by virtue of its speed, breadth and openness. At the same time, debates on social media platforms are often polarized and plagued by problems such as incivility, lack of factuality and one-sidedness of arguments.
P7 will investigate communicative AI in the domain of political discourse by means of online discursive monitoring and intervention. Taking a largely experimental approach, we will study the effects of social bots that utilize large language models (LLMs) on the quality of deliberation. As our case study, we will examine debates related to climate change on German-language Twitter/X, Mastodon, and Bluesky.
Combining discourse theory with recent innovations in LLMs, we will both monitor and intervene in public political discussions. Enlisting a group of public speakers on climate change, we will also closely investigate how social bots are appropriated in the domain of political discourse by analyzing discourse trajectories with and without bot intervention, and through accompanying user surveys.
Personal sphere: Companionship and ComAI (Prof. Dr. Michaela Pfadenhauer)
P8 investigates the emergence of artificial companionship apps (e.g., Replika, Nomi.ai, Paradot) in the personal sphere, which corresponds to the changing nature of companionship in the twenty-first century. Since these apps draw on professional expertise in the counselling field, we examine artificial companionship with regard to already existing companionship services. Focusing on grief and day-to-day life management, we compare two variants of companionship in the personal sphere that differ in their levels of intervention. We analyze companionship as a communicative form that constitutes an ideally “close to equal” but nevertheless asymmetric relation.
This is realized by an exchange of “narrative episodes” and thus built up across situations. As a communicative form, companionship is not purely individual, it operates as a facet of societal communication. We approach the concept of companionship through discourse and genre analysis and explore the (hybrid) agency in the companion relation through (digital) ethnography.
Four research questions guide our investigation:
- How is the concept of companionship discursively constructed?
- In which way do “narrative episodes” constitute companionship as a communicative form?
- How does ComAI refigure companion relations in terms of agency?
- How can we theorize the appropriation of ComAI in the personal sphere and its challenge to companionship?
Health: Caring through ComAI (Prof. Dr. Juliane Jarke)
ComAI is increasingly presented as a solution to the care demands of a growing older population vis-à-vis the defunding of healthcare systems and a shortage of healthcare professionals. ComAI systems are also promoted as supporting “healthy ageing”, a policy objective that aims to advance the wellbeing of older adults. In this context, technology companies and policy makers create regimes of anticipation that shape expectations and future imaginaries, and define what is thinkable and desirable.
In these anticipation regimes, ComAI is ascribed different “care obligations”: managing healthy ageing, providing health information and facilitating older adults’ access to health care.
P9 researches the emergence and construction of hybrid healthcare figurations through digital methods and qualitative case studies in Austria, Germany, the UK, and the US. P9 aims to reconstruct the care practices of different older populations, healthcare professionals, and informal carers through ComAI.
This contributes to the RU’s research objectives to typify patterns of appropriation in social domains and to explore new forms of hybrid agency.
Education: ComAI for learning and teaching (Prof. Dr. Andreas Breiter)
In education, particularly in higher education, technologies have a long history of improving learning and teaching. Most recently, GPT-4 and other LLMs have been framed both in media discourses and in politics as a “game changer”, confronting higher education with societal expectations and fears. In light of this, P10 will address how learning and teaching as well as supporting administrative processes are challenged by the appropriation of ComAI in higher education.
We will investigate how higher education institutions cope with these technological changes and their (side-)consequences, including embedded biases and related social inequalities.
This will be investigated with qualitative interviews in case studies at five German universities, accompanied by student surveys, analyses of data journeys and administrative process models. To understand the dynamics of ComAI’s appropriation and the challenges that emerge through its introduction in higher education administrations—including related patterns of coping and imaginaries of future developments—we will contextualize our research at these five universities with investigations of US universities where ComAI is already being used on a larger scale.