ECPR
Reflections on Conversational Agent Voting Advice Applications (CAVAAs): Ethics and Trust in AI

Elections
Qualitative
Communication
Elke van Veggel
Tilburg University
Christine Liebrecht
Tilburg University
Naomi Kamoen
Tilburg University

Abstract

Despite VAAs’ widespread usage and positive impact, studies also show that VAA users often struggle to understand the political statements they are asked to evaluate. These comprehension problems stem from a lack of semantic (“What is dog tax?”), pragmatic (“How high is the current dog tax?”), and opinion-based information (“Why would you want to increase the dog tax?”). VAA users have been found to make only a minimal effort to search for this missing information and frequently rely on assumptions, which can lead to less accurate voting advice. To facilitate searching for and finding relevant information, a Conversational Agent (CA) has been integrated into a VAA in a lab setting (Kamoen & Liebrecht, 2022). In these so-called Conversational Agent Voting Advice Applications (CAVAAs), users may ask questions about political issues in VAA statements, with the CA providing tailored answers from a database of manually crafted responses. CAVAAs have been found to enhance political knowledge and user satisfaction compared to traditional VAAs.

Method: We will conduct semi-structured interviews with Dutch members of the registry and local politicians from the municipalities of Tilburg and Eindhoven to examine their perspectives on information provision in CAVAAs. These interviews are based on VAA users’ information needs identified in previous studies and will cover two broad topics. First, we will examine what types of information are ethically and legally acceptable to politicians and registry members. Key questions about ethical considerations include whether opinion-based information (e.g., arguments for and against in political debates) can be provided neutrally and which resources are deemed reliable. Second, we will investigate trust in AI integration within CAVAAs. While AI in these systems is currently limited to intent recognition and does not involve text generation, tools like ChatGPT may lead to misconceptions about CAVAAs’ capabilities (Hu, 2023). Trust in AI is vital for implementation, yet stakeholders may question the accuracy and authenticity of CA responses, whether AI-generated or not. Addressing these concerns is critical as AI continues to evolve, shaping expectations and trust.

Implications: The findings of the study will be presented during the ECPR General Conference. Our research will contribute to ongoing ethical debates in the fields of philosophy, AI, and politics, particularly in two areas: bias and fairness, and democratic participation and manipulation. By examining these issues in the practical context of CAVAAs, our study will provide real-world insights into the ethical challenges associated with using AI in voting tools. We hope to bridge the gap between theoretical ethical discussions and the practical implementation of AI systems in political decision-making, ultimately supporting a more informed and ethical democratic process.