ECPR

Worthy of Trust? AI Governance and the Role of (Dis)Trust (Panel 1/2)

Governance
Public Administration
Public Policy
Regulation
Technology
P053
Martino Maggetti
Université de Lausanne
Madalina Busuioc
Vrije Universiteit Amsterdam

Building: Hertie School (Friedrichstr. 180), Floor: 2, Room: 2.61

Thursday 15:30 - 17:00 CEST (12/06/2025)

Abstract

Trust has become a pervasive desideratum in AI debates, one of those “feel-good” concepts no one can be against. Trustworthy AI debates and studies on citizen attitudes to AI are awash with recipes about which “levers” to pull to enhance user and citizen trust in the technology, often detached from considerations of the actual trustworthiness of the underlying systems. This direction seems to stem from a specific perspective on the topic that is, by and large, concerned with “user acceptability” and “usage” of the technology, and how to optimise or even maximise these. More critical normative considerations and implications, especially relevant in a context where the technology is making significant inroads into public governance, seem to have fallen by the wayside. This is not without serious downsides. Attempting to “maximise” citizen trust in, and the perceived legitimacy of, AI decision-making irrespective of actual system performance raises the prospect of “manufactured trust”: inflated trust (or “artificial trust”) that fails to reflect the underlying quality or robustness of systems, or whether, and to what extent, such trust is in fact warranted in view of system functionality. In a context where AI is increasingly adopted across consequential public sector domains, and where failures stemming from reliance on suboptimal AI technologies abound, this is unsatisfactory and normatively disconcerting. On the other hand, distrust is almost always decried. Yet while unconditional distrust towards AI applications in the public sector may indeed undermine the legitimacy and effectiveness of public governance, a certain measure of vigilance or “healthy skepticism” towards AI systems is crucial to ensuring fairness, transparency, and accountability in their development, adoption, and use.
The panel problematizes this direction in influential debates on trust and AI, and invites critical (empirical, theoretical, or methodological) contributions on the role of trust and distrust in algorithmic governance. We especially invite contributions on AI governance on the following topics:
- Citizen perceptions of AI and the democratic and normative implications thereof
- How users’ context-dependent conditions (e.g. vulnerability) mediate trust in AI
- The interplay between subjective and objective dimensions of trustworthiness in AI
- New methodological approaches to the study of trust that move beyond polling of citizen attitudes (for instance, behavioural studies)
- The role of distrust and “healthy skepticism” in algorithmic / AI governance
- The link between (dis)trust and accountability

Papers

Does identity matter in legitimacy judgments on algorithmic governance?
AI Governance and (Dis)Trust
Designing for trust: A conjoint experiment on citizens’ support of regulatory regimes for ensuring trustworthy Artificial Intelligence
Moving from distrust to trust in AI – The Copernican Turn of Regulations