ECPR


The Iatrogenic Problem of AI Governance

Democracy
Governance
Regulation
Critical Theory
Decision Making
Technology
Big Data
Policy-Making
Roy Heidelberg
Louisiana State University

Abstract

This paper examines the iatrogenic nature of artificial intelligence (AI) governance, arguing that efforts to regulate and control AI often exacerbate the very problems they seek to address. By exploring AI’s integration into governance systems, the paper situates its critique within the broader context of digital authoritarianism, where reliance on digital technologies intensifies practices of surveillance, control, and decision-making at the expense of human agency and democratic values. Drawing on the concept of iatrogenesis—the phenomenon whereby medical interventions cause harm rather than cure—the paper demonstrates how regulatory frameworks built around transparency, accountability, and participation deepen societal dependence on AI systems. These frameworks, often heralded as solutions to AI’s risks, operate as mechanisms that reinforce the algorithmic logic of governance, embedding AI further into decision-making processes and expanding its capacity for control. For example, transparency initiatives generate high volumes of complex data that require AI systems to interpret, while accountability mechanisms impose forms of control that rely on AI’s scale and speed to manage. Both practices mirror the dynamics of digital authoritarianism by prioritizing efficiency and control over deliberation and relationality.

The analysis draws connections between AI’s governance logic and the practices of digital authoritarianism. It argues that AI governance mechanisms function as a form of “democratic authoritarianism,” in which ostensibly democratic values, such as fairness and equality, are operationalized in ways that depoliticize human decision-making and amplify algorithmic power. The paper critiques the underlying paradox: efforts to constrain AI require the expansion of its capabilities, producing recursive systems that erode democratic participation and human oversight.

The discussion extends to the global implications of these dynamics. The export of AI-driven governance systems and their adoption by non-democratic regimes intensify authoritarian practices such as mass surveillance, predictive policing, and behavioral control. However, the paper contends that these practices are not exclusive to authoritarian states; rather, they reveal systemic tendencies inherent in modern governance systems that have long prioritized bureaucracy, efficiency, and impersonal authority.

By framing AI governance as an iatrogenic problem, the paper challenges the dominant narrative that AI can be effectively controlled through regulation alone. Instead, it calls for a reframing of the relationship between digital technologies and governance, one that critically examines the limits of democratic ideals when scaled to global systems. The analysis contributes to the study of digital authoritarianism by linking its practices to the deeper bureaucratic logics that underpin AI’s development and deployment, and it invites scholars to rethink the conceptual and methodological tools used to analyze the intersection of technology and political regimes. By exposing the recursive traps of AI governance and its alignment with authoritarian tendencies, the paper offers a critical lens for understanding the broader societal and political ramifications of digitization. It ultimately seeks to advance a holistic understanding of how digital technologies, even in their democratic applications, can reproduce and perfect the conditions of digital authoritarianism.