ECPR

Automated Decision-Making in the Public Sector: Legitimacy and the Global-Local Tension

Democracy
Governance
Political Theory
Public Administration
Global
Normative Theory
Technology
Markus Furendal
Stockholm University

Abstract

Public decision-making is becoming increasingly automated, with decisions being delegated to software processes and artificial intelligence (AI) technology. The main rationale for this is the promise of decreasing costs and increasing the consistency of public authority by shifting decision-making away from human bureaucrats, who are expensive to employ and sometimes make mistakes. The aim of this paper is twofold: first, it surveys and systematizes the concerns this trend raises from the standpoint of democracy, and second, it sketches a theoretical framework for assessing how different kinds of automated public decision-making require different considerations with regard to democratic legitimacy. One set of worries relates to the opacity inherent in the dominant machine learning (ML) paradigm in AI development, and specifically the concern that, unlike rule-based Robotic Process Automation (RPA), ML-based public decision-making is essentially incapable of living up to the demand for publicity necessary for decisions to be democratically legitimate. Another set of worries, which has received comparatively less scholarly attention, stems from the fact that in the real, non-ideal world, there is a tension between the global context in which many AI systems are developed and the national and local contexts in which automated authority is exercised. Specifically, the AI systems adopted in public sectors worldwide will most likely be developed by a handful of large, multinational companies for a world market, but will be applied in highly specific jurisdictions and distinctive environments. Such systems will, moreover, often be incapable of simply training on specific data from each environment, since such data is notoriously non-standardized and often incomplete or systematically flawed.
Unless tailored to each use case, the clash between global AI development and local AI deployment may render such AI systems inefficient, incorrect, or both, thereby undermining the rationales for adopting them. Taking these democratic challenges in automated public decision-making as its point of departure, the paper argues that it is necessary to distinguish between the varying contexts and complexities of public decision-making when evaluating the requirements of democratic legitimacy for AI applications. For example, routine, low-stakes decisions, such as processing standardized applications or enforcing straightforward regulations, may be adequately handled by RPA systems, provided they adhere to principles of transparency and accountability. More complex or discretionary decisions, by contrast, such as those involving value judgments, context-sensitive trade-offs, or significant societal impact, might demand higher levels of deliberative scrutiny and public engagement. In these cases, decisions cannot be fully delegated to ML technology without losing democratic legitimacy. Finally, the global-local tension in AI system development may give rise to further demands, such as ensuring cultural and jurisdictional responsiveness, to safeguard against one-size-fits-all solutions that might erode legitimacy.