Democratic Deficits in Global AI Governance: The Legitimacy of Standard-Setting Bodies
Cyber Politics
Democracy
Governance
Institutions
Global
Normative Theory
Technology
Abstract
The global governance of artificial intelligence (AI) is increasingly shaped by high-level declarations promoting principles of ethical and sustainable AI, such as trustworthiness, human-centeredness, and alignment with planetary boundaries. However, these ideals often fail to connect with the technical, commercially driven realities of AI development. This paper identifies technical standards – operational documents specifying requirements and guidelines – as the missing link between aspirational international principles and practical implementation. Standards translate abstract values into measurable, enforceable criteria, making them crucial instruments in the governance of AI. Yet, despite their significance, the bodies that create these standards have largely escaped scrutiny from a democratic legitimacy perspective.
A key example is the European Union’s AI Act, the first binding legal framework on AI operating at a supranational level. Rather than prescribing concrete definitions or methods to evaluate AI risks, the Act delegates responsibility for operationalization to European standard-setting organizations, primarily CEN and CENELEC. These organizations are tasked with developing harmonized standards that will serve as benchmarks for compliance with the AI Act. While such delegation to technical experts is not inherently problematic and has precedents in other domains (e.g., central banking), this paper argues that the current model of delegation in AI governance raises profound democratic concerns.
The core problem lies in how standard-setting bodies function and who participates in them. Many of these organizations are dominated by corporate representatives and industry experts, with limited transparency, minimal public oversight, and participation costs that are often prohibitively high. For instance, over half of the joint committee members responsible for operationalizing the EU AI Act represent corporate interests, including major tech firms such as Microsoft, Amazon, and Huawei. This corporate dominance risks undermining the democratic legitimacy of standard-setting by sidelining voices from civil society, academia, and marginalized communities, while also enabling regulatory capture.
The paper develops a normative framework to evaluate the democratic legitimacy of these bodies, drawing on three interrelated concepts: epistemic independence (freedom from undue influence by vested interests), epistemic integrity (reliance on sound reasoning and inclusive expertise), and the legitimate role of experts (appropriate balance between technical authority and democratic accountability). This framework is then applied to assess the EU’s approach to AI standardization under the AI Act, revealing serious shortcomings with respect to transparency, inclusiveness, and accountability.
Ultimately, the paper argues that current practices in global AI standard-setting reflect significant democratic deficits. It calls for a rethinking of how such bodies are constituted and how their decisions are governed, urging more inclusive, transparent, and accountable structures that better reflect the diverse stakeholders affected by AI technologies. The paper contributes to both normative democratic theory and empirical debates in AI governance by offering a systematic account of when the delegation of authority to standard-setting bodies is justified – and when it is not.