Unmasking Human-Centric AI: the Role of Political Organization in AI Policy
Keywords: Governance, Policy Analysis, Public Administration, Power, Technology
Abstract
The policy concept of human-centricity has recently entered the glossary of global knowledge governance as ‘human-centric AI’ (HCAI), accompanied by a wave of calls for more transparency, accountability, and trustworthiness in AI governance. How to govern AI in a more responsible and human way is a ‘wicked problem’ in AI policy, from the European Union to Google. Prima facie, HCAI has an extraordinary quality in that it re-centers governance from large technological structures towards the human facing the system, offering a way out of Kafkaesque situations in algorithmic governance. Research on HCAI, however, has remained at an abstract level. This paper provides empirical substance to HCAI in policy-making by studying AI policy in Finland, and in particular the national AI program AuroraAI (2020-2022). Focusing on policy framing in AuroraAI, I examine how HCAI affects policy-making on AI, how AI governance is conceived, and how HCAI was articulated in the implementation of AuroraAI.
In Finnish AI policy since 2015, the state of Finland has searched for ways to improve economic mobilization and competitiveness at the national level with AI. After various policy papers, the state experimented with AI governance by conducting the policy experiment of AuroraAI. In AuroraAI, HCAI was employed as a design principle with which to reorganize public management in Finland through AI and machine learning technologies. AuroraAI was an instance in which the state formulated an institutional account of HCAI in public management, aiming to produce a digital platform integrating both state and non-state actors into public service provision with algorithmic decision-making.
Despite high expectations that HCAI would improve algorithmic governance, administrative, legal and ethical evaluation of the program concluded that increased HCAI can in fact have contradictory or even perilous consequences for democratic governance. Indeed, in AuroraAI the normative challenges originated from non-state actors operating within algorithmic networks in an unsupervised manner; the ‘algorithm’ or ‘AI’ produced problematic impacts on policy-making, such as juridical inconsistency between state and non-state actors, as well as various challenges for public power, such as parity, non-discrimination and legality in governance. Nonetheless, the experiment demonstrated the value of political organization in administering policy-making on AI. AuroraAI was an experiment whose technological implications were modest, but whose implications for organizational learning were great; it ‘unmasked’ HCAI and raised further questions about public organizing around the complex problem of AI, as well as about how to protect citizens in governance.