Against Agential Auto-Complete: The Normative Ethics of the Digital Leviathan

Human Rights
Political Theory
Identity
Internet
Ethics
Normative Theory
Political Ideology
Technology
Yael Peled
Max Planck Institute for the Study of Religious and Ethnic Diversity

Abstract

AI tools trained on large language models are often viewed, in multilingual societies, as a convenient, fair and accessible “institutional fix” for the challenge of their diverse linguistic realities, for example in the provision of public services (e.g. healthcare) or in the facilitation of civic participation (e.g. democratic deliberation). This view can be said to constitute an emerging linguistic ideology, one that mobilizes historical (yet lingering) linguistic ideologies, such as standard language ideology and the monolingual mindset, carrying them from the sphere of human-to-human communication into the rapidly expanding sphere of human-to-machine communication. It can be broadly described as the conviction that there exists, or at least could exist, a nonhuman linguistic entity that is somehow capable of “solving away” the fundamental challenge of language barriers in the institutional and civic life of linguistically diverse human societies. This emerging linguistic ideology, however, has deeply troubling ethical implications. Its deferential attitude toward a Hobbesian-like “digital Leviathan” stands in stark contrast with a human-centred approach to communicative ethics premised on values such as liberty, equality, autonomy and dignity. The core of the tension between that human-centred approach and the “digital Leviathan” lies not merely in the promise to eliminate language barriers, but rather in the implied message that LLM-based AI tools effectively render language awareness itself (i.e. the capacity for higher-order epistemic reflection on the social and political life of language) an obsolete life skill in the contemporary 21st-century world.
The belief that a nonhuman linguistic entity may somehow successfully, and justly, resolve any and all communicative challenges, and provide their optimal solutions, constitutes what is perhaps one of the most fundamental ethical dangers in human-machine communication: the abdication of human reasoning in assessing communicative ethics in favour of the opaque and unaccountable algorithmic reasoning of a nonhuman linguistic entity. The paper maps and explores the normative challenges that emerge from a growing deference to a “digital Leviathan” as a source of moral, social, political and linguistic authority, brought about by the expanding presence of LLM-based AI tools in the institutional and civic life of multilingual societies in the human-machine era. In particular, it considers the implications of this deference for a human-centred language policy in the 21st century, and proposes a new category of language rights, defined as “the right to human-based reasoning in linguistic communication”, with particular force in domains of heightened vulnerability that require greater reasoning transparency and accountability, such as healthcare and immigration.