ECPR

Feminist Linguistic Justice and Machine Translation

Gender
Political Theory
Representation
Social Justice
Feminism
Identity
Normative Theory
Technology
Seunghyun Song
Tilburg University
Annick Backelandt
Tilburg University

Abstract

With its unprecedented growth and spread, machine translation, and especially translation by large language models (LLMs), is now an undeniable fact of multilingual contexts. While recent research maps out the normative issues that arise from gender bias perpetuated by machine translation and LLMs, the scope of that research remains limited. This paper focuses specifically on LLMs and how they exhibit gender bias. We argue that LLMs’ gender bias constitutes a particular learning environment that may become an integral part of human life, behaviour, social norms and institutions, and that it violates a distinct language-based interest that individuals hold. The paper proceeds as follows. First, we provide evidence of LLMs’ gender bias. Second, we articulate an interest-based theory of feminist linguistic justice as an analytic toolkit for a normative analysis of LLMs’ gender bias. On this basis, we argue that, as linguistic beings, we hold a representational interest in language, which makes us invested in not being discriminated against at the level of language. LLMs’ gender bias violates this representational interest and should therefore be subject to remedy. As such, the contributions of this paper lie in the pursuit of gender justice in a society increasingly dominated by algorithms.