What you say and how you say it: Does sentiment bias stance detection in political texts?
Alex Hartland, Daniela Braun, Daniel Gayo-Avello, Benjamín López Pérez, Cristian González García
Saarland University and University of Oviedo
Distinguishing between sentiment and stance has become a key consideration in the analysis of political texts. Where once the two were conflated, it is increasingly clear that they should be measured separately: a text may, for example, express negative sentiment while supporting a given policy position. However, it is not yet clear how the two interact across a range of issues, or whether this interaction biases the measurement of either concept. To address this question, we turn to a large corpus of texts collected and hand-coded as part of the Horizon Europe-funded ActEU project. The texts are produced by a range of actors, including elected officials, journalists, and members of the public, across a number of platforms and formats, including Twitter/X, Telegram, and news articles. They express both sentiment and stance on the issues of migration, gender, and climate change in nine European languages. Using the hand-coded texts to train and validate several automated text classifiers, we examine the potential for sentiment to bias stance detection in a variety of contexts and identify the methods best able to overcome this bias. Using these classifiers, we then describe the stances and sentiment expressed by different actors in the larger corpus. As large-scale automated text classification becomes increasingly feasible at reduced cost, our paper contributes to improving its validity.