ECPR

Magic Words or Methodical Work? Challenging Conventional Wisdom in LLM-Based Political Text Annotation

Political Methodology
Methods
Quantitative
Communication
Big Data
Empirical
Lorcan McLaren
University College Dublin
James P Cross
University College Dublin
Martijn Schoonvelde
Rijksuniversiteit Groningen


Abstract

Generative large language models (LLMs) have been embraced by the research community as a low-cost, quick, and consistent way to classify textual data. Prior scholarship has demonstrated the accuracy of LLMs across a variety of social science classification tasks. However, there has been little systematic investigation of the effect of model size, hyperparameter settings, and prompt style on classification performance. Furthermore, little attention has been paid to the trade-off between performance and efficiency that results from these decisions. This paper evaluates the importance of these choices across four distinct annotation tasks from the field of political science, using human-annotated texts as a benchmark. In doing so, it establishes empirically grounded best practices for future research.