Populism has emerged as a pervasive phenomenon, manifesting primarily among radical right and radical left political actors and citizens, and influencing democracy, political culture, policy-making, and political behaviour. A key challenge in studying populism -- and its core conceptual components, such as people-centeredness, anti-elitism, and a Manichean worldview -- lies in effectively analyzing political actors' texts, including manifestos, speeches, and social media posts. While human coding has been a reliable method, it is too resource-intensive to scale to large corpora and requires both domain-specific knowledge and the ability to read texts in their original languages. Automated text-as-data approaches, on the other hand, frequently fail to detect populism and its sub-dimensions, often yielding low accuracy even in prototypical cases. We propose a solution that uses instruction-tuned GPT-class Large Language Models (LLMs) to mimic human coding behaviour, but at scale, without restrictions on input language, and without any requirement for pre-processing or even document conversion. Unlike earlier generations of machine learning approaches, LLMs can accurately identify populism and its sub-dimensions, even when facing a needle-in-the-haystack problem of low-frequency populist statements. We validate our approach by (a) comparing LLM-derived populism scores with established expert surveys on party-based populism and (b) benchmarking the sub-dimensions against results from human coding.
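As a rough illustration of what such an LLM-based coding pipeline could look like, the sketch below labels individual statements with the three sub-dimensions and aggregates them into a simple document-level populism score. The model name, prompt wording, label handling, and aggregation rule are our own assumptions for illustration and do not reproduce the paper's actual implementation.

```python
# Minimal sketch, assuming OpenAI's chat-completions API, a "gpt-4o" model,
# and an ad-hoc prompt and label set; the paper's prompts, models, and
# scoring procedure may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUBDIMENSIONS = ["people-centeredness", "anti-elitism", "Manichean worldview"]

PROMPT = (
    "You are an expert coder of populist discourse. For the statement below, "
    "reply with a comma-separated list of the labels that apply, or 'none': "
    "people-centeredness, anti-elitism, Manichean worldview.\n\n"
    "Statement: {statement}"
)

def code_statement(statement: str) -> set[str]:
    """Ask the LLM to label one statement with populism sub-dimensions."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # keep the coding as deterministic as possible
        messages=[{"role": "user", "content": PROMPT.format(statement=statement)}],
    )
    answer = response.choices[0].message.content.lower()
    return {label for label in SUBDIMENSIONS if label.lower() in answer}

def document_populism_score(statements: list[str]) -> float:
    """Share of statements carrying at least one populist sub-dimension;
    a crude per-document score that could, in principle, be aggregated to
    the party level and correlated with expert-survey ratings for validation."""
    coded = [code_statement(s) for s in statements]
    return sum(bool(labels) for labels in coded) / len(coded)
```

In a validation step of the kind described above, such party-level scores could then be compared against an expert survey of party-based populism (for example via a simple correlation), while the statement-level labels could be benchmarked against human-coded gold data.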