The rapid expansion of AI systems in recent years has made their regulation one of the most pressing issues of contemporary politics. While political entities across the globe are now drafting legislation to regulate AI, the EU with its AI Act is at the forefront of regulatory efforts, both in terms of the state of the legislative process and in terms of the strictness of the proposed rules. The dynamics, actors, and competing interests that have shaped and continue to shape this process in the EU, however, have yet to be examined. Despite some excellent theoretical work on the role of civil society organisations (CSOs) in AI governance, little work to date has empirically examined the role of CSOs in shaping the EU AI Act, owing in part to the novelty of the issue. In this article, I bridge two theoretical strands of literature, on discursive framing and on interest group strategies, to construct a conceptual framework that allows me to excavate 1) how CSOs sought to influence the AI Act, and 2) why they chose to focus on and politicise distinct sets of issues. Utilising a systematic analysis of policy and campaign documents, I show that CSOs in Brussels were split between traditional digital rights groups, which highlighted fundamental rights as the most pertinent issue to be addressed in the AI Act, and more recently founded groups with a specific AI focus, which advocated for the mitigation of existential risks posed by AI systems. I argue that these differences in framing strategies can be theoretically accounted for by the institutional, organisational, and financial factors that underpin the work of these organisations.