With the adoption of the AI Act in 2024, the EU legislator is attempting to tackle a phenomenon widely acknowledged to be a seismic shift in informational power with a piece of product safety legislation. At almost the same time, another traditional legal instrument entered the global regulatory space for AI: a treaty negotiated and signed in the context of – but not limited to the membership of – the Council of Europe, the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.
We are entering a new phase of AI regulation in Europe, in which new norms, including those expected from standard-setting bodies in the near future, will have to be translated into standards and working processes and aligned with existing and new regulatory initiatives. As we know from experience with other EU legislation directed at digital technologies, such as the GDPR, authoritative legal interpretation by the highest courts is years away. In the meantime, the broad requirements of the AI Act will need to be specified in order to give AI providers and deployers the interpretative tools to comply with the Act in a manner that balances individual protection with the promotion of innovation.
This paper discusses what this aspect of the AI Act means for its interaction with the Framework Convention and proposes a novel approach to studying this type of early, non-judicial interpretation of legislation. Several ‘interpretative sites’ have been selected where the early implementation of the AI Act can be studied through text-based methods that are part empirical, part legal: AI Act training courses, the Dutch AI regulatory sandbox, the ‘Algoritmekader’ (algorithm framework) of the Dutch Ministry of the Interior, and the ‘algoprudential’ activity of civil society organizations.
Paper available upon request (a.c.m.meuwese@law.leidenuniv.nl).