The development of AI instruments raises concerns about the future quality of democracy. Recent European and national elections have witnessed systemic risks generated by foreign interference, in one case followed by the annulment of a presidential election, as happened in Romania in November 2024. In this context, how are very large online platforms (VLOPs) and very large online search engines (VLOSEs) complying with European hard regulation related to elections, such as the Digital Services Act (DSA) or the Artificial Intelligence Act? Previous literature shows that platforms comply with soft laws related to digital campaigning, such as the Code of Practice on Disinformation, in principle and on paper, but lag behind on substantive action (Borz et al., 2024). Additionally, we know that compliance is at its lowest with regard to measures concerning citizens' rights and freedoms. Compliance with hard laws, however, is less studied and should provide a good picture of private actors' ability to adapt and implement required (non-voluntary) changes.
Using a risk theoretical framework, we examine whether risks related to digital campaigning feature as a priority for VLOPs and VLOSEs during electoral periods, or whether they are merely prompted by member state authorities at times of relevant elections. We also test whether compliance depends on platforms' and search engines' organisational features or on their internal policies. Using computational text analysis methods and qualitative interviews conducted by the DIGIEFFECT project (www.digieffect.eu), we analyse all transparency and risk assessment reports submitted by private actors in response to the DSA. Our analysis thus contributes to the literatures on digital campaigning, democratic theory and policy effectiveness.