Overlaps, gaps and inconsistencies
The European Commission’s proposed Artificial Intelligence (AI) Act attempts to regulate a wide range of AI applications, aligning them with EU values and fundamental rights through a risk-based approach. The scope, instruments and governance framework introduced by the proposal are still being debated and refined by the European co-legislators. Both the Council of the European Union and the European Parliament have proposed amendments to the regulation, with potentially far-reaching impacts on its overall scope and content. An agreement seems possible by mid-2023, but this will depend on whether the co-legislators converge on key issues such as the definition of AI, the risk classification and associated regulatory remedies, governance arrangements and enforcement rules.
The act has been presented as a ‘horizontal’ piece of legislation, even though several limitations and exemptions apply. This breadth, combined with the expected pervasive impact of AI on the economy and society, means the act may overlap with several other legislative provisions, both horizontal and sector-specific. As a result, gaps and inconsistencies may emerge that negatively affect legislative quality and regulatory certainty. This study addresses the issue by analysing the interplay between the AI Act and the EU digital acquis. We map the gaps and limitations of the AI Act in relation to 14 pieces of legislation. The analysis draws on desk research, qualitative interviews and an online workshop.
We identify eight key areas where challenges may emerge and make the following policy recommendations:
1) there is a need to clarify and align the terminology with the legal categories and notions in existing EU legislation related to AI;
2) negotiators should ensure better fine-tuning of the act’s interactions with sector-specific rules (notably in the health sector);
3) the act should be made consistent with EU data protection rules, for example regarding the lawfulness of personal data processing;
4) the act’s risk-based approach features a number of loopholes that need to be addressed to improve legal certainty for AI providers and users;
5) while the act aims to complement existing product safety rules, it requires more detailed provisions to allow for meaningful integration with the EU acquis;
6) the act introduces a weak enforcement scheme, which should be strengthened and aligned with other digital policies;
7) EU legislators should tackle the growing divergence between the stated goals of the act and emerging data transfer rules; and
8) the act would benefit from exemptions aimed at promoting scientific research.