2024 in Review: Key Legal Developments in AI - Part 2
While 2024 saw significant legal developments in AI regulation and governance, many challenges remain unresolved. Key issues such as enforcement, data protection, liability, and copyright continue to raise difficult questions. Despite this progress, the path to accountable and trustworthy AI technologies is still evolving.
In the first part of our two-part series, we explored the key legal developments in AI in 2024. Despite these developments, several significant issues remain, highlighting the challenges and complexities of regulating AI. We discuss five of these key issues below.
1. Enforcement under the AI Act
The AI Act allows Member States to adopt either a centralised or a decentralised model for appointing regulators, and it remains uncertain which model individual Member States will choose. The identity of the relevant regulators for many of the Annex III high-risk AI use cases also remains unclear. Many stakeholders, including Italy's data protection authority and Germany's Datenschutzkonferenz, argue that data protection authorities (DPAs) are well suited to the role, given their expertise in privacy and enforcement. Others propose that AI's broader impacts might be better managed by new or cross-disciplinary agencies; Belgium's Secretary of State for Digitalisation, for instance, has suggested a single digital regulator to oversee EU-wide digital regulations. The choice of regulatory body will be crucial for consistent enforcement, compliance, and effective oversight.
2. Copyright
The tension between the development of AI, particularly large language models (LLMs), and copyright law continues to spark significant debate. This year saw further high-profile litigation in both the EU and the United States over copyright and model training, with courts being asked to decide, among other things, whether AI training qualifies as fair use under copyright law. In July, a European court examined the legality of using copyrighted works, in that case the download of an image taken by a photographer, to train an AI model. In October, the court dismissed the copyright infringement claim, finding that the use of the image fell within an exception for text and data mining. It remains to be seen how questions surrounding ownership, licensing, and fair use of AI-generated works will be resolved. Of particular interest will be how the copyright transparency provisions of the AI Act will be enforced against providers of general-purpose AI models.
3. Data protection
Beyond copyright concerns, data protection issues are also central to AI training. Much of the data used in training contains personal data, pushing companies to address how to safeguard privacy rights and ensure transparency. This year, these challenges came to the forefront: several providers paused AI training on EU user data, others opted not to launch certain AI products in the EU, and in some instances data protection authorities launched investigations over privacy concerns. DPAs have made clear that companies must focus on establishing a legal basis for using personal data in AI training, which is essential to ensure compliance with GDPR principles, including fairness, transparency, and data minimisation.
4. Liability
The Artificial Intelligence Liability Directive (AILD), which aims to clarify liability issues associated with AI products, was a significant focus in 2024. However, its progress through the legislative procedure has been delayed and its future is uncertain. Both the EU Parliament and the Council are sceptical as to whether the AILD is needed at all, particularly given that the revised Product Liability Directive (PLD), finally published in November, now covers software. The EU Parliament called for a further impact assessment of the AILD. That assessment recommends, in place of the PLD and AILD, a revised Product Liability Regulation and an AI Liability Regulation reframed as a software liability regulation. Since the publication of the assessment, the PLD has entered into force, and it remains unclear what approach the EU will take regarding the AILD.
5. AI codes of practice
The AI Act introduces critical requirements for transparency and accountability in the use of AI models. However, questions remain about how these principles will be practically enforced. This is especially true regarding:
- Model monitoring
- Auditing, and
- The appropriate level of transparency
Central to this is determining the scope of information that model providers must disclose, both to regulators and the public. The forthcoming General Purpose AI Code of Practice, being developed by the AI Office in collaboration with stakeholders, aims to address these questions. The drafting process reached its first important milestone with the publication of the first draft of the Code on 14 November, and a second draft followed on 19 December. Drafting will continue into early next year, with the finalised text expected by April 2025.
Conclusion
2024 has been a landmark year for AI law, with regulatory frameworks and international cooperation advancing responsible AI development. While progress has been substantial, challenges around enforcement, data protection, copyright, and liability persist. In the coming year, pivotal milestones in AI regulation are set to further reshape the compliance landscape. New obligations on prohibited AI practices will take effect, with guidelines to clarify enforcement, and the anticipated Codes of Practice will establish standards for transparency, accountability, and ethics in AI use. The appointment of Member State Market Surveillance Authorities will strengthen regulatory oversight across the EU. This will be supported by new guidance on handling serious AI incidents. Additionally, the publication of AI standards will help ensure safe and consistent practices. The Product Liability Directive will redefine AI liability, impacting providers and consumers alike. In addition, data protection and copyright guidelines are expected to bring further clarity. Together, these developments mean 2025 will be another defining year for shaping the regulation of AI in Europe and globally.
For more information on the implications of integrating AI systems into your business, please contact a member of our Artificial Intelligence team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.