
2024 in Review: Key Legal Developments in AI

As 2024 draws to a close, AI continues to take centre stage. Governments and regulators remain focused on how to effectively regulate the rapid and widespread development and use of AI across all sectors. This year brought a series of pivotal legal advancements which will shape AI’s future. Our Artificial Intelligence team recaps the top five legal developments in AI for 2024.


2024 has been a pivotal year for AI regulation, with significant legal and policy advancements setting the stage for how AI will be governed globally. From the EU’s landmark AI Act to the first international AI treaty, regulators are tackling the complex challenges of transparency, accountability, and cross-border consistency. Public consultations and updated standards reflect growing efforts to refine governance frameworks and balance innovation with responsible oversight.

1. The EU publishes the AI Act

The EU’s long-anticipated AI Act[1] was finally adopted in 2024. The legislation marks a significant milestone in the global regulation of AI. The AI Act entered into force on 1 August 2024. It tackles issues of safety, transparency and accountability, aiming to balance innovation with trustworthy and safe AI. The legislation adopts a risk-based approach, classifying AI systems into risk categories based on their potential use and impact on individuals and society. The first obligations, concerning prohibited AI practices, will apply from 2 February 2025.

2. The AI Office is established

In early 2024, the European Commission unveiled the AI Office. The AI Office will play a key role in the implementation of the AI Act, including the regulation of general-purpose AI models. It will comprise five units, employ more than 140 staff and be led by the Head of the AI Office, Lucilla Sioli.

3. The Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law

In May 2024, the Council of Europe adopted the world’s first international treaty on AI, which outlines shared principles for ethical AI and establishes commitments for signatories. The Framework Convention aims to harmonise AI standards across national boundaries by adopting a risk-based approach and addressing issues like transparency and human oversight. Its adoption reflects a growing global consensus around responsible AI governance and sets the stage for international cooperation in AI regulation. Initial signatories include:

  • The EU
  • United States
  • United Kingdom
  • Andorra
  • Georgia
  • Iceland
  • Israel
  • The Republic of Moldova
  • Norway, and
  • San Marino

4. AI consultations take place

A series of public consultations on AI-related policies and frameworks took place throughout the year, reflecting broad stakeholder engagement and a dynamic regulatory environment. The most notable were the Commission’s consultation on the definition of AI systems and prohibited AI practices, as well as its consultation on the Codes of Practice. In addition, in May the Department of Enterprise, Trade and Employment launched a public consultation on the national implementation of the AI Act in Ireland, and in November a refresh of Ireland’s National AI Strategy was announced. Data protection authorities have also held multi-stakeholder events, including:

  • The EDPB’s stakeholder event on AI models
  • The ICO’s consultation series on generative AI and data protection, and
  • The CNIL’s consultation on AI systems

5. Key guidelines and standards for responsible AI

Several multilateral organisations and national bodies have published guidance and updates on AI best practices in recent months. In May, the Organisation for Economic Co-operation and Development (OECD) revised its AI Principles in response to the emergence of new AI technologies, including general-purpose AI and generative AI. At a national level, the Irish Data Protection Commission published guidance on Large Language Models (LLMs) and data protection. In the UK, the British Standards Institution (BSI) issued Global Guidance for Responsible AI Management, and ISO/IEC released updates to their AI standards, strengthening the global regulatory landscape for AI. Harmonised standards are being developed by the European standardisation organisations; however, these have been delayed and are not expected until the end of 2025.

Conclusion

While 2024 marked significant progress in AI regulation, key issues remain unresolved. These include:

  • The designation of AI Act regulators
  • The approach to protecting copyright holders’ rights
  • Navigating privacy rights in AI model development, and
  • Clarifying AI liability

In the next part of this two-part series, we will look at the issues that remained unresolved in 2024.

For more information on the implications of integrating AI systems into your business, please contact a member of our Artificial Intelligence team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.

[1] Regulation (EU) 2024/1689


