Recent AI Act Guidance: What You Need to Know

The European Commission recently published guidance on the definition of an AI system and prohibited AI practices. In addition, the third draft of the General-Purpose AI Model Code of Practice was published. We break down these updates and their impact on AI providers and deployers.
What you need to know
- AI System Guidelines: These guidelines help determine whether a system is regulated by the AI Act by clarifying key elements of the Article 3(1) definition, such as autonomy, adaptiveness, and system objectives.
- Prohibited AI Guidelines: These provide practical guidance on AI practices banned under Article 5 of the AI Act, helping organisations understand what is considered unlawful.
- General-Purpose AI Model Code of Practice: The third draft further refines and streamlines the Code to better align with the AI Act’s requirements.
We highlight the key takeaways for organisations seeking to comply with their obligations under the AI Act.
Third draft Code of Practice
The third draft Code of Practice was published on 11 March 2025. In this third draft, many commitments and measures have been refined to better align with the AI Act. The draft no longer contains key performance indicators (KPIs), and it states that the final version of the Code will not contain any either. Here are some of the key points to consider.
- Transparency: The transparency commitments have been streamlined to better align them with the AI Act provisions, and the level of detail to be provided has been reduced in some cases. In addition, a clearer distinction has been drawn between the information to be provided to the AI Office/national competent authorities and that to be provided to downstream providers, with the former requiring more detailed information than the latter. There is also less emphasis on training data and on transparency regarding copyright.
- Copyright: The copyright measures have been softened compared to previous iterations of the Code, although onerous obligations remain. For example, the obligation to publicly publish a copyright policy has been removed. Notably, compliance has been made proportionate to the “size and capacities” of providers.
- Downstream liability: Overall, the onerous (and arguably out-of-scope) requirements in earlier drafts on dealing with downstream providers/licensees have been somewhat reduced. The requirement to enforce licensing terms related to model evaluation for systemic risk has been removed. However, there is still a suggestion that signatories should obtain information from licensees/downstream providers to assist with their model evaluations, which goes beyond the requirements of the AI Act.
Stakeholders are encouraged to provide comprehensive feedback on this draft. The deadline for feedback is 30 March 2025.
AI system definition guidelines
The AI system guidelines were published in February 2025. The guidelines are not binding, and do not represent an exhaustive list of all potentially covered AI systems. However, they do provide helpful guidance for organisations assessing whether their AI systems are in scope.
The guidelines break out the definition of AI system into seven main elements:
- A machine-based system
- That is designed to operate with varying levels of autonomy
- That may exhibit adaptiveness after deployment
- And that, for explicit or implicit objectives
- Infers, from the input it receives, how to generate outputs
- Such as predictions, content, recommendations, or decisions
- That can influence physical or virtual environments.
Importantly, the definition adopts a lifecycle-based perspective encompassing two main phases:
- The pre-deployment or ‘building’ phase of the system, and
- The post-deployment or ‘use’ phase of the system.
This approach clarifies that the seven elements of the definition are not required to be present continuously throughout both phases of that lifecycle.
Finally, the guidelines list AI techniques that are deemed in and out of scope. Examples of in-scope techniques include machine learning, logic-based approaches and knowledge-based approaches. Out-of-scope techniques include basic data processing and simple prediction systems.
Prohibited AI guidelines
The Prohibited AI Guidelines were published in February 2025. The guidelines aim to assist organisations in understanding whether their AI systems are prohibited under Article 5 of the AI Act. The guidelines can be leveraged by companies to prepare compliance documentation. The guidelines have been approved, but will only be applicable once they are formally adopted by the Commission.
The guidelines appear to be pragmatic on the whole and provide useful content on the scope of the provisions, exemptions and market definitions. Organisations should carefully review them in light of their own AI systems, especially to assess what may now fall outside their scope.
Some key points to note:
- Many of the prohibitions contain several cumulative conditions, all of which must be fulfilled for the prohibition to apply. Careful consideration should be given to each condition as there may be scope for the prohibition to be disapplied.
- The AI Act should not be reviewed in isolation. For each prohibition, guidance is provided on the interplay with other EU laws. When reviewing AI systems for potential prohibited practices, due consideration should be given to other EU laws which may impact the application and scope of the prohibition.
- The guidelines provide helpful context in terms of the rationale and objectives of the prohibitions. It appears clear that these prohibitions are not intended to operate as strict liability provisions but instead need to be carefully considered in view of the context, purpose and objective of the prohibition.
Overall, the guidelines and Code provide valuable clarity for organisations navigating the AI Act. They offer practical insights and help align AI practices with regulatory requirements. While we’ve highlighted key points, the full documents are well worth a read for anyone seeking a deeper understanding. For advice on the implications of the AI Act on your business, or any other tech related issues, contact a member of our AI team.
People also ask
What are prohibited AI practices under the EU AI Act?
Article 5 of the AI Act prohibits certain AI practices that are considered to present an unacceptable risk to the safety, livelihoods and rights of people. Examples of prohibited AI practices include harmful AI-based manipulation and deception, social scoring, and individual criminal offence risk assessment or prediction.

What is the definition of an AI system?
An AI system is defined in Article 3 of the AI Act as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

What is an example of a general-purpose AI system?
General-purpose AI systems are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation, and more. They have a wide range of possible uses, both intended and unintended by developers, and can be applied to many different tasks in various fields, often without substantial modification or fine-tuning.
The content of this article is provided for information purposes only and does not constitute legal or other advice.