The European Data Protection Board recently published guidance for organisations on effectively conducting audits of AI systems. It helpfully offers organisations a “checklist” which can be used to assess the impact of an AI system and its compliance with relevant laws, including both the GDPR and the AI Act.
Our Technology team provides an overview of some of the key elements of the EDPB’s audit checklist guidance, which is relevant if you are considering developing or acquiring and integrating an AI system into your business.
The European Data Protection Board (EDPB) commissioned and published guidance in June 2024 on how to conduct audits of AI systems (AI Audits).[1]
This “checklist” (Audit Checklist) can be used both by data protection regulators and by businesses that wish to integrate an AI system into their operations, to assess the impact of an AI system and its compliance with certain laws. Relevant laws that an audit can assist with include the GDPR and the AI Act. We provide an overview of some of the key elements of an AI Audit which the EDPB suggests carrying out.
What is an AI Audit?
The EDPB advise in the Audit Checklist that for those “… acquiring and incorporating AI systems into their operations, audits provide crucial evidence that enable due diligence and proper assessment and comparison of the characteristics between different systems and vendors.” In other words, AI Audits can allow businesses who are integrating AI systems into their day-to-day operations to gather and assess information which will help them determine if the AI system is both suitable to integrate into their business and compliant with the GDPR and AI Act.
The EDPB advise that an AI Audit is an “iterative process of interaction between the auditor/s and the development team/s.” Key elements of the AI Audit include:
- Model card: Reviewing a model card, which compiles information about the AI system's training, testing, features, and motivations
- System map: Creating a “system map” to establish relationships between the algorithm, the technical system, and the decision-making process
- Biases: Identifying potential biases in the AI system
- Bias testing: This involves statistical analysis, fairness metrics, and considerations for protected groups
- Adversarial audit: An optional adversarial audit may be conducted to test the system in real-world conditions, and
- Report: Generating a final report following the AI Audit
These steps are summarised in turn below.
Model cards
Model cards are documents designed to compile information about the training and testing of AI models. The AI Act references model cards as a method for developers to consider in order to comply with AI Act documentation requirements (Recital 89):
“Developers of free and open-source tools, services, processes, or AI components other than general-purpose AI models should be encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.”
The EDPB advise auditors to ask for and review available model cards. This approach will enable them to form an initial overall picture of the AI system. The model cards can also serve as an index of the information that the developer is making available, enabling auditors to determine what legal issues need to be explored further before integrating the AI system into their business.
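As an illustration only, a model card compiles information of the kind described above in a structured form. The field names and example values below are assumptions based on common model card practice; neither the EDPB nor the AI Act prescribes a specific format:

```python
# Illustrative sketch of the information a model card might compile.
# All field names and values are hypothetical, shown only to indicate
# how a model card can act as an index of available documentation.
model_card = {
    "model_details": {
        "name": "credit-scoring-classifier",  # hypothetical example system
        "version": "1.2.0",
        "developer": "Example Vendor Ltd",
    },
    "intended_use": "Pre-screening of consumer credit applications",
    "out_of_scope_uses": ["Employment decisions", "Insurance pricing"],
    "training_data": {
        "sources": ["Internal loan records 2018-2022"],
        "personal_data": True,  # flags GDPR relevance for the auditor
    },
    "evaluation": {
        "metrics": {"accuracy": 0.91, "false_positive_rate": 0.07},
    },
    "limitations": "Not validated on applicants outside the EU",
}

def missing_fields(card, required):
    """Return the required model-card fields the developer has not supplied."""
    return [field for field in required if field not in card]

# A gap, such as a missing DPO contact, tells the auditor which legal
# issues need to be explored further with the developer.
print(missing_fields(model_card, ["intended_use", "training_data", "dpo_contact"]))
```

Used this way, the model card review is less about the values themselves and more about spotting what documentation is absent before due diligence proceeds.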
System maps
System maps establish the relationship and interactions between an algorithmic model, a technical system and a decision-making process. The EDPB recommend that a first version of a system map be designed by the auditor(s) following the information provided in the model card. The final version can then be completed and validated by the developer, according to the EDPB. The EDPB provide a list of suggested questions to complete a system map under the following broad headings:
- Identification and transparency of AI based component: Evidence of transparency provided by the developer in relation to the AI component. Evidence can include documentation of the data sources used to train the model, or data protection oversight during the development of the AI component, such as the involvement of a DPO. It also includes information about the parameters used in the training of the AI system.
- Purpose of the AI based component: Information about the core uses and potential secondary uses of the AI component, as well as the lawfulness of any data processing related to it. This includes information about the necessity and proportionality of the processing, the recipients of the personal data, and any storage limitation measures the developer has in place.
- Bases of the AI component: Information about the core underlying development of the model. This includes whether documentation has been created about the methods for selecting, collecting and preparing the AI component’s training data, and whether metrics have been developed for measuring the behaviour of the model.
A system map is especially relevant for data protection supervisory authorities, as it provides a detailed checklist covering aspects like identification and transparency of the AI component, its purpose, data management, and security measures. As such, the questions suggested by the EDPB provide an indication as to what supervisory authorities will look for when reviewing the GDPR compliance of an AI system.
Moments and sources of bias and bias testing
This part of the audit involves identifying potential biases that the AI system can generate at different stages of the AI lifecycle, from pre-processing to post-processing. To guide the audit process on this topic, the EDPB provide a detailed checklist of questions covering data management, verification and validation, performance, consistency, stability, traceability, and security of the AI system.
Bias testing
Bias testing is a step which aims to understand who could be impacted by the AI system. It includes suggested steps like defining protected groups, testing system outputs, examining training data, and reviewing fairness metrics provided by the developer.
Adversarial audit (optional)
This optional step suggested by the EDPB involves testing the AI system in real-world production settings to uncover hidden biases or issues. This type of audit is particularly recommended by the EDPB for high-risk and unsupervised machine learning systems. Methods of carrying out an adversarial audit can include interviewing end users or creating fake profiles to trigger and analyse system outcomes.
The audit report
The final section outlines three types of audit reports which auditors should create:
- An internal report with mitigation measures
- A public report describing the audit process and results, and
- Periodic follow-up reports which test the effectiveness of the mitigation measures
Comment
The EDPB conclude by emphasising the importance of documentation throughout the audit process and the need for ongoing monitoring and improvement of AI systems which have been deployed.
While not mandatory under the AI Act, carrying out an AI Audit will help organisations understand and assess data protection safeguards in the context of the AI Act. It can also provide valuable evidence of compliance with the accountability obligation under Article 5(2) GDPR. This will be a particularly relevant safeguard if your business is considering deploying a potentially higher-risk AI system.
For more information on AI Audits and the implications of integrating AI systems into your business, please contact a member of our Artificial Intelligence or Technology teams.
The content of this article is provided for information purposes only and does not constitute legal or other advice.
[1] The Checklist for AI Auditing commissioned by the EDPB was developed by Dr. Gemma Galdon Clavell.