New EU AI Regulation: 10,000 ft View
The EU is leading the global charge to regulate AI and has now taken a significant step in realising that vision with the recent publication of its first draft AI Regulation. Critics claim this is a retrograde step that will see the EU fall further behind in the global race to dominate the AI sector. The EU is betting that consumers will vindicate its strategy by ultimately demanding, and only using, AI products that are trustworthy and held to the standards set out in the Regulation.
The Commission promises that this AI Regulation will make sure that Europeans can trust what AI has to offer. Proportionate and flexible rules will address the specific risks posed by AI systems and set the highest standard worldwide. The rules follow a risk-based approach.
What will be regulated?
The new rules will apply directly and in the same way across all Member States, based on what the Commission believes to be a future-proof, software-based definition of AI:
‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.
Who is targeted?
Providers, users, importers and distributors will all be subject to the new rules. The place of establishment will not matter; what will matter is the placing on the market, or the putting into service or use, of an AI system in the EU. A US or UK medical device owner whose product deploys AI and who sells into the EU without an EU establishment will, for example, be subject to these rules.
Providers, defined as the product owners/developers, will bear the bulk of the burden under the Regulation. Importers, distributors and users will also need to pay close attention, as their obligations will be significant and will require investment in resources and administration.
Intended use
The EU is keen that we understand that AI technology itself is not the focus of these new laws; the target is the intended purpose of the AI, meaning the use for which an AI system is intended by the provider. This includes the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation. This design feature of the draft legislation will be both a benefit and a burden to AI providers. It will allow AI products access to the EU market provided at least one compliant intended purpose can be found. However, it will also necessarily exclude other uses, which will no doubt restrict the value of the product on the EU market by comparison to an unregulated market such as the US.
Prohibited uses
AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g. toys using voice assistance to encourage dangerous behaviour by minors) and systems that allow ‘social scoring’ by governments.
The proposed ban on ‘real-time’ remote biometric identification or facial recognition systems in publicly accessible spaces for the purpose of law enforcement is garnering a lot of press, but perhaps disproportionately so. The challenges with deploying these systems in a generalised manner are already well understood under GDPR.
Risk-based approach
After the prohibited uses, the next category on the sliding scale of risk is AI identified as high-risk, including AI technology used in:
- Critical infrastructures, e.g. transport, that could put the life and health of citizens at risk
- Educational or vocational training, which may determine access to education and the professional course of someone's life, e.g. scoring of exams
- Safety components of products, e.g. AI applications in robot-assisted surgery
- Employment, workers management and access to self-employment, e.g. CV-sorting software for recruitment procedures
- Essential private and public services, e.g. credit scoring denying citizens the opportunity to obtain a loan
- Law enforcement that may interfere with people's fundamental rights, e.g. the evaluation of the reliability of evidence
- Migration, asylum and border control management, e.g. verification of the authenticity of travel documents
- Administration of justice and democratic processes, e.g. applying the law to a concrete set of facts
These high-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes
- Logging of activity to ensure traceability of results (see the sketch after this list)
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance
- Clear and adequate information to the user
- Appropriate human oversight measures to minimise risk
- High level of robustness, security and accuracy
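Of these obligations, logging for traceability is perhaps the most concretely technical. Below is a minimal sketch of what such record-keeping might look like in practice, assuming a JSON-based audit log; the Regulation requires traceability but prescribes no particular format, so every field name and identifier here is our own illustrative assumption.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger; the Regulation does not prescribe a schema.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_decision(system_id: str, model_version: str,
                 inputs: dict, output: str, operator: str) -> None:
    """Record one AI decision so its result can be traced later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system produced the output
        "model_version": model_version,  # exact version, for reproducibility
        "inputs": inputs,                # the data the decision was based on
        "output": output,                # the decision or recommendation itself
        "human_overseer": operator,      # supports the human-oversight obligation
    }
    audit_log.info(json.dumps(record))

# Illustrative use for a hypothetical CV-sorting system:
log_decision("cv-screener-01", "2.3.1",
             {"candidate_id": "A-1042"}, "shortlist", "hr.reviewer@example.com")
```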
In particular, all permitted remote biometric identification systems are considered high risk and as such are subject to strict requirements. Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated, such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence. These uses are subject to authorisation by a judicial or other independent body and to appropriate limits in time, geographic reach and the databases searched.
The third category of AI systems on the sliding scale of risk is those seen as limited risk, e.g. chatbots. These AI systems will be subject to specific transparency obligations: when using AI systems such as chatbots, users should be made aware that they are interacting with a machine so they can take an informed decision to continue or step back.
The fourth and final category are those uses of AI systems classed as minimal risk. The legal proposal allows the free use of applications such as AI-enabled video games or spam filters. The view of the EU is that the vast majority of current AI systems fall into this category. The draft Regulation does not intervene here, as these AI systems represent only minimal or no risk for citizens' rights or safety.
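Pulling the four categories together, the sketch below summarises the sliding scale of risk and the broad consequence attached to each tier. The enum and example mapping are purely illustrative constructions of our own, not terminology or criteria from the Regulation; real classification turns on the system's intended purpose, as discussed above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the draft Regulation's sliding scale of risk."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "specific transparency obligations"
    MINIMAL = "no new obligations"

# Example uses drawn from the article; the mapping itself is illustrative.
EXAMPLES = {
    "social scoring by a government": RiskTier.UNACCEPTABLE,
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use, tier in EXAMPLES.items():
    print(f"{use}: {tier.name} -> {tier.value}")
```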
Compliance and governance
In terms of governance, the Commission proposes that national competent market surveillance authorities supervise the new rules. The creation of a European Artificial Intelligence Board will facilitate their implementation, as well as drive the development of standards for AI. Additionally, voluntary codes of conduct are proposed for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.
It is worth noting that existing notified bodies and data privacy supervisory authorities are expected to perform conformity assessments for AI systems that are safety components of products, or whose intended use falls squarely within their domain, e.g. remote biometric identification in the case of data privacy supervisory authorities.
Measures for SMEs
Member States are mandated to support SMEs by providing guidance and responding to queries about the implementation of the Regulation. SMEs will also be treated more favourably from a costs perspective when applying for conformity assessments of high-risk AI systems.
Penalties
Infringements can give rise to maximum fines of up to €30M or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher. This top tier applies to non-compliance with the prohibition on artificial intelligence practices and to non-compliance of an AI system with the data and data governance requirements. A sliding scale of equivalent fines (€20M/4% and €10M/2% of total worldwide annual turnover) can be levied for other lesser infringements. The Commission stresses that its standard graduated response to infringements will apply and that these significant fines will be a last resort.
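To make the 'whichever is higher' mechanics concrete, the short sketch below computes the maximum exposure for a hypothetical company with €2bn in worldwide annual turnover. The caps and percentages come from the Regulation as described above; the company and the helper function are illustrative assumptions.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap or a percentage of total
    worldwide annual turnover, per the 'whichever is higher' rule."""
    return max(cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical company: €2bn annual turnover

# The three fine tiers described in the draft Regulation:
tiers = [
    ("prohibited practices / data governance breaches", 30_000_000, 0.06),
    ("other lesser infringements (second tier)",        20_000_000, 0.04),
    ("other lesser infringements (third tier)",         10_000_000, 0.02),
]

for label, cap, pct in tiers:
    print(f"{label}: up to EUR {max_fine(turnover, cap, pct):,.0f}")
```

For a company of that size the turnover-based figure dominates at every tier: top-tier exposure is €120M rather than €30M, which is why the percentage limb matters most for large providers.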
Grandfathering
The Regulation will apply to high-risk AI systems that have been placed on the market or put into service before the date of application of the Regulation only if, from that date, those systems are subject to significant changes in their design or intended purpose.
Next steps
The AI Regulation needs to be adopted by both the Council and the Parliament, which will take time and will be subject to heavy lobbying. There is a sense in the Commission that the big-ticket items, such as high-risk AI and its regulation, will be accepted, based on the significant and largely positive commentary received during last summer's feedback period. Once adopted, it is expected to have legal effect in each Member State within two years. In the meantime, those producing high-risk AI systems have a lot of work to do!
For more information on this and other topics related to artificial intelligence, contact a member of our Intellectual Property or Technology teams.
The content of this article is provided for information purposes only and does not constitute legal or other advice.