A recent report, setting out the recommendations of members of the European Parliament to the European Commission on how artificial intelligence (AI) should be regulated in the area of civil liability, was adopted by the European Parliament on 20 October 2020 (the "Report"). It is hoped that the Report will influence the Commission's forthcoming legislative proposals on AI regulation. The Report also contains the draft text of a proposed regulation on the civil liability regime. We review the key suggestions made in the Report and the main features of the proposed regulation.
Who should be liable?
- The Report proposes a new regime for civil liability claims by individuals and corporations against so-called "Operators" of AI systems.
- The Report considers that where there is more than one Operator, all Operators should be jointly and severally liable, while having a proportionate right of recourse against each other.
Review of the Product Liability Directive and General Safety Framework
- The Report urges the Commission to assess whether the Product Liability Directive should be transformed into a regulation and to clarify the definition of "products" by determining whether digital content and digital services fall within its scope.
- It also calls on the Commission to consider adapting concepts such as "damage", "defect" and "producer", and to consider whether the concept of "producer" should incorporate manufacturers, developers, programmers and other service providers.
- The Report also considers the concept of the "time when the product was put into circulation". It calls on the Commission to assess possible adjustments to the EU safety framework, in particular whether this concept remains fit for purpose for emerging digital technologies, and whether a producer's responsibility and liability could extend beyond that point. This reflects the fact that AI-driven products may be changed or altered under the producer's control after they have been placed on the market, which could cause a defect and ensuing damage. This concept has always been to the forefront when considering product liability and AI, and from the perspective of legal certainty such a change could be welcomed by Operators.
High-Risk AI systems
- One of the most significant features of the Report is the regime of strict liability imposed on Operators of high-risk AI systems: they will be strictly liable for any harm or damage caused by a physical or virtual activity, device or process driven by that AI system. This means that Operators of high-risk AI systems will be liable for any harm caused by an autonomous activity, device or process driven by their AI system, even if they did not act negligently.
- The Report defines "high risk" as "significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected; the significance of the potential depends on the interplay between the severity of possible harm or damage, the degree of autonomy of decision-making, the likelihood that the risk materialises and the manner and the context in which the AI-system is being used".
- The Report proposes that all high-risk AI systems be exhaustively listed in an Annex to the proposed regulation, reviewed at least every six months and updated where necessary by delegated act. This should provide clarity in categorising high-risk AI systems, which is necessary given that the consequences of classification are significant.
Compensation & limitation periods for high-risk AI systems
- In terms of redress, the Report proposes maximum compensation of €2 million in the case of death or harm to a person's physical health or integrity resulting from the operation of a high-risk AI system, and a maximum of €1 million in the case of significant immaterial harm (economic loss or damage to property).
- The Report proposes lengthy limitation periods: 30 years for claims concerning harm to life, health or physical integrity, and 10 years for claims concerning property damage or significant immaterial harm resulting in a verifiable economic loss.
- These limitation periods are far longer than those provided for under the Product Liability Directive.
The current proposal would require providers of AI systems to check whether they fall within the definition of "Operator" and to undertake a risk assessment of their technology. As noted above, the Report suggests imposing strict liability on Operators of high-risk AI systems where damage is caused. The classification of AI systems will therefore be a crucial consideration for Operators.
A robust civil liability framework for AI may present implementation challenges for businesses, but it should also provide legal certainty, better protect citizens and enhance their trust in AI technologies by deterring high-risk activities. In this respect, the Report is welcome, and we await feedback from the European Commission on how the proposal will be taken forward.
The content of this article is provided for information purposes only and does not constitute legal or other advice.