The European Commission has published a Report on Liability for Artificial Intelligence and other emerging digital technologies which details the findings of an Expert Group on Liability and New Technologies – New Technologies Formation.

Assessing existing liability regimes

The Group considered “whether and to what extent existing liability schemes are adapted to the emerging market realities following the development of new technologies such as artificial intelligence, advanced robotics, the internet of things and cyber security issues”. The Group also analysed the current liability regimes across the Member States and assessed their suitability and adequacy to deal with damage resulting from the use of emerging digital technologies.

In summary, the Group found that while the laws of the Member States ensure basic protection of rights, primarily through damages in tort and contract, these laws were not designed for this dynamic, complex and fast-developing area. This is a result of the technical characteristics of the technology, such as complexity, modification through updates, self-learning during operation, limited predictability and vulnerability to cybersecurity threats.

An example given by the Group to highlight the complexities in this field is a smart home system, which comprises a number of interacting devices and programmes. Someone who suffers damage as a result of a failure of such a system faces a number of financial and technical obstacles in proving causation, i.e. that the software design or algorithm caused the failure of the device or system, such as the cost of commissioning a technical analysis of the software by an expert. The more systems involved and interacting, the more costly this becomes.

10 key findings of the Group

The Group made the following key findings on how liability regimes should be designed and, where necessary, changed to adapt to this evolving area of digital technology:

  1. A person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.

  2. In situations where a service provider ensuring the necessary technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology.

  3. A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor and maintain the technology in use and – failing that – should be liable for breach of these duties if at fault.

  4. A person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.

  5. Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.

  6. For situations exposing third parties to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potential tortfeasors against the risk of liability.

  7. Where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to facilitation of proof.

  8. Emerging digital technologies should come with logging features, where appropriate in the circumstances, and failure to log, or to provide reasonable access to logged data, should result in a reversal of the burden of proof so as not to operate to the detriment of the victim.

  9. The destruction of the victim’s data should be regarded as damage, compensable under specific conditions.

  10. It is not necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.

Conclusion

While it is possible to apply existing liability regimes to emerging digital technology, these technological developments are constantly evolving, and steps should be taken now to consider how to implement the recommendations of the Group.

The key concern as matters stand is that victims who suffer damage as a result of this technology may be under-compensated, or not compensated at all, due to a lacuna in the law. The EU's first dedicated AI legislation is expected to be published next month, and it will be very interesting to see whether any of the above issues are addressed in that draft.

For more information on the impact of emerging digital technologies and the adoption of AI on your business, contact a member of our Product Liability team.

This insight was contributed by Mary Cooney, Senior Associate.


The content of this article is provided for information purposes only and does not constitute legal or other advice.
