Potential Liability for Chatbot Hallucinations?
Chatbots are often a customer's first point of contact with a company when they raise a query on its website. While the imminent adoption of the EU's AI Act has attracted most of the attention around the regulation of chatbots, a recent small claims tribunal decision from Canada is a cautionary reminder that other areas of law also apply to chatbots.
Background
The case concerned a chatbot that gave inaccurate information to a consumer who raised a query about an airline's bereavement fare policy, despite the relevant page of the airline's website correctly stating that policy. Relying on the chatbot's "hallucination", the consumer bought two full-price fares to attend their grandmother's funeral. When the consumer applied for a partial refund, the dispute came before the tribunal, which directed the airline to provide the refund.
The tribunal found that the airline had made a negligent misrepresentation because it had not taken reasonable care to ensure its chatbot was accurate. As a result, the airline was required to honour the partial refund. While the airline argued that it was not responsible for information provided by its agents, servants or representatives, including a chatbot, the tribunal decided that this argument did not apply here. This was because the chatbot was not a separate legal entity; it was simply a source of information on the airline's website.
The airline also argued that its terms and conditions excluded its liability for the chatbot, but it did not provide a copy of the relevant terms and conditions in its response, so the tribunal did not substantively consider the argument. In addition, although the chatbot's response had included a link to the relevant webpage, the tribunal found that the consumer was entitled to rely on the information provided by the chatbot without double-checking it against the information on that webpage.
Application in Irish law
Under Irish law, it is possible that a court would reach a similar conclusion, particularly in a consumer dispute. First, it is unlikely that a court would find that a chatbot was a separate legal entity from its operator. Instead, the chatbot would likely be treated as information on the company's website.
Irish law also prohibits misleading commercial practices. This includes the provision of false or misleading information that would cause an average consumer to make a transactional decision that they would not otherwise make. The provision of false information by a chatbot which results in a consumer making a purchase on the trader’s website could therefore be deemed a misleading commercial practice in an Irish court.
While the point was not fully considered in the Canadian decision, a contractual clause which excludes the liability of a company for hallucinations by its chatbot in similar circumstances may not be enforceable in Ireland. Under Irish law, contract terms which are unfair are not enforceable against a consumer. While terms which exclude a company’s liability for chatbots are not uncommon, the fairness of a term such as this, particularly where the consumer has made a purchase from the company relying on the information provided by the chatbot, would be questionable.
Key takeaways
While chatbots are a useful tool for companies to interact with their customers, companies should be aware of the legal risks arising from their use. Although it is unlikely that this single tribunal decision from Canada will make companies liable for all chatbot hallucinations, it is a reminder that chatbots can lead to unexpected liability for the company operating them. The risk is starker in a B2C setting, as EU consumer law will generally not allow organisations to make consumers responsible for risks associated with poor product performance.
Companies will also have to consider their potential liability for chatbot hallucinations under the European Commission's proposed revised Product Liability Directive. The revised Directive will enter into force in 2024, and the new rules will apply to products placed on the market 24 months after its entry into force. The revised Directive will significantly modernise the EU's product liability regime, including by expanding the definition of a 'product' to cover software, including standalone software, and digital manufacturing files. Under the new rules, software will be a product for the purposes of applying no-fault liability, irrespective of how it is supplied or used and whether it is stored on a device, accessed through a communication network or cloud technologies, or supplied through a software-as-a-service model. The revised Directive also seeks to extend liability beyond the point at which a product is put into circulation, to cover the period after a product has been placed on the market where the manufacturer retains control of it, for example through software updates and upgrades. Manufacturers may also be held liable for software updates and upgrades supplied by a third party where the manufacturer authorises or consents to their supply, for example where it consents to the provision of software updates by a third party, or where it presents a related service (an integrated or inter-connected digital service) or component as part of its software even though it is supplied by a third party.
Organisations should also be mindful of the EU's proposed Artificial Intelligence Liability Directive, which is closely linked to, and complemented by, the revised Product Liability Directive. The proposed AI Liability Directive seeks to harmonise certain aspects of national fault-based civil liability rules for damage caused by AI systems, including high-risk AI systems as defined under the AI Act. The draft text is currently making its way through the EU's legislative process. Once adopted, Member States will have two years from its entry into force to transpose the legislation into national law.
To reduce potential liability from chatbots, companies should regularly review the performance of their chatbots. In particular, the following could form part of the regular review:
- Reviewing the output of chatbots to ensure that the information they provide aligns with the company’s advertising and sales practices
- Promptly investigating any customer-reported issues associated with their chatbots
Where the chatbot has been provided by a third party, organisations should ensure that the contract with the third party affords them sufficient protection. This would include clearly outlining which party bears liability for misleading or false information, and imposing appropriate obligations on the third party to correct the chatbot in a timely manner. However, chatbot providers will strongly resist any risk sharing, so organisations need to be vigilant about managing this risk in a practical manner, including by ensuring that related services are covered under their product liability insurance. When deploying chatbots with consumers, even for basic and apparently benign use cases, organisations should therefore thoroughly examine the risks associated with hallucinations and incorrect responses. If incorrect responses cannot be fixed, consider another option or put in place a robust remedy process for customers.
For more information, please contact a member of our Technology team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.
People also ask
What is the artificial intelligence law in Ireland?
At present, there is no overarching law which governs the use of artificial intelligence. However, this will change imminently with the adoption of the AI Act, an EU Regulation which will apply across the Member States of the European Union. In addition, artificial intelligence is subject to other areas of law such as the GDPR and consumer protection laws.
Has the AI Act been passed?
Agreement has been reached on the EU AI Act and it is expected to be adopted following a European Parliament vote in April. It is expected to enter into force 20 days after it is published in the Official Journal of the European Union.