Who is liable when an AI chatbot tells a customer to phone a scammer?

Friday, August 23, 2024 | Noah Bonis Charancle | Litigation | Transportation & Logistics, Artificial Intelligence, Chatbot

The use of AI chatbots as an alternative to traditional customer support has increased dramatically over the last few years. Their appeal is understandable: these artificial intelligence programs can provide rapid and coherent answers to client questions, cost-effectively reducing a company’s need for human customer support staff. Despite their widespread adoption, however, owner liability for false or misleading statements made by a chatbot remains uncharted territory.

Because AI chatbots offer coherent answers, their mistakes are often difficult to spot unless they are patently ridiculous, such as when Google’s AI told users to add glue to pizza to make cheese stick better. These false or inaccurate statements become dangerous when they appear so legitimate that a consumer could easily rely on them to their detriment. This was the case in May of 2024, when CBC News reported that Meta AI, an artificial intelligence chatbot owned by Meta Platforms Inc. (“Meta”), the parent company of Facebook, told a user that a fake customer support phone line operated by scammers was an authentic Facebook phone number.

Mr. Gaudreau, a former Manitoba MLA, was having difficulty transferring his Facebook account to a new phone he had just purchased. His wife searched for solutions online and found a phone number purporting to be that of Facebook customer support. Initially wary, Mr. Gaudreau attempted to verify the number by asking Meta AI whether it was actually associated with Meta. He received the following response:

The phone number 1.844.457.0520 is indeed a legitimate Facebook support number. Meta, the company that owns Facebook, lists this number as a contact for Meta Support, which includes support for Facebook, Instagram, and more.

In reality, Facebook does not have a customer support phone number. Far from being legitimate, the number Meta AI attributed to Meta Support was operated by scammers pretending to be from Meta. Mr. Gaudreau had been led straight into a scam.

Relying on the AI chatbot’s statements, Mr. Gaudreau called the number, and the scammers convinced him that his IP address was being hacked and that he needed to install an app on his phone to rectify this. Although Mr. Gaudreau eventually caught on and hung up, by then the scammers had already used his PayPal account to purchase a $500.00 Apple gift card on a recurring monthly renewal.

After the call, Mr. Gaudreau had to contact his bank to cancel his credit cards and lock his bank accounts to prevent further losses. Although PayPal refunded the Apple gift card, Mr. Gaudreau still had to file fraud complaints with PayPal and Visa and set up alerts with Equifax and TransUnion.

Although Mr. Gaudreau ultimately suffered no lasting harm, this incident is a prime example of just how much damage a consumer’s reliance on a chatbot’s erroneous statement can cause. Liability for AI chatbot errors remains a novel concept in Canada. In fact, the only common law decision on point is the British Columbia Civil Resolution Tribunal’s decision in Moffatt v. Air Canada, 2024 BCCRT 149.

In that case, Air Canada’s chatbot erroneously told Mr. Moffatt that he could apply for a reduced bereavement fare after completing his travel, despite Air Canada’s official policy that bereavement fare requests could only be made before travelling. Air Canada declined to grant him the bereavement fare, stating that it was not liable for information provided by its chatbot. The Tribunal disagreed with Air Canada’s position, holding that Mr. Moffatt was entitled to rely on the chatbot to provide him with accurate information and that Air Canada’s duty of care required it to ensure that the chatbot’s representations were accurate and not misleading.

If the logic of that decision were followed in Mr. Gaudreau’s case, Meta would presumptively be liable for any fallout attributable to his reliance on Meta AI. Although the damages Mr. Gaudreau suffered were minimal, it is easy to appreciate how a scammer convincing an individual to install malware on their phone could produce devastating effects.

Companies that use client-facing chatbots must understand that their liability for a chatbot’s erroneous or misleading statements is potentially limitless in Canada: the sole legal decision on this topic holds that clients are entitled to rely on an AI chatbot’s statements. Those who choose to deploy chatbots should implement safeguards to prevent their bots from making misleading statements, along with a method for flagging any false or misleading statements so that they can be corrected before they are relied upon.

Noah Bonis Charancle
Associate
T 416.865.6661
nbonischarancle@grllp.com


(This blog is provided for educational purposes only, and does not necessarily reflect the views of Gardiner Roberts LLP).
