
Is a company liable for its AI chatbot telling a customer to phone a scammer?

Thursday, July 4, 2024 | Noah Bonis Charancle | Litigation | Transportation & Logistics, AI, Chatbot, Artificial Intelligence

Client-facing artificial intelligence (AI) chatbots are increasingly used by corporations to respond to customer inquiries. These programs, which can string together coherent sentences in response to typed human questions, are a cost-effective way to answer customer questions. Despite their wide-ranging adoption, however, an open question remains as to who is liable when an AI chatbot provides false or misleading answers to a user.

AI chatbot mistakes range from the ridiculous, such as Google's AI telling users to add glue to pizza to make the cheese stick better, to the dangerous, such as telling users that fake company phone numbers operated by scammers are legitimate. According to a CBC News report in May of 2024, the latter was the case when Mr. Dave Gaudreau, a former Manitoba MLA, was led straight into a scam by Meta AI, an artificial intelligence chatbot owned by Meta Platforms Inc. ("Meta"), the parent company of Facebook.

Mr. Gaudreau, who had just purchased a new phone, was having difficulty transferring his Facebook account over to it. While searching for solutions online, his wife found a phone number purporting to be that of Facebook's customer support. Initially wary of its authenticity, Mr. Gaudreau attempted to verify it by asking Meta AI whether the number was actually associated with Meta.

Instead of giving Mr. Gaudreau the correct answer, namely that Facebook does not have a customer support phone number, Meta AI responded with the following:

The phone number 1-844-457-0520 is indeed a legitimate Facebook support number. Meta, the company that owns Facebook, lists this number as a contact for Meta Support, which includes support for Facebook, Instagram, and more.

Rather than being legitimate, this phone number was operated by scammers pretending to be from Meta.

Trusting the information provided by the AI chatbot, Mr. Gaudreau called the number, whereupon the scammers convinced him to install an app on his phone to rectify a supposed hacking of his IP address. Although Mr. Gaudreau eventually caught on to the scam and hung up, the scammers managed to use his PayPal account to purchase a $500.00 Apple gift card (which PayPal later refunded).

After the call, Mr. Gaudreau contacted his bank and internet provider to cancel his credit cards and lock his bank accounts, preventing further potential losses. He also filed fraud complaints with PayPal and Visa and set up fraud alerts with Equifax and TransUnion. Lastly, he filed a complaint with Meta.

Although this incident ended without apparent harm, it is a striking example of just how much damage a client's reliance on an AI chatbot can cause. Liability for AI chatbot errors is still a novel concept in Canada. In fact, the only common law decision relating to whether companies are liable for their chatbots' statements is the BC Civil Resolution Tribunal's ruling in Moffatt v. Air Canada, 2024 BCCRT 149. In that case, Mr. Moffatt was erroneously told by Air Canada's AI chatbot that he could apply for a reduced bereavement fare after completing his travel, despite Air Canada's official policy stating that bereavement fare requests could only be made before travelling. Air Canada declined to grant him the bereavement fare, arguing that it was not liable for information provided by its chatbot. The Tribunal disagreed with Air Canada's position, holding that Mr. Moffatt was entitled to rely on the chatbot to provide him with accurate information and that Air Canada's duty of care required it to ensure that the chatbot's representations were accurate and not misleading.

If the logic in that decision were followed in Mr. Gaudreau's case, Meta would presumptively be liable for any fallout caused by his reliance on Meta AI. Although the damages Mr. Gaudreau suffered were minimal, it is easy to appreciate how a scammer convincing an individual to install malware on their phone could have devastating effects.

Companies that use client-facing chatbots must understand that their liability for erroneous chatbot statements is potentially limitless: Canada's only legal decision on this topic states that clients are entitled to rely on AI statements. Chatbot owners and operators should ensure they have safeguards and systems in place to prevent their bots from making misleading statements that a user might rely upon, and should immediately rectify any misunderstandings their chatbots have caused. A PDF version is available for download here.

Noah Bonis Charancle
Associate
T 416.865.6661
nbonischarancle@grllp.com

(This blog is provided for educational purposes only, and does not necessarily reflect the views of Gardiner Roberts LLP).
