
Using AI in customer service raises legal challenges that could affect banks

Air Canada plane and Patagonia signage

Air Canada was ordered to pay damages over misinformation generated by its chatbot; Patagonia was accused of letting a vendor's AI model eavesdrop on and analyze customer service calls without consent. Both cases have parallels in banking, where many institutions use AI-powered chatbots and contact center software.

Two recent lawsuits challenge companies that use artificial intelligence in their chatbots and call centers, as many U.S. banks do.

In one case, a customer whose grandmother had just died booked a flight on Air Canada after a generative AI chatbot assured him he had 90 days to apply for a bereavement discount. That turned out not to be the airline's policy, and Air Canada refused to grant the sizable discount, saying the policy was correctly described on its website. British Columbia's Civil Resolution Tribunal ordered Air Canada to pay the customer the equivalent of the discount, plus interest and fees.

In the second case, several customers sued Patagonia after discovering that Talkdesk, a contact center technology provider Patagonia uses, was recording and analyzing their customer service calls and using them to train its AI model. The customers say they would not have made the calls had they known Talkdesk was listening, and they allege the practice violates California privacy law. The complaint was filed July 11 and has not yet been heard in court.

The Patagonia case challenges the use of AI in contact centers, something many U.S. banks do, primarily to analyze customer sentiment, assess call center agent performance and summarize calls for their records. The Air Canada case challenges any use of a generative AI model directed at customers by a consumer-facing company such as a bank.

Hallucinating chatbot

When Jake Moffatt's grandmother died, he visited Air Canada's website to book a flight from Vancouver to Toronto at bereavement rates. While searching for flights, he used a chatbot on the airline's website, which told him he could apply for bereavement rates retroactively.

But when Moffatt later applied for the discount, the airline said the chatbot had made a mistake, since the request should have been made before the flight, and that it would not provide the discount. Air Canada argued it could not be held liable for information provided by its agents, employees or representatives, including the chatbot, which it described as a "separate legal entity that is responsible for its own actions." The airline added that Moffatt should have clicked on a link provided by the chatbot, where he would have seen the correct policy. Air Canada did not respond to an interview request.

"This is a remarkable submission," tribunal member Christopher C. Rivers wrote in his decision. "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."

Rivers also found that Air Canada did not take reasonable care to ensure its chatbot was accurate. "While Air Canada argues Mr. Moffatt could find the correct information on another part of its website, it does not explain why the webpage titled 'Bereavement Travel' was inherently more trustworthy than its chatbot," Rivers wrote. "It also does not explain why customers should have to double-check information found in one part of its website on another part of its website."

The tribunal awarded Moffatt $650.88 in damages, $36.14 in interest and $125 in fees.

The case will likely make Air Canada think twice about the chatbots it deploys and push AI chatbot vendors to improve their feature sets, said Greg Ewing, an attorney at the law firm Dickinson Wright in Washington, D.C.

“For example, you could start introducing exclusions around what a chatbot can talk about,” he said. “So I think that will both drive innovation and motivate companies like Air Canada to be careful about choosing chatbots.”
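To make that idea concrete, here is a minimal, purely illustrative sketch in Python of such an exclusion layer. The topic lists, policy links and function names are hypothetical and are not drawn from Air Canada, any bank or any vendor; the point is only that questions touching binding policy can be routed to an official page or a human agent instead of a generative model.

# Illustrative sketch only: one way a company might implement the kind of
# topic exclusions Ewing describes, by screening questions before they ever
# reach a generative model. All names and rules here are hypothetical.

RESTRICTED_TOPICS = {
    # topic label -> keywords suggesting the question touches binding policy
    "bereavement_fares": ["bereavement", "funeral", "death in the family"],
    "refund_policy": ["refund", "chargeback", "money back"],
    "fee_schedule": ["overdraft fee", "wire fee", "late fee"],
}

POLICY_LINKS = {
    "bereavement_fares": "https://example.com/policies/bereavement",
    "refund_policy": "https://example.com/policies/refunds",
    "fee_schedule": "https://example.com/policies/fees",
}


def route_question(question: str) -> dict:
    """Return either a canned policy referral or a go-ahead for the model."""
    lowered = question.lower()
    for topic, keywords in RESTRICTED_TOPICS.items():
        if any(keyword in lowered for keyword in keywords):
            return {
                "handled_by": "policy_referral",
                "reply": (
                    "For questions about this policy, please see the official "
                    f"page: {POLICY_LINKS[topic]} or contact an agent."
                ),
            }
    # Only questions outside the excluded topics reach the generative model.
    return {"handled_by": "generative_model", "reply": None}


if __name__ == "__main__":
    print(route_question("Can I get a bereavement fare refund after my flight?"))
    print(route_question("What are your hours at the downtown branch?"))

The design choice such a filter represents is deliberately conservative: anything that could create a binding commitment is answered with a pointer to the authoritative source rather than generated text.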

Humans make these kinds of mistakes too, Ewing said.

"It's not a unique circumstance," he said. "It's only unique because the chatbot actually wrote those words down."

Many banks offer AI chatbots on their websites, though most do not currently use generative AI (many said they would like to do so in the future). Bankers interviewed for this article say they are cautious about making generative AI available to customers.

“At Citizens, we have focused our initial use cases internally with human oversight while we actively and securely pursue the adoption of gen AI,” said Krish Swamy, Chief Data and Analytics Officer at Citizens. “We see the potential for gen AI to help us serve our customers while also helping our peers innovate. Smart financial institutions should integrate appropriate safeguards, including human safeguards, protecting customer data, and upholding privacy commitments, to best support the scalable deployment of gen AI.”

According to Sri Ambati, CEO of H2O.ai, AI models can be validated and tested with other AI models.

"Evaluation is becoming important, and the best way to evaluate AI is with other AI," he said. His company offers a framework called Eval Studio that lets companies create evaluations that probe models for weaknesses.
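As a rough illustration of that idea, the sketch below shows the shape of an evaluation harness in which each chatbot answer is reviewed against the official policy text. The reviewer here is a deliberately trivial stand-in function, and every name and test case is hypothetical; in a real pipeline the judge would be a second model, or a product such as Eval Studio, prompted to flag answers that contradict the reference policy.

# Hypothetical sketch of "AI checking AI": each customer-facing answer is
# compared with the authoritative policy text by a separate reviewer.
# In production the reviewer would be a second model; here it is a trivial
# stand-in so the example runs on its own.

from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    reference_policy: str  # the authoritative text the answer must agree with
    chatbot_answer: str    # what the customer-facing model actually said


def judge(case: EvalCase) -> str:
    """Stand-in reviewer: flag answers that promise something the policy
    never states. A real version would send both texts to a second model."""
    promises = ["90 days", "retroactive", "full refund"]
    for phrase in promises:
        if phrase in case.chatbot_answer.lower() and phrase not in case.reference_policy.lower():
            return f"FAIL: answer claims '{phrase}' but the policy does not"
    return "PASS"


cases = [
    EvalCase(
        question="Can I apply for a bereavement fare after my trip?",
        reference_policy="Bereavement fares must be requested before travel.",
        chatbot_answer="Yes, you have 90 days after travel to apply retroactively.",
    ),
    EvalCase(
        question="Do you offer bereavement fares?",
        reference_policy="Bereavement fares must be requested before travel.",
        chatbot_answer="Yes, but the request must be made before your flight.",
    ),
]

for case in cases:
    print(judge(case), "-", case.question)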

AI eavesdropping at Patagonia

In a class action lawsuit filed against Patagonia in California Superior Court in Ventura County, customers accuse the retailer of violating California privacy law by allowing Talkdesk's AI model to listen to and analyze customer service conversations in real time.

“When callers call one of Patagonia’s support lines, they are told that the call ‘may be recorded for training purposes,’” the complaint says. “This informs reasonable consumers that Patagonia itself may use the recording to train its customer service agents or improve its products. It does not inform reasonable consumers that a third party (Talkdesk) will intercept, listen in on, record, and use the call for its own purposes.”

Patagonia and Talkdesk did not respond to requests for interviews.

Ewing said California has a wiretapping law that makes it illegal to record conversations without all parties' consent.

“I think they have a pretty strong case for that because Talkdesk, at Patagonia’s request, is recording these calls, and at least according to the complaint and what I’ve seen, there was no actual consent to record them,” he said. “We’ve all heard those introductions, ‘This call may be recorded for training purposes.’ That doesn’t sound like ‘sending it to a third party’ to me.”

The complaint alleges that Talkdesk uses the data to train its AI model. This raises the issue of potential bias.

"I would be concerned if I were Talkdesk that the complaint would be basically, look, you have Patagonia users who call Patagonia and they're angry and they have a Southern accent that the AI picks up," Ewing said. "The next time that person calls, what is the AI going to use to make its recommendations to the customer service agent?"

Ewing said the lawsuit will force Talkdesk and its customers to rethink what they disclose, what consent they seek and how they use AI models in their contact centers.

Many U.S. banks use AI to analyze customer service conversations, customer sentiment and agent performance, and even to rethink products that draw complaints. Some use generative AI to help call center agents provide informed responses.

More disclosure might have prevented such lawsuits. In addition to the standard message about call monitoring, companies could add a line noting that software analyzes calls "to help our agents provide the best customer service," Ewing said.

As H2O.ai’s Ambati says, over time, customers will own their data.

"If you own the data, you can rent it out," he said. "You get all the property rights to lend it out to tune a large language model. You can let it be used to fight Alzheimer's, for example, but not for political purposes."