
Experts Warn Insurance Industry to Buckle Up on Artificial Intelligence Regulations and Litigation

With more and more AI-related litigation expected, industry experts strongly advise U.S. insurance companies to prepare now, before that expectation becomes reality.

“Given our litigious society, the fact is that if you’re going to use AI as an insurance company, you’re going to be unfortunately exposing yourself to at least some regulatory, litigation and reputational risk,” said Scott Kosnoff, partner at Faegre Drinker Biddle & Reath LLP.

Kosnoff said it’s “probably unlikely” that insurance companies will be able to completely avoid these risks. But he thinks they can be mitigated, and that insurers should act now rather than wait for unified AI regulations, which could be a long time coming.

He said that by keeping regulatory expectations and trends in mind; creating and managing risk governance frameworks; testing for algorithmic discrimination; and “having a track record” that demonstrates a commitment to the ethical use of AI, insurers can best position themselves to deal with negative impacts if and when they occur.

“Blood in the Water”

During a webinar hosted by the Washington Legal Foundation, Kosnoff noted that several high-profile cases have already been filed related to the use of artificial intelligence in insurance – and more are expected to follow.

“All of these cases are being defended vigorously, and we’ll just have to wait and see how this plays out in the courts,” Kosnoff said. “But I think it’s fair to say that the plaintiffs’ bar smells blood in the water and thinks that insurers’ use of artificial intelligence is kind of a fertile field.”

While he said many AI-related lawsuits focus on inaccuracies, it’s generally expected that a large portion of them will focus specifically on algorithmic discrimination. This refers to cases where automated systems are believed to contribute to “unjustified differential treatment or impact” that disadvantages people based on real or perceived classifications such as gender, race, or ethnicity.

“I would focus on the use of AI that will have the greatest impact on consumers because that’s where plaintiffs’ lawyers will be looking,” Kosnoff suggested.

This could include situations where insurance companies use AI to price products, process claims, deny health insurance claims for care recommended by doctors, detect fraud, and any other area where U.S. insurance companies may have incorporated AI into their operations.

The Solution Is Mitigation

Despite the risks, Kosnoff said it’s likely impossible to avoid AI altogether, given the competitive pressure on insurers and others to invest heavily in new technologies.

“But if you’re going to use AI, I think it’s important to do it smartly and think not just about what can go right, but also what can go wrong and what you can do to avoid some of that damage,” he said.

To this end, he proposed five steps insurance companies can take to reduce the risk of AI-related litigation:

  1. Pay attention to regulatory expectations
  2. Develop a risk management framework
  3. Periodically update the risk management framework
  4. Test for algorithmic discrimination
  5. Have a “story”

“At a minimum, if you’re going to use AI, you need to stay current with evolving regulatory expectations,” Kosnoff said. “You have to pay attention to the lawsuits that have already been filed, and you have to pay attention to what is being reported in the media. When you are worried about reputational damage (and I worry about that as much as I worry about lawsuits or regulatory actions), paying attention to what is reported on the front pages is important.”

Like other industry experts, such as those at the law firm Locke Lord, Kosnoff strongly recommended that insurance companies develop their own governance frameworks for AI use to protect themselves from potential AI-related litigation. They can look to the National Institute of Standards and Technology’s AI Risk Management Framework, the National Association of Insurance Commissioners’ Model Bulletin, and regulations being developed in Colorado and New York for guidance, he said. They should also regularly update their frameworks as new developments emerge and consider testing for algorithmic discrimination.
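Testing for algorithmic discrimination can take many forms; one widely used screen is the “four-fifths” (80%) adverse impact ratio, which compares favorable-outcome rates across demographic groups. The sketch below is a minimal, hypothetical illustration in Python, not a methodology endorsed in the webinar; the field names, groups, and the 0.8 threshold are assumptions for the example.

```python
from collections import defaultdict

def adverse_impact_ratios(decisions, favorable="approved", threshold=0.8):
    """Compare each group's favorable-outcome rate to the highest group's rate.

    Ratios below the threshold (the classic "four-fifths rule") flag a
    disparity worth investigating. `decisions` is an iterable of
    (group, outcome) pairs, e.g. logged underwriting or claims decisions
    tagged with a protected attribute.
    """
    favorable_counts = defaultdict(int)
    totals = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome == favorable:
            favorable_counts[group] += 1

    rates = {g: favorable_counts[g] / totals[g] for g in totals}
    best = max(rates.values())
    if best == 0:  # no favorable outcomes at all; nothing to compare
        return {g: (0.0, False) for g in rates}
    return {g: (rate / best, rate / best < threshold) for g, rate in rates.items()}

# Hypothetical logged decisions: (protected group, model outcome).
log = [("A", "approved")] * 80 + [("A", "denied")] * 20 \
    + [("B", "approved")] * 55 + [("B", "denied")] * 45

for group, (ratio, flagged) in adverse_impact_ratios(log).items():
    print(f"group {group}: impact ratio {ratio:.2f}" + (" <- review" if flagged else ""))
```

A failing ratio is a screen, not proof of discrimination; it simply signals where an insurer should examine the model’s inputs and the business justification for the disparity.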

“I think the best thing we can do right now is have a good story to tell, and by that I mean a story that credibly demonstrates to our regulators, to a judge in a worst-case scenario, to the New York Times (if they call, and they probably will) that our organization appreciates the concerns about AI, we take those concerns seriously, and we are proceeding with due caution in trying to identify, manage and mitigate the downside risks,” he said.

Kosnoff noted that no compliance program is perfect and there is no one-size-fits-all solution; insurance companies need to design an approach that works for their organization based on its structure, AI usage, risk appetite, location, and other organization-specific details.

Faegre Drinker Biddle & Reath LLP is a full-service international law firm. Since its founding in 1849, it has grown to be one of the 100 largest law firms in the United States.

Rayne Morgan is a content marketing manager at PolicyAdvisor.com and a freelance journalist and writer.

© All contents Copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the express written consent of InsuranceNewsNet.com.