
Fixing AI regulation in healthcare will take more than patches and quick fixes

After surveying 55 thought leaders behind closed doors, the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI) found that only 12% of them believe AI in healthcare should always involve a human.

A majority, 58%, told organizers that human oversight is unnecessary as long as safeguards are in place. Roughly a third, 31%, took a middle ground, supporting human oversight “in most cases, with a few exceptions.”

HAI’s Healthcare AI Policy Steering Committee convened the select group in May for a workshop held under the Chatham House Rule. The 55 participants included leading policymakers, scientists, healthcare providers, ethicists, AI developers, and patient advocates. The organizers aimed to identify urgent gaps in AI policy and build support for changes to AI regulation.

Participants discussed shortcomings in federal health AI policy—and possible solutions—covering three key use cases. Here are excerpts from the workshop report, lead-authored by Caroline Meinhardt, director of policy research at Stanford HAI.

1. Artificial intelligence in software as a medical device.

Workshop participants proposed new policy approaches that aim to streamline the market approval process for these multifunctional software systems while ensuring clinical safety, Meinhardt and co-authors report.

“First, public-private partnerships will be critical to managing the evidentiary burden of such approval, with a potential emphasis on developing post-market surveillance,” they write. “Second, participants supported improved information sharing during the device approval process.” More:

“Although the FDA has approved nearly 900 medical devices that incorporate AI or machine learning software, clinical adoption has been slow because healthcare organizations have limited information on which to make purchasing decisions.”

2. Artificial intelligence in clinical operations and enterprise administration.

As the report’s authors recount, some participants argued for human oversight to ensure safety and reliability, while others warned that requirements for human involvement could increase physicians’ administrative burden and leave them feeling less accountable for the clinical decisions they make.

“Some viewed laboratory testing as a successful hybrid model, in which the device is monitored by a physician and subject to regular quality checks,” the authors add. “Any out-of-range values are checked by a human.” More:

“Should patients be informed about the use of AI at any stage of their treatment, and if so, how and when? … (A) number of participants felt that in some circumstances, such as an email purporting to come from a healthcare professional, the patient should be informed that AI played a role.”

3. AI applications aimed at patients.

A growing number of patient-facing apps, such as LLM-based mental health chatbots, promise to democratize access to health care or offer patients new services via mobile devices, Meinhardt and colleagues write.

“And yet,” they note, “no targeted safeguards have been implemented to ensure that these patient-facing, LLM-based apps do not share harmful or misleading medical information—even or especially when the chatbots claim not to be providing medical advice, despite sharing information in a way that closely resembles medical advice.” More:

“There is an urgent need to clarify the regulatory status of these patient-facing products. … The needs and perspectives of entire patient populations must be taken into account to ensure that regulatory frameworks address health inequalities caused or exacerbated by AI.”

Also of interest are the results of two follow-up polls conducted the same day as the workshop.

  • A majority of participants (56%) said AI applications in healthcare should be regulated much like healthcare workers, with accredited training programs, licensing exams, and the like. A slightly smaller share, 44%, said the models should be regulated like medical devices, with market authorization, post-market surveillance, and so on.
  • More than half of respondents (56%) said effective governance of AI in healthcare will require significant changes to existing regulations. More than a third, 37%, said a new regulatory framework could work. Only a small fraction, 8%, said minor changes to existing rules would suffice.

One participant offered a colorful image to convey how outdated the current framework for governing AI in healthcare has become. Navigating the regulatory landscape, the participant said, is like “driving a 1976 Chevy Impala on the roads of 2024.”

To this, Meinhardt and co-authors add:

“Traditional regulatory paradigms in healthcare are in urgent need of adaptation to a world of rapid AI development.”

Read the whole thing.