
Don’t count on Dr. AI: New search engine gives medical advice that could lead to death in one in five cases

Frantically searching for our symptoms online and self-diagnosing is something many of us are guilty of.

But Dr AI could be dishing out ‘potentially harmful’ drug advice, a worrying study has suggested.

German researchers found that more than a fifth of AI-powered chatbots’ responses to common questions about prescription drugs could “result in death or serious harm.”

Experts have urged patients not to rely on these search engines to provide them with accurate and safe information.

Doctors have also been cautioned against recommending these tools until more “accurate and reliable” alternatives are available.


In the study, scientists from the University of Erlangen-Nuremberg identified the 10 most frequently asked questions from patients regarding the 50 most prescribed medications in the United States.

These included side effects of medications, instructions for use, and contraindications, that is, reasons why the medication should not be taken.

Using Bing Copilot – a search engine with AI-powered chatbot features developed by Microsoft – the researchers evaluated its 500 responses against answers given by clinical pharmacists and medical pharmacology experts.

Responses were also compared to a peer-reviewed up-to-date drug information website.

They found that the chatbots’ statements did not match the reference data in more than a quarter (26%) of all cases and were completely inconsistent in just over 3%.

But a closer analysis of a subset of 20 responses found that just over four in ten (42%) were judged likely to lead to moderate or mild harm, while 22% could result in death or serious harm.

The scientists, who also assessed the readability of all chatbot responses, found that the responses often required a college education to understand.

Writing in the journal BMJ Quality and Safety, the researchers said: “The chatbot’s responses were largely difficult to read, and responses repeatedly lacked information or showed inaccuracies, possibly threatening patient and medication safety.

“Despite their potential, it remains crucial that patients consult their healthcare professionals, as chatbots do not always generate error-free information.

“Caution is advised in recommending AI-based search engines until citation engines with higher accuracy rates become available.”

A Microsoft spokesperson said: “Copilot answers complex questions by distilling information from multiple sources into a single answer.

“Copilot provides citations linked to these answers so the user can explore and research further, as they would with a traditional search.

“For questions related to medical advice, we always recommend consulting a healthcare professional.”


The scientists also acknowledged that the study had “several limitations”, including that it was not based on patients’ actual experiences.

In reality, they said, patients could for example ask the chatbot for more information or prompt it to give its answers in a clearer structure.

It comes as doctors were warned last month that they could put patient safety at risk by relying on AI to aid diagnoses.

The researchers in that study sent a survey to 1,000 GPs through the largest professional network of UK doctors currently registered with the General Medical Council.

One in five of those GPs admitted to using programs such as ChatGPT and Bing AI in their clinical practice, despite a lack of official guidance on how to work with them.

Experts have warned that issues such as “algorithmic bias” could lead to misdiagnoses and that patient data could also be compromised.

They said doctors need to be made aware of the risks and called for legislation covering their use in health care settings.