AI chatbots pose an unregulated, unmanaged risk in healthcare
ECRI says the technology, widely used by health organizations and accepted by consumers, poses the largest risk on its list of hazards.

Organizations have increasingly been adopting chatbots to improve communications with their constituents and facilitate access to health information.
But the risks of their use – and misuse – are concerning, with Willow Grove, Pa.-based ECRI saying that chatbots top its 2026 list of the 10 most significant health technology hazards. ECRI, an independent patient safety organization, annually prepares reports on potential dangers of technology use in healthcare.
ECRI says its concerns stem from the fact that chatbot tools are not regulated as medical devices, nor are they validated for healthcare purposes, even though they continue to be widely used “by clinicians, patients and healthcare personnel.”
At noon ET on January 28, ECRI's patient safety experts will discuss the hidden dangers of AI chatbots in healthcare in a live webinar.
Concerns are high because an analysis by OpenAI suggests that more than 40 million people use ChatGPT for health information. The ECRI report comes on the heels of an announcement by OpenAI that it soon will release ChatGPT Health, which it describes as “a dedicated experience in ChatGPT designed for health and wellness.” While OpenAI says the new capabilities are designed “to support, not replace medical care,” concerns are growing that patients and clinicians may overly rely on AI to distill medical information and inform treatment.
Questions about the technology
AI chatbots are software applications that use artificial intelligence – particularly large language models and natural language processing – in designs that simulate human conversation.
Unlike basic rule-based bots or decision trees, AI-powered chatbots can interpret context, sentiment and user intent – as a result, they’re able to provide real-time responses to open-ended queries.
Providers and other healthcare organizations look to these chatbots to handle incoming calls and queries, providing immediate assistance without human intervention. Common applications include customer support, lead generation and personal assistance. The hope is that chatbots can reduce operational costs, boost efficiency and provide better customer experiences.
ECRI notes that chatbots rely on large language models – such as ChatGPT, Claude, Copilot, Gemini and Grok – to produce responses that sound human and evidence-based.
The organization’s analysis contends that chatbots can “provide valuable assistance, but they can also provide false or misleading information that could result in patient harm.” As a result, it’s advising “caution whenever using a chatbot for information that can impact patient care.”
Without sufficient human oversight, chatbots can pull together answers merely by “predicting sequences of words based on patterns learned from the training data,” the report indicates. The chatbots don’t really understand context or meaning, but “they are programmed to sound confident and to always provide an answer to satisfy the user.”
Foibles of chatbots
ECRI experts say chatbots have “suggested incorrect diagnoses, recommended unnecessary testing, promoted subpar medical supplies and even invented body parts in response to medical questions … For example, one chatbot gave dangerous advice when ECRI asked whether it would be acceptable to place an electrosurgical return electrode over the patient's shoulder blade. The chatbot incorrectly stated that placement was appropriate – advice that, if followed, would leave the patient at risk of burns.”
Chatbots also have the capacity to exacerbate existing health disparities, ECRI's experts say. “Any biases embedded in the data used to train chatbots can distort how the models interpret information, leading to responses that reinforce stereotypes and inequities,” the organization concludes.
To limit some of the potential harm that chatbots can cause, the report recommends wiser and more measured use of the technology.
“Patients, clinicians and other chatbot users can reduce risk by educating themselves on the tools' limitations and always verifying information obtained from a chatbot with a knowledgeable source,” it concludes. “For their part, health systems can promote responsible use of AI tools by establishing AI governance committees, providing clinicians with AI training and regularly auditing AI tools' performance.”
An executive brief of the Top 10 Health Technology Hazards report is available for download. The full report is accessible to ECRI members and includes detailed steps that organizations and industry can take to reduce risk and improve patient safety.
Fred Bazzoli is the Editor in Chief of Health Data Management.