Researchers call on regulators to reduce risks of AI in medicine

Continuous monitoring is critical if regulatory agencies such as the Food and Drug Administration are to reduce the risks of artificial intelligence and machine learning-based medical technology.

That’s the contention of researchers who make the case that, to manage the risks and regulatory challenges of AI and machine learning in medicine, regulators like the FDA should primarily focus on continuous monitoring and risk assessment—and less on planning for future algorithm changes.

In an article published in the journal Science, researchers from INSEAD and Harvard Law School's Petrie-Flom Center for Health Law Policy, Biotechnology and Bioethics point out that regulatory bodies such as the FDA have approved medical AI and machine learning-based software with “locked” algorithms that provide the same result each time and do not change with use.

However, they say the problem is that “as use of artificial intelligence and machine learning in medicine continues to grow, regulators face a fundamental problem: After evaluating a medical AI/ML technology and deeming it safe and effective, should the regulator limit its authorization to market only the version of the algorithm that was submitted, or permit marketing of an algorithm that can learn and adapt to new conditions?”


The article’s authors frame their discussion of “algorithms on regulatory lockdown in medicine” as a question of how regulators should treat “locked” vs. “adaptive” algorithms.
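To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The LockedModel and AdaptiveModel classes and their toy risk-score formula are invented for illustration and are not drawn from the article; the point is only that a locked model's parameters never change after approval, while an adaptive model keeps updating them as new data arrives.

```python
from dataclasses import dataclass


@dataclass
class LockedModel:
    """A 'locked' algorithm: parameters are fixed at approval time,
    so the same input always yields the same output."""
    weight: float
    bias: float

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias


@dataclass
class AdaptiveModel:
    """An 'adaptive' algorithm: parameters keep shifting as new patient
    data arrives, so the output for the same input can drift over time."""
    weight: float
    bias: float
    learning_rate: float = 0.01

    def predict(self, x: float) -> float:
        return self.weight * x + self.bias

    def update(self, x: float, observed: float) -> None:
        # One step of online gradient descent on squared error.
        error = self.predict(x) - observed
        self.weight -= self.learning_rate * error * x
        self.bias -= self.learning_rate * error
```

The locked model returns the same score for the same patient indefinitely; the adaptive model's score for that patient can shift after every call to update, which is exactly the post-approval behavior the authors argue regulators need to watch.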

“For drugs and ordinary medical devices, this problem typically does not arise--but it is this capability to continuously evolve that underlies much of the potential benefit of AI/ML,” warn the authors. “Our goal is to emphasize the risks that can arise from unanticipated changes in how medical AI/ML systems react or adapt to their environments. Subtle, often unrecognized parametric updates or new types of data can cause large and costly mistakes.”

As a result, they contend that the “emphasis of regulators needs to be on whether AI/ML is overall reliable as applied to new data and on treating similar patients similarly.”
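One way to read that recommendation is as a recurring audit: rather than pre-approving every future algorithm change, the deployed model could be rechecked after each update against a fixed reference set of similar patients. The sketch below is a hypothetical illustration, assuming the AdaptiveModel from the earlier example; the consistency_check function, the reference pairs, and the tolerance are all invented for this example and are not taken from the Science article or the FDA framework.

```python
def consistency_check(model, reference_pairs, tolerance=0.05):
    """Flag reference cases where two clinically similar patients
    receive risk scores that diverge by more than `tolerance`.

    `reference_pairs` holds (patient_a, patient_b) feature values chosen
    in advance to represent "similar patients".
    """
    alerts = []
    for a, b in reference_pairs:
        gap = abs(model.predict(a) - model.predict(b))
        if gap > tolerance:
            alerts.append((a, b, gap))
    return alerts


# In practice, rerun consistency_check(model, reference_pairs) after every
# adaptive update and investigate any non-empty alert list before the
# model continues to be used on patients.
```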

In June, the American Medical Informatics Association called on the FDA to refine its Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).

In particular, AMIA recommended improvements to the regulatory agency’s April discussion paper in two areas: how it treats continuously learning vs. locked algorithms, and how new data inputs affect algorithms’ outputs.

When it comes to learning vs. locked algorithms, AMIA told the FDA that “while the framework acknowledges the two different kinds of algorithms,” it is concerned that the framework is “rooted in a concept that both locked and continuously learning SaMD provides opportunity for periodic, intentional updates.”

In addition, while the FDA’s AI framework accounts for new inputs into a SaMD’s algorithm, AMIA said it is “concerned that a user of SaMD in practice would not have a practical way to know whether the device reasonably applied to their population and, therefore, whether adapting to data on their population would be likely to cause a change based on the SaMD’s learning."
