Navigating the ethical AI horizon: Healthcare’s collaborative path forward
Top U.S. and world leaders realize artificial intelligence could remake healthcare, but they recognize the need for caution and protections.
While artificial intelligence holds bright promise for improving healthcare, nagging questions and worries are on the rise.
Those concerns have reached the top administrative levels of U.S. and worldwide organizations, which are looking for assurances that AI will be used ethically, effectively and safely.
Healthcare organizations are reacting as well, seeking to put protections in place and to assess the validity of AI tools and applications.
This public and private focus on the safe use of AI is crucial to assuring the clinical community and the public that the technology receives the same scrutiny applied to medical devices and other healthcare technologies.
Presidential attention
President Biden has responded to concerns about the unbridled use of AI, first signing an Executive Order on the safe and secure use of artificial intelligence. The order notes the risk that “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias and disinformation,” among other risks, and states that “this endeavor demands a society-wide effort that includes government, the private sector, academia and civil society.”
In December, the Biden administration followed up with an AI healthcare initiative, securing voluntary commitments from 28 healthcare provider and payer organizations on the safe, secure and trustworthy purchase and use of AI in healthcare. The commitments seek to align industry action on AI around “the FAVES principles – that AI should lead to healthcare outcomes that are fair, appropriate, valid, effective and safe.” The principles will be applied to generative AI content, with the aim of deploying the technology equitably and augmenting efforts to achieve the Quadruple Aim.
In response, the National Institute of Standards and Technology has issued a request for information to support its responsibilities under President Biden’s executive order, which directs NIST to develop guidelines for evaluation, consensus-based standards and testing environments for AI systems.
Also weighing in on managing the use of AI is the World Health Organization, which in January released new guidance on the ethics and governance of large multi-modal models (LMMs) – a type of fast growing generative artificial intelligence (AI) technology with applications across healthcare. WHO’s guidance contains 40 recommendations for consideration by governments, technology companies and healthcare providers to ensure the appropriate use of LMMs to promote and protect the health of populations.
WHO notes that LMMs use one or more types of data inputs to “generate diverse outputs not limited to the type of data inputted. LMMs have been adopted faster than any consumer application in history, with several platforms – such as ChatGPT, Gemini (previously Bard) and Bert – entering the public consciousness in 2023.”
The recent guidance follows a June 2021 WHO report on ethics and governance of artificial intelligence for use in healthcare.
Congressional interest
Oversight and information gathering also have increased among federal legislators.
In February, the Senate Finance Committee held a hearing on the use of AI in healthcare. Chaired by Sen. Ron Wyden (D-Ore.), witnesses included Mark Sendak, MD, co-lead of the Health AI Partnership; Ziad Obermeyer, MD, associate professor at the University of California-Berkeley; and Katherine Baicker, provost of the University of Chicago.
In the Senate, the Health, Education, Labor and Pensions Committee held a hearing Nov. 8, 2023, on policy considerations for the use of artificial intelligence in healthcare. Testifying at the hearing were Keith Sale, MD, vice president and chief physician executive of ambulatory services at the University of Kansas Health System; Kenneth Mandl, MD, director of the computational health informatics program at Boston Children’s Hospital; and witnesses from the Johns Hopkins Center for Health Security and Greater Wisconsin Agency on Aging Resources.
In December 2023, the House Energy and Commerce Committee held a hearing titled “Leveraging Agency Expertise to Foster American AI Leadership and Innovation.” The panel sought ways to ensure that federal agencies enact policies that better equip healthcare organizations to address the benefits and risks of AI use in healthcare, while strengthening data security protections.
In testimony before that committee, National Coordinator for Health IT Micky Tripathi noted that the Food and Drug Administration has authorized more than 690 AI-enabled devices to improve medical diagnosis and has established the Digital Health Center of Excellence to better understand how AI impacts medical device regulation. He also explained that ONC proposed a rule in April 2023 to increase transparency into algorithms and adopt “risk management approaches to AI-based technologies to support a dynamic and high-quality market for predictive AI in electronic health records.”
Tripathi also noted that HHS is looking to develop resources and policies to enable the safe, responsible adoption and use of AI, while managing the risks of AI in healthcare, public health and human services. It’s also seeking to advance the quality and safety of AI use in healthcare. ONC also wants to play a role in providing public education across the healthcare ecosystem.
Additionally, the Health Subcommittee of the Energy and Commerce Committee held a hearing titled “Understanding How AI is Changing Health Care” on Nov. 29, 2023, examining how to encourage AI development that makes positive contributions to healthcare.