Ten steps to ethics-based governance of AI in healthcare
Artificial intelligence is transforming healthcare as we know it, enabling healthcare professionals to analyze health data quickly and precisely, and leading to better detection, treatment, and prevention of a multitude of physical and mental health issues. In addition, AI plays an increasingly significant role in the fields of medical research and education.
However, AI’s ability to interpret data relies on processes that are not transparent, making it difficult to verify and trust outputs from AI systems. The use of AI in healthcare raises ethical questions that must be considered to avoid potentially harming patients, creating liability for healthcare providers and undermining public trust in these technologies.
For example, healthcare AI tools have been observed to replicate racial, socioeconomic and gender bias. Even when an algorithm itself is free of structural bias, the data it interprets may contain bias that is then replicated in clinical recommendations. Although algorithmic bias is not unique to predictive AI, AI tools can amplify these biases and compound existing healthcare inequalities.
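To make this concern concrete, a governance team might routinely compare a deployed model's error rates across patient groups. The sketch below is illustrative only; the group labels, data and threshold of concern are hypothetical, and a real audit would use validated fairness tooling and production data.

```python
# Illustrative audit: compare false-negative rates across patient groups.
# All records and group labels here are hypothetical examples.
from collections import defaultdict

# Each record: (demographic_group, true_label, model_prediction)
# 1 = condition present / flagged for follow-up, 0 = not flagged.
predictions = [
    ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

positives = defaultdict(int)  # patients who truly have the condition
missed = defaultdict(int)     # of those, how many the model failed to flag

for group, actual, predicted in predictions:
    if actual == 1:
        positives[group] += 1
        if predicted == 0:
            missed[group] += 1

for group in sorted(positives):
    fnr = missed[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.0%}")
    # A materially higher miss rate for one group is a signal that the
    # underlying data or model may be reproducing existing inequities.
```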
Patients are often unaware of the extent to which healthcare AI tools are capable of mining and drawing conclusions from health and non-health data, including from sources they believe to be confidential. Consequently, patients are not fully aware of how AI predictions can be used against them.
If AI predictions about health are included in a patient’s electronic record, anyone with access to that record could discriminate on the basis of speculative forecasts about mental health, cognitive decline risk, cancer risk or potential for opioid abuse. The implications for patient safety, privacy and engagement are profound.
Of greater concern, these risks have already outpaced the current legal landscape. The Health Insurance Portability and Accountability Act (HIPAA), which requires patient consent for disclosures of certain medical information, does not apply to commercial entities that are not healthcare providers or insurers.
The Americans with Disabilities Act (ADA) does not prohibit discrimination based on future medical problems, and no law prohibits decision-making on the basis of non-genetic predictive data. Traditional malpractice rules of physician liability are also becoming more complex to apply as doctors become increasingly reliant on AI tools.
As large healthcare systems increasingly adopt AI technologies, data governance structures must evolve to ensure that ethical principles are applied to all clinical, information technology, education, and research endeavors. A data governance framework based on the following 10 steps can help large healthcare systems embrace AI applications in a way that reduces ethical risks to patients, providers and payers. Such a framework also enhances public trust, transforms patient experiences and provides effective ethics-based oversight.
1) Establish ethics-based governing principles. AI initiatives should align with key overarching principles to ensure these efforts are shaped and implemented in an ethical way. At a minimum, these principles should affirm the following:
- Do no harm: Human beings should exercise reasonable judgment and maintain responsibility for the life cycle of AI algorithms and systems, and for the healthcare outcomes that stem from them.
- AI tools should be designed and developed using transparent protocols, auditable methodologies and metadata (an illustrative metadata record follows this list).
- AI systems should collect and process patient data in ways that reduce bias against population groups.
- Patients should be apprised of the known risks and benefits of AI technologies so they can make informed medical decisions.
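One way to operationalize the transparency and auditability principle above is to require a structured metadata record for every deployed algorithm. The sketch below is a hypothetical minimum, not an established schema; the field names and example values are assumptions an organization would adapt to its own governance policies.

```python
# Hypothetical minimum metadata record for a deployed clinical AI tool.
# Field names and example values are illustrative, not an established standard.
from dataclasses import dataclass
from datetime import date
import json
from dataclasses import asdict

@dataclass
class ModelGovernanceRecord:
    model_name: str
    version: str
    intended_use: str
    training_data_summary: str        # provenance and time range of training data
    populations_evaluated: list[str]  # groups included in validation
    known_limitations: list[str]
    bias_audit_date: date
    clinical_owner: str               # accountable human, per the "do no harm" principle

record = ModelGovernanceRecord(
    model_name="readmission-risk",    # hypothetical example
    version="1.2.0",
    intended_use="Flag adult inpatients at elevated 30-day readmission risk",
    training_data_summary="De-identified encounters, 2018-2023, single health system",
    populations_evaluated=["adults 18-64", "adults 65+"],
    known_limitations=["Not validated for pediatric patients"],
    bias_audit_date=date(2024, 1, 15),
    clinical_owner="Chief Medical Information Officer",
)

# Emitting the record as JSON makes it easy to store alongside the model
# and review during audits.
print(json.dumps(asdict(record), default=str, indent=2))
```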