AI presents host of ethical challenges for healthcare
While artificial intelligence has tremendous potential for revolutionizing healthcare delivery, there are many possible pitfalls and ill-intended uses of this powerful technology.
That’s the contention of Georgia Tourassi, director of the Health Data Sciences Institute at the Department of Energy’s Oak Ridge National Laboratory.
“With the great promise of AI comes an even greater responsibility,” Tourassi testified on Wednesday before a House committee hearing on AI’s societal and ethical implications. “There are many ethical questions when applying AI in medicine.”
With respect to ethics, she observed that the massive volumes of health data being leveraged by AI must be carefully protected to preserve privacy.
“The sheer volume, variability and sensitive nature of the personal data being collected require newer, extensive, secure and sustainable computational infrastructure and algorithms,” according to Tourassi’s testimony.
She also told lawmakers that data ownership and use remain sensitive issues for AI that must be addressed. “The line between research use and commercial use is blurry,” said Tourassi.
To maintain a strong ethical AI framework, Tourassi believes fundamental questions need to be answered, such as: Who owns the intellectual property of data-driven AI algorithms in healthcare? The patient, the medical center that collects the data in the course of providing health services, or the AI developer?
“We need a federally coordinated conversation involving not only the STEM sciences but also social sciences, economics, law, public policy and patient advocacy stakeholders” to “address the emerging domain-specific complexities of AI use,” she added, noting that the Human Genome Project included a program to address the ethical, legal and social implications of genomic research, which could serve as a model for an AI framework.
According to Tourassi’s testimony, the Human Genome Project’s ethical, legal and social implications (ELSI) program “had a lasting impact on how the entire community, from basic researchers to drug companies to medical workers, used and handled genetic data” and “continuing and expanding this research will help to ensure the responsible use of AI for health.”
Also See: AI technology comes under fire from critics in Senate hearing
“With respect to the ethics of AI development and deployment, we know that AI algorithms are not immune to low-quality data or biased data,” added Tourassi.
Joy Buolamwini, founder of the Algorithmic Justice League, pointed out that healthcare researchers are exploring how to apply AI-enabled facial analysis systems to detect pain and monitor disease. However, she testified that an investigation of algorithmic bias in clinical populations showed that these systems performed poorly on older adults with dementia.
“Age and ability should not impede quality of medical treatment, but without care, AI in health can worsen patient outcomes,” added Buolamwini.
Meredith Whittaker, co-founder of New York University’s AI Now Institute, told lawmakers that government agencies are increasingly using AI and algorithmic systems to assess beneficiaries of social services and to manage benefit allocation.
However, according to Whittaker’s testimony, the outcome of these experiments, in many cases, “has been harmful and even deadly to the people such programs are meant to serve.” As an example, she said that several states have turned to automation for Medicaid benefit allocation and that flaws in the system have resulted in serious harm.
“In Arkansas, such a system was used to calculate how much home healthcare chronically ill Medicaid patients would receive,” Whittaker testified. “Due to an error, the system was significantly under-provisioning many people who required such care to survive. Patients were left to sit in their own waste, unable to access food when they were hungry or to turn themselves to prevent bedsores. If Legal Aid of Arkansas had not brought a case and ultimately audited the system, it’s possible that such harm would have persisted unchecked.”
Tourassi concluded that healthcare is one of the industries that will be most impacted by AI in the 21st century. At the same time, she warned that the medical community faces “lots of challenges,” including the fact that the technology has become “overhyped” with “unrealistic expectations of universal benefits.”