Experts tell Senate panel of AI’s growing potential
The technology that uses massive amounts of health data to produce insights also brings with it risks for patient care and looming ethical challenges.
Artificial intelligence has tremendous potential for making sense of the big data that is inundating healthcare, creating actionable insights for clinicians. However, the breakthrough technology also brings with it challenges and risks for the industry.
That’s the consensus of AI experts who testified on Tuesday before the Senate Commerce, Science and Transportation Committee’s Subcommittee on Communications, Technology, Innovation and the Internet.
Edward Felten, professor of computer science and public affairs at Princeton University, told lawmakers that AI is already creating huge benefits in healthcare and that its potential will only grow as the technology advances.
“AI is a key enabler of precision medicine,” said Felten. “AI systems can learn from data about a great many patients, their treatments and outcomes to enable better choices about how to personalize treatment for the particular needs, history and genetic makeup of each future patient.”
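At its core, Felten is describing a supervised learning workflow: fit a model on past patients’ features and outcomes, then score a new patient. The sketch below is a minimal illustration of that pattern only, with synthetic data and invented field names; it does not depict any system mentioned at the hearing.

```python
# Minimal sketch of the supervised-learning pattern Felten describes:
# learn from past patients' features and outcomes, then score a new patient.
# All data and field names here are synthetic, invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic cohort: [age, genetic_marker (0/1), prior_treatment (0/1)]
X = rng.integers(0, 2, size=(500, 3)).astype(float)
X[:, 0] = rng.integers(30, 90, size=500)        # age column
y = (X[:, 1] == 1).astype(int)                  # toy outcome: marker drives response

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a new patient: 72 years old, marker present, no prior treatment.
new_patient = np.array([[72.0, 1.0, 0.0]])
print("Predicted probability of response:", model.predict_proba(new_patient)[0, 1])
```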
Victoria Espinel, president and CEO of BSA|The Software Alliance, pointed to a 2016 Frost & Sullivan report predicting that AI has the potential to improve health outcomes by 30 to 40 percent. She made the case that AI improves outcomes not by replacing the decision-making of healthcare professionals, but by giving them new insights into vast amounts of health data.
“AI tools are powering machine-assisted diagnosis, and surgical applications are being used to improve treatment options and outcomes,” testified Espinel. “Image recognition algorithms are helping pathologists more effectively interpret patient data, thereby helping physicians form a better picture of patients’ prognosis. The ability of AI to process and find patterns in vast amounts of data from disparate sources is also driving important progress in biomedical and epidemiological research.”
Espinel provided an anecdote about a 60-year-old woman who was initially diagnosed with a conventional form of leukemia and went through chemotherapy treatment, only to experience a very slow recovery from the disease. After several frustrating months of not knowing how to address the problem, she said the woman’s physicians turned to an AI-powered, cloud-based system capable of cross-referencing the patient’s genetic data with insights gleaned from tens of millions of studies.
“Within minutes, the doctors learned that the patient might be suffering from an extremely rare form of leukemia that required a unique course of treatment,” according to Espinel. “The doctors were able to quickly update her treatment plan and watch her condition improve significantly. This is AI—it’s innovative, it’s powerful, it’s lifesaving.”
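At a highly simplified level, the cross-referencing Espinel describes resembles a similarity search: represent the patient’s profile and each study as vectors, then retrieve the closest matches. The toy sketch below, with invented study titles and a plain TF-IDF representation, illustrates the shape of that lookup; it is not the actual system from her anecdote, which would draw on curated genomic databases.

```python
# Toy sketch of matching a patient profile against a corpus of study
# abstracts via text similarity. The studies and profile are invented;
# a real system would use curated genomic databases, not raw TF-IDF.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

studies = [
    "Chemotherapy response in acute myeloid leukemia with FLT3 mutation",
    "Rare leukemia subtypes resistant to standard chemotherapy regimens",
    "Hypertension outcomes in elderly patients on beta blockers",
]
patient_profile = "leukemia slow recovery after standard chemotherapy"

vectorizer = TfidfVectorizer()
study_vecs = vectorizer.fit_transform(studies)
patient_vec = vectorizer.transform([patient_profile])

scores = cosine_similarity(patient_vec, study_vecs)[0]
best = scores.argmax()
print(f"Closest study ({scores[best]:.2f}): {studies[best]}")
```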
Likewise, Daniel Castro, vice president of the Information Technology and Innovation Foundation, a think tank, made the case that AI adds a layer of analytics that uncovers actionable insights clinicians could not produce on their own, improving the quality of care.
“Researchers at Stanford have used machine learning techniques to develop software that can analyze lung tissue biopsies with significantly more accuracy than a top human pathologist and at a much faster rate,” Castro said. “By analyzing large volumes of data, researchers can train their computer models to reliably recognize known indicators of specific cancer types as well as discover new predictors.”
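The Stanford work Castro cites is, in broad strokes, an image-classification problem. As a rough, hypothetical miniature of that pattern, not the Stanford model itself, the sketch below defines a small convolutional network that maps a tissue-image tensor to a benign-versus-malignant score; production systems use far deeper networks trained on labeled whole-slide images.

```python
# Hypothetical miniature of the image-classification pattern behind
# ML-based biopsy analysis. Not the Stanford model; real systems use
# much deeper networks trained on labeled pathology slides.
import torch
import torch.nn as nn

class TinyTissueNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # benign vs. malignant

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyTissueNet()
patch = torch.randn(1, 3, 64, 64)   # one random 64x64 RGB "patch"
logits = model(patch)
print("Class probabilities:", torch.softmax(logits, dim=1))
```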
However, establishing trust that AI will not make a mistake is a serious challenge in healthcare, where patient lives are literally on the line, particularly when the technology is used for clinical decision support.
“In some cases, users of AI systems will need to justify why an AI system produced its recommendation,” testified Dario Gil, vice president of AI and quantum computing at IBM. “For example, doctors and clinicians using AI systems to support medical decision-making may be required to provide specific explanations for a diagnosis or course of treatment, both for regulatory and liability reasons. Thus, in these cases, the system will need to provide the reasoning and motivations behind the recommendation, in line with existing regulatory requirements specific to that industry.”
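One common way to meet the kind of explanation requirement Gil describes is to use an inherently interpretable model, whose per-feature contributions can be reported alongside each recommendation. The sketch below is a generic illustration with invented features, not IBM’s approach: in a logistic regression, each feature’s weight times its value is its contribution to the score.

```python
# Generic sketch of an explainable recommendation: with a linear model,
# each feature's weight * value is its contribution to the score, which
# can be reported alongside the prediction. Features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["white_cell_count", "marker_A", "marker_B"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)   # toy label rule

model = LogisticRegression().fit(X, y)

patient = X[0]
contributions = model.coef_[0] * patient
print("Recommendation score:", model.predict_proba([patient])[0, 1])
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```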
Nonetheless, Castro argued that there are many misconceptions about AI, particularly about its potential to harm patients and compromise patient safety.
“Apple’s Siri virtual assistant is capable of interpreting voice commands, but the algorithms that power Siri cannot drive a car, predict weather patterns or analyze medical records,” testified Castro. “While other algorithms exist that can accomplish those tasks, they too are narrowly constrained—the AI used for an autonomous vehicle will not be able to predict a hurricane’s trajectory or help doctors diagnose a patient with cancer.”
Cindy Bethel, associate professor of computer science and engineering at Mississippi State University, told the panel that when it comes to AI making life-critical decisions, humans must remain in the loop.
“There are many ethical hurdles that will need to be decided at some point as to who is responsible if an AI system makes an incorrect decision,” according to Bethel. “The current state often requires a human to be involved at some level of the final decision-making process unless it is low risk or well validated that the system will always make a ‘right’ decision.”
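In practice, the human-in-the-loop requirement Bethel describes is often implemented as a confidence gate: the system acts only on high-confidence outputs and escalates everything else to a clinician. The snippet below is a minimal sketch of that routing logic; the 0.95 threshold and the review step are illustrative assumptions.

```python
# Minimal sketch of human-in-the-loop gating: auto-apply only
# high-confidence recommendations, route everything else to a clinician.
# The 0.95 threshold and the review step are illustrative assumptions.
def route_recommendation(probability: float, threshold: float = 0.95) -> str:
    """Return who decides, based on the model's confidence."""
    if probability >= threshold:
        return "auto-apply"      # well-validated, low-risk path
    return "human-review"        # clinician makes the final call

for p in (0.99, 0.80, 0.50):
    print(f"confidence={p:.2f} -> {route_recommendation(p)}")
```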