Why AI needs a reality check
Artificial intelligence is overhyped; some experts believe that IA, or "information augmentation," is the proper first step.
Visionary Elon Musk fears it. Astrophysicist Stephen Hawking worried about it. Microsoft's Bill Gates embraces it. Science fiction writer Philip K. Dick asked whether androids could dream because of it. At HIMSS 2019, everyone talked about it.
So, what is artificial intelligence? Researchers Kaplan and Haenlein, as cited on Wikipedia, define AI as "a system's ability to correctly interpret external data, to learn from such data and to use those learnings to achieve specific goals and tasks through flexible adaptation."
Outside of computer scientists immersed in AI research, most experts equate AI with machine learning and treat natural language processing (NLP) as a tool used within AI research.
At HIMSS 2019, we assembled our Innovation Council members, representing physicians, CIOs, data scientists, public health experts and informaticists, to discuss and debate their views on AI and its promised impact on patient care. In preparation for that meeting, we shared several academic papers on AI from the Journal of the American Medical Association and the British Medical Journal, which attendees reviewed prior to the event.
Although our Council expressed slightly different views on AI, they agreed that AI in healthcare is an overhyped concept inappropriately attributed to programs that do not fit any reasonable definition of AI tools. They described many instances where operational clinical decision support tools touted as AI were, in reality, expert systems driven by algorithms built by human experts.
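The Council's distinction can be made concrete with a toy sketch, using entirely hypothetical data and thresholds: an expert system applies a rule a human author hard-coded, while a learning system derives its decision boundary from examples and adapts when the examples change.

```python
# Toy contrast between an expert-system rule and a learned parameter.
# All values here are hypothetical and for illustration only.

# Expert system: a clinician hard-codes the decision threshold.
def expert_rule(heart_rate):
    # Rule authored by a human expert; it never changes with new data.
    return "alert" if heart_rate > 100 else "ok"

# Learning system: the threshold is estimated from labeled examples.
def fit_threshold(examples):
    # examples: list of (heart_rate, had_adverse_event) pairs.
    # Midpoint between the highest benign and lowest adverse reading,
    # a toy stand-in for real model training on separable data.
    benign = [hr for hr, bad in examples if not bad]
    adverse = [hr for hr, bad in examples if bad]
    return (max(benign) + min(adverse)) / 2

data = [(72, False), (88, False), (118, True), (131, True)]
learned_cutoff = fit_threshold(data)  # adapts if the data changes

print(expert_rule(105))   # alert
print(learned_cutoff)     # 103.0
```

Both systems produce a recommendation, which is why they are easily conflated; only the second one fits the "learn from such data" clause of the definition above.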
Our Council also worried about “black box” AI. In this instance, clinical or operational decision support tools touted as AI solutions deliver results built from opaque processes hidden from users. Without transparency into the processes, organizations using these tools are unable to evaluate the quality and reliability of these “AI” systems. In addition, they cannot determine if they are based upon AI principles or more simplistic, static, rule-based algorithms.
Rather than seeking to use AI to deliver care, our Council believes that IA, or "information augmentation," is the proper first step in using emerging AI capabilities. Using IA-driven applications, administrators, clinical staff and other decision-makers can access critical information when it is needed, presented in a format that is easy to digest.
IA systems comb through the available data, identify the most important information within it, and then deliver that information in dynamic visualizations and dashboards that help the user see and understand its message. With continued use, IA systems learn what matters to users, providing them with relevant, actionable insights.
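The "learn what matters with continued use" behavior can be sketched minimally, again with hypothetical metric names: surface only the top-scoring items, and raise the score of whatever the user actually opens.

```python
# Hypothetical sketch of an IA-style ranker that surfaces a few
# metrics and learns relevance from which ones the user views.
class MetricRanker:
    def __init__(self, metrics):
        # Start with equal relevance for every metric.
        self.scores = {m: 1.0 for m in metrics}

    def top(self, n=3):
        # Present only the highest-scoring metrics to the user.
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]

    def record_view(self, metric):
        # Continued use: viewed metrics gain relevance over time.
        self.scores[metric] += 1.0

ranker = MetricRanker(["readmissions", "census", "wait_time", "staffing"])
ranker.record_view("wait_time")
ranker.record_view("wait_time")
print(ranker.top(2))  # ['wait_time', 'readmissions']
```

A production system would weigh far richer signals than view counts, but the shape is the same: narrow the presentation first, and let usage reshape the ranking.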
Informaticists have much work to do to understand how AI can be applied in care delivery. Our Council believes that AI should initially be used to narrow the data presented to clinicians to only the most important information, rather than take over the task of directing clinical care.