HL7 brings standardization prowess to aid wider use of trustworthy AI
Standards organization creates an artificial intelligence office to use its industry connections to build ‘the rails on which AI will run.’

HL7 is ramping up efforts to bring its standardization prowess to bear on enabling wider, more efficient and safer use of artificial intelligence in healthcare.
While other AI initiatives are also underway, the standards development organization is working to construct an AI-ready interoperability "stack for healthcare" by building on existing standards, such as Fast Healthcare Interoperability Resources (FHIR), and by offering a vendor-neutral community with existing connections across multiple industry segments.
In mid-summer, the Ann Arbor, Mich.-based organization launched its own artificial intelligence office to “create foundational standards for safe, trustworthy AI in healthcare and convene the global community driving this transformation.”
While this marks HL7's formal move into the burgeoning use of AI in healthcare, it's hoping to carve out a space focused on harmonizing standards and ensuring that AI works well – with trust and transparency – across all organizations' platforms. Its initial work looks particularly at the contribution AI can make in improving payment integrity and reducing fraud, waste and abuse.
HL7’s sweet spot
HL7 has done foundational work in creating and promulgating healthcare data standards that have enabled exchange of healthcare information to support business office and clinical operations.
In the last 15 years, its work to develop FHIR as a data exchange standard has gained prominence in health exchange initiatives, facilitating the exchange of health information over the Internet. The organization has expanded its scope by launching various accelerators to create and fine-tune use cases for specific niches, such as value-based care, oncology, social determinants of health, and more.
Its AI initiative will build on HL7’s existing standards, including FHIR, SMART App Launch, CDS Hooks and Clinical Quality Language (CQL), says Daniel Vreeman, HL7’s first chief AI officer, a title he added this summer in addition to his role as HL7’s chief standards development officer.
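Of the standards Vreeman names, CDS Hooks is the one that already defines how externally computed guidance – including AI-generated guidance – surfaces inside a clinician's workflow: a decision-support service responds to a hook invocation with "cards." Below is a minimal sketch of such a response; the card fields come from the CDS Hooks specification, but the service name and advice text are invented for illustration.

```python
import json

# Hypothetical illustration of a CDS Hooks service response. The "cards"
# array and the "summary", "indicator", and "source.label" fields are
# defined by the CDS Hooks specification; the payment-integrity service
# and its advice text are made up for this sketch.
cds_response = {
    "cards": [
        {
            "summary": "Possible duplicate claim flagged by payment-integrity model",
            "indicator": "warning",  # allowed values: info | warning | critical
            "source": {"label": "Hypothetical AI payment-integrity service"},
            "detail": "This claim overlaps an already-adjudicated claim "
                      "for the same patient and service date.",
        }
    ]
}

print(json.dumps(cds_response, indent=2))
```

An AI-backed service would return its findings in exactly this envelope, which is why HL7 can position these existing standards as rails for AI rather than starting from scratch.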
FHIR can provide important insight into data, Vreeman says, because of its enhanced provenance capabilities and the AI/ML data lifecycle standard. The organization will launch a new project on AI Transparency on FHIR “to provide guidance on representing AI inferences in FHIR data structures and patterns for representing FHIR-defined operations that use AI to execute them.”
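The AI Transparency on FHIR guidance has not yet been published, but FHIR's existing Provenance resource already supports the general pattern of recording who – or what – produced a piece of data. The sketch below is a hypothetical illustration only: the resource IDs, model name and timestamp are invented, while the resource structure and the agent-type codes come from FHIR R4 and its provenance-participant-type code system.

```python
import json

# Hypothetical sketch: a FHIR R4 Provenance resource recording that an
# Observation was generated by an AI model and verified by a clinician.
# IDs, names, and the timestamp are invented for illustration.
provenance = {
    "resourceType": "Provenance",
    "target": [{"reference": "Observation/risk-score-123"}],  # the AI-generated result
    "recorded": "2025-08-01T12:00:00Z",
    "agent": [
        {
            # The AI system that computed the inference
            "type": {"coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                "code": "assembler"}]},
            "who": {"reference": "Device/ai-risk-model-v2",
                    "display": "Hypothetical risk-prediction model v2"},
        },
        {
            # The human in the loop who verified the output
            "type": {"coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/provenance-participant-type",
                "code": "verifier"}]},
            "who": {"reference": "Practitioner/example-clinician"},
        },
    ],
}

print(json.dumps(provenance, indent=2))
```

Attaching provenance like this to every AI-derived resource is one concrete way the "enhanced provenance capabilities" Vreeman cites could make AI inferences auditable downstream.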
Vreeman also says the new office has launched a cross-paradigm data quality framework "that has applications in many use cases, but is particularly relevant to AI model development." And it's beginning exploratory work in conversational interoperability involving AI agent-mediated and data-mediated workflows.
The new office is also working as a “trusted global convener for standards-powered AI and a contributor to existing health AI initiatives,” he adds. HL7 also is using AI to improve the organization’s internal operations and it’s building out an AI learning series for members and partners.
Long-term goals and immediate efforts
HL7 identifies four strategic initiatives for the AI office.
Standards. Building the AI-ready interoperability stack for safe, explainable AI with provenance capabilities.
Global leadership and partnerships. Convening the AI-health community to align standards, shape policy and accelerate responsible innovation.
The AI Innovation Lab. Incubating AI solutions to enhance member experiences, accelerate standards development and pioneer new ways of working.
Community excellence. Empowering implementers with tools and best practices for responsible standards-powered AI development.
Earlier this year, the organization released a report detailing the impact that AI could have on improving payment integrity and concurrently reducing fraud, waste and abuse.
The report, entitled "Reducing Fraud and Improving Payment Integrity in Healthcare Through the Use of AI," compiles insights from payers, providers and technology experts "to define AI opportunities and emerging solutions, current challenges, implementation strategies, and standards needed to increase transparency and trust."
Many of the report’s findings presage the reasons underlying the creation of the new AI office. The recommendations include developing standards with transparency requirements and bias mitigation protocols; enabling standards that ensure trust and verification of AI-generated results; enabling frameworks so humans remain in the loop to validate results; and implementing pilot programs that focus on provider-payer collaboration.
HL7 is calling on healthcare stakeholders “to engage in cross-sector working groups and participate in pilots that demonstrate how AI applications, powered by interoperable data standards, can deliver faster, more accurate payment decisions.”
How is it different?
There's no shortage of efforts to inject reliability, trust, ethics and oversight into the use of AI in healthcare. Increasingly, the various initiatives are narrowing their goals and working collaboratively.
Vreeman sees a distinct difference for HL7’s initiative, which he describes as standards-based and vendor-neutral – historically a strength of HL7 within the industry.
“Rather than building AI tools or models directly, our focus is on creating the interoperability infrastructure that enables trustworthy, scalable AI across the healthcare ecosystem,” he contends. “And HL7 provides neutral ground where everyone … can collaborate on standards that benefit the entire ecosystem. Finally, we are focused on enabling techniques for trust and transparency at the infrastructure level.”
There's plenty of work to be done, he adds. "Other groups are creating guidance and best practices for healthcare professionals and organizations using AI," he says. "HL7 is helping create open specifications that applications and IT systems can use to fulfill those best practices."
HL7’s effort is thus focusing on the infrastructure layer that can enable interoperability, transparency and scalability. “We’re building the rails on which the AI trains will run,” Vreeman concludes.
Fred Bazzoli is the Editor in Chief of Health Data Management.