Building trust in healthcare AI: Key insights from HIMSS25 on deployment, responsibility, and innovation
Trust is the foundation of AI in healthcare. At HIMSS25, healthcare leaders described how they are building that trust through responsible deployment, transparency, and human-in-the-loop oversight.

At HIMSS25, the panel discussion “Navigating the Subtleties of Generative AI in Healthcare: Deployment, Trust, and Responsible Innovation” brought together some of the most influential voices in healthcare AI. Moderated by Mitchell Josephson, CEO of Health Data Management, the session featured Dr. Clardy of Google, Rachel Wilkes of MEDITECH, and Jackie Rice, CIO and Vice President of Frederick Health.
The conversation focused on how healthcare institutions, EHR providers, and AI developers are deploying generative AI, the role trust plays in adoption, and how AI governance and implementation must evolve to keep pace with rapid advances in large language models (LLMs).
AI’s role in healthcare: The reality vs. the hype
The panel kicked off by acknowledging the explosive growth of AI in healthcare and the overwhelming speed at which new models are emerging. Dr. Clardy of Google set the stage, noting that the past 12 to 18 months have seen a dramatic decrease in the expertise required to deploy AI tools.
“We’re seeing a rapid democratization of AI in healthcare,” Clardy explained. “The barriers to adoption have dropped significantly—meaning more institutions, even those without deep technical expertise, can start implementing these tools.”
But as AI becomes more accessible, questions of trust, governance, and real-world application become even more critical. While Google is pioneering foundational AI technologies, MEDITECH and Frederick Health are working to translate that innovation into real-world, clinician-friendly workflows.
Rachel Wilkes of MEDITECH emphasized the role of EHR providers as intermediaries, helping to bridge the gap between AI developers and healthcare providers.
“Our role is to bring cutting-edge AI into clinician workflows in a way that’s seamless, efficient, and safe,” Wilkes explained. “That means thinking about how providers interact with AI, what guardrails need to be in place, and how we maintain transparency in AI-driven insights.”
From AI experimentation to scalable solutions
Frederick Health, an independent community hospital system in Maryland, has been an early adopter of AI. Jackie Rice, the system’s CIO and Vice President, detailed how the organization has approached AI integration in a deliberate, phased manner.
“We can’t afford to sit back and wait,” Rice stated. “But we also can’t implement AI blindly. We have to ensure our providers trust the tools we introduce.”
Frederick Health has taken a measured, phased approach to AI deployment.
“We aren’t just flipping a switch and saying, ‘Here’s AI—go use it,’” Rice emphasized. “We are starting small, testing in real workflows, and refining based on clinician feedback.”
The AI trust equation: How transparency and usability influence adoption
A key theme of the discussion was how trust influences AI adoption. For providers to embrace AI-driven decision support, they need transparency into how these tools generate insights.
Dr. Clardy pointed out that one of the biggest challenges in AI-driven summarization is that it can leave out critical details clinicians need to make informed decisions.
“Summarizing 1,000 pages of a patient’s history into one paragraph is great—but only if that summary is accurate, unbiased, and properly cited,” he said. “If we can’t show the source of the information, how can a provider trust it?”
Google has tackled this by ensuring that AI-generated summaries cite their sources, clearly indicating within the EHR environment where each piece of information comes from.
Balancing innovation with caution: The speed of AI vs. clinical governance
One of the panel’s most pressing topics was how healthcare institutions can keep up with the staggering pace of AI development.
“We’ve never seen innovation move this fast,” Josephson remarked. “AI models are improving almost monthly, and healthcare providers need to figure out how to balance adoption with careful governance.”
Dr. Clardy echoed this concern, explaining that Google must balance innovation with responsibility, making it clear which AI tools are ready for clinical deployment and which are still in the experimental phase.
“There’s a big difference between what’s ‘cutting-edge’ and what’s ‘clinic-ready,’” Clardy said. “We have to be honest about what AI can do today—and what it shouldn’t be doing yet.”
Human in the loop: AI as an assistant, not a replacement
Throughout the discussion, all three panelists stressed that AI in healthcare must be used to augment human decision-making—not replace it.
Rachel Wilkes outlined MEDITECH’s feedback-driven AI implementation strategy, which ensures that clinicians remain in control of AI-generated outputs.
“We don’t want AI to make decisions—we want it to provide insights that make clinician decision-making more efficient and accurate,” Wilkes said. “We’ve built our AI systems with clear disclaimers, human oversight, and feedback loops to refine the technology in real time.”
This human-in-the-loop approach has been particularly important for AI-driven clinical documentation tools. Wilkes described how MEDITECH has developed a feedback mechanism where clinicians can rate AI-generated summaries, suggest edits, and report inaccuracies.
“We’re actively improving AI based on real-world clinical use,” she noted. “This isn’t a ‘set it and forget it’ system—it evolves based on what clinicians need.”
Final takeaways: The three principles of AI trust in healthcare
As the discussion wrapped up, the panel outlined three core principles for building trust in healthcare AI:
1. Manage Expectations – AI isn’t perfect, and it shouldn’t be expected to solve everything overnight. Providers need to understand what AI can and can’t do before adopting it.
2. Accept Feedback and Iterate – AI is constantly evolving, and its implementation should be too. Institutions need clear feedback loops to continuously refine AI tools based on real-world clinician input.
3. Trust but Verify – No AI-generated insight should be taken at face value. Clinicians should be empowered to review, edit, and validate AI-driven outputs before acting on them.
Looking ahead: Responsible AI deployment in healthcare
The overarching message from the panel was clear: AI will play a transformative role in healthcare, but its success hinges on responsible deployment, transparency, and trust.
“AI isn’t a magic wand—it’s a tool,” Wilkes said. “Its value depends entirely on how we implement it, govern it, and refine it based on real clinical needs.”
For healthcare leaders, the challenge moving forward will be striking the right balance between innovation and caution, ensuring that AI improves patient care while maintaining the highest standards of accuracy, privacy, and trust.
With organizations like Google, MEDITECH, and Frederick Health leading the charge, the future of AI in healthcare is promising—but only if done responsibly.
Katrina Fryar, MBA, FACHDM, is the Vice President and COO of Health Sciences South Carolina.