Dave Wessinger, Cofounder & CEO of PointClickCare, a leading healthcare technology platform enabling meaningful collaboration.
As in every industry, AI is making waves in healthcare, promising more efficient workflows, better patient outcomes and improved decision-making. But I think AI can only be effective if clinicians trust it.
In recent months, AI in healthcare has come under heightened regulatory scrutiny, with many policymakers and industry leaders calling for clearer guidelines to ensure transparency, security and compliance. While these policy conversations are critical, they don’t answer the most important question for healthcare organizations: What does responsible AI adoption look like in practice?
The reality is that AI’s success or failure in healthcare will be determined within the walls of hospitals, health systems and post-acute care facilities. Leaders must ensure that the technology they’re adopting is transparent, trustworthy and aligned with clinical decision-making. This also requires a strong foundation of governance—not as an afterthought, but as a framework for responsible implementation.
I’ve always been driven by two things: a passion for technology and a deep desire to improve care. After 30 years working at the intersection of healthcare and innovation, I’ve seen both the potential and real challenges of bringing tech into care settings. For AI to truly make a difference, I think it has to be responsibly governed from day one.
So, what does it take for AI to truly drive impact in healthcare? Here are the key priorities leaders should focus on.
1. AI can’t be a black box—it has to be transparent.
At a recent healthcare leadership summit, one of the biggest concerns I heard was this: “AI sounds great, but how do we actually use it?” The answer? AI has to make clinicians’ jobs easier, not harder. Clinicians need to understand why AI is surfacing a recommendation and how it arrived at that decision.
Take predictive analytics for hospital readmissions as an example. My company has found that the most effective models not only estimate a patient’s risk but also make clear how they arrived at that estimate. Without this level of clarity, AI can feel like a black box, and that’s a nonstarter in healthcare.
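To make that concrete, here is a minimal sketch of what transparency can look like in practice: a simple logistic-style risk score that reports each input’s contribution alongside the prediction, so a clinician can see why a patient was flagged. The feature names, weights and patient values are hypothetical, and this illustrates the general idea rather than any specific vendor’s model.

```python
# Illustrative only: a toy readmission-risk score that explains itself by
# listing each feature's contribution to the final prediction.
import math

# Hypothetical weights, assumed to have been learned offline.
WEIGHTS = {
    "prior_admissions_12mo": 0.45,
    "num_active_medications": 0.08,
    "lives_alone": 0.60,
    "recent_weight_loss": 0.70,
}
INTERCEPT = -3.2

def readmission_risk(patient: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return (risk probability, per-feature contributions) for one patient."""
    contributions = [(name, WEIGHTS[name] * patient[name]) for name in WEIGHTS]
    score = INTERCEPT + sum(value for _, value in contributions)
    probability = 1 / (1 + math.exp(-score))  # logistic link
    # List the largest drivers of the score first, so the "why" is visible.
    contributions.sort(key=lambda item: abs(item[1]), reverse=True)
    return probability, contributions

patient = {
    "prior_admissions_12mo": 2,
    "num_active_medications": 9,
    "lives_alone": 1,
    "recent_weight_loss": 1,
}
risk, drivers = readmission_risk(patient)
print(f"Estimated readmission risk: {risk:.0%}")
for feature, contribution in drivers:
    print(f"  {feature}: {contribution:+.2f}")
```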
2. Fix the ‘intelligence on paper’ problem.
One of the biggest barriers to AI adoption isn’t the technology itself—it’s the fragmented data on which it relies. In many hospitals and post-acute care facilities, patient data is still trapped in paper records or siloed across disconnected systems.
A simple reality I often come back to in discussions with healthcare leaders is: You can’t put intelligence on paper. But the problem isn’t just about digitization. Even when data is digital, if it’s inconsistent, incomplete or not shared across care settings, AI models can’t generate meaningful insights.
And it’s not just structured, standardized data that’s critical for AI. Unstructured data, such as nurse notes or practitioner narratives, captures nuances that are essential for accuracy but may be missed in structured formats. Combined, both data types can lead to more accurate insights, as I’ve seen in my company’s own models. Without this level of integration, AI can look more like guesswork than intelligence.
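As a rough illustration of that integration step, the sketch below merges structured chart fields with flags pulled from a free-text nursing note into a single record a model could consume. The keyword-based extractor, field names and note text are hypothetical; production systems typically rely on purpose-built clinical NLP, but the principle of combining both data types is the same.

```python
# Illustrative only: turn a free-text note into simple flags and merge them
# with structured chart data so both feed the same downstream model.
NOTE_FLAGS = {
    "refused_meals": ["refused lunch", "refused dinner", "poor appetite"],
    "unsteady_gait": ["unsteady", "nearly fell", "requires assistance walking"],
}

def features_from_note(note: str) -> dict:
    """Derive binary flags from a nursing narrative (hypothetical keywords)."""
    text = note.lower()
    return {flag: int(any(phrase in text for phrase in phrases))
            for flag, phrases in NOTE_FLAGS.items()}

def merged_record(structured: dict, note: str) -> dict:
    """Combine structured fields and note-derived flags into one record."""
    return {**structured, **features_from_note(note)}

structured = {"age": 82, "prior_admissions_12mo": 2, "num_active_medications": 9}
note = "Resident refused dinner again tonight and was unsteady walking back to bed."
print(merged_record(structured, note))
# {'age': 82, 'prior_admissions_12mo': 2, 'num_active_medications': 9,
#  'refused_meals': 1, 'unsteady_gait': 1}
```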
3. Align AI strategy with business and clinical goals.
AI adoption in healthcare often fails because it’s implemented in silos. Piloting AI in isolation, without a clear connection to patient care and financial objectives, can lead to great ideas that never scale.
When AI is integrated effectively, it can help reduce preventable hospitalizations through early risk detection, improve care coordination by facilitating real-time data sharing and enhance operational efficiency by automating time-consuming administrative tasks.
To get AI adoption right, I suggest organizations tie it directly to financial and clinical outcomes—this mindset can make all the difference.
4. Build AI governance into the process from day one.
Governance is what ensures AI is used responsibly, consistently and in a way that aligns with regulatory and ethical standards. But governance shouldn’t be a roadblock—it should be an enabler of trust and adoption.
A strong AI governance framework should include:
• Transparency requirements to ensure AI-generated insights are clear and actionable (e.g., the ONC HTI-1 rule)
• Regular audits and performance monitoring to prevent bias and unintended consequences (a simple illustration follows this list)
• Alignment with industry best practices and regulatory requirements (e.g., HIPAA and evolving federal and state oversight) for data security, compliance and interoperability
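To show what the audit item above might look like in practice, here is a minimal, hypothetical monitoring check: it compares a model’s flag rate across patient subgroups over a reporting period and marks the model for human review when the gap exceeds a threshold. The group names, sample data and threshold are illustrative; a real governance program would also track accuracy, calibration and drift over time.

```python
# Illustrative only: a recurring audit that compares flag rates across groups
# and escalates to human review when the disparity exceeds a set threshold.
from collections import defaultdict

REVIEW_THRESHOLD = 0.10  # flag-rate gap that triggers review (assumed value)

def audit_flag_rates(predictions: list[dict]) -> dict:
    """predictions: [{'group': ..., 'flagged': bool}, ...] from the last period."""
    counts, flagged = defaultdict(int), defaultdict(int)
    for p in predictions:
        counts[p["group"]] += 1
        flagged[p["group"]] += int(p["flagged"])
    rates = {group: flagged[group] / counts[group] for group in counts}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 2), "needs_review": gap > REVIEW_THRESHOLD}

sample = (
    [{"group": "facility_A", "flagged": True}] * 18
    + [{"group": "facility_A", "flagged": False}] * 82
    + [{"group": "facility_B", "flagged": True}] * 31
    + [{"group": "facility_B", "flagged": False}] * 69
)
print(audit_flag_rates(sample))
# A 0.13 gap between facilities exceeds the threshold, so the model is
# marked for human review.
```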
And organizations don’t have to start from scratch. Initiatives like the Coalition for Health AI (CHAI) and the Health AI Partnership (HAIP) provide guidance on responsible AI adoption, offering frameworks and best practices to help healthcare leaders establish governance structures that align with regulatory and ethical standards. (Disclosure: My company joined CHAI.)
The future of AI in healthcare depends on responsible implementation.
AI in healthcare is advancing rapidly, but effective implementation requires trust. I think the organizations that prioritize transparency, structured data, strategic alignment and governance will be able to harness AI’s full potential while building clinician confidence. The key lies in laying a solid foundation and executing properly from the outset.