The Health Management Academy

How Governance Can Make or Break AI’s Promise in Healthcare


In October, The Academy, in partnership with Nuance, a Microsoft company, hosted the inaugural AI Collaborative – a new program for clinical and operational executive leaders whose organizations invest in AI as a strategic initiative.

Informed by discussions from the AI Collaborative, we review the key questions for health systems looking to establish an AI governance model.


AI-enabled technology solutions hold the promise of transforming health care, from operations to clinical care delivery, including applications that could help alleviate workforce shortages and burnout and deliver more accurate and timely clinical diagnoses and treatments. The possibilities are endless.

But a lack of strong governance is making it difficult for health systems to advance a clear AI strategy and move from subscale point solutions to a robust AI infrastructure that drives value system-wide. AI governance shares foundational principles with other technology- and data-related functions, but its unique role in health care warrants special considerations.

Informed by discussions at The Academy’s recent AI Collaborative in partnership with Microsoft, here are three starter questions for Leading Health Systems to answer when establishing an AI governance model.

1. What is the right governance structure?

Many health system technology enterprises seek to balance centralized and decentralized decision-making. Centralization of key functions helps to optimize AI expertise and efficiency, but individual service lines and operational teams will expect some control over AI model development and deployment to ensure solutions meet their unique needs.

Advanced health systems start with centralized oversight to drive decisions related to AI platform technologies, standards for assessing potential AI algorithms, and model validation. They also allocate team member capacity (e.g., data scientists) for business unit-embedded support, including guidance on (but not necessarily ownership of) model implementation.

Centralized oversight also helps promote data capture and transparency so systems can audit the universe of solutions "in the wild," which has been a challenge for many health systems.

2. How do we foster innovation AND implementation?

While governance is critical, Leading Health Systems are wary of establishing too much bureaucracy, which can inhibit innovation and rapid scaling of promising AI solutions. But an overly accommodating approach to AI can lead to an unmanageable number of interesting but not necessarily useful or executable models. Some health systems report a ratio of AI algorithms created to those implemented as high as 100:1.

The key is having a process to identify the successful models and embed them across the system. Two strategies can help improve the yield of promising models.

First, hold all AI innovators accountable for defining clear, succinct parameters for their models. Advanced analytics leaders characterize these as "nutrition labels," which help governing bodies sort signal from noise. Key information captured should include the model description, data set requirements, algorithm methodology, use cases, bias mitigation approach, and ROI model.
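To make the "nutrition label" idea concrete, here is a minimal sketch of how such a submission could be captured as a structured record. The field names, the `ModelNutritionLabel` class, and the completeness check are illustrative assumptions based on the list above, not a standard prescribed by the Collaborative.

```python
from dataclasses import dataclass, field, fields


@dataclass
class ModelNutritionLabel:
    """Illustrative 'nutrition label' for a proposed AI model (hypothetical fields)."""
    model_description: str                                # what the model does, in plain language
    data_set_requirements: str                            # data sources, volume, refresh cadence
    algorithm_methodology: str                            # e.g., gradient-boosted trees, rules-based
    use_cases: list[str] = field(default_factory=list)    # intended clinical or operational uses
    bias_mitigation: str = ""                             # how bias was assessed and addressed
    roi_model: str = ""                                   # expected costs, savings, or quality impact

    def missing_fields(self) -> list[str]:
        """Return any fields left blank, so a governing body can see whether a submission is complete."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


# Hypothetical example: a governance committee screening one submission
label = ModelNutritionLabel(
    model_description="Predicts next-day ED boarding risk",
    data_set_requirements="12 months of ADT and census data, refreshed nightly",
    algorithm_methodology="Gradient-boosted trees",
    use_cases=["Capacity planning", "Staffing"],
)
print(label.missing_fields())  # ['bias_mitigation', 'roi_model']
```

Even a lightweight structure like this gives a governing body a consistent basis for comparing proposals and flagging incomplete submissions before they consume review time.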

Another strategy is to deploy pilots in strategic locations across the system, with clearly identified embedded champions, and to run them adjacent to the original process, which helps highlight the benefits of adoption.

3. How are we accounting for the ethical and legal implications of AI?

AI solutions introduce a variety of complex issues related to patient privacy, data ownership, health equity, bias, and data drift (which happens when the properties of the input data change).
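As an illustration of what monitoring for data drift can look like in practice, the sketch below compares the distribution of a model input in recent production data against the data the model was trained on, using a two-sample Kolmogorov-Smirnov test from SciPy. The threshold, the `drifted` helper, and the patient-age example are hypothetical; they show one common approach, not a recommended tool or policy.

```python
import numpy as np
from scipy.stats import ks_2samp


def drifted(train_values: np.ndarray, recent_values: np.ndarray,
            p_threshold: float = 0.01) -> bool:
    """Flag drift when recent input values no longer resemble the training data.

    A small p-value from the two-sample Kolmogorov-Smirnov test means the two
    samples are unlikely to come from the same distribution.
    """
    result = ks_2samp(train_values, recent_values)
    return result.pvalue < p_threshold


# Hypothetical example: patient ages at training time vs. last month in production
rng = np.random.default_rng(0)
train_ages = rng.normal(55, 12, size=5000)    # distribution when the model was built
recent_ages = rng.normal(62, 12, size=1000)   # older population now showing up in production

if drifted(train_ages, recent_ages):
    print("Input distribution has shifted -- trigger model review.")
```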

Ambient listening technology to support clinical documentation is a prominent example. Some health systems have decided that it is necessary, and ultimately beneficial, to share data with vendors to improve the technology after receiving patient consent, while others remain unsure. Physicians want to protect patient confidentiality and ensure patients feel comfortable sharing information, but they also need relief from burdensome documentation.

Regardless of where health systems are on their AI journey, building a strong governance model for AI is top of mind for IT leaders. An early analysis of Academy data on Leading Health System priorities for 2023 indicates that optimizing AI remains a substantial area of opportunity for most health systems and one where we'll continue to see growth and potential for partnership.