This insight was featured in the February 25th, 2026 edition of the AI Catalyst Pulse.
Health systems are deploying AI faster than they're deciding who's accountable for it. A LinkedIn scan of governance roles across U.S. health systems shows new titles appearing - Chief AI Officer, Director of AI Governance, Responsible AI Lead - but the authority attached to those roles varies wildly. More importantly, many systems are operating AI in production without resolving the fundamental question: who owns this when something goes wrong?
2025 forced a reckoning on AI governance that health systems must answer in 2026
Three forces converged in 2025 that make a governance focus imperative for health systems adopting AI.
State regulation exploded: 47 states introduced over 250 AI bills in 2025, with 33 enacted. California's AI Transparency Act and Texas's TRAIGA (the Texas Responsible Artificial Intelligence Governance Act) went live January 1, 2026.
Federal enforcement shifted: The Office of the National Coordinator for Health Information Technology (ONC) moved algorithm transparency into enforcement via the HTI-1 Final Rule (Health Data, Technology, and Interoperability), and the FDA shifted to continuous surveillance.
GenAI hit scale: AI tools for clinical documentation are moving "from isolated pilot programs to full enterprise-scale deployment."
Why isn't an AI governance committee enough?
Increasingly, committee-based governance is breaking down, and it's showing in two ways. First, pace. AI adoption is accelerating: new models go live, vendors push updates, and GenAI tools embed into workflows continuously, not quarterly. When a clinical team wants to deploy an ambient scribe next week and governance doesn't meet until month-end, the tool either launches unreviewed or stalls. Either way, governance fails.
Second, monitoring. Watching for model drift, performance degradation, and bias signals requires dedicated, continuous oversight. Monthly committee meetings can't catch an imaging algorithm performing differently across patient populations or a GenAI scribe hallucinating clinical details. THMA's AI Maturity Matrix shows organizations with committee structures that have governance on paper but can't keep up in practice.
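To make "continuous oversight" concrete, here is a minimal sketch of the kind of daily check a dedicated operator might automate: a population stability index (PSI) against the model's validation-time score distribution, plus a crude subgroup-rate comparison as a bias signal. The thresholds, subgroup labels, and function names here are illustrative assumptions, not any vendor's or framework's actual tooling.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between baseline and current score distributions.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid log(0) in sparse bins
    b_pct = np.clip(b_pct, 1e-6, None)
    c_pct = np.clip(c_pct, 1e-6, None)
    return float(np.sum((c_pct - b_pct) * np.log(c_pct / b_pct)))

def subgroup_rates(scores, groups, threshold=0.5):
    """Positive-prediction rate per subgroup - a crude bias signal."""
    return {g: float(np.mean(scores[groups == g] >= threshold))
            for g in np.unique(groups)}

def daily_check(baseline_scores, todays_scores, todays_groups,
                psi_limit=0.25, rate_gap_limit=0.15):
    """Return alerts that feed the governance operator's escalation path."""
    alerts = []
    psi = population_stability_index(baseline_scores, todays_scores)
    if psi > psi_limit:
        alerts.append(f"Score drift: PSI={psi:.3f} exceeds {psi_limit}")
    rates = subgroup_rates(todays_scores, todays_groups)
    if max(rates.values()) - min(rates.values()) > rate_gap_limit:
        alerts.append(f"Subgroup rate gap: {rates}")
    return alerts

# Illustrative run on synthetic scores; a real check would pull from the
# production inference log and the model's validation baseline.
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 5000)        # validation-time scores
today = rng.beta(3, 4, 800)            # shifted production scores
groups = rng.choice(["A", "B"], 800)   # e.g., site or population
for alert in daily_check(baseline, today, groups):
    print(alert)
```

The point isn't the specific statistics; it's that checks like these run on a schedule and page a named owner, which no monthly committee cadence can replicate.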
AI governance investments beyond the committee: We're seeing a fork in the road
Some organizations are outsourcing governance. They're relying on vendors like Epic or specialized startups like Onboard AI to handle their monitoring and oversight - a bet that external partners will catch problems before they cause harm.
The other path is building in-house governance roles dedicated full-time to AI oversight. These aren't committee members meeting quarterly, but full-time operators watching systems round-the-clock. We're focusing on that second path: what it looks like when health systems build dedicated AI governance capacity. The models look different, but the question is the same: can your governance structure catch up to what you've already deployed?
Attributes of in-house governance roles
Based on our LinkedIn scan of 15 profiles and job postings across health systems and regulated industries, we identified an emerging category of role: the dedicated AI governance operator. Titles vary, but these aren't committee members or part-time overseers; they're full-time jobs with distinct skillsets and responsibilities. Here's what we are seeing.
Examples of titles appearing: Chief AI Officer, Director of AI Governance, Responsible AI Lead.
The work converges around four areas:
Own enterprise AI governance - design intake, run approvals, enforce registration.
Set expectations for documentation, validation, and monitoring.
Translate technical AI issues into business language and facilitate hard conversations.
Build repeatable processes that scale, not one-off reviews.
The people doing it came from risk, compliance, privacy, quality, safety, or analytics - regulated environments where accountability already mattered. They bring strong judgment about tradeoffs, comfort with ambiguity, and a bias toward building systems. They know when to slow things down.
Where they sit: Rarely inside AI or engineering teams. Most report into Data/Analytics, Risk/Compliance, or Legal/Privacy, operating in a matrix with dotted lines to clinical and product leadership.
The case for dedicated roles: The argument for these positions centers on continuous oversight. Dedicated operators provide monitoring instead of quarterly reviews, enforce the "no go-live without registration" rule because someone owns the registry, and build repeatable processes that scale with portfolio growth. When incidents occur, there's a single point of accountability. And they translate between technical, clinical, and regulatory languages full-time - a coordination function that has never fit into anyone else's job description.
The tradeoffs are real. These roles add headcount and budget when margins are tight. They risk becoming governance theater if decision rights aren't clearly defined. Positioned as gatekeepers rather than enablers, they can slow innovation. And the skillset blend - clinical fluency plus risk judgment plus technical governance capability - is difficult to hire for, especially in competitive markets.
What the ‘governance operator’ owns: Synthesized from NIST, AMA guidance, CHAI's Applied Model Card, the Joint Commission, and THMA research, these five areas require dedicated ownership.
Maintain the AI registry of products and enforce the "no go-live without registration" rule (a minimal sketch of this gate follows the list).
Coordinate validation reviews and ensure local testing happens before deployment.
Monitor systems continuously for drift, bias, and performance degradation.
Manage escalation when incidents occur - know who to call, and when.
Own vendor due diligence and contract oversight for purchased AI.
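To illustrate the first item, here is a minimal sketch of how a "no go-live without registration" gate might be encoded in a deployment pipeline. The registry fields and names (AIRegistry, can_go_live) are hypothetical illustrations, not drawn from the NIST, CHAI, or Joint Commission artifacts cited above.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, Optional

@dataclass
class AIRegistryEntry:
    """One row in the enterprise AI registry (fields are illustrative)."""
    system_name: str
    vendor: str
    intended_use: str
    owner: str                      # single point of accountability
    validated_locally: bool = False
    approved_on: Optional[date] = None

class AIRegistry:
    def __init__(self):
        self._entries: Dict[str, AIRegistryEntry] = {}

    def register(self, entry: AIRegistryEntry) -> None:
        self._entries[entry.system_name] = entry

    def mark_validated(self, system_name: str) -> None:
        self._entries[system_name].validated_locally = True

    def approve(self, system_name: str) -> None:
        entry = self._entries[system_name]
        if not entry.validated_locally:
            raise ValueError(f"{system_name}: local validation incomplete")
        entry.approved_on = date.today()

    def can_go_live(self, system_name: str) -> bool:
        """The gate a deployment pipeline calls before launch."""
        entry = self._entries.get(system_name)
        return entry is not None and entry.approved_on is not None

registry = AIRegistry()
registry.register(AIRegistryEntry(
    system_name="ambient-scribe",
    vendor="ExampleVendor",
    intended_use="Draft clinical notes from visit audio",
    owner="Director of AI Governance",
))
assert not registry.can_go_live("ambient-scribe")  # registered != approved
registry.mark_validated("ambient-scribe")          # local testing done
registry.approve("ambient-scribe")
assert registry.can_go_live("ambient-scribe")
```

The design point is that can_go_live is called by the deployment pipeline itself, so the rule is enforced by a system someone owns rather than by meeting cadence.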
The choice ahead: AI governance is a full-time job. Committees can't govern technology that moves faster than they meet. The organizations getting this right didn't just pick a better org chart - they funded the governance operator layer. That's the 'Director of AI Governance' who maintains the registry, coordinates validation, monitors continuously, and manages escalation when things go wrong. Without that role, you have governance on paper. With it, you have someone accountable 24/7.
The question isn't whether to formalize governance. Regulators and boards already decided that. The question is whether you're building external dependence or internal capability. If you're choosing in-house, the commitment is clear: stand up the operator role, give them decision rights, and fund the function to scale with your portfolio. Your AI is growing faster than your governance calendar.
Questions to consider:
If an AI tool in your system failed tomorrow, who would detect it, contain it, and report to the board - and how long would that take?
Does your organization have someone watching AI systems full-time, or is governance something that happens in meetings?