Newsletter | the-strategist

The Readiness Gap: Why Health Systems Aren’t Prepared for Agentic AI (Yet)

Illustrated graphic showing a winding road labeled “The Road to Agentic AI,” with healthcare professionals reviewing a tablet on one side, road signs and construction barriers along the path, and a clinician standing thoughtfully at the end of the road, symbolizing the journey toward agentic AI in healthcare.

Despite the surge of interest and investment in advanced AI across health care, most systems are surprisingly unprepared to safely deploy agentic AI: autonomous, multi-step AI systems that can make decisions or take actions without constant human prompts. That’s the key insight from a recent report in NEJM AI, produced in partnership by THMA and Microsoft.

Health system executives, technology leaders, and boards increasingly see AI not just as a tool for automating documentation or improving predictions, but as a catalyst for fundamental workflow transformation. The promise of agentic AI is compelling, especially where these systems can coordinate complex tasks such as triaging patients, optimizing scheduling, or autonomously initiating clinical alerts. Yet the reality on the ground is that most organizations lack the organizational muscle to deploy agentic AI tools at scale.

Where health systems really stand

Only 4% of health systems have actively deployed agentic AI in workflows beyond the pilot phase. Hence the study’s high-level (and likely not terribly surprising) finding: most health systems are still in early experimentation mode with agentic AI. Adoption remains fragmented and pilots abound, but very few organizations have the enterprise-wide governance, infrastructure, or workforce readiness needed to move beyond isolated use cases.

This readiness gap isn’t primarily technical. Health systems are generally capable of integrating predictive models and AI-assisted workflows into point solutions. But moving to systems capable of autonomous action with measurable operational or clinical impact requires foundational capabilities few organizations have built: clear accountability frameworks, real-time performance monitoring, cross-functional governance, and robust ethical guardrails.

Key barriers to scale

1. Governance and risk oversight aren’t mature. Early signals like the creation of centralized AI or agent operations teams and new roles (e.g., agent managers) underscore that agentic AI governance requires a fundamentally different operating model, not incremental policy updates. Traditional technology governance relies on periodic review, but agentic AI demands continuous oversight, real-time monitoring, and clear escalation and accountability structures for AI-influenced decisions. Without this shift, health systems risk governance blind spots or unintentionally shifting liability to clinicians without adequate visibility or control.

2. Workforce capability is both critical and challenging. Executives view workforce upskilling as essential, yet it’s also their biggest implementation challenge. Leaders repeatedly flag the need for new roles, training programs, and embedded AI fluency across clinical, operational, and IT teams. When survey data show 1 in 5 organizations is discussing human-to-agent ratios, it suggests that people, not the algorithms themselves, are the limiting factor.

3. Data and technology infrastructure lags behind expectations. Legacy systems that can’t support unified, cloud-native data platforms or real-time inference pipelines make scaling agentic AI hard. Even where predictive AI shows early wins, systems struggle to operationalize the unified data governance and monitoring needed for autonomous workflows.

What the data shows for health system leaders
  1. Stop treating AI as a point solution. AI pilots won’t move the needle unless they’re embedded into enterprise strategy – with governance, infrastructure, and workforce planning as core components, not afterthoughts. Leaders need to shift their vision from technology adoption to organizational transformation.

  2. Autonomy without accountability will fail. Accountability cannot be an add-on. If AI can make or trigger clinical or operational decisions, someone must be able to override, audit, and interpret those decisions in real time. Organizations should start with governance frameworks modeled on high-reliability clinical processes.

  3. Hiring data scientists is necessary—and insufficient. Beyond data scientists, top AI deployments call for clinical informaticists, workflow engineers, and AI ethicists embedded in operational teams. This multi-disciplinary capacity is what moves systems from pilots to production with confidence.

  4. Prioritize safety and trust over hype. Agentic AI isn’t just about what’s possible. It’s about what’s responsible. Health systems that rush to autonomy without robust monitoring and clinical alignment risk undermining clinician trust and patient safety. The study emphasizes that readiness is as much ethical and cultural as it is technological.

  5. Temper hype by aligning agentic AI needs with strategic partners’ roadmaps: Nearly half (47%) of all surveyed health systems believe that their AI transformation will be heavily reliant on core strategic partnerships like Epic, Microsoft, or Google. Overall, only 14% report minimal to no reliance. This signals a growing realism: for most organizations, agentic AI will scale through existing platforms, not bespoke builds, making partner roadmap alignment a strategic necessity, not a technical detail.

Questions to consider

  • Which decisions or workflows in your organization could realistically shift from “AI-assisted” to “AI-initiated” in the next 12-24 months? And who is explicitly accountable when those systems act?

  • Where does your current AI governance model break down if an AI system takes autonomous action between formal review cycles?

  • Which AI use cases are you delaying not because of technical limits, but because of unresolved questions around liability, trust, or workforce impact? What would it take to resolve those barriers?

  • Have you asked your EMR vendor when the same agentic AI tools or point solutions you’re evaluating will be integrated into their platform? Is their answer sufficiently concrete and timely?