This insight was featured in the March 25th, 2026 edition of the AI Catalyst Pulse.
Health systems do not merely have a coding labor problem. They have an expert-capacity allocation problem.
That may sound like semantics, but it’s not.
The market still likes to talk about medical coding as if the central question is whether AI can replace coders, but that framing misses what is happening inside most revenue cycle organizations right now. Coding teams are already stretched. Work is getting more complex. Backlogs are not just annoying. They create denial risk, timely filing risk, rework, and revenue leakage.
The real question is not whether AI can code. It is whether AI can keep scarce expert talent from being swallowed by the wrong work.
Transforming your rev cycle shop from cost-center to expertise-center
Our recent AI Catalyst Executive Deep Dive on the AI-Driven Revenue Cycle summed it up well: growing volumes overwhelm coding, denials, and payment variance teams; backlog forces prioritization based on time instead of value; and cost-to-collect rises as rework becomes routine.
And once that dynamic starts, it compounds. The work that gets delayed is often the hardest, the most specialized, or the most financially sensitive, frequently the cases carrying the highest reimbursement values. Experienced coders spend their time clearing repetitive first-pass volume while higher-risk cases, denial-prone areas, and specialty backlogs wait their turn, and the net result is delayed payment.
That is a bad use of expert labor even before AI enters the picture.
So what is the best use of time for an AI-enabled medical coder?
One of the ways AI Catalyst approached this question was through its custom GPT for job redesign in the AI era. Instead of asking generically whether a role is “safe” from AI, the tool goes line by line through a job description, breaks the work into tasks, and asks a harder question: which parts are repetitive and rules-based, which parts are judgment-heavy, and what should this job look like if it were redesigned for the AI era?
Our team gathered multiple real job postings for medical coders from LinkedIn and fed them into the custom GPT for analysis. That analysis landed on something important: the role is not disappearing, but its premium value is moving.
In the old model, value came from manual chart review, code assignment, abstraction, edit checks, and individual expertise.
In the AI-assisted model, the coder increasingly reviews AI-summarized charts, validates AI-suggested codes, and governs automated logic. In short, AI handles repetitive and rules-based work; the coder verifies and governs.
And then, in the AI-transformed model, the coder becomes the person who handles complex cases, controls quality, manages exceptions, supports audit defense, and helps build the workflows that let the system scale.

That is the shift executives should be planning around now.
Because once AI takes on more first-pass work, the obvious question becomes: what do you want your best coders doing instead?
Calculating true rev cycle ROI from AI adoption means moving beyond the automation rate
A good example came from University of Vermont Health Network’s diagnostic radiology workflow. Casey Webb, System Director, Professional Billing at UVMHN, described how the organization used autonomous coding in that domain and reached a straight-to-bill rate of about 76%.
Those numbers are impressive enough to make a slide, but the more important point is what UVM did with the capacity it created. Webb explained that once coders no longer had to review 100% of those encounters, the team took the opportunity to move two FTEs to more valuable work. Those resources were redirected to coding groups with the highest backlog, including specialty and primary care areas where the operational risk of delay was greater.
That is the kind of detail executives should pay attention to. The most strategic value of coding AI may not be that it automates a chunk of radiology. It may be that it lets you redeploy scarce coding expertise into the places where backlog hurts most.
Webb made the principle explicit: “Using the ability of the coders to their highest skill set is something that I think is extremely important. If you can have technology as a tool, and allow your teams, whatever role they’re in, to function to their highest ability, you will see the ROI on that.”
AI’s most credible role in coding is prioritization and selective trust
This is where many conversations about coding AI get sloppy. The debate is often framed as a binary – either AI can code autonomously, or humans remain indispensable. Either it replaces the labor, or it does not matter. That is the wrong frame.
There is now a growing set of coding tasks that software can help with meaningfully: summarizing records, suggesting codes, populating abstracting fields, running edits, and flagging likely documentation gaps. There is also a clear set of tasks that still belong with experienced humans: complex-case interpretation, modifier nuance, compliance-sensitive escalation, denial defense, and judgment in the face of conflicting or incomplete documentation. This division of labor reframes AI as a “co-pilot” and a “second validating set of eyes” for coding decisions with reimbursement implications.
The win is not that AI suddenly becomes trustworthy everywhere. The win is that teams stop spending scarce expert attention on routine work that can be handled, or at least narrowed, by software first.
In other words, AI matters not because it removes the need for coders, but because it changes who should be reviewing what.
What health system leaders should do next
Stop treating coding AI as a labor-reduction story. It is a capacity-allocation story. Find where your best coders are still buried in repetitive first-pass work, then ask where that expertise would create more value if it were freed up.
Redesign the role on purpose. If coders are moving from producer to reviewer to automation overseer, then job descriptions, quality metrics, training pathways, and productivity expectations should change too. If they do not, your organization will install AI into a workflow still built for manual production.
Decide who owns the validation layer. As AI takes on more repetitive revenue cycle work, someone has to own the oversight: who reviews outputs, who escalates exceptions, who monitors drift, and who is accountable when the machine is wrong.
These steps point to a simple truth.
The future of coding is not a future without coders. It is a future where the best coders stop spending their days doing work that software can meaningfully assist with. And the health systems that move first on AI-enabled job redesign will not just automate more charts. They will make better use of the expertise they already cannot afford to waste.
Questions for consideration
If coding volume rose 10–15% tomorrow, where would backlog and revenue leakage show up first?
What portion of your strongest coding talent is still consumed by repetitive first-pass work?
If AI freed 1–2 FTEs worth of coding capacity, where would you redeploy that expertise first?
Are your coder job descriptions and productivity metrics still built for a manual-production world?
Who in your organization owns validation and governance as coding workflows become more AI-assisted?