
HIMSS26 Highlight: CMS embraces AI for Medicare navigation


On a panel at this year’s annual Healthcare Information and Management Systems Society conference, CMS Administrator Dr. Mehmet Oz revealed plans to introduce AI agents that could help Medicare beneficiaries find providers or select Medicare Advantage (MA) plans.

While details are scarce, Oz and acting DOGE administrator Amy Gleason suggested that such innovations might emerge from the Health Tech Ecosystem initiative, which has encouraged participating companies to launch dedicated AI chatbots designed to answer health queries.

CMS leaders also discussed using AI to identify fraud in Medicare and Medicaid, with algorithms trained on data from providers that the agency has previously taken enforcement actions against.

So What?

The prospect of AI agents navigating the healthcare system on Medicare beneficiaries’ behalf raises many of the same concerns that health systems are confronting with the rise of app-based commercial insurance navigator services like UHG’s Surest, Garner Health, and Transcarent. Here are some of our initial thoughts:

1. Seniors might not trust AI, but that might not matter if these services and recommendations are pushed on them.

While CMS leaders have acknowledged a trust gap, that skepticism might not matter if the tools are embedded directly into the channels that shape their care decisions. If health plans, caregiver platforms, and provider referral workflows increasingly present AI-generated guidance as the default option, seniors may end up following those recommendations passively rather than actively opting in. In that scenario, the real driver of adoption would be institutional distribution and choice architecture, not consumer enthusiasm.

Ultimately, this could create a new class of gatekeeper for Medicare dollars that extends not just to MA, but to traditional Medicare as well.

2. Unlike patients, AI navigator agents might not care so much about nonprofit health systems’ brand halo.

Many health systems’ patient retention strategies are built around reputations developed over the long term through positive experiences at critical care junctures, but AI agents could take a more dispassionate approach that emphasizes more objective metrics like cost and value-based quality milestones. Complicating matters further, their priorities and decision making might be relatively opaque to health systems, making it harder to know exactly what to target to continue winning patient volumes.

Our suspicion with commercial insurance equivalents like Surest is that cost is probably the most influential component of how they score providers behind the scenes. Ultimately, this could divert greater patient volumes to lower-cost settings like ASCs, retail clinics, and high-efficiency outpatient facilities. If health systems don’t own these assets themselves, they might find themselves increasingly cut out of the loop. It’s potentially a double whammy as site-neutral reforms continue to push the industry in this direction.

3. More robust fraud-detection tools could boost confidence in legitimate providers—unless innocent bystanders get caught in the crosshairs.

AI-based fraud detection could, in theory, be a net positive for legitimate providers by weeding out bad actors and reinforcing trust in the system. But the risk hinges on how these algorithms are trained and how broadly they're applied. CMS indicated the models would learn from data on providers that have already faced enforcement actions—but that training set may bake in historical biases or patterns that don't cleanly distinguish between actual fraud and, say, coding irregularities common among safety-net hospitals or smaller practices with fewer billing resources. If the algorithms cast too wide a net, providers could find themselves flagged, audited, or excluded from networks based on pattern-matching rather than genuine wrongdoing.

It's also worth noting that claims of widespread fraud have already been wielded as political justification for cutting Medicare and Medicaid spending, which means AI tools that surface more suspected fraud—rightly or wrongly—could end up providing ammunition for further reductions rather than simply cleaning up the system.

For health systems, the concern is less about being mistaken for a fraud ring and more about the downstream operational burden and reputational risk that even a false positive can create. An erroneous flag could trigger payment delays, hasty plan exclusions, or public reporting that's difficult to walk back, particularly if CMS or MA plans begin automating enforcement decisions on the back end. Providers would be wise to get ahead of this by investing in billing compliance infrastructure and data transparency now, rather than waiting to contest an algorithm's conclusions after the fact.