
What Health Systems Need to Know About AI Regulation


During AI Catalyst’s “AI Policy Essentials for Healthcare Executives” webinar, legal and policy leaders from Intermountain, Sutter, and Manatt broke down key developments in the fast-changing AI regulatory space.

A sweeping 10-year federal moratorium on state AI laws is under debate: Congress is considering a proposal that would block states from enforcing most AI regulations for a decade. States that opt to regulate AI anyway could lose federal broadband funding.

The backlash is bipartisan: Lawmakers on both sides of the aisle—including Republicans from Florida, Utah, Tennessee, and Missouri—are pushing back, calling the plan too extreme.

State laws in limbo: If passed, the moratorium would cast immediate doubt on existing state-level AI rules, leaving healthcare leaders unsure what compliance will look like in the near term.

Forty-five states have introduced healthcare AI bills (250+ bills total). Panelists identified several emerging requirements across states:

  • Mandatory AI disclosure in patient care: Over 70 bills require systems to inform patients when AI is used.

    Utah mandates disclosure of AI-generated communications, while a pending Colorado law requires details on AI capabilities and limits.

  • Human oversight of AI coverage decisions: Twenty states want physicians to review AI-driven coverage decisions. Arizona, Maryland, and Nebraska have passed laws ensuring doctors make the final call, especially for prior authorizations.

  • Stricter rules for mental health AI: Utah requires chatbots to identify as AI and bans the use of patient chats for ads. New York mandates suicide detection, and Illinois may prohibit therapists from using AI to read emotions.

  • No “AI made the mistake” defense for legal violations: Utah bars providers from blaming generative AI for violations to avoid liability in consumer protection cases.

  • Most healthcare AI labeled “high-risk”: Colorado’s law—now influencing others—treats nearly all healthcare AI as high-risk unless it performs narrow tasks, triggering requirements like bias testing and ongoing monitoring.

Despite uncertainty around AI policy—whether a federal moratorium passes or not—health systems have a chance to lead by adopting the governance approaches already emerging as best practice at the state level.

Key Action Steps:

1. Let state-level themes guide AI governance priorities.

  • State priorities hint at future policy and signal what matters to your patients and community.

  • Document what AI you use, why, how it was trained, and your safeguards (see the inventory sketch after this list).

  • Apply extra caution to mental health applications.

  • Assume you’ll be liable if your AI tools make mistakes.
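
For teams that maintain this inventory in code rather than a spreadsheet, each use case can be captured as a structured record. A minimal Python sketch; the field names are illustrative assumptions, not drawn from any statute:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    """One entry in a health system's AI inventory (illustrative fields)."""
    name: str                    # e.g., "discharge summary drafting"
    purpose: str                 # why the tool is used
    vendor: str                  # who builds and maintains it
    training_data_summary: str   # what the vendor has disclosed about training data
    patient_facing: bool         # does it interact with or inform patients?
    mental_health_related: bool  # flag for the extra scrutiny emerging state laws expect
    human_reviewer: str          # who signs off on the tool's outputs
    safeguards: list[str] = field(default_factory=list)  # bias tests, monitoring, etc.

# Example entry (all values hypothetical):
record = AIUseCaseRecord(
    name="Discharge summary drafting",
    purpose="Draft clinician-reviewed discharge notes",
    vendor="ExampleVendor",
    training_data_summary="Vendor reports de-identified clinical notes; details on file",
    patient_facing=True,
    mental_health_related=False,
    human_reviewer="Attending physician",
    safeguards=["Human review before release", "Quarterly output audit"],
)
```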

2. Don't let regulatory uncertainty paralyze decision-making. Create adaptive frameworks that can evolve with changing rules, because technology will always run years ahead of policy (one approach is sketched below).
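
One way to keep a framework adaptive is to store regulatory obligations as data that a policy team updates as bills pass, rather than hard-coding them into review workflows. A hypothetical sketch; the entries loosely mirror the state themes above but are not authoritative legal summaries:

```python
# Hypothetical per-state obligations, maintained as data by a policy team
# and updated as bills pass; these entries are illustrative, not legal advice.
STATE_REQUIREMENTS = {
    "UT": {"disclose_ai_generated": True, "chatbot_must_identify": True},
    "CO": {"disclose_ai_generated": True, "high_risk_bias_testing": True},
    "AZ": {"physician_final_call_on_coverage": True},
}

def obligations_for(states: list[str]) -> set[str]:
    """Return the union of obligations across every state a system operates in."""
    obligations: set[str] = set()
    for state in states:
        for rule, applies in STATE_REQUIREMENTS.get(state, {}).items():
            if applies:
                obligations.add(rule)
    return obligations

# Updating the dict is all it takes when a new law lands; review
# workflows that read from it pick up the change automatically.
print(sorted(obligations_for(["UT", "CO"])))
# ['chatbot_must_identify', 'disclose_ai_generated', 'high_risk_bias_testing']
```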

3. Push vendors to be true governance partners. Seek information about how their models evolve, what data sources they use, and what risks you face. Consider tools like UVM Health’s vendor vetting questionnaire to assess transparency.

4. Identify high-risk AI use cases and push lawmakers for more differentiated risk-tiering. While Colorado's law treats nearly all healthcare AI as high-risk, scheduling algorithms differ from diagnostic tools. Build your own risk framework and use it to advocate for more nuanced regulations (a tiering sketch follows).
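
An internal risk framework can start as a simple tiering function. The tiers and criteria below are illustrative assumptions, not Colorado's statutory definitions:

```python
def risk_tier(influences_clinical_decisions: bool,
              patient_facing: bool,
              mental_health_context: bool) -> str:
    """Classify an AI use case into an internal risk tier (illustrative criteria)."""
    if influences_clinical_decisions or mental_health_context:
        return "high"    # e.g., diagnostic support or therapy chatbots
    if patient_facing:
        return "medium"  # e.g., drafts of patient-portal messages
    return "low"         # e.g., scheduling or back-office automation

# A scheduling algorithm lands in a lower tier than a diagnostic tool:
print(risk_tier(influences_clinical_decisions=False, patient_facing=False,
                mental_health_context=False))  # low
print(risk_tier(influences_clinical_decisions=True, patient_facing=True,
                mental_health_context=False))  # high
```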

5. Engage with local lawmakers, not just at state capitals or in D.C. Sometimes, these relationships can be more influential than a lobbyist.

Questions to ask:

  • Which AI governance practices are safe to adopt now, and which should wait due to policy uncertainty?

  • If you're using AI in mental health use cases, how can you align with emerging laws that require added oversight?

  • Which local relationships could you strengthen now to influence future AI policy?

  • How can you make your AI adoption flexible enough to stay compliant as regulations evolve?