Discover 2023's top healthcare AI trends: generative AI went mainstream with the rise of ChatGPT, and innovations like Google's Med-PaLM 2 achieved key milestones. Despite the excitement, AI faces bias and regulatory challenges, and healthcare leaders are seeking effective AI strategies amid rapid innovation and evolving regulation.
1. Generative AI frenzy captures the public’s imagination
Following ChatGPT’s release in late 2022, generative AI leaped to mainstream attention. As shown in the graphic below, from Jan. 1, 2023, to a peak in early June, Google Trends recorded a meteoric 1330% increase in searches for “generative AI.” Similarly, ChatGPT was Wikipedia’s most viewed article of 2023 with 49.5M views.
Generative AI’s transformative potential was embraced by a variety of stakeholders: business tycoons like Bill Gates called it as revolutionary as the mobile phone and personal computer, while leading healthcare thinkers suggested gen AI was poised to imminently impact provider operations. Meanwhile, as consumer awareness of gen AI increased, so did hopes for its impact on healthcare: 71% of consumers currently using generative AI said they believed it could revolutionize health care delivery.
2. It’s not all hype: continued speed and strength of AI innovation at heart of banner year
Let’s also take a moment to acknowledge the incredible speed and breadth of generative AI innovation in 2023. A host of general-purpose large language models (LLMs) were released this year. (LLMs are the underlying AI models that power generative AI tools.) The most recent of these is Google’s Gemini, released earlier this month to mixed reviews. In addition to general LLMs, 2023 also saw the release or update of several healthcare-specific LLMs: Google’s Med-PaLM 2, which powers MedLM, a new suite of healthcare gen AI models Google released last week; Microsoft and Epic’s partnership to integrate Azure OpenAI technology into Epic’s EMR; Hippocratic AI; and Harman’s HealthGPT.
These models drove several notable milestones for generative AI in healthcare. ChatGPT passed all three parts of the U.S. Medical Licensing Examination (USMLE), and Med-PaLM 2 achieved an accuracy of 86.5% on USMLE-style questions. A study published in JAMA Internal Medicine found that gen AI-powered chatbots may have better bedside manner than physicians. The performance and empathy displayed by these tools sparked excitement about their potential to aid in clinical care.
And in a year where the overall financial landscape for tech startups was rocky, we saw AI strongly outperform the overall market. Microsoft kicked off the year with a $10B investment in ChatGPT maker OpenAI, while General Catalyst and Andreessen Horowitz announced a $50M investment in the continued development of Hippocratic AI, a healthcare-specific generative AI model. In Q3, while startup investments generally fell 31% to $73 billion total, AI startup investments reached $17.9 billion. (This article, while a few months old, has one of the better takes on the overall vendor and investment landscape for generative AI in healthcare that we’ve seen and is worth a read for those interested.)
3. With the rise in hype around generative AI comes a parallel increased level of concern
For all of the positive coverage about generative AI, others were not as optimistic about the coming revolution. Some investors compared the hype around AI to the dot-com bubble, and others pointed out that in healthcare, numerous barriers—both in terms of health system infrastructure and the regulatory environment—will likely keep AI from living up to sky-high expectations, at least anytime soon. Also slowing generative AI’s roll are concerns about its tendency to “hallucinate” (e.g., a recent JAMA study finding that ChatGPT quickly wrote 100 blogs perpetuating health disinformation) and its potential for bias (e.g., studies finding that chatbots perpetuate racist medical misinformation and that AI image generators tend to depict surgeons as white males), obviously both risky in any setting but especially in healthcare. And despite consumers’ excitement about generative AI’s potential, an overwhelming 4 in 5 patients say they still have lingering concerns about their doctors using generative AI to make diagnoses or develop treatment plans.
As a result, while several health systems are experimenting with generative AI, few healthcare use cases—with the possible exception of clinical documentation tools and some small administrative functionalities—are anywhere near widespread. As a New York Times headline from June put it, “AI May Someday Work Medical Miracles. For Now, It Helps Do the Paperwork.”
Meanwhile, on the industry side, generative AI was not entirely immune to investment declines. According to Pitchbook analysis, putting aside a few big deals like those noted above, venture capital investment in generative AI for 2023 is only on pace to match the total raised in 2021 by early gen AI leaders—an unexpected stagnation given how much gen AI’s public profile has climbed in the ensuing two years. And this year also saw the demise of some previous healthcare “unicorns,” most notably Olive AI, which collapsed after failing to deliver on its promise to significantly cut health systems’ administrative costs through automation.
4. AI’s rapid emergence leaves health system execs grasping for an AI strategy
Despite concerns about hype and hallucination, most health system leaders seem to recognize that AI is not simply a passing fad but here to stay, with the potential to concretely change how the healthcare space operates. Yet AI’s sudden emergence has also caught healthcare executives somewhat by surprise: according to a Bain & Company poll, while 75% of health system leaders surveyed believe that AI has the potential to reshape the industry, only 6% have established a plan for how to tackle this transformative technology.
The rapid pace of innovation is only partly to blame for this state of strategic paralysis. The novelty of AI has left many healthcare providers searching for frameworks and templates they can use to assess the impact of adoption and help prioritize a diverse set of potential use cases. Among execs at the HIMSS conference, there was a general feeling that AI operations were jumping ahead of strategy: while everyone is discussing user experience with different vendors and ROI from specific use case implementation, few are taking the time to first identify more systematically what problems they can/should solve with AI.
And leaders aren’t just looking within healthcare for guidance on AI strategy. With concerns about ethical use, consumer protection, and product effectiveness, providers are increasingly calling for support and structure from government entities. As Congress evaluates proposals ranging from calls for national data privacy standards to government oversight of AI risk mitigation in healthcare, health systems must balance the desire to move forward on AI—and the outcomes-improving, cost-saving potential it carries—against uncertainty about the pace and resolution of federal action.
5. On AI regulation, 2023 was a year filled with movement—but little real action
It’s not just healthcare executives who are recognizing that the days of the “AI wild west” are coming to a close; it’s AI’s biggest proponents and business leaders as well. In May 2023, Sam Altman, CEO of OpenAI and perhaps the human most responsible for the generative AI hype, practically pleaded in front of Congress for regulation of the AI space, saying, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.” Even venture capitalists, who historically have resisted regulation, are coming around: in November, a consortium of 35 leading VC firms announced in partnership with the Commerce Department that they would adhere to a voluntary AI governance protocol.
The consensus for regulating AI is international. In a marathon 36-hour series of negotiations earlier this month, EU leaders agreed on the terms of an Artificial Intelligence Act, the world’s first governmental commitment to legislate acceptable AI use. Nonetheless, no law has yet been passed, and months (or years) of negotiation remain. Similarly, in the healthcare space, words have yet to transform into action: the World Health Organization (WHO) released a massive white paper outlining six areas of regulation for the use of AI in healthcare, but adherence to these policies remains voluntary.
Nonetheless, some concrete steps may have a direct impact on healthcare providers sooner rather than later. In October, the White House announced a sweeping executive order for all federal agencies, including the Department of Health and Human Services (covering all federal health agencies such as the FDA, NIH, and CMS). The order includes a wide range of requirements designed to set standards for AI safety and security; protect Americans’ privacy; advance equity and civil rights; protect consumer benefits; promote innovation and competition; advance American leadership on responsible AI utilization abroad; and ensure responsible and effective government use of AI. While the guidelines are fairly broad, the work that healthcare agencies do will likely not only serve as a template for AI adoption and implementation across LHS, but also may itself become a framework for legislative action. Last week, more than two dozen leading health care providers made voluntary safety, security, and transparency commitments to the White House in line with the executive order.
What’s more, accrediting bodies are also raising the bar for AI and data governance. In December, the Joint Commission (JC) launched a new certification program for hospitals, the Responsible Use of Health Data (RUHD) Certification. Its goal is to provide a standardized way for hospitals to demonstrate their commitment to secure and responsible handling of patient data for non-clinical purposes, with requirements covering de-identification, data controls, limitations on use, algorithm validation, patient transparency, and oversight structure. While certification remains voluntary, its launch is a strong hint to health systems that JC accreditation (the industry gold standard) will increasingly focus not just on clinical excellence but on rigorous data stewardship and ethical usage as well.