Generative AI in Healthcare: Looking To the (Near) Future

Jul 24, 2023


Written by Impact Advisors

Category: AI/Artificial Intelligence

Impact Advisors will continue to share our insights on the constantly changing nature of Generative AI in healthcare.

The healthcare delivery industry is quickly approaching a turning point at which organizations that can realize value from Generative AI products will have a distinct competitive advantage. Generative AI tools and capabilities are advancing rapidly, and many misconceptions persist among providers and vendors. To succeed, hospitals and health systems will need to carefully navigate and address the novel risks posed by Generative AI, identify use cases that will deliver tangible value, and most importantly, establish the right governance.

What is – and What is Not – Generative AI?

Hospitals and health systems have used machine learning (ML), natural language processing (NLP), and computer vision (CV) for years, in some cases going back several decades. Generative AI itself is not “new,” with healthcare-specific use cases dating back to the mid-2010s. What is new, and what has been the primary catalyst for the recent explosion of interest in Generative AI, are the large language models (LLMs) that have emerged in the 2020s. Large language models are a subset of Generative AI capable of generating human-like output based on massive training data sets. They entered mainstream public use with the release of ChatGPT in late November 2022.


Impact Advisors defines Generative AI as a type of AI that is specifically focused on leveraging large volumes of existing data to generate new or modified synthetic data and/or content in the form of text, images, or other types of media.

Generative AI is NOT:

  • Capable of predicting the future with certainty. A large language model produces the output that is most probable given its training data; it does not evaluate for factual accuracy. If the training data relevant to an input is limited, the model will substitute plausible-sounding output regardless of accuracy. Additionally, because large language models require an enormous amount of training data to function well, how a given model arrives at a particular output is effectively a “black box.”
  • Able to access real-time data, studies, or statistics. Generative AI products are generally limited to their training data, and most are not connected to the internet in real time. (For example, ChatGPT’s training data currently extends only through September 2021, and the full data set is proprietary.)
  • Limited to just ChatGPT. Although ChatGPT continues to receive a considerable amount of press, there are many Generative AI products on the market today, including general-purpose large language models, image generators, and healthcare-specific Generative AI applications.


Large Language Models Pose Novel Risks for Healthcare

There is inherent and serious risk with any AI product used in health delivery (e.g., the potential to produce “bad” information on which clinical or business decisions are based), and large language models can amplify that risk. For example, large language models are highly prompt-sensitive: generated content and/or “answers” can vary widely depending on how the input is phrased, and can even differ when the exact same input is entered twice. “Hallucination,” which, put simply, is the tendency to “make things up,” is also inherent to large language models. Given that many large language models are now publicly available, individual users can experiment with potential use cases on their own without any help or permission, a reality that poses obvious and fundamental risks in an industry like health delivery. Compounding these risks even further is the fact that output from large language models “seems human,” which can lead users to put a higher level of trust in “answers” and/or generated content than is warranted.
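To make the prompt-sensitivity and non-determinism point concrete, here is a minimal sketch, assuming the openai Python package (the v0.x interface current when this article was written) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative only. Sending the identical prompt twice at a nonzero sampling temperature will generally return two different answers.

```python
# Minimal sketch: the exact same prompt, sent twice, can return different text.
# Assumes the openai Python package (v0.x interface) and an OPENAI_API_KEY
# environment variable; the model name and prompt are illustrative only.
import openai

prompt = "List three administrative uses of generative AI in a hospital."

for attempt in (1, 2):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # a general-purpose LLM, not healthcare-specific
        messages=[{"role": "user", "content": prompt}],
        temperature=0.8,  # nonzero temperature: output is sampled, not deterministic
    )
    print(f"--- Attempt {attempt} ---")
    print(response.choices[0].message.content)
```

Running this twice is a quick, low-stakes way for a governance team to see for itself why identical inputs cannot be assumed to yield identical outputs.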

Examples of Healthcare-Focused Generative AI Use Cases

One of the factors that makes Generative AI – particularly large language models – so different from the way artificial intelligence has traditionally been used in healthcare is that publicly available tools (ChatGPT, Google Bard, etc.) are general-purpose rather than task-specific. There are countless potential use cases for Generative AI and large language models in health delivery, but many of those theoretical use cases are simply not feasible at this time or could be fraught with risk (especially in the absence of the right oversight and governance).

Examples of realistic near-term use cases for Generative AI in healthcare include:

  • Generating images of human organs (e.g., heart, lungs) with specific defects that can be used to train clinicians or AI models.
  • Drafting new documentation that can be reviewed by health system staff or clinicians, such as medical necessity and/or preauthorization documentation and training materials, to reduce administrative burdens.
  • Powering interactive chatbot or interactive voice response (IVR) tools that can help answer patients’ questions about their bills, assist with appointment scheduling, etc.
  • Creating new synthetic data and/or content, such as:
    • Generating new synthetic data sets with specific parameters (e.g., simulated patients with a specific rare disease) that can be used for training clinicians or AI models; a minimal sketch follows this list. Note: synthetic data sets contain no PHI because the patients are not “real.”
    • Generating large volumes of data that IT can use for load testing and similar tasks.
  • Supporting programming and software/application development, such as checking existing code for errors and writing new code based on defined parameters.
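As one illustration of the synthetic data use case above, here is a minimal rule-based sketch, assuming the third-party faker package; the field list, disease name, and value ranges are illustrative placeholders, and a real project would add clinical review of the generated records and distributions.

```python
# Minimal sketch of rule-based synthetic patient generation. No PHI is
# involved: every record is fabricated. Assumes the third-party "faker"
# package; fields, ranges, and the disease name are illustrative only.
import json
import random
from faker import Faker

fake = Faker()

def synthetic_patient(condition: str) -> dict:
    """Fabricate one simulated patient record with a specified condition."""
    return {
        "name": fake.name(),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
        "mrn": fake.bothify(text="MRN-########"),  # fake medical record number
        "condition": condition,
        "hemoglobin_g_dl": round(random.uniform(7.0, 17.0), 1),  # illustrative lab value
    }

# e.g., 100 simulated patients with a specific rare disease
cohort = [synthetic_patient("Erdheim-Chester disease") for _ in range(100)]
print(json.dumps(cohort[0], indent=2))
```

The same pattern scales to the load-testing use case: generate as many fabricated records as the test requires without ever touching production data.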

Unrealistic near-term use cases include any clinical use case that does not involve a human intermediary.

It is important to note that the demand and risk associated with specific Generative AI use cases will change – potentially rapidly – over time.  See graphic below, which represents one specific snapshot in time.


Generative AI Project Assessment Matrix

Healthcare organizations are starting to explore potential Generative AI projects, and it is important to balance demand against risk, assessing each use case carefully for ethics, patient safety, and regulatory compliance before implementation. The following matrix is useful for considering these variables together.

Note: The categorization of demand and risk in this matrix is subjective and may vary based on your organization’s strategy, industry trends, regulatory environment, and technological advancements. The use cases shown are for illustration purposes only.

[Graphic: Generative AI Project Assessment Matrix]
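For teams that want to operationalize the matrix, here is a minimal sketch of the quadrant logic; the use cases, scores, and the 0–10 scale are illustrative assumptions, and in practice the scoring would come from your governance body rather than hard-coded values.

```python
# Minimal sketch of the demand/risk quadrant idea. The use cases, scores,
# and threshold are illustrative only; in practice they would be assigned
# by the organization's AI governance body.
USE_CASES = {
    # name: (demand score 0-10, risk score 0-10)
    "Draft prior-authorization letters": (8, 4),
    "Synthetic data for load testing":   (5, 2),
    "Patient-facing billing chatbot":    (7, 6),
    "Autonomous clinical diagnosis":     (9, 10),
}

def quadrant(demand: int, risk: int, threshold: int = 5) -> str:
    """Place a use case into one of four demand/risk quadrants."""
    d = "High demand" if demand >= threshold else "Low demand"
    r = "high risk" if risk >= threshold else "low risk"
    return f"{d}, {r}"

for name, (demand, risk) in USE_CASES.items():
    print(f"{name}: {quadrant(demand, risk)}")
```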

Impact Advisors POV: Looking to the (Near) Future

Recently released, publicly available large language models have generated enormous public interest and unprecedented hype, but the rapid evolution and adoption of these products is creating a chaotic marketplace and an uncertain regulatory environment that will likely persist for at least a few years.

Most large HIT vendors are – or will soon be – pursuing use cases for Generative AI. Established, healthcare-specific companies such as enterprise EHR vendors will likely take a somewhat cautious approach, partnering with tech giants and developing use cases with clients. There is already no shortage of Generative AI-focused healthcare startups, and hospitals and health systems will likely soon be overwhelmed by the sheer number of Generative AI vendors pitching solutions. Some niche vendors will have good ideas in theory, but their products may not be feasible in practice due to a lack of understanding of the healthcare space. Other niche vendors may not offer true Generative AI products (despite their claims) and will also fail in execution.

As with any new technology, the reaction to Generative AI from health delivery organizations has been mixed. Some hospitals and health systems are embracing the opportunity to dramatically ease administrative burdens through automation, while others are focusing solely on the risks and potential liability posed by Generative AI. The problem is that Generative AI is not “any new technology.” The massive interest that publicly available large language models generated in a matter of months – coupled with the very real potential for those tools to drive novel and fundamental near-term change – is without precedent in health delivery. Most hospitals and health systems will feel significant pressure to act soon and deliver real value from Generative AI, but many will move forward before they are ready. Success requires carefully navigating and addressing the unique risks posed by Generative AI, identifying use cases that will deliver tangible value, and most importantly, establishing the right governance.

Best Practices for Use of AI in Healthcare, Including Generative AI

  • Governance and accountability – Including stakeholders from many parts of the organization

  • Thoughtful use case evaluation (value/feasibility) – Is AI the best solution for our need?

  • Data and infrastructure – Can what we have be leveraged for a specific AI use case?

  • Robust privacy and security controls

  • Explainability – Can the model explain its output so that people will trust it and use it?

  • Bias, ethics, equity – AI can both highlight human bias and perpetuate biases inherent in its training data

  • Regulatory awareness – The FDA and other bodies have regulatory frameworks for AI in healthcare, which are updated as technology evolves