Generative AI isn’t as scary as some make it out to be.

Yes, the use cases are wide, security is a concern, and the margin for error is slim for member-facing technology.

But at the heart of it, Large Language Models (LLMs) take your data and turn it into something more useful for your health plan. In the rapidly evolving landscape of artificial intelligence, generative AI promises flexibility and transformative opportunities across various industries. So why not health insurance?

For health plans, generative AI offers innovative ways to enhance customer engagement, streamline processes, and improve decision-making.

Unlike traditional AI, which operates based on predefined rules and patterns, generative AI can create new content across various modalities, including text, images, videos, and 3D representations. The ability to produce new and better information opens new avenues for engaging with members and health plan staff.

However, venturing into the realm of generative AI also brings along its fair share of uncertainties and challenges.

According to a report published by Bain & Company, about 75% of health system executives say generative AI can reshape the industry. But only 6% of health systems have a generative AI strategy in place.

There’s a reason why health system executives have been slow to move on generative AI despite the excitement: not knowing where to start.

Such a broad and developing technology can be overwhelming. Is generative AI a passing trend? Do we have the technology in place to act on it? What business areas should we start with?

All questions are valid, and the answers depend entirely on the reality of your health plan. Here are the top questions that health plans need to consider before investing in generative AI.

1. What Barriers Need to Be Overcome First?

Alignment and fear of the unknown. While health plans may feel the pressure to move fast, several internal barriers need to be overcome.

First, achieve internal alignment across various stakeholders. The involvement of IT, operations, C-suite executives, analytics teams, member engagement specialists, CSRs, and others is crucial for the successful implementation of generative AI. This isn’t a side-of-the-desk effort; it demands collaboration and commitment from all corners of the organization. Health plans and government agencies have recognized the significance of crafting a coherent internal business case for AI implementation. This topic was discussed during Softheon’s 2023 Executive Advisory Council, where leaders emphasized the keystone role of internal alignment in ensuring the success of generative AI initiatives.

Another barrier to overcome is the fear of the unknown.

Overcome fear with knowledge. Health plans need to take best practices from others in similar lines of business and markets.

Health plan and government agency leadership engaged in discussions focused on key objectives and strategies for effectively using AI to enhance member engagement and care outcomes. This exchange of experiences and best practices empowers healthcare organizations to make informed decisions and embark on their AI journey with greater confidence.

2. What Risks Does Generative AI Bring to the Healthcare Industry?

Breaches in Data Security and Privacy

Ensuring the privacy and security of sensitive data must remain a top priority. Data leaving your system always poses a risk, and it’s crucial to exercise caution when feeding data into both public and private LLMs.

Hold LLMs to the same security standards as your current data warehouses. This includes the anonymization and encryption of all sensitive information, from priority operational documents to member engagement reports.

Even better than putting the data under lock and key is making sure that the wrong people do not have access in the first place. Especially as generative AI continues to develop, health plans should prohibit users from entering Protected Health Information (PHI) into the model.

Strict internal user roles should also be enforced. Customer service representatives (CSRs) and analytics personnel do not need the same type of information from an LLM. Training the model exclusively on the data relevant to each business function optimizes the performance of generative AI while keeping security risks to a minimum.
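
To make this concrete, here is a minimal sketch of what such a guardrail might look like, assuming a simple role-to-scope mapping and regex-based redaction. The role names, PHI patterns, and the send_to_llm placeholder are illustrative assumptions, not part of any specific product.

```python
import re

def send_to_llm(prompt: str) -> str:
    """Placeholder for the actual model call (e.g., a private LLM endpoint)."""
    return f"(model response to: {prompt})"

# Illustrative role-to-scope mapping; real definitions would come from the
# plan's identity and access management system.
ALLOWED_SCOPES = {
    "csr": {"member_faq", "plan_benefits"},
    "analytics": {"claims_summary", "enrollment_trends"},
}

# Simple, non-exhaustive PHI patterns (SSNs, a made-up member ID format, dates of birth).
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN
    re.compile(r"\bMBR\d{8}\b"),            # hypothetical member ID format
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),   # dates such as a date of birth
]

def redact_phi(text: str) -> str:
    """Replace anything matching a known PHI pattern before it leaves the system."""
    for pattern in PHI_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def query_llm(user_role: str, scope: str, prompt: str) -> str:
    """Enforce role-based scope, then send only a redacted prompt to the model."""
    if scope not in ALLOWED_SCOPES.get(user_role, set()):
        raise PermissionError(f"Role '{user_role}' may not query scope '{scope}'")
    return send_to_llm(redact_phi(prompt))

# Example: a CSR asking about benefits, with an SSN accidentally pasted in.
print(query_llm("csr", "plan_benefits", "Member 123-45-6789 asks about copays."))
```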

Inaccurate Information Being Conveyed to Members

Generative AI systems sometimes produce content that is incorrect or nonsensical, known as “hallucinations.”

This can be particularly challenging in niche industries where technical, internal documentation holds much more relevance than information accessible to the public.

Open models like ChatGPT refine and edit answers based on responses from the public. Ironically, hallucinations are largely caused by human inputs that push the model to modify its original data pull.

Hallucinations can be reduced by lowering the temperature parameter, which reduces the randomness of the generation. Additionally, implementing feedback loops such as reinforcement learning can mitigate degradation of AI-generated content.
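
As a hedged illustration, the snippet below shows how a lower temperature can be set on a typical chat-completion call. The use of the OpenAI Python SDK and the model name are assumptions for the example; most LLM APIs expose an equivalent parameter.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK; other LLM APIs expose a similar knob

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",       # illustrative model choice
    temperature=0.2,      # low temperature -> less random, more deterministic answers
    messages=[
        {"role": "system", "content": "Answer only from the provided plan documentation."},
        {"role": "user", "content": "What is the copay for a primary care visit?"},
    ],
)

print(response.choices[0].message.content)
```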

Because accurate generative AI needs a robust data collection to pull from, many health plans worry about the quality of their internal knowledge sources.

Softheon layers three domains of knowledge to support organizations still building the internal databases needed to train generative AI. The LLM draws on:

      1. Industry-specific knowledge and best practices
      2. Softheon’s internal processes and product infrastructure
      3. Your data

By cross-referencing these three domains, both hallucinations and data degradation are mitigated.
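
A simplified sketch of how this layering could work is shown below: each domain is searched separately and the retrieved passages are combined into one grounded prompt. The toy keyword retrieval and the document lists are hypothetical stand-ins, not Softheon’s actual implementation.

```python
from typing import List

# Hypothetical document stores for the three knowledge domains.
INDUSTRY_DOCS = ["CMS enrollment guidance ...", "ACA special enrollment rules ..."]
PLATFORM_DOCS = ["Softheon billing workflow ...", "Product configuration guide ..."]
PLAN_DOCS = ["Your plan's 2024 benefit summary ...", "Internal escalation policy ..."]

def retrieve(query: str, docs: List[str], top_k: int = 2) -> List[str]:
    """Toy keyword retrieval; a real system would use embeddings and a vector index."""
    scored = sorted(
        docs,
        key=lambda d: sum(word in d.lower() for word in query.lower().split()),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(question: str) -> str:
    """Cross-reference all three domains so the model answers from known sources only."""
    context = (
        retrieve(question, INDUSTRY_DOCS)
        + retrieve(question, PLATFORM_DOCS)
        + retrieve(question, PLAN_DOCS)
    )
    sources = "\n".join(f"- {passage}" for passage in context)
    return (
        "Answer using only the sources below. If the answer is not present, say so.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("When does special enrollment apply?"))
```

In practice the retrieval step would rely on embeddings and a vector index rather than keyword matching, but the cross-referencing pattern is the same.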

3. Should You Consider Buying, Building, or a Hybrid Approach to Generative AI?

You need to decide whether to purchase a ready-made generative AI system, build your own, or opt for a hybrid approach.

Each approach has pros and cons, and understanding the nuances is essential for making an informed decision that aligns with your goals and resources.

Building an Internal System

Building an internal generative AI system allows it to directly address the specific needs of the health plan. This approach enables fine-tuning AI models on the health plan’s specific data, ensuring a high degree of relevance and accuracy.

However, this path is expensive and slow. Generative AI systems are inherently complex, and their underpinnings require a profound understanding of both AI and the healthcare domain. The development process is time-consuming, resource-intensive, and demands expertise that might not be readily available.

The significant investment of time, cost, and technical expertise can be particularly challenging for smaller health plans.

Buying a Stand-Alone Product

The option to purchase a stand-alone generative AI product may appear to be a quicker and potentially cost-effective solution. However, it’s important to note that many public large language models (LLMs) available on the market are not specifically tuned for healthcare contexts.

These LLMs may generate responses based on a broad array of data sources, which can lead to answer degradation due to irrelevant or inaccurate information. Additionally, feedback loops from external users may influence the quality of generated content. While purchasing a product might offer convenience, the lack of specificity and the potential for performance limitations raise concerns about the long-term usability of the solution.

The Hybrid Approach: Tuning and Fine-Tuning

The hybrid approach is adaptable, combining the technology of a purchased foundation model with the institutional knowledge gained from building in-house.

By starting from a foundational generative AI system, such as a GPT-style model trained on extensive text data, organizations can leverage its broad capabilities. The hybrid approach involves tuning this system by training it on industry-specific, operational, and member data. This process refines the AI’s outputs to be more aligned with the organization’s context, mitigating the influence of public data sources and improving the quality of generated content.

Unlike training foundation models, which demands substantial resources, fine-tuning can be achieved with comparatively less data, reduced costs, and in shorter timeframes.
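
For a sense of what that looks like in practice, the sketch below fine-tunes a small open foundation model on a hypothetical file of de-identified plan documents using the Hugging Face Trainer. The model choice, file name, and hyperparameters are illustrative assumptions, not a recommended configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

# Illustrative base model; any small open foundation model would work for a pilot.
base_model = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical JSONL file of de-identified plan documents and Q&A pairs.
data = load_dataset("json", data_files="plan_knowledge.jsonl")["train"]
tokenized = data.map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=512),
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="plan-tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adapts the general-purpose model to the plan's own language and policies
```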

4. What is the Next Step for AI Investment?

Start by conducting an internal audit of your health plan’s strategic priorities and identifying current operational gaps. This evaluation will provide valuable insights into where generative AI holds the greatest potential to drive improved outcomes.

Next, determine your target audience for AI-driven initiatives. Assess where the greatest value addition lies—whether it’s enhancing member experiences, optimizing staff workflows, or both. Many healthcare organizations choose to prioritize staff adaptation in the early stages to minimize potential negative impact on members during the development phase.

Recognize that the deployment of LLMs will initially have a significant impact in areas where simple design patterns exist and where there is a higher tolerance for error and correction. This strategic approach ensures a smoother integration and allows for learning and improvement as the system adapts to more complex tasks over time.

5. What Processes See the Greatest Adoption of AI?

Generative AI initiatives focused on enhancing data literacy, facilitating analytics self-service, and fostering data-driven decision-making are paving the way for increased interest and investment in chat-based interfaces.

The chat interface stands out as a pivotal medium, bridging the gap between AI capabilities and human interaction. To unlock the full potential of this interface, training is paramount. Empower your staff to understand the nuances of the AI product and its capabilities, allowing them to effectively interact with and harness the power of AI-generated responses.

Here are some examples of the diverse range of potential use cases for generative, chat-based AI:

  • Member Engagement: AI can proactively address member queries, ensuring that a significant portion of inquiries are resolved even before reaching a live agent. This not only enhances member satisfaction but also optimizes agent resources.
  • Agent Assist: By equipping live agents with precise AI-generated answers to member queries, healthcare organizations can ensure that customer interactions are accurate, efficient, and well-informed.
  • Knowledge Repository/Management: Integrate AI into your preferred document repository to offer members and staff accurate and straightforward answers to a wide array of queries, bolstering knowledge sharing and accessibility.
  • Data Analytics: Generative AI can empower non-technical users to harness the capabilities of experts. Utilize AI to query complex datasets and present the findings in a user-friendly format, driving data-driven decision-making (see the sketch after this list).
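
On the data analytics point, a minimal sketch of the pattern might look like the following: the model translates a plain-English question into SQL against a known schema, and the query runs read-only against a local database. The schema, database file, and model choice are hypothetical examples.

```python
import sqlite3
from openai import OpenAI  # assumption: any chat-style LLM API would work the same way

# Hypothetical claims table a non-technical user might want to explore.
SCHEMA = "claims(member_id TEXT, service_date TEXT, category TEXT, paid_amount REAL)"

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def question_to_sql(question: str) -> str:
    """Ask the model to translate a plain-English question into SQL for a known schema."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        temperature=0,
        messages=[
            {"role": "system",
             "content": f"Return only a single SQLite SELECT statement for this schema: {SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

def answer(question: str, db_path: str = "claims.db") -> list:
    """Run the generated query against a read-only connection so exploration stays safe."""
    sql = question_to_sql(question)
    with sqlite3.connect(f"file:{db_path}?mode=ro", uri=True) as conn:
        return conn.execute(sql).fetchall()

# Example: answer("What did we pay for pharmacy claims last month?")
```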

Generative AI holds immense promise for health plans, enabling innovative solutions that enhance member engagement and operational efficiency. With strategic planning and a clear focus on value, health plans can harness the transformative power of generative AI to stay ahead in the ever-evolving landscape of healthcare.

To learn more about how generative AI can benefit your business, check out AIME (Artificial Intelligence Management for Enterprise).