Generative AI in Your Desk Drawer: How to Get There

Previous articles in this series have shown how generative AI can be used for administrative and back-office functions in health care. Now we’ll look at how models are trained for these specific purposes.

Training Generative AI Models

Every industry has to develop domain-specific models, and health care carries the extra burden of protecting personally identifiable data. These requirements raise several questions: When are the general-purpose solutions offered by major tech companies appropriate? Where can health care organizations get data sets large enough to develop models? And how can vanilla LLMs, also called foundational LLMs, be enhanced with narrow data sets from a health care provider?

Harman Dhawan, founder and CEO of Bikham, says that now there are “fairly cheap” LLMs that providers can build on and customize. Not only are there well-known options from OpenAI and Google, but some LLMs are open source.

Jean-Claude Saghbini, President of the Lumeris Value-Based Care Enablement business, says, “Vanilla LLMs can certainly be used within specific solution designs that allow you to constrain and control the output. But in all cases, the use of AI for back-office work requires organizational guardrails. Team members have to be trained on how to use AI responsibly, and that means deploying a change management process to train and adopt this technology safely and effectively. Privacy concerns are an important consideration, particularly when using publicly facing AI platforms.”

Seek AI helps customers connect their structured data to LLMs, according to founder and CEO Sarah Nagy. She says that “training data does not necessarily need to be large to be effective.”

She adds, “It is best to start small, employing just the most important datasets, when working with LLMs. One reason for this is to get used to the novel workflows resulting from LLMs. Once acquainted with these workflows, the organization can expand to additional datasets.”

Iodine Software, according to chief product and technology officer Priti Shah, has the necessary business associate agreements (BAAs) to get patient data from their customers. She says that 27% of all U.S. patient admissions flow through Iodine solutions, which include real-time data.

When you remember that the now-discredited IBM Watson was trained on research papers, you can understand why using actual patient data is crucial.

Melvin Lai, senior associate at Silicon Foundry, says that use cases vary, but that “training on a dataset ranging from hundreds of gigabytes to several terabytes of text data should yield a well-functioning LLM. ChatGPT-3 was trained on approximately 45 terabytes of text data. Models focused on specific tasks or domains typically require less data to develop, but this raises the importance of curating the quality of input.”

Nick Stepro, chief product and technology officer of Arcadia, says, “As one example, a patient’s A1C may be formatted in many ways within an EHR. Training a model to identify those variables and consistently map them correctly ensures the most valuable and useful output. Programmers should train models to deliver an output in a specific format every time. This makes the application more reliable and dependable, providing the consistency users expect.”

SS&C Blue Prism, according to Anna Twomey, senior director of healthcare, develops a generative AI model as follows: They start with either a vanilla foundational model or one based only on medical records. In traditional machine learning parlance, the results of the models are called vector tables and consist of rules such as “six percent of the decision depends on age, eight percent on the presence of diabetes,” and so on. SS&C Blue Prism then analyzes each client’s own data to produce a customized vector table.
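The vector table Twomey describes resembles a table of feature weights in a simple scoring model. The following sketch, with invented weights and features (not SS&C Blue Prism's actual rules), shows how a base table might be customized with a client's own data:

```python
# Base "vector table": how much each factor contributes to a decision.
# All weights and feature names here are illustrative.
base_weights = {"age_over_65": 0.06, "has_diabetes": 0.08, "prior_admissions": 0.10}

def risk_score(patient: dict, weights: dict) -> float:
    """Weighted sum of the binary features present for a patient."""
    return sum(w for feature, w in weights.items() if patient.get(feature))

# Analyzing a client's own data might shift a weight, yielding a
# customized table for that client.
custom_weights = {**base_weights, "has_diabetes": 0.12}

patient = {"age_over_65": True, "has_diabetes": True}
risk_score(patient, base_weights)    # ~0.14
risk_score(patient, custom_weights)  # ~0.18
```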

For audits and compliance, the tool can calculate metrics from the Healthcare Effectiveness Data and Information Set (HEDIS). These help an organization track how well it’s carrying out treatment, identify gaps in patient communications, and fill these gaps. Figure 1 shows a typical screen from SS&C Blue Prism.
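A HEDIS-style gap report can be as simple as counting eligible patients who lack a recent result and listing who needs outreach. Here is a minimal sketch under an invented data model; it is not SS&C Blue Prism's actual logic or an official HEDIS specification:

```python
# Hypothetical patient records: has each diabetic patient had an A1C
# test recently enough?
patients = [
    {"id": "p1", "diabetic": True,  "last_a1c_days_ago": 90},
    {"id": "p2", "diabetic": True,  "last_a1c_days_ago": None},  # never tested
    {"id": "p3", "diabetic": False, "last_a1c_days_ago": None},  # not eligible
    {"id": "p4", "diabetic": True,  "last_a1c_days_ago": 400},   # overdue
]

def a1c_gap_report(patients: list[dict], max_days: int = 365):
    """Return (compliance rate, IDs of patients with a care gap)."""
    eligible = [p for p in patients if p["diabetic"]]
    gaps = [p["id"] for p in eligible
            if p["last_a1c_days_ago"] is None or p["last_a1c_days_ago"] > max_days]
    compliance = 1 - len(gaps) / len(eligible)
    return compliance, gaps

compliance, gaps = a1c_gap_report(patients)  # gaps == ["p2", "p4"]
```

The `gaps` list is exactly the "identify gaps in patient communications, and fill these gaps" step: it is the outreach worklist.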

[Figure: a laptop displaying a flowchart in SS&C Blue Prism.]
Figure 1. SS&C Blue Prism interface.

Erik Barnett, North America Advisory Healthcare & Life Sciences Lead at Avanade, says that their clients normally run the service on internal data. For instance, staff can create a presentation by searching existing company documents, optionally accepting data from the Web as well.

Abhishek Sharma, principal of business transformation at Sagility, says they use generative AI to generate synthetic data for use cases around specific machine learning models for payers and providers when data is lacking. He advises health care institutions to combine generative AI with other digital assets and deep domain expertise to create a holistic solution.
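To make the idea concrete, here is a minimal sketch of synthetic claims generation. A production pipeline would sample from a trained generative model; this stand-in merely resamples and jitters observed fields, and every field name is hypothetical rather than Sagility's actual schema:

```python
import random

def synthesize_claims(real_claims: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Generate n synthetic claim records from the distributions seen in real ones."""
    rng = random.Random(seed)
    codes = [c["procedure_code"] for c in real_claims]
    amounts = [c["billed_amount"] for c in real_claims]
    return [
        {
            "procedure_code": rng.choice(codes),
            # Jitter amounts so synthetic rows are not verbatim copies.
            "billed_amount": round(rng.choice(amounts) * rng.uniform(0.9, 1.1), 2),
            "synthetic": True,  # always mark generated records
        }
        for _ in range(n)
    ]

real = [{"procedure_code": "99213", "billed_amount": 120.0},
        {"procedure_code": "93000", "billed_amount": 85.0}]
augmented = real + synthesize_claims(real, n=100)
```

Marking every generated row (`"synthetic": True`) matters for auditability: when a downstream model misbehaves, you can separate the contribution of real and synthetic training data.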

Vignesh Ravikumar, partner at Sierra Ventures, predicts that industries will move over time to smaller, more specialized LLMs.

Chief customer officer Deirdre Leone at ContractPodAi believes that success for generative AI in contract development depends on domain-specific models, where specialized LLMs are trained to understand complex legal situations while also protecting sensitive patient information to avoid inaccuracies and misuse. “With this specialized information, a legal team can confidently draw up contracts and oversee them throughout their life cycle in more productive and efficient ways than before.”

Cameron Andrews, founder and CEO of Sirona Medical, writes to me, “Choosing LLMs is like hiring people: Some are smarter, some are more specialized, and some are more expensive than others. Health care organizations should focus on their IT infrastructure first, to ensure that they have the tools to pick, swap, combine, and tune LLMs easily and quickly or identify vendors and partners that do.”

Akshay Sharma, chief AI officer at Lyric, says they use a combination, what he calls an “orchestra,” of relatively small language models (SLMs) that they can fine-tune and run on cheaper GPUs and even CPUs. Using their own data as input, they develop specialized models: for reasoning about fraud, waste, and abuse; for coordination of benefits; and for other payment-integrity tasks.

By analyzing claims data and identifying patterns that may indicate fraudulent activity, health care providers can reduce the risk of financial loss and improve billing accuracy.
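Sharma's "orchestra" can be pictured as a router that hands each task to a specialized small model. The sketch below uses placeholder functions in place of fine-tuned SLMs; nothing here reflects Lyric's actual systems:

```python
# Placeholder "specialists" standing in for fine-tuned small language models.
def fraud_model(claim: dict) -> dict:
    # A real SLM would reason over the claim; this stub flags large amounts.
    return {"task": "fraud", "flag": claim["billed_amount"] > 10_000}

def cob_model(claim: dict) -> dict:
    # Coordination of benefits: more than one payer on a claim needs review.
    return {"task": "cob", "flag": len(claim["payers"]) > 1}

# The "orchestra": a routing table from task to specialist model.
ROUTES = {"fraud_review": fraud_model, "coordination_of_benefits": cob_model}

def orchestrate(claim: dict, task: str) -> dict:
    return ROUTES[task](claim)

claim = {"billed_amount": 12_500, "payers": ["planA"]}
orchestrate(claim, "fraud_review")  # flags the claim for fraud review
```

Because each specialist is small, any one of them can be retrained or swapped out without touching the rest, which is part of the appeal of running many narrow models instead of one large one.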

David Kereiakes, managing partner at Windham Venture Partners, says that organizations should include end users in the design process, drawing on them for testing and feedback.

CitiusTech has recently announced a testing platform to evaluate generative AI quality, the CitiusTech Gen AI Quality & Trust Solutions. Sridhar Turaga, senior vice president, data and analytics, noted, “Up to now, there have been no established technology-agnostic and platform-agnostic solutions that measure the quality and trust of healthcare generative AI, end-to-end. Approaches used in building and evaluating LLMs and foundation models are useful, but have not been designed specifically for healthcare.”

The CitiusTech solution enables clients to measure their models for accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency. Multiple health care innovators beta-tested the approach, which can be integrated into existing MLOps, DataOps, and quality management solutions.
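Two of those metrics, accuracy and calibration, are easy to illustrate for a binary classifier. The following generic sketch computes accuracy and a simple expected calibration error (ECE); it is a textbook formulation, not CitiusTech's implementation:

```python
def accuracy(probs: list[float], labels: list[int], threshold: float = 0.5) -> float:
    """Fraction of predictions whose thresholded class matches the label."""
    return sum((p >= threshold) == bool(y) for p, y in zip(probs, labels)) / len(labels)

def expected_calibration_error(probs, labels, n_bins: int = 5) -> float:
    """Mean |confidence - observed rate| per bin, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    ece = 0.0
    for b in bins:
        if b:
            conf = sum(p for p, _ in b) / len(b)   # average predicted probability
            rate = sum(y for _, y in b) / len(b)   # observed positive rate
            ece += abs(conf - rate) * len(b) / len(probs)
    return ece

probs = [0.9, 0.8, 0.2, 0.1]
labels = [1, 1, 0, 0]
accuracy(probs, labels)                    # 1.0
expected_calibration_error(probs, labels)  # ~0.15: accurate but overconfident bins
```

The example shows why both metrics matter: a model can classify every case correctly yet still report probabilities that drift from the observed rates, which is exactly what calibration checks catch.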

The final article in this series will take on the crucial issue of helping small providers, already strained past their limits to meet current patient needs, derive the benefits that this series has ascribed to generative AI.
