Health

What’s Special About AI Risks and Remedies in Health Care?

We’re all saturated by now with dire warnings about AI. Health care stands to benefit greatly from AI, which could help systems around the world compensate for an alarming shortage of staff. But health care also faces particular challenges in using AI.

It’s worth noting that President Biden’s massive October 2023 Executive Order on AI devoted several sections—5.2(e), 5.2(f), and 8(b)—to research on the benefits and risks of AI in health care.

Sara Shanti, a partner specializing in health care technology at law firm Sheppard Mullin, says that the Executive Order and recent FDA guidance “start to provide real direction on how to navigate patient risk. The Executive Order requires the Department of Health and Human Services to establish an AI Task Force (in consultation with the Department of Defense and the Department of Veterans Affairs) to protect AI deployment. The FDA specifically promotes the monitoring of AI-generated models in medical devices and software for accuracy, risk, and objective success.”

This article looks at what’s special about AI in health care, and some ways to address its needs. For this article, I spoke just to lawyers, because we’ve heard plenty from the technologists about AI’s potential and what they’re doing to minimize risk. The perspective of attorneys with expertise in health care is illuminating.

An anthology from STAT also contains some relevant analyses and case studies.

Liability and Licensing

Are clinicians facing the risk of lawsuits if they heed the advice of AI engines? Where do the new systems differ from older forms of clinical decision support—or the simple act of calling an expert colleague on the phone for a second opinion?

Digital apps and services have long sidestepped liability by claiming that they are not treating patients or providing diagnoses. Erica Kraus, a partner specializing in value-based care at Sheppard Mullin, points out that AI has aided with diagnoses for a long time: for instance, by examining pictures of moles and suggesting which ones indicate cancer. And some protection from liability is important to leave space for innovation.

Dr. Michael Levinson, who is both a physician and a healthcare attorney with Berger Singerman LLP, describes the general process of diagnosis as a decision tree, a concept familiar to programmers. As the physician (or the AI engine) asks questions, the possibilities are narrowed down until there is a likely diagnosis or a referral for a test.

Because AI has access to a database of symptoms and possible diseases much bigger than any human being can remember, it can speed up diagnosis and find possible causes that the physician might not think of. For instance, if someone returning from a distant country reports unusual symptoms, AI can search the set of common diseases in that region. The doctor makes the final assessment.
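
To make the decision-tree analogy concrete, here is a minimal sketch in Python. The symptoms, conditions, and recommendations are invented purely for illustration and are not clinical advice; a real clinical decision-support system would rely on validated models and far richer data than a handful of yes/no questions.

```python
# Illustrative sketch only: a toy "diagnosis as decision tree".
# All questions and outcomes below are made up for demonstration.

from dataclasses import dataclass

@dataclass
class Node:
    question: str            # symptom or finding to ask about
    if_yes: "Node | str"     # next node, or a working recommendation
    if_no: "Node | str"

# Each answer prunes the space of possibilities, as described above.
tree = Node(
    question="Fever above 38C?",
    if_yes=Node(
        question="Recent travel to a malaria-endemic region?",
        if_yes="Order malaria rapid diagnostic test",
        if_no="Consider common viral infection; re-evaluate in 48 hours",
    ),
    if_no=Node(
        question="Persistent cough for more than 3 weeks?",
        if_yes="Refer for chest imaging",
        if_no="No likely diagnosis yet; gather more history",
    ),
)

def walk(node: "Node | str", answers: dict[str, bool]) -> str:
    """Follow yes/no answers down the tree until a leaf is reached."""
    while isinstance(node, Node):
        node = node.if_yes if answers.get(node.question, False) else node.if_no
    return node

print(walk(tree, {"Fever above 38C?": True,
                  "Recent travel to a malaria-endemic region?": True}))
# -> "Order malaria rapid diagnostic test"
```

The point of the sketch is the structure, not the content: an AI engine can traverse a far larger tree, drawing on a database of symptoms and regional disease prevalence, while the physician still makes the final call.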

Kraus points out that the recent introduction of generative AI (of which ChatGPT is the most famous example) opens up possibilities for the AI-based system to answer questions just as a doctor does.

A recent study found that an AI chatbot achieved a higher rate of correct diagnoses than human clinicians on certain complex conditions. Both the chatbot and the clinicians were presented only with a clinical summary. How much better could both have performed if they could have interviewed patients?

Kraus suggests that states (the jurisdictions where licensing occurs) adopt guidance about the use of AI in making judgments. The potential, she says, is exciting, but regulators have to step in to ensure consistency and safety. The FDA, unfortunately, has lots of other things to regulate and is somewhat overwhelmed. On the other hand, the FDA did carve out special rules for software in health care (which it bureaucratically classifies as “devices”) and has not neglected AI.

Kraus says that court cases might also help establish guideposts.

According to her, rules about supervision (which determine the doctor’s responsibility as nurses, technicians, and front desk staff interact with patients) don’t map well to AI systems. “What does it mean to contract out a service to AI?” she asks.

Levinson summarizes the use of AI in health care by saying, “AI is a tool.” And it can vastly expand access to accurate diagnoses across the world. However, he says we need a lot more research into the effectiveness and accuracy of current tools.

Legal guideposts are not the only developments that will determine the adoption of AI; Kraus says that the responses of doctors and the public also affect the spread of new systems.

A recent study by UserTesting in the U.S., Britain, and Australia found a striking divide in patients’ trust in AI, with U.S. consumers more willing to trust AI than those in the other two countries. Of course, privacy preservation is another concern of doctors and patients.

Equity

The question of bias arises across all AI platforms and use cases. Levinson says that the question ultimately comes down to “Who is the gatekeeper?” The people who choose the data on which AI is trained, and how often the model is updated, determine whether bias creeps in.

He says, “Technology should not deliver ideology.” In other words, the developers of the system need to examine and overcome their own tendencies toward bias.

What Do the Chatbots Say?

I felt that an article on AI also needed to interview generative AI tools. So—after I finished this draft—I fed the following prompt to ChatGPT and Google’s experimental chatbot, Bard:

List the risks of using AI and ways to overcome these risks specifically in medical, clinical situations. Explain the problem in practical terms that a clinician or computer programmer could use to make AI safer and more effective in medical settings.

The chatbots did quite well, identifying problems that health care workers should be concerned about, such as transparency, privacy protections, and the risk of relying too much on AI’s output. Both chatbots indicated that AI tools must give clinicians reasons to trust them.

ChatGPT directly raised the question of liability, the central issue in this article. Bard did not mention liability explicitly, but touched on it by citing as a risk, “Clinicians may become overly reliant on AI recommendations, neglecting their own clinical judgment and potentially missing critical information.”

ChatGPT also gets a plus mark for explicitly calling for regulation: “Establish clear guidelines and regulations for the ethical use of AI in healthcare. Foster collaboration between legal experts, policymakers, and healthcare professionals to address legal and ethical challenges.” In contrast, Bard listed recommendations for clinicians and programmers, but not for governments or regulators.

Do these robust answers mean that AI can correct its own weaknesses? Perhaps if we can train AI with ethical values, it can take on some of the tasks assigned to humans in this article.

AI is currently under a legal and regulatory magnifying glass, which is a difficult position but probably necessary. We need to understand how AI works in health care without burning it to a crisp.
