When Health AI Makes a Mistake, Who's Liable?

A few weeks ago I got into an exchange on X about liability for AI health products. Someone suggested that liability for medical AI was still unclear, and I realized I had never actually looked into it. I'd never read the various terms of service, the regulatory agreements, or the FDA guidance. So I decided to get myself smarter on exactly what's happening here.

Part of what had prompted that exchange was that two significant AI health stories had both hit in the first week of January 2026 — new tools, each taking a very different liability approach.

  1. Utah became the first state to let an AI autonomously renew prescriptions.

  2. OpenAI launched ChatGPT Health, connecting the chatbot to your medical records and Apple Health data.

I've been writing about the migration of healthcare tasks from exam rooms to living rooms for a while now. What's new isn't that AI is doing health-related work. What's new is that the legal and regulatory frameworks are now being actively constructed, and the approaches are strikingly different.

Let me walk through what I found.

Doctronic: the cutting edge

Let’s start with the one that went furthest.

In January, Utah partnered with Doctronic to launch the first AI system in the country legally authorized to make clinical decisions about prescription renewals — 192 commonly prescribed drugs, $4 per refill. To make this possible, Utah's Office of Artificial Intelligence Policy signed a Regulatory Mitigation Agreement (RMA) with Doctronic in October 2025, under which the Division of Professional Licensing agreed to forgo enforcement actions for what would otherwise require medical licensure — specifically for this pilot, for 12 months.

This is the more forthright regulatory approach of the two. The state openly acknowledges that the AI is performing a clinical function. Doctronic has secured malpractice insurance holding its AI to the same standard of care as a human physician. And the RMA explicitly preserves patients' legal remedies: Section 4.D states that the agreement "does not waive or modify any legal remedies available to any individual harmed by any action of Participant's AI technology."

Doctronic also operates a separate consumer product — a health information chatbot available in all 50 states — with its own Terms of Service that take a very different posture: "Doctronic is an AI doctor, not a licensed doctor, does not practice medicine." The Utah pilot operates under the RMA, on an entirely different legal footing.

Basically, Doctronic evolved from a chatbot disclaiming medical practice to a state-authorized AI prescriber in a matter of months. The regulator is comfortable moving at that pace; as Zach Boyd, director of Utah's Office of AI Policy, put it: "We're in the Department of Commerce ... which is the entity that draws and redraws those lines all the time."

OpenAI: the disclaimer approach

OpenAI takes a very different tack. To understand the legal picture, it helps to know that OpenAI has three health-related products — and to see exactly where the legal documents overlap and where they don't.

Start with what's the same. OpenAI uses a unified policy structure, meaning the safety standards and liability terms for a standard ChatGPT conversation and the dedicated ChatGPT Health feature are governed by the same core documents. The Usage Policies are identical: you cannot use either one for "tailored medical advice" or "high-stakes" clinical decisions without a human professional. The Terms of Use are identical: both carry the same disclaimer ("not a medical device," "not for diagnosis," "informational only"), the same $100 liability cap, and the same assumption of risk. If you follow a suggestion from the Health tab and it harms you, OpenAI's legal protection is exactly the same as if it had happened in a regular chat.

The only document that differs is the Health Privacy Notice, which applies exclusively to the Health feature. It promises that health data is segmented from your general chat history, sets a strict default that health content is never used to train foundation models, and governs syncing with third-party sources like Apple Health and b.well (which aggregates data from 2.2 million U.S. providers).

In other words: you aren't "allowed" to do more medical things in the Health tab than in regular ChatGPT. The Health feature provides better tools (medical record syncing, lab result uploads) and stricter privacy (no model training) for the things you were already allowed to do. But the liability framework is unchanged.

This matters because real harm has already been documented from regular ChatGPT — not the new Health feature. A man was hospitalized for three weeks with bromism after ChatGPT recommended sodium bromide as a table salt substitute. Warren Tierney, a 37-year-old Irish father, delayed cancer care after ChatGPT repeatedly assured him it was "highly unlikely." Neither of these has resulted in litigation, but separately, seven wrongful death lawsuits were filed against OpenAI in November 2025, including Raine v. OpenAI, alleging that ChatGPT encouraged a teenager's suicide. Those cases are testing whether a $100 liability cap and a disclaimer can hold when the product increasingly functions as a health advisor. And now the product can ingest your full medical record — under the same legal terms.

It's also worth noting that in October 2025, OpenAI updated its usage policies to prohibit "provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional." Then two months later, they launched ChatGPT Health — a product specifically designed to provide personalized health information by ingesting your medical records. Make it make sense.

[Note: ChatGPT for Healthcare is a separate enterprise product for health systems governed by OpenAI's business Services Agreement with a Business Associate Agreement for HIPAA compliance. Different product, different market — hospitals and clinics, not consumers.]

Comparing two product approaches

|  | Doctronic (Utah) | ChatGPT / ChatGPT Health |
| --- | --- | --- |
| What it does | AI autonomously renews prescriptions | Interprets medical records and lab results |
| Regulatory framework | State RMA waiving licensure laws | Consumer TOS; no health-specific regulation |
| Acknowledges clinical role? | Yes — explicitly | No — same "not for diagnosis" disclaimer for both versions |
| Liability structure | Malpractice insurance; patient remedies preserved | $100 cap; identical for regular ChatGPT and Health feature |
| What differs for health use? | Entirely separate legal framework (RMA) | Only the Health Privacy Notice (data handling, not liability) |

The bigger picture: no size fits all

What you're looking at is an evolving field where AI health tools are emerging faster than any single regulatory framework can accommodate — and different actors are responding in very different ways. Doctronic sought state authorization and malpractice insurance. OpenAI added clinical capabilities without adding liability terms.

And the landscape is expanding. Doctronic is already in talks with Arizona and Texas. H.R. 238 (introduced in January 2025) would create a federal framework recognizing AI as a "practitioner licensed by law to administer drugs." The wrongful death cases against OpenAI are testing the disclaimer approach in court.

What's notable is how differently the two companies are responding: Doctronic working inside Utah's proactive regulatory sandbox, OpenAI relying on unchanged liability terms.

So who's liable when health AI makes a mistake? It depends entirely on which product you're using. If Doctronic's AI renews the wrong prescription in Utah, there's malpractice insurance and the RMA preserves your right to sue. If ChatGPT Health steers you wrong after ingesting your full medical record, your recourse is capped at $100 — the same as if you'd asked it about a recipe (if that holds up in court). The answer to the title question isn't "nobody" across the board. It's that the answer varies enormously from product to product — and that's the landscape we're in right now.


This post draws on themes from my keynote talks.

→ Follow new essays via RSS or on LinkedIn.
