Thoughts
Start Here: Core Ideas
This site explores how information, technology, and crisis are reshaping healthcare and other institutions.
If you're new here, these essays lay out the central themes:
Disruption for Doctors, Part 1: What Is Disruption?
A clear explanation of Clayton Christensen’s disruption theory, applied to medicine.
Disruption for Doctors, Part 2: Examples in Healthcare
How Walmart Health, AI diagnostics, and automation are already changing the system.
Disruption for Doctors, Part 3: The Rise of Self-Care
The biggest disruptions bypass doctors entirely — and move care into consumers’ hands.
You can browse all posts below.
AI in Coverage Decisions: We Need Guardrails, Not Prohibition
Lawmakers across the U.S. are moving to ban AI-only insurance denials, insisting that every case get a human sign-off. It sounds compassionate, but it risks locking us into the slow, opaque, and costly system patients and doctors already despise. The real opportunity is to build AI systems with guardrails — requiring transparency, audits, and contestable rationales — so denials are faster, clearer, and more accountable.
States are moving quickly to ban AI — and the press is cheering them on
After the U.S. Senate removed a proposed national moratorium that would have blocked states from regulating AI, 29 legislatures introduced 130+ health-AI bills, many targeting claim denials and prior authorization.
Arizona state legislature
More specifically, legislators are trying to prevent insurers from letting AI deny claims without the review of a physician. A good example is Arizona's H2175, which states that AI “may not be used to deny a claim or a prior authorization” and that “a health care provider shall individually review each claim” if medical judgment is involved.
This process has been remarkably bipartisan: Arizona's bill passed unanimously in both houses, with roughly equal numbers of Democrats and Republicans supporting it.
Press coverage, too, has been consistently supportive. As an example, a recent NBC News article on Arizona's bill provides supportive statements from the Arizona Medical Association, the sponsoring lawmaker, the American Medical Association, and a Harvard Law School professor. From the Arizona Medical Association:
“Patients deserve healthcare delivered by humans with compassionate medical expertise, not pattern-based computer algorithms designed by insurance companies.”
Across NBC, PBS, Fox, and local outlets, coverage was uniformly supportive, without a single dissenting view — perhaps the first time PBS and Fox News have found themselves on the same side of a health policy debate.
But I get this overwhelming support, I do. On the surface, a universal human sign-off seems compassionate and smart. But the effort is misguided, and it risks prolonging the very problems that make the existing, non-AI systems so painful for patients, doctors, and insurers.
I understand why these laws exist
Before I argue against these bills, let me be clear: the concerns driving them are legitimate. Several current lawsuits suggest that insurers have deployed AI systems badly, and that patients have been harmed.
The most notorious example came to light in 2023, when a lawsuit revealed that UnitedHealthcare was using an AI algorithm called nH Predict to determine how long elderly patients should remain in post-acute care facilities. According to the complaint, the system was known to have a staggering 90% error rate, yet the insurer continued denying claims based on its output, knowing that only a tiny fraction of patients would contest the denial. Patients were allegedly forced to leave rehabilitation facilities before they were ready, or their families faced crushing out-of-pocket costs to continue care. And in February of this year, a federal judge allowed the lawsuit to proceed.
Similarly, Cigna is currently being sued for using an automated system called PXDX that allowed doctors to deny claims in bulk — reportedly reviewing cases at a rate of one every two seconds. While Cigna claimed doctors were making the final decisions, that speed strongly suggests the "review" was perfunctory at best.
These aren't hypothetical risks. If the allegations hold up in court, they represent real failures that may have caused real harm. And they explain why lawmakers — and the public — are skeptical of letting insurers use AI without mandatory human oversight.
But prohibition just brings us back to square one
Here's where I part ways with the legislative response: these alleged failures appear to have been the result of bad faith, poor implementation, and inadequate oversight — not proof that AI is inherently unfit for coverage decisions. In both cases above, the problem wasn't that AI made the determination — it was that faulty systems were deployed without transparency or meaningful accountability.
Banning AI-only denials doesn't fix any of that. It just forces insurers to add a human signature to the process — a signature that, as the Cigna example shows, can be just as perfunctory and meaningless as an automated one. And we'll still have the same lack of transparency about how decisions are made, the same difficulty appealing denials, the same abysmal speeds, and the same high cost.
In other words, banning AI-only denials condemns us to the existing system that everyone already hates.
If we truly want to improve the system, the question isn’t whether to use AI, but how
Frustrations with the existing human-mediated claims systems come down to three things:
Speed — The universal frustration. Claims decisions take too long, slowing down care, disrupting doctors’ workflow, and delaying payment.
Transparency — Denial letters are vague about the reasons for the decision, making it impossible to know whether it is medically sound.
Cost — The insurer’s primary concern. Manual processes and human review are expensive, and those costs ripple through the system to premiums and taxpayers.
The one thing doctors, patients, and insurers can all agree on: the current systems are too slow
What the legislators (and the media) are missing is that we can use well-designed AI to address all three issues.
Reviewing medical-necessity determinations is a narrow, highly structured task: reading documentation and checking it against coverage rules and practice standards. Modern LLMs already handle similar structured tasks in finance and logistics — and can do the same for claims.
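To make that concrete, here's a minimal sketch of what the core review step could look like. Everything in it is illustrative: the prompt wording and the injected call_llm helper are my own, not any insurer's actual system.

```python
# Illustrative sketch of an LLM-based medical-necessity review.
# The prompt and the injected call_llm() helper are hypothetical.
import json

REVIEW_PROMPT = """You are reviewing a prior-authorization request.

Coverage policy:
{policy}

Clinical documentation:
{documentation}

Respond with JSON containing: decision ("approve" or "deny"),
policy_clause, missing_evidence (list), and plain_language_rationale."""

def review_claim(policy: str, documentation: str, call_llm) -> dict:
    """Check documentation against coverage rules; return a structured decision."""
    raw = call_llm(REVIEW_PROMPT.format(policy=policy, documentation=documentation))
    return json.loads(raw)
```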
An AI system can also promote transparency. Automated systems are better than people at producing plain-language, structured, reproducible rationales, and they can do it 24/7/365. They can log which policy clause applied, what evidence was missing, and how the decision could be overturned. That's far more detail than a cursory physician denial letter. The solution isn't to remove AI; it's to specify the outputs we want and require AI systems to provide them.
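For example, "the outputs we want" could be pinned down as a fixed schema that every automated denial must emit. The field names below are my own invention, chosen to capture the elements described above:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DenialRationale:
    """Hypothetical schema for a contestable, auditable denial."""
    claim_id: str
    policy_clause: str            # the exact clause the denial rests on
    evidence_reviewed: list[str]  # documents the system actually examined
    missing_evidence: list[str]   # what would change the decision
    plain_language_reason: str    # patient-readable explanation
    appeal_instructions: str      # how to contest the decision
    model_version: str            # pinned for reproducibility and audits
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```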
Further, we can engineer quality into these systems far more easily and cheaply than with people. We can build in audits, sampling, and benchmarks that ensure verifiable fairness and reproducibility. We can also create HIPAA-compliant channels for third-party audits: some human review, but probably mostly AI reviewing AI.
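A few lines of code are enough to show how cheap this kind of oversight is. The sampling rate and the shape of the decision records below are arbitrary choices for illustration:

```python
import random

def audit_sample(decisions: list[dict], rate: float = 0.05, seed: int = 0) -> list[dict]:
    """Randomly sample a fixed fraction of decisions for independent re-review."""
    rng = random.Random(seed)  # fixed seed so the sample itself is reproducible
    k = max(1, int(len(decisions) * rate))
    return rng.sample(decisions, k)

def agreement_rate(sampled: list[dict], second_opinion) -> float:
    """Fraction of sampled decisions an independent reviewer (human or AI) upholds."""
    upheld = sum(second_opinion(d) == d["decision"] for d in sampled)
    return upheld / len(sampled)
```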
And this last point isn't theoretical: tools like Claimable, Fight Health Insurance, and the completely free Counterforce Health already help patients generate appeal letters and contest denials in minutes. The Guardian has described this as an emerging "AI vs AI" arms race, where patients' tools counter insurers' tools. The key is to remove barriers to these oversight AIs by standardizing access to decision packets and rationales, then letting developers battle it out to improve their products.
Free AI-generated claims denial appeals from Counterforce Health
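None of these products publish their internals, so the sketch below is purely the general shape of the idea: an appeal generator that consumes the kind of standardized rationale shown earlier.

```python
import json

APPEAL_PROMPT = """Draft a formal appeal letter for this denied claim.

Insurer-provided denial rationale (structured):
{rationale}

Cite the policy clause relied on, address each item of missing evidence,
and request reconsideration with a deadline for response."""

def draft_appeal(rationale: dict, call_llm) -> str:
    """Turn a structured denial rationale into an appeal letter."""
    return call_llm(APPEAL_PROMPT.format(rationale=json.dumps(rationale, indent=2)))
```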
A better path: automation with the right guardrails
The future of our slow, opaque, and costly system shouldn’t be to burden doctors with more administrative tasks that pull them from patient care. Modern LLMs can handle the narrow, bureaucratic task of reviewing claims against coverage rules. By requiring AI to provide plain-language denial reasons, instantly accessible to doctors and patients, and enforcing continuous audits by regulators, we can ensure fairness and transparency. This lets doctors focus on care, patients get clear answers, and insurers cut costs — a win for all.
Build, don’t ban
This is no different from how we’ve approached autonomous vehicles. Few argue we should ban self-driving cars outright — because that would eliminate the chance to improve a technology with huge potential to save lives. Instead, we regulate and monitor them, creating safety benchmarks and oversight while allowing the technology to evolve. Insurance coverage decisions deserve the same approach: guardrails, not prohibition.
Who will be first with instant, contestable, transparent denials?
Banning AI-only denials feels humane, but it entrenches the worst features of the old system: delay, opacity, and cost. The real solution is not to put more doctors into the administrative loop, but to design AI systems that are fast, transparent, and auditable, and to guarantee that patients and doctors have the documentation and tools needed to appeal effectively, including with AI.
Patients don't need slower denials with a human rubber stamp. They need faster, clearer, and more contestable decisions — and for the first time, AI makes that achievable. The first insurer to deliver instant, contestable, transparent denials will win trust and market advantage — and regulators should be enabling that, not blocking it.
A Closer Look at FDA's AI Medical Device Approvals (2022)
FDA approvals of AI-enabled medical devices are accelerating—but not in the way you might expect. While new startups are entering the space, the real winners remain legacy giants like GE and Siemens. An analysis of the latest FDA data reveals a classic case of sustaining innovation, not disruption, as established players integrate AI to reinforce their dominance.
On October 5th, 2022, the FDA added 178 new devices to its list of (approved) “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices.” The full list is conveniently available at that page for download as an Excel file, and I decided to take advantage of this to dig a little deeper into the data. Mostly I was interested in the pace of FDA approvals — which of course reflects both the number of applications and the speed of the FDA process. What I found was interesting, and not exactly what I expected.
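If you'd like to follow along, the analysis needs only a few lines of pandas. The column names below ("Date of Final Decision", "Company") are assumptions; check them against your own copy of the Excel file:

```python
# Sketch of loading the FDA AI/ML device list for analysis.
# Column names ("Date of Final Decision", "Company") are assumptions;
# verify them against the actual spreadsheet.
import pandas as pd

df = pd.read_excel("ai_ml_enabled_devices.xlsx")  # file downloaded from the FDA page
df["year"] = pd.to_datetime(df["Date of Final Decision"]).dt.year

approvals_per_year = df.groupby("year").size()
print(approvals_per_year)
```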
Picking up the pace of approvals
Not too surprisingly, considering the hype around AI — and its potential — we can see that the pace of FDA approvals has picked up substantially in the last 10 years. While 2022 is not yet complete as of this writing, if approvals continue at the current pace, the year should just exceed 2021:
FDA AI-Powered Medical Device Approvals by Year
Lonely at the top
With a bit more digging, we can also see that the number of companies receiving at least one approval is on the upswing, as well:
Number of companies receiving at least one FDA approval by year
Despite the increase in the number of companies playing in the space, success in FDA medical AI approvals also seems to be becoming more concentrated.
For example, in 2022 so far, just 10 of the 55 companies winning at least one approval (18%) were responsible for more than half of total approvals. Five years ago, it took 55% of approval-winning companies to account for that same share:
Percent of Companies Winning Approval Each Year Getting 50% of Approvals
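In case the metric is unclear, here is roughly how it can be computed from the DataFrame loaded above (again assuming a "Company" column):

```python
import pandas as pd

def pct_companies_for_half(df_year: pd.DataFrame) -> float:
    """Smallest share of companies whose combined approvals cover at least
    half the year's total, as a percentage of all approval-winning companies."""
    counts = df_year.groupby("Company").size().sort_values(ascending=False)
    needed = int((counts.cumsum() < counts.sum() / 2).sum()) + 1
    return 100 * needed / len(counts)

# Example: pct_companies_for_half(df[df["year"] == 2022])
```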
Your grandfather’s medical AI?
The most surprising point hidden in the dataset was that a lot of the action in medical AI isn’t coming from Big Tech companies known for AI, like Apple or Amazon or Google/Verily, but from much, much older Big Med Tech industrial companies, with GE (founded 1892) and Siemens (founded 1847) taking the number 1 and 2 spots, respectively.
[Note that I’ve counted related company divisions as a single unit. For example: GE Medical Systems, GE Healthcare Japan, and GE Healthcare are just counted as “GE”.]
Here are the six companies taking the top five spots for number of approvals in 2022 so far, with approval counts in parentheses (Philips and Canon tied for fourth place with 4 approvals each):
Top 6 Companies for FDA AI Approvals in 2022 (so far)
The average age of these top six companies is 89 years old! Leaving out Hyperfine and Aidoc, that figure rises to 130. Not exactly startups.
Nonetheless, the oldsters are being crowded out gradually as the median age of companies getting approvals goes down:
Mean and median age of companies receiving FDA approval
Sustaining innovation rules medical AI
So we’ve established that some pretty ancient companies are among the dominant players in medical AI. But is the tide shifting towards the startups (i.e. towards disruption)?
I’ll define an “older” company as >= 20 years old.
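In code terms (continuing the sketch from above, with a hand-assembled "founded" column giving each company's founding year), the split driving the next two charts looks like this:

```python
# "founded" is a hand-assembled column with each company's founding year.
df["older"] = (df["year"] - df["founded"]) >= 20

# Share of approval-winning companies that are older, per year:
older_company_share = (
    df.groupby(["year", "Company"])["older"].first()
      .groupby(level="year").mean() * 100
)

# Share of approvals going to older companies, per year:
older_approval_share = df.groupby("year")["older"].mean() * 100
```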
Certainly we can see that the percentage of companies winning at least one approval each year that are older is decreasing:
Percentage of companies winning at least one approval that are >=20 years old vs younger
This may reflect the increasing simplicity and economy of AI tools available, and the increasing number of coders who understand how to use them: the cost in time and money of playing in the AI space is going down, and that means younger and smaller companies can join the game.
At the same time, older companies gain the benefit of those decreasing costs, too, and they have more resources with which to deploy them. Here’s a chart showing the percentage of approvals going to older vs younger companies:
Percentage of FDA approvals going to companies >= 20 years old vs younger
What the last two graphs show us is that while younger companies are increasingly in the game, the older companies (with presumably greater resources) are more than holding their own in terms of productivity. They make up a smaller percentage of companies winning at least one approval each year, but are taking an increasing share of those approvals.
So using FDA approvals as the metric, it doesn’t look like there’s much disruptive innovation going on in AI-powered med tech: we don’t see established medical device makers like GE being pushed aside (at least not yet). Instead we see:
increasing numbers of new companies entering the space and winning FDA approvals
continued dominance by established med tech players as they add AI onto their existing products to improve performance and appeal to their best existing customers.
In other words, classic sustaining innovation.
Does this mean that AI isn’t causing disruption related to healthcare? No, not at all. But, as I’ve explained in previous articles on disruption within healthcare and disruption to healthcare, it shows that the same technology (AI) can be used to
sustain current healthcare practices (e.g. GE and Siemens and Canon, along with some youngsters, making hospital-based imaging cheaper, more useful, and more accurate)
enable players outside traditional healthcare to shift low-hanging fruit to another part of the healthcare system (e.g. Walmart Health)
allow companies completely outside healthcare to shift other low-hanging fruit completely outside healthcare (e.g. ResApp)
Thoughts about the above? Inside knowledge you’d like to share? Feel free to contact me privately.
Further Reading
If you’re interested in learning more about the FDA approval process for AI medical devices, I strongly recommend “How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals” by Eric Wu et al. in Nature Medicine (April 2021).
A good, but now somewhat out-of-date, discussion of the types of AI medical devices can be found in “The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database” by Stan Benjamens et al. in npj Digital Medicine (September 2020).
There’s also a nice discussion of the current FDA rules for software-as-a-medical-device (SaMD) at Emerge (June 2022).