Thoughts

Start Here: Core Ideas

This site explores how information, technology, and crisis are reshaping healthcare and other institutions.

You can browse all posts below.


Joel Selanikio

AI in Coverage Decisions: We Need Guardrails, Not Prohibition

Lawmakers across the U.S. are moving to ban AI-only insurance denials, insisting that every case get a human sign-off. It sounds compassionate, but it risks locking us into the slow, opaque, and costly system patients and doctors already despise. The real opportunity is to build AI systems with guardrails — requiring transparency, audits, and contestable rationales — so denials are faster, clearer, and more accountable.

States are moving quickly to ban AI, and the press is cheering them on

After the U.S. Senate removed a proposed national moratorium that would have blocked states from regulating AI, 29 legislatures introduced 130+ health-AI bills, many targeting claim denials and prior authorization.

Photo of Arizona state legislature

Arizona state legislature

More specifically, legislators are trying to prevent insurers from letting AI deny claims without review by a physician. A good example is Arizona's H2175, which states that AI “may not be used to deny a claim or a prior authorization” and that “a health care provider shall individually review each claim” when medical judgment is involved.

The process has been remarkably bipartisan: Arizona's bill passed unanimously in both houses, with support from roughly equal numbers of Democrats and Republicans.

Press coverage, too, has been consistently supportive. As an example, a recent NBC News article on Arizona's bill provides supportive statements from the Arizona Medical Association, the sponsoring lawmaker, the American Medical Association, and a Harvard Law School professor. From the Arizona Medical Association:

Patients deserve healthcare delivered by humans with compassionate medical expertise, not pattern-based computer algorithms designed by insurance companies.
— Shelby Job, Communications Director, Arizona Medical Association

Across NBC, PBS, Fox, and local outlets, coverage has been uniformly supportive, without a single dissenting view. The unanimity is striking: it may be the first time PBS and Fox News have found themselves on the same side of a health policy debate.

I get this overwhelming support, I do. On the surface, a universal human sign-off seems compassionate and smart. But the effort is misguided, and it risks prolonging the very failures that make the existing, non-AI systems so painful for patients, doctors, and insurers.

I understand why these laws exist

Before I argue against these bills, let me be clear: the concerns driving them are legitimate. Several current lawsuits suggest that insurers have deployed AI systems badly, and that patients have been harmed.

The most notorious example came to light in 2023, when a lawsuit revealed that UnitedHealthcare was using an AI algorithm called nH Predict to determine how long elderly patients should remain in post-acute care facilities. According to the complaint, the system was known to have a staggering 90% error rate, yet the insurer continued denying claims based on its output, knowing that only a tiny fraction of patients would contest the denial. Patients were allegedly forced to leave rehabilitation facilities before they were ready, or their families faced crushing out-of-pocket costs to continue care. And in February of this year, a federal judge allowed the lawsuit to proceed.

Similarly, Cigna is currently being sued for using an automated system called PXDX that allowed doctors to deny claims in bulk, reportedly reviewing cases at a rate of one every two seconds. While Cigna claimed doctors were making the final decisions, that speed strongly suggests the "review" was perfunctory at best.

These aren't hypothetical risks. If the allegations hold up in court, they represent real failures that may have caused real harm. And they explain why lawmakers, and the public, are skeptical of letting insurers use AI without mandatory human oversight.

But prohibition just brings us back to square one

Here's where I part ways with the legislative response: these alleged failures appear to have been the result of bad faith, poor implementation, and inadequate oversight, not proof that AI is inherently unfit for coverage decisions. In both cases above, the problem wasn't that AI made the determination; it was that faulty systems were deployed without transparency or meaningful accountability.

Banning AI-only denials doesn't fix any of that. It just forces insurers to add a human signature to the process, a signature that, as the Cigna case suggests, can be just as perfunctory and meaningless as an automated one. And we'll still have the same lack of transparency about how decisions are made, the same difficulty appealing denials, the same abysmal speeds, and the same high cost.

In other words, banning AI-only denials condemns us to the existing system that everyone already hates.

If we truly want to improve the system, the question isn’t whether to use AI, but how

Frustrations with the existing human-mediated claims systems come down to three things:

  1. Speed — The universal frustration. Claims decisions take too long, slowing down care, disrupting doctors’ workflow, and delaying payment.

  2. Transparency — Denial letters are vague about the reasons for denial, making it impossible to know whether a decision is medically sound.

  3. Cost — The insurer’s primary concern. Manual processes and human review are expensive, and those costs ripple through the system to premiums and taxpayers.

table showing claims approval pain points for doctors, patients, and insurers

The one thing doctors, patients, and insurers can all agree on: the current systems are too slow

What the legislators (and the media) are missing is that we can use well-designed AI to address all three issues.

Reviewing medical necessity determinations is a narrow, highly structured task: reading documentation and checking it against coverage rules and practice standards. Modern LLMs already handle similarly structured tasks in finance and logistics, and they can do the same for claims.

An AI system can also promote transparency. Automated systems are better than people at producing plain-language, structured, reproducible rationales, and they can do it 24/7/365. They can log which policy clause applied, what evidence was missing, and how the decision could be overturned: far more detail than a cursory physician denial letter. The solution isn't to remove AI; it's to specify the outputs we want and require AI systems to provide them, as the sketch below illustrates.
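
To make that concrete, here is a minimal sketch of what a mandated, machine-readable denial rationale might look like. Everything in it is hypothetical: the field names, the policy clause, and the structure are illustrative, not drawn from any existing regulation or insurer system.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DenialRationale:
    """Hypothetical structured rationale an AI reviewer could be required to emit."""
    claim_id: str
    decision: str                   # "approved" or "denied"
    policy_clause: str              # the coverage rule that was applied
    plain_language_reason: str      # an explanation the patient can understand
    missing_evidence: list[str] = field(default_factory=list)
    overturn_path: str = ""         # what documentation would reverse the decision
    model_version: str = ""         # which model made the call, for auditability

rationale = DenialRationale(
    claim_id="C-2025-0001",
    decision="denied",
    policy_clause="Plan section 4.2: inpatient rehab requires documented failure of outpatient therapy",
    plain_language_reason="Your records do not show that outpatient physical therapy was tried first.",
    missing_evidence=["outpatient PT notes", "treating physician's letter of medical necessity"],
    overturn_path="Submit outpatient PT records, or a letter explaining why PT was not an option.",
    model_version="claims-review-model-v1.3",
)

# Every denial ships as structured data that patients, doctors, and auditors can read.
print(json.dumps(asdict(rationale), indent=2))
```

A human reviewer could write all of this too, of course; the point is that an automated system can be required to produce it every time, instantly, in a form other software can check.
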
Further, we can engineer quality into these systems far more easily and cheaply than with people. We can build in audits, sampling, and benchmarks that ensure verifiable fairness and reproducibility. And we can create HIPAA-compliant ways to allow third-party audits, including human audits, though more likely AI reviewing AI. A toy version of such an audit follows.
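
As an illustration of the sampling idea only, here is a toy audit loop. The re_review callable is a stand-in for a hypothetical independent reviewer (human or AI), and the sample size and disagreement threshold are arbitrary; a real regulatory audit would be far more involved.

```python
import random

def audit_sample(decisions, re_review, sample_size=100, max_disagreement=0.05):
    """Toy regulator audit: independently re-review a random sample of
    decisions and flag the system if disagreement is too frequent.
    `re_review` is a hypothetical independent reviewer (human or AI)."""
    sample = random.sample(decisions, min(sample_size, len(decisions)))
    disagreements = sum(1 for d in sample if re_review(d) != d["decision"])
    rate = disagreements / len(sample)
    return {"sampled": len(sample),
            "disagreement_rate": rate,
            "flagged_for_investigation": rate > max_disagreement}
```
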

And this last point isn't theoretical: tools like Claimable, Fight Health Insurance, and the completely free Counterforce Health already help patients generate appeal letters and contest denials in minutes. The Guardian has described this as an emerging "AI vs AI" arms race, in which patients' tools counter insurers' tools. The key is to remove barriers to these oversight AIs by standardizing access to decision packets and rationales, then letting developers battle it out to improve their products, as the sketch below suggests.
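
If denial rationales were standardized along the lines sketched earlier, appeal tooling becomes almost trivial to build. The function below is purely illustrative; it is not how Claimable, Fight Health Insurance, or Counterforce Health actually work.

```python
def draft_appeal(rationale: dict) -> str:
    """Turn a structured denial rationale (like the one sketched earlier)
    into the skeleton of an appeal letter. Purely illustrative."""
    lines = [
        f"Re: appeal of claim {rationale['claim_id']}",
        "",
        f"The denial cited: {rationale['policy_clause']}.",
        "We enclose the evidence the decision identified as missing:",
    ]
    lines += [f"  - {item}" for item in rationale.get("missing_evidence", [])]
    lines.append(f"Per the stated overturn path: {rationale['overturn_path']}")
    return "\n".join(lines)

denial = {
    "claim_id": "C-2025-0001",
    "policy_clause": "Plan section 4.2",
    "missing_evidence": ["outpatient PT notes"],
    "overturn_path": "Submit outpatient PT records.",
}
print(draft_appeal(denial))
```

Standardized, machine-readable inputs are what would make tools like this cheap to build and easy to audit.
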

 

Free AI-generated claims denial appeals from Counterforce Health

 

A better path: automation with the right guardrails 

The future of our slow, opaque, and costly system shouldn’t be to burden doctors with more administrative tasks that pull them from patient care. Modern LLMs can handle the narrow, bureaucratic task of reviewing claims against coverage rules. By requiring AI to provide plain-language denial reasons, instantly accessible to doctors and patients, and enforcing continuous audits by regulators, we can ensure fairness and transparency. This lets doctors focus on care, patients get clear answers, and insurers cut costs — a win for all.

Build, don’t ban

This is no different from how we’ve approached autonomous vehicles. Few argue we should ban self-driving cars outright — because that would eliminate the chance to improve a technology with huge potential to save lives. Instead, we regulate and monitor them, creating safety benchmarks and oversight while allowing the technology to evolve. Insurance coverage decisions deserve the same approach: guardrails, not prohibition.

Who will be first with instant, contestable, transparent denials?

Banning AI-only denials feels humane, but it entrenches the worst features of the old system: delay, opacity, and cost. The real solution is not to put more doctors into the administrative loop, but to design AI systems that are fast, transparent, and auditable, and to guarantee that patients and doctors have the documentation and tools needed to appeal effectively, including with AI.

Patients don't need slower denials with a human rubber stamp. They need faster, clearer, and more contestable decisions — and for the first time, AI makes that achievable. The first insurer to deliver instant, contestable, transparent denials will win trust and market advantage — and regulators should be enabling that, not blocking it.
Joel Selanikio

Hidden Connections: What John Muir Can Teach Us About Apple’s New Hypertension Notifications

The naturalist John Muir saw how everything in nature is connected — and today AI is showing us the same truth inside the body. From Apple Watch studies on atrial fibrillation to new hypertension alerts, hidden links in long-collected data are transforming how we understand health.

When we try to pick out anything by itself, we find it hitched to everything else in the universe.
— John Muir, My First Summer in the Sierra (1911)
Image of John Muir, circa 1902, by Helen Lukens Gaut

John Muir, circa 1902

When the naturalist John Muir penned that quote, he was writing about the natural world after working as a shepherd in California’s Sierra Nevada, immersed in the mutual dependence of mountains, forests, rivers, and animals.

More than a century later, his insight is proving just as true for the wilderness inside of us. We’re discovering that nothing in our physiology exists in isolation. The eye, the heart, the skin, the voice — each carries hidden connections to other systems. And increasingly, it is AI that is uncovering those links.

An Early Example: Apple Heart Study

image of iPhone and Apple watch from Apple Heart Study website

In 2017, Apple and Stanford launched the Apple Heart Study, enrolling more than 400,000 participants. Researchers asked a simple question: could the Apple Watch's optical pulse sensor — a photoplethysmography (PPG) sensor originally designed just to measure heart rate — be used to detect something far more serious?

The answer was yes. By applying machine learning, Apple showed that irregular pulse patterns captured by PPG could flag atrial fibrillation (AFib), a condition often silent until it causes a stroke. The connection between a consumer watch’s pulse sensor and a dangerous cardiac arrhythmia was a hidden connection — one not previously recognized or used at scale. And gaining the valuable additional information didn’t involve any new hardware at all.
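
As a toy illustration of the signal involved (and emphatically not Apple's actual algorithm, which was trained on far richer data), the chaotic beat-to-beat timing that AFib produces can be summarized with something as simple as a coefficient of variation over inter-beat intervals:

```python
import statistics

def irregularity_score(inter_beat_intervals_ms):
    """Coefficient of variation of inter-beat intervals: a crude proxy for
    the chaotic pulse timing that atrial fibrillation produces. Toy example."""
    mean = statistics.mean(inter_beat_intervals_ms)
    return statistics.stdev(inter_beat_intervals_ms) / mean

steady = [800, 810, 795, 805, 800, 798]      # a regular rhythm, about 75 bpm
afib_like = [620, 980, 710, 1100, 560, 900]  # chaotic beat-to-beat timing

print(f"steady:    {irregularity_score(steady):.3f}")     # small value
print(f"afib-like: {irregularity_score(afib_like):.3f}")  # much larger value
```
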

More Hidden Connections Revealed by AI

Since the Apple study, machine learning has surfaced hidden links across medicine, and not just on watches, phones, and other mobile devices:

Abnormal retinal image showing papilledema. By Jonathan Trobe, M.D. - University of Michigan Kellogg Eye Center - The Eyes Have It, CC BY 3.0, https://commons.wikimedia.org/w/index.php?curid=16115920
  • Retinal images — long used to check eye health — can reveal sex, race, smoking status, blood pressure, and even cardiovascular risk.

  • Gait and voice patterns — captured casually by smartphones or microphones — can detect the earliest signs of neurological disease without a visit to a neurologist.

  • Electrocardiograms (ECGs) — the same tracings used for decades to diagnose heart rhythm — now allow AI to screen non-invasively for anemia, electrolyte imbalance, and left ventricular dysfunction.

  • CT scans — often ordered for one purpose — can be re-analyzed at minimal cost to estimate bone density or coronary calcium, each tied to long-term risk, without additional scans (or radiation).

The information was always there. What’s changed is that AI gives us the ability to see it.

The Apple Watch and Hypertension

Apple Watch screen displaying "possible hypertension" message.

Perhaps the end of the beginning

This is particularly true for Apple’s new hypertension notifications, because they don’t use any kind of new sensor in the Apple Watch. The same green LEDs still measure pulse. The electrodes still capture heart rhythm. The accelerometer still tracks movement. What changed was Apple’s recognition that these existing signals already carried the imprint of something new: hypertension.

For years, millions of Apple Watch users have been streaming this data. But it’s only now — after training AI on the right relationships — that the watch can flag blood pressure risk with confidence.

Another hidden connection uncovered.

Beyond the Watch

We can expect this pattern to repeat everywhere: applying AI to old data streams to produce new insights, combined, of course, with new data streams — very likely including non-invasive blood glucose monitoring in the near future.

  • Credit card purchases will let us understand not just spending, but nutrition and mental health.

  • Browsing histories can reveal patterns in cognition and support education.

  • Medical images will be repurposed by AI to surface cardiovascular, skeletal, or metabolic risk factors unrelated to the original reason for the scan.

Muir’s observation was ecological, but it applies just as well here. Health is not a set of isolated numbers. It is a network of hidden links — and we are only beginning to trace them.

Closing Thought

Apple’s hypertension notifications are not the end of something — they are the beginning. They can help us imagine how every stream of data we collect may reveal more hidden connections between body systems than we ever imagined. And the future of health will be built on our ability to recognize those links connecting what we already measure to what we most need to know.
