Shallow Dives

The Prosecutor's Fallacy: Why 'One in a Million' Doesn't Mean What You Think

The Hook

In 1999, British solicitor Sally Clark was convicted of murdering her two infant sons based on expert testimony that the odds of two children in one family dying from Sudden Infant Death Syndrome (SIDS) were "one in 73 million." The statistic seemed damning: surely such an astronomically unlikely event must mean murder. But Clark was innocent, and the statistical reasoning that convicted her was fundamentally flawed. She spent three years in prison before the conviction was overturned—a victim of one of probability's most dangerous illusions.

The Core Concept

The prosecutor's fallacy inverts conditional probability in a way that feels intuitive but is mathematically backwards. It confuses two very different questions:

  1. "What's the probability of this evidence if the defendant is innocent?" (often very low)
  2. "What's the probability the defendant is innocent given this evidence?" (what we actually need to know)

These are not the same thing. To see why, imagine a disease that affects 1 in 10,000 people, and a test that is 99% accurate—meaning it correctly flags 99% of people who have the disease, and wrongly flags 1% of people who don't. If you test positive, what's the chance you actually have the disease? Your intuition probably screams "99%!" But the correct answer is only about 1%.

Here's why: In a population of 10,000 people, only 1 person truly has the disease. The test correctly identifies them. But the test also produces false positives for 1% of the 9,999 healthy people—that's about 100 false alarms. So when you test positive, you're one of 101 positive results, but only 1 of those is a true case. Your probability of having the disease is roughly 1/101, or less than 1%.

This is Bayes' Theorem in action: to properly interpret evidence, you must consider not just how rare the evidence is for innocent people, but also how common innocence is to begin with—the base rate.
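The disease-test example above can be worked through directly with Bayes' Theorem. This is a minimal sketch using the numbers from the text (1-in-10,000 prevalence, 99% sensitivity, 1% false-positive rate):

```python
# Bayes' Theorem applied to the disease-test example from the text.
prevalence = 1 / 10_000       # P(disease) — the base rate
sensitivity = 0.99            # P(positive | disease)
false_positive_rate = 0.01    # P(positive | healthy)

# Total probability of testing positive:
# P(pos) = P(pos | disease)·P(disease) + P(pos | healthy)·P(healthy)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))

# Bayes' Theorem: P(disease | positive)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {posterior:.4f}")  # ≈ 0.0098, under 1%
```

Despite the "99% accurate" test, the posterior probability is under 1%—exactly the 1-in-101 intuition from the population count above.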

Real-World Consequences

The Sally Clark case exemplifies the real-world stakes. The expert witness calculated the probability of two SIDS deaths in one family by squaring the individual probability (about 1 in 8,500), arriving at 1 in 73 million. This calculation assumed the deaths were independent events—already questionable given genetic and environmental factors that run in families.

But the deeper error was treating this statistic as the probability of Clark's innocence. The prosecution presented "one in 73 million" as if it meant there was only a one-in-73-million chance the deaths were natural. This ignored the crucial question: What's the probability of the evidence (two infant deaths) if Clark were guilty versus innocent?

Double infant murders are also extraordinarily rare. When statisticians later analyzed the case properly, considering base rates for both double SIDS and double infant homicide, they found the evidence slightly favored innocence over guilt. The impressive-sounding statistic had been turned on its head.
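The proper comparison can be sketched as a ratio of prior probabilities: given that two deaths occurred, which explanation was more likely to begin with? The 1-in-8,500 SIDS figure comes from the case; the double-homicide rate below is a purely illustrative assumption (not the statisticians' actual figure), chosen only to show the shape of the argument:

```python
# Comparing the two explanations for two infant deaths in one family.
# The per-child SIDS rate is the figure cited in the case; the
# double-infant-homicide rate is an ILLUSTRATIVE assumption, not the
# real statistic from the later reanalysis.

p_sids = 1 / 8_500
# Naive independence assumption, as the expert witness made:
p_double_sids = p_sids ** 2          # ~1 in 73 million

# Hypothetical base rate for a mother murdering two infants:
p_double_homicide = 1 / 200_000_000

# Both explanations are astronomically rare; what matters is the ratio.
odds_ratio = p_double_sids / p_double_homicide
print(f"Double SIDS is about {odds_ratio:.1f}x as likely as double murder")
```

Even with the prosecution's own (flawed) independence assumption, the comparison can favor the innocent explanation once the rarity of the alternative is put on the table—which is the point the "one in 73 million" framing hid.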

Key Takeaways

Rare evidence doesn't equal rare innocence. When you hear "there's only a 1% chance this would happen if they were innocent," remember that doesn't mean "there's a 99% chance they're guilty." You need to know how likely the evidence is under both scenarios.

Base rates matter enormously. Before updating your beliefs based on new evidence, consider how common each explanation is to begin with. If false positives outnumber true positives because the underlying condition is rare, even "accurate" tests mislead.

Your dinner party version: "You know how prosecutors say 'the odds of coincidence are a million to one'? That's backwards. Unlikely coincidences happen all the time—what matters is whether the alternative explanation is even more unlikely."

The Deeper Pattern

The prosecutor's fallacy thrives because we're wired to find patterns and assign agency to randomness. When something improbable happens, we intuitively assume something—or someone—must have caused it. But in a world of billions of people living billions of moments, million-to-one events happen thousands of times a day. The question isn't whether the evidence is rare, but whether it's rarer than the alternative explanation.

Next time someone cites astronomical odds to prove a point, ask yourself: are they telling you the probability of the evidence given innocence, or the probability of innocence given the evidence? The difference could be someone's freedom—or your own judgment.

References

  • Nobles, R., & Schiff, D. (2005). "Misleading Statistics Within Criminal Trials: The Sally Clark Case." Significance, Royal Statistical Society.
  • Dawid, A. P., & Mortera, J. (2008). "Forensic Identification and the Prosecutor's Fallacy." Cambridge Law Journal.
  • Thompson, W. C., & Schumann, E. L. (1987). "Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor's Fallacy and the Defense Attorney's Fallacy." Law and Human Behavior.

Further Reading