At its simplest, Bayes’s theorem describes the probability of an event based on prior knowledge of conditions that might be related to the event. But in Everything Is Predictable, Tom Chivers lays out how it affects every aspect of our lives. He explains why even highly accurate screening tests produce large numbers of false positives, and how failures to account for this in court have put innocent people in jail. Many argue that Bayes’s theorem, a cornerstone of rational thought, is a description of almost everything.
But who was the man who lent his name to this theorem? How did an 18th-century Presbyterian minister and amateur mathematician uncover a theorem that would affect fields as diverse as medicine, law, and artificial intelligence? Fusing biography and intellectual history, Everything Is Predictable is an entertaining tour of Bayes’s theorem and its impact on modern life, showing how a single compelling idea can have far-reaching consequences.
Tom Chivers is an author and the award-winning science writer for Semafor. Previously he was the science editor at UnHerd.com and BuzzFeed UK. His writing has appeared in The Times (London), The Guardian, New Scientist, Wired, CNN, and more. He was awarded the Royal Statistical Society’s “Statistical Excellence in Journalism” awards in 2018 and 2020, and was declared the science writer of the year by the Association of British Science Writers in 2021. His books include The Rationalist’s Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity’s Future, and How to Read Numbers: A Guide to Stats in the News (and Knowing When to Trust Them). His new book is Everything Is Predictable: How Bayesian Statistics Explain Our World.
Shermer and Chivers discuss:
- Who was Thomas Bayes, what was his equation, and what problem did it solve?
- Bayesian decision theory vs. statistical decision theory
- Popperian falsification vs. Bayesian estimation
- Sagan’s ECREE principle (extraordinary claims require extraordinary evidence)
- Bayesian epistemology and family resemblance
- Paradox of the heap
- Bayesian brain
- Reality as controlled hallucination
- Bayesian prediction errors and why we can’t tickle ourselves
- Bayes and human irrationality
- Superforecasting
- Types of truth
- Mystical experiences and religious truths
- Replication Crisis in science
- Statistical Detection Theory and Signal Detection Theory
- Medical diagnosis problem and why most people get it wrong
Show Notes
Medical Diagnosis Problem and Why Most People Get It Wrong
You go to the doctor not feeling well, and they run some diagnostic tests, which indicate that you might have cancer. They tell you that this disease occurs in 1 out of 100 people, a 1% prevalence rate. The test’s sensitivity for this type of cancer is 90%: of people who have the cancer, 90% will test positive. The test’s false positive rate is 9%: of people who do not have the cancer, 9% will nonetheless test positive. What is the percent likelihood that you have cancer?
When people are presented with this problem, the most common answer is between 80% and 90%. The correct answer is 9%. This problem is so counterintuitive that not only do most laypeople get it wrong, most medical professionals get it wrong. Think about that: a physician whom you trust runs some diagnostic tests and informs you that you have a 90% chance of having cancer when, in fact, it’s only 9%. That is a huge difference when deciding whether or not to pursue treatment. What has gone wrong here? Let’s reframe the problem in terms of a group of people tested for cancer and see how the numbers cash out:
- In a sample size of 1,000 people, 10 have cancer (the base rate of 1%).
- Of these 10 people, 9 will test positive (the 90% sensitivity of the test).
- Of the 990 people without cancer, 89 will test positive (the 9% false-positive rate).
- A person tests positive. Does this person have cancer or not?
Here is how we compute the answer:
- Out of 1,000 people tested for cancer
- 98 of them test positive in total (9 + 89)
- 9 of them have cancer
- 9 divided by 98 = 0.0918, or ~9%
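In Bayesian terms, the computation is P(cancer | positive) = P(positive | cancer) × P(cancer) / P(positive). Here is a minimal sketch of the same arithmetic in Python (the function name posterior is ours, for illustration only):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(cancer | positive test), computed by Bayes's rule."""
    true_positives = prior * sensitivity                  # have cancer and test positive
    false_positives = (1 - prior) * false_positive_rate  # no cancer but test positive
    return true_positives / (true_positives + false_positives)

# The numbers from the diagnosis problem above:
print(posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09))
# 0.0917..., roughly 9%, matching the head count of 9 out of 98
```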
Why do most people get such problems wrong in the original framing? The answer is threefold:
- Base rate neglect: the rate at which the phenomenon occurs is ignored or discounted. Here the low 1% base rate means the cancer is rare (in Bayesian language, the prior probability is low).
- Probabilities are counterintuitive: they apply to populations of people, not to any one person.
- Cognitive heuristics: we are not naturally Bayesian in our reasoning; instead we use cognitive shortcuts, or rules of thumb.
The classic illustration of such a heuristic is Tversky and Kahneman’s Linda problem:
Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable: (1) Linda is a bank teller, or (2) Linda is a bank teller and is active in the feminist movement? Most people choose (2), even though a conjunction of two events can never be more probable than either event alone (the conjunction fallacy).
Pinker on Blindness to Base Rates
Why can’t we predict who will attempt suicide? Why don’t we have an early-warning system for school shooters? Why can’t we profile terrorists or rampage shooters and detain them preventively? The answer comes out of Bayes’s rule: a less-than-perfect test for a rare trait will mainly turn out false positives. The heart of the problem is that only a tiny proportion of the population are thieves, suicides, terrorists, or rampage shooters (the base rate). Until the day that social scientists can predict misbehavior as accurately as astronomers predict eclipses, their best tests would mostly finger the innocent and harmless.
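To see the force of this numerically, here is an illustrative Bayes calculation with purely hypothetical numbers (ours, not Pinker’s): suppose 1 person in 100,000 is a would-be rampage shooter, and suppose a screening test that is 99% accurate in both directions.

```python
prior = 1 / 100_000          # hypothetical base rate of the rare trait
sensitivity = 0.99           # the test flags 99% of true positives...
false_positive_rate = 0.01   # ...and wrongly flags 1% of everyone else

flagged_and_guilty = prior * sensitivity
flagged_but_innocent = (1 - prior) * false_positive_rate
p_guilty_given_flagged = flagged_and_guilty / (flagged_and_guilty + flagged_but_innocent)
print(f"P(trait | flagged) = {p_guilty_given_flagged:.4%}")  # ~0.0989%
```

Even with a test far more accurate than anything social science can offer, roughly 999 of every 1,000 people flagged would be innocent, which is exactly Pinker’s point about base rates.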
Bayesian Reasoning About UFOs
In her 2010 book UFOs: Generals, Pilots and Government Officials Go on the Record, the UFOlogist Leslie Kean admitted that “roughly 90 to 95 percent of UFO sightings can be explained” as:
…weather balloons, flares, sky lanterns, planes flying in formation, secret military aircraft, birds reflecting the sun, planes reflecting the sun, blimps, helicopters, the planets Venus or Mars, meteors or meteorites, space junk, satellites, swamp gas, spinning eddies, sundogs, ball lightning, ice crystals, reflected light off clouds, lights on the ground or lights reflected on a cockpit window, temperature inversions, hole-punch clouds, and the list goes on.
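To make the Bayesian reasoning explicit, here is an illustrative sketch in which every number is an assumption of ours, not Kean’s or Shermer’s: grant the alien hypothesis the most favorable likelihood (a genuine alien craft would always defy explanation) and use the residual rate of unexplained sightings as the mundane likelihood.

```python
prior_alien = 1e-6                   # assumed prior that a given sighting is an alien craft
p_unexplained_given_alien = 1.0      # charitably assume an alien craft always defies explanation
p_unexplained_given_mundane = 0.075  # the ~5-10% residual of sightings that go unexplained

numerator = prior_alien * p_unexplained_given_alien
denominator = numerator + (1 - prior_alien) * p_unexplained_given_mundane
print(f"P(alien | unexplained) = {numerator / denominator:.6f}")  # ~0.000013
```

An unexplained sighting raises the odds a little, but because “unexplained” is so common under mundane hypotheses, even this charitable update leaves the alien hypothesis vanishingly improbable.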
How The Light Gets In
At the 2023 HowTheLightGetsIn festival in London (sponsored by IAI, the Institute of Art and Ideas), I took part in a panel discussion on the role of spiritual experience in our lives, sharing the stage with psychologist John Vervaeke and philosopher Sophie-Grace Chappell. Both quoted the noted philosopher Ludwig Wittgenstein at length, while I quoted Douglas Adams:
Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?
IAI News editor Ricky Williamson nevertheless makes my point:
This final argument from Shermer is a typical anti-spiritual retort. “Show us the evidence.” Well Michael, here it is: The mystical experience. The mystical experience, much like any other type of experience, offers clear evidence of spiritual reality. But what is the mystical experience?
The mystical experience is evidence of spiritual reality. Philosophical arguments for spirituality, or even for God, are of far less value in my estimation when compared to the empirical evidence of the mystical experience. Spiritual reality can be well-hidden when in a “normal” frame of mind, not much about regular reality hints at the presence of this possible, radical other, but when you see it, when you have a mystical experience, the experience is undeniable.
Feynman
If it disagrees with experiment, it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is, how smart you are, who made the guess, or what his name is. If it disagrees with experiment, it’s wrong. That’s all there is to it.
Hume’s Maxim
The plain consequence is (and it is a general maxim worthy of our attention), “That no testimony is sufficient to establish a miracle, unless the testimony be of such a kind, that its falsehood would be more miraculous than the fact which it endeavours to establish.” When anyone tells me that he saw a dead man restored to life, I immediately consider with myself whether it be more probable, that this person should either deceive or be deceived, or that the fact, which he relates, should really have happened. I weigh the one miracle against the other; and according to the superiority, which I discover, I pronounce my decision, and always reject the greater miracle. If the falsehood of his testimony would be more miraculous than the event which he relates; then, and not till then, can he pretend to command my belief or opinion.
If you enjoy the podcast, please show your support by making a $5 or $10 monthly donation.
This episode was released on May 7, 2024.