Bernoulli's Fallacy

  • Statistical Illogic and the Crisis of Modern Science
  • By: Aubrey Clayton
  • Narrated by: Tim H. Dixon
  • Length: 15 hrs and 14 mins
  • 4.5 out of 5 stars (123 ratings)


Publisher's summary

There is a logical flaw in the statistical methods used across experimental science. This fault is not a minor academic quibble: It underlies a reproducibility crisis now threatening entire disciplines. In an increasingly statistics-reliant society, this same deeply rooted error shapes decisions in medicine, law, and public policy, with profound consequences. The foundation of the problem is a misunderstanding of probability and its role in making inferences from observations.

Aubrey Clayton traces the history of how statistics went astray, beginning with the groundbreaking work of the 17th-century mathematician Jacob Bernoulli and winding through gambling, astronomy, and genetics. Clayton recounts the feuds among rival schools of statistics, exploring the surprisingly human problems that gave rise to the discipline and the all-too-human shortcomings that derailed it. He highlights how influential 19th- and 20th-century figures developed a statistical methodology they claimed was purely objective in order to silence critics of their political agendas, including eugenics.

Clayton provides a clear account of the mathematics and logic of probability, conveying complex concepts accessibly for listeners interested in the statistical methods that frame our understanding of the world. He contends that we need to take a Bayesian approach - that is, to incorporate prior knowledge when reasoning with incomplete information - in order to resolve the crisis. Ranging across math, philosophy, and culture, Bernoulli’s Fallacy explains why something has gone wrong with how we use data - and how to fix it.

PLEASE NOTE: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.

©2021 Aubrey Clayton (P)2021 Audible, Inc.

What listeners say about Bernoulli's Fallacy

Average customer ratings
Overall: 4.5 out of 5 stars
  • 5 stars: 87
  • 4 stars: 21
  • 3 stars: 9
  • 2 stars: 1
  • 1 star: 5
Performance: 4.5 out of 5 stars
  • 5 stars: 82
  • 4 stars: 20
  • 3 stars: 1
  • 2 stars: 4
  • 1 star: 2
Story: 4.5 out of 5 stars
  • 5 stars: 76
  • 4 stars: 20
  • 3 stars: 6
  • 2 stars: 3
  • 1 star: 4

Reviews
  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

A strong case for Bayes

A good intro to Bayesian statistics, but the narrated descriptions of equations and graphs were distracting. I bought the print book for those.

  • Overall: 5 out of 5 stars
  • Performance: 4 out of 5 stars
  • Story: 5 out of 5 stars

Statistical method based upon Racist Justification

Immersed in the labyrinthine realms of statistical theory, I found myself captivated by the nuanced debate between the frequentist and Bayesian schools of thought. In the book I had the pleasure of reviewing, Clayton masterfully illuminates the stark incompatibilities that lie at the heart of these two methodologies. His adept critique of frequentist assertions, which he then artfully deconstructs, proved both enlightening and accessible, demanding no more than a foundational understanding of undergraduate statistics.

My intellectual voyage through this domain was profoundly enriched by Clayton's work, which bestowed upon me the essential historical context of the Bayesian versus frequentist discourse, underscoring Jaynes' work as a pivotal intellectual achievement.

Entitled "Bernoulli’s Fallacy," the book adeptly traces the trajectory of statistical thought, journeying from Bernoulli's pioneering efforts to the unsettling application of statistics in the pursuit of eugenic agendas. It also confronts the contemporary "crisis of replication" afflicting various research fields, a crisis stemming from an excessive dependence on statistical significance and p-values in hypothesis evaluation.

In its initial chapters, the book articulates its core concepts, which, though not revolutionary, remain critical and frequently misunderstood in modern discussions. These concepts pivot around the idea of probability as a subjective belief informed by available knowledge, the imperative of articulating assumptions in probability statements, and the transformation of prior probabilities into posterior probabilities via observation. The book underscores that data alone cannot yield inferences; rather, it reshapes our existing narratives based on their plausibility.

A pivotal insight from the book is the acknowledgment that improbable events do indeed transpire. This realization challenges the practice of deducing the veracity or fallacy of hypotheses solely based on the likelihood of observations. Instead, it advocates for adjusting our subjective belief in the plausibility of a hypothesis in relation to other competing hypotheses.
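A minimal sketch of that comparative judgment, in Python, with hypotheses and numbers invented purely for illustration (nothing here is taken from the book):

    # Bayes' rule over competing hypotheses: an improbable observation does not
    # falsify a hypothesis unless some rival explains the data better.
    def posteriors(priors, likelihoods):
        # P(H|D) = P(D|H) * P(H) / sum over H' of P(D|H') * P(H')
        joint = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(joint.values())
        return {h: p / total for h, p in joint.items()}

    # A double lottery win is wildly unlikely under both hypotheses...
    priors = {"fair lottery": 0.999, "rigged lottery": 0.001}
    likelihoods = {"fair lottery": 1e-9, "rigged lottery": 1e-7}

    # ...yet "fair" remains more plausible (about 0.91 vs. 0.09), because its
    # prior advantage outweighs the gap in likelihoods.
    print(posteriors(priors, likelihoods))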

Moreover, the book elucidates a critical distinction: Bayesian and frequentist methods are not merely two different perspectives but rather, the Bayesian approach forms the bedrock of probability understanding, with the frequentist method emerging as a historical aberration, a specific instance within the expansive Bayesian paradigm.

It was particularly enlightening to learn how a small cadre of British mathematics professors, namely Galton, Fisher, and Pearson, engineered an entire statistical school of thought. This school, founded on flawed and convenient principles, served to justify and rationalize their eugenic and racist viewpoints, reinforcing the Victorian-era racial supremacy of the British upper class through a veneer of mathematical rationalization. The book offered a fascinating glimpse into a quasi-scientific method employed by researchers who, standing on shaky ground, resort to limited group sampling and mathematical subterfuge to lend false precision and authority to their biased models and probability findings.

7 people found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

No punches pulled!

There has been some effort to make frequentist and Bayesian approaches seem compatible in the last few years. But they really aren’t compatible. Clayton gives a full explanation of why this is the case. The reader should know introductory statistics at the undergraduate level well to appreciate the arguments, but more advanced understanding beyond that is not required. Clayton is very generous in recapping basic claims in frequentist statistics before turning them upside down and demonstrating their absurdity.

4 people found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Excellent and persuasive

I read the book while listening to the Audible narration. I'm a big Edwin Jaynes fan, so this was preaching to the choir; in particular, it's like a Presbyterian sermon drawn from Probability Theory, driving its themes home thoroughly.

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

The best introduction to Bayesian stats I’ve read

The walk through the history of stats was very enlightening, and the discussion around frequency and probability explains why I've always had a hard time with stats in the past.

  • Overall: 4 out of 5 stars
  • Performance: 4 out of 5 stars
  • Story: 4 out of 5 stars

Rigorously Bayesian

Ignore the review from the snowflake triggered by the word Berkeley. This book is good. It sets up a sound logical argument against frequentist statistics. It gives interesting historical details and explains why Bayesian methods are more robust.

19 people found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Excellent Intro to the Meaning of Probability

I have been reading E. T. Jaynes’ “Probability Theory: The Logic of Science”, which presents a fantastic explanation and formal derivation of probability as a system of logic (built on plausibility rather than certainty, unlike predicate logic). What I hadn’t known was the historical context around the Bayesian vs frequentist approaches to probability that made Jaynes’ work such an important masterpiece.

Bernoulli’s Fallacy provides this context, starting with Bernoulli’s contributions to the field, working all the way through the development and use (or rather, perversion) of statistics to serve the eugenics agenda, and finally the present-day “crisis of replication” that is plaguing research across a variety of fields due to their reliance on statistical significance and p-values as the measure for evaluating hypotheses.

As such, this book, in its initial chapters, presents its core set of ideas. These are not novel ideas, but they are nevertheless poorly understood by the community today, and this book does a great job explaining them in depth. I would summarize these ideas as follows:

- Probability represents a subjective belief in a hypothesis based on information/knowledge that you possess; it is not an objective fact. Any statement that the probability of an event IS some number is incomplete; you must always state your assumptions (knowledge that you possess). All probability is conditional on these assumptions. (Jaynes does a good job of making this explicit via notation.)
- You cannot draw inferences from data alone. What you CAN do is convert prior probabilities (existing degrees of belief) to posterior probabilities through the act of observation (incorporating new data). Data doesn’t ever tell you the whole story; it can only alter the story you already have in terms of its plausibility.
- Unlikely events happen. You cannot infer the truth or falsity of a hypothesis based on the likelihood of an observation. Rather, you can only use an observation to alter your subjective belief in the plausibility of a hypothesis, and that only relative to OTHER hypotheses that support the same observation. Again, unlikely events do occur (e.g., someone always wins the lottery), and so it’s really the relative likelihood of different hypotheses that you adjust as you learn more (by making more observations). Of particular importance here is the idea that it is up to YOU (not the data) to exhaustively formulate the relevant hypotheses and assign suitable priors. As Pierre-Simon Laplace supposedly put it (paraphrasing), “extraordinary claims merit extraordinary evidence”, and so new data should alter your belief one way or the other toward a hypothesis based on the RELATIVE priors associated with all potential hypotheses. The more you believe in a hypothesis relative to others, the harder it should be to displace. (A small sketch of this updating follows the list.)
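A sketch of this sequential updating, in Python; the coin-bias hypotheses and every number are invented for illustration, not taken from the book:

    # Sequential Bayesian updating: today's posterior is tomorrow's prior.
    # Invented example: is the coin fair, or biased 3:1 toward heads?
    def likelihood(h, flip):
        p_heads = {"fair": 0.5, "biased": 0.75}[h]
        return p_heads if flip == "H" else 1.0 - p_heads

    belief = {"fair": 0.9, "biased": 0.1}  # strong prior for the dull hypothesis

    for flip in "HHTHHHHH":  # eight observations, seven of them heads
        # Weight each hypothesis by how well it predicts this flip...
        belief = {h: b * likelihood(h, flip) for h, b in belief.items()}
        # ...then renormalize so the beliefs remain probabilities.
        total = sum(belief.values())
        belief = {h: b / total for h, b in belief.items()}

    # Even after a run of heads, "fair" still edges out "biased" (~0.51 vs. ~0.49):
    # a hypothesis with a large prior advantage is hard to displace.
    print(belief)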

One idea this book clarifies is that Bayesian and frequentist are not two “equally valid” schools of thought, but that the Bayesian method underpins the whole idea of probability, whereas the frequentist approach is simply a special case (a sort of unhappy accident of history).
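One standard way to make “special case” precise (my gloss, not the book’s notation): under a uniform prior, the posterior is proportional to the likelihood alone, so ranking hypotheses by P(D | H), as frequentist methods effectively do, gives the same answer as ranking them by P(H | D):

    \[ P(H \mid D) \;\propto\; P(D \mid H)\,P(H) \;\propto\; P(D \mid H) \quad \text{when } P(H) \text{ is uniform.} \]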

Overall, a well-argued, interesting, and balanced book, despite the seemingly extraordinary conclusion. The evidence is extraordinary and well-presented, though occasionally repetitive and dense.

4 people found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 4 out of 5 stars

A well-marked path that cuts to the chase

If read/listened to attentively, it guides directly to the present [and past] day A.I. fallacy. Picture this:
Noah = Mathematics
on his barge
an Elephant = Statistics
and
a Penguin = Computer Science
Noah is pointing to their offspring, a creature with the body of a penguin [C.S.] and, attached to it, an elephant head [Statistics].
Noah [Mathematics]: "What the hell is this?!..."
E.g. Lifting oneself by one's own hair is unlikely to come down to horsepower.
[.... as Artur Avila pointed out (2014), for which he won the Fields Medal - hands down, to everyone's maximum satisfaction - putting in The Last Word on entire fields of Mathematics!]

3 people found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Explanation of Bayesian (Jaynesian) statistics

The "Fallacy" in the title is this: The validity of a hypothesis can be judged based solely on how likely or unlikely the observed data would be if the hypothesis were true. The author, Aubrey Clayton, calls it Bernoulli's Fallacy because Jacob Bernoulli's Ars Conjectandi is devoted to determining how likely or unlikely an observation is given that a hypothesis is true. What we need is not the probability of the data given the hypothesis, but the probability of the hypothesis given the data.

In the preface, Clayton describes the Bayesian vs Frequentist schism as a "dispute about the nature and origins of probability: whether it comes from 'outside us' in the form of uncontrollable random noise in observations, or 'inside us' as our uncertainty given limited information on the state of the world." Like Clayton, I am a fan of E.T. Jaynes's "Probability Theory: The Logic of Science", which presents the argument (proof really) that probability is a number representing a proposition's plausibility based on background information -- a number which can be updated based on new observations. So, I am a member of the choir to which Clayton is preaching.

And he is preaching. This is one long argument against classical frequentist statistics. But Clayton never implies that frequentists dispute the validity of the formula universally known as "Bayes's Rule". (By the way, Bayes never wrote the actual formula.) Disputing the validity of Bayes's Rule would be like disputing the quadratic formula or the Pythagorean Theorem. Some of the objections to Bayes/Price/Laplace are focused on "equal priors", a term which Clayton never uses. Instead, he says "uniform priors", "principle of insufficient reason", or (from J. M. Keynes) "principle of indifference".

I appreciate that it is available in audio. The narrator is fine, but I find that I need the print version too.

As someone already interested in probability theory and statistics, I highly recommend this book. I can't say how less interested individuals would like it.

3 people found this helpful

  • Overall: 5 out of 5 stars
  • Performance: 5 out of 5 stars
  • Story: 5 out of 5 stars

Amazing book

A great read and a must-have for everyone in the risk management community. Yet another wake-up call about the flaws in many traditional risk analysis techniques.

2 people found this helpful