A statistician’s case against Popperism

1 minute read

From McElreath, Richard. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Vol. 122. CRC Press, 2016.

He (Karl Popper) did persuasively argue that science works better by developing hypotheses that are, in principle, falsifiable. Seeking out evidence that might embarrass our ideas is a normative standard, and one that most scholars — whether they describe themselves as scientists or not — subscribe to. So maybe statistical procedures should falsify hypotheses, if we wish to be good statistical scientists. But the above is a kind of folk Popperism, an informal philosophy of science common among scientists but not among philosophers of science. Science is not described by the falsification standard, as Popper recognized and argued.

The book then goes on to critique this folk Popperism, common in academia, with two arguments:

  1. Hypotheses are not models. The relations among hypotheses and different kinds of models are complex. Many models correspond to the same hypothesis, and many hypotheses correspond to a single model. This makes strict falsification impossible.
  2. Measurement matters. Even when we think the data falsify a model, another observer will debate our methods and measures. They don’t trust the data. Sometimes they are right.

To summarize,

The scientific method cannot be reduced to a statistical procedure, and so our statistical methods should not pretend. Statistical evidence is part of the hot mess that is science, with all of its combat and egotism and mutual coercion.
