Jim Berger is one of the most prominent Bayesian statisticians alive today. He combines strong mathematical ability with a deep knowledge of the foundations of Bayesian inference; for decades he has contributed valuable new insights and procedures, and he is partly responsible for the increased prominence of objective Bayesian inference. Moreover, Jim has the rare gift of being able to explain complicated statistical concepts in simple words, and he cares about how statistics is used in practice. He has always been one of my statistical idols, and we are honored to have him on the JASP Advisory Board.

As some readers of this blog may already know, Jim Berger is one of the initiators of the recent paper “Redefine statistical significance”, due to appear in Nature Human Behaviour. In this paper, he and his co-authors “propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.” The preprint has received a lot of attention, and we wanted to take this opportunity to talk to Jim about statistical inference, reincarnation, genies, and p-values.
What is your favorite stats paper (not written by yourself) and why?
Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193-242.
This paper was the one that opened my eyes to the fact that there was something seriously wrong with standard statistical practice in regards to p-values. It also introduced me to robust Bayesian analysis, which I still view as the fundamentally correct way to think about statistical inference.
If you could give an applied researcher (say in biology or psychology) a single one of your papers to read, which one would that be, and why?
Bayarri, M. J., & Berger, J. O. (2013). Hypothesis testing and model uncertainty. In Damien, P., Dellaportas, P., Polson, N. G., & Stephens, D. A. (Eds.), Bayesian theory and applications (pp. 361-400). Oxford: Oxford University Press.
This was written to explain the key issues in testing and model uncertainty, using the best approaches and examples I had seen or developed over many years. So I think it is a good introduction to these issues for someone who actually cares.
You reincarnate and travel back in time to live the academic life (not the personal life) of a statistician from the past. A scary and implausible proposition. But forced to choose, who would you pick, and why?
Pierre-Simon Laplace. Not only was he the first to essentially get statistics right, but his impact was enormous over hundreds of years. Ed Jaynes, the prominent physicist/statistician, once told me that, whenever he encountered a new statistical problem, he would first go and look in Laplace’s 1812 book to see if the answer was there!
You wrote books that have grown to be classics in the field: the 1985 book on decision making and the 1988 book on the likelihood principle. Are you planning to write another book, and if you were forced to, what would it be about?
I am, indeed, in the process of writing two books. One is called Objective Bayesian Inference, and the title says it all. I’ve been working on this now for twenty years, with Jose Bernardo and Dongchu Sun, but the end is in sight!
The second is a monograph on Model Uncertainty (perhaps not the final title), with Susie Bayarri; alas, that has obviously slowed down with her untimely death.
You attend an unusual party where you meet a genie who has an interest in statistics. When you say goodbye the genie grants you one statistical wish, that is, you can change a single thing about how researchers do their inference. What would you wish for?
That objective Bayesian analysis (essentially Laplace’s way of doing statistics) became the standard approach. This would automatically cure most of the statistical issues that plague science, such as misuse of p-values and the lack of adjustment for multiple testing.
Several statisticians have speculated how many more years it would take before Bayesian procedures are more popular than frequentist procedures. Let’s call that time interval t. Can you specify your subjective prior distribution for t?
First, I’ll have to rephrase the question, because objective Bayesian procedures are typically also frequentist procedures. So my version of the question would be “How long will it be before the bad statistical practices – now (incorrectly) termed frequentist (such as the p-value) – disappear?” My prior distribution on that is very vague, maybe uniform over 20 to 100 years with probability mass of 0.8, and a probability mass of 0.2 on the bad practices never disappearing. This distribution will likely change considerably when I see the reaction of the scientific community to the paper you refer to in your next question.
You are one of the initiators of the paper “Redefine statistical significance”, due to appear in Nature Human Behaviour. In the paper, you and your co-authors “propose to change the default P-value threshold for statistical significance for claims of new discoveries from 0.05 to 0.005.”
Many of your colleagues may be surprised to see you advocate any P-value threshold. For instance, in a 2001 paper with Sellke and Bayarri you wrote:
“The most important conclusion is that, for testing ‘precise’ hypotheses, p values should not be used directly, because they are too easily misinterpreted. The standard approach in teaching—of stressing the formal definition of a p value while warning against its misinterpretation—has simply been an abysmal failure.” (p. 71)
And in an earlier 1987 paper with Delampady you stated your dislike of p-values even more clearly:
“when testing precise hypotheses, formal use of P-values should be abandoned. Almost anything will give a better indication of the evidence provided by the data against H0.” (p. 330)
Can you briefly explain your motivation and the purpose of the .005 paper? Will you still be allowed to attend Bayesian conferences?
A p-value is just a statistic; the problem with today’s standard practice is that it is completely misinterpreted. The 2001 paper with Sellke and Bayarri said, in part, that you can take p, compute [-e p log p] and view this as roughly the Bayesian odds (Bayes factor) of H_0 to H_1. Thus a p-value of 0.05 suggests odds of 1 to 2.5, slight evidence against H_0, but not much. On the other hand, p=0.005 suggests odds of 1 to 14, which is reasonably strong evidence against H_0. (By the way, I love your recent cartoon that explains this issue clearly.)
One issue with [-e p log p] is that it is only a bound on the odds and, if one were to do a full Bayesian analysis, the odds could end up being more favorable to the null hypothesis (but not the reverse). Still, if one has to live with p-values, at least getting them interpreted reasonably goes a long way towards curing the problem. And that is what I view our “0.005” paper as doing.
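The calibration Berger mentions is easy to verify numerically. The sketch below (illustrative only, not code from the paper; the function name is ours) computes the bound −e·p·log(p) from Sellke, Bayarri, and Berger (2001) and reports the implied odds for the two p-values discussed above:

```python
import math

def bayes_factor_bound(p):
    """Lower bound -e * p * ln(p) on the Bayes factor (odds) of H0 to H1,
    valid for p < 1/e (Sellke, Bayarri, & Berger, 2001)."""
    return -math.e * p * math.log(p)

for p in (0.05, 0.005):
    bound = bayes_factor_bound(p)
    # Express the bound as "1 to x" odds against H0.
    print(f"p = {p}: odds of H0 to H1 are at best 1 to {1 / bound:.1f}")
```

Running this reproduces the figures in the interview: p = 0.05 corresponds to odds of roughly 1 to 2.5 against H0, while p = 0.005 corresponds to roughly 1 to 14.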
Furthermore, part of the resistance to adopting Bayesian methods is because it is so easy to publish nonsense with standard practice, while Bayesian methods will not allow that. If the stricter standard of 0.005 is adopted, there might be less resistance to switching over to Bayesian methods.
As to your last question, I may well be barred from statistical conferences altogether!
References
Bayarri, M. J., & Berger, J. O. (2013). Hypothesis testing and model uncertainty. In Damien, P., Dellaportas, P., Polson, N. G., & Stephens, D. A. (Eds.), Bayesian theory and applications (pp. 361-400). Oxford: Oxford University Press.
Berger, J. O., & Delampady, M. (1987). Testing precise hypotheses. Statistical Science, 2, 317-352.
Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193-242.
Sellke, T., Bayarri, M. J., & Berger, J. O. (2001). Calibration of p values for testing precise null hypotheses. The American Statistician, 55, 62–71.