How to Conduct a Bayesian Model-Averaged Meta-Analysis in JASP

JASP 0.12 brings Bayesian meta-analysis! Based on the metaBMA package (Heck, Gronau, & Wagenmakers, 2019), JASP now includes Bayesian model-averaged meta-analysis, so you no longer have to make an all-or-none choice between fixed-effect and random-effects models. Additionally, a constrained random-effects approach is implemented that addresses the question of whether every study shows an effect in the same, expected direction. This blog post introduces Bayesian meta-analysis in JASP through an example (the JASP file is available here).

Example: Can Dog Ownership Reduce Mortality Risk?

Kramer, Mehmood, and Suen (2019) conducted a classical meta-analysis on the association between owning a dog and all-cause mortality. They found a risk reduction for all-cause mortality of 24%, RR = 0.76, 95% CI [0.67, 0.86]. Let’s see what a Bayesian re-analysis has to say.

You can find the Bayesian meta-analysis in the Meta-Analysis tab (obviously). From the forest plot reported by Kramer et al. (2019), we extracted the RR (relative risk) and its 95% confidence interval for each study. An RR of 1 indicates no change and therefore no effect. Since we want 0 to indicate the null effect, we take the log of each study’s RR and CI limits. This is all that is needed to conduct the analysis. Alternatively, you could supply the standard error of each study’s effect size instead of its confidence interval.
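The log transformation above, and the normal approximation that recovers a standard error from a 95% CI, can be sketched as follows (the RR and CI values below are the pooled numbers from the abstract, used purely for illustration; per-study inputs work the same way):

```python
import math

# Illustrative values: a relative risk with its 95% confidence interval limits.
rr, ci_lower, ci_upper = 0.76, 0.67, 0.86

# Log-transform so that 0 represents the null effect.
log_rr = math.log(rr)
log_ci = (math.log(ci_lower), math.log(ci_upper))

# Under a normal approximation, the 95% CI spans about 1.96 standard errors
# on each side of the estimate, so the SE can be recovered from the CI width
# on the log scale.
se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * 1.96)

print(round(log_rr, 3), [round(x, 3) for x in log_ci], round(se, 3))
```

Either the log-scale CI limits or the recovered standard errors can then be entered as the per-study input.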

Before starting the analysis you may want to think about which priors you want to use for the overall effect size and the across-study heterogeneity (we discuss the default settings for the current example below). You can also visualize the prior distributions using a prior plot.

Figure 1 summarizes the results of the analysis in a forest plot. Note that the Bayesian meta-analysis provides effect size estimates per study, which most statistical software for classical meta-analysis does not. The model-averaged overall effect size estimate is logRR = -0.29, 95% CI [-0.50, -0.11]. Because the estimate is negative, it indicates a risk reduction. The logRR estimate translates to an RR estimate of 0.75, that is, a risk reduction of 25%, similar to the classical estimate. The credible interval is wider and therefore more conservative than the confidence interval from the classical analysis. Figure 2 shows the entire posterior distribution of the model-averaged effect size.

Figure 1: Forest plot of the Bayesian meta-analysis. This plot shows the observed as well as the estimated effect sizes per study and the estimated overall effect size per model (fixed, random, and model-averaged). Note that the fixed-effect estimate is extremely narrow. However, because the random-effects model has such a high posterior probability, the estimates from the model-averaged and the random-effects analyses are similar.

Figure 2: Posterior distribution of the model-averaged overall effect size estimate.

A glance at the posterior model probabilities in Table 1 reveals that the random-effects alternative hypothesis is the clear winner, with a posterior model probability of .893. The random-effects null hypothesis has a posterior model probability of .107 and is not out of contention, but the two fixed-effect models have a posterior probability near zero. These results, together with the model-averaged effect size estimate, indicate that there is evidence for the presence of an effect.

Table 1. Posterior model probabilities.
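The logic of model averaging can be made concrete with a small sketch: the model-averaged estimate is a weighted combination of the per-model estimates, with the posterior model probabilities as weights. The probabilities below match those reported above; the per-model effect size estimates are illustrative placeholders, not exact JASP output:

```python
# Posterior model probabilities for the four models (from the analysis above).
posterior_probs = {
    "fixed_null": 0.000, "fixed_alt": 0.000,
    "random_null": 0.107, "random_alt": 0.893,
}

# Per-model effect size estimates (placeholders for illustration only;
# the null models fix the effect at 0 by definition).
effect_estimates = {
    "fixed_null": 0.0, "fixed_alt": -0.28,
    "random_null": 0.0, "random_alt": -0.30,
}

# Model-averaged estimate: weight each model's estimate by its
# posterior probability and sum.
averaged = sum(posterior_probs[m] * effect_estimates[m] for m in posterior_probs)
print(round(averaged, 3))  # close to the random-effects estimate, which dominates
```

Because the random-effects alternative carries almost all of the posterior mass, the averaged estimate ends up close to that model's estimate, just as the forest plot in Figure 1 shows.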

Another advantage of the Bayesian meta-analysis is that we can examine the results sequentially, that is, evaluate the results after adding the studies to the analysis one by one. Figure 3 shows the sequential results for the posterior model probabilities. After a few studies, the posterior probabilities for the fixed-effect models are already close to zero; after four studies, the random-effects alternative hypothesis gradually starts to outperform the random-effects null hypothesis. Another interesting sequential analysis is shown in the cumulative forest plot (Figure 4), which shows how the overall effect size estimate evolves as studies are added. With only two studies the estimate is -0.09; it decreases to -0.30 when all studies are included.


Figure 3: Sequential analysis of the posterior model probabilities.

Figure 4: Cumulative forest plot of the model-averaged overall effect size estimates after adding the studies one by one.

Prior Distributions

For a model-averaged meta-analysis, two prior distributions have to be specified. The first is the prior on the overall effect size, which is needed for both fixed-effect and random-effects meta-analytic models. The second is the prior on the between-study heterogeneity (the standard deviation of the distribution of true study effect sizes), which is needed only for random-effects models. Since we average across fixed-effect and random-effects models, we need to specify both. In this example, we used the default heterogeneity prior in JASP: Inverse-Gamma(1, 0.15). This heterogeneity prior was suggested by Gronau et al. (2017) based on empirical work by van Erp et al. (2017). The effect size prior was set to Cauchy(0, 0.707), the default effect size prior in JASP (Morey & Rouder, 2018). Note that the location of the effect size prior is zero, because we use the log of the RR.
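To get a feel for these two defaults, their densities can be evaluated directly from the textbook formulas; this is only a sketch for intuition, not how JASP or metaBMA computes anything internally:

```python
import math

def cauchy_pdf(x, loc=0.0, scale=0.707):
    """Density of the Cauchy(0, 0.707) effect size prior."""
    z = (x - loc) / scale
    return 1.0 / (math.pi * scale * (1.0 + z * z))

def inv_gamma_pdf(x, shape=1.0, scale=0.15):
    """Density of the Inverse-Gamma(1, 0.15) heterogeneity prior (x > 0)."""
    return (scale ** shape / math.gamma(shape)) * x ** (-shape - 1) * math.exp(-scale / x)

# The effect size prior is centred on 0, the null effect on the log-RR scale.
print(round(cauchy_pdf(0.0), 3))

# The heterogeneity prior peaks at scale / (shape + 1) = 0.075, so it favours
# modest amounts of between-study heterogeneity while allowing a heavy tail.
print(round(inv_gamma_pdf(0.075), 3))
```

Plotting these densities over a grid of values reproduces, in spirit, the prior plots available in JASP.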


This blog post provides a glimpse of what you can do with a Bayesian meta-analysis in JASP. For more details see:

A primer on Bayesian model-averaged meta-analysis

Gronau, Q. F., Heck, D. W., Berkhout, S. W., Haaf, J. M., & Wagenmakers, E.-J. (preprint). A primer on Bayesian model-averaged meta-analysis.

A Bayesian model-averaged meta-analysis of the power pose effect

Gronau, Q. F., Van Erp, S., Heck, D. W., Cesario, J., Jonas, K. J., & Wagenmakers, E. J. (2017). A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. Comprehensive Results in Social Psychology, 2(1), 123-138.

A Bayesian multiverse meta-analysis of Many Labs 4

Haaf, J. M., Hoogeveen, S., Berkhout, S. W., Gronau, Q. F., & Wagenmakers, E.-J. (preprint). A Bayesian multiverse analysis of Many Labs 4: Quantifying the evidence against mortality salience.

A conceptual introduction to Bayesian model averaging

Hinne, M., Gronau, Q. F., van den Bergh, D., & Wagenmakers, E.-J. (in press). A conceptual introduction to Bayesian model averaging. Advances in Methods and Practices in Psychological Science.

Constrained random effects approach

Rouder, J. N., Haaf, J. M., Davis-Stober, C. P., & Hilgard, J. (2019). Beyond overall effects: A Bayesian approach to finding constraints in meta-analysis. Psychological Methods, 24(5), 606-621.


Heck, D. W., Gronau, Q. F., & Wagenmakers, E.-J. (2019). metaBMA: Bayesian model averaging for random and fixed effects meta-analysis. Retrieved from

Morey, R. D., & Rouder, J. N. (2018). BayesFactor: Computation of Bayes factors for common designs, v. 0.9.12-4.2. Comprehensive R Archive Network. Retrieved from

Kramer, C. K., Mehmood, S., & Suen, R. S. (2019). Dog ownership and survival: A systematic review and meta-analysis. Circulation: Cardiovascular Quality and Outcomes, 12(10), e005554.

van Erp, S., Verhagen, A. J., Grasman, R. P. P. P., & Wagenmakers, E.-J. (2017). Estimates of between-study heterogeneity for 705 meta-analyses reported in Psychological Bulletin from 1990-2013. Journal of Open Psychology Data, 5.


Sophie Berkhout

Sophie Berkhout is a Research Master student in Psychology at the University of Amsterdam. At JASP, she is responsible for the Bayesian meta-analysis.

Julia Haaf

Julia Haaf is a postdoc at the Psychological Methods Group at the University of Amsterdam.

Quentin Gronau

Quentin is a PhD candidate at the Psychological Methods Group of the University of Amsterdam. At JASP, he is responsible for the t-tests and the binomial test.

Daniel Heck

Daniel Heck is professor of Psychological Methods at the Philipps University of Marburg, Germany.

Eric-Jan Wagenmakers

Eric-Jan (EJ) Wagenmakers is professor at the Psychological Methods Group at the University of Amsterdam. EJ guides the development of JASP.