JAGS Meets JASP

Data from the reproducibility project. The x-axis shows the effect size of the original studies, the y-axis shows the effect size of the replications. The color indicates whether a replication was significant (purple) or not (black). A linear regression line is fit for each group. Figure from JASP.

If there is one thing that caused widespread adoption of Bayesian inference, it’s black-box Markov chain Monte Carlo (MCMC) samplers. Rather than deriving an algorithm to draw samples from a posterior distribution and subsequently implementing it in a programming language, black-box samplers made it possible to specify only the model and have the back-end figure out how to draw MCMC samples from it. From JASP 0.12.0, one of the most well-known black-box samplers, JAGS, is available in JASP as a module!

To demonstrate the JAGS module, we use data from the reproducibility project (Open Science Collaboration, 2015) and replicate the analysis of Wagenmakers et al. (2017).

For this analysis, there are two variables of interest:

  1. “Original effect”: The effect size (Cohen’s d) obtained in the original study.
  2. “Replication effect”: The effect size (Cohen’s d) obtained in the replication study.

The idea is the following. Each effect is replicable or not replicable.1 We capture this with a variable called clust that is either 1 (replicable) or 0 (not replicable).

If the study replicates, we expect the replication effect to be a function of the original effect. If the study fails to replicate, we expect the replication effect to be approximately zero. In other words, the replication studies fall into two qualitatively different camps; statistically, this is a mixture model.

The mixture model can be specified in JAGS as follows:

The replication effect of study i is a draw from a normal distribution with a certain mean mu[i] and a precision tau. Here, the ~ specifies that the relation is stochastic.

replication effect[i] ~ dnorm(mu[i] , tau)

JAGS uses precision, which is 1 over the variance.
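
This parameterization is easy to trip over. A quick sanity check of the conversion in Python (the value of tau is purely illustrative):

```python
import math

# JAGS parameterizes dnorm(mu, tau) by precision: tau = 1 / variance.
tau = 4.0                 # illustrative precision
variance = 1.0 / tau      # corresponding variance
sd = math.sqrt(variance)  # corresponding standard deviation

print(variance, sd)  # 0.25 0.5
```
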
Furthermore, whether study i is replicable is captured by clust[i], which follows a Bernoulli distribution with unknown probability phi.

clust[i] ~ dbern(phi)

Next, we specify the mean mu[i]. We use an arrow <- (or =) because this relation is deterministic.
 
mu[i] <- alpha * original effect[i] * equals(clust[i], 1)

If clust[i] equals 1, the term equals(clust[i], 1) evaluates to 1, so mu[i] <- alpha * original effect[i]. In other words, the study replicates and the replication effect is a function of the original effect. The factor alpha, a value in [0, 1], accommodates the fact that effect sizes tend to be overestimated in practice.
If clust[i] equals 0, then we have mu[i] <- 0, specifying that the population mean is 0 for replication effect sizes when a study does not replicate.
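
The switching behavior of the indicator can be mirrored in a small Python sketch (the values of alpha and the original effect size are illustrative, not estimates):

```python
def mu(alpha, original_effect, clust):
    # Mirrors the JAGS line: mu[i] <- alpha * original effect[i] * equals(clust[i], 1)
    return alpha * original_effect * (1 if clust == 1 else 0)

# Illustrative values: alpha = 0.6, original effect size d = 0.5.
print(mu(0.6, 0.5, 1))  # 0.3  (study replicates)
print(mu(0.6, 0.5, 0))  # 0    (study does not replicate)
```
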

That’s all there is to the model! What remains is to choose priors for tau, the precision of the studies, alpha, the slope that accommodates overestimation of effect sizes, and phi, the true effect rate. Here, we use the following three uninformative priors.

alpha ~ dunif(0, 1) # flat prior on slope for predicted effect size under H1
tau ~ dgamma(0.001, 0.001) # vague prior on study precision
phi ~ dbeta(1, 1) # flat prior on the true effect rate

Putting it all together we get the following JAGS model:

model {

  # Mixture Model Priors on the Model Parameters:
  alpha ~ dunif(0, 1) # flat prior on slope for predicted effect size under H1
  tau ~ dgamma(0.001, 0.001) # vague prior on study precision
  phi ~ dbeta(1, 1) # flat prior on the true effect rate

  # Mixture Model Likelihood:
  for(i in 1:n) {
    # Although JAGS cannot handle variable names with spaces, such as
    # replication effect, JASP has no problems with this.
    replication effect[i] ~ dnorm(mu[i] , tau) # point prediction is mu[i]

    clust[i] ~ dbern(phi)
    # if clust[i] = 0 then H0 is true, if clust[i] = 1 then H1 is true, and
    # the replication effect is a function of the original effect:

    mu[i] <- alpha * original effect[i] * equals(clust[i], 1)
    # when clust[i] = 0, then mu[i] = 0;
    # when clust[i] = 1, then mu[i] = alpha * original effect[i]
  }
}
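
To build intuition for what this model says, the generative process it describes can be forward-simulated in Python. All parameter values below are illustrative, not estimates from the data, and spaces in variable names are replaced by underscores:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 'true' values, not estimates:
n = 94        # number of study pairs, as in the reproducibility project
phi = 0.5     # probability that a study is replicable
alpha = 0.6   # shrinkage applied to the original effect size
tau = 25.0    # precision of replication effects; sd = 1 / sqrt(tau) = 0.2

original_effect = rng.uniform(0.1, 1.0, n)  # placeholder original effect sizes
clust = rng.binomial(1, phi, n)             # clust[i] ~ dbern(phi)

# mu[i] <- alpha * original effect[i] * equals(clust[i], 1)
mu = alpha * original_effect * (clust == 1)

# replication effect[i] ~ dnorm(mu[i], tau); NumPy wants an sd, not a precision
replication_effect = rng.normal(mu, 1.0 / np.sqrt(tau))

# Replication effects of non-replicable studies scatter around zero:
print(replication_effect[clust == 0].mean())
```
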

Let’s fit the model in JASP! We open the JAGS module (after enabling it from the modules menu by clicking the ‘+’ in the top right of the JASP window). Next, we copy-paste the model code into the text block in the top left. Under “Observed values”, we specify that the sample size is 94. We click once more on the model block and press Ctrl+Enter (or ⌘+Enter on a Mac) to apply the model. That’s all we need to do to fit the model in JASP! Under plots, we select density and trace and get the following output.


Screenshot of the JAGS module in action.

In the JASP screenshot, the left panel shows the input options and the right panel shows the output. The first table in the output summarizes three posterior distributions by their mean, standard deviation, median, and 95% credible interval. For example, this table tells us that the posterior mean for the probability that a study replicates is 0.54. Furthermore, the table also provides MCMC convergence diagnostics such as Rhat and effective sample size.
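
Summaries of this kind are simply descriptive statistics of the MCMC draws. A sketch with synthetic stand-in draws for phi (the actual chains live inside JASP; the beta distribution below is only chosen to put mass near the reported 0.54):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for posterior draws of phi, centered near 0.53.
phi_draws = rng.beta(50, 44, 5000)

mean = phi_draws.mean()
sd = phi_draws.std(ddof=1)
median = float(np.median(phi_draws))
lower, upper = np.percentile(phi_draws, [2.5, 97.5])  # 95% credible interval

print(f"mean={mean:.3f} sd={sd:.3f} median={median:.3f} 95% CI=[{lower:.3f}, {upper:.3f}]")
```
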

Below the table, we see density plots and trace plots. The trace plots confirm that the chains have converged, and the density plots visualize the posterior distributions. Here, we only examine a subset of the joint posterior because in the input options on the left we selected only alpha, tau, and phi. Had we also selected mu and clust, we would have obtained information about these parameters as well. We did not do so here because these parameters are vectors, and adding them would create 376 additional figures and entries in the table.

Rather than inspecting the marginal posterior distribution for a single parameter, we can also visualize the bivariate marginal posterior distribution. We tick Bivariate scatter in the options on the left and obtain:


Panel plot of the posterior distribution. Plots on the diagonal show the marginal densities of the parameters. Plots on the off-diagonal show bivariate contour plots, i.e., bivariate densities.

For instance, the bottom-left panel of this plot shows that phi, the replication rate, and alpha, the correction for overestimated effect sizes, are negatively related. Given this model and the data, if the replication rate were higher, then the original effect sizes must have been more overestimated.

Notes

1 Note that we’re not actually testing if a study is replicable or not (that would be impossible). This is merely the interpretation of this mixture model.

Credits

We could not have created the JAGS module without the JAGS software itself and the following R packages:

Plummer, M. (2003, March). JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In Proceedings of the 3rd international workshop on distributed statistical computing (Vol. 124, No. 125.10, pp. 1-10).

Martyn Plummer (2018). rjags: Bayesian Graphical Models using MCMC. R package version 4-8. https://CRAN.R-project.org/package=rjags

Martyn Plummer, Nicky Best, Kate Cowles and Karen Vines (2006). CODA: Convergence Diagnosis and Output Analysis for MCMC, R News, vol 6, 7-11

Hadley Wickham (2019). stringr: Simple, Consistent Wrappers for Common String Operations. R package version 1.4.0. https://CRAN.R-project.org/package=stringr

Note that you can obtain these references from JASP by clicking on the title of the analysis in the output window and selecting Copy Citations.

References

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Wagenmakers, E., Marsman, M., van den Bergh, D., Chambers, C. D., Pashler, H., De Ruiter, J. P., … Tullett, A. M. (2017, August 16). Suggestions to Advance Your Mission: An Open Letter to Dr. Shinobu Kitayama, Editor of JPSP:ASC. https://doi.org/10.31234/osf.io/39ugj


 


About the authors

Don van den Bergh

Don is a PhD candidate at the Department of Psychological Methods at the University of Amsterdam. At JASP, he is responsible for the frequentist and Bayesian reliability analysis, the machine learning module, and the network module.