It is highly recommended to evaluate the performance of prediction models across different study populations, settings, or locations, since good performance is essential for sound decision making about patients’ health (Debray et al., 2015). When multiple estimates of prediction model performance are available (e.g., from the published literature), meta-analysis can provide a summary estimate and reveal between-study heterogeneity (Debray et al., 2017).
For example, the European system for cardiac operative risk evaluation II (EuroSCORE II) was developed to predict 30-day mortality in patients undergoing any type of cardiac surgery. A systematic literature review identified validation studies that assessed the discrimination and calibration performance of EuroSCORE II in patients undergoing coronary artery bypass grafting. The results of this review are distributed with the metamisc R package (Debray & de Jong, 2021) and include performance estimates from the original development study and 22 validations. The data set can be downloaded here and the JASP file here.
Suppose we wish to summarize the discrimination performance of EuroSCORE II by conducting a meta-analysis of its reported concordance (C-) statistics. We open the Prediction Model Performance analysis under the Meta-Analysis module and set the Measure radio button to C-statistic. Next, we specify all of the required inputs: the C-statistics, their standard errors, their confidence bounds (some studies reported only one or the other), and additional information about the predictions (left side of the JASP screenshot below).
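Behind the scenes, C-statistics are typically pooled on the logit scale with a random-effects model before being back-transformed. A minimal Python sketch of that logic, using made-up C-statistics and standard errors (not the EuroSCORE II data) and the DerSimonian-Laird heterogeneity estimator — one of several options; the actual estimator used by the module may differ:

```python
import math

# Hypothetical C-statistics and standard errors from four validation studies
c_stats = [0.78, 0.81, 0.75, 0.80]
ses = [0.02, 0.03, 0.025, 0.02]

def logit(p):
    return math.log(p / (1 - p))

# Pool on the logit scale; delta-method SE: se_logit = se / (c * (1 - c))
thetas = [logit(c) for c in c_stats]
se_logits = [se / (c * (1 - c)) for c, se in zip(c_stats, ses)]

# DerSimonian-Laird estimate of the between-study variance tau^2
w = [1 / s**2 for s in se_logits]
theta_fixed = sum(wi * ti for wi, ti in zip(w, thetas)) / sum(w)
q = sum(wi * (ti - theta_fixed) ** 2 for wi, ti in zip(w, thetas))
c_dl = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(thetas) - 1)) / c_dl)

# Random-effects pooled estimate, back-transformed to the C-statistic scale
w_star = [1 / (s**2 + tau2) for s in se_logits]
theta_re = sum(wi * ti for wi, ti in zip(w_star, thetas)) / sum(w_star)
pooled_c = 1 / (1 + math.exp(-theta_re))
print(round(pooled_c, 3))
```

The logit transformation keeps the pooled estimate and its interval bounds inside the (0, 1) range of a C-statistic, which pooling on the raw scale cannot guarantee.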
The Concordance Statistic Meta-Analysis Summary table at the top of the output shows the pooled concordance statistic together with its 95% confidence and (approximate) prediction intervals. We can further visualize the concordance statistics from the individual studies and the summary estimate by selecting the Forest plot checkbox.
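The confidence interval describes uncertainty about the average performance, whereas the prediction interval describes where the performance of the model in a new setting is likely to fall. A sketch of one common approximate construction, again with made-up numbers on the logit scale (the exact formula JASP uses may differ):

```python
import math
from statistics import NormalDist

# Hypothetical pooled results on the logit scale (illustrative values)
theta = 1.28      # pooled logit(C-statistic)
se_theta = 0.072  # standard error of the pooled estimate
tau2 = 0.002      # between-study variance
k = 4             # number of studies

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# 95% confidence interval for the summary C-statistic
z = NormalDist().inv_cdf(0.975)
ci = [inv_logit(theta + s * z * se_theta) for s in (-1, 1)]

# Approximate 95% prediction interval: widen by tau^2 and use a
# t quantile with k - 2 degrees of freedom (t_{2, 0.975} ≈ 4.303)
t_quant = 4.303
pi_half = t_quant * math.sqrt(tau2 + se_theta**2)
pi = [inv_logit(theta + s * pi_half) for s in (-1, 1)]

print([round(v, 3) for v in ci], [round(v, 3) for v in pi])
```

Because the prediction interval adds the between-study variance and uses a heavier-tailed quantile, it is always at least as wide as the confidence interval; with substantial heterogeneity the two can differ dramatically.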
Moreover, the analysis provides a rich set of funnel plot asymmetry tests. We check two of them: the Egger (unweighted) test and Debray’s funnel plot asymmetry test (Debray et al., 2018). The resulting Funnel Plot Asymmetry Tests table summarizes the results: both tests report p-values larger than 0.05, so we cannot reject the null hypothesis of no funnel plot asymmetry. Publication bias therefore seems unlikely in this example.
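Egger’s test is, at its core, a regression of the standardized effect on precision: a non-zero intercept signals that small studies report systematically different effects. A minimal sketch with made-up logit C-statistics (illustrative only; the p-value computation against a t distribution with n − 2 degrees of freedom is omitted here):

```python
import math

# Hypothetical effects (logit C-statistics) and their standard errors
effects = [1.27, 1.45, 1.10, 1.39, 1.20]
ses = [0.12, 0.19, 0.13, 0.12, 0.16]

# Egger's regression: standardized effect against precision;
# an intercept far from zero suggests funnel plot asymmetry
y = [e / s for e, s in zip(effects, ses)]
x = [1 / s for s in ses]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar

# Standard error of the intercept and its t-statistic (df = n - 2)
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r**2 for r in resid) / (n - 2)
se_int = math.sqrt(s2 * (1 / n + xbar**2 / sxx))
t_stat = intercept / se_int
print(round(intercept, 2), round(t_stat, 2))
```

Debray’s test replaces the precision regressor with quantities better suited to survival and binary-outcome meta-analyses, which is why the two tests can disagree; running both, as we do here, is a useful robustness check.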
We can further visualize the test results by selecting the Plot checkbox. This option displays the funnel plot with the individual study estimates and the overlaid fit of the funnel plot asymmetry test.
We used the Prediction Model Performance analysis in JASP to summarize the predictive performance of the European system for cardiac operative risk evaluation II (EuroSCORE II) in patients undergoing cardiac surgery.
Debray, T. P., Vergouwe, Y., Koffijberg, H., Nieboer, D., Steyerberg, E. W., & Moons, K. G. (2015). A new framework to enhance the interpretation of external validation studies of clinical prediction models. Journal of Clinical Epidemiology, 68, 279-289. https://doi.org/10.1016/j.jclinepi.2014.06.018
Debray, T. P., Damen, J. A., et al. (2017). A guide to systematic review and meta-analysis of prediction model performance. BMJ, 356, i6460. https://doi.org/10.1136/bmj.i6460
Debray, T. P., Damen, J. A., et al. (2019). A framework for meta-analysis of prediction model studies with binary and time-to-event outcomes. Statistical Methods in Medical Research, 28, 2768-2786. https://doi.org/10.1177/0962280218785504
Debray, T. P., Moons, K. G., & Riley, R. D. (2018). Detecting small‐study effects and funnel plot asymmetry in meta‐analysis of survival data: a comparison of new and existing tests. Research Synthesis Methods, 9, 41-50. https://doi.org/10.1002/jrsm.1266
Debray, T. P. & de Jong, V. (2021). metamisc: Meta-Analysis of Diagnosis and Prognosis Research Studies. R package version 0.2.6/r591. https://R-Forge.R-project.org/projects/metamisc/