Robust Bayesian meta-analysis: Model-averaging across complementary publication bias adjustment methods

Res Synth Methods. 2023 Jan;14(1):99-116. doi: 10.1002/jrsm.1594. Epub 2022 Aug 7.

Abstract

Publication bias is a ubiquitous threat to the validity of meta-analysis and to the accumulation of scientific evidence. Multiple methods have been developed to estimate and counteract its impact; however, recent simulation studies have shown that the methods' performance depends on the true data-generating process, and that no method consistently outperforms the others across a wide range of conditions. Unfortunately, when different methods lead to contradictory conclusions, researchers can choose whichever method yields the desired outcome. To avoid the condition-dependent, all-or-none choice between competing methods and conflicting results, we extend robust Bayesian meta-analysis and model-average across two prominent approaches to adjusting for publication bias: (1) selection models of p-values and (2) models adjusting for small-study effects. The resulting model ensemble weights the estimates and the evidence for the absence or presence of the effect from the competing approaches by the support they receive from the data. Applications, simulations, and comparisons to preregistered, multi-lab replications demonstrate the benefits of Bayesian model-averaging of complementary publication bias adjustment methods.
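As a rough illustration of what "weighting by the support from the data" means, the general Bayesian model-averaging identities below (generic notation, not reproduced from the paper) sketch how per-model marginal likelihoods translate into posterior model probabilities, a model-averaged effect estimate, and an inclusion Bayes factor for the presence of the effect:

    % Posterior probability of model M_k given data y, from its marginal likelihood and prior probability
    p(\mathcal{M}_k \mid y) = \frac{p(y \mid \mathcal{M}_k)\, p(\mathcal{M}_k)}{\sum_{j} p(y \mid \mathcal{M}_j)\, p(\mathcal{M}_j)}

    % Model-averaged posterior for the mean effect \mu, pooled across all models in the ensemble
    p(\mu \mid y) = \sum_{k} p(\mathcal{M}_k \mid y)\, p(\mu \mid y, \mathcal{M}_k)

    % Inclusion Bayes factor for the effect: posterior odds of effect models (H_1) versus null models (H_0),
    % divided by the corresponding prior odds
    \mathrm{BF}_{10} = \frac{\sum_{k \in \mathcal{H}_1} p(\mathcal{M}_k \mid y)}{\sum_{k \in \mathcal{H}_0} p(\mathcal{M}_k \mid y)} \Bigg/ \frac{\sum_{k \in \mathcal{H}_1} p(\mathcal{M}_k)}{\sum_{k \in \mathcal{H}_0} p(\mathcal{M}_k)}

In this framing, the ensemble described in the abstract would include both selection-model and small-study-effect adjustments among the candidate models, so each approach contributes to the pooled estimate and to the inclusion Bayes factor in proportion to its posterior model probability.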

Keywords: Bayesian model-averaging; PET-PEESE; meta-analysis; publication bias; selection models.

Publication types

  • Meta-Analysis

MeSH terms

  • Bayes Theorem
  • Bias
  • Computer Simulation
  • Models, Statistical*
  • Publication Bias
