Standard models for network meta-analysis simultaneously estimate multiple relative treatment effects. In practice, after estimation, these estimates usually pass through a formal or informal selection procedure, eg, when researchers draw conclusions about the effects of the best-performing treatment in the network. In this paper, we present theoretical arguments as well as simulation results to illustrate how such practices might lead to exaggerated and overconfident statements regarding relative treatment effects. We discuss how the issue can be addressed via multilevel Bayesian modelling, in which treatment effects are modelled exchangeably and estimates are consequently shrunk away from large values. We present a set of alternative models for network meta-analysis and show in simulations that, in several scenarios, these models perform better than the usual network meta-analysis model.
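As a minimal illustration of the exchangeable modelling referred to above (the notation and parameterization here are an assumed sketch, not the paper's exact specification), the basic relative effects $d_{1k}$ of each treatment $k$ versus the reference treatment 1 could be assigned a common normal distribution:
$$
d_{1k} \sim N\!\left(D, \sigma^{2}\right), \qquad k = 2, \ldots, K, \qquad d_{11} = 0,
$$
with hyperpriors placed on the mean $D$ and the standard deviation $\sigma$. Because all $d_{1k}$ are drawn from the same distribution, their posterior estimates are pulled towards $D$, producing the shrinkage away from large values described in the abstract.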
Keywords: hierarchical modelling; mixed treatment comparison; multiple testing; multiple treatments.
© 2019 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.