How does the DerSimonian and Laird procedure for random effects meta-analysis compare with its more efficient but harder to compute counterparts?
Introduction
Meta-analysis, the pooling of separate studies concerned with the same treatment or issue, is frequently used in medical and other applications. Although some debate over random versus fixed effects modelling continues, the random effects model has become a standard approach. The conventional random effects model is easily implemented, but it has often been criticised. First, the studies must be large enough to justify normal approximations, with known variances, for the within-study distributions. More recently, methods have been proposed using exact conditional distributions (van Houwelingen et al., 1993; Taye et al., 2008; Shi and Copas, 2002), and other developments recognise that the within-study variances are given in the form of estimates and that these are typically functions of the underlying treatment effect (Böhning et al., 2002; Malzahn et al., 2000). We will assume here, however, that studies are large enough to justify the standard within-study normal approximations.
The conventional random effects model also assumes that the random effect is normally distributed, although alternative distributional assumptions have been considered (Lee and Thompson, 2008; Baker and Jackson, 2008). The naive application of the random effects model has also been questioned on the grounds that study results may have been distorted by publication and related biases (Baker and Jackson, 2006). Whilst recognising all these issues and concerns, we will assume here that the random effects model is appropriate, so that the remaining questions concern which estimation and related procedures to use. In particular, a now standard method originally proposed by DerSimonian and Laird (1986) is widely used, and this idea has more recently been extended (DerSimonian and Kacker, 2007). The popularity of this procedure is no doubt partly due to its relative simplicity. The DerSimonian and Laird procedure also has the merit of not requiring the assumption of normality for the random effect, an assumption that is sometimes questioned (Hardy and Thompson, 1998). Hence the procedure is 'valid approximately in a distribution-free context when there are many studies' (Higgins et al., 2009). This statement does not, however, reassure us that the DerSimonian and Laird procedure is effective compared to the alternatives.
The rest of the paper is set out as follows. In Section 2, the random effects model is described and we also provide a proof that estimates of treatment effect are unbiased under the assumptions of the model. In Section 3 the asymptotic (large number of studies) efficiency of the DerSimonian and Laird estimates is investigated, and the small sample case is considered in Section 4. In Section 5, an investigation into using the profile likelihood suggests that it provides more suitable coverage probabilities for confidence intervals when the sample size is modest, and we conclude with a discussion in Section 6.
The random effects model
The conventional fixed and random effects models (DerSimonian and Laird, 1986; Biggerstaff and Tweedie, 1997; Jackson, 2009; Hardy and Thompson, 1996) initially assume that the estimate of treatment effect from the $i$th study, $y_i$, is distributed as $y_i \sim N(\mu_i, \sigma_i^2)$, where $\mu_i$ is the true underlying treatment effect of the $i$th study and $\sigma_i^2$ is the corresponding within-study variance. The variance $\sigma_i^2$ is unknown but is replaced by a consistent estimate in practice. The conventional random effects
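Under this model, the DerSimonian and Laird procedure reduces to a simple method-of-moments calculation. The following is a minimal sketch in Python; the function name `dersimonian_laird` and the toy inputs are ours, not taken from the paper, and the within-study variances are assumed known, as in the standard formulation.

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian and Laird random effects estimates (a sketch).

    y : study-level treatment effect estimates
    v : within-study variances (assumed known)
    Returns the pooled treatment effect and the moment-based
    estimate of the between-study variance.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                               # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (y - mu_fe) ** 2)          # Cochran's Q statistic
    k = len(y)
    # Method-of-moments between-study variance, truncated at zero
    tau2 = max(0.0, (q - (k - 1))
               / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)   # pooled treatment effect
    return mu_re, tau2
```

The truncation of the between-study variance at zero is what keeps the procedure simple, but it also complicates the sampling properties of the resulting estimator.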
Asymptotic efficiency of point estimates
In this section, we will examine the asymptotic efficiency of standard point estimates of both $\mu$ and $\tau^2$ in order to investigate how well the various procedures perform in large samples. We denote these estimates by $\hat{\mu}$ and $\hat{\tau}^2$. We make the assumption that the estimators $\hat{\mu}$ and $\hat{\tau}^2$ are asymptotically unbiased, which is the case for all standard estimation procedures when applying the random effects model, as the various possibilities for $\hat{\mu}$ are unbiased (as proved in Section 2.4) and the bias of all
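The unbiasedness of the pooled estimate can also be checked empirically. The Monte Carlo sketch below is our own construction with arbitrarily chosen parameter values, not a simulation from the paper; it draws many meta-analyses from the random effects model and compares the mean DerSimonian and Laird estimate with the true treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)

def dl_estimate(y, v):
    # DerSimonian and Laird pooled treatment effect
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1))
               / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re)

mu_true, tau2_true, k = 0.3, 0.05, 50        # arbitrary illustrative values
v = rng.uniform(0.02, 0.2, size=k)           # fixed within-study variances
estimates = []
for _ in range(2000):
    theta = rng.normal(mu_true, np.sqrt(tau2_true), size=k)  # random effects
    y = rng.normal(theta, np.sqrt(v))                        # study estimates
    estimates.append(dl_estimate(y, v))
bias = np.mean(estimates) - mu_true          # close to zero, as expected
```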
Efficiency of the estimates of treatment effect in small samples
Although the results in the previous section show that the DerSimonian and Laird estimate of the treatment effect is surprisingly efficient in large samples, it is of interest to see whether this is also the case for the much smaller sample sizes more generally encountered in practice. Since the corresponding estimate of $\tau^2$ has been found to have unsatisfactory asymptotic properties, attention here will focus on the estimate of the treatment effect.
In order to investigate the small sample properties
The coverage probability of confidence intervals for the treatment effect
The above investigation does not address the perhaps more important issue of the performance of competing methods for constructing confidence intervals for the treatment effect when the sample size is small. Brockwell and Gordon (2001) investigated this via a simulation study and the intention here is to extend this work by exploring this issue analytically. Jackson (2009) considered the special case where all studies are the same size and showed that the actual significance level of hypothesis
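The small sample coverage issue can likewise be illustrated by simulation. The sketch below is our own construction with arbitrary parameter values (it is not the analytical calculation described in the paper); it estimates the actual coverage of the nominal 95% Wald-type DerSimonian and Laird interval for a meta-analysis of only five studies.

```python
import numpy as np

rng = np.random.default_rng(1)

def dl_interval(y, v, z=1.96):
    # Standard Wald-type DerSimonian and Laird confidence interval
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1))
               / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))         # treats tau2 as known
    return mu - z * se, mu + z * se

mu_true, tau2_true, k, n_rep = 0.0, 0.1, 5, 4000
covered = 0
for _ in range(n_rep):
    v = rng.uniform(0.02, 0.2, size=k)       # varying within-study variances
    theta = rng.normal(mu_true, np.sqrt(tau2_true), size=k)
    y = rng.normal(theta, np.sqrt(v))
    lo, hi = dl_interval(y, v)
    covered += (lo <= mu_true <= hi)
coverage = covered / n_rep                   # falls short of the nominal 0.95
```

The shortfall arises because the interval treats the estimated between-study variance as known; this is the behaviour that motivates alternatives such as the profile likelihood interval considered in Section 5.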
Discussion
Our investigation suggests that if the sample size is large and inferences are restricted to the treatment effect, then we can do little better than DerSimonian and Laird's original procedure under the random effects model. Despite this, if inferences about the between-study variance are considered important, the extra effort of adopting efficient procedures for this is generally worthwhile.
For small samples, the standard methods for constructing confidence intervals for the treatment effect,
Acknowledgements
The authors wish to thank Ian R. White and Julian Higgins for their helpful comments and suggestions. DJ and JB are employed by the UK Medical Research Council (grant codes U.1052.00.006 and U.1052.00.001).
References
- DerSimonian, R., Kacker, R. Random-effects model for meta-analysis of clinical trials: an update. Contemporary Clinical Trials (2007).
- DerSimonian, R., Laird, N. Meta-analysis in clinical trials. Controlled Clinical Trials (1986).
- Baker, R., Jackson, D. Using journal impact factors to correct for the publication bias of medical studies. Biometrics (2006).
- Baker, R., Jackson, D. A new approach to outliers in meta-analysis. Health Care Management Science (2008).
- The exact distribution of Cochran's heterogeneity statistic in one-way random effects meta-analysis. Statistics in Medicine (2008).
- Biggerstaff, B.J., Tweedie, R.L. Incorporating variability of estimates of heterogeneity in the random effects model in meta-analysis. Statistics in Medicine (1997).
- Böhning, D., et al. Some general points in estimating heterogeneity variance with the DerSimonian–Laird estimator. Biostatistics (2002).
- Brockwell, S.E., Gordon, I.R. A comparison of statistical methods for meta-analysis. Statistics in Medicine (2001).
- A simple method for inference on an overall effect in meta-analysis. Statistics in Medicine (2007).
- Chapman, D.G., Robbins, H. Minimum variance estimation without regularity assumptions. Annals of Mathematical Statistics (1951).
- On the multivariate Rao–Cramer inequality. Statistical Papers.
- Algorithm AS204, the distribution of a positive linear combination of chi-squared random variables. Journal of the Royal Statistical Society (Series C).
- Hardy, R.J., Thompson, S.G. A likelihood approach to meta-analysis with random effects. Statistics in Medicine (1996).