1 Fixed Effect & Random Effects Meta-Analysis

1.1 Effect Measures for Continuous Outcomes

  • We focus on comparing two interventions (say, experimental and control).
  • For continuous responses, each study reports the sample mean \(\widehat\mu_{ik}\), sample standard deviation \(s_{ik}\), and sample size \(n_{ik}\), where \(i \in \{\text{experimental, control}\}\) and \(k \in \{1,2,\dots,K\}\).
  • We consider two types of effect measures for continuous outcomes: the mean difference and the standardized mean difference. The former is used when all studies measure the outcome on the same scale; the latter is used when the outcomes are on different scales.

1.1.1 Mean Difference

The estimated mean difference is

\[\begin{equation} \widehat\mu_k = \widehat\mu_{ek} - \widehat\mu_{ck},~~~~~k \in \{1,2,\dots,K\} \tag{1.1} \end{equation}\] with the variance

\[\begin{equation} \widehat{Var}(\widehat\mu_k) = \frac{s^2_{ek}}{n_{ek}}+\frac{s^2_{ck}}{n_{ck}} \tag{1.2} \end{equation}\]

Thus, we can obtain a \((1-\alpha)\) confidence interval

\[\begin{equation} \widehat{\mu}_{ek}-\widehat\mu_{ck} \pm z_{1-\alpha/2}\sqrt{\widehat{Var}(\widehat\mu_k)} \tag{1.3} \end{equation}\]

dat1 = read.csv("meta analysis with R dataset/dataset01.csv", as.is = TRUE)
head(dat1)
##         author  year Ne    Me    Se Nc    Mc    Sc
## 1        Boner  1988 13 13.54 13.85 13 20.77 21.46
## 2        Boner  1989 20 15.70 13.10 20 22.70 16.47
## 3       Chudry  1987 12 21.30 13.10 12 39.70 12.90
## 4        Comis  1993 12 14.50 12.20 12 31.30 15.10
## 5 DeBenedictis 1994a 17 14.40 11.10 17 27.40 17.30
## 6 DeBenedictis 1994b  8 14.80 18.60  8 31.40 20.60
# Mean difference and 95% confidence interval for study 2 (Boner 1989),
# computed directly from equations (1.1)-(1.3)
with(dat1[2, ], {
  MD   <- Me - Mc
  seMD <- sqrt(Se^2/Ne + Sc^2/Nc)
  MD + c(-1, 1) * qnorm(0.975) * seMD
})
## [1] -16.222988   2.222988

Alternatively, we can use metacont() from the meta package (here applied to the first study):

library(meta)
library(metafor)
metacont(Ne,Me,Se,Nc,Mc,Sc, data = dat1, subset = 1)
##       MD             95%-CI     z p-value
##  -7.2300 [-21.1141; 6.6541] -1.02  0.3074
## 
## Details:
## - Inverse variance method

1.1.2 Standardized Mean Difference

\(\Rightarrow\) Used when different studies use different outcome scales.

We calculate a dimensionless effect measure from each study and use these for pooling. There are a number of formulae; we shall consider one of them, Hedges’ g.

\[ \begin{equation} \widehat{g}_k = \bigg(1-\frac{3}{4n_k-9} \bigg)\frac{\widehat\mu_{ek}-\widehat\mu_{ck}}{\sqrt{[(n_{ek}-1)s^2_{ek}+(n_{ck}-1)s^2_{ck}]/(n_k-2)}} \tag{1.4} \end{equation} \] and \[ \begin{equation} \widehat{Var}(\widehat{g}_k) = \frac{n_k}{n_{ek}n_{ck}}+\frac{\widehat{g}^2_k}{2(n_k-3.94)}, \tag{1.5} \end{equation} \] where \(n_k = n_{ek}+n_{ck}\).

dat2 = read.csv("meta analysis with R dataset/dataset02.csv", as.is = TRUE)
head(dat2)
##             author Ne   Me   Se Nc   Mc   Sc
## 1  Blashki(75%150) 13  6.4  5.4 18 11.4  9.6
## 2   Hormazabal(86) 17 11.0  8.2 16 19.0  8.2
## 3 Jacobson(75-100) 10 17.5  8.8  6 23.0  8.8
## 4      Jenkins(75)  7 12.3  9.9  7 20.0 10.5
## 5   Lecrubier(100) 73 15.7 10.6 73 18.7 10.6
## 6      Murphy(100) 26  8.5 11.0 28 14.5 11.0
metacont(Ne,Me,Se,Nc,Mc,Sc, sm = "SMD",data = dat2, subset = 1)
##      SMD            95%-CI     z p-value
##  -0.5990 [-1.3300; 0.1320] -1.61  0.1083
## 
## Details:
## - Inverse variance method
## - Hedges' g (bias corrected standardised mean difference)
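As a check, equations (1.4) and (1.5) can be evaluated by hand for the first study; the following minimal sketch uses only the objects defined above and reproduces the metacont() result up to rounding.

# Manual Hedges' g and 95% confidence interval for study 1 of dat2,
# computed directly from equations (1.4)-(1.5), with n_k = Ne + Nc
with(dat2[1, ], {
  n  <- Ne + Nc
  sp <- sqrt(((Ne - 1)*Se^2 + (Nc - 1)*Sc^2)/(n - 2))   # pooled standard deviation
  g  <- (1 - 3/(4*n - 9)) * (Me - Mc)/sp                # Hedges' g, equation (1.4)
  se <- sqrt(n/(Ne*Nc) + g^2/(2*(n - 3.94)))            # square root of (1.5)
  c(SMD = g, lower = g - qnorm(0.975)*se, upper = g + qnorm(0.975)*se)
})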

1.2 Fixed Effect Model

  • Suppose the component studies come from a homogeneous population.
  • We denote by \(\{\widehat\theta_k \mid k=1,2,\dots,K\}\) the intervention effect estimates from the individual studies, and by \(\{\widehat{\sigma}^2_k \mid k=1,2,\dots,K\}\) the sample estimates of \(Var(\widehat\theta_k)\).
  • The fixed effect model is \[ \begin{equation} \widehat{\theta}_k = \theta+\sigma_k\epsilon_k, ~~~~~\epsilon_k \stackrel{i.i.d}{\sim} N(0,1) \tag{1.6} \end{equation} \] Let \(\widehat\theta_F\) be the fixed effect estimate of \(\theta\). Thus, \[ \begin{equation} \widehat{\theta}_F = \frac{\sum_{k=1}^Kw_k\widehat\theta_k}{\sum_{k=1}^Kw_k}, \tag{1.7} \end{equation} \] where \(\{w_k = 1/\widehat\sigma^2_k|k=1,2,\dots,K\}\), and \[ \begin{equation} \widehat{Var}(\widehat\theta_F) = \frac{1}{\sum_{k=1}^Kw_k} \tag{1.8} \end{equation} \]
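Before calling the meta package, equations (1.7) and (1.8) can be computed directly; the following is a minimal sketch using the mean differences from dat1 (loaded above).

# Manual inverse variance (fixed effect) pooling of the mean differences in dat1
md  <- with(dat1, Me - Mc)               # study estimates theta_k
w   <- with(dat1, 1/(Se^2/Ne + Sc^2/Nc)) # weights w_k = 1/sigma_k^2
thF <- sum(w*md)/sum(w)                  # fixed effect estimate, equation (1.7)
seF <- sqrt(1/sum(w))                    # standard error from (1.8)
c(MD = thF, lower = thF - qnorm(0.975)*seF, upper = thF + qnorm(0.975)*seF)

The same analysis with metacont(), which also reports the random effects results discussed below: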
mc1 = metacont(Ne,Me,Se,Nc,Mc,Sc, data = dat1, studlab = paste0(author,"(",year,")"))
mc1
##                           MD               95%-CI %W(fixed) %W(random)
## Boner(1988)          -7.2300 [-21.1141;   6.6541]       2.8        3.1
## Boner(1989)          -7.0000 [-16.2230;   2.2230]       6.4        6.6
## Chudry(1987)        -18.4000 [-28.8023;  -7.9977]       5.0        5.3
## Comis(1993)         -16.8000 [-27.7835;  -5.8165]       4.5        4.8
## DeBenedictis(1994a) -13.0000 [-22.7710;  -3.2290]       5.7        5.9
## DeBenedictis(1994b) -16.6000 [-35.8326;   2.6326]       1.5        1.6
## DeBenedictis(1995)  -13.9000 [-27.6461;  -0.1539]       2.9        3.1
## Debelic(1986)       -18.2500 [-30.6692;  -5.8308]       3.5        3.8
## Henriksen(1988)     -29.7000 [-41.6068; -17.7932]       3.8        4.1
## Konig(1987)         -14.2000 [-25.0013;  -3.3987]       4.7        4.9
## Morton(1992)        -22.5300 [-33.5382; -11.5218]       4.5        4.8
## Novembre(1994f)     -13.0400 [-19.5067;  -6.5733]      13.0       12.1
## Novembre(1994s)     -15.1000 [-23.8163;  -6.3837]       7.1        7.3
## Oseid(1995)         -14.8000 [-23.7200;  -5.8800]       6.8        7.0
## Roberts(1985)       -20.0000 [-36.9171;  -3.0829]       1.9        2.1
## Shaw(1985)          -24.1600 [-33.1791; -15.1409]       6.7        6.9
## Todaro(1993)        -13.4000 [-18.7042;  -8.0958]      19.3       16.6
## 
## Number of studies combined: k = 17
## 
##                            MD               95%-CI      z  p-value
## Fixed effect model   -15.5140 [-17.8435; -13.1845] -13.05 < 0.0001
## Random effects model -15.6436 [-18.1369; -13.1502] -12.30 < 0.0001
## 
## Quantifying heterogeneity:
## tau^2 = 2.4374; H = 1.05 [1.00; 1.35]; I^2 = 8.9% [0.0%; 45.3%]
## 
## Test of heterogeneity:
##      Q d.f. p-value
##  17.57   16  0.3496
## 
## Details on meta-analytical method:
## - Inverse variance method
## - DerSimonian-Laird estimator for tau^2

We can display the metacont() results in a forest plot:

forest(mc1, fontsize = 6)

1.3 Random Effects Models

Under the random effects model, \[ \widehat{\theta}_k = \theta +u_k+\sigma_k\epsilon_k, ~~~~~ \epsilon_k \stackrel{i.i.d}{\sim} N(0,1); ~u_k \stackrel{i.i.d}{\sim} N(0,\tau^2), \tag{1.9} \] where the \(u_k\) are study-specific random effects, independent of the \(\epsilon_k\).

Define the weighted sum of squares about the fixed effect estimate, with \(w_k = 1/\widehat{\sigma}^2_k\), as \[ Q = \sum_{k=1}^Kw_k(\widehat{\theta}_k-\widehat{\theta}_F)^2, \tag{1.10} \] which is referred to as either the homogeneity test statistic or the heterogeneity statistic. Also define \[ S = \sum_{k=1}^Kw_k - \frac{\sum_{k=1}^Kw_k^2}{\sum_{k=1}^Kw_k}. \] If \(Q < K-1\), then \(\widehat{\tau}^2 := 0\), so that \(\widehat{\theta}_R = \widehat{\theta}_F\). Otherwise, the between-study variance is estimated by \[ \widehat{\tau}^2 = \frac{Q-(K-1)}{S}, \] which is the DerSimonian-Laird estimator. The random effects estimate and its variance are then \[ \widehat{\theta}_R = \frac{\sum_{k=1}^Kw_k^*\widehat{\theta}_k}{\sum_{k=1}^Kw^*_k} \tag{1.11} \] \[ \widehat{Var}(\widehat{\theta}_R) = \frac{1}{\sum_{k=1}^Kw_k^*} \tag{1.12} \] where \(w^*_k = 1/(\widehat{\sigma}^2_k+\widehat{\tau}^2)\). As in the fixed effect case, the pooling is an inverse variance method, now with the modified weights \(w_k^*\).
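These quantities can be computed by hand for the asthma data; the following minimal sketch uses only dat1 from above and should reproduce the DerSimonian-Laird random effects results reported by mc1 up to rounding.

# Manual DerSimonian-Laird random effects analysis of the mean differences in dat1
md   <- with(dat1, Me - Mc)              # study estimates theta_k
v    <- with(dat1, Se^2/Ne + Sc^2/Nc)    # their variances sigma_k^2, equation (1.2)
w    <- 1/v                              # fixed effect weights
thF  <- sum(w*md)/sum(w)                 # fixed effect estimate, (1.7)
Q    <- sum(w*(md - thF)^2)              # heterogeneity statistic, (1.10)
S    <- sum(w) - sum(w^2)/sum(w)
K    <- length(md)
tau2 <- max(0, (Q - (K - 1))/S)          # DerSimonian-Laird estimate of tau^2
ws   <- 1/(v + tau2)                     # random effects weights w_k^*
thR  <- sum(ws*md)/sum(ws)               # random effects estimate, (1.11)
c(Q = Q, tau2 = tau2, theta_R = thR, se_R = sqrt(1/sum(ws)))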

While the above method is widely used, Hartung and Knapp introduced a refined variance estimator for the random effects model. Instead of (1.12), they propose the following variance estimator for \(\widehat{\theta}_R\): \[ \widehat{Var}_{HK}(\widehat{\theta}_R) = \frac{1}{K-1}\sum_{k=1}^K\frac{w^*_k}{w^*}\big(\widehat{\theta}_k-\widehat{\theta}_R\big)^2, \tag{1.13} \] where \(w^*_k\) is as defined above and \(w^* = \sum_{k=1}^Kw^*_k\). They also showed that \[ \frac{\widehat{\theta}_R-\theta}{\sqrt{\widehat{Var}_{HK}(\widehat{\theta}_R)}} \sim t_{K-1}, \] so confidence intervals are based on the \(t\) distribution with \(K-1\) degrees of freedom.
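In the meta package this adjustment can be requested when fitting the model; the argument name depends on the installed version (hakn = TRUE in older releases, method.random.ci = "HK" in more recent ones), so check ?metacont. A sketch assuming the older argument name:

# Hartung-Knapp adjusted random effects analysis of dat1
# (argument name assumed; newer meta versions use method.random.ci = "HK")
mc1.hk <- metacont(Ne, Me, Se, Nc, Mc, Sc, data = dat1,
                   studlab = paste0(author, "(", year, ")"),
                   hakn = TRUE)
mc1.hk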

1.4 Prediction Interval

A \((1-\alpha)\) prediction interval for the intervention effect in a new study can be calculated as \[ \widehat{\theta}_R\pm t_{K-1,1-\alpha/2}\big[\widehat{Var}(\widehat{\theta}_R)+\widehat{\tau}^2\big]^{1/2}. \]
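A sketch of this calculation for the asthma data, extracting the random effects results from the mc1 object; the component names (k, TE.random, seTE.random, tau) are assumed from common meta versions and may differ in yours. Note also that some references use \(K-2\) degrees of freedom here; the code follows the formula above.

# Approximate 95% prediction interval from the mc1 results
# (component names assumed; inspect str(mc1) for your meta version)
K    <- mc1$k           # number of studies
thR  <- mc1$TE.random   # random effects estimate
seR  <- mc1$seTE.random # its standard error
tau2 <- mc1$tau^2       # DerSimonian-Laird tau^2
thR + c(-1, 1) * qt(0.975, df = K - 1) * sqrt(seR^2 + tau2)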

1.5 Tests and Measures of Heterogeneity

2 Network Meta-analysis

Network meta-analysis (also known as multiple treatment comparison or mixed treatment comparison) seeks to combine information from all randomised comparisons among a set of treatments for a given medical condition.

2.1 Concepts and Challenges

Consider two studies: the first compares treatment A with treatment C, while the second compares treatment B with treatment C. Although A and B are never compared directly, an indirect estimate of the A versus B effect can be obtained as \(\widehat\theta_{AB} = \widehat\theta_{AC} - \widehat\theta_{BC}\), with variance equal to the sum of the variances of the two direct estimates.

2.2 Multi-Arm Studies

Multi-arm studies are studies in which more than two treatments are compared within the same study.

Consider a multi-arm study \(s\) of \(p_s\) treatments with known variances. We need to supply an effect estimate and its standard error for each of the \({p_s\choose2}\) pairwise comparisons.

Let \[ \boldsymbol{L^+_s} = -\frac{1}{2p^2_s}\boldsymbol{X^{\top}_sX_sV_sX^{\top}_sX_s}, \tag{2.1} \] and let \(\boldsymbol{L_s} = (\boldsymbol{L^+_s})^+\), which can be obtained using the identity \(\boldsymbol{L^+} = (\boldsymbol{L}-\boldsymbol{J}/n)^{-1}+\boldsymbol{J}/n\) (with \(\boldsymbol{J}\) denoting the matrix of ones). Denote the elements of \(\boldsymbol{L_s}\) by \(l_{sij}\).
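The pseudoinverse identity can be checked numerically for any Laplacian-type matrix (symmetric, zero row sums, connected graph); a small sketch with an arbitrary weighted Laplacian on 3 treatments, compared against the general Moore-Penrose inverse from MASS::ginv():

library(MASS)  # for ginv(), a general Moore-Penrose inverse

# Laplacian of a weighted complete graph on 3 treatments:
# off-diagonal entries are minus the edge weights, rows sum to zero
w12 <- 2; w13 <- 1; w23 <- 4
L <- matrix(c( w12 + w13, -w12,       -w13,
              -w12,        w12 + w23, -w23,
              -w13,       -w23,        w13 + w23), nrow = 3, byrow = TRUE)
n <- nrow(L)
J <- matrix(1, n, n)            # matrix of ones

Lplus <- solve(L - J/n) + J/n   # identity quoted in the text
max(abs(Lplus - ginv(L)))       # should be numerically zero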

It can be shown that we obtain the same results if the adjusted variance of the comparison of treatments \(i\) and \(j\) is taken to be \(-1/l_{sij}\).