Reproducing the paper

Mihara T, et al. A network meta-analysis of the clinical properties of various types of supraglottic airway device in children. Anaesthesia. 2017;72: 1251–1264.

The paper uses a Bayesian network meta-analysis, but in recent years it has also become possible to run such analyses with frequentist methods. In this exercise we use the frequentist approach.  

This exercise uses the NMA package developed by Dr. Noma as well as the netmeta package.  

packages

pacman::p_load(tidyverse, NMA, netmeta, metafor, meta, 
               svglite, flextable)
# resolve function-name masking: prefer meta::forest and dplyr::select
forest <- meta::forest
select <- dplyr::select

data

The data are entered with one row per comparison arm per study.
(A two-arm RCT therefore contributes two rows and a three-arm RCT three rows.)
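As a concrete illustration of this layout, here is a minimal hypothetical sketch (the values below are invented; the real data are read from NMA_OLP.csv in the next step):

# Hypothetical arm-level layout: one row per arm, identified by study / treatment,
# with the outcome summarised as mean, std.dev and sampleSize per arm.
tibble::tribble(
  ~study, ~treatment, ~mean, ~std.dev, ~sampleSize,
       1, "Classic",   18.0,      4.0,          30,
       1, "Proseal",   22.0,      5.0,          30,  # two-arm trial: two rows
       2, "Classic",   17.5,      3.5,          25,
       2, "i_gel",     21.0,      4.5,          25,
       2, "Proseal",   23.0,      5.0,          25   # three-arm trial: three rows
)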

df_OLP <- read_csv("NMA_OLP.csv")
## Rows: 130 Columns: 5
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (1): treatment
## dbl (4): study, mean, std.dev, sampleSize
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
df_OLP

Network plot

NMA package

For the network plot, the NMA package produces the cleaner figure.

Setting up the data

nma <- setup(study = study, trt = treatment, m = mean, s = std.dev,
             n = sampleSize, measure = "MD", ref = "Classic", 
             data = df_OLP)

Drawing the plot

# svglite::svglite("Network_meta_analysis/netplot.svg", width = 8, height = 8)

# png("test.png", width = 8, height = 8, units = "in", res = 50)

netplot(nma) 

# dev.off()

netmeta package

How to draw the network plot with the netmeta package is described later.

Forest plot

For the forest plot, the netmeta package produces the cleaner figure.

Reshaping the data into the format netmeta expects: meta::pairwise or netmeta::pairwise (in older versions the function lives in netmeta).

df_nma <- 
  meta::pairwise(treat = treatment,
           n = sampleSize,
           mean = mean,
           sd = std.dev,
           data = df_OLP, 
           studlab = study)
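The reshaped object now holds one row per within-study contrast. As a quick look at the key columns the later steps refer to (assuming the standard pairwise() column names TE, seTE, treat1, treat2, studlab):

# TE is the mean difference (treat1 minus treat2) and seTE its standard error
df_nma |>
  select(studlab, treat1, treat2, TE, seTE) |>
  head()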

Overwriting values

TE and seTE can be overwritten. For studies with three or more arms, however, the point estimates must be mutually consistent (adding the pairwise differences between A, B and C around the closed loop must give zero). A quick loop-sum check is sketched after the commented example below.

# df_nma_2 <- 
#   df_nma %>% 
#   mutate(TE = case_when(
#     studlab==5  ~ -1,
#     TRUE ~ TE
#   )) %>% 
#   mutate(seTE = case_when(
#     studlab==1 & treat1 == "Classic" & treat2 == "Cobra" ~ 1.0,
#     studlab==5  ~ 0.4,
#     TRUE ~ seTE
#   ))
# 
# df_nma <- df_nma_2
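If the estimates of a multi-arm study are edited by hand, the loop-sum condition described above can be checked afterwards. A minimal sketch, assuming pairwise() returns the three contrasts of a three-arm study in the order (A vs B), (A vs C), (B vs C); adjust the signs if the row order differs:

# For a three-arm study the contrasts must close the loop:
# (A - B) + (B - C) - (A - C) = 0, i.e. TE[1] + TE[3] - TE[2] = 0 with the assumed row order.
df_nma |>
  as.data.frame() |>
  group_by(studlab) |>
  filter(n() == 3) |>
  summarise(loop_sum = TE[1] + TE[3] - TE[2])  # should be (numerically) zero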

Running the network meta-analysis with the frequentist method

m.netmeta <- netmeta(TE = TE,
                     seTE = seTE,
                     treat1 = treat1,
                     treat2 = treat2,
                     studlab = paste(df_nma$studlab),
                     data = df_nma,
                     sm = "MD",
                     common = FALSE,
                     random = TRUE,
                     reference.group = "Classic",
                     details.chkmultiarm = TRUE,
                     sep.trts = " vs ")
# m.netmeta

Forest plot

# treatment_labs <-
#   df_OLP %>%
#   distinct(treatment) %>%
#   pull() 
# treatment_labs |> dput()
treatment_forest <-
  c("Classic", "airQ", "airQ_SP", "Ambu_AG", "Ambu_i", "Ambu_o",
"Cobra", "Flexible", "i_gel", "LT", "Proseal", "SLIPA", "SoftSeal",
"Supreme", "Unique")
# svglite::svglite("Network_meta_analysis/forest.svg", width = 6, height = 8)

forest(m.netmeta, 
       reference.group = "Classic",
       sortvar = treatment_forest,
       xlim=c(-10,10),
       col.square = "blue",
       smlab = "SGA vs. Classic \n OLP (cmH2O)")

netmeta package

The default figure is very hard to read and not something you would want to publish as is. Once the necessary information is added, however, a clean figure can be drawn, and this route offers more flexibility.

netgraph(m.netmeta)

Check and extract the order of the devices

ordered_labels <- m.netmeta$trts
ordered_labels
##  [1] "airQ"     "airQ_SP"  "Ambu_AG"  "Ambu_i"   "Ambu_o"   "Classic" 
##  [7] "Cobra"    "Flexible" "i_gel"    "LT"       "Proseal"  "SLIPA"   
## [13] "SoftSeal" "Supreme"  "Unique"

Compute the number of participants per device and arrange them in the device order

samplesize_df <- 
  df_OLP %>% 
  group_by(treatment) %>% 
  summarise(size = sum(sampleSize)) %>% 
  mutate(treatment = factor(treatment, levels = ordered_labels)) %>% 
  arrange(treatment)


samplesize <- samplesize_df %>% select(size) %>% pull()
samplesize
##  [1]  325   69   50  113  172 1011  301   69 1027   78 1134   50   36  488  410

Network Graph

# svglite("networkgraph.svg")
netgraph(m.netmeta, 
         plastic = F,
         labels = ordered_labels,
         cex = 2,
         col = "grey",
         lwd = 3,
         multiarm = F,
         points = T, 
         col.points = "blue",
         bg.points = "blue",
         cex.points = samplesize,
         number.of.studies = F,
         cex.number.of.studies = 1.5,
         col.number.of.studies = "white",
         thickness = "number.of.studies") 

# dev.off()

League table

The netmeta package lets you show different results in the upper and lower parts of the table.  

Upper part: direct-comparison (pairwise) meta-analyses
Lower part: network meta-analysis

netleague <- netleague(m.netmeta, bracket = "(", digits=1)
netleague$random
write_csv(netleague$random, "Network_meta_analysis/netleague.csv", col_names = FALSE)
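The flextable package loaded at the top is not otherwise used; as an optional sketch it can format the league table for a report (the output path below is only an example):

# Optional: turn the league table into a flextable for inclusion in a document
ft_league <- flextable::flextable(netleague$random) |>
  flextable::autofit()
ft_league
# flextable::save_as_docx(ft_league, path = "Network_meta_analysis/netleague.docx")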

Ranking: SUCRA & P-score

The NMA package can compute SUCRA.
The netmeta package can also compute the P-score.
- SUCRA is the area under the cumulative ranking-probability curve built from the treatment ranking probabilities.
- The P-score is the mean of the one-sided p-values from the comparisons with every other intervention.
- SUCRA and the P-score are equivalent measures (a hand-computed check of this equivalence is sketched below).
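A minimal sketch of that hand-computed check, assuming higher OLP is better (matching small.values = "bad" used with netrank() further below); compare the result with the netrank() output to confirm the direction convention:

# P-score of device k = mean over the other devices j of P(k is better than j),
# the one-sided p-value obtained from the random-effects MD and its standard error.
p_better <- pnorm(m.netmeta$TE.random / m.netmeta$seTE.random)  # P(row device beats column device)
diag(p_better) <- NA                                            # drop self-comparisons
sort(rowMeans(p_better, na.rm = TRUE), decreasing = TRUE)       # compare with netrank() below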

SUCRA

NMA package

ranking <- 
  nmarank(nma, B = 20000, method = "NH", ascending = F)

SUCRA <-
  ranking$SUCRA |> 
  as.data.frame() |> 
  rownames_to_column(var = "device") |> 
  rename(SUCRA = V1)

SUCRA
# svglite::svglite("Network_meta_analysis/SUCRA.svg", width = 8, height = 6)

SUCRA|> 
  ggplot() +
  geom_bar(aes(x = reorder(device, SUCRA), y = SUCRA),
           stat = "identity", show.legend = F) +
  coord_flip() +
  scale_fill_grey()+
  xlab("")+
  ylab("SUCRA")+
  theme_bw()+
  theme(text = element_text(size = 16))

P-score

netmeta package

rank_res <- netrank(m.netmeta,
                    method = "P-score",
                    small.values = "bad")

# rank_res_sucra <- netrank(m.netmeta,
#                           method = "SUCRA", nsim = 10000,
#                           small.values = "bad")
Pscore <- 
  rank_res$ranking.random |> 
  as.data.frame() |> 
  rownames_to_column() |> 
  setNames(c("device", "value")) |> 
  arrange(desc(value)) 

Pscore
# SUCRA <- 
#   rank_res_sucra$ranking.random |> 
#   as.data.frame() |> 
#   rownames_to_column() |> 
#   setNames(c("device", "value")) |> 
#   arrange(desc(value)) 
# 
# SUCRA

Drawing the P-score figure

# svglite::svglite("Network_meta_analysis/Pscore.svg", width = 8, height = 6)

Pscore |>  
  ggplot() +
  geom_bar(aes(x = reorder(device, value), y = value),
           stat = "identity", show.legend = F) +
  coord_flip() +
  scale_fill_grey()+
  xlab("")+
  ylab("P-score")+
  theme_bw()+
  theme(text = element_text(size = 16))

Checking inconsistency

NMA package

Global approach

Global <- global.ict(nma)
Global$`P-value`
## [1] 0.2332895
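A p-value of 0.23 gives no strong evidence of global inconsistency. For reference, the netmeta package offers a corresponding global check based on the design-by-treatment interaction model; a minimal sketch using the model fitted above:

# Decompose heterogeneity/inconsistency across designs (netmeta)
decomp.design(m.netmeta)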

Local approach

side split

NMA package: obtaining the results is simple (the post-processing afterwards is somewhat fiddly).

side_split <- sidesplit(nma)
side_split
## $coding
##    code treatment
## 1     1   Classic
## 2     2      airQ
## 3     3   airQ_SP
## 4     4   Ambu_AG
## 5     5    Ambu_i
## 6     6    Ambu_o
## 7     7     Cobra
## 8     8  Flexible
## 9     9     i_gel
## 10   10        LT
## 11   11   Proseal
## 12   12     SLIPA
## 13   13  SoftSeal
## 14   14   Supreme
## 15   15    Unique
## 
## $reference
## [1] "Classic"
## 
## $`Direct evidence`
##                Coef.        SE       95%CL      95%CU      P-value
## 1 vs. 7    2.7033577 1.9098377  -1.0398554  6.4465708 1.569246e-01
## 1 vs. 9    1.6403506 1.0773291  -0.4711757  3.7518769 1.278567e-01
## 1 vs. 10          NA        NA          NA         NA           NA
## 1 vs. 11   3.7340367 0.7560236   2.2522576  5.2158158 7.850505e-07
## 1 vs. 13          NA        NA          NA         NA           NA
## 1 vs. 14   0.0000000 3.1347038  -6.1439065  6.1439065 1.000000e+00
## 1 vs. 15   1.7000000 3.4256156  -5.0140832  8.4140832 6.197102e-01
## 2 vs. 5           NA        NA          NA         NA           NA
## 2 vs. 6   -1.0000000 3.1418007  -7.1578163  5.1578163 7.502659e-01
## 2 vs. 7    4.5000000 3.2996748  -1.9672438 10.9672438 1.726397e-01
## 2 vs. 8   -3.7000000 3.4127920 -10.3889493  2.9889493 2.782955e-01
## 2 vs. 9    1.0000000 3.3236116  -5.5141591  7.5141591 7.635079e-01
## 2 vs. 11   5.3000000 3.6661493  -1.8855206 12.4855206 1.482730e-01
## 2 vs. 15  -2.9000000 3.2129693  -9.1973041  3.3973041 3.667425e-01
## 3 vs. 9    5.6000000 3.1879642  -0.6482951 11.8482951 7.898484e-02
## 3 vs. 15   2.0000000 3.3091507  -4.4858162  8.4858162 5.455879e-01
## 4 vs. 14  -1.0000000 3.3424733  -7.5511273  5.5511273 7.648030e-01
## 6 vs. 9    0.4777712 2.3356616  -4.1000414  5.0555837 8.379198e-01
## 7 vs. 8   -8.2600000 3.3095256 -14.7465510 -1.7734490 1.256642e-02
## 7 vs. 9   -5.1000000 3.3155241 -11.5983079  1.3983079 1.239952e-01
## 7 vs. 11  -3.0000000 3.3574842  -9.5805482  3.5805482 3.715755e-01
## 7 vs. 13          NA        NA          NA         NA           NA
## 7 vs. 15  -2.7249175 1.8770876  -6.4039416  0.9541066 1.465925e-01
## 9 vs. 11  -2.2117813 0.9247032  -4.0241662 -0.3993964 1.676221e-02
## 9 vs. 14  -1.5852695 1.6235042  -4.7672793  1.5967403 3.288419e-01
## 11 vs. 14 -1.6176552 1.8774354  -5.2973610  2.0620507 3.888910e-01
## 12 vs. 15 -2.7000000 3.1711533  -8.9153463  3.5153463 3.945332e-01
## 13 vs. 15 -0.2000000 3.2456799  -6.5614158  6.1614158 9.508652e-01
## 14 vs. 15 -0.7541278 2.2513002  -5.1665951  3.6583395 7.376444e-01
## 
## $`Indirect evidence`
##                Coef.         SE         95%CL       95%CU      P-value
## 1 vs. 7    5.6905431   2.352417    1.07989129  10.3011948 1.556238e-02
## 1 vs. 9    5.3396739   1.177540    3.03173823   7.6476096 5.771235e-06
## 1 vs. 10          NA         NA            NA          NA           NA
## 1 vs. 11   0.4463313   1.494819   -2.48346004   3.3761226 7.652563e-01
## 1 vs. 13          NA         NA            NA          NA           NA
## 1 vs. 14   1.6535815   1.299261   -0.89292420   4.2000871 2.031214e-01
## 1 vs. 15   0.2966848   1.676008   -2.98823005   3.5815997 8.594937e-01
## 2 vs. 5           NA         NA            NA          NA           NA
## 2 vs. 6    2.0560566   2.949131   -3.72413451   7.8362477 4.856941e-01
## 2 vs. 7    2.5651256   2.042507   -1.43811363   6.5683648 2.091626e-01
## 2 vs. 8   -5.3840228   3.800350  -12.83257135   2.0645258 1.565655e-01
## 2 vs. 9    2.3218823   1.883546   -1.36980074   6.0135653 2.176808e-01
## 2 vs. 11   0.7236758   1.861455   -2.92470896   4.3720605 6.974471e-01
## 2 vs. 15   0.1883985   2.167702   -4.06021866   4.4370156 9.307418e-01
## 3 vs. 9    4.6715919   3.686129   -2.55308853  11.8962723 2.050325e-01
## 3 vs. 15   2.9243780   3.578120   -4.08860793   9.9373640 4.137601e-01
## 4 vs. 14  -0.5529114 146.838442 -288.35096924 287.2451465 9.969956e-01
## 6 vs. 9    3.5328148   3.620234   -3.56271273  10.6283423 3.291375e-01
## 7 vs. 8   -6.5758397   3.890623  -14.20132058   1.0496411 9.099418e-02
## 7 vs. 9   -0.4049121   1.719595   -3.77525634   2.9654321 8.138444e-01
## 7 vs. 11  -0.9603456   1.719720   -4.33093404   2.4102429 5.765498e-01
## 7 vs. 13          NA         NA            NA          NA           NA
## 7 vs. 15  -5.8275653   2.484648  -10.69738537  -0.9577452 1.900519e-02
## 9 vs. 11   2.3688060   1.187187    0.04196178   4.6956502 4.600913e-02
## 9 vs. 14  -2.3380573   1.574033   -5.42310475   0.7469902 1.374397e-01
## 11 vs. 14 -1.6517636   1.464758   -4.52263735   1.2191101 2.594594e-01
## 12 vs. 15 -1.3569766 147.236411 -289.93503984 287.2210866 9.926466e-01
## 13 vs. 15 -1.7090298 146.951809 -289.72928311 286.3112234 9.907209e-01
## 14 vs. 15 -0.8486647   2.154073   -5.07056925   3.3732399 6.935947e-01
## 
## $Difference
##                 Coef.         SE         95%CL       95%CU     P-value
## 1 vs. 7   -2.98718538   3.030073 -8.926020e+00   2.9516490 0.324208720
## 1 vs. 9   -3.69932330   1.596007 -6.827439e+00  -0.5712072 0.020456841
## 1 vs. 10           NA         NA            NA          NA          NA
## 1 vs. 11   3.28770541   1.675128  4.513903e-03   6.5708969 0.049685851
## 1 vs. 13           NA         NA            NA          NA          NA
## 1 vs. 14  -1.65358148   3.393295 -8.304317e+00   4.9971537 0.626039686
## 1 vs. 15   1.40331518   3.813639 -6.071280e+00   8.8779107 0.712893579
## 2 vs. 5            NA         NA            NA          NA          NA
## 2 vs. 6   -3.05605662   4.309094 -1.150172e+01   5.3896115 0.478193537
## 2 vs. 7    1.93487440   3.880681 -5.671121e+00   9.5408698 0.618067245
## 2 vs. 8    1.68402280   5.107818 -8.327117e+00  11.6951625 0.741630332
## 2 vs. 9   -1.32188230   3.820228 -8.809391e+00   6.1656268 0.729326299
## 2 vs. 11   4.57632421   4.111650 -3.482362e+00  12.6350100 0.265702390
## 2 vs. 15  -3.08839847   3.875836 -1.068490e+01   4.5080999 0.425547322
## 3 vs. 9    0.92840811   4.873465 -8.623408e+00  10.4802246 0.848915259
## 3 vs. 15  -0.92437802   4.873748 -1.047675e+01   8.6279926 0.849571874
## 4 vs. 14  -0.44708862 146.876479 -2.883197e+02 287.4255210 0.997571262
## 6 vs. 9   -3.05504363   4.308295 -1.149915e+01   5.3890596 0.478257873
## 7 vs. 8   -1.68416027   5.107828 -1.169532e+01   8.3269986 0.741610471
## 7 vs. 9   -4.69508788   3.734931 -1.201542e+01   2.6252418 0.208726490
## 7 vs. 11  -2.03965444   3.772285 -9.433198e+00   5.3538888 0.588718068
## 7 vs. 13           NA         NA            NA          NA          NA
## 7 vs. 15   3.10264777   3.113990 -3.000660e+00   9.2059556 0.319076365
## 9 vs. 11  -4.58058731   1.504822 -7.529984e+00  -1.6311902 0.002335021
## 9 vs. 14   0.75278778   2.261271 -3.679221e+00   5.1847969 0.739206171
## 11 vs. 14  0.03410846   2.381235 -4.633027e+00   4.7012436 0.988571612
## 12 vs. 15 -1.34302338 147.270557 -2.899880e+02 287.3019646 0.992723849
## 13 vs. 15  1.50902984 146.987648 -2.865815e+02 289.5995259 0.991808764
## 14 vs. 15  0.09453685   3.115828 -6.012373e+00   6.2014465 0.975795220
## 
## $Warning
## [1] "For the NA components, the corresponding estimates were not calculable because of singularity issues (possibly, all the evidence about these contrasts comes from the studies which directly compare them)."
df_lookup <-
  side_split$coding |> 
  as.data.frame()

side_split_table <-
  side_split$Difference |> 
  as.data.frame() |> 
  rownames_to_column(var="comp") |> 
  rename(Pvalue = "P-value") |>
  filter(!is.na(Pvalue)) |> 
  select(comp, Pvalue) |> 
  separate(comp, into = c("device1", "device2"), sep = " vs. ") |> 
  mutate_if(is.character, as.numeric) |> 
  left_join(df_lookup, by = c("device1" = "code")) |> 
  rename(treatment1 = treatment) |> 
  left_join(df_lookup, by = c("device2" = "code")) |> 
  rename(treatment2 = treatment) |> 
  mutate(comparison = paste0(treatment1, " vs ", treatment2)) |> 
  select(comparison, Pvalue) |> 
  mutate_if(is.numeric, round, 3) 

side_split_table
side_split_table |> write_excel_csv("Network_meta_analysis/sidesplit_NMA.csv")

Forest plot: the NMA package has no built-in plotting function for this, so the meta package is used instead.

Direct <- 
  side_split$`Direct evidence` |> 
  as.data.frame() |> 
  rownames_to_column(var = "comp") |> 
  filter(!is.na(Coef.)) |> 
  mutate(mode = "Direct") 

Indirect <-
  side_split$`Indirect evidence` |> 
  as.data.frame() |> 
  rownames_to_column(var = "comp") |> 
  filter(!is.na(Coef.)) |> 
  mutate(mode = "Indirect") 

Total <-
  bind_rows(Direct, Indirect) |> 
  separate(comp, into = c("device1", "device2"), sep = " vs. ") |> 
  mutate_at(c("device1", "device2"), as.numeric) |> 
  left_join(df_lookup, by = c("device1" = "code")) |> 
  rename(treatment1 = treatment) |> 
  left_join(df_lookup, by = c("device2" = "code")) |> 
  rename(treatment2 = treatment) |> 
  mutate(comparison = paste0(treatment1, " vs ", treatment2)) |> 
  select(comparison, Coef., SE, '95%CL', '95%CU', mode) 
df_side_split <-
  Total |> 
  mutate(subgroup = comparison) |> 
  left_join(side_split_table, by = "comparison")


ma <- metagen(
  TE = df_side_split$Coef.,
  seTE = df_side_split$SE,
  studlab = df_side_split$mode,
  common = FALSE,
  random = FALSE,
  subgroup = df_side_split$subgroup
)
# png("Network_meta_analysis/side_split.png", width = 8, height = 22, units = "in",res = 600)
forest(ma, 
       print.tau2 = FALSE, 
       print.I2 = FALSE, 
       print.subgroup.name = F, 
       leftcols = c("studlab"),
       leftlabs = c("Device"),
       rightcols = c("effect", "ci"),
       rightlabs = c("MD", "95% CI"),
       xlim = c(-15, 10))  

node split  

netmeta package: gives almost the same results as the side split above.

res_netsplit <- netsplit(m.netmeta, random = T)
print(res_netsplit, show= "both")
## Separate indirect from direct evidence (SIDE) using back-calculation method
## 
## Random effects model: 
## 
##           comparison  k prop     nma  direct  indir.     Diff     z p-value
##       airQ vs Ambu_o  1 0.47 -0.6060  1.0000 -2.0292   3.0292  0.72  0.4699
##        airQ vs Cobra  1 0.28 -3.0987 -4.5000 -2.5614  -1.9386 -0.51  0.6080
##     airQ vs Flexible  1 0.55  4.4543  3.7000  5.3866  -1.6866 -0.34  0.7345
##        airQ vs i_gel  1 0.24 -1.9958 -1.0000 -2.3158   1.3158  0.35  0.7233
##      airQ vs Proseal  1 0.20 -1.6538 -5.3000 -0.7213  -4.5787 -1.13  0.2583
##       airQ vs Unique  1 0.31  0.7862  2.9000 -0.1785   3.0785  0.81  0.4155
##     airQ_SP vs i_gel  1 0.57 -5.2072 -5.6000 -4.6795  -0.9205 -0.19  0.8457
##    airQ_SP vs Unique  1 0.54 -2.4252 -2.0000 -2.9205   0.9205  0.19  0.8457
##      Ambu_o vs i_gel  2 0.71 -1.3898 -0.4981 -3.5273   3.0292  0.72  0.4699
##     Cobra vs Classic  3 0.57  4.4947  2.7039  6.8488  -4.1450 -1.43  0.1530
##     i_gel vs Classic  8 0.54  3.3917  1.6308  5.4745  -3.8437 -2.34  0.0191
##   Proseal vs Classic 17 0.79  3.0497  3.7341  0.4089   3.3252  1.97  0.0486
##  SoftSeal vs Classic  1 0.69  3.0480  1.9000  5.6512  -3.7512 -0.63  0.5260
##   Supreme vs Classic  1 0.15  1.4157 -0.0000  1.6598  -1.6598 -0.50  0.6143
##    Unique vs Classic  1 0.19  0.6097  1.7000  0.3493   1.3507  0.36  0.7217
##    Cobra vs Flexible  1 0.58  7.5530  8.2600  6.5734   1.6866  0.34  0.7345
##       Cobra vs i_gel  1 0.20  1.1029  5.1000  0.0876   5.0124  1.37  0.1717
##     Cobra vs Proseal  1 0.20  1.4449  3.0000  1.0645   1.9355  0.52  0.6010
##    Cobra vs SoftSeal  1 0.70  1.4466 -2.0000  9.3227 -11.3227 -1.92  0.0551
##      Cobra vs Unique  3 0.64  3.8849  2.7390  5.8907  -3.1517 -1.04  0.2979
##     i_gel vs Proseal 10 0.62  0.3420  2.1967 -2.6269   4.8236  3.02  0.0025
##     i_gel vs Supreme  4 0.48  1.9761  1.5816  2.3465  -0.7649 -0.35  0.7277
##   Proseal vs Supreme  3 0.38  1.6341  1.6164  1.6449  -0.0285 -0.01  0.9902
##   SoftSeal vs Unique  1 0.71  2.4383  0.2000  7.9741  -7.7741 -1.29  0.1961
##    Supreme vs Unique  2 0.48  0.8059  0.7544  0.8533  -0.0989 -0.03  0.9739
## 
## Legend:
##  comparison - Treatment comparison
##  k          - Number of studies providing direct evidence
##  prop       - Direct evidence proportion
##  nma        - Estimated treatment effect (MD) in network meta-analysis
##  direct     - Estimated treatment effect (MD) derived from direct evidence
##  indir.     - Estimated treatment effect (MD) derived from indirect evidence
##  Diff       - Difference between direct and indirect treatment estimates
##  z          - z-value of test for disagreement (direct versus indirect)
##  p-value    - p-value of test for disagreement (direct versus indirect)
forest(res_netsplit)

Funnel plot

Comparison-adjusted funnel plot

A quick look (NMA package)

With the default settings the legend is hard to read.

# png("Network_meta_analysis/Funnel.png", width = 10, height = 8, units = "in", res = 600)

nmafunnel(nma, legends = "bottomright")

## Comparison adjusted funnel plot for the trials involving treatment 1 (as control)
## $coding
##    code treatment
## 1     1   Classic
## 2     2      airQ
## 3     3   airQ_SP
## 4     4   Ambu_AG
## 5     5    Ambu_i
## 6     6    Ambu_o
## 7     7     Cobra
## 8     8  Flexible
## 9     9     i_gel
## 10   10        LT
## 11   11   Proseal
## 12   12     SLIPA
## 13   13  SoftSeal
## 14   14   Supreme
## 15   15    Unique
## 
## $summary
##      design  N    n
## 1      1-10  3  160
## 2      1-11 14 1131
## 3      1-14  1  100
## 4       1-7  1   80
## 5 1-7-13-15  1  133
## 6  1-7-9-11  1   60
## 7       1-9  5  315
## 8    1-9-11  2  210

A more detailed look (netmeta package)

In a conventional meta-analysis, the test for funnel-plot asymmetry is carried out under the assumption that studies which cannot show an effect of the intervention over the control group go unpublished (or are published late).
- In an NMA it is often unclear which arm of each RCT corresponds to the control group.
- By specifying a priority order for the interventions (devices, drugs), for example the order of market release, a comparison-adjusted funnel plot can be drawn and its asymmetry tested (a sketch of the expected device_order.csv layout follows below).
- Note that the result of this test depends on the priority order chosen by the investigator.
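device_order.csv itself is not shown here; a hypothetical sketch of its expected two-column layout (the order values below are purely illustrative, and the ordering actually used is the one read in next):

# Hypothetical illustration of the device_order.csv structure
tibble::tribble(
  ~device,    ~order,
  "Classic",       1,   # e.g. earliest-marketed device gets the lowest number
  "Flexible",      2,
  "Unique",        3
)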

dev_order <- read_csv("device_order.csv")
## Rows: 16 Columns: 2
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (1): device
## dbl (1): order
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
treatment_labs <-
  df_OLP %>% 
  distinct(treatment) %>% 
  pull(treatment)
col_order <-  
  dev_order |> 
  filter(device %in% treatment_labs) |> 
  arrange(order) |> 
  select(device) |>  
  pull()
col_order
##  [1] "Classic"  "Flexible" "Unique"   "Cobra"    "LT"       "Proseal" 
##  [7] "SoftSeal" "Supreme"  "airQ"     "i_gel"    "airQ_SP"  "Ambu_o"  
## [13] "SLIPA"    "Ambu_i"   "Ambu_AG"
# specify 29 colours, sampled from the qualitative RColorBrewer palettes
library(RColorBrewer)
n <- 29
qual_col_pals <- brewer.pal.info[brewer.pal.info$category == 'qual', ]
col_vector <- unlist(mapply(brewer.pal, qual_col_pals$maxcolors, rownames(qual_col_pals)))
# set.seed(1)  # uncomment for a reproducible colour sample
cols <- sample(col_vector, n)
# pie(rep(1,n), col = cols)
# svglite::svglite("Network_meta_analysis/funnel_netmeta.svg", width = 12, height = 8)

funnel(m.netmeta,
       order = col_order,
       pch = 19,
       col = cols,
       xlim = c(-10, 12),
       ylim = c(2.5, 0),
       studlab = F,
       cex.studlab = 3,
       method.bias = "Egger",
       text.linreg = "(Egger)")