These are the analyses for the dataset that only included Pearson's r. For analyses including all effect sizes, click here. See the R Markdown file for the code used to open the data set, calculate the effect size variances, etc. That code prints messily, so I have suppressed it from this HTML file.
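In outline, that suppressed preprocessing looks something like this (a sketch only; the file name and the columns r and n are placeholders, not the real names — see the R Markdown file for the actual code):
library(metafor)
library(tidyverse)
# Sketch: compute Fisher's z effect sizes (yi) and their sampling variances
# (vi = 1/(n - 3)) from raw correlations; "r", "n", and the file name are hypothetical.
EffectSizes_short <- escalc(measure="ZCOR", ri=r, ni=n, data=read.csv("effect_sizes.csv"))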
Fit baseline model to the full data set
m1 <- rma.mv(yi, vi, random=~1|Article/Comp, data=EffectSizes_short)
summary(m1)
##
## Multivariate Meta-Analysis Model (k = 87; method: REML)
##
## logLik Deviance AIC BIC AICc
## -5.5354 11.0708 17.0708 24.4338 17.3634
##
## Variance Components:
##
## estim sqrt nlvls fixed factor
## sigma^2.1 0.0381 0.1952 27 no Article
## sigma^2.2 0.0027 0.0518 87 no Article/Comp
##
## Test for Heterogeneity:
## Q(df = 86) = 206.7883, p-val < .0001
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.2572 0.0461 5.5758 <.0001 0.1668 0.3476 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
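Most of the heterogeneity sits between articles rather than between effect sizes within articles, which you can read off the variance components stored in the fitted model:
round(m1$sigma2 / sum(m1$sigma2), 2) # proportion per level: ~0.93 Article, ~0.07 Comp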
Look at residuals
boxplot(resid(m1, type="rstandard")) # standardized residuals
hist(resid(m1, type="rstandard"))
These residuals aren’t great.
Summary with cluster-robust standard errors, controlling for correlated sampling error within articles.
summary(robust(m1, cluster=EffectSizes_short$Article))
##
## Number of outcomes: 87
## Number of clusters: 27
## Outcomes per cluster: 1-8 (mean: 3.22, median: 2)
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## 0.2572 0.0460 5.5859 26 <.0001 0.1625 0.3518 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Tests of publication bias.
regtest(rma(yi, vi, data=EffectSizes_short))
##
## Regression Test for Funnel Plot Asymmetry
##
## Model: mixed-effects meta-regression model
## Predictor: standard error
##
## Test for Funnel Plot Asymmetry: z = 1.4447, p = 0.1486
## Limit Estimate (as sei -> 0): b = 0.1430 (CI: 0.0433, 0.2428)
funnel(rma(yi, vi, data=EffectSizes_short))
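A contour-enhanced funnel plot (not part of the original analysis, but it makes the significance regions explicit) uses the same function:
funnel(rma(yi, vi, data=EffectSizes_short), level=c(90, 95, 99), shade=c("white", "gray55", "gray75"), refline=0) # contours at p = .10, .05, .01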
There are a lot of effect size variances that are quite large because a subset of the correlations were calculated on very small samples. These large variances could strongly influence the slope of the publication bias correction. If we look at a histogram of the effect size standard errors, we see that all of the especially large standard errors come from studies with sample sizes of 10 or fewer.
EffectSizes_short %>%
mutate(
small_n = ifelse(n <= 10, yes="N <= 10", no = "N > 10")
) %>%
ggplot(aes(x = sei, fill=small_n)) + geom_histogram()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Given that these effect sizes could strongly influence the publication bias correction, and that effect sizes based on 10 or fewer participants are likely not very informative to begin with, I have decided to drop them from subsequent analyses.
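The exclusion itself is a one-liner (a sketch, assuming EffectSizes_short2 below is simply the filtered version of EffectSizes_short):
EffectSizes_short2 <- EffectSizes_short %>% filter(n > 10) # drops the 16 effect sizes with n <= 10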
Here I fit the model again, without those effect sizes.
m2 <- rma.mv(yi, vi, random=~1|Article/Comp, data=EffectSizes_short2)
summary(m2)
##
## Multivariate Meta-Analysis Model (k = 71; method: REML)
##
## logLik Deviance AIC BIC AICc
## 15.5986 -31.1972 -25.1972 -18.4518 -24.8336
##
## Variance Components:
##
## estim sqrt nlvls fixed factor
## sigma^2.1 0.0354 0.1880 23 no Article
## sigma^2.2 0.0026 0.0512 71 no Article/Comp
##
## Test for Heterogeneity:
## Q(df = 70) = 175.5400, p-val < .0001
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.2546 0.0464 5.4876 <.0001 0.1636 0.3455 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Test for publication bias again
regtest(rma(yi, vi, data=EffectSizes_short2))
##
## Regression Test for Funnel Plot Asymmetry
##
## Model: mixed-effects meta-regression model
## Predictor: standard error
##
## Test for Funnel Plot Asymmetry: z = 2.5729, p = 0.0101
## Limit Estimate (as sei -> 0): b = 0.0567 (CI: -0.0650, 0.1784)
funnel(rma(yi, vi, data=EffectSizes_short2))
The funnel plot and regression test still suggest publication bias, but the disproportionately large sampling variances that would have dominated the publication bias correction are gone.
boxplot(resid(m2, type="rstandard"))
hist(resid(m2, type="rstandard"))
The residuals look much better, if still a bit skewed.
Here’s a forest plot
forest(m2, slab=EffectSizes_short2$Article, xlim=c(-2.8,2.8))
Add the sampling variance as a moderator to correct for publication bias (a PEESE-style adjustment; the intercept then estimates the effect as vi -> 0)
m3 <- rma.mv(yi, vi, mods=vi, random=~1|Article/Comp, data=EffectSizes_short2)
summary(m3)
##
## Multivariate Meta-Analysis Model (k = 71; method: REML)
##
## logLik Deviance AIC BIC AICc
## 15.7316 -31.4632 -23.4632 -14.5268 -22.8382
##
## Variance Components:
##
## estim sqrt nlvls fixed factor
## sigma^2.1 0.0372 0.1928 23 no Article
## sigma^2.2 0.0026 0.0512 71 no Article/Comp
##
## Test for Residual Heterogeneity:
## QE(df = 69) = 150.0938, p-val < .0001
##
## Test of Moderators (coefficient 2):
## QM(df = 1) = 0.0695, p-val = 0.7921
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## intrcpt 0.2367 0.0841 2.8153 0.0049 0.0719 0.4015 **
## mods 0.5583 2.1183 0.2635 0.7921 -3.5936 4.7101
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
boxplot(resid(m3, type="rstandard"))
hist(resid(m3, type="rstandard"))
Summarize the above model with cluster-robust variance estimation, clustered by article, to account for correlated sampling errors within articles.
summary(robust(m3, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 21) = 0.1060, p-val = 0.7479
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.2367 0.0828 2.8578 21 0.0094 0.0645 0.4090 **
## mods 0.5583 1.7144 0.3256 21 0.7479 -3.0069 4.1235
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
predict(robust(m3, cluster=EffectSizes_short2$Article), 0, transf=transf.ztor) # get estimates on the scale of r.
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2324 0.0644 0.3876 -0.2093 0.5953
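For reference, transf.ztor is just the inverse Fisher transform, r = tanh(z), applied to the estimate and interval bounds:
tanh(0.2367) # back-transforming the intercept by hand gives ~0.2324, matching pred above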
Here are the analyses of the moderator variables. The code for each moderator is basically the same, so I have annotated only the model testing the effect of modality.
m_modal <- rma.mv(yi, vi, mods= ~ Modalitys*vi, random=~1|Article/Comp, data=EffectSizes_short2) # fit multilevel model.
summary(robust(m_modal, cluster=EffectSizes_short2$Article)) # get robust standard errors
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 19) = 18.3082, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.2512 0.0813 3.0883 19 0.0061 0.0809 0.4214 **
## Modalitys1 0.0704 0.0139 5.0752 19 <.0001 0.0414 0.0994 ***
## vi 0.0354 1.5031 0.0236 19 0.9814 -3.1107 3.1815
## Modalitys1:vi -2.0670 0.5903 -3.5013 19 0.0024 -3.3026 -0.8314 **
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_modal, cluster=EffectSizes_short2$Article), btt=2) # F test with robust standard errors
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 19) = 25.7578, p-val < .0001
predict(robust(m_modal, cluster=EffectSizes_short2$Article), c(1, 0, 0), transf=transf.ztor) # get means and CIs for level 1 (assuming 0 se)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.3109 0.1624 0.4457 -0.1099 0.6372
predict(robust(m_modal, cluster=EffectSizes_short2$Article), c(-1, 0, 0), transf=transf.ztor) # get means and CIs for level -1 (assuming 0 se)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.1789 -0.0056 0.3516 -0.2566 0.5539
table(EffectSizes_short2$Modalitys)
##
## comp prod
## 27 44
contrasts(EffectSizes_short2$Modalitys) # Just a sanity check: which is -1 and which is 1
## [,1]
## comp 1
## prod -1
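The deviation coding itself was presumably set in the suppressed setup code; the usual way to do it looks like this:
contrasts(EffectSizes_short2$Modalitys) <- contr.sum(2) # comp = 1, prod = -1, as in the sanity check above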
EffectSizes_short2 %>% # get descriptive statistics for each level of the variable.
group_by(Modalitys) %>%
summarise(
N_part = sum(n),
K = n()
)
m_measure <- rma.mv(yi, vi, mods= ~ Measures*vi, random=~1|Article/Comp, data=EffectSizes_short2)
summary(robust(m_measure, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 19) = 1.6824, p-val = 0.2045
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.2399 0.0891 2.6906 19 0.0145 0.0533 0.4264 *
## Measures1 0.0257 0.0137 1.8789 19 0.0757 -0.0029 0.0543 .
## vi 0.7572 1.7864 0.4239 19 0.6764 -2.9818 4.4963
## Measures1:vi 0.0925 0.7425 0.1245 19 0.9022 -1.4617 1.6466
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_measure, cluster=EffectSizes_short2$Article), btt=2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 19) = 3.5303, p-val = 0.0757
predict(robust(m_measure, cluster=EffectSizes_short2$Article), c(1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2595 0.0853 0.4183 -0.1992 0.6249
predict(robust(m_measure, cluster=EffectSizes_short2$Article), c(-1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2110 0.0171 0.3896 -0.2544 0.5970
table(EffectSizes_short2$Measures)
##
## Other Vocab
## 27 44
contrasts(EffectSizes_short2$Measures)
## [,1]
## Other 1
## Vocab -1
EffectSizes_short2 %>%
group_by(Measures) %>%
summarise(
N_part = sum(n),
K = n()
)
m_context <- rma.mv(yi, vi, mods= ~ Just_Pointings*vi, random=~1|Article/Comp, data=EffectSizes_short2)
summary(robust(m_context, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 19) = 12.4862, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.1136 0.0839 1.3531 19 0.1919 -0.0621 0.2892
## Just_Pointings1 0.1347 0.0305 4.4197 19 0.0003 0.0709 0.1985 ***
## vi 4.8324 1.7441 2.7707 19 0.0122 1.1820 8.4828 *
## Just_Pointings1:vi -4.9678 1.4378 -3.4551 19 0.0027 -7.9772 -1.9584 **
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_context, cluster=EffectSizes_short2$Article), btt=2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 19) = 19.5342, p-val = 0.0003
table(EffectSizes_short2$Just_Pointings)
##
## pointing_alone Gesture_Speech_Combination
## 64 7
predict(robust(m_context, cluster=EffectSizes_short2$Article), c(1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2433 0.0628 0.4083 -0.2230 0.6190
predict(robust(m_context, cluster=EffectSizes_short2$Article), c(-1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## -0.0211 -0.2065 0.1657 -0.4601 0.4261
contrasts(EffectSizes_short2$Just_Pointings)
## [,1]
## pointing_alone 1
## Gesture_Speech_Combination -1
EffectSizes_short2 %>%
group_by(Just_Pointings) %>%
summarise(
N_part = sum(n),
K = n()
)
m_pointing <- rma.mv(yi, vi, mods= ~ Age_Pointings*vi , random=~1|Article/Comp, data=EffectSizes_short2)
summary(robust(m_pointing, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 19) = 375.9006, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.2901 0.0939 3.0897 19 0.0060 0.0936 0.4866 **
## Age_Pointings1 0.0604 0.0056 10.8430 19 <.0001 0.0487 0.0720 ***
## vi 0.2348 1.9025 0.1234 19 0.9031 -3.7472 4.2169
## Age_Pointings1:vi -0.0157 0.7042 -0.0223 19 0.9824 -1.4895 1.4581
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_pointing, cluster=EffectSizes_short2$Article), btt=2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 19) = 117.5698, p-val < .0001
predict(robust(m_pointing, cluster=EffectSizes_short2$Article), c(1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.3368 0.1622 0.4909 -0.1242 0.6782
predict(robust(m_pointing, cluster=EffectSizes_short2$Article), c(-1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2258 0.0233 0.4104 -0.2483 0.6126
contrasts(EffectSizes_short2$Age_Pointings)
## [,1]
## Over_18 1
## Under 18 -1
EffectSizes_short2 %>%
group_by(Age_Pointings) %>%
summarise(
N_part = sum(n),
K = n()
)
m_language <- rma.mv(yi, vi, mods= ~ Languages*vi, random=~1|Article/Comp, data=EffectSizes_short2) # the Languages2:vi term is redundant in these data and is dropped (see warning)
## Warning: Redundant predictors dropped from the model.
summary(robust(m_language, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:5):
## F(df1 = 4, df2 = 18) = 12.0935, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.1263 0.0614 2.0575 18 0.0544 -0.0027 0.2552 .
## Languages1 0.2100 0.0895 2.3458 18 0.0306 0.0219 0.3981 *
## Languages2 -0.1059 0.0547 -1.9376 18 0.0685 -0.2207 0.0089 .
## vi 1.6932 1.7269 0.9805 18 0.3398 -1.9349 5.3213
## Languages1:vi -2.2328 1.7269 -1.2929 18 0.2124 -5.8609 1.3954
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_language, cluster=EffectSizes_short2$Article), btt=c(2, 3))
##
## Test of Moderators (coefficients 2:3):
## F(df1 = 2, df2 = 18) = 3.9237, p-val = 0.0385
table(EffectSizes_short2$Languages)
##
## English Mixed Other
## 31 8 32
predict(robust(m_language, cluster=EffectSizes_short2$Article), c(-1, -1, 0, 0), transf=transf.ztor) #Other
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.0221 -0.2511 0.2921 -0.4416 0.4765
predict(robust(m_language, cluster=EffectSizes_short2$Article), c(1, 0, 0, 0), transf=transf.ztor) #English
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.3241 0.0889 0.5251 -0.1420 0.6726
predict(robust(m_language, cluster=EffectSizes_short2$Article), c(0, 1, 0, 0), transf=transf.ztor) #Mixed
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.0204 0.0041 0.0366 -0.3719 0.4065
contrasts(EffectSizes_short2$Languages)
## [,1] [,2]
## English 1 0
## Mixed 0 1
## Other -1 -1
EffectSizes_short2 %>%
group_by(Languages) %>%
summarise(
N_part = sum(n),
K = n(),
median_yi = median(yi)
)
m_country <- rma.mv(yi, vi, mods= ~ Countrys*vi, random=~1|Article/Comp, data=EffectSizes_short2) # the Countrys2 terms are redundant in these data and are dropped (see warning)
## Warning: Redundant predictors dropped from the model.
summary(robust(m_country, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 19) = 1.3935, p-val = 0.2754
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.1901 0.0878 2.1666 19 0.0432 0.0065 0.3738 *
## Countrys1 0.1360 0.0878 1.5499 19 0.1377 -0.0477 0.3197
## vi 1.7230 1.8980 0.9078 19 0.3754 -2.2496 5.6955
## Countrys1:vi -1.4138 1.8980 -0.7449 19 0.4655 -5.3863 2.5587
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_country, cluster=EffectSizes_short2$Article), btt=2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 19) = 2.4021, p-val = 0.1377
predict(robust(m_country, cluster=EffectSizes_short2$Article), c(1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.3151 0.0598 0.5317 -0.1534 0.6679
predict(robust(m_country, cluster=EffectSizes_short2$Article), c(-1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.0541 -0.1963 0.2979 -0.3965 0.4836
contrasts(EffectSizes_short2$Countrys)
## [,1] [,2]
## North America 1 0
## Mixed 0 1
## Other -1 -1
EffectSizes_short2 %>%
group_by(Countrys) %>%
summarise(
N_part = sum(n),
K = n()
)
# I got strange predicted values when I allowed the interaction between variance and condition, so I removed the interaction.
m_fo <- rma.mv(yi, vi, mods= ~ Pointing_Measures + vi, random=~1|Article/Comp, data=EffectSizes_short2)
summary(robust(m_fo, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:3):
## F(df1 = 2, df2 = 20) = 0.0482, p-val = 0.9530
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.2342 0.1115 2.0999 20 0.0486 0.0015 0.4669 *
## Pointing_Measures1 0.0040 0.0696 0.0570 20 0.9551 -0.1411 0.1491
## vi 0.5778 1.9039 0.3035 20 0.7647 -3.3936 4.5492
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(robust(m_fo, cluster=EffectSizes_short2$Article), btt=2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 20) = 0.0032, p-val = 0.9551
predict(robust(m_fo, cluster=EffectSizes_short2$Article), c(1, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2338 0.0631 0.3912 -0.2208 0.6049
predict(robust(m_fo, cluster=EffectSizes_short2$Article), c(-1, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2263 -0.1153 0.5200 -0.3099 0.6532
contrasts(EffectSizes_short2$Pointing_Measures)
## [,1]
## frequency 1
## onset -1
EffectSizes_short2 %>%
group_by(Pointing_Measures) %>%
summarise(
N_part = sum(n),
K = n(),
S = sum(as.integer(Pointing_Measures)),
mean_yi = mean(yi)
)
m_design <- rma.mv(yi, vi, mods= ~ Designs*vi, random=~1|Article/Comp, data=EffectSizes_short2)
summary(robust(m_design, cluster=EffectSizes_short2$Article))
##
## Number of outcomes: 71
## Number of clusters: 23
## Outcomes per cluster: 1-8 (mean: 3.09, median: 2)
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 19) = 1.5816, p-val = 0.2268
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.2420 0.0888 2.7240 19 0.0135 0.0561 0.4279 *
## Designs1 0.0300 0.0212 1.4109 19 0.1745 -0.0145 0.0744
## vi 0.4948 1.6133 0.3067 19 0.7624 -2.8819 3.8716
## Designs1:vi -0.8769 0.8958 -0.9789 19 0.3399 -2.7519 0.9980
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
table(EffectSizes_short2$Designs)
##
## Concurrent Longitudinal
## 18 53
anova(robust(m_design, cluster=EffectSizes_short2$Article), btt=2)
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 19) = 1.9905, p-val = 0.1745
predict(robust(m_design, cluster=EffectSizes_short2$Article), c(1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2654 0.0560 0.4525 -0.1940 0.6294
predict(robust(m_design, cluster=EffectSizes_short2$Article), c(-1, 0, 0), transf=transf.ztor)
##
## pred ci.lb ci.ub pi.lb pi.ub
## 0.2089 0.0492 0.3582 -0.2303 0.5774
contrasts(EffectSizes_short2$Designs)
## [,1]
## Concurrent 1
## Longitudinal -1
EffectSizes_short2 %>%
group_by(Designs) %>%
summarise(
N_part = sum(n),
K = n()
)
sessionInfo()
## R version 4.1.0 (2021-05-18)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 19043)
##
## Matrix products: default
##
## locale:
## [1] LC_COLLATE=Dutch_Netherlands.1252 LC_CTYPE=Dutch_Netherlands.1252
## [3] LC_MONETARY=Dutch_Netherlands.1252 LC_NUMERIC=C
## [5] LC_TIME=Dutch_Netherlands.1252
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.7 purrr_0.3.4
## [5] tidyr_1.1.3 tibble_3.1.3 ggplot2_3.3.5 tidyverse_1.3.1
## [9] metafor_3.0-2 Matrix_1.3-3 readr_2.0.0
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.7 lubridate_1.7.10 lattice_0.20-44 assertthat_0.2.1
## [5] digest_0.6.27 utf8_1.2.2 R6_2.5.0 cellranger_1.1.0
## [9] backports_1.2.1 reprex_2.0.1 evaluate_0.14 highr_0.9
## [13] httr_1.4.2 pillar_1.6.2 rlang_0.4.11 readxl_1.3.1
## [17] rstudioapi_0.13 jquerylib_0.1.4 rmarkdown_2.9 mathjaxr_1.4-0
## [21] labeling_0.4.2 bit_4.0.4 munsell_0.5.0 broom_0.7.9
## [25] compiler_4.1.0 modelr_0.1.8 xfun_0.24 pkgconfig_2.0.3
## [29] htmltools_0.5.1.1 tidyselect_1.1.1 fansi_0.5.0 crayon_1.4.1
## [33] tzdb_0.1.2 dbplyr_2.1.1 withr_2.4.2 grid_4.1.0
## [37] nlme_3.1-152 jsonlite_1.7.2 gtable_0.3.0 lifecycle_1.0.0
## [41] DBI_1.1.1 magrittr_2.0.1 scales_1.1.1 cli_3.0.1
## [45] stringi_1.7.3 vroom_1.5.3 farver_2.1.0 fs_1.5.0
## [49] xml2_1.3.2 bslib_0.2.5.1 ellipsis_0.3.2 generics_0.1.0
## [53] vctrs_0.3.8 tools_4.1.0 bit64_4.0.5 glue_1.4.2
## [57] hms_1.1.0 parallel_4.1.0 yaml_2.2.1 colorspace_2.0-2
## [61] rvest_1.0.1 knitr_1.33 haven_2.4.3 sass_0.4.0