Comparison of GGM and DCG as Causal Discovery Tools in Psychological Research
Gaussian Graphical Models (GGM) vs. Directed Cyclic Graphs (DCG)
1 Introduction
1.1 Background
One of the core questions in psychology is how psychopathology comes about. Network theory posits that mental disorders arise from a system of direct, mutual causal interactions between symptoms that reinforce each other via feedback loops (Borsboom, 2017). In practice, empirical researchers often aim to gain insight into these causal relations by fitting statistical network models to observational (cross-sectional) data (Robinaugh et al., 2020). Although statistical network models do not themselves encode causal relations, it has been suggested that researchers can use them to generate causal hypotheses (Borsboom & Cramer, 2013). Some research has examined the utility of network models as causal discovery tools, for example by comparing statistical network models to directed acyclic graph (DAG) models (Dablander & Hinne, 2019). However, as Borsboom (2017) suggests, the true causal dynamics of psychopathology are likely to contain cycles, which raises questions about the suitability of DAGs in this context. This motivates our focus on directed cyclic graph (DCG) models.
1.2 Research Question
We investigate the utility of statistical network models as tools for causal discovery in cyclic settings, compared to directed cyclic graph (DCG) models estimated by a constraint-based causal discovery method. Specifically, we ask: how do statistical network models behave compared to DCGs when the true causal system contains cycles?
2 Methods
To compare the performance of statistical network models and directed cyclic graph (DCG) models as causal discovery tools, we perform a simulation study. Among statistical network models, we use the Gaussian graphical model (GGM), as we focus on the observational (cross-sectional) research setting. In this section, we discuss the causal discovery method we use, the data-generating process, the simulation design, and the evaluation metrics.
2.1 Causal Discovery Algorithm
In this simulation study, we use the cyclic causal discovery (CCD) algorithm (Richardson, 1996b). Other cyclic causal discovery methods exist (Strobl, 2019), but CCD is relatively simple to use and estimates the cyclic structure with asymptotic correctness (Richardson, 1996a). As shown in Figure 1, given the statistical independence relations estimated from the data, CCD outputs a partial ancestral graph (PAG), which represents a Markov equivalence class of DCGs. Typically, a (constraint-based) causal discovery algorithm cannot uniquely identify the underlying true causal graph, but instead returns a set of graphs that imply the same statistical independencies. In the example below, the PAG represents two different directed cyclic graphs that are Markov equivalent, meaning that they are statistically indistinguishable.
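To give a sense of the input CCD works from, the sketch below shows a partial-correlation-based conditional independence test of the kind that constraint-based algorithms rely on under the linear-Gaussian assumption used in this study. This is for illustration only; the function name ci_test and its interface are ours, and the actual tests are carried out internally by the CCD implementation (ccdKP) used in Section 3.
## Illustrative sketch: Fisher's z test for zero partial correlation,
## the building block of constraint-based causal discovery (names are ours)
ci_test <- function(data, x, y, z = character(0), alpha = 0.05) {
  S <- cor(data)                                 # data: matrix/data frame with named columns
  vars <- c(x, y, z)
  P <- solve(S[vars, vars, drop = FALSE])        # invert the relevant submatrix
  pcor <- -P[1, 2] / sqrt(P[1, 1] * P[2, 2])     # partial correlation of x and y given z
  stat <- sqrt(nrow(data) - length(z) - 3) * atanh(pcor)  # Fisher's z statistic
  p_value <- 2 * pnorm(-abs(stat))
  p_value > alpha                                # TRUE: independence not rejected at level alpha
}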
2.2 Generating Data
We simulate data from different cyclic causal models, all characterized by linear causal relationships and independent Gaussian error terms, as is commonly assumed in psychological research. To generate data, we first choose a coefficient matrix \(\boldsymbol{B}\) and a covariance matrix for the error terms \(\boldsymbol{\varepsilon}\). Then, we sample the error terms from a joint normal distribution and generate the observations of \(\boldsymbol{X}\) by solving the following system of equations: \[ \boldsymbol{X} = (\boldsymbol{I} - \boldsymbol{B})^{-1} \boldsymbol{\varepsilon}. \] Note that these equations may not have a unique solution for some causal models. A necessary condition for a unique solution is that \((\boldsymbol{I} - \boldsymbol{B})\) is invertible (so that a cyclic model converges to an equilibrium state). In cyclic models, this is satisfied when the absolute values of the eigenvalues of \(\boldsymbol{B}\) are all smaller than one, \(|\lambda| < 1\) (Eberhardt et al., 2010). Therefore, we check that this condition holds for the specified \(\boldsymbol{B}\) matrix of every simulated cyclic graph.1
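The code in Section 3 generates data with two helper functions, equilibrium_check() and gen_dat(). A minimal sketch of what they could implement is given below, assuming independent standard-normal error terms; the actual helpers used in this document may differ in details (e.g., in how the error variances are specified).
## Minimal sketches of the data-generating helpers (assumption: independent
## standard-normal errors; the actual implementations may differ)
equilibrium_check_sketch <- function(B) {
  # all eigenvalues of B must lie strictly inside the unit circle
  all(abs(eigen(B, only.values = TRUE)$values) < 1)
}
gen_dat_sketch <- function(B, N = 1e6, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)
  p <- ncol(B)
  E <- matrix(rnorm(N * p), N, p)        # error terms, one row per observation
  X <- E %*% t(solve(diag(p) - B))       # row-wise version of X = (I - B)^{-1} e
  colnames(X) <- colnames(B)
  as.data.frame(X)
}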
2.3 Simulation Design
We generate data from different types of cyclic models by varying the number of variables (\(p = 4, 5, 6\)) and the model density (sparse/dense), which results in a 3 × 2 design (see Figure 2). The sample size is fixed at \(10^6\) across all simulated models; such a large sample size is used to minimize sampling variability and to enhance estimation stability.
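For reference, the full design can be enumerated as follows (a small illustrative sketch; the object name design is ours).
## Enumerate the 3 x 2 simulation design (illustration only)
design <- expand.grid(p = c(4, 5, 6), density = c("sparse", "dense"))
design$N <- 1e6   # sample size fixed across all conditions
design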
2.4 Evaluation Metrics
- Density: we look at the overall density of the models in order to gauge how much the GGMs and DCGs deviate from the true model in general. Density is computed as follows: \[ \text{Density} = \frac{\text{Total Number of Edges}}{\text{Total Number of Possible Edges}} = \frac{E}{p(p-1)/2}, \] where \(E\) represents the total number of existing edges in a model and \(p\) represents the number of nodes.
- Degree centrality: we look at degree centrality in order to compare the GGMs with the DCGs at a more local level. Since DCGs present only the edge directions and not the edge weights, while GGMs present only the edge weights and not the directions, degree centrality is the only reasonable metric that allows us to directly compare the two. The degree of node \(i\) is computed as follows: \[ \text{Degree}(i) = \sum_{j=1}^{p} a_{ij}, \] where \(p\) refers to the number of nodes and \(a_{ij}\) represents the element at row \(i\) and column \(j\) of the adjacency matrix \(A\).
Note. Once again, the output of the CCD algorithm is a Markov equivalence class of directed cyclic graphs (DCGs), meaning that it provides a set of statistically equivalent DCGs (as explained in Section 2.1). Therefore, we compute the density and degree (per node) of each DCG that belongs to the equivalence class and take the average of the density/degree to compare with those of the GGMs. A small code sketch of these computations is given below.
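The sketch shows how the two metrics can be computed from a binary adjacency matrix and how the degree is averaged over an equivalence class of DCGs. The function names are ours for illustration; the GGMdensity(), DCGdensity(), GGMdegree(), and DCGdegree() helpers used in Section 3 operate on the qgraph and CCD output objects directly, but follow the same definitions.
## Illustrative sketches of the evaluation metrics (names are ours)
# Density: proportion of node pairs connected by an edge (direction ignored)
density_sketch <- function(A) {
  p <- ncol(A)
  und <- (A != 0) | (t(A) != 0)          # symmetrize so each node pair counts once
  sum(und[upper.tri(und)]) / (p * (p - 1) / 2)
}
# Degree of node i: row sum of the adjacency matrix, as in the formula above
degree_sketch <- function(A) rowSums(A != 0)
# Average degree per node over an equivalence class of DCGs,
# supplied as a list of adjacency matrices
avg_degree_sketch <- function(equiv_class) {
  Reduce(`+`, lapply(equiv_class, degree_sketch)) / length(equiv_class)
}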
3 Results
Below, we compare the GGM with the PAG (i.e., the partial ancestral graph representing the equivalence class of DCGs) for all six simulated cases described in Section 2.3, in terms of the overall density and degree centrality (Section 2.4).
3.1 Sparse four nodes model
See code here.
## Specify B matrix
# set the number of nodes (p)
p = 4
B4 = matrix(c(0, 0, 0, 0,
1, 0, 0.5, 0,
0, 0.5, 0, 0.9,
0, 0, 0, 0), p, p, byrow = T)
colnames(B4) <- c("X1", "X2", "X3", "X4")
## Generate data
# first, equilibrium check (necessary condition for cyclic models to converge)
equilibrium_check(B4)
# generated data with N = 10^6, seed = 12345
data4p <- gen_dat(B4, N =1e6, seed = 12345)
## Specify layout
layout4 = matrix(c(-1,1,
-1,0,
1,0,
1,1),4,2,byrow = T)
layout(t(1:3))
par(oma=c(0, 0, 6, 0))
## True cyclic graph
true4p <- qgraph(t(B4), layout=layout4, labels = colnames(B4),
theme="colorblind", vsize = 20, asize = 10)
title("True cyclic graph", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.11)
## Estimate GGM
ggm4p <- qgraph(cor(data4p), layout=layout4, theme="colorblind", vsize = 20, asize = 10)
title(main = "GGM", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.5)
## Run CCD algorithm
ccd_4p <- ccdKP(df=data4p, dataType = "continuous", alpha = 0.05)
mat4p <- CreateAdjMat(ccd_4p, 4)
## Estimate PAG
pag4p <- plotPAG(ccd_4p, mat4p)
title(main = "PAG", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.86)
## Compute equivalence class of all DCGs given the PAG
# (this takes relatively a long time, so we save it as an Rdata)
# equiv4p <- semiequiv_dcg(ccd_4p, mat4p)
# save(equiv4p, file="data/equiv4p.RData")
load("../data/equiv4p.RData")NOTE. In the PAG representation, there exists two types of underlining that can be used in a triple of nodes: solid underlining (A - B - C) and dotted underlining (A - B - C). The colored nodes (in blue) in PAGs refer to the presence of the solid underlinings and the dashed nodes refer to the presence of the dotted underlinings on the corresponding nodes. These underlinings are used to further orient the edges in a PAG. For more information on this, see (Richardson, 1996b).
3.2 Dense four nodes model
See code here.
## Specify B matrix
# set the number of nodes (p)
p = 4
B4_high = matrix(c(0, 0, 0, 0,
0.9, 0, 0.4, 0,
0, 0.5, 0, .5,
-0.8, 0, 0, 0), p, p, byrow = T)
colnames(B4_high) <- c("X1", "X2", "X3", "X4")
## Generate data
# first, equilibrium check (necessary condition for cyclic models to converge)
equilibrium_check(B4_high)
# generated data with N = 10^6, seed = 1
data4p_high <- gen_dat(B4_high, N =1e6, seed = 1)
## Specify layout
layout4 = matrix(c(-1,1,
-1,0,
1,0,
1,1),4,2,byrow = T)
layout(t(1:3))
par(oma=c(0, 0, 6, 0))
## True cyclic graph
true4p_high <- qgraph(t(B4_high), layout=layout4, labels = colnames(B4_high),
theme="colorblind", vsize = 20, asize = 10)
title("True cyclic graph", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.11)
## Estimate GGM
ggm4p_high <- qgraph(cor(data4p_high), layout=layout4, theme="colorblind", vsize = 20, asize = 10)
title(main = "GGM", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.5)
## Run CCD algorithm
ccd_4p_high <- ccdKP(df=data4p_high, dataType = "continuous", alpha = 0.05)
mat4p_high <- CreateAdjMat(ccd_4p_high, 4)
## Estimate PAG
pag4p_high <- plotPAG(ccd_4p_high, mat4p_high)
title(main = "PAG", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.86)
## Compute equivalence class of all DCGs given the PAG
# (this takes relatively a long time, so we save it as an Rdata)
# equiv4p_high <- semiequiv_dcg(ccd_4p_high, mat4p_high)
# save(equiv4p_high, file="data/equiv4p_high.RData")
load("../data/equiv4p_high.RData")3.3 Sparse five nodes model
See code here.
## Specify B matrix
# set the number of nodes (p)
p = 5
B5 = matrix(c(0, 1, 0, 0, 0,
0, 0, 0, 0.7, 0,
0, 0.4, 0, 0, 0,
0, 0, .5, 0, 0,
0, 0, 0, -1.5, 0), p, p, byrow = T)
colnames(B5) <- c("X1", "X2", "X3", "X4", "X5")
## Generate data
# first, equilibrium check (necessary condition for cyclic models to converge)
equilibrium_check(B5)
# generated data with N = 10^6, seed = 1
data5p <- gen_dat(B5, N =1e6, seed = 1)
## Specify layout
layout5 = matrix(c(0,1,
0,0,
1,-1,
2,0,
2,1),5,2,byrow = T)
layout(t(1:3))
par(oma=c(0, 0, 6, 0))
## True cyclic graph
true5p <- qgraph(t(B5), layout=layout5, labels = colnames(B5),
theme="colorblind", vsize = 20, asize = 10)
title("True cyclic graph", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.11)
## Estimate GGM
ggm5p <- qgraph(cor(data5p), layout = layout5, theme="colorblind", vsize = 20, asize = 10)
title(main = "GGM", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.5)
## Run CCD algorithm
ccd_5p <- ccdKP(df=data5p, dataType = "continuous", alpha = 0.05)
mat5p <- CreateAdjMat(ccd_5p, 5)
## Estimate PAG
pag5p <- plotPAG(ccd_5p, mat5p)
title(main = "PAG", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.86)
## Compute equivalence class of all DCGs given the PAG
# (this takes relatively a long time, so we save it as an Rdata)
# equiv5p <- semiequiv_dcg(ccd_5p, mat5p)
# save(equiv5p, file="data/equiv5p.RData")
load("../data/equiv5p.RData")3.4 Dense five nodes model
See code here.
## Specify B matrix
# set the number of nodes (p)
p = 5
B5_high = matrix(c(0, 0.9, 0, 0, 0.6,
0, 0, 0, 0.7, 0,
0, 0.9, 0, 0, 0,
0, 0, 0.5, 0, 0,
0, 0, 0, 1, 0), p, p, byrow = T)
colnames(B5_high) <- c("X1", "X2", "X3", "X4", "X5")
## Generate data
# first, equilibrium check (necessary condition for cyclic models to converge)
equilibrium_check(B5_high)
# generated data with N = 10^6, seed = 1
data5p_high <- gen_dat(B5_high, N =1e6, seed = 1)
## Specify layout
layout5 = matrix(c(0,1,
0,0,
1,-1,
2,0,
2,1),5,2,byrow = T)
layout(t(1:3))
par(oma=c(0, 0, 6, 0))
## True cyclic graph
true5p_high <- qgraph(t(B5_high), layout=layout5, labels = colnames(B5_high),
theme="colorblind", vsize = 20, asize = 10)
title("True cyclic graph", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.11)
## Estimate GGM
ggm5p_high <- qgraph(cor(data5p_high), layout = layout5, theme="colorblind", vsize = 20, asize = 10)
title(main = "GGM", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.5)
## Run CCD algorithm
ccd_5p_high <- ccdKP(df=data5p_high, dataType = "continuous", alpha = 0.05)
mat5p_high <- CreateAdjMat(ccd_5p_high, 5)
## Estimate PAG
pag5p_high <- plotPAG(ccd_5p_high, mat5p_high)
title(main = "PAG", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.86)
## Compute equivalence class of all DCGs given the PAG
# (this takes relatively a long time, so we save it as an Rdata)
# equiv5p_high <- semiequiv_dcg(ccd_5p_high, mat5p_high)
# save(equiv5p_high, file="data/equiv5p_high.RData")
load("../data/equiv5p_high.RData")3.5 Sparse six nodes model
See code here.
## Specify B matrix
# set the number of nodes (p)
p = 6
B6 = matrix(c(0, 0, 0, 0, 0, 0,
0.3, 0, 0.4, 0, 0, 0,
0, 0, 0, 0.9, 0, 0,
0, 0, 0, 0, 0.4, 0,
0, 0, 1, 0, 0, 0,
1, 0, 0, 0, 0.5, 0), p, p, byrow = T)
colnames(B6) <- c("X1", "X2", "X3", "X4", "X5", "X6")
## Generate data
# first, equilibrium check (necessary condition for cyclic models to converge)
equilibrium_check(B6)
# generated data with N = 10^6, seed = 123
data6p <- gen_dat(B6, N =1e6, seed = 123)
## Specify layout
layout6 = matrix(c(1, 2,
0,1,
0,0,
1,-1,
2,0,
2,1),6,2,byrow = T)
layout(t(1:3))
par(oma=c(0, 0, 6, 0))
## True cyclic graph
true6p <- qgraph(t(B6), layout=layout6, labels = colnames(B6), theme="colorblind", vsize = 20, asize = 10)
title("True cyclic graph", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.11)
## Estimate GGM
ggm6p <- qgraph(cor(data6p), layout = layout6, theme="colorblind", vsize = 20, asize = 10)
title(main = "GGM", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.5)
## Run CCD algorithm
ccd_6p <- ccdKP(df=data6p, dataType = "continuous", alpha = 0.05)
mat6p <- CreateAdjMat(ccd_6p, 6)
## Estimate PAG
pag6p <- plotPAG(ccd_6p, mat6p)
title(main = "PAG", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.86)
## Compute equivalence class of all DCGs given the PAG
# (this takes relatively a long time, so we save it as an Rdata)
# equiv6p <- semiequiv_dcg(ccd_6p, mat6p)
# save(equiv6p, file="data/equiv6p.RData")
load("../data/equiv6p.RData")3.6 Dense six nodes model
See code here.
## Specify B matrix
# set the number of nodes (p)
p = 6
B6_high = matrix(c(0, 0, 0, 0, 0, 0,
0.7, 0, 0.4, 0, 0, 0.9,
0, 0, 0, 0.9, 0, 0,
0, 0, 0, 0, 0.4, 0,
0, 0, 1, 0, 0, 0,
1, 0, 0, 0, 0.5, 0), p, p, byrow = T)
# colnames for the B matrix are necessary for running CCD
colnames(B6_high) <- c("X1", "X2", "X3", "X4", "X5", "X6")
## Generate data
# first, equilibrium check (necessary condition for cyclic models to converge)
equilibrium_check(B6_high)
# generated data with N = 10^6, seed = 123
data6p_high<- gen_dat(B6_high, N =1e6, seed = 123)
layout(t(1:3))
par(oma=c(0, 0, 6, 0))
## True cyclic graph
# we use the layout specified earlier in the 6p-sparse model.
true6p_high <- qgraph(t(B6_high), layout=layout6, labels = colnames(B6_high), theme="colorblind", vsize = 20, asize = 10)
title("True cyclic graph", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.11)
## Estimate GGM
ggm6p_high <- qgraph(cor(data6p_high), layout = layout6, theme="colorblind", vsize = 20, asize = 10)
title(main = "GGM", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.5)
## Run CCD algorithm
ccd_6p_high <- ccdKP(df=data6p_high, dataType = "continuous", alpha = 0.05)
mat6p_high <- CreateAdjMat(ccd_6p_high, 6)
## Estimate PAG
pag6p_high <- plotPAG(ccd_6p_high, mat6p_high)
title(main = "PAG", font.main = 1, cex.main = 1.2, line = 4, outer=TRUE, adj = 0.86)
## Compute equivalence class of all DCGs given the PAG
# (this takes relatively a long time, so we save it as an Rdata)
# equiv6p_high <- semiequiv_dcg(ccd_6p_high, mat6p_high)
# save(equiv6p_high, file="data/equiv6p_high.RData")
load("../data/equiv6p_high.RData")3.7 Density comparison
Below, we can see the overall density for each of the simulated models. Figure 9 (a) shows the density of the sparse models, and Figure 9 (b) shows the density of the dense models. Across conditions, the DCGs approximate the true density better than the GGMs (the red line follows the yellow line more closely). The GGMs (green line) almost always overestimate the density. In addition, the DCGs outperform the GGMs more clearly when the true causal model is sparse. This could be because the causal discovery algorithm (CCD) itself struggles to estimate the structure when the true model is dense.
See code here.
## Compute densities
# density for true models
trueden <- list(B4, B4_high, B5, B5_high, B6, B6_high) %>%
map( ~ truemoddensity(.)) %>% unlist() %>% as.data.frame() %>% rename("TRUE"=".")
# density for GGM
ggmden <- list(ggm4p, ggm4p_high, ggm5p, ggm5p_high, ggm6p, ggm6p_high) %>%
map( ~ GGMdensity(.)) %>% unlist() %>% as.data.frame() %>% rename("GGM"=".")
# density for true DCG
dcgden <- list(equiv4p, equiv4p_high, equiv5p, equiv5p_high, equiv6p, equiv6p_high) %>%
map( ~ DCGdensity(.)) %>% unlist() %>% as.data.frame() %>% rename("DCG"=".")
# bind them together
modeldensities <- bind_cols(trueden, ggmden, dcgden) %>%
magrittr::set_rownames(c("4p-sparse", "4p-dense", "5p-sparse", "5p-dense", "6p-sparse", "6p-dense")) %>%
mutate(model = rownames(.)) %>% tidyr::pivot_longer(!model, names_to = "type", values_to="density")
## Specify my custom theme
MyTheme <- theme(plot.title = element_blank(),
plot.subtitle = element_text( face = "italic"),
axis.text=element_text(face = "bold"),
legend.text = element_text(face = "bold"))
## Create density plots
# Low-density (sparse) conditions
low_densities <- modeldensities %>%
filter(model %in% c("4p-sparse", "5p-sparse", "6p-sparse")) %>%
ggplot(aes(x=model, y=density, group = type, colour = type)) +
scale_colour_manual(values = c("#FF0000", "#00A08A", "#F2AD00"), name= "") +
geom_line(aes(group = type)) +
geom_point() +
labs(x="", y="", title = "", subtitle = "(a) Sparse Condition") +
theme_classic() + MyTheme
# High-density (dense) conditions
high_densities <- modeldensities %>%
filter(model %in% c("4p-dense", "5p-dense", "6p-dense")) %>%
ggplot(aes(x=model, y=density, group = type, colour = type)) +
scale_colour_manual(values = c("#FF0000", "#00A08A", "#F2AD00"), name="") +
geom_line(aes(group = type)) +
geom_point() +
labs(x="", y="", title = "", subtitle = "(b) Dense Condition") +
theme_classic() + MyTheme
# combine plots
ggarrange(low_densities, high_densities, ncol = 1, nrow = 2, common.legend = TRUE, legend = "bottom")
3.8 Degree comparison
Figure 10 below shows the degree centrality for each of the simulated models. Across conditions, the DCGs again approximate the true degree of all nodes more closely than the GGMs, except in the 4node-dense case (see the top-right panel of Figure 10). As previously mentioned, the GGMs almost always overestimate the density, which naturally leads to a higher degree for almost all nodes in the models. The exceptional 4node-dense case appears to be a difficult model for the CCD algorithm to estimate, as it is a rather small model with high density that includes a cycle. As can be seen in the corresponding PAG in Section 3.2, CCD was not able to recover any edge directions, which resulted in a very large class of equivalent DCGs. This is probably why the degrees deviate so much: numerous DCGs were averaged over when computing the degrees in this case.
See code here.
## Compute degrees
trueobj <- list(true4p, true4p_high, true5p, true5p_high, true6p, true6p_high)
ggmobj <- list(ggm4p, ggm4p_high, ggm5p, ggm5p_high, ggm6p, ggm6p_high)
dcgobj <- list(equiv4p, equiv4p_high, equiv5p, equiv5p_high, equiv6p, equiv6p_high)
modelnames <- c("4node-sparse","4node-dense","5node-sparse","5node-dense","6node-sparse","6node-dense")
# storage for computed degrees
deglist <- list()
for(i in seq_along(trueobj)){
# compute the degree for each of the true models and GGMs,
# and the average degree for DCGs
deglist[[i]] <- bind_cols(GGMdegree(trueobj[[i]]),GGMdegree(ggmobj[[i]]), DCGdegree(dcgobj[[i]])) %>%
select(contains(c("node...1", "degree"))) %>%
rename(node = node...1, truedegree = degree...2, ggmdegree = degree...4 , dcg_avgdegree = average_degree) %>%
tidyr::pivot_longer(!node, names_to = "model", values_to = "degree") %>%
mutate(name = modelnames[i]) %>%
suppressMessages() # suppress messages for renaming columns
}
## Create degree centrality plots
degplots <- deglist %>%
map(~
ggplot(data = ., aes(x = degree, y = node, group = model, colour = model)) +
geom_point() + geom_path(aes(group = model)) +
labs(x="", y="", subtitle=.$name[1]) +
scale_colour_manual(values = c("#FF0000", "#00A08A", "#F2AD00"), name = "",
labels = c("DCG", "GGM", "TRUE")) + theme_bw() + MyTheme
)
# combine plots
ggarrange(plotlist = degplots,
ncol = 2, nrow = 3, common.legend = TRUE, legend = "bottom")
4 Conclusion
Based on these results, we conclude that statistical network models perform relatively poorly as causal discovery tools in cyclic settings. Hence, purpose-built cyclic causal discovery methods, such as the CCD algorithm, should be preferred when one is interested in the underlying causal mechanisms of psychological processes.
5 References
6 Session info
sessionInfo()
R version 4.2.2 (2022-10-31)
Platform: aarch64-apple-darwin20 (64-bit)
Running under: macOS Ventura 13.0
Matrix products: default
BLAS: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] grid stats graphics grDevices utils datasets methods
[8] base
other attached packages:
[1] Rgraphviz_2.40.0 graph_1.74.0 BiocGenerics_0.42.0
[4] DOT_0.1 rcausal_1.2.1 devtools_2.4.5
[7] usethis_2.1.6 rJava_1.0-6 purrr_0.3.4
[10] ggpubr_0.4.0 dplyr_1.0.9 ggplot2_3.3.6
[13] pcalg_2.7-7 qgraph_1.9.2
loaded via a namespace (and not attached):
[1] colorspace_2.0-3 ggsignif_0.6.3 deldir_1.0-6
[4] ellipsis_0.3.2 htmlTable_2.4.1 corpcor_1.6.10
[7] base64enc_0.1-3 fs_1.5.2 clue_0.3-61
[10] rstudioapi_0.13 farver_2.1.1 lavaan_0.6-12
[13] remotes_2.4.2 fansi_1.0.3 splines_4.2.2
[16] mnormt_2.1.0 cachem_1.0.6 robustbase_0.95-0
[19] knitr_1.40 glasso_1.11 pkgload_1.3.0
[22] Formula_1.2-4 jsonlite_1.8.0 broom_1.0.0
[25] cluster_2.1.4 png_0.1-7 sfsmisc_1.1-13
[28] shiny_1.7.2 compiler_4.2.2 backports_1.4.1
[31] assertthat_0.2.1 Matrix_1.5-1 fastmap_1.1.0
[34] cli_3.4.1 later_1.3.0 prettyunits_1.1.1
[37] htmltools_0.5.2 tools_4.2.2 igraph_1.3.5
[40] gtable_0.3.0 glue_1.6.2 reshape2_1.4.4
[43] V8_4.2.1 Rcpp_1.0.9 carData_3.0-5
[46] vctrs_0.4.1 nlme_3.1-160 psych_2.2.5
[49] xfun_0.31 stringr_1.4.1 ps_1.7.1
[52] ggm_2.5 mime_0.12 miniUI_0.1.1.1
[55] lifecycle_1.0.1 gtools_3.9.3 rstatix_0.7.0
[58] DEoptimR_1.0-11 MASS_7.3-58.1 scales_1.2.0
[61] promises_1.2.0.1 parallel_4.2.2 RBGL_1.72.0
[64] RColorBrewer_1.1-3 curl_4.3.2 yaml_2.3.5
[67] memoise_2.0.1 pbapply_1.5-0 gridExtra_2.3
[70] bdsmatrix_1.3-6 rpart_4.1.19 fastICA_1.2-3
[73] latticeExtra_0.6-30 stringi_1.7.8 checkmate_2.1.0
[76] pkgbuild_1.3.1 rlang_1.0.5 pkgconfig_2.0.3
[79] evaluate_0.15 lattice_0.20-45 labeling_0.4.2
[82] htmlwidgets_1.5.4 cowplot_1.1.1 tidyselect_1.1.2
[85] processx_3.7.0 plyr_1.8.7 magrittr_2.0.3
[88] R6_2.5.1 profvis_0.3.7 generics_0.1.3
[91] Hmisc_4.7-0 DBI_1.1.3 pillar_1.7.0
[94] foreign_0.8-83 withr_2.5.0 survival_3.4-0
[97] abind_1.4-5 nnet_7.3-18 tibble_3.1.7
[100] crayon_1.5.1 car_3.1-0 interp_1.1-2
[103] fdrtool_1.2.17 utf8_1.2.2 urlchecker_1.0.1
[106] rmarkdown_2.16 jpeg_0.1-9 data.table_1.14.2
[109] pbivnorm_0.6.0 callr_3.7.1 digest_0.6.29
[112] xtable_1.8-4 tidyr_1.2.0 httpuv_1.6.5
[115] stats4_4.2.2 munsell_0.5.0 sessioninfo_1.2.2
Footnotes
1. We fix the RNG seed to allow for full reproduction of the findings: set.seed(12345).