Setup

library(pacman); p_load(lavaan, psych, qgraph, igraph, qpcR, knitr)

FITM <- c("chisq", "df", "nPar", "cfi", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "aic", "bic")

Rationale

There is an unfortunate and growing tendency to subject intelligence test datasets and correlation matrices to factor and network analyses and then to compare the fits of those models to declare a theory victorious, without giving any thought whatsoever to model veracity. The results of doing this have appeared to reliably favor network models, but this support is illusory for a simple reason: the factor models have been fitted based on theoretical constraints, not constraints derived from exploratory methods.

Confirmatory network models are fitted based on exploratory methods, not theoretical constraints. The factor model equivalent of a confirmatory network model is therefore a confirmatory factor model fitted based on an exploratory factor analysis. This asymmetry in how the models are specified is the sole reason for the apparently better fit of network models. Addressing it by comparing like-to-like instead of unlike-to-unlike leads to indistinguishably well-fitting factor and network models and, when done well enough, to indistinguishable parameter estimates.

Despite this issue, the factor and network models that have been fitted so far usually (but not always) produce highly similar estimates (if not fits), suggesting that the amount of unmodeled complexity (which favors less parsimonious models) is not large in typical published examples. Dissimilarity does not arise because factor and network models correspond to different underlying realities; they are, after all, fit to the same data, and their mutual aim is to provide the simplest dimensionality reduction that achieves local independence. If they achieve that, they cannot disagree on any substantive matter concerning fit, manifest score uni- or multidimensionality, or cross-sectional causal implications (see Meredith, 1964; Meredith and Millsap, 1992; Millsap and Meredith, 1992; Meredith, 1993).

Discrepancy thus has to do with misfit, like that obtained when a bifactor model is fitted and factor collapse occurs and goes unaddressed, as happened in Knyspel and Plomin’s (2024) recent study. Heywood and ultra-Heywood cases may be more familiar scenarios in which misfit has endangered a model’s interpretation (e.g., Colom et al., 2004, Fig. 1). For fans of structure-after-measurement modeling, there is also the construct misidentification that occurs when latent variables are mistakenly indicated by the variables that factor scores are meant to be regressed on; see Malanchini et al. (2020; that study treats g as a manifest rather than a latent variable and draws conclusions about the latter despite conducting no tests involving it) and Procopio et al. (2022; that study drew conclusions about specific cognitive abilities net of g but never showed any results involving those abilities net of g).

To illustrate the problem of discovering correct models in domains with model equivalency, I will reanalyze several datasets that have previously been used to compare factor and network models.

Analysis

The Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV)

These data were obtained from Kan, van der Maas and Levine (2019), but they are also contained in the test manual. In a future iteration of this analysis, I will look at the extended correlation matrix and assess replication across age groups; for now, I simply wish to replicate their analysis.

WAISIV <- '
1.00
0.74 1.00
0.64 0.73 1.00
0.71 0.74 0.66 1.00
0.49 0.45 0.44 0.44 1.00
0.51 0.51 0.49 0.49 0.54 1.00
0.44 0.42 0.43 0.43 0.64 0.53 1.00
0.53 0.53 0.51 0.53 0.56 0.57 0.58 1.00
0.44 0.39 0.41 0.40 0.49 0.42 0.48 0.41 1.00
0.48 0.50 0.43 0.48 0.45 0.47 0.40 0.50 0.39 1.00
0.54 0.57 0.57 0.55 0.50 0.52 0.48 0.61 0.37 0.60 1.00
0.45 0.48 0.43 0.47 0.42 0.45 0.41 0.48 0.37 0.69 0.46 1.00
0.35 0.34 0.34 0.32 0.41 0.39 0.38 0.34 0.41 0.40 0.37 0.37 1.00
0.41 0.41 0.34 0.39 0.40 0.45 0.37 0.36 0.38 0.45 0.43 0.38 0.65 1.00
0.23 0.24 0.22 0.21 0.34 0.26 0.32 0.29 0.33 0.34 0.31 0.30 0.46 0.42 1.00'

WIVNames = list("SI", "VC", "IN", "CO", "BD", "MR", "VP", "FW", "PCm", "DS", "AR", "LN", "SS", "CD", "CA")

WAISIV.cor = getCov(WAISIV, names = WIVNames)

NIV <- 1800
fa.parallel(WAISIV.cor, n.obs = NIV)

## Parallel analysis suggests that the number of factors =  5  and the number of components =  2
print(fa(WAISIV.cor, n.obs = NIV, nfactors = 4), cut = .1, sort = T)
## Loading required namespace: GPArotation
## Factor Analysis using method =  minres
## Call: fa(r = WAISIV.cor, nfactors = 4, n.obs = NIV)
## Standardized loadings (pattern matrix) based upon correlation matrix
##     item   MR1   MR4   MR2   MR3   h2     u2 com
## VC     2  0.93                   0.81 0.1939 1.0
## CO     4  0.82                   0.70 0.2997 1.0
## SI     1  0.76                   0.68 0.3169 1.0
## IN     3  0.76                   0.64 0.3584 1.0
## AR    11  0.32  0.27        0.26 0.55 0.4504 2.9
## VP     7        0.84             0.66 0.3450 1.0
## BD     5        0.73             0.62 0.3816 1.0
## FW     8  0.21  0.57        0.15 0.59 0.4059 1.5
## MR     6  0.22  0.45             0.51 0.4909 1.6
## PCm    9  0.11  0.39  0.22       0.38 0.6155 1.8
## SS    13              0.86       0.72 0.2841 1.0
## CD    14  0.10        0.70       0.60 0.3984 1.1
## CA    15 -0.13  0.16  0.47       0.32 0.6788 1.5
## DS    10                    1.02 1.00 0.0025 1.0
## LN    12  0.13  0.13        0.52 0.51 0.4860 1.3
## 
##                        MR1  MR4  MR2  MR3
## SS loadings           3.31 2.54 1.74 1.70
## Proportion Var        0.22 0.17 0.12 0.11
## Cumulative Var        0.22 0.39 0.51 0.62
## Proportion Explained  0.36 0.27 0.19 0.18
## Cumulative Proportion 0.36 0.63 0.82 1.00
## 
##  With factor correlations of 
##      MR1  MR4  MR2  MR3
## MR1 1.00 0.65 0.47 0.58
## MR4 0.65 1.00 0.56 0.54
## MR2 0.47 0.56 1.00 0.50
## MR3 0.58 0.54 0.50 1.00
## 
## Mean item complexity =  1.3
## Test of the hypothesis that 4 factors are sufficient.
## 
## df null model =  105  with the objective function =  8.56 with Chi Square =  15358.31
## df of  the model are 51  and the objective function was  0.16 
## 
## The root mean square of the residuals (RMSR) is  0.02 
## The df corrected root mean square of the residuals is  0.02 
## 
## The harmonic n.obs is  1800 with the empirical chi square  97.97  with prob <  8.5e-05 
## The total n.obs was  1800  with Likelihood Chi Square =  290.36  with prob <  3.2e-35 
## 
## Tucker Lewis Index of factoring reliability =  0.968
## RMSEA index =  0.051  and the 90 % confidence intervals are  0.045 0.057
## BIC =  -91.91
## Fit based upon off diagonal values = 1
print(fa(WAISIV.cor, n.obs = NIV, nfactors = 5), cut = .1, sort = T)
## Factor Analysis using method =  minres
## Call: fa(r = WAISIV.cor, nfactors = 5, n.obs = NIV)
## Standardized loadings (pattern matrix) based upon correlation matrix
##     item   MR1   MR3   MR2   MR4   MR5   h2   u2 com
## VC     2  0.91                         0.81 0.19 1.0
## CO     4  0.81                         0.70 0.30 1.0
## SI     1  0.78                         0.69 0.31 1.0
## IN     3  0.72                    0.10 0.64 0.36 1.1
## VP     7        0.82                   0.66 0.34 1.0
## BD     5        0.70                   0.62 0.38 1.0
## FW     8  0.13  0.47        0.10  0.25 0.60 0.40 1.9
## PCm    9  0.16  0.42  0.20       -0.13 0.41 0.59 2.1
## MR     6  0.19  0.40              0.11 0.51 0.49 1.8
## SS    13              0.85             0.71 0.29 1.0
## CD    14              0.72             0.61 0.39 1.1
## CA    15 -0.14  0.15  0.46  0.10       0.32 0.68 1.5
## DS    10                    0.91       0.86 0.14 1.0
## LN    12  0.12  0.11        0.69 -0.11 0.59 0.41 1.2
## AR    11                          0.73 0.81 0.19 1.1
## 
##                        MR1  MR3  MR2  MR4  MR5
## SS loadings           3.07 2.24 1.72 1.60 0.91
## Proportion Var        0.20 0.15 0.11 0.11 0.06
## Cumulative Var        0.20 0.35 0.47 0.57 0.64
## Proportion Explained  0.32 0.23 0.18 0.17 0.10
## Cumulative Proportion 0.32 0.56 0.74 0.90 1.00
## 
##  With factor correlations of 
##      MR1  MR3  MR2  MR4  MR5
## MR1 1.00 0.62 0.47 0.60 0.62
## MR3 0.62 1.00 0.55 0.56 0.54
## MR2 0.47 0.55 1.00 0.54 0.39
## MR4 0.60 0.56 0.54 1.00 0.59
## MR5 0.62 0.54 0.39 0.59 1.00
## 
## Mean item complexity =  1.2
## Test of the hypothesis that 5 factors are sufficient.
## 
## df null model =  105  with the objective function =  8.56 with Chi Square =  15358.31
## df of  the model are 40  and the objective function was  0.09 
## 
## The root mean square of the residuals (RMSR) is  0.01 
## The df corrected root mean square of the residuals is  0.02 
## 
## The harmonic n.obs is  1800 with the empirical chi square  51.08  with prob <  0.11 
## The total n.obs was  1800  with Likelihood Chi Square =  166.66  with prob <  2.1e-17 
## 
## Tucker Lewis Index of factoring reliability =  0.978
## RMSEA index =  0.042  and the 90 % confidence intervals are  0.035 0.049
## BIC =  -133.16
## Fit based upon off diagonal values = 1
## Measures of factor score adequacy             
##                                                    MR1  MR3  MR2  MR4  MR5
## Correlation of (regression) scores with factors   0.96 0.92 0.91 0.94 0.89
## Multiple R square of scores with factors          0.92 0.85 0.83 0.89 0.80
## Minimum correlation of possible factor scores     0.84 0.70 0.66 0.78 0.60

We’ll remember these EFA results for ultimate comparison to the theoretical CFA and the network model results. For now, let’s fit the theory-based four-factor and five-factor models used, respectively, by Kan, van der Maas and Levine (2019) and Schmank et al. (2019). For further information on four- and five-factor configurations for the WAIS-IV see Weiss et al. (2013) and Staffaroni et al. (2018).

Theory4 <- '
F1 =~ SI + VC + IN + CO
F2 =~ BD + MR + VP + FW + PCm
F3 =~ DS + AR + LN
F4 =~ SS + CD + CA

g =~ F1 + F2 + F3 + F4'

Theory5 <- '
F1 =~ IN + VC + CO + SI
F2 =~ FW + MR
F3 =~ BD + PCm + VP
F4 =~ AR + DS + LN
F5 =~ CA + CD + SS

g =~ F1 + F2 + F3 + F4 + F5'

WAISTheory4 <- cfa(Theory4, sample.cov = WAISIV.cor, sample.nobs = NIV, std.lv = T)
WAISTheory5 <- cfa(Theory5, sample.cov = WAISIV.cor, sample.nobs = NIV, std.lv = T)

We’ll also fit the EFA-based CFAs (cut = 0.3, except for F5, for which cut = 0.2).

EFA4 <- '
F1 =~ VC + CO + SI + IN + AR
F2 =~ VP + BD + FW + MR + PCm
F3 =~ SS + CD + CA
F4 =~ DS + LN

g =~ F1 + F2 + F3 + F4'

EFA5 <- '
F1 =~ VC + CO + SI + IN
F2 =~ VP + BD + FW + PCm + MR
F3 =~ SS + CD + CA
F4 =~ DS + LN
F5 =~ AR + FW

g =~ F1 + F2 + F3 + F4 + F5'

WAISEFA4 <- cfa(EFA4, sample.cov = WAISIV.cor, sample.nobs = NIV, std.lv = T)
WAISEFA5 <- cfa(EFA5, sample.cov = WAISIV.cor, sample.nobs = NIV, std.lv = T)

Now let’s add cross-loadings and residual covariances to make these models truly equivalent to network models, while keeping them more parsimonious than the networks. To do this, we’ll use the EFAs above and the modification indices. To set our goal, we’ll use the network models as a measuring stick: we’ll fit them with the default lambda.min.ratio of 0.01 and ebicTuning of 0.5, using both EBICglasso and stepwise selection.

#Glasso

WAISIVGlassoNet <- qgraph(WAISIV.cor, sampleSize = NIV, graph = "glasso", layout = "spring", legend = FALSE, 
                      palette = "colorblind", theme = "Reddit", 
                      fade = T, threshold = T)

Glasso <- ggmFit(WAISIVGlassoNet, WAISIV.cor, sampleSize = NIV, ebicTuning = 0.5); Glasso
## Refitting network
## 
## ggmFit object:
##  Use plot(object) to plot the network structure 
##  Fit measures stored under object$fitMeasures 
## 
##              Measure     Value
## 1               nvar        15
## 2               nobs       120
## 3               npar        59
## 4                 df        61
## 5               fmin     0.057
## 6              chisq    203.92
## 7             pvalue  < 0.0001
## 8     baseline.chisq  15416.84
## 9        baseline.df       105
## 10   baseline.pvalue         0
## 11               nfi      0.99
## 12               tli      0.98
## 13               rfi      0.98
## 14               ifi      0.99
## 15               rni      0.99
## 16               cfi      0.99
## 17             rmsea     0.036
## 18    rmsea.ci.lower     0.031
## 19    rmsea.ci.upper     0.042
## 20      rmsea.pvalue       1.0
## 21               rmr     0.023
## 22              srmr     0.023
## 23              logl -30697.38
## 24 unrestricted.logl -30595.42
## 25            aic.ll  61512.76
## 26           aic.ll2  61516.83
## 27             aic.x     81.92
## 28            aic.x2    321.92
## 29               bic  61837.00
## 30              bic2  61649.56
## 31              ebic  62156.55
## 32        ebicTuning      0.50
#Stepwise

StepwiseGGM <- ggmModSelect(WAISIV.cor, n = NIV, criterion = "bic")
## Running glasso to obtain starting model...
## Testing all edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Testing all edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Testing all edges...
qgraph(StepwiseGGM$graph, layout = "spring", legend = FALSE, 
       palette = "colorblind", theme = "Reddit", 
       fade = T)$layout

##               [,1]        [,2]
##  [1,]  0.525117346  0.55100104
##  [2,]  0.929660723  0.37642254
##  [3,]  0.520990689  0.01494086
##  [4,]  1.000000000 -0.06564325
##  [5,] -1.000000000 -0.23618065
##  [6,] -0.365684285 -0.28001548
##  [7,] -0.898234084 -0.65480178
##  [8,] -0.347897760 -1.00000000
##  [9,] -0.888974319  0.34526163
## [10,]  0.361576343 -0.98321489
## [11,]  0.101615676 -0.66895400
## [12,]  0.696258716 -0.86184596
## [13,] -0.475996717  1.00000000
## [14,] -0.436851181  0.59033315
## [15,] -0.004597131  0.76003297
Stepwise <- ggmFit(StepwiseGGM$graph, WAISIV.cor, sampleSize = NIV, ebicTuning = 0.5); Stepwise
## Refitting network
## 
## ggmFit object:
##  Use plot(object) to plot the network structure 
##  Fit measures stored under object$fitMeasures 
## 
##              Measure     Value
## 1               nvar        15
## 2               nobs       120
## 3               npar        64
## 4                 df        56
## 5               fmin     0.036
## 6              chisq    129.95
## 7             pvalue  < 0.0001
## 8     baseline.chisq  15416.84
## 9        baseline.df       105
## 10   baseline.pvalue         0
## 11               nfi      0.99
## 12               tli      0.99
## 13               rfi      0.98
## 14               ifi       1.0
## 15               rni       1.0
## 16               cfi       1.0
## 17             rmsea     0.027
## 18    rmsea.ci.lower     0.021
## 19    rmsea.ci.upper     0.033
## 20      rmsea.pvalue       1.0
## 21               rmr     0.012
## 22              srmr     0.012
## 23              logl -30660.40
## 24 unrestricted.logl -30595.42
## 25            aic.ll  61448.79
## 26           aic.ll2  61453.59
## 27             aic.x     17.95
## 28            aic.x2    257.95
## 29               bic  61800.50
## 30              bic2  61597.18
## 31              ebic  62147.14
## 32        ebicTuning      0.50

Great! Via glasso, we have a model with 61 degrees of freedom and a BIC of 61,837, and through stepwise selection, one with 56 degrees of freedom (64 free parameters) and a BIC of 61,800.5.
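
The cross-loadings and residual covariances added below were chosen with the help of the modification indices of the EFA-based CFAs. A minimal sketch of how to inspect them (showing the top rows sorted by index is my own display choice, not output from the original analysis):

# Largest modification indices for the EFA-based four-factor CFA:
# "=~" rows suggest candidate cross-loadings, "~~" rows suggest residual covariances.
MI4 <- modindices(WAISEFA4, sort. = TRUE)
head(MI4[, c("lhs", "op", "rhs", "mi", "epc")], 20)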

EFA4Mod <- '
F1 =~ VC + CO + SI + IN + AR + SS + CA
F2 =~ VP + BD + FW + MR + PCm + AR + VC + CA
F3 =~ SS + CD + CA + FW
F4 =~ DS + LN + PCm

g =~ F1 + F2 + F3 + F4

AR ~~ DS + PCm
VP ~~ BD
FW ~~ PCm + MR
IN ~~ AR + VC + CD
MR ~~ PCm + CA
SI ~~ BD'

WAISEFA4Mod <- cfa(EFA4Mod, sample.cov = WAISIV.cor, sample.nobs = NIV, std.lv = T)

For the four-factor model, adding cross-loadings and residual covariances brings us to superior fit with a more parsimonious model (68 df, versus 61 and 56 for the network models).
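
Since the unmodified EFA-based model is nested in the modified one (the latter only frees additional parameters), a chi-square difference test is one quick way to check that the added parameters are jointly warranted; a minimal sketch:

# Likelihood-ratio comparison of the EFA-based four-factor CFA with and without
# the added cross-loadings and residual covariances.
anova(WAISEFA4, WAISEFA4Mod)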

EFA5Mod <- '
F1 =~ VC + CO + SI + IN + SS
F2 =~ VP + BD + FW + PCm + MR
F3 =~ SS + CD + CA + PCm + FW
F4 =~ DS + LN
F5 =~ AR + FW + IN

g =~ F1 + F2 + F3 + F4 + F5

DS ~~ AR
VP ~~ BD
VC ~~ IN
MR ~~ CD
IN ~~ CD
PCm ~~ AR + FW + MR
VP ~~ FW
SS ~~ AR
CD ~~ DS
VC ~~ PCm'

WAISEFA5Mod <- cfa(EFA5Mod, sample.cov = WAISIV.cor, sample.nobs = NIV, std.lv = T)

For the five-factor model, adding cross-loadings and residual covariances brings us to equivalent fit at 69 df and superior fit at 68 df, in both cases with a model more parsimonious than either network.

extract_fit_measures <- function(model, is_network = FALSE) {
  if (is_network) {
    fit_measures <- list(
      nvar = model$fitMeasures$nvar,
      npar = model$fitMeasures$npar,
      df = model$fitMeasures$df,
      chisq = model$fitMeasures$chisq,
      nfi = model$fitMeasures$nfi,
      tli = model$fitMeasures$tli,
      cfi = model$fitMeasures$cfi,
      rmsea = model$fitMeasures$rmsea,
      rmsea_ci_lower = model$fitMeasures$rmsea.ci.lower,
      rmsea_ci_upper = model$fitMeasures$rmsea.ci.upper,
      rmr = model$fitMeasures$rmr,
      srmr = model$fitMeasures$srmr,
      aic_ll = model$fitMeasures$aic.ll,
      bic = model$fitMeasures$bic)} else {
    fit_measures <- fitMeasures(model)
    fit_measures <- list(
      nvar = NA,  # lavaan models do not have nvar, set it to NA to match the structure
      npar = fit_measures["npar"],
      df = fit_measures["df"],
      chisq = fit_measures["chisq"],
      nfi = fit_measures["nfi"],
      tli = fit_measures["tli"],
      cfi = fit_measures["cfi"],
      rmsea = fit_measures["rmsea"],
      rmsea_ci_lower = fit_measures["rmsea.ci.lower"],
      rmsea_ci_upper = fit_measures["rmsea.ci.upper"],
      rmr = fit_measures["rmr"],
      srmr = fit_measures["srmr"],
      aic_ll = fit_measures["aic"],
      bic = fit_measures["bic"])}
  
  fit_measures <- lapply(fit_measures, function(x) {
    if (is.numeric(x)) {
      round(x, 3)} else {
      NA}})
  
  return(fit_measures)
}

fit_measures_list <- list(
  WAISTheory4 = extract_fit_measures(WAISTheory4),
  WAISTheory5 = extract_fit_measures(WAISTheory5),
  WAISEFA4 = extract_fit_measures(WAISEFA4),
  WAISEFA5 = extract_fit_measures(WAISEFA5),
  Glasso = extract_fit_measures(Glasso, is_network = TRUE),
  Stepwise = extract_fit_measures(Stepwise, is_network = TRUE),
  WAISEFA4Mod = extract_fit_measures(WAISEFA4Mod),
  WAISEFA5Mod = extract_fit_measures(WAISEFA5Mod)
)

fit_measures_combined <- data.frame(
  Measure = names(fit_measures_list[[1]]),
  WAISTheory4 = unlist(fit_measures_list$WAISTheory4, use.names = FALSE),
  WAISTheory5 = unlist(fit_measures_list$WAISTheory5, use.names = FALSE),
  WAISEFA4 = unlist(fit_measures_list$WAISEFA4, use.names = FALSE),
  WAISEFA5 = unlist(fit_measures_list$WAISEFA5, use.names = FALSE),
  Glasso = unlist(fit_measures_list$Glasso, use.names = FALSE),
  Stepwise = unlist(fit_measures_list$Stepwise, use.names = FALSE),
  WAISEFA4Mod = unlist(fit_measures_list$WAISEFA4Mod, use.names = FALSE),
  WAISEFA5Mod = unlist(fit_measures_list$WAISEFA5Mod, use.names = FALSE))

kable(fit_measures_combined, caption = "Selected Fit Measures for Various Models")
Selected Fit Measures for Various Models
Measure WAISTheory4 WAISTheory5 WAISEFA4 WAISEFA5 Glasso Stepwise WAISEFA4Mod WAISEFA5Mod
nvar NA NA NA NA 15.000 15.000 NA NA
npar 34.000 35.000 34.000 36.000 59.000 64.000 52.000 52.000
df 86.000 85.000 86.000 84.000 61.000 56.000 68.000 68.000
chisq 922.557 862.934 964.038 629.179 203.925 129.952 149.120 185.842
nfi 0.940 0.944 0.937 0.959 0.987 0.992 0.990 0.988
tli 0.933 0.937 0.930 0.955 0.984 0.991 0.992 0.988
cfi 0.945 0.949 0.943 0.964 0.991 0.995 0.995 0.992
rmsea 0.074 0.071 0.075 0.060 0.036 0.027 0.026 0.031
rmsea_ci_lower 0.069 0.067 0.071 0.056 0.031 0.021 0.020 0.026
rmsea_ci_upper 0.078 0.076 0.080 0.064 0.042 0.033 0.031 0.036
rmr 0.043 0.042 0.048 0.037 0.023 0.012 0.016 0.022
srmr 0.043 0.042 0.048 0.037 0.023 0.012 0.016 0.022
aic_ll 62181.396 62123.773 62222.876 61892.018 61512.763 61448.790 61443.959 61480.681
bic 62368.244 62316.117 62409.725 62089.857 61837.000 61800.505 61729.727 61766.449

Replications Across Age

  • Post here later (Appendix A)

Replication Across Borders: The Hungarian Edition of the WAIS-IV

The goal here will be to see which model or graph configuration holds up best with the data from the Hungarian norming sample.

MagWAIS <- '
1                                                       
0.75    1                                                   
0.71    0.73    1                                               
0.72    0.73    0.72    1                                           
0.51    0.44    0.44    0.5 1                                       
0.55    0.49    0.51    0.57    0.55    1                                   
0.59    0.56    0.57    0.6 0.57    0.63    1                               
0.58    0.55    0.55    0.55    0.5 0.62    0.64    1                           
0.53    0.49    0.48    0.54    0.56    0.7 0.68    0.61    1                       
0.6 0.52    0.52    0.55    0.52    0.58    0.66    0.57    0.59    1                   
0.54    0.51    0.47    0.52    0.45    0.48    0.51    0.51    0.49    0.62    1               
0.46    0.41    0.4 0.45    0.38    0.43    0.44    0.41    0.41    0.52    0.47    1           
0.34    0.31    0.3 0.34    0.36    0.41    0.35    0.35    0.39    0.37    0.34    0.31    1       
0.49    0.46    0.44    0.48    0.43    0.49    0.47    0.5 0.44    0.42    0.49    0.44    0.49    1   
0.39    0.39    0.36    0.38    0.41    0.48    0.45    0.44    0.45    0.42    0.45    0.4 0.54    0.67    1'

MagNames = list("IN", "VC", "CO", "SI", "PCm", "BD", "FW", "MR", "VP", "AR", "DS", "LN", "CA", "CD", "SS")

MagWAIS.cor = getCov(MagWAIS, names = MagNames)

NMag <- 1112
MagTheory4 <- cfa(Theory4, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T)
#MagTheory5 <- cfa(Theory5, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T) # Fails to converge
MagEFA4 <- cfa(EFA4, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T)
#MagEFA5 <- cfa(EFA5, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T) # Fails to converge
MagEFA4Mod <- cfa(EFA4Mod, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T)
MagEFA5Mod <- cfa(EFA5Mod, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T)
MagGlasso <- ggmFit(WAISIVGlassoNet, MagWAIS.cor, sampleSize = NMag, ebicTuning = 0.5)
## Refitting network
MagStepwise <- ggmFit(StepwiseGGM$graph, MagWAIS.cor, sampleSize = NMag, ebicTuning = 0.5)
## Refitting network
fit_measures_list <- list(
  MagTheory4 = extract_fit_measures(MagTheory4),
  #MagTheory5 = extract_fit_measures(MagTheory5),
  MagEFA4 = extract_fit_measures(MagEFA4),
  #MagEFA5 = extract_fit_measures(MagEFA5),
  MagGlasso = extract_fit_measures(MagGlasso, is_network = TRUE),
  MagStepwise = extract_fit_measures(MagStepwise, is_network = TRUE),
  MagEFA4Mod = extract_fit_measures(MagEFA4Mod),
  MagEFA5Mod = extract_fit_measures(MagEFA5Mod)
)

fit_measures_combined <- data.frame(
  Measure = names(fit_measures_list[[1]]),
  MagTheory4 = unlist(fit_measures_list$MagTheory4, use.names = FALSE),
  #MagTheory5 = unlist(fit_measures_list$MagTheory5, use.names = FALSE),
  MagEFA4 = unlist(fit_measures_list$MagEFA4, use.names = FALSE),
  #MagEFA5 = unlist(fit_measures_list$MagEFA5, use.names = FALSE),
  MagGlasso = unlist(fit_measures_list$MagGlasso, use.names = FALSE),
  MagStepwise = unlist(fit_measures_list$MagStepwise, use.names = FALSE),
  MagEFA4Mod = unlist(fit_measures_list$MagEFA4Mod, use.names = FALSE),
  MagEFA5Mod = unlist(fit_measures_list$MagEFA5Mod, use.names = FALSE))

kable(fit_measures_combined, caption = "Selected Fit Measures for Various Models")
Selected Fit Measures for Various Models
Measure MagTheory4 MagEFA4 MagGlasso MagStepwise MagEFA4Mod MagEFA5Mod
nvar NA NA 15.000 15.000 NA NA
npar 34.000 34.000 59.000 64.000 52.000 52.000
df 86.000 86.000 61.000 56.000 68.000 68.000
chisq 352.268 659.585 823.775 689.577 198.658 186.719
nfi 0.967 0.938 0.923 0.935 0.981 0.983
tli 0.969 0.934 0.876 0.888 0.981 0.983
cfi 0.975 0.946 0.928 0.940 0.988 0.989
rmsea 0.053 0.077 0.106 0.101 0.042 0.040
rmsea_ci_lower 0.047 0.072 0.100 0.094 0.035 0.033
rmsea_ci_upper 0.059 0.083 0.113 0.108 0.048 0.046
rmr 0.029 0.048 0.067 0.051 0.020 0.020
srmr 0.029 0.048 0.067 0.051 0.020 0.020
aic_ll 37063.882 37371.199 37585.389 37461.190 36946.272 36934.332
bic 37234.355 37541.672 37881.210 37782.081 37206.996 37195.056

The theoretical and EFA-based four-factor models ended up being more cross-culturally replicable than either network model, and both of the heavily-modified EFAs also ended up being more replicable. But there’s a parameter-related kicker. MagEFA4Mod and MagEFA5Mod have several parameters that should be dropped because they were nonsignificant in the Hungarian sample.
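
A minimal sketch of how those nonsignificant parameters can be flagged (the .05 cutoff is my own choice, not a threshold stated in the original analysis):

# Loadings and residual covariances in the Hungarian refit whose p-values exceed .05;
# these are the candidates for the trimming applied below.
PE <- parameterEstimates(MagEFA4Mod)
cand <- subset(PE, (op == "=~" | (op == "~~" & lhs != rhs)) & pvalue > .05)
cand[, c("lhs", "op", "rhs", "est", "pvalue")]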

MagEFA4Model <- '
F1 =~ VC + CO + SI + IN + SS + CA
F2 =~ VP + BD + FW + MR + PCm + AR + VC
F3 =~ SS + CD + CA + FW
F4 =~ DS + LN 

g =~ F1 + F2 + F3 + F4

AR ~~ DS 
VP ~~ BD
IN ~~ AR
MR ~~ PCm
SI ~~ BD'

MagEFA4ModCut <- cfa(MagEFA4Model, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T)

summary(MagEFA4ModCut, stand = T, fit = T)
## lavaan 0.6.17 ended normally after 75 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        43
## 
##   Number of observations                          1112
## 
## Model Test User Model:
##                                                       
##   Test statistic                               219.603
##   Degrees of freedom                                77
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                             10677.169
##   Degrees of freedom                               105
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.987
##   Tucker-Lewis Index (TLI)                       0.982
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -18431.608
##   Loglikelihood unrestricted model (H1)     -18321.807
##                                                       
##   Akaike (AIC)                               36949.217
##   Bayesian (BIC)                             37164.815
##   Sample-size adjusted Bayesian (SABIC)      37028.236
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.041
##   90 Percent confidence interval - lower         0.035
##   90 Percent confidence interval - upper         0.047
##   P-value H_0: RMSEA <= 0.050                    0.992
##   P-value H_0: RMSEA >= 0.080                    0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.021
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   F1 =~                                                                 
##     VC                0.487    0.023   20.751    0.000    0.991    0.991
##     CO                0.408    0.019   20.934    0.000    0.830    0.830
##     SI                0.417    0.020   21.255    0.000    0.850    0.850
##     IN                0.422    0.020   21.370    0.000    0.859    0.860
##     SS               -0.155    0.025   -6.074    0.000   -0.315   -0.316
##     CA               -0.066    0.023   -2.883    0.004   -0.134   -0.135
##   F2 =~                                                                 
##     VP                0.275    0.021   13.134    0.000    0.782    0.783
##     BD                0.272    0.021   13.108    0.000    0.773    0.774
##     FW                0.327    0.025   13.356    0.000    0.930    0.930
##     MR                0.273    0.021   13.088    0.000    0.775    0.775
##     PCm               0.244    0.019   12.647    0.000    0.693    0.694
##     AR                0.269    0.020   13.260    0.000    0.765    0.765
##     VC               -0.053    0.014   -3.643    0.000   -0.150   -0.149
##   F3 =~                                                                 
##     SS                0.576    0.028   20.740    0.000    1.068    1.072
##     CD                0.425    0.025   16.808    0.000    0.788    0.789
##     CA                0.389    0.026   14.980    0.000    0.721    0.722
##     FW               -0.062    0.022   -2.868    0.004   -0.116   -0.116
##   F4 =~                                                                 
##     DS                0.177    0.060    2.934    0.003    0.716    0.718
##     LN                0.154    0.052    2.947    0.003    0.620    0.620
##   g =~                                                                  
##     F1                1.773    0.105   16.900    0.000    0.871    0.871
##     F2                2.659    0.223   11.945    0.000    0.936    0.936
##     F3                1.564    0.114   13.764    0.000    0.843    0.843
##     F4                3.911    1.350    2.897    0.004    0.969    0.969
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##  .AR ~~                                                                 
##    .DS                0.119    0.016    7.404    0.000    0.119    0.266
##  .VP ~~                                                                 
##    .BD                0.093    0.015    6.351    0.000    0.093    0.237
##  .IN ~~                                                                 
##    .AR                0.042    0.011    3.694    0.000    0.042    0.128
##  .MR ~~                                                                 
##    .PCm              -0.038    0.016   -2.404    0.016   -0.038   -0.083
##  .SI ~~                                                                 
##    .BD                0.029    0.011    2.573    0.010    0.029    0.087
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .VC                0.238    0.016   15.089    0.000    0.238    0.238
##    .CO                0.311    0.016   19.464    0.000    0.311    0.311
##    .SI                0.277    0.015   18.693    0.000    0.277    0.277
##    .IN                0.259    0.014   18.205    0.000    0.259    0.260
##    .SS                0.247    0.030    8.269    0.000    0.247    0.248
##    .CA                0.601    0.029   20.453    0.000    0.601    0.603
##    .VP                0.387    0.019   19.990    0.000    0.387    0.387
##    .BD                0.400    0.020   20.163    0.000    0.400    0.401
##    .FW                0.291    0.018   16.566    0.000    0.291    0.291
##    .MR                0.399    0.020   20.070    0.000    0.399    0.399
##    .PCm               0.519    0.024   21.321    0.000    0.519    0.519
##    .AR                0.414    0.020   20.656    0.000    0.414    0.415
##    .CD                0.378    0.022   16.977    0.000    0.378    0.378
##    .DS                0.483    0.030   16.300    0.000    0.483    0.485
##    .LN                0.615    0.031   19.994    0.000    0.615    0.615
##    .F1                1.000                               0.241    0.241
##    .F2                1.000                               0.124    0.124
##    .F3                1.000                               0.290    0.290
##    .F4                1.000                               0.061    0.061
##     g                 1.000                               1.000    1.000
MagEFA5Model <- '
F1 =~ VC + CO + SI + IN + SS
F2 =~ VP + BD + FW + PCm + MR
F3 =~ SS + CD + CA
F4 =~ DS + LN
F5 =~ AR + FW + IN

g =~ F1 + F2 + F3 + F4 + F5

DS ~~ AR
VP ~~ BD
VC ~~ IN
MR ~~ CD
PCm ~~ MR
VP ~~ FW
SS ~~ AR
CD ~~ DS
VC ~~ PCm'

MagEFA5ModCut <- cfa(MagEFA5Model, sample.cov = MagWAIS.cor, sample.nobs = NMag, std.lv = T)

summary(MagEFA5ModCut, stand = T, fit = T)
## lavaan 0.6.17 ended normally after 85 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        47
## 
##   Number of observations                          1112
## 
## Model Test User Model:
##                                                       
##   Test statistic                               192.858
##   Degrees of freedom                                73
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                             10677.169
##   Degrees of freedom                               105
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.989
##   Tucker-Lewis Index (TLI)                       0.984
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -18418.236
##   Loglikelihood unrestricted model (H1)     -18321.807
##                                                       
##   Akaike (AIC)                               36930.471
##   Bayesian (BIC)                             37166.125
##   Sample-size adjusted Bayesian (SABIC)      37016.841
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.038
##   90 Percent confidence interval - lower         0.032
##   90 Percent confidence interval - upper         0.045
##   P-value H_0: RMSEA <= 0.050                    0.998
##   P-value H_0: RMSEA >= 0.080                    0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.021
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   F1 =~                                                                 
##     VC                0.460    0.020   23.511    0.000    0.850    0.850
##     CO                0.455    0.019   23.441    0.000    0.840    0.840
##     SI                0.467    0.020   23.840    0.000    0.863    0.863
##     IN                0.400    0.023   17.118    0.000    0.739    0.739
##     SS               -0.134    0.024   -5.597    0.000   -0.248   -0.249
##   F2 =~                                                                 
##     VP                0.267    0.024   10.930    0.000    0.774    0.775
##     BD                0.270    0.025   10.976    0.000    0.784    0.784
##     FW                0.214    0.027    7.812    0.000    0.620    0.621
##     PCm               0.244    0.023   10.635    0.000    0.708    0.709
##     MR                0.273    0.025   10.898    0.000    0.791    0.792
##   F3 =~                                                                 
##     SS                0.610    0.027   22.301    0.000    1.022    1.025
##     CD                0.467    0.023   20.009    0.000    0.782    0.783
##     CA                0.370    0.022   17.132    0.000    0.620    0.620
##   F4 =~                                                                 
##     DS                0.195    0.054    3.632    0.000    0.717    0.717
##     LN                0.171    0.047    3.633    0.000    0.629    0.629
##   F5 =~                                                                 
##     AR                0.482    0.096    5.037    0.000    0.907    0.907
##     FW                0.121    0.027    4.442    0.000    0.227    0.228
##     IN                0.078    0.017    4.668    0.000    0.146    0.146
##   g =~                                                                  
##     F1                1.552    0.086   18.059    0.000    0.841    0.841
##     F2                2.722    0.271   10.051    0.000    0.939    0.939
##     F3                1.344    0.085   15.840    0.000    0.802    0.802
##     F4                3.537    0.991    3.571    0.000    0.962    0.962
##     F5                1.593    0.324    4.924    0.000    0.847    0.847
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##  .DS ~~                                                                 
##    .AR                0.096    0.016    5.832    0.000    0.096    0.327
##  .VP ~~                                                                 
##    .BD                0.093    0.015    6.098    0.000    0.093    0.236
##  .VC ~~                                                                 
##    .IN                0.037    0.013    2.907    0.004    0.037    0.131
##  .MR ~~                                                                 
##    .CD                0.039    0.014    2.804    0.005    0.039    0.104
##  .PCm ~~                                                                
##    .MR               -0.061    0.016   -3.810    0.000   -0.061   -0.142
##  .VP ~~                                                                 
##    .FW                0.055    0.013    4.132    0.000    0.055    0.150
##  .SS ~~                                                                 
##    .AR               -0.029    0.014   -2.128    0.033   -0.029   -0.145
##  .CD ~~                                                                 
##    .DS                0.053    0.015    3.497    0.000    0.053    0.123
##  .VC ~~                                                                 
##    .PCm              -0.027    0.013   -2.083    0.037   -0.027   -0.073
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .VC                0.277    0.017   16.749    0.000    0.277    0.278
##    .CO                0.294    0.016   18.073    0.000    0.294    0.294
##    .SI                0.255    0.015   16.764    0.000    0.255    0.255
##    .IN                0.280    0.016   17.444    0.000    0.280    0.279
##    .SS                0.231    0.031    7.343    0.000    0.231    0.232
##    .VP                0.399    0.021   18.922    0.000    0.399    0.400
##    .BD                0.385    0.020   19.110    0.000    0.385    0.385
##    .FW                0.336    0.017   19.384    0.000    0.336    0.337
##    .PCm               0.496    0.025   20.164    0.000    0.496    0.498
##    .MR                0.373    0.020   18.253    0.000    0.373    0.373
##    .CD                0.386    0.024   16.364    0.000    0.386    0.388
##    .CA                0.615    0.029   21.341    0.000    0.615    0.615
##    .DS                0.486    0.030   16.460    0.000    0.486    0.486
##    .LN                0.604    0.031   19.681    0.000    0.604    0.605
##    .AR                0.177    0.090    1.959    0.050    0.177    0.177
##    .F1                1.000                               0.293    0.293
##    .F2                1.000                               0.119    0.119
##    .F3                1.000                               0.356    0.356
##    .F4                1.000                               0.074    0.074
##    .F5                1.000                               0.283    0.283
##     g                 1.000                               1.000    1.000

Cutting the nonsignificant parameters did not degrade fit, but it made the models more parsimonious. Cutting from the network models would make them more parsimonious too, but it couldn’t re-establish a fit advantage; doing that would require fitting a new network model, as sketched below. The problem with that is the same as in the American WAIS-IV data: you can just fit another factor model that fits even better, with greater parsimony and greater cross-cultural replicability.
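
A minimal sketch of what fitting that fresh network to the Hungarian matrix would look like (this model is not part of the comparisons above; the object names are mine):

# A new EBICglasso network estimated directly on the Hungarian correlations,
# using the same defaults as before (lambda.min.ratio = 0.01, gamma = 0.5).
MagGlassoNet <- EBICglasso(MagWAIS.cor, n = NMag, lambda.min.ratio = 0.01, gamma = 0.5)
MagGlassoRefit <- ggmFit(MagGlassoNet, MagWAIS.cor, sampleSize = NMag, ebicTuning = 0.5)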

The major comparative issue here is that the Hungarian network models fit poorly because their data were forced to conform to the exact estimates, rather than just the configuration, of the U.S. data, whereas the factor models were unfairly advantaged because their loadings and covariances were allowed to vary. So, for a fair comparison, let’s compare the metric-invariant factor model fits.

TheoryAmMagMetric <- cfa(Theory4, 
                       sample.cov = list(WAISIV.cor, MagWAIS.cor), 
                       sample.nobs = list(NIV, NMag), 
                       std.lv = F, group.equal = "loadings") #Contains no residual covariances

EFAAmMagMetric <- cfa(EFA4, 
                       sample.cov = list(WAISIV.cor, MagWAIS.cor), 
                       sample.nobs = list(NIV, NMag), 
                       std.lv = F, group.equal = "loadings") #Contains no residual covariances

EFAModAmMagMetric4 <- cfa(EFA4Mod, 
                       sample.cov = list(WAISIV.cor, MagWAIS.cor), 
                       sample.nobs = list(NIV, NMag), 
                       std.lv = F, group.equal = c("loadings", "residual.covariances"))
  
EFAModAmMagMetric5 <- cfa(EFA5Mod, 
                       sample.cov = list(WAISIV.cor, MagWAIS.cor), 
                       sample.nobs = list(NIV, NMag), 
                       std.lv = F, group.equal = c("loadings", "residual.covariances"))

round(cbind("Theory Metric"          = fitMeasures(TheoryAmMagMetric, FITM),
            "EFA Metric"             = fitMeasures(EFAAmMagMetric, FITM),
            "4 EFA Metric, Modified" = fitMeasures(EFAModAmMagMetric4, FITM),
            "5 EFA Metric, Modified" = fitMeasures(EFAModAmMagMetric5, FITM)), 3)
##                Theory Metric EFA Metric 4 EFA Metric, Modified
## chisq               1323.575   1648.012                467.247
## df                   186.000    186.000                168.000
## npar                  54.000     54.000                 72.000
## cfi                    0.956      0.944                  0.988
## rmsea                  0.065      0.073                  0.035
## rmsea.ci.lower         0.062      0.070                  0.031
## rmsea.ci.upper         0.068      0.077                  0.039
## aic                99266.027  99590.464              98445.699
## bic                99588.763  99913.200              98876.014
##                5 EFA Metric, Modified
## chisq                         470.812
## df                            167.000
## npar                           73.000
## cfi                             0.988
## rmsea                           0.035
## rmsea.ci.lower                  0.032
## rmsea.ci.upper                  0.039
## aic                         98451.264
## bic                         98887.555

The AIC and BIC rose because these are multigroup models, so the sample size and the number of parameters increased accordingly. Nevertheless, appropriately scaled, each factor model still outmatches each network model refit. This remains true when the residual variances are constrained to equality as well.
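
A minimal sketch of that additional check, extending the same group.equal vector (the object name is mine):

# Constrain residual variances to equality across the U.S. and Hungarian samples,
# in addition to loadings and residual covariances.
EFAModAmMagStrict4 <- cfa(EFA4Mod, 
                       sample.cov = list(WAISIV.cor, MagWAIS.cor), 
                       sample.nobs = list(NIV, NMag), 
                       std.lv = F, group.equal = c("loadings", "residual.covariances", "residuals"))
round(fitMeasures(EFAModAmMagStrict4, FITM), 3)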

Before moving on, it is worth noting that, in this case, the theoretical factor model replicated better than even the unmodified EFA-based factor model.

The Programme for International Student Assessment (PISA)

  • Post here later (unlinked)

Woodcock-Johnson I

  • Post here later

Replications Across Age

  • Post here later

Replications Across Race

  • Post here later

Replications Across Sex

  • Post here later

Woodcock-Johnson R

  • Post here later

Replications Across Age

  • Post here later

Replications Across Race

  • Post here later

Replications Across Sex

  • Post here later

Woodcock-Johnson III

  • Post here later

Replications Across Age

  • Post here later

Replications Across Race

  • Post here later

Replications Across Sex

  • Post here later

Holzinger-Swineford

  • Post here later

Spearman’s Courses

  • Post here later

The MIDUS BCATC Across Ages

  • Post here later

Project Talent

  • Post here later

Train

  • Post here later

Test

  • Post here later

Replications Across Race

  • Post here later

Replications Across Sex

  • Post here later

TEDS

This is from Knyspel and Plomin’s (2024) recent study. In that study, they compared only theoretical factor models with network models and concluded that the network models were substantially superior. In terms of fit, they were right about the superiority of the network models over the factor models they presented, but as shown above, that’s a low bar when network models are fitted based on an exploratory procedure and factor models are not.

TEDS <- '
1                                                   
0.37    1                                               
0.41    0.31    1                                           
0.37    0.28    0.46    1                                       
0.41    0.34    0.49    0.47    1                                   
0.39    0.27    0.47    0.42    0.58    1                               
0.2 0.11    0.3 0.29    0.41    0.32    1                           
0.26    0.12    0.33    0.3 0.42    0.35    0.59    1                       
0.47    0.31    0.44    0.45    0.49    0.51    0.34    0.36    1                   
0.51    0.37    0.42    0.4 0.49    0.48    0.25    0.29    0.65    1               
0.42    0.28    0.38    0.38    0.44    0.47    0.31    0.34    0.7 0.59    1           
0.39    0.25    0.41    0.34    0.43    0.42    0.25    0.3 0.45    0.4 0.39    1       
0.39    0.27    0.51    0.48    0.48    0.46    0.31    0.33    0.43    0.39    0.37    0.42    1   
0.35    0.27    0.45    0.41    0.41    0.41    0.25    0.24    0.39    0.36    0.34    0.41    0.46    1'

TEDSNames = list("Rav", "Pic", "Voc", "Gen", "PIAT", "GOAL", "Flu", "TOWRE", "Und", "Non", "Com", "TOAL", "Fig", "Info")

TEDS.cor = getCov(TEDS, names = TEDSNames)

NTEDS <- 5645

First, let’s do EFAs.

fa.parallel(TEDS.cor, n.obs = NTEDS)

## Parallel analysis suggests that the number of factors =  6  and the number of components =  2
print(fa(TEDS.cor, n.obs = NTEDS, nfactors = 4), cut = .3, sort = T)
## Factor Analysis using method =  minres
## Call: fa(r = TEDS.cor, nfactors = 4, n.obs = NTEDS)
## Standardized loadings (pattern matrix) based upon correlation matrix
##       item   MR1   MR3   MR2   MR4   h2   u2 com
## Fig     13  0.76                   0.52 0.48 1.0
## Info    14  0.69                   0.42 0.58 1.0
## Voc      3  0.65                   0.50 0.50 1.1
## Gen      4  0.55                   0.41 0.59 1.0
## TOAL    12  0.44                   0.36 0.64 1.3
## GOAL     6  0.44                   0.48 0.52 1.6
## PIAT     5  0.41                   0.55 0.45 2.2
## Und      9        0.85             0.77 0.23 1.0
## Com     11        0.80             0.64 0.36 1.0
## Non     10        0.56        0.33 0.63 0.37 1.6
## TOWRE    8              0.77       0.60 0.40 1.0
## Flu      7              0.76       0.58 0.42 1.0
## Pic      2                    0.49 0.32 0.68 1.2
## Rav      1                    0.41 0.45 0.55 1.8
## 
##                        MR1  MR3  MR2  MR4
## SS loadings           2.81 2.13 1.41 0.87
## Proportion Var        0.20 0.15 0.10 0.06
## Cumulative Var        0.20 0.35 0.45 0.52
## Proportion Explained  0.39 0.30 0.20 0.12
## Cumulative Proportion 0.39 0.68 0.88 1.00
## 
##  With factor correlations of 
##      MR1  MR3  MR2  MR4
## MR1 1.00 0.68 0.57 0.59
## MR3 0.68 1.00 0.50 0.58
## MR2 0.57 0.50 1.00 0.22
## MR4 0.59 0.58 0.22 1.00
## 
## Mean item complexity =  1.3
## Test of the hypothesis that 4 factors are sufficient.
## 
## df null model =  91  with the objective function =  5.72 with Chi Square =  32273.73
## df of  the model are 41  and the objective function was  0.07 
## 
## The root mean square of the residuals (RMSR) is  0.01 
## The df corrected root mean square of the residuals is  0.02 
## 
## The harmonic n.obs is  5645 with the empirical chi square  212.85  with prob <  4.6e-25 
## The total n.obs was  5645  with Likelihood Chi Square =  407.71  with prob <  6.4e-62 
## 
## Tucker Lewis Index of factoring reliability =  0.975
## RMSEA index =  0.04  and the 90 % confidence intervals are  0.036 0.043
## BIC =  53.54
## Fit based upon off diagonal values = 1
## Measures of factor score adequacy             
##                                                    MR1  MR3  MR2  MR4
## Correlation of (regression) scores with factors   0.92 0.94 0.88 0.80
## Multiple R square of scores with factors          0.86 0.88 0.78 0.64
## Minimum correlation of possible factor scores     0.71 0.75 0.56 0.28
print(fa(TEDS.cor, n.obs = NTEDS, nfactors = 5), cut = .3, sort = T)
## Factor Analysis using method =  minres
## Call: fa(r = TEDS.cor, nfactors = 5, n.obs = NTEDS)
## Standardized loadings (pattern matrix) based upon correlation matrix
##       item   MR1   MR3   MR2   MR4   MR5   h2     u2 com
## Fig     13  0.75                         0.53 0.4684 1.0
## Info    14  0.68                         0.42 0.5759 1.0
## Voc      3  0.64                         0.50 0.4983 1.0
## Gen      4  0.52                         0.41 0.5870 1.1
## TOAL    12  0.41                         0.36 0.6371 1.4
## GOAL     6  0.33                         0.48 0.5165 2.7
## Und      9        0.86                   0.77 0.2329 1.0
## Com     11        0.82                   0.64 0.3588 1.0
## Non     10        0.57                   0.62 0.3756 1.5
## TOWRE    8              1.01             1.00 0.0046 1.0
## Flu      7              0.45             0.41 0.5913 1.7
## PIAT     5                    1.00       1.00 0.0044 1.0
## Rav      1                          0.44 0.50 0.5043 1.8
## Pic      2                          0.38 0.29 0.7073 1.6
## 
##                        MR1  MR3  MR2  MR4  MR5
## SS loadings           2.46 2.18 1.34 1.32 0.64
## Proportion Var        0.18 0.16 0.10 0.09 0.05
## Cumulative Var        0.18 0.33 0.43 0.52 0.57
## Proportion Explained  0.31 0.27 0.17 0.17 0.08
## Cumulative Proportion 0.31 0.58 0.75 0.92 1.00
## 
##  With factor correlations of 
##      MR1  MR3  MR2  MR4  MR5
## MR1 1.00 0.68 0.44 0.67 0.43
## MR3 0.68 1.00 0.41 0.56 0.47
## MR2 0.44 0.41 1.00 0.43 0.06
## MR4 0.67 0.56 0.43 1.00 0.31
## MR5 0.43 0.47 0.06 0.31 1.00
## 
## Mean item complexity =  1.3
## Test of the hypothesis that 5 factors are sufficient.
## 
## df null model =  91  with the objective function =  5.72 with Chi Square =  32273.73
## df of  the model are 31  and the objective function was  0.02 
## 
## The root mean square of the residuals (RMSR) is  0.01 
## The df corrected root mean square of the residuals is  0.01 
## 
## The harmonic n.obs is  5645 with the empirical chi square  72.82  with prob <  3.2e-05 
## The total n.obs was  5645  with Likelihood Chi Square =  121.95  with prob <  9.9e-13 
## 
## Tucker Lewis Index of factoring reliability =  0.992
## RMSEA index =  0.023  and the 90 % confidence intervals are  0.019 0.027
## BIC =  -145.85
## Fit based upon off diagonal values = 1
## Measures of factor score adequacy             
##                                                    MR1  MR3  MR2  MR4  MR5
## Correlation of (regression) scores with factors   0.92 0.94 1.00 1.00 0.74
## Multiple R square of scores with factors          0.84 0.88 1.00 1.00 0.55
## Minimum correlation of possible factor scores     0.69 0.76 0.99 0.99 0.11
print(fa(TEDS.cor, n.obs = NTEDS, nfactors = 6), cut = .3, sort = T)
## Factor Analysis using method =  minres
## Call: fa(r = TEDS.cor, nfactors = 6, n.obs = NTEDS)
## Standardized loadings (pattern matrix) based upon correlation matrix
##       item   MR3   MR1   MR2   MR4   MR5   MR6   h2     u2 com
## Fig     13  0.72                               0.53 0.4688 1.0
## Voc      3  0.62                               0.50 0.4960 1.1
## Gen      4  0.60                               0.44 0.5598 1.1
## Info    14  0.58                               0.41 0.5856 1.1
## GOAL     6                                     0.48 0.5188 3.1
## Und      9        0.85                         0.76 0.2351 1.0
## Com     11        0.81                         0.64 0.3582 1.0
## Non     10        0.57                         0.63 0.3728 1.5
## TOWRE    8              1.00                   1.00 0.0047 1.0
## Flu      7              0.45                   0.41 0.5889 1.8
## PIAT     5                    1.00             1.00 0.0043 1.0
## TOAL    12                          0.79       0.67 0.3340 1.0
## Rav      1                                0.41 0.49 0.5134 1.9
## Pic      2                                0.38 0.30 0.6985 1.6
## 
##                        MR3  MR1  MR2  MR4  MR5  MR6
## SS loadings           2.14 2.11 1.32 1.30 0.82 0.57
## Proportion Var        0.15 0.15 0.09 0.09 0.06 0.04
## Cumulative Var        0.15 0.30 0.40 0.49 0.55 0.59
## Proportion Explained  0.26 0.26 0.16 0.16 0.10 0.07
## Cumulative Proportion 0.26 0.51 0.67 0.83 0.93 1.00
## 
##  With factor correlations of 
##      MR3  MR1  MR2  MR4  MR5  MR6
## MR3 1.00 0.66 0.43 0.66 0.66 0.37
## MR1 0.66 1.00 0.41 0.56 0.57 0.42
## MR2 0.43 0.41 1.00 0.43 0.34 0.02
## MR4 0.66 0.56 0.43 1.00 0.50 0.28
## MR5 0.66 0.57 0.34 0.50 1.00 0.30
## MR6 0.37 0.42 0.02 0.28 0.30 1.00
## 
## Mean item complexity =  1.4
## Test of the hypothesis that 6 factors are sufficient.
## 
## df null model =  91  with the objective function =  5.72 with Chi Square =  32273.73
## df of  the model are 22  and the objective function was  0.01 
## 
## The root mean square of the residuals (RMSR) is  0 
## The df corrected root mean square of the residuals is  0.01 
## 
## The harmonic n.obs is  5645 with the empirical chi square  18.12  with prob <  0.7 
## The total n.obs was  5645  with Likelihood Chi Square =  41.42  with prob <  0.0073 
## 
## Tucker Lewis Index of factoring reliability =  0.998
## RMSEA index =  0.013  and the 90 % confidence intervals are  0.006 0.018
## BIC =  -148.63
## Fit based upon off diagonal values = 1
## Measures of factor score adequacy             
##                                                    MR3  MR1  MR2  MR4  MR5  MR6
## Correlation of (regression) scores with factors   0.91 0.94 1.00 1.00 0.86 0.72
## Multiple R square of scores with factors          0.83 0.88 1.00 1.00 0.73 0.52
## Minimum correlation of possible factor scores     0.67 0.75 0.99 0.99 0.47 0.04

The five-factor configuration doesn’t work, since it suggests a loading of 1 for the PIAT on the fourth factor. The six-factor configuration could work with cross-loadings. For simplicity, let’s just go with four.

TEDS4 <- '
F1 =~ Fig + Info + Voc + Gen + TOAL + GOAL + PIAT
F2 =~ Und + Com + Non
F3 =~ TOWRE + Flu
F4 =~ Pic + Rav

g =~ F1 + F2 + F3 + F4'

TEDS4Mod <- cfa(TEDS4, sample.cov = TEDS.cor, sample.nobs = NTEDS, std.lv = T)

summary(TEDS4Mod, stand = T, fit = T)
## lavaan 0.6.17 ended normally after 48 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        32
## 
##   Number of observations                          5645
## 
## Model Test User Model:
##                                                       
##   Test statistic                              1323.417
##   Degrees of freedom                                73
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                             32310.939
##   Degrees of freedom                                91
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.961
##   Tucker-Lewis Index (TLI)                       0.952
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -96637.950
##   Loglikelihood unrestricted model (H1)     -95976.242
##                                                       
##   Akaike (AIC)                              193339.901
##   Bayesian (BIC)                            193552.334
##   Sample-size adjusted Bayesian (SABIC)     193450.647
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.055
##   90 Percent confidence interval - lower         0.053
##   90 Percent confidence interval - upper         0.058
##   P-value H_0: RMSEA <= 0.050                    0.001
##   P-value H_0: RMSEA >= 0.080                    0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.033
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   F1 =~                                                                 
##     Fig               0.234    0.013   17.634    0.000    0.683    0.683
##     Info              0.208    0.012   17.348    0.000    0.609    0.609
##     Voc               0.236    0.013   17.653    0.000    0.689    0.689
##     Gen               0.219    0.013   17.485    0.000    0.641    0.641
##     TOAL              0.208    0.012   17.344    0.000    0.608    0.608
##     GOAL              0.243    0.014   17.714    0.000    0.709    0.710
##     PIAT              0.253    0.014   17.790    0.000    0.739    0.739
##   F2 =~                                                                 
##     Und               0.461    0.011   41.204    0.000    0.866    0.866
##     Com               0.419    0.010   40.153    0.000    0.788    0.788
##     Non               0.408    0.010   39.601    0.000    0.765    0.765
##   F3 =~                                                                 
##     TOWRE             0.633    0.014   45.737    0.000    0.802    0.802
##     Flu               0.581    0.012   46.488    0.000    0.736    0.736
##   F4 =~                                                                 
##     Pic               0.262    0.014   18.210    0.000    0.510    0.510
##     Rav               0.372    0.021   17.479    0.000    0.725    0.725
##   g =~                                                                  
##     F1                2.748    0.171   16.112    0.000    0.940    0.940
##     F2                1.589    0.050   31.555    0.000    0.846    0.846
##     F3                0.778    0.025   31.265    0.000    0.614    0.614
##     F4                1.673    0.102   16.464    0.000    0.858    0.858
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .Fig               0.533    0.011   47.031    0.000    0.533    0.533
##    .Info              0.629    0.013   49.042    0.000    0.629    0.629
##    .Voc               0.525    0.011   46.822    0.000    0.525    0.525
##    .Gen               0.589    0.012   48.276    0.000    0.589    0.589
##    .TOAL              0.630    0.013   49.062    0.000    0.630    0.631
##    .GOAL              0.496    0.011   46.055    0.000    0.496    0.497
##    .PIAT              0.454    0.010   44.728    0.000    0.454    0.454
##    .Und               0.250    0.008   30.452    0.000    0.250    0.250
##    .Com               0.380    0.009   40.617    0.000    0.380    0.380
##    .Non               0.414    0.010   42.413    0.000    0.414    0.414
##    .TOWRE             0.357    0.017   20.622    0.000    0.357    0.357
##    .Flu               0.459    0.016   28.756    0.000    0.459    0.459
##    .Pic               0.739    0.016   46.526    0.000    0.739    0.740
##    .Rav               0.474    0.018   26.521    0.000    0.474    0.474
##    .F1                1.000                               0.117    0.117
##    .F2                1.000                               0.284    0.284
##    .F3                1.000                               0.623    0.623
##    .F4                1.000                               0.263    0.263
##     g                 1.000                               1.000    1.000

The network models this will be compared against have the following fits.

#Glasso

TEDSGlassoNet <- qgraph(TEDS.cor, sampleSize = NTEDS, graph = "glasso", layout = "spring", legend = FALSE, 
                      palette = "colorblind", theme = "Reddit", 
                      fade = T, threshold = T)
## Note: Network with lowest lambda selected as best network: assumption of sparsity might be violated.

Glasso <- ggmFit(TEDSGlassoNet, TEDS.cor, sampleSize = NTEDS, ebicTuning = 0.5); Glasso
## Refitting network
## 
## ggmFit object:
##  Use plot(object) to plot the network structure 
##  Fit measures stored under object$fitMeasures 
## 
##              Measure     Value
## 1               nvar        14
## 2               nobs       105
## 3               npar        65
## 4                 df        40
## 5               fmin     0.017
## 6              chisq    191.71
## 7             pvalue  < 0.0001
## 8     baseline.chisq  32310.94
## 9        baseline.df        91
## 10   baseline.pvalue         0
## 11               nfi      0.99
## 12               tli      0.99
## 13               rfi      0.99
## 14               ifi       1.0
## 15               rni       1.0
## 16               cfi       1.0
## 17             rmsea     0.026
## 18    rmsea.ci.lower     0.022
## 19    rmsea.ci.upper     0.030
## 20      rmsea.pvalue         1
## 21               rmr     0.013
## 22              srmr     0.013
## 23              logl -96072.10
## 24 unrestricted.logl -95976.24
## 25            aic.ll 192274.19
## 26           aic.ll2 192275.73
## 27             aic.x    111.71
## 28            aic.x2    321.71
## 29               bic 192705.70
## 30              bic2 192499.15
## 31              ebic 193048.78
## 32        ebicTuning      0.50
#Stepwise

StepwiseGGM <- ggmModSelect(TEDS.cor, n = NTEDS, criterion = "bic")
## Running glasso to obtain starting model...
## Testing all edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Testing all edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Testing all edges...
## Changed one edge...
## Testing subset of edges...
## Changed one edge...
## Testing subset of edges...
## Testing all edges...
qgraph(StepwiseGGM$graph, layout = "spring", legend = FALSE, 
       palette = "colorblind", theme = "Reddit", 
       fade = T)$layout

##             [,1]        [,2]
##  [1,] -0.1380909 -0.41720100
##  [2,] -1.0000000 -0.20025809
##  [3,] -0.6935957  0.53021451
##  [4,]  0.1756742  0.54878304
##  [5,] -0.2604888 -0.00226633
##  [6,]  0.4275603  0.06681167
##  [7,] -0.3662274 -1.00000000
##  [8,] -0.7374277 -0.83347147
##  [9,]  0.9611905 -0.41695883
## [10,]  0.3888447 -0.73947987
## [11,]  1.0000000 -0.81375511
## [12,]  0.9423205  0.42362457
## [13,] -0.2904019  0.85257521
## [14,]  0.3736771  1.00000000
Stepwise <- ggmFit(StepwiseGGM$graph, TEDS.cor, sampleSize = NTEDS, ebicTuning = 0.5); Stepwise
## Refitting network
## 
## ggmFit object:
##  Use plot(object) to plot the network structure 
##  Fit measures stored under object$fitMeasures 
## 
##              Measure     Value
## 1               nvar        14
## 2               nobs       105
## 3               npar        71
## 4                 df        34
## 5               fmin    0.0089
## 6              chisq    100.61
## 7             pvalue  < 0.0001
## 8     baseline.chisq  32310.94
## 9        baseline.df        91
## 10   baseline.pvalue         0
## 11               nfi       1.0
## 12               tli      0.99
## 13               rfi      0.99
## 14               ifi       1.0
## 15               rni       1.0
## 16               cfi       1.0
## 17             rmsea     0.019
## 18    rmsea.ci.lower     0.014
## 19    rmsea.ci.upper     0.023
## 20      rmsea.pvalue         1
## 21               rmr    0.0079
## 22              srmr    0.0079
## 23              logl -96026.55
## 24 unrestricted.logl -95976.24
## 25            aic.ll 192195.09
## 26           aic.ll2 192196.93
## 27             aic.x     32.61
## 28            aic.x2    242.61
## 29               bic 192666.43
## 30              bic2 192440.81
## 31              ebic 193041.18
## 32        ebicTuning      0.50

The numbers to beat are 40 df for the Glasso network and 34 df for the stepwise network, with the stepwise model's AIC of 192195.09 and BIC of 192666.43. The higher-order factor model above has 73 df, but its AIC and BIC exceed the stepwise network's by 1144.811 and 885.904, so it still trails both networks on both criteria.
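As a quick check, those deltas can be recomputed from the fitted factor model; this is a minimal sketch rather than part of the original output, and CFAFit is just a scratch name. The network AIC and BIC are taken from the stepwise output above.

CFAFit <- fitMeasures(TEDS4Mod, c("df", "aic", "bic"))  # 73 df, AIC 193339.90, BIC 193552.33
CFAFit["aic"] - 192195.09  # AIC delta versus the stepwise network, about 1144.81
CFAFit["bic"] - 192666.43  # BIC delta versus the stepwise network, about 885.90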

TEDS4Alt <- '
F1 =~ Fig + Info + Voc + Gen + TOAL + GOAL + PIAT + Non
F2 =~ Und + Com + Non + GOAL + TOAL + PIAT
F3 =~ TOWRE + Flu + PIAT + Pic + Rav
F4 =~ Pic + Rav + Non

g =~ F1 + F2 + F3 + F4

Non ~~ Rav + Pic + PIAT
GOAL ~~ PIAT
Gen ~~ TOAL + Und
PIAT ~~ Pic
Info ~~ TOWRE + TOAL + PIAT
TOAL ~~ Rav
Flu ~~ Rav'

TEDS4ModAlt <- cfa(TEDS4Alt, sample.cov = TEDS.cor, sample.nobs = NTEDS, std.lv = T)

summary(TEDS4ModAlt, stand = T, fit = T)
## lavaan 0.6.17 ended normally after 80 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        52
## 
##   Number of observations                          5645
## 
## Model Test User Model:
##                                                       
##   Test statistic                               131.644
##   Degrees of freedom                                53
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                             32310.939
##   Degrees of freedom                                91
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.998
##   Tucker-Lewis Index (TLI)                       0.996
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -96042.064
##   Loglikelihood unrestricted model (H1)     -95976.242
##                                                       
##   Akaike (AIC)                              192188.128
##   Bayesian (BIC)                            192533.332
##   Sample-size adjusted Bayesian (SABIC)     192368.091
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.016
##   90 Percent confidence interval - lower         0.013
##   90 Percent confidence interval - upper         0.020
##   P-value H_0: RMSEA <= 0.050                    1.000
##   P-value H_0: RMSEA >= 0.080                    0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.009
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   F1 =~                                                                 
##     Fig               0.351    0.012   28.564    0.000    0.710    0.710
##     Info              0.312    0.011   27.316    0.000    0.631    0.631
##     Voc               0.351    0.012   28.572    0.000    0.711    0.711
##     Gen               0.327    0.012   28.028    0.000    0.662    0.662
##     TOAL              0.227    0.012   19.595    0.000    0.460    0.460
##     GOAL              0.231    0.011   21.456    0.000    0.468    0.468
##     PIAT              0.254    0.011   22.298    0.000    0.514    0.515
##     Non               0.237    0.028    8.524    0.000    0.480    0.481
##   F2 =~                                                                 
##     Und               0.529    0.012   44.976    0.000    0.877    0.878
##     Com               0.479    0.011   44.400    0.000    0.795    0.795
##     Non               0.524    0.022   24.178    0.000    0.868    0.869
##     GOAL              0.160    0.011   13.957    0.000    0.265    0.265
##     TOAL              0.112    0.012    9.041    0.000    0.186    0.186
##     PIAT              0.066    0.011    5.808    0.000    0.109    0.109
##   F3 =~                                                                 
##     TOWRE             0.588    0.014   43.142    0.000    0.787    0.788
##     Flu               0.558    0.013   43.418    0.000    0.748    0.748
##     PIAT              0.133    0.012   11.335    0.000    0.178    0.178
##     Pic              -0.207    0.018  -11.597    0.000   -0.277   -0.277
##     Rav              -0.147    0.018   -8.036    0.000   -0.197   -0.197
##   F4 =~                                                                 
##     Pic               0.179    0.028    6.300    0.000    0.667    0.668
##     Rav               0.218    0.035    6.179    0.000    0.814    0.814
##     Non              -0.160    0.038   -4.227    0.000   -0.599   -0.600
##   g =~                                                                  
##     F1                1.761    0.074   23.899    0.000    0.870    0.870
##     F2                1.323    0.041   31.947    0.000    0.798    0.798
##     F3                0.892    0.031   28.376    0.000    0.666    0.666
##     F4                3.599    0.634    5.678    0.000    0.964    0.964
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##  .Non ~~                                                                
##    .Rav               0.191    0.021    9.166    0.000    0.191    0.449
##    .Pic               0.159    0.018    8.839    0.000    0.159    0.315
##  .PIAT ~~                                                               
##    .Non               0.044    0.007    6.436    0.000    0.044    0.107
##  .GOAL ~~                                                               
##    .PIAT              0.103    0.008   12.843    0.000    0.103    0.202
##  .Gen ~~                                                                
##    .TOAL             -0.052    0.009   -5.678    0.000   -0.052   -0.088
##    .Und               0.034    0.007    5.053    0.000    0.034    0.096
##  .PIAT ~~                                                               
##    .Pic               0.068    0.009    7.766    0.000    0.068    0.115
##  .Info ~~                                                               
##    .TOWRE            -0.044    0.008   -5.146    0.000   -0.044   -0.092
##    .TOAL              0.040    0.010    4.116    0.000    0.040    0.064
##    .PIAT             -0.027    0.008   -3.392    0.001   -0.027   -0.050
##  .TOAL ~~                                                               
##    .Rav               0.032    0.009    3.574    0.000    0.032    0.057
##  .Flu ~~                                                                
##    .Rav              -0.035    0.009   -3.817    0.000   -0.035   -0.075
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .Fig               0.495    0.011   43.824    0.000    0.495    0.495
##    .Info              0.603    0.013   46.251    0.000    0.603    0.602
##    .Voc               0.494    0.011   43.782    0.000    0.494    0.494
##    .Gen               0.561    0.012   45.680    0.000    0.561    0.562
##    .TOAL              0.635    0.013   47.906    0.000    0.635    0.635
##    .GOAL              0.538    0.011   47.907    0.000    0.538    0.538
##    .PIAT              0.485    0.011   45.018    0.000    0.485    0.486
##    .Non               0.359    0.028   12.890    0.000    0.359    0.359
##    .Und               0.229    0.009   26.027    0.000    0.229    0.230
##    .Com               0.367    0.009   39.033    0.000    0.367    0.368
##    .TOWRE             0.379    0.015   24.569    0.000    0.379    0.379
##    .Flu               0.441    0.015   29.784    0.000    0.441    0.441
##    .Pic               0.714    0.017   42.400    0.000    0.714    0.715
##    .Rav               0.504    0.017   29.896    0.000    0.504    0.504
##    .F1                1.000                               0.244    0.244
##    .F2                1.000                               0.364    0.364
##    .F3                1.000                               0.557    0.557
##    .F4                1.000                               0.072    0.072
##     g                 1.000                               1.000    1.000

A model with the configuration

F1 =~ Fig + Info + Voc + Gen + TOAL + GOAL + PIAT
F2 =~ Und + Com + Non + GOAL + TOAL
F3 =~ TOWRE + Flu + PIAT + Pic + Rav
F4 =~ Pic + Rav

g =~ F1 + F2 + F3 + F4

Non ~~ Rav + Pic + PIAT
GOAL ~~ PIAT
Gen ~~ TOAL
PIAT ~~ Pic

matches the best-fitting network model in terms of BIC but remains inferior in terms of AIC. It has 62 df (a sketch of fitting it appears below). Adding a loading for Non on F1 makes it strictly superior in terms of BIC, but still not AIC. It ultimately took going down to 53 df to decisively beat the best-fitting network models on both criteria. Because BIC favors parsimony, I suspect the parameters added after the factor model became the BIC leader won't replicate as well in a holdout sample.
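For completeness, a sketch of fitting that intermediate 62-df configuration; the model string simply transcribes the configuration listed above, and the object names are mine rather than the original code.

TEDSIntermediate <- '
F1 =~ Fig + Info + Voc + Gen + TOAL + GOAL + PIAT
F2 =~ Und + Com + Non + GOAL + TOAL
F3 =~ TOWRE + Flu + PIAT + Pic + Rav
F4 =~ Pic + Rav

g =~ F1 + F2 + F3 + F4

Non ~~ Rav + Pic + PIAT
GOAL ~~ PIAT
Gen ~~ TOAL
PIAT ~~ Pic'

TEDSIntMod <- cfa(TEDSIntermediate, sample.cov = TEDS.cor, sample.nobs = NTEDS, std.lv = T)
fitMeasures(TEDSIntMod, FITM)  # compare against the Glasso and stepwise network fits above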

Discussion

Network models are new and feel like an alternative to an old paradigm, so they are attractive, but they are being used carelessly. They have not been shown to be superior to traditional factor models, either in terms of fit or in terms of their predictions. In fact, while the factor-analytic tradition has been quite generative over the last century, the network-analytic one hasn't yet borne fruit, and it has arguably been degenerate since its inception.

Because the good fit of network models does not seem to result from explaining the data so much as from being fit to it, the more systematic unmodeled complexity a dataset contains, the more network models will be favored over factor models that are not also fitted in an exploratory fashion (a small simulation sketch of this appears below). At the same time, the more their fit depends on estimating parameters for what ought to be treated as unmodeled complexity, the less replicable they will be relative to models of more substantive dimensions. Network models have shown limited robustness, and this could help explain that.
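The following is a minimal, illustrative simulation sketch of that point. Everything in it (the population model, object names, and values) is an assumption chosen for illustration, not part of the analysis above: data are generated from a one-factor model with two small residual correlations, a theoretically specified one-factor CFA is fitted alongside an exploratory Glasso network, and the network, being exploratory, is free to absorb the unmodeled complexity.

# lavaan and qgraph are already loaded via p_load at the top of the document
set.seed(1)

popModel <- '
g  =~ 0.7*x1 + 0.7*x2 + 0.7*x3 + 0.7*x4 + 0.7*x5 + 0.7*x6
g  ~~ 1*g
x1 ~~ 0.51*x1
x2 ~~ 0.51*x2
x3 ~~ 0.51*x3
x4 ~~ 0.51*x4
x5 ~~ 0.51*x5
x6 ~~ 0.51*x6
x1 ~~ 0.15*x2
x3 ~~ 0.15*x4'

simDat <- simulateData(popModel, sample.nobs = 5000)

# Theoretically specified one-factor CFA: penalized for the two residual
# correlations it leaves unmodeled
simCFA <- cfa('g =~ x1 + x2 + x3 + x4 + x5 + x6', data = simDat)
fitMeasures(simCFA, c("chisq", "df", "aic", "bic"))

# Exploratory Glasso network on the same data: the edges it retains typically
# soak up the unmodeled complexity, so its fit should look better
simNet <- EBICglasso(cor(simDat), n = nrow(simDat))
ggmFit(simNet, cor(simDat), sampleSize = nrow(simDat))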

Figuring out which model is more correct will require going beyond cross-sectional model-fit comparisons; we'll need theory that generates predictions we can go out and test head-to-head. If those comparisons never come, we'll simply never know which model is more right. But if I had to bet based on parsimony*, it looks like the factor model wins.

References

Meredith, W. (1964). Notes on factorial invariance. Psychometrika, 29(2), 177–185. https://doi.org/10.1007/BF02289699

Meredith, W., & Millsap, R. E. (1992). On the misuse of manifest variables in the detection of measurement bias. Psychometrika, 57(2), 289–311. https://doi.org/10.1007/BF02294510

Millsap, R. E., & Meredith, W. (1992). Inferential Conditions in the Statistical Detection of Measurement Bias. Applied Psychological Measurement, 16(4), 389–402. https://doi.org/10.1177/014662169201600411

Meredith, W. (1993). Measurement invariance, factor analysis and factorial invariance. Psychometrika, 58(4), 525–543. https://doi.org/10.1007/BF02294825

Knyspel, J., & Plomin, R. (2024). Comparing factor and network models of cognitive abilities using twin data. Intelligence, 104, 101833. https://doi.org/10.1016/j.intell.2024.101833

Regarding Knyspel and Plomin's study, it is bound to be misinterpreted. The authors have not yet provided their code (their preregistration says they will, so presumably it's just a matter of time), nor have they provided their full model fits. Their reported fits can only be right if they misfit their models, that is, if they neither decomposed the residual variances of the indicators in their factor models nor allowed any of those residuals to correlate. If so, the fits of the common and independent pathway models were unnecessarily penalized relative to the network models, because the residual variances of the indicators are almost certainly not entirely unsystematic. In the most extreme case, failing to model any residual variance means that, if g does not mediate all genetic and environmental effects, then a model correctly depicting g as mediating the common genetic and environmental effects will nonetheless appear to fit badly. I assume this is what happened, because the factor models fit far worse than seems plausible if the variance decomposition was done correctly, speaking as someone who has done more than three dozen common and independent pathway model comparisons for a publication in review.

To grasp this better, consider supplementary tables 6a and 6b, the factor loadings and variance decomposition of g in the one-factor common pathway model. The loadings of the A, C, and E factors on g were .79, .54, and .31. The total variance explained in each of the indicators that we know of (with rounding) was simply too low for the model to fit well if the residual variances were kept constrained or systematic influences were otherwise left unmodeled. For example, in that model, just 37% of the "Rav" variable was explained, because the only loading on it was from g, and it was .61, and .61^2 = .37. The amount explained by g-mediated A, C, and E amounts to (.79*.61)^2 = 23.2%, (.54*.61)^2 = 10.9%, and (.31*.61)^2 = 3.6%, but we have no idea what the unmediated A, C, and E values from this model were. To piece those together, we have to look at the other models. We do know from the other factor models and the network models that there was additional unmodeled heritability (and unmodeled C and E variance), and that some of it was coheritability (and coenvironmentality), so it is highly probable that this model was misspecified, thus explaining its poor fit (and leaving the network-based ACE estimates as partialed estimates that are incomparable to estimates from poorly fit factor models).
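The arithmetic can be checked in a couple of lines, using only the rounded loadings quoted above; this is not a reanalysis of their data.

g_on_Rav <- 0.61                              # loading of Rav on g in the one-factor common pathway model
ACE_on_g <- c(A = 0.79, C = 0.54, E = 0.31)   # loadings of A, C, and E on g
round((ACE_on_g * g_on_Rav)^2, 3)             # g-mediated A, C, and E variance in Rav: ~.232, .109, .036
sum((ACE_on_g * g_on_Rav)^2)                  # ~.37, i.e., roughly g_on_Rav^2, up to rounding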

Given the other model fitting issues, perhaps this is not surprising.

Colom, R., Rebollo, I., Palacios, A., Juan-Espinosa, M., & Kyllonen, P. C. (2004). Working memory is (almost) perfectly predicted by g. Intelligence, 32(3), 277–296. https://doi.org/10.1016/j.intell.2003.12.002

Malanchini, M., Rimfeld, K., Shakeshaft, N. G., McMillan, A., Schofield, K. L., Rodic, M., Rossi, V., Kovas, Y., Dale, P. S., Tucker-Drob, E. M., & Plomin, R. (2020). Evidence for a unitary structure of spatial cognition beyond general intelligence. Npj Science of Learning, 5(1), 1–13. https://doi.org/10.1038/s41539-020-0067-8

This study is bizarre, because it claims a unitary structure of spatial cognition yet never once checked whether an independent pathway model was supported.

Procopio, F., Zhou, Q., Wang, Z., Gidziela, A., Rimfeld, K., Malanchini, M., & Plomin, R. (2022). The genetics of specific cognitive abilities. Intelligence, 95, 101689. https://doi.org/10.1016/j.intell.2022.101689

The principal issue with this paper is that it claimed to analyze abilities net of g, but did not look at factors defined to be orthogonal to g or factors residualized of g variance. It thus provided no determinate details about the genetic or environmental determinants of non-g abilities, as these were never distinguished from the influences on g.

Kan, K.-J., van der Maas, H. L. J., & Levine, S. Z. (2019). Extending psychometric network analysis: Empirical evidence against g in favor of mutualism? Intelligence, 73, 52–62. https://doi.org/10.1016/j.intell.2018.12.004

Schmank, C. J., Goring, S. A., Kovacs, K., & Conway, A. R. A. (2019). Psychometric Network Analysis of the Hungarian WAIS. Journal of Intelligence, 7(3), Article 3. https://doi.org/10.3390/jintelligence7030021

Weiss, L. G., Keith, T. Z., Zhu, J., & Chen, H. (2013). WAIS-IV and Clinical Validation of the Four- and Five-Factor Interpretative Approaches. Journal of Psychoeducational Assessment, 31(2), 94–113. https://doi.org/10.1177/0734282913478030

Staffaroni, A. M., Eng, M. E., Moses Jr., J. A., Zeiner, H. K., & Wickham, R. E. (2018). Four- and five-factor models of the WAIS-IV in a clinical sample: Variations in indicator configuration and factor correlational structure. Psychological Assessment, 30(5), 693–706. https://doi.org/10.1037/pas0000518

A postscript note: The first person to publicly note this comparison issue was Luis Eduardo Garrido, who did so on Twitter in Eiko Fried’s comment section circa 2019: https://web.archive.org/web/20190306201553/https:/twitter.com/LuisEdGarrido/status/1090977720119513093.

* Some people have argued that the g factor model is less theoretically parsimonious than a network model because it proposes a causal construct, or at least that different forces operate in line with how such a construct looks statistically. This argument makes very little sense, as the alternatives propose complex interactions and developmental processes. Anyone could argue any option is less parsimonious than any other on some sort of theoretical grounds; all you have to do is elaborate someone else's theory further than you elaborate your own. The question of theoretical parsimony can only be decided through the power of theory, that is, by whether a theory predicts phenomena and provides simple explanations for them.

sessionInfo()
## R version 4.4.0 (2024-04-24)
## Platform: aarch64-apple-darwin20
## Running under: macOS Sonoma 14.5
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRblas.0.dylib 
## LAPACK: /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## time zone: America/Chicago
## tzcode source: internal
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] knitr_1.46        qpcR_1.4-1        Matrix_1.7-0      robustbase_0.99-3
##  [5] rgl_1.3.1         minpack.lm_1.2-4  MASS_7.3-60.2     igraph_2.0.3     
##  [9] qgraph_1.9.8      psych_2.4.3       lavaan_0.6-17     pacman_0.5.1     
## 
## loaded via a namespace (and not attached):
##  [1] gtable_0.3.5         xfun_0.43            bslib_0.7.0         
##  [4] ggplot2_3.5.1        htmlwidgets_1.6.4    lattice_0.22-6      
##  [7] quadprog_1.5-8       vctrs_0.6.5          tools_4.4.0         
## [10] generics_0.1.3       stats4_4.4.0         parallel_4.4.0      
## [13] tibble_3.2.1         fansi_1.0.6          highr_0.10          
## [16] DEoptimR_1.1-3       cluster_2.1.6        pkgconfig_2.0.3     
## [19] data.table_1.15.4    checkmate_2.3.1      lifecycle_1.0.4     
## [22] GPArotation_2024.3-1 compiler_4.4.0       stringr_1.5.1       
## [25] munsell_0.5.1        mnormt_2.1.1         htmltools_0.5.8.1   
## [28] glasso_1.11          sass_0.4.9           fdrtool_1.2.17      
## [31] yaml_2.3.8           htmlTable_2.4.2      Formula_1.2-5       
## [34] pillar_1.9.0         jquerylib_0.1.4      cachem_1.0.8        
## [37] Hmisc_5.1-2          rpart_4.1.23         abind_1.4-5         
## [40] nlme_3.1-164         gtools_3.9.5         tidyselect_1.2.1    
## [43] digest_0.6.35        stringi_1.8.4        dplyr_1.1.4         
## [46] reshape2_1.4.4       fastmap_1.1.1        grid_4.4.0          
## [49] colorspace_2.1-0     cli_3.6.2            magrittr_2.0.3      
## [52] base64enc_0.1-3      utf8_1.2.4           pbivnorm_0.6.0      
## [55] foreign_0.8-86       corpcor_1.6.10       scales_1.3.0        
## [58] backports_1.4.1      rmarkdown_2.26       jpeg_0.1-10         
## [61] nnet_7.3-19          gridExtra_2.3        png_0.1-8           
## [64] pbapply_1.7-2        evaluate_0.23        rlang_1.1.3         
## [67] Rcpp_1.0.12          glue_1.7.0           rstudioapi_0.16.0   
## [70] jsonlite_1.8.8       R6_2.5.1             plyr_1.8.9