Setup

Packages

library(pacman)
p_load(rootSolve, nlshrink, Matrix, dplyr, psych, ggplot2, EGAnet, lavaan, egg, qpcR, semPlot)

Functions

#Automatic Kaiser-Guttman Test

KGA <- function(x){
  #Count how many eigenvalues of a psych factor-analysis object meet the Kaiser-Guttman criterion (>= 1)
  SL = x$e.values
  sum(SL >= 1)}

#Automatic Warne & Larsen (2014) Kaiser-Guttman revision

MAPA <- function(x, a = 0.05, s = 2, d = 3){
  #Normal-theory confidence interval around the eigenvalue of the smallest extracted factor of a psych object
  LE = x$e.values[x$factors]
  LECIL = LE - abs(qnorm(a/s) * sqrt((2*LE^2)/(x$n.obs)))
  LECIU = LE + abs(qnorm(a/s) * sqrt((2*LE^2)/(x$n.obs)))
  DEC <- ifelse(LECIL < 1, "dropped.", "retained.")
  cat(paste("The smallest factor had an eigenvalue of", round(LE, d), "with a confidence interval of", round(LECIL, d), "to", round(LECIU, d), "so the factor should be", DEC))}

#Manual Warne & Larsen (2014) Kaiser-Guttman revision

MAPM <- function(x, a = 0.05, s = 2, d = 3, n){
  #Same confidence interval as MAPA, but for a hand-supplied eigenvalue x and sample size n
  LECIL = x - abs(qnorm(a/s) * sqrt((2*x^2)/n))
  LECIU = x + abs(qnorm(a/s) * sqrt((2*x^2)/n))
  DEC <- ifelse(LECIL < 1, "dropped.", "retained.")
  cat(paste("The smallest factor had an eigenvalue of", round(x, d), "with a confidence interval of", round(LECIL, d), "to", round(LECIU, d), "so the factor should be", DEC))}

#Automatic Proportion of the Total Variance

PTV <- function(x){
  #Proportion of the total variance associated with each eigenvalue
  TV = x$e.values/length(x$e.values)
  return(TV)}

#Tucker's Congruence Coefficient 

CONGO <- function(F1, F2) {
  #Tucker's congruence coefficient between two loading vectors
  PHI = sum(F1*F2) / sqrt(sum(F1^2)*sum(F2^2))
  return(PHI)}

#Proportion of the Total Variance Explained by a Loading Vector

TVF <- function(x){
  #Mean squared loading = proportion of the total variance explained by the factor
  OUT <- sum((x^2)/length(x))
  return(OUT)}
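
For reference, the CONGO function above implements Tucker's congruence coefficient between two loading vectors in its standard form:

\[\phi = \frac{\sum_i \lambda_{1i}\lambda_{2i}}{\sqrt{\sum_i \lambda_{1i}^2 \sum_i \lambda_{2i}^2}}\]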

The formula for eigenvalue confidence intervals is

\[\lambda_i \pm z_{(1-\frac{\alpha}{2})} \times \left ( \sqrt{\frac{2\lambda_i^2}{n}} \right )\]
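
As a quick check that the manual function applies this formula as intended, it can be handed a made-up smallest eigenvalue along with the sample size used below (the value of 1.05 is purely illustrative and comes from no dataset):

#Example: a made-up smallest eigenvalue of 1.05 with n = 780
MAPM(x = 1.05, n = 780)

## The smallest factor had an eigenvalue of 1.05 with a confidence interval of 0.946 to 1.154 so the factor should be dropped.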

Data

#Matrix

lowerSLBF <-'
1                                                   
0.52    1                                               
0.36    0.29    1                                           
0.46    0.36    0.27    1                                       
0.61    0.49    0.41    0.39    1                                   
0.3 0.29    0.22    0.18    0.31    1                               
0.4 0.33    0.27    0.26    0.4 0.33    1                           
0.79    0.45    0.33    0.45    0.53    0.27    0.33    1                       
0.4 0.29    0.13    0.22    0.33    0.38    0.28    0.38    1                   
0.21    0.22    0.14    0.14    0.21    0.2 0.12    0.19    0.11    1               
0.32    0.24    0.17    0.37    0.27    0.35    0.24    0.31    0.29    0.11    1           
0.48    0.44    0.38    0.33    0.58    0.3 0.35    0.42    0.24    0.2 0.24    1       
0.24    0.22    0.15    0.2 0.24    0.53    0.18    0.24    0.35    0.21    0.44    0.21    1   
0.4 0.29    0.18    0.39    0.4 0.25    0.39    0.37    0.28    0.1 0.26    0.34    0.13    1'

#Sample size

nSLBF <- 780

SLBF.cor = getCov(lowerSLBF, names = c("VC", "VAL", "SPAR", "SBL", "CONF", "VISM", "NUMR", "GINF", "RETF", "PICR", "AUDA", "ANSY", "DESP", "MEWO"))

Rationale

In 1931, Thurstone introduced multiple factor analysis. With this invention people began to propose that Spearman's g was vanquished because they could recover very simple structure with many orthogonal factors. Later, it was found that obliquely rotating these factors usually produced a better fit and, atop the resulting factor covariance matrix, a higher-order g factor tended to fit quite well. This g factor was interpreted in much the same way Spearman's earlier tetrad criterion-based unified (i.e., first- or lower-order) g factor was (Thompson and Thurstone discussed it earlier and Wherry claimed to have discovered it independently with Winer). In 1957, Schmid & Leiman introduced a simple means to obtain the direct g loadings and the loadings of the group factors separate from g from the higher-order model parameters. To obtain the g loadings, simply follow path-tracing rules:

\[\lambda_g = \Gamma_g \times \lambda_{non-g}\]

And to obtain the non-g loading, multiply the group factor's loading by the square root of its variance residualized of g, like so:

\[\epsilon\lambda_{non-g} = \sqrt{1-\Gamma_g^2} \times \lambda_{non-g}\]
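
To make the path tracing concrete, here is a minimal sketch with made-up higher-order parameters (a first-order loading of 0.7 and a factor-on-g loading of 0.8; neither value comes from any dataset used here):

#Schmid-Leiman transformation by hand with hypothetical values
L_first <- 0.7                                  #indicator loading on its group factor
Gamma_g <- 0.8                                  #group factor loading on g
g_loading <- Gamma_g * L_first                  #direct g loading: 0.56
group_loading <- sqrt(1 - Gamma_g^2) * L_first  #residualized group factor loading: 0.42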

Shortly after Thurstone's introduction of multiple factor analysis, Holzinger & Swineford, Eysenck, and Spearman each independently invented methods to estimate the g factor and group factors as first-order factors identified by forcing them to orthogonality. The major initial work on this method was conducted by Holzinger & Swineford. This method has a notable advantage over the Schmid-Leiman transformation: to the extent that the shared variance among the factor indicators is forced into the higher-order format, the higher-order model may show misfit that artefactually alters many research-relevant model parameters. The most common example of this type of misfit involves the proportionality constraint, whereby the higher-order model suffers as a description of the data to the extent that the ratio of the variance in a set of indicators due to their group factor to the variance attributable to g is not constant across those indicators. These methods do not usually differ much in their results, as the misfit in favor of the bifactor model is not typically that considerable. Both types of method for obtaining the g and group factor loadings tend to produce the same qualitative result: cognitive batteries are frequently only interpretable at the level of the general factor and not in terms of specific abilities.
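
As a rough, self-contained sketch of how the two formats can be compared directly, one can simulate data from a small hypothetical bifactor structure and fit both models to it (the nine indicators Y1-Y9 and all population values below are invented for illustration and have nothing to do with the battery analyzed later):

#Hypothetical population model: a g factor plus three orthogonal group factors
TOY <- '
g  =~ 0.6*Y1 + 0.6*Y2 + 0.6*Y3 + 0.6*Y4 + 0.6*Y5 + 0.6*Y6 + 0.6*Y7 + 0.6*Y8 + 0.6*Y9
G1 =~ 0.4*Y1 + 0.4*Y2 + 0.4*Y3
G2 =~ 0.4*Y4 + 0.4*Y5 + 0.4*Y6
G3 =~ 0.4*Y7 + 0.4*Y8 + 0.4*Y9'
TOYDat <- simulateData(TOY, sample.nobs = 780, orthogonal = TRUE)

HO <- '
G1 =~ Y1 + Y2 + Y3
G2 =~ Y4 + Y5 + Y6
G3 =~ Y7 + Y8 + Y9
g =~ G1 + G2 + G3'

BF <- '
G1 =~ Y1 + Y2 + Y3
G2 =~ Y4 + Y5 + Y6
G3 =~ Y7 + Y8 + Y9
g =~ Y1 + Y2 + Y3 + Y4 + Y5 + Y6 + Y7 + Y8 + Y9'

HOFit <- cfa(HO, data = TOYDat)
BFFit <- cfa(BF, data = TOYDat, orthogonal = TRUE)
anova(HOFit, BFFit) #the higher-order model is nested within the bifactor, so a likelihood ratio test is commonly used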

Decker et al. (2020) took issue with this general conclusion and believed they gave reasons to disbelieve it. Unfortunately, all they did was commit themselves to a now 89-year-old error and present numerous reasons to balk at the idea that they could be experts. Stated briefly, this error is the belief that the ability to model a battery without a g implies that g, or something like it, is absent or unnecessary.

The Wrong Model of Cognition?

It is difficult to make sense of the bifactor model (Hood, 2010). This does not mean that the bifactor is an impossible model, nor does it even make it difficult to comprehend a possible bifactor description of intelligence (Beaujean, 2015). In fact, though it has been repeatedly demonstrated that the bifactor model can be statistically preferable in cases where the true model is known to be otherwise, this does not mean that the bifactor is an incorrect model. This remains to be assessed. One method to assess it would be to fit common pathway models, a form of etiological model for common factors. These have been conducted several times, but always for unified or higher-order models (e.g., Shikishima et al., 2009; Panizzon et al., 2014; for more on why g seems likely beyond factor models, see Jensen, 1987); many more have been conducted and found to fit data well, but they have not tested how independent pathway models fare in comparison. The examples which have, however, have found support for common pathways, supporting g at the etiological level. Unfortunately, even these tests for g did not test which form of g was more appropriate, bifactor or higher-order. Simulations have supported the ability of these models to differentiate factor structures under realistic conditions, but they did not assess whether there was a statistical advantage in favor of bifactor common pathway models compared to first- or higher-order ones. I will be conducting these shortly; at present, the possibility remains that these can be used to help assess which g is more appropriate if there is to be a g - and thus a corresponding common pathway model (Franic et al., 2013) - at all. In any case, the use of the bifactor model does not actually require taking a realist stance with regard to g or the division between g and group factors. In fact, the bifactor can be used to assess the dimensionality of a well-defined and identified battery and will, often enough, be relatively unbiased in doing so. Simulating matrices with a known structure based on an empirical one is one way to estimate the effects of misfit, if one really believes this is likely to affect their data. Even more simply, the results of the bifactor and higher-order model can be compared. In well-constructed tests like the Woodcock-Johnson, they will rarely disagree. On similar grounds to the common pathway modeling suggestion, longitudinal factor models tend to support g more so than group factors when using higher-order or unified factor models (I am not aware of good longitudinal bifactor tests). If it can be shown that the bifactor is more strongly supported than the higher-order model across time, this would offer relative support for it. The same may be true for testing invariance across groups, provided the bifactor does not model away bias (as some have suggested but not, to my knowledge, yet tested).

One complaint levied against the bifactor model is that it unduly depletes the variance of orthogonal factors. This may or may not be the case. When large batteries are used, group factors are no less depleted of their reliable variance despite greater breadth, but the predictive validity of these orthogonal factors for various criteria still tends to be greater than the validity of the residualized group factors from a higher-order model regardless of the potential for variance depletion (in gold-standard batteries, it is also almost universally the case that the higher-order group factors constitute less of the total variance and have reduced validity compared to their bifactor equivalents). Techniques for assessing the number of factors in a battery can be used to help determine whether the bifactor has artefactually sapped the interpretability of group factors in a dataset. Although imperfect, this is a helpful stopgap against underfactoring. The etiological validity of group factors can also be assessed in common pathway models, and in common pathway models featuring a higher-order factor, it must be.

Batteries of very different sizes and breadths of content produce roughly similarly-sized g factors. This could, in principle, occur with a positive manifold derived from correlated group factors. It is, however, strange. Despite the great variety of content that appears in different tests and the variegated identities of the group factors that result, batteries produce g factors of reasonably consistent size, which is suggestive evidence for g's presence. This leads into the next point.

Concurrent validation of a factor or set of factors happens by modeling them in different batteries administered to the same samples and assessing their similarity. Many proposed group factors are present in some but not other batteries, yet batteries are nonetheless universally related. This finding disqualifies a content-based explanation for the positive manifold and it makes many formative explanations unlikely. The more disparate the group factors and the cognitive processes required to perform well on their indicators, the less likely it is that they can "form" a higher-order dimension, summarized by g, shared between batteries. With group factors possessing similar content in two different batteries, each with their own higher-order g (a g only being justified with appreciable unidimensionality apparent in the battery), the relationship between them (the totality of which is not squared to assess similarity) would not represent construction unless the processes, motivations, methods, and inspirations were similar enough; otherwise, a common cause of performance (as a factor shared between batteries or just shared variance) is supported. On the other hand, if a pair of higher-order g factors from putatively unidimensional batteries is still highly related or identical and their group factors do not share common content or processes, a bifactor g is supported (this is debatable) even though a higher-order model is used. This is the case because the variance associated with g, mediated through the group factors, is, as a result, not unique to those (types of) group factors but instead to the g, which appeared in both batteries despite differences in process, content, etc. Process and content similarity is mostly a judgment call at present, involving things like examining what tasks are involved in tests or the transformation of information implied by a given test item, but it can theoretically be verified with neural or behavioral data. To show how a similar g can emerge from extremely different batteries, I'll examine the data from Lim (1988), who administered Raven's Advanced Progressive Matrices (RAPM); the AH4 test, which consisted of 65 verbal and numerical items about directions, verbal opposites, numerical series, verbal analogies, simple arithmetical computations, and synonyms, followed by 65 nonverbal items involving analogies, similarities, subtractions, series, and superimpositions; Shayer's Science Reasoning Tasks, which included three subtests covering volume and heaviness, the pendulum, and equilibrium in a balance; and the Arlin Test of Formal Reasoning, a 32-item multiple-choice test with eight subtests, each measuring a specific formal operational ability, including volume, probability, correlations, combinations, proportions, momentum, mechanical equilibrium, and frames of reference. I will consider the RAPM and AH4 together to be a "psychometric battery" since that is how they're construed; the other tests are Piagetian. The content, knowledge, culture, and process overlap between these sorts of tests is clearly basically nil, so this will be a good test of the generality of g.
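
The CONGO function defined above is the tool for this sort of comparison. As a sketch, given two vectors of g loadings for the same tests estimated within two different batteries (the numbers below are placeholders, not values from Lim, 1988):

#Placeholder g loadings for the same four tests estimated within two different batteries
gBattery1 <- c(0.71, 0.65, 0.58, 0.80)
gBattery2 <- c(0.69, 0.70, 0.55, 0.77)
CONGO(gBattery1, gBattery2) #values of roughly .95 or above are conventionally read as factor identity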

Irresponsible Errors

The first irresponsible error is failing to understand why test researchers recommend interpretation at the level of g. The second is implying the greater correctness of a procedure that makes it impossible to adequately assess bias.

Even if the true causal model underlying cognitive test relationships is a series of correlated group factors giving the appearance of g, this does not militate against the use of interpretability indices for assessing the unidimensionality of factor models; the unidimensionality remains in the factor covariances even if they are not a product of a g factor per se. This unidimensionality appears and affects the interpretation of scores; to suggest otherwise is to offer clinicians bad advice. The track record of attempting to interpret cognitive profiles - as in patterns of differences net of the FSIQ - is bad (Watkins, 2000, 2003; McGill, Dombrowski & Canivez, 2018). If we have unidimensionality - regardless of its source - specific skill estimates (e.g., a summary verbal or nonverbal score) can also simply reflect that. When tests are evaluated and found to be unidimensional, it is not a statement that this is a desired state of affairs; it is a statement of reality: clinicians are not likely to get predictive validity net of the sumscore when the battery appears unidimensional. The more unidimensional the battery, the less informative differences in cognitive profiles (which do exist because no battery is purely unidimensional) are for clinicians and the more likely they are to reflect measurement error and other forms of noise or, just as well, specific abilities whose predictive validity is limited. Whether some disability is characterized by a specific pattern of deficits is another question (and by no means a neglected one), but those sorts of disabilities are relatively rare and cognitive profiles are even more rarely diagnostically useful for them. A much larger source of cognitive profile differences than a lack of unidimensionality is the aforementioned measurement error. Some tests are less reliable than others and, for many of them, their variability is due more to that than to anything meaningful. There is no practical way for clinicians to distinguish these. Scores for specific abilities often fail to index them (Gignac, 2013).

\(\omega_t\) is a measure of total common factor saturation. It tends to be roughly the same in correlated group factor and bifactor models and somewhat lower in higher-order models, which themselves generally explain less of the total variance than bifactor models. The higher-order model also maintains all-positive residual relationships, which leads to tautological support for group factors net of g, even if that support is minor. Usually that support is smaller than in the bifactor model, whose group factors, when they can be estimated (often enough they cannot be because many of the group factor loadings drop to zero), usually have greater predictive validity. The relative omega - the ratio of \(\omega_h\) (the equivalent measure for the general factor) to \(\omega_t\) - is typically a bit higher in the higher-order than the bifactor model since \(\omega_h\) is not very different; transforming unrotated first-order factor structures or correlated group factors doesn't change this unidimensionality. Simply conceptualizing the model as group factors does not, and cannot, obviate this fact unless there really are quite limited correlations among them.
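
For readers who want to see these quantities directly, psych's omega function reports \(\omega_h\), \(\omega_t\), and related statistics; here is a sketch on the correlation matrix loaded above (the choice of four group factors is a placeholder, not a claim about the battery's true dimensionality):

#Omega decomposition; nfactors = 4 is an arbitrary placeholder
OM <- omega(SLBF.cor, nfactors = 4, n.obs = nSLBF)
OM$omega_h                  #general factor saturation
OM$omega.tot                #total common factor saturation
OM$omega_h / OM$omega.tot   #relative omega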

The point of this is not to say that a position with a g is endorsed by the methods; it is to say that the dimensionality is undeniable and the subsequent interpretive consequences are as well. If batteries were relatively less unidimensional this might not be true, but there is no point talking about a different world. People often try to make batteries without a g or with construction philosophies (and facet theories) otherwise deemphasizing it, like the British Ability Scales, but they have invariably ended up with one of two outcomes:

  1. A large g factor remains anyway and the common (and predictive) variance of the battery is essentially unidimensional;
  2. The test is worthless for prediction.

Individuals wishing to deemphasize g should not simply repeat the dictum that a mostly unidimensional matrix can be modeled without one; that line is as tired, boring, and fruitless as it has been for a century. They should (provided they really care) strive to develop a facet theory that really does produce a test which is simultaneously useful and not unidimensional. That this has not yet been done despite ample time and effort expended for that express purpose by incredibly competent people suggests it is far more difficult than the questioners seem to suggest.

One method Decker et al. (2020) mustered as a sort of proof that batteries are underfactored was to claim that oblique factor solutions with different numbers of factors produced different g factors of progressively smaller size as more group factors were brought in. Two problems:

  1. The first factor of an exploratory analysis is not g if you extract multiple (oblique) factors;
  2. The factors produced after the first are not themselves guaranteed to be reliable, which calls into question why the procedure was used in the first place (example with WJ)

The first problem may stem from a misunderstanding about exploratory factor analysis. The theoretical description of g as the first factor of an EFA or the first component of a PCA does not apply if multiple factors are extracted, because the method attempts to explain as much variance as possible with a specified number of factors, which involves, after the first factor is taken out, splitting what could have been much of that factor's variance alone in order to achieve the greatest total variance. If two dimensions are to explain as much as possible, one way to do that may be to take the same dimension from the single-factor case and throw an orthogonal dimension on top, but in an oblique rotation, we try to explain as much variance as possible while allowing the extracted dimensions to be correlated (that is, oblique); when the single-dimension maximization is flummoxed by two oblique dimensions, the variance for g is contained in their covariance, not in the variance explained by one or the other of them, which could, technically, only share the center of the hypercone that has g at its center. When extracting multiple factors, the result is the group factors from which a CFA should commence, not g first and then other factors, unless you are performing a bifactor EFA. That they seem to have concluded that g shrank is rank ignorance and inexcusable. When g is modeled within a CFA with varying numbers of factors, it does not shrink, as it remains in the model; claiming it is reduced when it is not modeled is baffling.
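
To see where g "goes" in an oblique solution, one can factor the correlations among the extracted factors themselves; here is a sketch using the matrix loaded above (extracting four factors is, again, an arbitrary choice for illustration):

#Oblique first-order solution: g is not any single one of these factors
OBL <- fa(SLBF.cor, nfactors = 4, n.obs = nSLBF, rotate = "oblimin")
OBL$Phi                   #the factor intercorrelations carry the general variance
fa(OBL$Phi, nfactors = 1) #a single factor of those intercorrelations recovers a higher-order g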

When overfactoring occurs, the results are not likely to be reliable. The results of extracting between one and seven factors are presented below. The result is also unsurprising: even if we take Decker et al.'s incorrect understanding of what EFA is (i.e., "it first extracts g and then other factors"), the resulting "factors" are indicated by single indicators, cross-loaded, or have low loadings overall. These models obviously cannot be tested because, for example, a factor carrying a sub-one loading on a single indicator in an EFA will have that loading fixed to 1 in a CFA; the reason it appears smaller than one in the EFA is that, for that factor, there are non-zero (and usually clinically worthless and sampling error-derived) loadings on other indicators. The zero non-indicator loading assumption of CFAs is tested and often confirmed by methods like the exploratory bifactor rotation. In some cases, it may not be correct, but these are rare, especially for gold-standard tests. The utility of such factors is basically null and even taking them as factors results in basically the same higher-order g (a bifactor g not being recommended because the identification constraints for overfactored models or in cases with too few indicators can be absurd and unduly disfavor the bifactor).

Decker et al. (2020) also presented a difference in loadings between a unified factor model (which they erroneously called "uni bifactor" - a name which would only be correct if they were two-factor theorists, because the g would be a "bifactor" relative to the unique factor variances in the indicator residuals, hence the "bi-" in "bifactor") and a bifactor model as evidence for g loading instability and overestimation. The key error here is to suggest that solutions affected by psychometric sampling variance represent the "true" g loadings of a battery. They also took these overestimated loadings as proof that the loadings are unstable, which is not really the case for adequate solutions. The higher-order and bifactor solutions in their battery produce practically the same g loadings, explain the same amount of the total variance, and, with different numbers of factors, produce marginally (modestly rather than marginally when far too few group factors are used to let g properly 'funnel' to the top of the hierarchy) stronger g factors the more group factors are modeled, when this is conducted properly. The issue of psychometric sampling variance in loading estimation is important and has been discussed since at least the 1950s. There may have been earlier papers discussing the topic, but I am unaware of them. The idea was of course alluded to as early as the 1920s (perhaps earlier, but again, I am unaware of instances of this; one could construe Thomson's 1916 article as suggesting concern with this, but that's highly debatable). The gist is that the unified factor model will overestimate the g loadings because the loadings will increase to the degree that tests in the battery share content. In bifactor and higher-order models, the loadings will be closer to the "true" loadings of a battery because they avoid this psychometric sampling error to a greater degree. If the orthogonal group factors in the bifactor model are not well-identified, the inflation found in the unidimensional model can still appear, and in cases where there is a large degree of violation of the proportionality constraint or of the tetrad criterion, the bifactor and higher-order models will be discrepant, but the difference will not often be systematic with respect to the size of the g loadings. With regard to the extent of psychometric sampling error effects, they are usually not massive. For example, the g loading of the Wechsler Adult Intelligence Scale (WAIS) vocabulary test is 0.87 among the eleven other subtests; it is 0.82 when it is estimated among the nonverbal subtests and 0.92 when estimated among the verbal subtests (Jensen, 2001). Thorndike (1987), Floyd et al. (2009), and Major et al. (2012) have reported manifestly similar results, but we can test this with the data from Lim (1988). If the psychometric sampling error effect is considerable at the group factor level, as Decker et al. (2020) appear to suggest it should be, then combining the g factors from the batteries used in that study should lead to a shift in the loadings. It does not.
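
A rough way to gauge the size of this effect in the matrix used here is to compare a variable's g loading when it is factored among all fourteen tests versus among an arbitrary subset (the subset below is chosen purely for illustration):

#g loading of VC among all fourteen tests versus among an arbitrary seven-test subset
SUB <- c("VC", "VAL", "SPAR", "SBL", "CONF", "VISM", "NUMR")
fa(SLBF.cor, nfactors = 1, n.obs = nSLBF)$loadings["VC", 1]
fa(SLBF.cor[SUB, SUB], nfactors = 1, n.obs = nSLBF)$loadings["VC", 1]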

Overfactoring often leads to the necessity - as it did for Decker et al. (2020) - of adding a number of constraints to achieve identification. These constraints are usually arbitrary. If these constrained parameters are biased, the bias will go unnoticed when the test is being assessed for invariance (Yoon & Millsap, 2007). The recommended dimensionality-ignorant procedure of just taking as many factors as possible irrespective of how variables seem to be related is not only crude, it is potentially harmful. Often enough, the effects of these constraints can also propagate and mask or recover real or false bias in an instrument, so it does not suffice to, say, fit a model where all latent variances and an additional loading are constrained for a model with an orthogonal g and seven group factors with two indicators each (this is not recommended for identification reasons) and then to swap said loading. Having to add these constraints frequently leads to bias against the models with them compared to those without them, making, say, a comparison of an extremely constrained bifactor with a correlated group factor model - as Decker et al. (2020) actually did for some reason - pointless. The degree to which the better fit of their correlated group factor model can be explained by improper modeling can be assessed empirically by using normal methods for finding the dimensionality of the battery (or using a theory-informed model) and comparing the same format of model (e.g., bifactor versus correlated group factors) without having to use ridiculous constraints. The inapplicability of their chosen factoring strategy (i.e., just take out seven factors for no reason) is apparent from their diagram for the bifactor model: the Glr loadings weren't even estimated. Moreover, their constraints seem to have assumed that the loadings on the group factors were homogeneous within them, but their correlated group factors model shows this is not the case!

How Many Factors?

Decker et al. (2020) made very brief remarks about determining the number of factors underlying performance in a battery of tests. They assumed, for some reason, that batteries tend to be underfactored. This would not alter the generality/unidimensionality of the battery, so it is curious why they even made this statement. When they made it, they made it in the most exactly incorrect way possible: with regard to the \(\chi^2\) test. This particular method of testing the number of factors will always lead to a greater quantity as sample size increases. Why they said this method underestimates the number of factors, I'm not sure. Methods often overestimate the number of factors. For example, Ruscio & Roche (2012) calculated that the Kaiser-Guttman eigenvalue-greater-than-one rule (the most commonly used and often the sole method for determining the number of factors) overestimated the number of factors in 89.87% of cases in 10,000 simulated datasets and underestimated them in only 1.36%. They found that parallel analysis overestimated in 10.91% of cases and underestimated in 12.67%, minimum average partial (MAP) never overestimated but underestimated in 40.4% of cases, AIC overestimated in 22.44% and underestimated in 4.91%, BIC never overestimated but underestimated in 39.55%, and the \(\chi^2\) test overestimated in 31.29% of cases and underestimated in 9.83%. This last method will obviously tend to underestimate with small samples and to overestimate with large samples. Other analyses have confirmed that the most common method fares incredibly poorly (e.g., Fabrigar et al., 1999; Warne & Larsen, 2014; many more), tending to overfactor by a considerable amount. In intelligence research, the locus of study is the positive manifold, and hence, the number of factors tends to be underestimated by methods like parallel analysis and MAP more often because of the high intercorrelations of items (Keith et al., 2016). But this just reiterates the problem with overfactoring: the data are very unidimensional in this field, so what we get will inevitably be interpretable primarily in general. The use of multiple methods to determine the number of factors is better than using just one anyway, so below, I will estimate the number of factors with many methods to find a more appropriate number for the data Decker et al. used.

Methods

I will use Velicer's MAP, Horn's Parallel Analysis, Warne & Larsen's modified Kaiser-Guttman rule (2014), the original Kaiser-Guttman rule, Golino & Epskamp's Exploratory Graph Analysis (EGA; 2017; Golino & Demetriou, 2017), Cattell's Scree test, Del Giudice's Effective Dimensionality (2020), a \(\chi^2\) test, the Bayesian Information Criterion (BIC/Schwarz' criterion) in its original and sample size-adjusted forms, and Revelle's Very Simple Structure (VSS) to estimate the number of factors underlying the empirical matrix used by Decker et al. (2020) and simulated datasets. The simulated datasets feature:

  1. Merely correlated group factors;
  2. More strongly correlated group factors;
  3. A true higher-order g, and;
  4. A true bifactor g.

I will not be varying the sample sizes and numbers of indicators because time is precious and I don't want to spend it doing a simulation study on this right now. These are clearly not realistic models; they are just quick and simple.

Analysis

Simulation

CGF.sim <- '
F1 =~ X1 + X2
F2 =~ X3 + X4
F3 =~ X5 + X6
F4 =~ X7 + X8
F5 =~ X9 + X10
F6 =~ X11 + X12
F7 =~ X13 + X14

F1 ~~ F2 + F3 + F4 + F5 + F6 + F7
F2 ~~ F3 + F4 + F5 + F6 + F7
F3 ~~ F4 + F5 + F6 + F7
F4 ~~ F5 + F6 + F7
F5 ~~ F6 + F7
F6 ~~ F7'

CGFG.guess <- '
F1 =~ X1 + X2
F2 =~ X3 + X4
F3 =~ X5 + X6
F4 =~ X7 + X8
F5 =~ X9 + X10
F6 =~ X11 + X12
F7 =~ X13 + X14

F1 ~~ 0.3*F2 + 0.3*F3 + 0.3*F4 + 0.3*F5 + 0.3*F6 + 0.3*F7
F2 ~~ 0.3*F3 + 0.3*F4 + 0.3*F5 + 0.3*F6 + 0.3*F7
F3 ~~ 0.3*F4 + 0.3*F5 + 0.3*F6 + 0.3*F7
F4 ~~ 0.3*F5 + 0.3*F6 + 0.3*F7
F5 ~~ 0.3*F6 + 0.3*F7
F6 ~~ 0.3*F7'

CGF4G.guess <- '
F1 =~ X1 + X2 + x3
F2 =~ X4 + X5 + x6
F3 =~ X7 + X8 + x9
F4 =~ x10 + x11 + x12

F1 ~~ 0.3*F2 + 0.3*F3 + 0.3*F4  
F2 ~~ 0.3*F3 + 0.3*F4  
F3 ~~ 0.3*F4'

CGF5G.guess <- '
F1 =~ X1 + X2 + x3
F2 =~ X4 + X5 + x6
F3 =~ X7 + X8 + x9
F4 =~ x10 + x11 + x12
F5 =~ X13 + X14 + x15

F1 ~~ 0.3*F2 + 0.3*F3 + 0.3*F4 + 0.3*F5  
F2 ~~ 0.3*F3 + 0.3*F4 + 0.3*F5  
F3 ~~ 0.3*F4 + 0.3*F5 
F4 ~~ 0.3*F5'

CGF6G.guess <- '
F1 =~ X1 + X2 + x3
F2 =~ X4 + X5 + x6
F3 =~ X7 + X8 + x9
F4 =~ x10 + x11 + x12
F5 =~ X13 + X14 + x15
F6 =~ X16 + X17 + x18

F1 ~~ 0.3*F2 + 0.3*F3 + 0.3*F4 + 0.3*F5 + 0.3*F6 
F2 ~~ 0.3*F3 + 0.3*F4 + 0.3*F5 + 0.3*F6 
F3 ~~ 0.3*F4 + 0.3*F5 + 0.3*F6 
F4 ~~ 0.3*F5 + 0.3*F6
F5 ~~ 0.3*F6'

CGF4.sim <- '
F1 =~ X1 + X2 + x3
F2 =~ X4 + X5 + x6
F3 =~ X7 + X8 + x9
F4 =~ x10 + x11 + x12

F1 ~~ F2 + F3 + F4
F2 ~~ F3 + F4
F3 ~~ F4'

CGF5.sim <- '
F1 =~ X1 + X2 + x3
F2 =~ X4 + X5 + x6
F3 =~ X7 + X8 + x9
F4 =~ x10 + x11 + x12
F5 =~ X13 + X14 + x15

F1 ~~ F2 + F3 + F4 + F5
F2 ~~ F3 + F4 + F5
F3 ~~ F4 + F5
F4 ~~ F5'

CGF6.sim <- '
F1 =~ X1 + X2 + x3
F2 =~ X4 + X5 + x6
F3 =~ X7 + X8 + x9
F4 =~ x10 + x11 + x12
F5 =~ X13 + X14 + x15
F6 =~ X16 + X17 + x18

F1 ~~ F2 + F3 + F4 + F5 + F6 
F2 ~~ F3 + F4 + F5 + F6 
F3 ~~ F4 + F5 + F6 
F4 ~~ F5 + F6
F5 ~~ F6'

HOF.sim <- '
F1 =~ X1 + X2
F2 =~ X3 + X4
F3 =~ X5 + X6
F4 =~ X7 + X8
F5 =~ X9 + X10
F6 =~ X11 + X12
F7 =~ X13 + X14

g =~ F1 + F2 + F3 + F4 + F5 + F6 + F7'

BF.sim <- '
F1 =~ X1 + X2
F2 =~ X3 + X4
F3 =~ X5 + X6
F4 =~ X7 + X8
F5 =~ X9 + X10
F6 =~ X11 + X12
F7 =~ X13 + X14

g =~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9 + X10 + X11 + X12 + X13 + X14'

set.seed(42069)
CGFData <- simulateData(CGF.sim, sample.nobs = 780); CGFGData <- simulateData(CGFG.guess, sample.nobs = 780)
CGF4Data <- simulateData(CGF4.sim, sample.nobs = 780); CGF4GData <- simulateData(CGF4G.guess, sample.nobs = 780)
CGF5Data <- simulateData(CGF5.sim, sample.nobs = 780); CGF5GData <- simulateData(CGF5G.guess, sample.nobs = 780)
CGF6Data <- simulateData(CGF6.sim, sample.nobs = 780); CGF6GData <- simulateData(CGF6G.guess, sample.nobs = 780)
HOFData <- simulateData(HOF.sim, sample.nobs = 780); BFData <- simulateData(BF.sim, sample.nobs = 780)
PAR <- fa.parallel(CGFData)

## Parallel analysis suggests that the number of factors =  7  and the number of components =  7
nfactors(CGFData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.77  with  8  factors
## VSS complexity 2 achieves a maximimum of 0.79  with  8  factors
## The Velicer MAP achieves a minimum of 0.03  with  1  factors 
## Empirical BIC achieves a minimum of  -86.3  with  7  factors
## Sample Size adjusted BIC achieves a minimum of  -37  with  7  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof  chisq
## 1 0.12 0.00 0.027  77 1311.0
## 2 0.22 0.24 0.037  64 1030.6
## 3 0.36 0.36 0.048  52  802.3
## 4 0.47 0.47 0.057  41  594.9
## 5 0.57 0.58 0.065  31  364.3
## 6 0.68 0.68 0.073  22  184.5
## 7 0.77 0.78 0.078  14   11.8
## 8 0.77 0.79 0.107   7    1.9
##                                                                                                                                                                                                                                 prob
## 1 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000011
## 2 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000247193044525155505419106161113518282945733517408371
## 3 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000509157189206790598140495163193008920643478631973266601562500000000000000000000000000000000
## 4 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000222802985248092286002172257575182356958976015448570251464843750000000000000000000000000000000000000000000000000000000000000000
## 5 0.00000000000000000000000000000000000000000000000000000000015295594669406476243285475380417892665718682110309600830078125000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 6 0.00000000000000000000000000116971424393325399012592141767541420449560973793268203735351562500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 7 0.62035140613632933082044473849236965179443359375000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 8 0.96747926739613854252297642233315855264663696289062500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
##   sqresid  fit RMSEA BIC SABIC complex eChisq   SRMR  eCRMS eBIC
## 1    15.3 0.12 0.143 798  1043     1.0 2260.6 0.1262 0.1372 1748
## 2    13.2 0.24 0.139 604   808     1.2 1792.4 0.1124 0.1340 1366
## 3    11.1 0.37 0.136 456   621     1.3 1377.1 0.0985 0.1303 1031
## 4     9.1 0.48 0.132 322   452     1.3 1005.3 0.0842 0.1254  732
## 5     7.3 0.58 0.117 158   256     1.2  614.7 0.0658 0.1127  408
## 6     5.4 0.69 0.097  38   108     1.1  295.2 0.0456 0.0927  149
## 7     3.8 0.78 0.000 -81   -37     1.0    6.9 0.0070 0.0178  -86
## 8     3.6 0.80 0.000 -45   -23     1.1    1.0 0.0027 0.0098  -46
BAA <- fa(CGFData, nfactors = PAR$ncomp)
## Loading required namespace: GPArotation
estimate.ED(CGFData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 12.3
## 
## $n2
## [1] 11.2
## 
## $nInf
## [1] 8.33
## 
## $nC
## [1] 13.8
EGA(CGFData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 7
## 
## Items per Dimension:
##     items dimension
## X7     X7         1
## X8     X8         1
## X1.    X1         2
## X2.    X2         2
## X3.    X3         3
## X4.    X4         3
## X13   X13         4
## X14   X14         4
## X11   X11         5
## X12   X12         5
## X5     X5         6
## X6     X6         6
## X9     X9         7
## X10   X10         7
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.327 with a confidence interval of 1.195 to 1.458 so the factor should be retained.
## [1] 7
PAR <- fa.parallel(CGFGData)

## Parallel analysis suggests that the number of factors =  7  and the number of components =  7
nfactors(CGFGData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of Although the vss.max shows  7  factors, it is probably more reasonable to think about  1  factors
## VSS complexity 2 achieves a maximimum of 0.7  with  8  factors
## The Velicer MAP achieves a minimum of 0.02  with  1  factors 
## Empirical BIC achieves a minimum of  -91.1  with  7  factors
## Sample Size adjusted BIC achieves a minimum of  -44.5  with  7  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof  chisq
## 1 0.49 0.00 0.022  77 1001.1
## 2 0.40 0.56 0.030  64  827.4
## 3 0.44 0.57 0.039  52  642.4
## 4 0.47 0.61 0.048  41  484.6
## 5 0.52 0.63 0.058  31  313.9
## 6 0.56 0.66 0.068  22  157.6
## 7 0.62 0.69 0.077  14    4.2
## 8 0.62 0.70 0.107   7    2.7
##                                                                                                                                                                  prob
## 1 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009
## 2 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000037173693739398285917473890860
## 3 0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000104101445243916267905780886904310023055586498230695724487305
## 4 0.00000000000000000000000000000000000000000000000000000000000000000000000000003711873812872043217926665281680698171840049326419830322265625000000000000000000000000
## 5 0.00000000000000000000000000000000000000000000000158745051914731968834525177580729859982966445386409759521484375000000000000000000000000000000000000000000000000000
## 6 0.00000000000000000000017373675941937084019929538047932737754308618605136871337890625000000000000000000000000000000000000000000000000000000000000000000000000000000
## 7 0.99383605707399857998041170503711327910423278808593750000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 8 0.91378458053790889703549282785388641059398651123046875000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
##   sqresid  fit RMSEA BIC SABIC complex eChisq   SRMR eCRMS eBIC
## 1    11.0 0.49 0.124 488   733     1.0 1302.5 0.0958 0.104  790
## 2     9.6 0.56 0.124 401   604     1.5 1028.2 0.0851 0.101  602
## 3     8.2 0.62 0.121 296   461     1.6  781.1 0.0742 0.098  435
## 4     6.9 0.68 0.118 212   342     1.6  555.6 0.0626 0.093  283
## 5     5.6 0.74 0.108 107   206     1.6  356.4 0.0501 0.086  150
## 6     4.4 0.80 0.089  11    81     1.5  170.5 0.0347 0.070   24
## 7     3.4 0.84 0.000 -89   -45     1.3    2.2 0.0039 0.010  -91
## 8     3.0 0.86 0.000 -44   -22     1.3    1.4 0.0031 0.011  -45
BAA <- fa(CGFGData, nfactors = PAR$nfact)
estimate.ED(CGFGData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 11.3
## 
## $n2
## [1] 8.99
## 
## $nInf
## [1] 4.14
## 
## $nC
## [1] 13.4
EGA(CGFGData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 7
## 
## Items per Dimension:
##     items dimension
## X7     X7         1
## X8     X8         1
## X3.    X3         2
## X4.    X4         2
## X5     X5         3
## X6     X6         3
## X13   X13         4
## X14   X14         4
## X11   X11         5
## X12   X12         5
## X9     X9         6
## X10   X10         6
## X1.    X1         7
## X2.    X2         7
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.057 with a confidence interval of 0.952 to 1.161 so the factor should be dropped.
## [1] 7
PAR <- fa.parallel(CGF4Data)

## Parallel analysis suggests that the number of factors =  4  and the number of components =  4
nfactors(CGF4Data, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of Although the vss.max shows  8  factors, it is probably more reasonable to think about  5  factors
## VSS complexity 2 achieves a maximimum of 0.85  with  7  factors
## The Velicer MAP achieves a minimum of 0.05  with  4  factors 
## Empirical BIC achieves a minimum of  -151  with  4  factors
## Sample Size adjusted BIC achieves a minimum of  -67  with  4  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof        chisq
## 1 0.26 0.00 0.058  54 1711.3913688
## 2 0.46 0.47 0.070  43 1065.5850781
## 3 0.65 0.65 0.066  33  592.0794382
## 4 0.82 0.83 0.046  24   16.6524598
## 5 0.82 0.83 0.068  16    8.4399395
## 6 0.78 0.85 0.100   9    2.9775778
## 7 0.78 0.85 0.134   3    0.9224005
## 8 0.83 0.85 0.199  -2    0.0000073
##                                                                                                                                                                                                                                                                                                                                   prob
## 1 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001
## 2 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000030056537620386141245951150402504481462528929114341735839843750000000000000000000000000000000000000000000000000000000000000000000
## 3 0.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001110738506762030070019152405702556052347063086926937103271484375000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 4 0.8631011975063462848822837258921936154365539550781250000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 5 0.9346777582247028925976906066352967172861099243164062500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 6 0.9651772923811081650313781210570596158504486083984375000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 7 0.8200189653108856946417404287785757333040237426757812500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 8                                                                                                                                                                                                                                                                                                                                   NA
##   sqresid  fit RMSEA  BIC  SABIC complex       eChisq     SRMR eCRMS eBIC
## 1    13.5 0.26  0.20 1352 1523.3     1.0 3311.1226886 0.179330 0.198 2952
## 2     9.7 0.47  0.17  779  915.8     1.1 2047.7137228 0.141026 0.175 1761
## 3     6.2 0.66  0.15  372  477.1     1.1 1001.4205497 0.098622 0.139  782
## 4     3.0 0.83  0.00 -143  -67.0     1.0    8.5731413 0.009125 0.015 -151
## 5     2.9 0.84  0.00  -98  -47.3     1.0    4.3394654 0.006492 0.013 -102
## 6     2.5 0.86  0.00  -57  -28.4     1.1    1.4723400 0.003782 0.010  -58
## 7     2.4 0.87  0.00  -19   -9.5     1.2    0.4670094 0.002130 0.010  -20
## 8     2.4 0.87    NA   NA     NA     1.2    0.0000037 0.000006    NA   NA
BAA <- fa(CGF4Data, nfactors = PAR$nfact)
estimate.ED(CGF4Data)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 9.47
## 
## $n2
## [1] 7.91
## 
## $nInf
## [1] 5.2
## 
## $nC
## [1] 11.5
EGA(CGF4Data, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 4
## 
## Items per Dimension:
##     items dimension
## X4.    X4         1
## X5     X5         1
## x6     x6         1
## X7     X7         2
## X8     X8         2
## x9     x9         2
## X1.    X1         3
## X2.    X2         3
## x3     x3         3
## x10   x10         4
## x11   x11         4
## x12   x12         4
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.747 with a confidence interval of 1.573 to 1.92 so the factor should be retained.
## [1] 4
PAR <- fa.parallel(CGF4GData)

## Parallel analysis suggests that the number of factors =  4  and the number of components =  4
nfactors(CGF4GData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.71  with  4  factors
## VSS complexity 2 achieves a maximimum of 0.79  with  7  factors
## The Velicer MAP achieves a minimum of 0.04  with  1  factors 
## Empirical BIC achieves a minimum of  -144  with  4  factors
## Sample Size adjusted BIC achieves a minimum of  -52.7  with  4  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof        chisq
## 1 0.51 0.00 0.044  54 1256.5202282
## 2 0.55 0.64 0.049  43  817.7204199
## 3 0.64 0.73 0.055  33  446.1319971
## 4 0.71 0.79 0.047  24   30.9265548
## 5 0.67 0.79 0.068  16   13.8150482
## 6 0.67 0.78 0.095   9    2.4661764
## 7 0.67 0.79 0.137   3    0.2341827
## 8 0.67 0.78 0.197  -2    0.0000043
##                                                                                                                                                                                                                                     prob
## 1 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000021
## 2 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000008895765275156673843742760610098230245057493448257446289062500000000000000000000000000
## 3 0.000000000000000000000000000000000000000000000000000000000000000000000000069237340515250085570472804441521930129965767264366149902343750000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 4 0.155911676755469991162428300413012038916349411010742187500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 5 0.612490174829204203632571079651825129985809326171875000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 6 0.981779315966714305474738466728013008832931518554687500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 7 0.971891052861292403797222050343407317996025085449218750000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 8                                                                                                                                                                                                                                     NA
##   sqresid  fit RMSEA  BIC SABIC complex       eChisq      SRMR  eCRMS eBIC
## 1     9.9 0.51 0.169  897  1068     1.0 1914.8021658 0.1363728 0.1508 1555
## 2     7.2 0.64 0.152  531   668     1.3 1133.1842120 0.1049098 0.1300  847
## 3     5.1 0.75 0.127  226   331     1.2  554.7166365 0.0734009 0.1038  335
## 4     3.1 0.85 0.019 -129   -53     1.1   15.8848933 0.0124210 0.0206 -144
## 5     2.7 0.87 0.000  -93   -42     1.2    7.0094616 0.0082510 0.0168 -100
## 6     2.5 0.88 0.000  -57   -29     1.4    1.2331268 0.0034607 0.0094  -59
## 7     2.4 0.88 0.000  -20   -10     1.3    0.1134435 0.0010497 0.0049  -20
## 8     2.2 0.89    NA   NA    NA     1.4    0.0000021 0.0000045     NA   NA
BAA <- fa(CGF4GData, nfactors = PAR$nfact)
estimate.ED(CGF4GData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 9.11
## 
## $n2
## [1] 7.08
## 
## $nInf
## [1] 3.61
## 
## $nC
## [1] 11.3
EGA(CGF4GData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 4
## 
## Items per Dimension:
##     items dimension
## X7     X7         1
## X8     X8         1
## x9     x9         1
## X1.    X1         2
## X2.    X2         2
## x3     x3         2
## x10   x10         3
## x11   x11         3
## x12   x12         3
## X4.    X4         4
## X5     X5         4
## x6     x6         4
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.413 with a confidence interval of 1.273 to 1.553 so the factor should be retained.
## [1] 4
PAR <- fa.parallel(CGF5Data)

## Parallel analysis suggests that the number of factors =  5  and the number of components =  5
nfactors(CGF5Data, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.83  with  5  factors
## VSS complexity 2 achieves a maximimum of 0.86  with  8  factors
## The Velicer MAP achieves a minimum of 0.04  with  5  factors 
## Empirical BIC achieves a minimum of  -249  with  5  factors
## Sample Size adjusted BIC achieves a minimum of  -103  with  5  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof  chisq
## 1 0.18 0.00 0.047  90 2237.7
## 2 0.35 0.36 0.055  76 1674.8
## 3 0.51 0.52 0.062  63 1119.8
## 4 0.67 0.68 0.050  51  594.2
## 5 0.83 0.84 0.037  40   35.9
## 6 0.79 0.85 0.050  30   17.9
## 7 0.79 0.85 0.065  21    7.9
## 8 0.79 0.86 0.088  13    3.6
##                                                                                                                                                                                                                                                                                                             prob
## 1 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 2 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000023
## 3 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000320192588190703439311388112109568737650988623499870300292968750000000000000000000000000000000000000000000000
## 4 0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001256639161762864455719294898283777683900552801787853240966796875000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 5 0.655504874061826559028531846706755459308624267578125000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 6 0.960040592427580108392248803284019231796264648437500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 7 0.995467703286461613743085763417184352874755859375000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
## 8 0.994581857899415111567975600337376818060874938964843750000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
##   sqresid  fit RMSEA  BIC SABIC complex eChisq   SRMR eCRMS eBIC
## 1    18.7 0.18  0.17 1638  1924     1.0 4775.2 0.1707 0.184 4176
## 2    14.7 0.36  0.16 1169  1410     1.3 3481.2 0.1458 0.171 2975
## 3    10.9 0.52  0.15  700   900     1.3 2291.2 0.1183 0.153 1872
## 4     7.3 0.68  0.12  255   417     1.1 1164.6 0.0843 0.121  825
## 5     3.6 0.84  0.00 -230  -103     1.0   17.6 0.0104 0.017 -249
## 6     3.3 0.86  0.00 -182   -87     1.1    8.5 0.0072 0.013 -191
## 7     3.1 0.86  0.00 -132   -65     1.1    3.5 0.0046 0.010 -136
## 8     2.9 0.87  0.00  -83   -42     1.2    1.6 0.0032 0.009  -85
BAA <- fa(CGF5Data, nfactors = PAR$nfact)
estimate.ED(CGF5Data)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 11.8
## 
## $n2
## [1] 9.82
## 
## $nInf
## [1] 6.75
## 
## $nC
## [1] 14.5
EGA(CGF5Data, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 5
## 
## Items per Dimension:
##     items dimension
## X13   X13         1
## X14   X14         1
## x15   x15         1
## X1.    X1         2
## X2.    X2         2
## x3     x3         2
## x10   x10         3
## x11   x11         3
## x12   x12         3
## X7     X7         4
## X8     X8         4
## x9     x9         4
## X4.    X4         5
## X5     X5         5
## x6     x6         5
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.886 with a confidence interval of 1.699 to 2.073 so the factor should be retained.
## [1] 5
PAR <- fa.parallel(CGF5GData)

## Parallel analysis suggests that the number of factors =  5  and the number of components =  5
nfactors(CGF5GData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of Although the vss.max shows  6  factors, it is probably more reasonable to think about  1  factors
## VSS complexity 2 achieves a maximimum of 0.77  with  5  factors
## The Velicer MAP achieves a minimum of 0.03  with  1  factors 
## Empirical BIC achieves a minimum of  -239  with  5  factors
## Sample Size adjusted BIC achieves a minimum of  -83.4  with  5  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof  chisq
## 1 0.49 0.00 0.034  90 1739.4
## 2 0.48 0.59 0.039  76 1280.2
## 3 0.55 0.68 0.044  63  891.3
## 4 0.59 0.75 0.043  51  497.9
## 5 0.70 0.77 0.037  40   56.0
## 6 0.70 0.76 0.050  30   37.0
## 7 0.67 0.76 0.069  21   23.6
## 8 0.64 0.76 0.090  13    9.8
##     prob
## 1 < .001
## 2 < .001
## 3 < .001
## 4 < .001
## 5  0.048
## 6  0.177
## 7  0.313
## 8  0.708
##   sqresid  fit RMSEA  BIC SABIC complex eChisq   SRMR eCRMS eBIC
## 1    13.4 0.49 0.153 1140  1426     1.0 2745.0 0.1295 0.140 2146
## 2    10.7 0.59 0.143  774  1015     1.4 1937.0 0.1087 0.128 1431
## 3     8.1 0.69 0.130  472   672     1.4 1193.7 0.0854 0.110  774
## 4     5.6 0.78 0.106  158   320     1.4  528.2 0.0568 0.081  189
## 5     3.8 0.86 0.023 -210   -83     1.1   27.1 0.0129 0.021 -239
## 6     3.6 0.86 0.017 -163   -67     1.2   17.3 0.0103 0.019 -182
## 7     3.3 0.87 0.013 -116   -50     1.3   11.0 0.0082 0.018 -129
## 8     3.0 0.89 0.000  -77   -35     1.3    4.6 0.0053 0.015  -82
BAA <- fa(CGF5GData, nfactors = PAR$nfact)
estimate.ED(CGF5GData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 11.3
## 
## $n2
## [1] 8.62
## 
## $nInf
## [1] 4.11
## 
## $nC
## [1] 14.3
EGA(CGF5GData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 5
## 
## Items per Dimension:
##     items dimension
## X4.    X4         1
## X5     X5         1
## x6     x6         1
## X7     X7         2
## X8     X8         2
## x9     x9         2
## x10   x10         3
## x11   x11         3
## x12   x12         3
## X1.    X1         4
## X2.    X2         4
## x3     x3         4
## X13   X13         5
## X14   X14         5
## x15   x15         5
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.342 with a confidence interval of 1.209 to 1.475 so the factor should be retained.
## [1] 5
PAR <- fa.parallel(CGF6Data)

## Parallel analysis suggests that the number of factors =  6  and the number of components =  6
nfactors(CGF6Data, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.8  with  6  factors
## VSS complexity 2 achieves a maximimum of 0.83  with  7  factors
## The Velicer MAP achieves a minimum of 0.03  with  6  factors 
## Empirical BIC achieves a minimum of  -371  with  6  factors
## Sample Size adjusted BIC achieves a minimum of  -155  with  6  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof chisq
## 1 0.16 0.00 0.034 135  2662
## 2 0.31 0.32 0.038 118  2127
## 3 0.42 0.46 0.040 102  1625
## 4 0.55 0.59 0.041  87  1110
## 5 0.68 0.70 0.039  73   529
## 6 0.80 0.82 0.030  60    54
## 7 0.78 0.83 0.039  48    34
## 8 0.76 0.82 0.050  37    22
##     prob
## 1 < .001
## 2 < .001
## 3 < .001
## 4 < .001
## 5 < .001
## 6  0.690
## 7  0.939
## 8  0.979
##   sqresid  fit RMSEA  BIC SABIC complex eChisq   SRMR eCRMS eBIC
## 1    22.5 0.16 0.155 1763  2191     1.0   5519 0.1521 0.162 4619
## 2    18.3 0.32 0.148 1341  1716     1.3   4183 0.1324 0.151 3397
## 3    14.4 0.46 0.138  946  1270     1.5   2946 0.1111 0.136 2267
## 4    10.8 0.60 0.123  531   807     1.3   1837 0.0877 0.116 1258
## 5     7.6 0.72 0.089   43   275     1.3    859 0.0600 0.087  373
## 6     4.8 0.82 0.000 -345  -155     1.0     29 0.0110 0.018 -371
## 7     4.3 0.84 0.000 -286  -133     1.1     18 0.0086 0.015 -302
## 8     4.1 0.85 0.000 -225  -107     1.1     11 0.0069 0.014 -235
BAA <- fa(CGF6Data, nfactors = PAR$nfact)
estimate.ED(CGF6Data)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 14.3
## 
## $n2
## [1] 12.1
## 
## $nInf
## [1] 7.95
## 
## $nC
## [1] 17.5
EGA(CGF6Data, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 6
## 
## Items per Dimension:
##     items dimension
## X7     X7         1
## X8     X8         1
## x9     x9         1
## X13   X13         2
## X14   X14         2
## x15   x15         2
## X4.    X4         3
## X5     X5         3
## x6     x6         3
## X1.    X1         4
## X2.    X2         4
## x3     x3         4
## x10   x10         5
## x11   x11         5
## x12   x12         5
## X16   X16         6
## X17   X17         6
## x18   x18         6
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.652 with a confidence interval of 1.488 to 1.816 so the factor should be retained.
## [1] 6
PAR <- fa.parallel(CGF6GData)

## Parallel analysis suggests that the number of factors =  6  and the number of components =  6
nfactors(CGF6GData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of Although the vss.max shows  6  factors, it is probably more reasonable to think about  2  factors
## VSS complexity 2 achieves a maximimum of 0.74  with  8  factors
## The Velicer MAP achieves a minimum of 0.03  with  1  factors 
## Empirical BIC achieves a minimum of  -366  with  6  factors
## Sample Size adjusted BIC achieves a minimum of  -143  with  6  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof chisq
## 1 0.49 0.00 0.026 135  2042
## 2 0.50 0.57 0.030 118  1638
## 3 0.48 0.60 0.033 102  1207
## 4 0.56 0.66 0.031  87   811
## 5 0.60 0.70 0.032  73   395
## 6 0.67 0.74 0.031  60    66
## 7 0.64 0.74 0.040  48    48
## 8 0.64 0.74 0.049  37    36
##     prob
## 1 < .001
## 2 < .001
## 3 < .001
## 4 < .001
## 5 < .001
## 6  0.267
## 7  0.469
## 8  0.526
##   sqresid  fit   RMSEA  BIC SABIC complex eChisq   SRMR eCRMS eBIC
## 1    16.3 0.49 0.13455 1143  1571     1.0   3222 0.1162 0.124 2323
## 2    13.8 0.57 0.12851  852  1227     1.2   2524 0.1028 0.117 1738
## 3    11.3 0.65 0.11787  528   852     1.7   1803 0.0869 0.106 1124
## 4     8.9 0.72 0.10331  232   508     1.5   1139 0.0691 0.092  559
## 5     6.7 0.79 0.07513  -92   140     1.5    542 0.0477 0.069   56
## 6     4.7 0.85 0.01159 -333  -143     1.2     34 0.0118 0.019 -366
## 7     4.4 0.86 0.00081 -272  -119     1.2     25 0.0101 0.018 -295
## 8     4.2 0.87 0.00000 -211   -93     1.3     18 0.0086 0.018 -229
BAA <- fa(CGF6GData, nfactors = PAR$nfact)
estimate.ED(CGF6GData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 13.5
## 
## $n2
## [1] 10.2
## 
## $nInf
## [1] 4.47
## 
## $nC
## [1] 17.2
EGA(CGF6GData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 6
## 
## Items per Dimension:
##     items dimension
## X1.    X1         1
## X2.    X2         1
## x3     x3         1
## X7     X7         2
## X8     X8         2
## x9     x9         2
## X4.    X4         3
## X5     X5         3
## x6     x6         3
## X16   X16         4
## X17   X17         4
## x18   x18         4
## X13   X13         5
## X14   X14         5
## x15   x15         5
## x10   x10         6
## x11   x11         6
## x12   x12         6
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 1.451 with a confidence interval of 1.307 to 1.595 so the factor should be retained.
## [1] 6
PAR <- fa.parallel(HOFData)

## Parallel analysis suggests that the number of factors =  7  and the number of components =  1
nfactors(HOFData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.83  with  1  factors
## VSS complexity 2 achieves a maximimum of 0.85  with  2  factors
## The Velicer MAP achieves a minimum of 0.03  with  1  factors 
## Empirical BIC achieves a minimum of  -91.4  with  7  factors
## Sample Size adjusted BIC achieves a minimum of  -40.3  with  7  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof  chisq
## 1 0.83 0.00 0.030  77 1615.0
## 2 0.69 0.85 0.040  64 1308.8
## 3 0.55 0.80 0.049  52 1048.0
## 4 0.48 0.75 0.061  41  802.8
## 5 0.45 0.74 0.069  31  519.3
## 6 0.41 0.59 0.076  22  254.3
## 7 0.42 0.55 0.078  14    8.4
## 8 0.41 0.54 0.109   7    4.0
##     prob
## 1 < .001
## 2 < .001
## 3 < .001
## 4 < .001
## 5 < .001
## 6 < .001
## 7  0.865
## 8  0.785
##   sqresid  fit RMSEA  BIC SABIC complex  eChisq   SRMR  eCRMS  eBIC
## 1     7.1 0.83  0.16 1102  1347     1.0 1181.17 0.0912 0.0992 668.4
## 2     6.0 0.85  0.16  883  1086     1.3  936.98 0.0812 0.0969 510.8
## 3     5.0 0.88  0.16  702   867     1.7  721.49 0.0713 0.0943 375.2
## 4     4.0 0.90  0.15  530   660     1.8  525.49 0.0608 0.0906 252.5
## 5     3.2 0.92  0.14  313   411     1.7  336.68 0.0487 0.0834 130.2
## 6     2.3 0.94  0.12  108   178     1.9  152.65 0.0328 0.0667   6.1
## 7     1.5 0.96  0.00  -85   -40     1.7    1.82 0.0036 0.0091 -91.4
## 8     1.3 0.97  0.00  -43   -20     1.7    0.87 0.0025 0.0089 -45.7
BAA <- fa(HOFData, nfactors = PAR$nfact)
estimate.ED(HOFData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 8.08
## 
## $n2
## [1] 4.83
## 
## $nInf
## [1] 2.4
## 
## $nC
## [1] 12.1
EGA(HOFData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 6
## 
## Items per Dimension:
##     items dimension
## X3.    X3         1
## X4.    X4         1
## X11   X11         1
## X12   X12         1
## X7     X7         2
## X8     X8         2
## X13   X13         3
## X14   X14         3
## X1.    X1         4
## X2.    X2         4
## X9     X9         5
## X10   X10         5
## X5     X5         6
## X6     X6         6
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 0.851 with a confidence interval of 0.767 to 0.936 so the factor should be dropped.
## [1] 4
PAR <- fa.parallel(BFData)

## Parallel analysis suggests that the number of factors =  7  and the number of components =  1
nfactors(BFData, n = 8)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.81  with  1  factors
## VSS complexity 2 achieves a maximimum of 0.84  with  2  factors
## The Velicer MAP achieves a minimum of 0.03  with  1  factors 
## Empirical BIC achieves a minimum of  -90.4  with  7  factors
## Sample Size adjusted BIC achieves a minimum of  -35.8  with  7  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof  chisq
## 1 0.81 0.00 0.030  77 1642.7
## 2 0.68 0.84 0.041  64 1341.0
## 3 0.56 0.80 0.052  52 1100.0
## 4 0.49 0.77 0.061  41  827.5
## 5 0.43 0.66 0.074  31  543.6
## 6 0.43 0.63 0.073  22  285.6
## 7 0.44 0.56 0.078  14   13.0
## 8 0.44 0.56 0.106   7    4.2
##     prob
## 1 < .001
## 2 < .001
## 3 < .001
## 4 < .001
## 5 < .001
## 6 < .001
## 7  0.525
## 8  0.752
##   sqresid  fit RMSEA  BIC SABIC complex  eChisq   SRMR  eCRMS eBIC
## 1     7.3 0.81  0.16 1130  1374     1.0 1230.79 0.0931 0.1012  718
## 2     6.2 0.84  0.16  915  1118     1.3  986.35 0.0834 0.0994  560
## 3     5.3 0.86  0.16  754   919     1.6  786.65 0.0744 0.0985  440
## 4     4.3 0.89  0.16  554   685     1.7  568.51 0.0633 0.0943  295
## 5     3.3 0.92  0.15  337   436     2.0  368.80 0.0510 0.0873  162
## 6     2.4 0.94  0.12  139   209     1.9  173.56 0.0350 0.0711   27
## 7     1.5 0.96  0.00  -80   -36     1.6    2.82 0.0045 0.0114  -90
## 8     1.5 0.96  0.00  -42   -20     1.7    0.97 0.0026 0.0094  -46
BAA <- fa(BFData, nfactors = PAR$nfact)
estimate.ED(BFData)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 8.25
## 
## $n2
## [1] 5.01
## 
## $nInf
## [1] 2.47
## 
## $nC
## [1] 12.2
EGA(BFData, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 6
## 
## Items per Dimension:
##     items dimension
## X9     X9         1
## X10   X10         1
## X1.    X1         2
## X2.    X2         2
## X5     X5         2
## X6     X6         2
## X3.    X3         3
## X4.    X4         3
## X13   X13         4
## X14   X14         4
## X11   X11         5
## X12   X12         5
## X7     X7         6
## X8     X8         6
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 0.892 with a confidence interval of 0.803 to 0.98 so the factor should be dropped.
## [1] 4

Real Data

PAR <- fa.parallel(SLBF.cor, n.obs = nSLBF)

## Parallel analysis suggests that the number of factors =  5  and the number of components =  2
nfactors(SLBF.cor, n = 8, n.obs = nSLBF)

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.78  with  1  factors
## VSS complexity 2 achieves a maximimum of 0.83  with  2  factors
## The Velicer MAP achieves a minimum of 0.02  with  1  factors 
## Empirical BIC achieves a minimum of  -201  with  4  factors
## Sample Size adjusted BIC achieves a minimum of  -75  with  5  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof chisq
## 1 0.78 0.00 0.021  77 779.3
## 2 0.65 0.83 0.022  64 414.2
## 3 0.44 0.77 0.031  52 180.6
## 4 0.40 0.70 0.041  41  95.3
## 5 0.39 0.65 0.056  31  33.0
## 6 0.38 0.62 0.074  22  20.6
## 7 0.35 0.57 0.096  14  10.6
## 8 0.36 0.59 0.134   7   2.8
##     prob
## 1 < .001
## 2 < .001
## 3 < .001
## 4 < .001
## 5  0.370
## 6  0.544
## 7  0.719
## 8  0.903
##   sqresid  fit RMSEA  BIC  SABIC complex eChisq   SRMR eCRMS eBIC
## 1     7.6 0.78 0.108  267 511.09     1.0  735.3 0.0720 0.078  223
## 2     5.8 0.83 0.084  -12 191.22     1.4  260.6 0.0428 0.051 -166
## 3     5.2 0.85 0.056 -166  -0.57     1.8  146.8 0.0322 0.043 -199
## 4     4.6 0.87 0.041 -178 -47.58     2.0   71.8 0.0225 0.033 -201
## 5     4.1 0.88 0.009 -173 -75.02     2.0   20.5 0.0120 0.021 -186
## 6     3.5 0.90 0.000 -126 -56.01     2.1   11.7 0.0091 0.018 -135
## 7     3.3 0.90 0.000  -83 -38.20     2.3    5.3 0.0061 0.016  -88
## 8     3.3 0.90 0.000  -44 -21.59     2.5    1.4 0.0031 0.011  -45
BAA <- fa(SLBF.cor, nfactors = PAR$nfact, n.obs = nSLBF)
estimate.ED(SLBF.cor)
## [1] ED estimated from the correlation matrix; no disattenuation; no small-sample correction.
## $n1
## [1] 9.12
## 
## $n2
## [1] 5.74
## 
## $nInf
## [1] 2.69
## 
## $nC
## [1] 12.6
EGA(SLBF.cor, plot.EGA = T, n = nSLBF)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 2
## 
## Items per Dimension:
##     items dimension
## VIS  VISM         1
## RET  RETF         1
## PIC  PICR         1
## AUD  AUDA         1
## DES  DESP         1
## VC     VC         2
## VAL   VAL         2
## SPA  SPAR         2
## SBL   SBL         2
## CON  CONF         2
## NUM  NUMR         2
## GIN  GINF         2
## ANS  ANSY         2
## MEW  MEWO         2
MAPA(BAA); KGA(BAA)
## The smallest factor had an eigenvalue of 0.845 with a confidence interval of 0.761 to 0.929 so the factor should be dropped.
## [1] 3
NUM <- seq(1, 14, 1); EVS <- BAA$e.values; TVP <- PTV(BAA)
bpdf <- data.frame("NUM" = NUM, "EVS" = EVS, "TVP" = TVP)

EVP <- ggplot(bpdf, aes(x = NUM, y = EVS, fill = as.factor(NUM))) + geom_bar(stat = "identity", show.legend = F) + xlab("Number") + ylab("Eigenvalue") + theme_bw() + scale_fill_hue(c = 30) + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5)) + geom_hline(yintercept = 1, color = "darkgray", linetype = "dashed", size = 1)

VPT <- ggplot(bpdf, aes(x = factor(1), y = TVP, fill = as.factor(NUM))) + geom_bar(stat = "identity", show.legend = F) + ylab("Percentage of Total Variance") + theme_bw() + scale_fill_hue(c = 50) + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5), axis.title.y = element_blank(), axis.text.y = element_blank(), axis.ticks.y = element_blank()) + coord_polar("y")

ggarrange(EVP, VPT, ncol = 2)

Accuracy?

| Method | K - Empirical | K - CGF (Nominal) | K - CGF | K - HOF | K - BF |
|---|---|---|---|---|---|
| Kaiser-Guttman Rule | 3 | 7 | 7 | 4 | 4 |
| Modified Kaiser-Guttman Rule | 2 | 7 | 6 | 1 | 3 |
| Parallel Analysis | 5 | 7 | 7 | 7 | 7 |
| Very Simple Structure - C1 | 1 | 8 | 8 (1D 'max') | 1 | 1 |
| Very Simple Structure - C2 | 2 | 8 | 7 | 2 | 2 |
| Exploratory Graph Analysis | 2 | 7 | 7 | 6 | 6 |
| Schwarz' Criterion | 4 | 7 | 7 | 7 | 7 |
| Effective Dimensionality - \(\eta_1\) | 9.12 | 12.3 | 11.3 | 8.08 | 8.25 |
| Effective Dimensionality - \(\eta_2\) | 5.74 | 11.2 | 8.99 | 4.83 | 5.01 |
| Effective Dimensionality - \(\eta_{\infty}\) | 2.69 | 8.33 | 4.14 | 2.40 | 2.47 |
| Effective Dimensionality - \(\eta_c\) | 12.6 | 13.8 | 13.4 | 12.1 | 12.2 |
| Minimum Average Partial | 1 | 1 | 1 | 1 | 1 |
| Sample Size-adjusted BIC | 5 | 7 | 7 | 7 | 7 |
| \(\chi^2\) | 4 | 6 | 6 | 6 | 6 |

The various methods all produce reasonably sensible results. For example, when the group factors are minimally correlated, the Kaiser-Guttman rule performs admirably. The plots of the correlated group factor data clearly differ from those of typical data or of data with a g, but that only rules out those particular scenarios; scenarios with more strongly correlated group factors (and thus more unidimensionality, with or without a g) can converge to a g-like appearance. None of the methods except the VSS highlighted an eighth factor - a g - which is what we should expect if the matrix is generated in a primarily unidimensional way, regardless of whether g is orthogonal or sits atop a hierarchy of group factors. The VSS pointing to eight factors might have been reasonable had those models contained eight factors, but they had seven; perhaps the correlations among them were construed as an additional factor. The effective dimensionality results are harder to interpret. From some experimenting I have done with this method elsewhere, it tends to treat residual covariances and unmodeled factors (in real data, factors generating residual covariances that cannot be identified are not unheard of) as partial dimensions. Here it seems, in some cases, to have confused covariances among factors with additional dimensions, which makes its performance questionable.
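
For readers unfamiliar with these indices, effective dimensionality summaries of this kind are functions of the normalized eigenvalue spectrum of the correlation matrix (Hill-number-style diversity indices). The sketch below illustrates that general idea only; it is not necessarily the exact set of estimators that estimate.ED reports.

#Illustrative Hill-number-style effective dimensionality from a (positive-definite) correlation matrix
ED.sketch <- function(R){
  EV <- eigen(R, symmetric = TRUE, only.values = TRUE)$values
  P <- EV/sum(EV) #normalized eigenvalues
  list(n1 = exp(-sum(P*log(P))), #order-1 (Shannon entropy) dimensionality
       n2 = 1/sum(P^2),          #order-2 (participation ratio)
       nInf = 1/max(P))}         #order-infinity (dominance of the largest eigenvalue)

ED.sketch(SLBF.cor)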

The empirical data indicated a low to modest number of factors - certainly nowhere near the seven extracted by Decker et al. (2020), which, because they could not be identified without constraints, biased the comparison against the bifactor and higher-order models. Next, I will see which model fits the various simulated datasets best in CFA and use Akaike weights to compare them, since all of them will obviously fit extremely well in terms of approximate fit indices. The g models will not even converge with the weakly intercorrelated group factor data, so I will use the more strongly intercorrelated data.
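
Before fitting those models, a note on the weights themselves: an Akaike weight is each model's relative likelihood, computed from its AIC difference from the best model and normalized over the candidate set. A minimal sketch of that computation (the quantity the akaike.weights call below is used for):

#Akaike weights by hand: AIC differences from the best model, then normalized relative likelihoods
AKW <- function(aics){
  D <- aics - min(aics) #delta AIC for each candidate model
  RL <- exp(-0.5*D)     #relative likelihoods
  RL/sum(RL)}           #weights sum to 1

AKW(c(100, 102, 110)) #hypothetical AICs: the first model receives most of the weight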

Confirmatory Factor Analysis of Simulated Data

#True CGF, strong intercorrelations; weak provides a more g-favorable result
CGCGF.fit <- cfa(CGF.sim, data = CGFGData, std.lv = T, orthogonal = F)
CGHOF.fit <- cfa(HOF.sim, data = CGFGData, std.lv = T, orthogonal = T)
CGBF.fit <- suppressWarnings(cfa(BF.sim, data = CGFGData, std.lv = T, orthogonal = T))

#True HOF
HOCGF.fit <- cfa(CGF.sim, data = HOFData, std.lv = T, orthogonal = F)
HOHOF.fit <- cfa(HOF.sim, data = HOFData, std.lv = T, orthogonal = T)
HOBF.fit <- suppressWarnings(cfa(BF.sim, data = HOFData, std.lv = T, orthogonal = T))

#True BF
BFCGF.fit <- cfa(CGF.sim, data = BFData, std.lv = T, orthogonal = F)
BFHOF.fit <- cfa(HOF.sim, data = BFData, std.lv = T, orthogonal = T)
BFBF.fit <- suppressWarnings(cfa(BF.sim, data = BFData, std.lv = T, orthogonal = T))

round(cbind("Correlated GF" = fitMeasures(CGCGF.fit, FITM),
            "Higher-order" = fitMeasures(CGHOF.fit, FITM),
            "Bifactor" = fitMeasures(CGBF.fit, FITM)), 3)
##                Correlated GF Higher-order  Bifactor
## chisq                 45.382       63.733    63.733
## df                    56.000       70.000    63.000
## npar                  49.000       35.000    42.000
## cfi                    1.000        1.000     1.000
## rmsea                  0.000        0.000     0.004
## rmsea.ci.lower         0.000        0.000     0.000
## rmsea.ci.upper         0.013        0.017     0.022
## aic                36649.150    36639.502 36653.502
## bic                36877.456    36802.577 36849.192
## srmr                   0.017        0.024     0.024
## tli                    1.009        1.004     0.999
round(cbind("Correlated GF" = fitMeasures(HOCGF.fit, FITM),
            "Higher-order" = fitMeasures(HOHOF.fit, FITM),
            "Bifactor" = fitMeasures(HOBF.fit, FITM)), 3)
##                Correlated GF Higher-order  Bifactor
## chisq                 51.317       66.788    66.788
## df                    56.000       70.000    63.000
## npar                  49.000       35.000    42.000
## cfi                    1.000        1.000     0.999
## rmsea                  0.000        0.000     0.009
## rmsea.ci.lower         0.000        0.000     0.000
## rmsea.ci.upper         0.019        0.019     0.024
## aic                38418.220    38405.691 38419.691
## bic                38646.525    38568.766 38615.381
## srmr                   0.013        0.019     0.019
## tli                    1.002        1.001     0.999
round(cbind("Correlated GF" = fitMeasures(BFCGF.fit, FITM),
            "Higher-order" = fitMeasures(BFHOF.fit, FITM),
            "Bifactor" = fitMeasures(BFBF.fit, FITM)), 3)
##                Correlated GF Higher-order  Bifactor
## chisq                 64.601       73.729    73.729
## df                    56.000       70.000    63.000
## npar                  49.000       35.000    42.000
## cfi                    0.998        0.999     0.998
## rmsea                  0.014        0.008     0.015
## rmsea.ci.lower         0.000        0.000     0.000
## rmsea.ci.upper         0.027        0.023     0.027
## aic                38425.726    38406.854 38420.854
## bic                38654.031    38569.929 38616.544
## srmr                   0.014        0.017     0.017
## tli                    0.997        0.999     0.997
AICs <- c(fitMeasures(CGCGF.fit, "aic"), fitMeasures(CGHOF.fit, "aic"), fitMeasures(CGBF.fit, "aic"))
AICweights <- akaike.weights(AICs)$weights

Mods <- data.frame(Model = factor(rep(c("Correlated GF", "Higher-order", "Bifactor"))),
                   CRIT = factor(rep('AIC', each = 3)),
                   values = AICweights)

CGFB <- ggplot(Mods, aes(Model, values, fill = Model)) + geom_bar(stat = "identity", position = "dodge", show.legend = F) + coord_cartesian(ylim = c(0,1)) + xlab('True Correlated GF') + ylab('Normalized Probability') + theme_bw() + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5))

AICs <- c(fitMeasures(HOCGF.fit, "aic"), fitMeasures(HOHOF.fit, "aic"), fitMeasures(HOBF.fit, "aic"))
AICweights <- akaike.weights(AICs)$weights

Mods <- data.frame(Model = factor(rep(c("Correlated GF", "Higher-order", "Bifactor"))),
                   CRIT = factor(rep('AIC', each = 3)),
                   values = AICweights)

HOFB <- ggplot(Mods, aes(Model, values, fill = Model)) + geom_bar(stat = "identity", position = "dodge", show.legend = F) + coord_cartesian(ylim = c(0,1)) + xlab('True Higher-order') + ylab('Normalized Probability') + theme_bw() + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5))

AICs <- c(fitMeasures(BFCGF.fit, "aic"), fitMeasures(BFHOF.fit, "aic"), fitMeasures(BFBF.fit, "aic"))
AICweights <- akaike.weights(AICs)$weights

Mods <- data.frame(Model = factor(rep(c("Correlated GF", "Higher-order", "Bifactor"))),
                   CRIT = factor(rep('AIC', each = 3)),
                   values = AICweights)

BFB <- ggplot(Mods, aes(Model, values, fill = Model)) + geom_bar(stat = "identity", position = "dodge", show.legend = F) + coord_cartesian(ylim = c(0,1)) + xlab('True Bifactor') + ylab('Normalized Probability') + theme_bw() + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5))

ggarrange(CGFB, HOFB, BFB, ncol = 3)

Because it is the most parsimonious model for data that every candidate model approximates essentially perfectly, the higher-order model is consistently preferred, even when it is not the correct model. Data this clean are basically never seen in practice, but the possibility remains. So what can be done to differentiate models? Let's examine the unidimensionality - as measured by \(\omega_h\) - of the simulated data, as assessed by the bifactor and higher-order models. With a true correlated group factor model, the higher-order and bifactor models estimate \(\omega_h\) at 0.705 and 0.647; with a true higher-order model, at 0.867 and 0.830; and with a true bifactor model, at 0.861 and 0.822. The bifactor model consistently explained 20-25% more total variance, while the higher-order model produced a greater estimate of the explained common variance attributable to g alongside a 5-10% lower \(\omega_t\) and group factors explaining less total variance.

In the data with no g factor, \(\omega_h\) was considerably smaller, the disparity in the relative \(\omega\) between the higher-order and bifactor models was larger, H for g was 15-16% lower, the total variance explained by g was halved, and the explained common variance for g fell below the common guideline for considering a battery unidimensional (i.e., \(\geq 0.70\)) for both the higher-order (0.682) and bifactor (0.311) models. The multidimensional model with decent intercorrelations was suitably described by the interpretability indices as multidimensional, and the unidimensional models were adequately described as unidimensional. The judgment has to be holistic, however: it should, for example, include simulated versions of empirical matrices to clarify the dimensionality, and it should use multiple indices, since the explained common variance for the bifactor model in the true higher-order and bifactor data was still only around 0.50.

Worth noting is that, despite the difference in the levels of these indices across the simulated models, all the batteries may be interpretable only in a general sense in any case; the differences here matter for theoretical rather than clinical purposes. Common cutoffs for some of these indices may also be wrong. The 0.5 cutoff for \(\omega_h\), for example, could wrongly indicate a lack of interpretive difference between true correlated group factor data without general covariance (i.e., data unlike intelligence data) and the rest (though that is probably accurate here, given the weakness of the group factors and the relative strength of the general variance). Differences of the magnitudes described by Revelle & Wilt (2013) may indicate unidimensionality or multidimensionality on either side of this cutoff, but clearly \(\omega_h\) is not enough on its own.
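
For transparency about how these interpretability indices relate to the loadings, the sketch below computes \(\omega_h\), \(\omega_t\), the explained common variance for g, and H from a vector of general-factor loadings and a matrix of group-factor loadings (bifactor or Schmid-Leiman pattern). These are the standard definitions, shown only for illustration; this is not the exact code behind the numbers above.

#Interpretability indices from bifactor-style loadings (illustrative)
#GL is a vector of g loadings; SL is an items-by-factors matrix of group-factor loadings
BIFIND <- function(GL, SL){
  U <- 1 - (GL^2 + rowSums(SL^2))               #uniquenesses implied by the loadings
  VT <- sum(GL)^2 + sum(colSums(SL)^2) + sum(U) #model-implied total variance
  list(omega_h = sum(GL)^2/VT,                              #general-factor saturation
       omega_t = (sum(GL)^2 + sum(colSums(SL)^2))/VT,       #all reliable variance
       ECV = sum(GL^2)/(sum(GL^2) + sum(SL^2)),             #explained common variance for g
       H = sum(GL^2/(1 - GL^2))/(1 + sum(GL^2/(1 - GL^2))))} #construct replicability for g

#Hypothetical example: fourteen g loadings of 0.6 and two group factors of seven indicators loading 0.4
BIFIND(rep(0.6, 14), cbind(c(rep(0.4, 7), rep(0, 7)), c(rep(0, 7), rep(0.4, 7))))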

Model Comparisons

Not all equally important group factors!

#Replicating their exploratory factor analyses

FA1 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 1)
PCA1 <- pca(SLBF.cor, n.obs = nSLBF, nfactors = 1)
FA2 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 2)
FA3 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 3)
FA4 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 4)
FA5 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 5)
FA6 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 6)
FA7 <- fa(SLBF.cor, n.obs = nSLBF, nfactors = 7)

print(FA1$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1  
## VC   0.803
## VAL  0.621
## SPAR 0.458
## SBL  0.559
## CONF 0.740
## VISM 0.504
## NUMR 0.532
## GINF 0.734
## RETF 0.499
## PICR 0.285
## AUDA 0.475
## ANSY 0.633
## DESP 0.424
## MEWO 0.525
## 
##                  MR1
## SS loadings    4.588
## Proportion Var 0.328
print(PCA1$loadings, cutoff = 0.25)
## 
## Loadings:
##      PC1  
## VC   0.807
## VAL  0.661
## SPAR 0.506
## SBL  0.604
## CONF 0.760
## VISM 0.558
## NUMR 0.580
## GINF 0.753
## RETF 0.549
## PICR 0.325
## AUDA 0.528
## ANSY 0.671
## DESP 0.478
## MEWO 0.573
## 
##                  PC1
## SS loadings    5.195
## Proportion Var 0.371
print(FA2$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1    MR2   
## VC    0.859       
## VAL   0.603       
## SPAR  0.471       
## SBL   0.544       
## CONF  0.765       
## VISM         0.625
## NUMR  0.468       
## GINF  0.766       
## RETF  0.288  0.341
## PICR              
## AUDA         0.435
## ANSY  0.636       
## DESP         0.835
## MEWO  0.510       
## 
##                  MR1   MR2
## SS loadings    3.857 1.437
## Proportion Var 0.276 0.103
## Cumulative Var 0.276 0.378
print(FA3$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1    MR3    MR2   
## VC    0.816              
## VAL          0.418       
## SPAR         0.496       
## SBL   0.405              
## CONF         0.674       
## VISM                0.636
## NUMR         0.417       
## GINF  0.877              
## RETF  0.303         0.366
## PICR                     
## AUDA                0.456
## ANSY         0.759       
## DESP                0.808
## MEWO         0.288       
## 
##                  MR1   MR3   MR2
## SS loadings    1.897 1.845 1.451
## Proportion Var 0.136 0.132 0.104
## Cumulative Var 0.136 0.267 0.371
print(FA4$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1    MR3    MR2    MR4   
## VC    0.818                     
## VAL          0.408              
## SPAR         0.517              
## SBL   0.320                0.268
## CONF         0.628              
## VISM                0.602       
## NUMR         0.296         0.317
## GINF  0.885                     
## RETF                0.341       
## PICR                            
## AUDA                0.432       
## ANSY         0.720              
## DESP                0.849       
## MEWO                       0.751
## 
##                  MR1   MR3   MR2   MR4
## SS loadings    1.737 1.563 1.430 0.847
## Proportion Var 0.124 0.112 0.102 0.061
## Cumulative Var 0.124 0.236 0.338 0.398
print(FA5$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1    MR3    MR2    MR4    MR5   
## VC    0.890                            
## VAL          0.387                     
## SPAR         0.522                     
## SBL                        0.724       
## CONF         0.602                     
## VISM                0.569              
## NUMR         0.266                0.406
## GINF  0.844                            
## RETF  0.351         0.299         0.265
## PICR                                   
## AUDA                0.417  0.313       
## ANSY         0.711                     
## DESP                0.850              
## MEWO                              0.510
## 
##                  MR1   MR3   MR2   MR4   MR5
## SS loadings    1.722 1.482 1.354 0.712 0.585
## Proportion Var 0.123 0.106 0.097 0.051 0.042
## Cumulative Var 0.123 0.229 0.326 0.376 0.418
print(FA6$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1    MR2    MR6    MR3    MR4    MR5   
## VC    0.877                                   
## VAL                 0.460                     
## SPAR                       0.999              
## SBL                               0.709       
## CONF                0.599                     
## VISM         0.564                            
## NUMR                                     0.446
## GINF  0.853                                   
## RETF  0.336  0.295                            
## PICR                0.299                     
## AUDA         0.413                0.318       
## ANSY                0.714                     
## DESP         0.848                            
## MEWO                              0.259  0.469
## 
##                 MR1   MR2   MR6   MR3   MR4   MR5
## SS loadings    1.68 1.336 1.247 1.018 0.697 0.576
## Proportion Var 0.12 0.095 0.089 0.073 0.050 0.041
## Cumulative Var 0.12 0.216 0.305 0.377 0.427 0.468
print(FA7$loadings, cutoff = 0.25)
## 
## Loadings:
##      MR1    MR2    MR6    MR3    MR4    MR5    MR7   
## VC    0.881                                          
## VAL                 0.295                            
## SPAR                       0.996                     
## SBL                               0.833              
## CONF                0.720                            
## VISM         0.498                       0.286       
## NUMR                                     0.463       
## GINF  0.858                                          
## RETF  0.326  0.285                                   
## PICR                                            0.289
## AUDA         0.439                                   
## ANSY                0.727                            
## DESP         0.854                                   
## MEWO                                     0.431       
## 
##                  MR1   MR2   MR6   MR3   MR4   MR5   MR7
## SS loadings    1.692 1.270 1.199 1.011 0.840 0.567 0.193
## Proportion Var 0.121 0.091 0.086 0.072 0.060 0.041 0.014
## Cumulative Var 0.121 0.212 0.297 0.369 0.429 0.470 0.484

The models below compare solutions with and without a g for different numbers of group factors, fitted to the empirical data. I will compute the interpretability indices from the higher-order models, since the identifying constraints required for bifactor models bias their estimates when there are many group factors. Because models like HOF2 are, strictly speaking, not identified, I will show an empirical regularity from the Lim (1988) data: a lack of identification need not (and in fact usually does not) have a large effect on the resulting loadings. I can do this because the two-factor higher-order model is unidentified here, but the same structure, modeled with another battery, is identified.
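
For reference, one standard way to identify a higher-order factor measured by only two first-order factors is to constrain the two second-order loadings to equality via a shared label. The sketch below shows that option for HOF2; it is an alternative specification, not the one used for the results that follow.

#Hypothetical identified variant of HOF2: the shared label 'a' forces the two second-order loadings to be equal
HOF2.id.model <- '
g =~ a*F1 + a*F2

F1 =~ VC + VAL + SPAR + SBL + CONF + NUMR + GINF + ANSY + MEWO + PICR
F2 =~ VISM + RETF + AUDA + DESP'

HOF2.id.fit <- cfa(HOF2.id.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")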

#The result is somewhat worse-fitting using the EGA-suggested clustering for the group factors for this model. The g loadings do not practically differ as a result of relocating a single variable (PICR)

HOF2.model <- '
g =~ F1 + F2

F1 =~ VC + VAL + SPAR + SBL + CONF + NUMR + GINF + ANSY + MEWO + PICR
F2 =~ VISM + RETF + AUDA + DESP'

HOF3.model <- '
g =~ F1 + F2 + F3

F1 =~ VC + SBL + GINF + RETF
F2 =~ VAL + SPAR + CONF + NUMR + ANSY + PICR + MEWO
F3 =~ VISM + RETF + AUDA + DESP'

HOF4.model <- '
g =~ F1 + F2 + F3 + F4

F1 =~ VC + SBL + GINF
F2 =~ VAL + SPAR + CONF + ANSY + PICR
F3 =~ VISM + RETF + AUDA + DESP
F4 =~ NUMR + MEWO'

HOF5.model <- '
g =~ F1 + F2 + F3 + F4 + F5

F1 =~ VC + GINF + RETF
F2 =~ SPAR + CONF + ANSY + VAL + PICR
F3 =~ SBL + AUDA
F4 =~ VISM + AUDA + DESP
F5 =~ NUMR + MEWO'

HOF6.model <- '
g =~ F1 + F2 + F3 + F4 + F5 + F6

F1 =~ VC + GINF + RETF
F2 =~ VISM + AUDA + DESP
F3 =~ VAL + CONF + ANSY
F4 =~ SPAR + PICR
F5 =~ SBL + AUDA
F6 =~ NUMR + MEWO'

HOF7.model <- '
g =~ F1 + F2 + F3 + F4 + F5 + F6 + F7

F1 =~ VC + GINF + RETF
F2 =~ VISM + AUDA + DESP
F3 =~ CONF + ANSY
F4 =~ SPAR 
F5 =~ SBL
F6 =~ NUMR + MEWO
F7 =~ PICR + VAL'

HOF2.fit <- suppressWarnings(cfa(HOF2.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")) #mimicking EQS to approximate Amos results
HOF3.fit <- cfa(HOF3.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
HOF4.fit <- cfa(HOF4.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
HOF5.fit <- cfa(HOF5.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
HOF6.fit <- cfa(HOF6.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
HOF7.fit <- cfa(HOF7.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")

CGF2.model <- '
F1 =~ VC + VAL + SPAR + SBL + CONF + NUMR + GINF + ANSY + MEWO + PICR
F2 =~ VISM + RETF + AUDA + DESP'

CGF3.model <- '
F1 =~ VC + SBL + GINF + RETF
F2 =~ VAL + SPAR + CONF + NUMR + ANSY + PICR + MEWO
F3 =~ VISM + RETF + AUDA + DESP'

CGF4.model <- '
F1 =~ VC + SBL + GINF
F2 =~ VAL + SPAR + CONF + ANSY + PICR
F3 =~ VISM + RETF + AUDA + DESP
F4 =~ NUMR + MEWO'

CGF5.model <- '
F1 =~ VC + GINF + RETF
F2 =~ SPAR + CONF + ANSY + VAL + PICR
F3 =~ SBL + AUDA
F4 =~ VISM + AUDA + DESP
F5 =~ NUMR + MEWO'

CGF6.model <- '
F1 =~ VC + GINF + RETF
F2 =~ VISM + AUDA + DESP
F3 =~ VAL + CONF + ANSY
F4 =~ SPAR + PICR
F5 =~ SBL + AUDA
F6 =~ NUMR + MEWO

F3 ~~ 1*F4'

CGF7.model <- '
F1 =~ VC + GINF + RETF
F2 =~ VISM + AUDA + DESP
F3 =~ CONF + ANSY
F4 =~ SPAR 
F5 =~ SBL
F6 =~ NUMR + MEWO
F7 =~ PICR + VAL'

CGF2.fit <- cfa(CGF2.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
CGF3.fit <- cfa(CGF3.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
CGF4.fit <- cfa(CGF4.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
CGF5.fit <- cfa(CGF5.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
CGF6.fit <- suppressWarnings(cfa(CGF6.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS"))
CGF7.fit <- cfa(CGF7.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")

round(cbind("HOF2" = fitMeasures(HOF2.fit, FITM),
            "HOF3" = fitMeasures(HOF3.fit, FITM),
            "HOF4" = fitMeasures(HOF4.fit, FITM),
            "HOF5" = fitMeasures(HOF5.fit, FITM),
            "HOF6" = fitMeasures(HOF6.fit, FITM),
            "HOF7" = fitMeasures(HOF7.fit, FITM)), 3)
##                     HOF2      HOF3      HOF4      HOF5      HOF6      HOF7
## chisq            496.818   277.355   286.386   252.628   262.924   303.534
## df                75.000    73.000    73.000    71.000    70.000    72.000
## npar              30.000    32.000    32.000    34.000    35.000    33.000
## cfi                0.885     0.944     0.942     0.950     0.947     0.937
## rmsea              0.085     0.060     0.061     0.057     0.059     0.064
## rmsea.ci.lower     0.078     0.053     0.054     0.050     0.052     0.057
## rmsea.ci.upper     0.092     0.068     0.069     0.065     0.067     0.072
## aic            27797.591 27581.847 27590.890 27561.087 27573.397 27610.059
## bic            27937.370 27730.944 27739.987 27719.503 27736.473 27763.816
## srmr               0.056     0.049     0.051     0.049     0.050     0.054
## tli                0.860     0.930     0.927     0.936     0.931     0.920
round(cbind("CGF2" = fitMeasures(CGF2.fit, FITM),
            "CGF3" = fitMeasures(CGF3.fit, FITM),
            "CGF4" = fitMeasures(CGF4.fit, FITM),
            "CGF5" = fitMeasures(CGF5.fit, FITM),
            "CGF6" = fitMeasures(CGF6.fit, FITM),
            "CGF7" = fitMeasures(CGF7.fit, FITM)), 3)
##                     CGF2      CGF3      CGF4      CGF5      CGF6      CGF7
## chisq            496.818   277.355   280.959   240.755   233.875   269.363
## df                76.000    73.000    71.000    66.000    62.000    58.000
## npar              29.000    32.000    34.000    39.000    43.000    47.000
## cfi                0.885     0.944     0.943     0.952     0.953     0.942
## rmsea              0.084     0.060     0.062     0.058     0.060     0.068
## rmsea.ci.lower     0.077     0.053     0.054     0.050     0.052     0.060
## rmsea.ci.upper     0.091     0.068     0.069     0.066     0.068     0.077
## aic            27795.591 27581.847 27589.455 27559.200 27560.310 27603.845
## bic            27930.711 27730.944 27747.871 27740.912 27760.660 27822.831
## srmr               0.056     0.049     0.050     0.048     0.047     0.052
## tli                0.862     0.930     0.926     0.934     0.931     0.909

The fit incrementally improves up to five factors and then worsens, although not consistently across indices. To achieve a 5% false-positive rate at this sample size, the scaled p value is 0.000169 (Naaman, 2016). Of the two best-fitting models, the five-factor models, the more parsimonious one - with a higher-order g - fits better. Both fit better than the unwieldy, practically useless seven-factor models presented by Decker et al. (2020), whose identification constraints were left unstated. With that out of the way, I will next address their misuse of EFA and their treatment of multifactor EFA solutions as g plus group factors (how they arrived at this is baffling, but it probably came from reading g as the first principal component or factor of a PCA/EFA and never thinking further about the method they were using) by showing the g loadings for the higher-order models with their various numbers of group factors. I will compare these to the single-factor EFA and PCA solutions.
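
The g loading vectors below are the indirect loadings implied by each higher-order model: each indicator's first-order loading multiplied by its factor's loading on g. As a sketch of how one might pull them from a fitted model's standardized solution (shown for HOF5 and assuming this extraction step; cross-loading indicators such as AUDA contribute one product per factor):

#Sketch: indirect g loadings from a fitted higher-order model (first-order loading x second-order loading)
STD <- standardizedSolution(HOF5.fit)
L1 <- STD[STD$op == "=~" & STD$lhs != "g", ] #first-order loadings (factor -> indicator)
L2 <- STD[STD$op == "=~" & STD$lhs == "g", ] #second-order loadings (g -> factor)
GLOAD <- L1$est.std * L2$est.std[match(L1$lhs, L2$rhs)]
names(GLOAD) <- L1$rhs
round(GLOAD, 3)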

F2 <- c(0.648, 0.468, 0.347, 0.419, 0.556, 0.377, 0.602, 0.468, 0.382, 0.203, 0.551, 0.446, 0.458, 0.544)
F3 <- c(0.759, 0.635, 0.488, 0.430, 0.789, 0.533, 0.695, 0.681, 0.514, 0.282, 0.402, 0.303, 0.316, 0.399)
F4 <- c(0.784, 0.602, 0.468, 0.447, 0.758, 0.530, 0.721, 0.653, 0.516, 0.269, 0.441, 0.350, 0.358, 0.428)
F5 <- c(0.793, 0.594, 0.462, 0.569, 0.744, 0.523, 0.724, 0.641, 0.529, 0.264, 0.377, 0.386, 0.292, 0.244)
F6 <- c(0.780, 0.614, 0.491, 0.560, 0.764, 0.521, 0.711, 0.658, 0.519, 0.285, 0.379, 0.378, 0.288, 0.384)
F7 <- c(0.781, 0.645, 0.487, 0.566, 0.761, 0.520, 0.711, 0.643, 0.518, 0.279, 0.403, 0.378, 0.335, 0.410)

EFA1 <- c(0.803, 0.621, 0.458, 0.559, 0.740, 0.532, 0.734, 0.633, 0.525, 0.285, 0.504, 0.499, 0.475, 0.424)
PCA1 <- c(0.807, 0.661, 0.506, 0.604, 0.760, 0.580, 0.753, 0.671, 0.573, 0.325, 0.558, 0.549, 0.528, 0.478)

CONGO(F2, F3); CONGO(F2, F4); CONGO(F2, F5); CONGO(F2, F6); CONGO(F2, F7); CONGO(F2, EFA1); CONGO(F2, PCA1)
## [1] 0.967
## [1] 0.978
## [1] 0.959
## [1] 0.968
## [1] 0.974
## [1] 0.987
## [1] 0.989
CONGO(F3, F4); CONGO(F3, F5); CONGO(F3, F6); CONGO(F3, F7); CONGO(F3, EFA1); CONGO(F3, PCA1)
## [1] 0.999
## [1] 0.993
## [1] 0.997
## [1] 0.997
## [1] 0.99
## [1] 0.987
CONGO(F4, F5); CONGO(F4, F6); CONGO(F4, F7); CONGO(F4, EFA1); CONGO(F4, PCA1)
## [1] 0.993
## [1] 0.997
## [1] 0.998
## [1] 0.995
## [1] 0.993
CONGO(F5, F6); CONGO(F5, F7); CONGO(F5, EFA1); CONGO(F5, PCA1)
## [1] 0.997
## [1] 0.996
## [1] 0.991
## [1] 0.987
CONGO(F6, F7); CONGO(F6, EFA1); CONGO(F6, PCA1)
## [1] 0.999
## [1] 0.993
## [1] 0.99
CONGO(F7, EFA1); CONGO(F7, PCA1)
## [1] 0.995
## [1] 0.993
CONGO(EFA1, PCA1)
## [1] 0.999
cor(F2, F3); cor(F2, F4); cor(F2, F5); cor(F2, F6); cor(F2, F7); cor(F2, EFA1); cor(F2, PCA1)
## [1] 0.575
## [1] 0.68
## [1] 0.513
## [1] 0.564
## [1] 0.613
## [1] 0.769
## [1] 0.778
cor(F3, F4); cor(F3, F5); cor(F3, F6); cor(F3, F7); cor(F3, EFA1); cor(F3, PCA1)
## [1] 0.988
## [1] 0.927
## [1] 0.966
## [1] 0.968
## [1] 0.896
## [1] 0.885
cor(F4, F5); cor(F4, F6); cor(F4, F7); cor(F4, EFA1); cor(F4, PCA1)
## [1] 0.931
## [1] 0.964
## [1] 0.971
## [1] 0.942
## [1] 0.932
cor(F5, F6); cor(F5, F7); cor(F5, EFA1); cor(F5, PCA1)
## [1] 0.978
## [1] 0.97
## [1] 0.932
## [1] 0.921
cor(F6, F7); cor(F6, EFA1); cor(F6, PCA1)
## [1] 0.995
## [1] 0.925
## [1] 0.913
cor(F7, EFA1); cor(F7, PCA1)
## [1] 0.943
## [1] 0.934
cor(EFA1, PCA1)
## [1] 0.997
cor(F2, F3, method = "spearman"); cor(F2, F4, method = "spearman"); cor(F2, F5, method = "spearman"); cor(F2, F6, method = "spearman"); cor(F2, F7, method = "spearman"); cor(F2, EFA1, method = "spearman"); cor(F2, PCA1, method = "spearman")
## [1] 0.568
## [1] 0.576
## [1] 0.51
## [1] 0.594
## [1] 0.594
## [1] 0.647
## [1] 0.647
cor(F3, F4, method = "spearman"); cor(F3, F5, method = "spearman"); cor(F3, F6, method = "spearman"); cor(F3, F7, method = "spearman"); cor(F3, EFA1, method = "spearman"); cor(F3, PCA1, method = "spearman")
## [1] 0.996
## [1] 0.921
## [1] 0.96
## [1] 0.956
## [1] 0.903
## [1] 0.903
cor(F4, F5, method = "spearman"); cor(F4, F6, method = "spearman"); cor(F4, F7, method = "spearman"); cor(F4, EFA1, method = "spearman"); cor(F4, PCA1, method = "spearman")
## [1] 0.925
## [1] 0.965
## [1] 0.96
## [1] 0.908
## [1] 0.908
cor(F5, F6, method = "spearman"); cor(F5, F7, method = "spearman"); cor(F5, EFA1, method = "spearman"); cor(F5, PCA1, method = "spearman")
## [1] 0.947
## [1] 0.943
## [1] 0.96
## [1] 0.96
cor(F6, F7, method = "spearman"); cor(F6, EFA1, method = "spearman"); cor(F6, PCA1, method = "spearman")
## [1] 0.996
## [1] 0.934
## [1] 0.934
cor(F7, EFA1, method = "spearman"); cor(F7, PCA1, method = "spearman")
## [1] 0.93
## [1] 0.93
cor(EFA1, PCA1, method = "spearman")
## [1] 1
dfhoftvs <- data.frame("TVF" = c(TVF(F2),  TVF(F3),  TVF(F4),  TVF(F5),  TVF(F6),  TVF(F7), TVF(EFA1), TVF(PCA1)), "NUM" = c(2, 3, 4, 5, 6, 7, "EFA", "PCA"))
dfhoftvs

Clearly there is no issue with the identities of the g loadings, nor even much of a problem with their absolute values across different numbers of group factors, especially after fitting enough factors (i.e., reaching an acceptable solution). Moreover, increasing the number of factors tended to increase the variance attributable to g in going from the least acceptable to an acceptable case (two to three group factors), after which it increased only marginally with more group factors. The exploratory solutions both produced larger estimates of the variance attributable to g. Going from the two- to the seven-factor models, \(\frac{\omega_h}{\omega_t}\) was 0.74/0.82, 0.83/0.85, 0.83/0.85, 0.83/0.85, 0.84/0.86, and 0.84/0.86. The H values and explained common variances were 0.82/0.85, 0.88/0.92, 0.88/0.91, 0.88/0.90, 0.88/0.89, and 0.89/0.86. The PUC was only acceptable (>0.70) with four or more factors. In no case did a group factor explain more than 2.9% of the total variance.
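
Several of these indices can be reproduced from standardized bifactor loadings; below is a minimal sketch (not the exact routine used here) assuming standardized indicators, a vector of g loadings g, and a list s of group-factor loading vectors, each padded with zeros to the full indicator length.

#Hedged sketch: omega-hierarchical, omega-total, ECV, and H for the general factor
BIFIND <- function(g, s) {
  u <- 1 - (g^2 + Reduce(`+`, lapply(s, function(x) x^2)))          #implied uniquenesses
  TV <- sum(g)^2 + sum(sapply(s, function(x) sum(x)^2)) + sum(u)    #total variance
  c(omega_h = sum(g)^2 / TV,
    omega_t = (sum(g)^2 + sum(sapply(s, function(x) sum(x)^2))) / TV,
    ECV = sum(g^2) / (sum(g^2) + sum(sapply(s, function(x) sum(x^2)))),
    H = 1 / (1 + 1 / sum(g^2 / (1 - g^2))))
}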

We can also fit a unified factor model with, say, six residual covariances chosen from large modification indices and observe that such a model, though theoretically inappropriate, can produce better model fit than the other models. This solution is obviously susceptible to bias in the choice of modification indices (overfitting, erroneous choices), and it does not allow estimation of the contributions of group factors, but in many cases their contributions will be marginal anyway (that is the crux of the issue under contention). If we go by the model fits that Decker et al. (2020) variously slighted and utilized (for inappropriate comparisons), this is acceptable. We can also assess how "off" the loadings from this model are when treated in this way.
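
Below is a minimal sketch of how such residual covariances might be selected, by fitting a plain one-factor model and ranking its residual-covariance modification indices; this illustrates the procedure and is not necessarily how the six covariances used next were chosen.

#Hedged sketch: ranking residual-covariance modification indices from a plain one-factor model
UNI0.model <- 'g =~ VC + VAL + SPAR + SBL + CONF + VISM + NUMR + GINF + RETF + PICR + AUDA + ANSY + DESP + MEWO'
UNI0.fit <- cfa(UNI0.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
MI <- modindices(UNI0.fit)
MI <- MI[MI$op == "~~", ]                        #residual covariances only
head(MI[order(MI$mi, decreasing = TRUE), ], 6)   #the six largest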

UNIFIED.model <- '
g =~ VC + VAL + SPAR + SBL + CONF + VISM + NUMR + GINF + RETF + PICR + AUDA + ANSY + DESP + MEWO

VC ~~ GINF
CONF ~~ ANSY
DESP ~~ VISM + AUDA
AUDA ~~ VISM + SBL'

UNIFIED.fit <- cfa(UNIFIED.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")

fitMeasures(UNIFIED.fit, FITM)
##          chisq             df           npar            cfi          rmsea 
##        242.354         71.000         34.000          0.953          0.056 
## rmsea.ci.lower rmsea.ci.upper            aic            bic           srmr 
##          0.048          0.063      27550.801      27709.217          0.043 
##            tli 
##          0.940
UNI <- c(0.877, 0.612, 0.445, 0.548, 0.711, 0.496, 0.818, 0.582, 0.503, 0.263, 0.404, 0.473, 0.394, 0.322)

CONGO(UNI, F2); CONGO(UNI, F3); CONGO(UNI, F4); CONGO(UNI, F5); CONGO(UNI, F6); CONGO(UNI, F7); CONGO(UNI, EFA1); CONGO(UNI, PCA1)
## [1] 0.974
## [1] 0.988
## [1] 0.992
## [1] 0.995
## [1] 0.993
## [1] 0.994
## [1] 0.995
## [1] 0.992
cor(UNI, F2); cor(UNI, F3); cor(UNI, F4); cor(UNI, F5); cor(UNI, F6); cor(UNI, F7); cor(UNI, EFA1); cor(UNI, PCA1)
## [1] 0.673
## [1] 0.873
## [1] 0.916
## [1] 0.951
## [1] 0.928
## [1] 0.935
## [1] 0.97
## [1] 0.956
cor(UNI, F2, method = "spearman"); cor(UNI, F3, method = "spearman"); cor(UNI, F4, method = "spearman"); cor(UNI, F5, method = "spearman"); cor(UNI, F6, method = "spearman"); cor(UNI, F7, method = "spearman"); cor(UNI, EFA1, method = "spearman"); cor(UNI, PCA1, method = "spearman")
## [1] 0.572
## [1] 0.899
## [1] 0.908
## [1] 0.982
## [1] 0.943
## [1] 0.947
## [1] 0.965
## [1] 0.965
TVF(UNI)
## [1] 0.312

Even the (well-fitting) unidimensional solution, despite its loading inflation by psychometric sampling error (which differs in importance between batteries), only produced estimated total variance attributable to g that was between 0.6 and 1.8% too great (i.e., in total, almost 2 to 6% too large, exempting the two-factor solution's loadings, which are probably quite a bit 'off' because of a different variety of error). Next, I will compare the five-factor (since this was the best solution) correlated group factor, bifactor, and higher-order models to one another. Because of the lack of within-group-factor homogeneity and a wish to avoid the effects of constraints on model fits and parameter estimates, I removed the two group factors defined by only two indicators from the bifactor model, replaced them with residual covariances, and computed the (now unmodeled) factors' loadings with path-tracing rules (this of course assumes homogeneity, but it does not bias the estimates of the other parameters); the PICR variable was not assigned to a group factor. I will assess the amount of bias this generates and the extent of proportionality/tetrad constraint effects by comparing the unidimensionality and g saturation of the higher-order and bifactor models.
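
For the two group factors replaced by residual covariances, the path-tracing step reduces to a simple identity: if indicators \(i\) and \(j\) share only a residual covariance \(\theta_{ij}\), the model implies \(\theta_{ij} = s_i s_j\), so under the homogeneity assumption just stated,

\[s_i = s_j = \sqrt{\theta_{ij}}\]

where \(\theta_{ij}\) is the standardized residual covariance; this is a reconstruction under that assumption rather than a report of the exact computation.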

CGF5.model <- '
Gc =~ VC + GINF + RETF
Glr =~ SPAR + CONF + ANSY + VAL + PICR
Ga =~ SBL + AUDA
Gs =~ VISM + AUDA + DESP
Gsm =~ NUMR + MEWO'

BIFACTOR.model <- '
g =~ VC + VAL + SPAR + SBL + CONF + VISM + NUMR + GINF + RETF + PICR + AUDA + ANSY + DESP + MEWO

Gc =~ VC + GINF + RETF
Glr =~ SPAR + CONF + ANSY + VAL
SBL ~~ AUDA
Gs =~ VISM + AUDA + DESP
NUMR ~~ MEWO

GINF ~~ 0*GINF'

HOF.model <- '
g =~ Gc + Glr + Ga + Gs + Gsm

Gc =~ VC + GINF + RETF
Glr =~ SPAR + CONF + ANSY + VAL + PICR
Ga =~ SBL + AUDA
Gs =~ VISM + AUDA + DESP
Gsm =~ NUMR + MEWO'

CGF5.fit <- cfa(CGF5.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS") #mimic EQS to be closer to the original paper's Amos results
BIFACTOR.fit <- cfa(BIFACTOR.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, orthogonal = T, mimic = "EQS")
HOF.fit <- cfa(HOF.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")

round(cbind("Correlated GF" = fitMeasures(CGF5.fit, FITM),
            "Bifactor" = fitMeasures(BIFACTOR.fit, FITM),
            "Higher-order" = fitMeasures(HOF.fit, FITM)), 3)
##                Correlated GF  Bifactor Higher-order
## chisq                240.755   199.343      252.628
## df                    66.000    66.000       71.000
## npar                  39.000    39.000       34.000
## cfi                    0.952     0.964        0.950
## rmsea                  0.058     0.051        0.057
## rmsea.ci.lower         0.050     0.043        0.050
## rmsea.ci.upper         0.066     0.059        0.065
## aic                27559.200 27517.735    27561.087
## bic                27740.912 27699.447    27719.503
## srmr                   0.048     0.040        0.049
## tli                    0.934     0.950        0.936
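
Because the higher-order model is nested within the correlated group factor model (it imposes proportionality constraints on the group-factor correlations), the two can also be compared with a likelihood ratio test; a minimal sketch:

#Hedged sketch: chi-square difference (likelihood ratio) test of the nested models
anova(HOF.fit, CGF5.fit)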

Listing the higher-order and bifactor model statistics in that order: 67.8 versus 54.6% unique variance; a g factor explaining 29 versus 30.9% of the total variance; \(\frac{\omega_h}{\omega_t}\) of 0.83/0.85 versus 0.82/0.89; PUCs of 0.82; H values of 0.88 and 0.88; explained common variances of 90.1 and 68%; and Gc, Glr, Ga, Gs, and Gsm statistics that were universally weaker in the higher-order model, with total variances ranging from 0.2 to 1.2%, H values from 0.03 to 0.16, explained common variances from 0.8 to 3.7%, and \(\omega_{hs}\) from 0.02 to 0.13 alongside \(\omega_{ts}\) from 0.26 to 0.71; in the bifactor model, these ranges were 0.3 to 6%, 0.05 to 0.65, 0.7 to 13.2%, 0.04 to 0.52, and 0.47 to 0.83. If this generalizes, then in batteries with well-defined group factors, we should expect the bifactor model to deliver more interpretable group factors, which may explain why bifactor group factors have shown greater criterion validity than otherwise-residualized group factors in past analyses.

BFG <- c(.794, .633, .439, .561, .739, .448, .529, .707, .481, .282, .412, .610, .350, .530) 
HOFG <- c(.793, .594, .462, .569, .744, .377, .523, .724, .386, .264, .292, .641, .244, .529)

cor(HOFG, BFG); cor(HOFG, BFG, method = "spearman"); CONGO(HOFG, BFG); TVF(HOFG); TVF(BFG)
## [1] 0.971
## [1] 0.978
## [1] 0.995
## [1] 0.29
## [1] 0.309

The factor covariance matrix from the group factor model used by Decker et al. can also be used to showcase its unidimensionality in the typical way:

CGF.model <- '
Gc =~ VC + GINF
Glr =~ VAL + RETF
Gv =~ SPAR + PICR
Ga =~ SBL + AUDA
Gf =~ CONF + ANSY
Gs =~ VISM + DESP
Gsm =~ NUMR + MEWO'
CGF.fit <- cfa(CGF.model, sample.cov = SLBF.cor, sample.nobs = nSLBF, std.lv = T, mimic = "EQS")
## Warning in lav_object_post_check(object): lavaan WARNING: covariance matrix of latent variables
##                 is not positive definite;
##                 use lavInspect(fit, "cov.lv") to investigate.
ASL <- lavInspect(CGF.fit, "cov.lv")
suppressWarnings(fa.parallel(ASL, n.obs = nSLBF))

## Parallel analysis suggests that the number of factors =  4  and the number of components =  1
suppressWarnings(nfactors(ASL, n.obs = nSLBF))

## 
## Number of factors
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = FALSE, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.97  with  1  factors
## VSS complexity 2 achieves a maximimum of 0.99  with  2  factors
## The Velicer MAP achieves a minimum of
## Warning in min(x$map, na.rm = TRUE): no non-missing arguments to min; returning
## Inf
## Inf  with    factors 
## Empirical BIC achieves a minimum of  8.59  with  3  factors
## Sample Size adjusted BIC achieves a minimum of  96276072166  with  3  factors
## 
## Statistics by number of factors 
##   vss1 vss2 map dof        chisq prob sqresid  fit RMSEA          BIC
## 1 0.97 0.00 NaN  14 529949320721    0  0.8917 0.97  6966 529949320628
## 2 0.83 0.99 NaN   8 372634043174    0  0.3994 0.99  7728 372634043121
## 3 0.47 0.90 NaN   3  96276072176    0  0.2328 0.99  6414  96276072156
## 4 0.32 0.71 NaN  -1  68607679417   NA  0.1004 1.00    NA           NA
## 5 0.29 0.64 NaN  -4   8774387891   NA  0.0097 1.00    NA           NA
## 6 0.29 0.64 NaN  -6   8766815604   NA  0.0097 1.00    NA           NA
## 7 0.29 0.64  NA  -7   8759243317   NA  0.0097 1.00    NA           NA
##          SABIC complex eChisq  SRMR eCRMS eBIC
## 1 529949320672     1.0  160.8 0.070 0.086 67.6
## 2 372634043146     1.4   85.7 0.051 0.083 32.5
## 3  96276072166     1.9   28.6 0.030 0.078  8.6
## 4           NA     2.3    9.5 0.017    NA   NA
## 5           NA     2.3    5.9 0.013    NA   NA
## 6           NA     2.3    5.9 0.013    NA   NA
## 7           NA     2.3    5.9 0.013    NA   NA
MKA <- suppressWarnings(fa(ASL, n.obs = nSLBF, nfactors = 2)); MAPA(MKA); suppressWarnings(EGA(ASL, n = nSLBF, plot.EGA = T))
## The smallest factor had an eigenvalue of 0.679 with a confidence interval of 0.611 to 0.746 so the factor should be dropped.
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 2
## 
## Items per Dimension:
##     items dimension
## Gc     Gc         1
## Glr   Glr         1
## Ga     Ga         1
## Gs     Gs         1
## Gsm   Gsm         1
## Gv     Gv         2
## Gf     Gf         2

There is only one clear dimension at this level; the appearance of a second in the EGA is apparently a result of the ultra-Heywood case between the Gf and Gv group factors, itself a knock against the model's appropriateness. Using Spearman's method to get the g loadings from this factor covariance matrix, the g saturation is not unlike that of the other g models.
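
A minimal sketch of Spearman's classic triad formula, \(\lambda_i = \sqrt{\frac{r_{ij} r_{ik}}{r_{jk}}}\), averaged over all pairs of the other variables; this is one way to implement "Spearman's method" and is offered as an illustration rather than the exact computation used here.

#Hedged sketch: Spearman's triad formula for g loadings, averaged over pairs
SPEARG <- function(R) {
  R <- cov2cor(R); p <- ncol(R)
  out <- sapply(seq_len(p), function(i) {
    jk <- combn(setdiff(seq_len(p), i), 2)
    mean(apply(jk, 2, function(x) sqrt(R[i, x[1]] * R[i, x[2]] / R[x[1], x[2]])), na.rm = TRUE)
  })
  setNames(round(out, 3), colnames(R))
}
SPEARG(ASL)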

Lim (1988)

lowerLIM <-'
1                                                                                       
0.34    1                                                                                   
0.25    0.48    1                                                                               
0.34    0.58    0.68    1                                                                           
0.43    0.54    0.53    0.6 1                                                                       
0.32    0.55    0.51    0.56    0.54    1                                                                   
0.29    0.52    0.77    0.64    0.59    0.54    1                                                               
0.41    0.38    0.35    0.43    0.45    0.31    0.37    1                                                           
0.35    0.29    0.26    0.34    0.32    0.29    0.27    0.56    1                                                       
0.41    0.33    0.24    0.36    0.38    0.23    0.27    0.62    0.59    1                                                   
0.34    0.37    0.37    0.41    0.39    0.33    0.34    0.64    0.67    0.59    1                                               
0.44    0.37    0.38    0.42    0.43    0.32    0.38    0.63    0.58    0.61    0.67    1                                           
0.23    0.23    0.17    0.21    0.29    0.21    0.23    0.31    0.14    0.21    0.23    0.25    1                                       
0.26    0.3 0.24    0.28    0.27    0.29    0.25    0.29    0.21    0.25    0.28    0.31    0.24    1                                   
0.01    0.1 -0.01   0.04    0   0.1 0.02    -0.07   0   -0.01   0   -0.03   -0.09   0.04    1                               
0.29    0.29    0.27    0.27    0.35    0.27    0.28    0.27    0.18    0.21    0.27    0.27    0.24    0.29    0.01    1                           
0.35    0.36    0.28    0.37    0.36    0.3 0.32    0.4 0.27    0.39    0.34    0.34    0.28    0.31    -0.02   0.29    1                       
0.13    0.26    0.21    0.21    0.27    0.22    0.25    0.15    0.08    0.21    0.14    0.14    0.17    0.14    0.04    0.14    0.2 1                   
0.02    0.09    0.05    0.04    0.03    -0.01   0.03    0.07    -0.01   0.05    0.06    0.06    0.03    0.06    0.02    0   0.03    0.03    1               
0.28    0.24    0.21    0.22    0.33    0.24    0.21    0.26    0.16    0.2 0.27    0.27    0.24    0.3 0.03    0.28    0.29    0.09    0.12    1           
0.39    0.35    0.3 0.32    0.42    0.32    0.36    0.4 0.26    0.3 0.37    0.37    0.44    0.34    -0.01   0.32    0.41    0.2 0.05    0.33    1       
0.36    0.38    0.26    0.3 0.39    0.33    0.31    0.3 0.24    0.27    0.29    0.29    0.28    0.3 -0.04   0.26    0.37    0.18    0.04    0.3 0.48    1   
0.41    0.41    0.32    0.38    0.43    0.37    0.34    0.43    0.32    0.36    0.42    0.42    0.34    0.37    -0.05   0.31    0.46    0.24    0.03    0.28    0.53    0.45    1'

lowerLIMPSYC <- '
1                                           
0.34    1                                       
0.25    0.48    1                                   
0.34    0.58    0.68    1                               
0.43    0.54    0.53    0.6 1                           
0.32    0.55    0.51    0.56    0.54    1                       
0.29    0.52    0.77    0.64    0.59    0.54    1                   
0.41    0.38    0.35    0.43    0.45    0.31    0.37    1               
0.35    0.29    0.26    0.34    0.32    0.29    0.27    0.56    1           
0.41    0.33    0.24    0.36    0.38    0.23    0.27    0.62    0.59    1       
0.34    0.37    0.37    0.41    0.39    0.33    0.34    0.64    0.67    0.59    1   
0.44    0.37    0.38    0.42    0.43    0.32    0.38    0.63    0.58    0.61    0.67    1'

lowerLIMPIA <- '
1                                       
0.24    1                                   
-0.09   0.04    1                               
0.24    0.29    0.01    1                           
0.28    0.31    -0.02   0.29    1                       
0.17    0.14    0.04    0.14    0.2 1                   
0.03    0.06    0.02    0   0.03    0.03    1               
0.24    0.3 0.03    0.28    0.29    0.09    0.12    1           
0.44    0.34    -0.01   0.32    0.41    0.2 0.05    0.33    1       
0.28    0.3 -0.04   0.26    0.37    0.18    0.04    0.3 0.48    1   
0.34    0.37    -0.05   0.31    0.46    0.24    0.03    0.28    0.53    0.45    1'

nLIM <- 459

LIM.cor = getCov(lowerLIM, names = c("APM", "B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8", "B9", "B10", "B11", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8", "T1", "T2", "T3"))
LIMPSYC.cor = getCov(lowerLIMPSYC, names = c("APM", "B1", "B2", "B3", "B4", "B5", "B6", "B7", "B8", "B9", "B10", "B11"))
LIMPIA.cor = getCov(lowerLIMPIA, names = c("C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8", "T1", "T2", "T3"))
Parallel Analysis
fa.parallel(LIM.cor, n.obs = nLIM)
## Warning in fa.stats(r = r, f = f, phi = phi, n.obs = n.obs, np.obs = np.obs, :
## The estimated weights for the factor scores are probably incorrect. Try a
## different factor score estimation method.

## Parallel analysis suggests that the number of factors =  3  and the number of components =  3
fa.parallel(LIMPSYC.cor, n.obs = nLIM)

## Parallel analysis suggests that the number of factors =  2  and the number of components =  2
fa.parallel(LIMPIA.cor, n.obs = nLIM)

## Parallel analysis suggests that the number of factors =  1  and the number of components =  1
Exploratory Factor Analyses
FATOT <- fa(LIM.cor, n.obs = nLIM, nfactors = 3)
FAPSYC <- fa(LIMPSYC.cor, n.obs = nLIM, nfactors = 2)
FAPIA <- fa(LIMPIA.cor, n.obs = nLIM, nfactors = 1)

print(FATOT$loadings, cutoff = 0.3)
## 
## Loadings:
##     MR1    MR2    MR3   
## APM                0.382
## B1   0.536              
## B2   0.877              
## B3   0.766              
## B4   0.531              
## B5   0.614              
## B6   0.855              
## B7          0.686       
## B8          0.841       
## B9          0.769       
## B10         0.795       
## B11         0.730       
## C1                 0.541
## C2                 0.443
## C3                      
## C4                 0.398
## C5                 0.488
## C6                      
## C7                      
## C8                 0.462
## T1                 0.735
## T2                 0.606
## T3                 0.615
## 
##                  MR1   MR2   MR3
## SS loadings    3.127 3.054 2.783
## Proportion Var 0.136 0.133 0.121
## Cumulative Var 0.136 0.269 0.390
print(FAPSYC$loadings, cutoff = 0.3)
## 
## Loadings:
##     MR1    MR2   
## APM         0.395
## B1   0.624       
## B2   0.839       
## B3   0.773       
## B4   0.647       
## B5   0.687       
## B6   0.868       
## B7          0.740
## B8          0.789
## B9          0.811
## B10         0.797
## B11         0.760
## 
##                  MR1   MR2
## SS loadings    3.395 3.249
## Proportion Var 0.283 0.271
## Cumulative Var 0.283 0.554
print(FAPIA$loadings)
## 
## Loadings:
##    MR1   
## C1  0.507
## C2  0.516
## C3       
## C4  0.471
## C5  0.602
## C6  0.296
## C7       
## C8  0.475
## T1  0.731
## T2  0.614
## T3  0.709
## 
##                  MR1
## SS loadings    2.840
## Proportion Var 0.258
NUM <- seq(1, 23, 1); EVS <- FATOT$e.values; TVP <- PTV(FATOT)
bpdf <- data.frame("NUM" = NUM, "EVS" = EVS, "TVP" = TVP)

EVP <- ggplot(bpdf, aes(x = NUM, y = EVS, fill = as.factor(NUM))) + geom_bar(stat = "identity", show.legend = F) + xlab("Number") + ylab("Eigenvalue") + theme_bw() + scale_fill_hue(c = 30) + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5)) + geom_hline(yintercept = 1, color = "darkgray", linetype = "dashed", size = 1)

VPT <- ggplot(bpdf, aes(x = factor(1), y = TVP, fill = as.factor(NUM))) + geom_bar(stat = "identity", show.legend = F) + ylab("Percentage of Total Variance") + theme_bw() + scale_fill_hue(c = 50) + theme(text = element_text(size = 12, family = "serif"),  plot.title = element_text(hjust = 0.5), axis.title.y = element_blank(), axis.text.y = element_blank(), axis.ticks.y = element_blank()) + coord_polar("y")

ggarrange(EVP, VPT, ncol = 2)

#Bonus EGAs
EGA(LIM.cor, n = nLIM, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 3
## 
## Items per Dimension:
##     items dimension
## APM   APM         1
## C1     C1         1
## C2     C2         1
## C4     C4         1
## C5     C5         1
## C7     C7         1
## C8     C8         1
## T1     T1         1
## T2     T2         1
## T3     T3         1
## B1     B1         2
## B2     B2         2
## B3     B3         2
## B4     B4         2
## B5     B5         2
## B6     B6         2
## C3     C3         2
## C6     C6         2
## B7     B7         3
## B8     B8         3
## B9     B9         3
## B10   B10         3
## B11   B11         3
EGA(LIMPSYC.cor, n = nLIM, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 2
## 
## Items per Dimension:
##     items dimension
## APM   APM         1
## B7     B7         1
## B8     B8         1
## B9     B9         1
## B10   B10         1
## B11   B11         1
## B1     B1         2
## B2     B2         2
## B3     B3         2
## B4     B4         2
## B5     B5         2
## B6     B6         2
EGA(LIMPIA.cor, n = nLIM, plot.EGA = T)
## Network estimated with:
##  • gamma = 0.5
##  • lambda.min.ratio = 0.1

## EGA Results:
## 
## Number of Dimensions:
## [1] 1
## 
## Items per Dimension:
##    items dimension
## C1    C1         1
## C2    C2         1
## C3    C3         1
## C4    C4         1
## C5    C5         1
## C6    C6         1
## C7    C7         1
## C8    C8         1
## T1    T1         1
## T2    T2         1
## T3    T3         1
Confirmatory Factor Analyses
LIMPS.model <- '
F1 =~ APM + B7 + B8 + B9 + B10 + B11
F2 =~ B1 + B2 + B3 + B4 + B5 + B6

gPSY =~ F1 + F2'

LIMPS.fit <- suppressWarnings(cfa(LIMPS.model, sample.cov = LIM.cor, sample.nobs = nLIM, std.lv = T))

LIMPI.model <- '
gPIA =~ C1 + C2 + C4 + C5 + C6 + C8 + T1 + T2 + T3'

LIMPI.fit <- cfa(LIMPI.model, sample.cov = LIM.cor, sample.nobs = nLIM, std.lv = T)

LIMPII.model <- '
F3 =~ C1 + C2 + C4 + C5 + C6 + T1 + T2 + T3
F4 =~ C2 + C3 + C4 + C7 + C8

gPIA =~ F3 + F4'

LIMPII.fit <- suppressWarnings(cfa(LIMPII.model, sample.cov = LIM.cor, sample.nobs = nLIM, std.lv = T))

round(cbind("Lim Psychometric" = fitMeasures(LIMPS.fit, FITM),
            "Lim PiagetianI" = fitMeasures(LIMPI.fit, FITM),
            "Lim PiagetianII" = fitMeasures(LIMPII.fit, FITM)), 3)
##                Lim Psychometric Lim PiagetianI Lim PiagetianII
## chisq                   222.068         31.493          34.630
## df                       52.000         27.000          40.000
## npar                     26.000         18.000          26.000
## cfi                       0.943          0.995           1.000
## rmsea                     0.084          0.019           0.000
## rmsea.ci.lower            0.073          0.000           0.000
## rmsea.ci.upper            0.096          0.043           0.025
## aic                   12817.827      10900.685       13504.646
## bic                   12925.182      10975.008       13612.002
## srmr                      0.055          0.027           0.026
## tli                       0.928          0.993           1.009

C3 and C7 do not load significantly on F4 (nor, of course, on the lower-order Piagetian g). They may be considerably range restricted, or some other problem may be at work.
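
A minimal sketch of how those loadings and their p values can be inspected directly (the column selection is illustrative):

#Hedged sketch: checking the C3 and C7 loadings directly
PE <- parameterEstimates(LIMPII.fit, standardized = TRUE)
PE[PE$op == "=~" & PE$rhs %in% c("C3", "C7"), c("lhs", "rhs", "est", "pvalue", "std.all")]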

LIMTOI.model <- '
F1 =~ APM + B7 + B8 + B9 + B10 + B11
F2 =~ B1 + B2 + B3 + B4 + B5 + B6

gPSY =~ F1 + F2

gPIA =~ C1 + C2 + C4 + C5 + C6 + C8 + T1 + T2 + T3

gPSY ~~ gPIA'

LIMTOII.model <- '
F1 =~ APM + B7 + B8 + B9 + B10 + B11
F2 =~ B1 + B2 + B3 + B4 + B5 + B6

gPSY =~ F1 + F2

F3 =~ C1 + C5 + C6 + T1 + T2 + T3
F4 =~ C2 + C4 + C8

gPIA =~ F3 + F4

gPSY ~~ gPIA'

LIMPTOI.fit <- cfa(LIMTOI.model, sample.cov = LIM.cor, sample.nobs = nLIM, std.lv = T)
LIMPTOII.fit <- cfa(LIMTOII.model, sample.cov = LIM.cor, sample.nobs = nLIM, std.lv = T)

round(cbind("Lim LOP" = fitMeasures(LIMPTOI.fit, FITM),
            "Lim HOP" = fitMeasures(LIMPTOII.fit, FITM)), 3)
##                  Lim LOP   Lim HOP
## chisq            417.261   405.657
## df               186.000   184.000
## npar              45.000    47.000
## cfi                0.945     0.947
## rmsea              0.052     0.051
## rmsea.ci.lower     0.045     0.044
## rmsea.ci.upper     0.059     0.058
## aic            23463.559 23455.954
## bic            23649.366 23650.020
## srmr               0.052     0.052
## tli                0.937     0.939
LIMLAT <- list(
  F1 = c("APM", "B7", "B8", "B9", "B10", "B11"),
  F2 = c("B1", "B2", "B3", "B4", "B5", "B6"),
  F3 = c("C1", "C5", "C6", "T1", "T2", "T3"),
  F4 = c("C2", "C4", "C8"))

semPaths(LIMPTOII.fit, "model", "std", title = F, residuals = F, groups = LIMLAT, pastel = T, mar = c(2, 1, 3, 1), bifactor = c("gPSY", "gPIA"), layout = "tree2", exoCov = T)

The correlation between the higher-order Piagetian g and the higher-order psychometric (AH4 + RAPM) g amounted to r = 0.904 (0.830 - 0.977); with a lower-order Piagetian g, the correlation dropped to 0.870 (0.806 - 0.934). There is clearly considerable construct similarity between the g factor produced by the Piagetian tasks and the one derived from a battery of the RAPM and the AH4, despite the two sets sharing zero content. The former model makes more theoretical sense, as it avoids psychometric sampling variance to a greater extent, and by most measures it fits better, although it is also less parsimonious by two degrees of freedom; it is clearly better for computing bifactor interpretability indices because it possesses orthogonal factors, whereas the model without them obviously does not. For these reasons, that is the model I will use. In the solo models, the B2 and T1 indicators threw negative variances, which I constrained to zero. This was not necessary for the combined model.
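
A minimal sketch of how that correlation and its interval can be pulled from the fitted higher-order model (column names follow lavaan's standardizedSolution output):

#Hedged sketch: the gPSY-gPIA correlation with its confidence interval
SS <- standardizedSolution(LIMPTOII.fit)
SS[SS$lhs == "gPSY" & SS$op == "~~" & SS$rhs == "gPIA", c("est.std", "ci.lower", "ci.upper")]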

The interpretability indices for the solo g factors (psychometric/Piagetian) yielded PUCs of 0.55 and 0.50, H values of 0.90 and 0.82, \(\frac{\omega_h}{\omega_t}\) of 0.74/0.93 and 0.74/0.83, and ECVs of 0.62 and 0.79 (38 and 31% of the total variance). F1, F2, F3, and F4 had H values of 0.77, 0.63, 0.41, and 0.19, \(\frac{\omega_h}{\omega_t}\) of 0.51/0.88, 0.11/0.91, 0.12/0.81, and 0.14/0.54, and ECVs of 0.27, 0.11, 0.15, and 0.06 (17, 7, 6, and 2% of their battery's total variance). In the combined model, the PUC was 0.77, the H value was 0.91, \(\frac{\omega_h}{\omega_t}\) was 0.82/0.93, and the ECV was 0.62 (31% of the total variance). F1, F2, F3, and F4 had H values of 0.71, 0.73, 0.41, and 0.19, \(\frac{\omega_h}{\omega_t}\) of 0.38/0.89, 0.36/0.90, 0.13/0.77, and 0.14/0.55, with ECVs of 0.15, 0.15, 0.05, and 0.02 (8, 7, 3, and 1% of the total variance). The combined AH4 and RAPM battery and the Piagetian scales should both be interpreted at the general-factor level; despite not sharing content, both appear to be quite fluid and decently unidimensional, if small as batteries.

Comparison of Separate and Together Model Bifactor Loadings
AH4TOG <- c(0.591, 0.634, 0.484, 0.608, 0.694, 0.571, 0.541, 0.618, 0.444, 0.538, 0.562, 0.597)
PIATOG <- c(0.423, 0.485, 0.47, 0.599, 0.321, 0.447, 0.615, 0.567, 0.676)
AH4SEPG <- c(0.479, 0.729, 0.64, 0.77, 0.779, 0.699, 0.711, 0.533, 0.408, 0.442, 0.493, 0.525)
PIASEPG <- c(0.476, 0.492, 0.444, 0.632, 0.312, 0.44, 0.717, 0.608, 0.724)
AH4TOS <- c(0.106, 0.282, 0.732, 0.514, 0.304, 0.357, 0.671, 0.482, 0.643, 0.53, 0.612, 0.53)
PIATOS <- c(0.304, 0.271, 0.228, 0.091, 0.021, 0.305, 0.569, 0.229, 0.21)
AH4SEPS <- c(0.251, 0.018, 0.769, 0.244, 0.041, 0.082, 0.41, 0.573, 0.649, 0.619, 0.662, 0.606)
PIASEPS <- c(0.142, 0.266, 0.269, 0.062, 0.034, 0.314, 0.697, 0.063, 0.016)

TOTTOG <- c(0.591, 0.634, 0.484, 0.608, 0.694, 0.571, 0.541, 0.618, 0.444, 0.538, 0.562, 0.597, 0.423, 0.485, 0.47, 0.599, 0.321, 0.447, 0.615, 0.567, 0.676) 
TOTTOS <- c(0.106, 0.282, 0.732, 0.514, 0.304, 0.357, 0.671, 0.482, 0.643, 0.53, 0.612, 0.53, 0.304, 0.271, 0.228, 0.091, 0.021, 0.305, 0.569, 0.229, 0.21) 
TOTSEPG <- c(0.479, 0.729, 0.64, 0.77, 0.779, 0.699, 0.711, 0.533, 0.408, 0.442, 0.493, 0.525, 0.476, 0.492, 0.444, 0.632, 0.312, 0.44, 0.717, 0.608, 0.724) 
TOTSEPS <- c(0.251, 0.018, 0.769, 0.244, 0.041, 0.082, 0.41, 0.573, 0.649, 0.619, 0.662, 0.606, 0.142, 0.266, 0.269, 0.062, 0.034, 0.314, 0.697, 0.063, 0.016)

message(paste("AH4 g Loadings: Together and Apart? Pearson = ", cor(AH4TOG, AH4SEPG), "Spearman = ", cor(AH4TOG, AH4SEPG, method = "spearman"), "Tucker = ", CONGO(AH4TOG, AH4SEPG)))
## AH4 g Loadings: Together and Apart? Pearson =  0.53029450777992 Spearman =  0.643356643356643 Tucker =  0.984053333832553
message(paste("AH4 s Loadings: Together and Apart? Pearson = ", cor(AH4TOS, AH4SEPS), "Spearman = ", cor(AH4TOS, AH4SEPS, method = "spearman"), "Tucker = ", CONGO(AH4TOS, AH4SEPS)))
## AH4 s Loadings: Together and Apart? Pearson =  0.762463687366407 Spearman =  0.795097541564023 Tucker =  0.933907293128879
message(paste("PIA g Loadings: Together and Apart? Pearson = ", cor(PIATOG, PIASEPG), "Spearman = ", cor(PIATOG, PIASEPG, method = "spearman"), "Tucker = ", CONGO(PIATOG, PIASEPG)))
## PIA g Loadings: Together and Apart? Pearson =  0.974794817307408 Spearman =  0.95 Tucker =  0.998102225087747
message(paste("PIA s Loadings: Together and Apart? Pearson = ", cor(PIATOS, PIASEPS), "Spearman = ", cor(PIATOS, PIASEPS, method = "spearman"), "Tucker = ", CONGO(PIATOS, PIASEPS)))
## PIA s Loadings: Together and Apart? Pearson =  0.876963263898761 Spearman =  0.833333333333333 Tucker =  0.926157608435148
message(paste("Total g Loadings: Together and Apart? Pearson = ", cor(TOTTOG, TOTSEPG), "Spearman = ", cor(TOTTOG, TOTSEPG, method = "spearman"), "Tucker = ", CONGO(TOTTOG, TOTSEPG)))
## Total g Loadings: Together and Apart? Pearson =  0.770700715624018 Spearman =  0.815584415584416 Tucker =  0.989386641210639
message(paste("Total s Loadings: Together and Apart? Pearson = ", cor(TOTTOS, TOTSEPS), "Spearman = ", cor(TOTTOS, TOTSEPS, method = "spearman"), "Tucker = ", CONGO(TOTTOS, TOTSEPS)))
## Total s Loadings: Together and Apart? Pearson =  0.826744119009557 Spearman =  0.7972711234714 Tucker =  0.93204619130534

Loadings were overall very stable, and the g loadings were congruent and highly, though not extremely, correlated in this battery. Next, I checked the psychometric battery's loadings from the solo higher-order, combined higher-order, and bifactor models to showcase a point about identification.

AH4HOFSEP <- c(0.387, 0.519, 0.622, 0.626, 0.558, 0.522, 0.632, 0.602, 0.573, 0.578, 0.627, 0.619)
AH4HOFTOG <- c(0.386, 0.585, 0.545, 0.558, 0.604, 0.597, 0.509, 0.591, 0.599, 0.547, 0.509, 0.605)

cor(AH4HOFSEP, AH4HOFTOG); cor(AH4HOFSEP, AH4HOFTOG, method = "spearman"); CONGO(AH4HOFSEP, AH4HOFTOG)
## [1] 0.504
## [1] -0.249
## [1] 0.994
cor(AH4HOFSEP, AH4TOG); cor(AH4HOFSEP, AH4TOG, method = "spearman"); CONGO(AH4HOFSEP, AH4TOG)
## [1] -0.213
## [1] -0.308
## [1] 0.984
cor(AH4HOFSEP, AH4SEPG); cor(AH4HOFSEP, AH4SEPG, method = "spearman"); CONGO(AH4HOFSEP, AH4SEPG)
## [1] 0.128
## [1] 0.0839
## [1] 0.974
cor(AH4HOFTOG, AH4SEPG); cor(AH4HOFTOG, AH4SEPG, method = "spearman"); CONGO(AH4HOFTOG, AH4SEPG)
## [1] 0.255
## [1] 0.172
## [1] 0.978
cor(AH4HOFTOG, AH4TOG); cor(AH4HOFTOG, AH4TOG, method = "spearman"); CONGO(AH4HOFTOG, AH4TOG)
## [1] 0.108
## [1] 0.34
## [1] 0.989
TVF(AH4HOFSEP); TVF(AH4HOFTOG); TVF(AH4TOG); TVF(AH4SEPG)
## [1] 0.332
## [1] 0.309
## [1] 0.333
## [1] 0.377

Loadings were quite congruent and total factor variance associated with g was reasonably consistent.

Discussion

The paper by Decker et al. (2020) ranged in position from confused and ignorant to irrelevant to what might reasonably be considered outright well poisoning by rank charlatans. There is very little of value in their paper; charitably, it is a display of ignorance, but without that charity, it is a knowing failure to account for contradictory evidence, coupled with methodological impropriety. Nothing they have said contradicts common practices for defining the dimensionality of tests, nor for how interpretability indices are understood. They clearly do not understand factor analysis and made elementary mistakes, so why their opinion should matter on this topic is beyond me.

The Lim (1988) data support the existence of a singular g shared between tests covering widely different content and without reference to cultural content. The unidimensionality the tests showcase may represent something shared despite the tests having neither common content nor similar information-transformation requirements. For Piagetian tasks to be cultural would require some interesting arguments, but if such arguments exist, I will gladly humor them. I have achieved similar results with batteries composed of what are plausibly elementary cognitive tasks, which makes a common g (as factor or variance) permeating all tests extremely likely and thus not worth ignoring.

References

Decker, S. L., Bridges, R. M., Luedke, J. C., & Eason, M. J. (2020). Dimensional Evaluation of Cognitive Measures: Methodological Confounds and Theoretical Concerns. Journal of Psychoeducational Assessment. https://doi.org/10.1177/0734282920940879

Hood, S. B. (2010). Latent Variable Realism in Psychometrics. https://scholarworks.iu.edu/dspace/handle/2022/8120

Beaujean, A. A. (2015). John Carroll's Views on Intelligence: Bi-Factor vs. Higher-Order Models. Journal of Intelligence, 3(4), 121-136. https://doi.org/10.3390/jintelligence3040121

Shikishima, C., Hiraishi, K., Yamagata, S., Sugimoto, Y., Takemura, R., Ozaki, K., Okada, M., Toda, T., & Ando, J. (2009). Is g an entity? A Japanese twin study using syllogisms and intelligence tests. Intelligence, 37(3), 256-267. https://doi.org/10.1016/j.intell.2008.10.010

Panizzon, M. S., Vuoksimaa, E., Spoon, K. M., Jacobson, K. C., Lyons, M. J., Franz, C. E., Xian, H., Vasilopoulos, T., & Kremen, W. S. (2014). Genetic and Environmental Influences of General Cognitive Ability: Is g a valid latent construct? Intelligence, 43, 65-76. https://doi.org/10.1016/j.intell.2014.01.008

Jensen, A. R. (1987). The "g" beyond factor analysis. In The influence of cognitive psychology on testing (pp. 87-142). Lawrence Erlbaum Associates, Inc.

Franic, S., Dolan, C. V., Borsboom, D., Hudziak, J. J., van Beijsterveldt, C. E. M., & Boomsma, D. I. (2013). Can genetics help psychometrics? Improving dimensionality assessment through genetic factor modeling. Psychological Methods, 18(3), 406-433. https://doi.org/10.1037/a0032755

Lim, T. K. (1988). Relationships between standardized psychometric and Piagetian measures of intelligence at the formal operations level. Intelligence, 12(2), 167-182. https://doi.org/10.1016/0160-2896(88)90014-1

Watkins, M. W. (2000). Cognitive profile analysis: A shared professional myth. School Psychology Quarterly, 15(4), 465-479. https://doi.org/10.1037/h0088802

Watkins, M. W. (2003). IQ Subtest Analysis: Clinical Acumen or Clinical Illusion? The Scientific Review of Mental Health Practice: Objective Investigations of Controversial and Unorthodox Claims in Clinical Psychology, Psychiatry, and Social Work, 2(2), 118-141.

McGill, R. J., Dombrowski, S. C., & Canivez, G. L. (2018). Cognitive profile analysis in school psychology: History, issues, and continued concerns. Journal of School Psychology, 71, 108-121. https://doi.org/10.1016/j.jsp.2018.10.007

Gignac, G. E., & Watkins, M. W. (2013). Bifactor Modeling and the Estimation of Model-Based Reliability in the WAIS-IV. Multivariate Behavioral Research, 48(5), 639-662. https://doi.org/10.1080/00273171.2013.804398

Jensen, A. R. (2001). Vocabulary and general intelligence. Behavioral and Brain Sciences, 24(6), 1109-1110. https://doi.org/10.1017/S0140525X01280133

Thorndike, R. L. (1987). Stability of factor loadings. Personality and Individual Differences, 8(4), 585-586. https://doi.org/10.1016/0191-8869(87)90224-8

Floyd, R. G., Shands, E. I., Rafael, F. A., Bergeron, R., & McGrew, K. S. (2009). The dependability of general-factor loadings: The effects of factor-extraction methods, test battery composition, test battery size, and their interactions. Intelligence, 37(5), 453-465. https://doi.org/10.1016/j.intell.2009.05.003

Major, J. T., Johnson, W., & Bouchard, T. J. (2011). The dependability of the general factor of intelligence: Why small, single-factor models do not adequately represent g. Intelligence, 39(5), 418-433. https://doi.org/10.1016/j.intell.2011.07.002

Ruscio, J., & Roche, B. (2012). Determining the number of factors to retain in an exploratory factor analysis using comparison data of known factorial structure. Psychological Assessment, 24(2), 282-292. https://doi.org/10.1037/a0025697

Yoon, M., & Millsap, R. E. (2007). Detecting Violations of Factorial Invariance Using Data-Based Specification Searches: A Monte Carlo Study. Structural Equation Modeling: A Multidisciplinary Journal, 14(3), 435-463. https://doi.org/10.1080/10705510701301677

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272-299.

Warne, R. T., & Larsen, R. (2014). Evaluating a proposed modification of the Guttman rule for determining the number of factors in an exploratory factor analysis. Psychological Test and Assessment Modeling, 56(1), 104-123.

Keith, T. Z., Caemmerer, J. M., & Reynolds, M. R. (2016). Comparison of methods for factor extraction for cognitive test-like data: Which overfactor, which underfactor? Intelligence, 54, 37-54. https://doi.org/10.1016/j.intell.2015.11.003

Golino, H. F., & Epskamp, S. (2017). Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research. PLoS ONE, 12(6).

Golino, H. F., & Demetriou, A. (2017). Estimating the dimensionality of intelligence like data using Exploratory Graph Analysis. Intelligence, 62, 54-70. https://doi.org/10.1016/j.intell.2017.02.007

Del Giudice, M. (2020). Effective Dimensionality: A Tutorial. Multivariate Behavioral Research, 0(0), 1-16. https://doi.org/10.1080/00273171.2020.1743631

Revelle, W., & Wilt, J. (2013). The General Factor of Personality: A General Critique. Journal of Research in Personality, 47(5), 493-504. https://doi.org/10.1016/j.jrp.2013.04.012

Naaman, M. (2016). Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox. Electronic Journal of Statistics, 10(1), 1526-1550. https://doi.org/10.1214/16-EJS1146

Postscript: Explaining Factor Correlations Between Different Tiers on a Hierarchy of Factor Extractions

Goldberg (2006) proposed a method for exploring factor hierarchies from the top down by extracting a component (or factor) and correlating it with the factors from an extraction of two, then three, and so on. There are different use cases and the desired models can be defined differently, but the method, used in the common way, should be edifying for Decker et al., since they seem to have been confused about how factors from extractions with different numbers of factors relate to one another. I will show this for the orthogonal and oblique cases; the orthogonal one should be shocking to Decker et al. Waller (2007) showed that the correlations between the orthogonal components at different levels are given by

\[r(V_i, V_j) = T'_iST_j\]

where S is a selection matrix with an added right column of zeroes. The following PCA uses varimax and the EFA uses minres.

# Seven components and then factors

bassAckward(SLBF.cor, nfactors = 7, fm = "pca", organize = T, cut = 0.3, main = "SLBF, PCA, 7"); bassAckward(SLBF.cor, nfactors = 7, organize = T, cut = 0.3, main = "SLBF, EFA, 7")

## 
## Call: bassAckward(r = SLBF.cor, nfactors = 7, fm = "pca", cut = 0.3, 
##     organize = T, main = "SLBF, PCA, 7")
## 
##  1 C1
##  2 C1
##  3 C1 C2
##  4 C1 C2 C4
##  5 C1 C2 C3 C5
##  6 C1 C2 C4 C3 C5
##  7 C3 C2 C4 C6 C7 C5
## Use print with the short = FALSE option to see the correlations, or use the summary command.

## 
## Call: bassAckward(r = SLBF.cor, nfactors = 7, cut = 0.3, organize = T, 
##     main = "SLBF, EFA, 7")
## 
##  1 F1
##  2 F1
##  3 F1 F3
##  4 F1 F2 F3
##  5 F1 F2 F3 F5
##  6 F1 F3 F2 F5 F6
##  7 F1 F2 F3 F4 F5 F6
## Use print with the short = FALSE option to see the correlations, or use the summary command.
# A more reasonable five factors and four for Lim's matrix

bassAckward(SLBF.cor, nfactors = 5, organize = T, cut = 0.3, fm = "pca", main = "SLBF, PCA, 5"); bassAckward(SLBF.cor, nfactors = 5, organize = T, cut = 0.3, main = "SLBF, EFA, 5")

## 
## Call: bassAckward(r = SLBF.cor, nfactors = 5, fm = "pca", cut = 0.3, 
##     organize = T, main = "SLBF, PCA, 5")
## 
##  1 C1
##  2 C1
##  3 C1 C2
##  4 C1 C2 C4
##  5 C1 C2 C3 C5
## Use print with the short = FALSE option to see the correlations, or use the summary command.

## 
## Call: bassAckward(r = SLBF.cor, nfactors = 5, cut = 0.3, organize = T, 
##     main = "SLBF, EFA, 5")
## 
##  1 F1
##  2 F1
##  3 F1 F3
##  4 F1 F2 F3
##  5 F1 F2 F3 F5
## Use print with the short = FALSE option to see the correlations, or use the summary command.
bassAckward(LIM.cor, nfactors = 4, organize = T, cut = 0.3, fm = "pca", main = "LIM, PCA, 4"); bassAckward(LIM.cor, nfactors = 4, organize = T, cut = 0.3, main = "LIM, EFA, 4")

## 
## Call: bassAckward(r = LIM.cor, nfactors = 4, fm = "pca", cut = 0.3, 
##     organize = T, main = "LIM, PCA, 4")
## 
##  1 C1
##  2 C1
##  3 C1 C2
##  4 C1 C2 C3
## Use print with the short = FALSE option to see the correlations, or use the summary command.

## 
## Call: bassAckward(r = LIM.cor, nfactors = 4, cut = 0.3, organize = T, 
##     main = "LIM, EFA, 4")
## 
##  1 F1
##  2 F1
##  3 F1 F2
##  4 F3 F1 F2
## Use print with the short = FALSE option to see the correlations, or use the summary command.

Notice that when any number of factors or components is extracted, they correlate across levels. These coefficients are not path coefficients; they are, instead, more similar to (though not exactly) part-whole correlations. If g were simply always the first component or factor whenever a number of them are extracted to maximize the variance explained in some matrix, then we should not see the first level of this hierarchy turn out as it does, with the "g" correlating with both the first and second extracted objects. It is really troubling that neither the editors, nor the reviewers, nor indeed the authors noticed that what they were saying was patently nonsensical.

Worth noting is an additional obvious point: just because Jensen did it doesn't make it true. What I am referring to is the fact that Jensen often equated g with the first principal component or factor of a symmetric positive definite matrix. All such matrices have a first component or factor, but this obviously does not mean they have a g factor. EFA does not, in principle, produce estimated scores of anything with a determinate identity, whereas PCA does; the two are typically highly correlated, especially for cognitive data, because of the typically high and universally positive (when all correlations are reflected to indicate better performance) relationships, but this does not mean that either equates to g (Revelle & Wilt, 2013). In any case, taking this incorrect interpretation of PCA/EFA seriously, look at the one- and two-component solutions for the Lim matrix. In the former, the variance explained is 35% and in the latter, 100%; the sum of squared loadings averages 4.9 in the latter case and is 7.95 in the former. The average correlation of the two components is still r = 0.865; for the SLBF matrix, the average at this level of the hierarchy is 0.805. Were g always and solely the first component regardless of the number extracted, this would not be possible. Obviously, I did not need to state this so many times and could simply have provided the formulae for these methods, but I have found that simply telling someone they are wrong does not always push the point home unless it comes with examples.
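
The cross-level correlations summarized above can be inspected directly by saving a bassAckward object and printing it in long form, as its own output suggests; a minimal sketch:

#Hedged sketch: viewing the correlations between adjacent levels of the hierarchy
BA5 <- bassAckward(SLBF.cor, nfactors = 5, fm = "pca", cut = 0.3, plot = FALSE)
print(BA5, short = FALSE)   #full cross-level correlation matrices
summary(BA5)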

Postscript References

Goldberg, L. R. (2006). Doing it all Bass-Ackwards: The development of hierarchical factor structures from the top down. Journal of Research in Personality, 40(4), 347-358. https://doi.org/10.1016/j.jrp.2006.01.001

Waller, N. (2007). A general method for computing hierarchical component structures by Goldberg's Bass-Ackwards method. Journal of Research in Personality, 41(4), 745-752. https://doi.org/10.1016/j.jrp.2006.08.005