library(pacman)
p_load(lavaan, DT, psych, semPlot)
OSBO <- read.csv("OsborneUmx.csv")
OSBOA <- read.csv("OsboAlt.csv")
NMTP <- read.csv("nmgen.csv")
GFIT <- c("logl", "df", "npar", "aic", "bic", "chisq", "pvalue", "tli", "rmsea")
FITM <- c("chisq", "df", "npar", "cfi", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "aic", "bic", "srmr") # fit indices used in the invariance comparisons below
Common and independent pathway models allow behavior geneticists and psychometricians to assess the heritability and environmentality of latent constructs and to provide evidence for their existence. They also make it possible to test whether there are really likely to be "genes for" a given psychological construct. This is discussed in depth in various publications (see Shikishima et al., 2009; Franic, Dolan et al., 2013; Franic, Borsboom et al., 2013; Panizzon et al., 2014; if an independent pathway model fits best, assessing the relationship between specific variance components and g or other factors may also be interesting). I will be using wide-format data from Osborne's Georgia Twin Study and Loehlin & Nichols' National Merit Twin Study. In the future, I will use the NLSY sibling links. Loehlin & Nichols' data can be accessed here: https://dataverse.harvard.edu/dataset.xhtml?persistentId=hdl:1902.1/13913. Osborne's data (given to me by Hu, who got it from Dolan, who got it from Jensen, who got it from Osborne) can be downloaded below:
datatable(OSBO, extensions = c("Buttons", "FixedColumns"), options = list(dom = 'Bfrtip', buttons = c('copy', 'csv', 'print'), scrollX = T, fixedColumns = list(leftColumns = 3)))
datatable(OSBOA, extensions = c("Buttons", "FixedColumns"), options = list(dom = 'Bfrtip', buttons = c('copy', 'csv', 'print'), scrollX = T, fixedColumns = list(leftColumns = 3)))
A Cholesky decomposition of the phenotypic covariance matrix into additive genetic, shared environmental and/or dominance, and unshared environmental matrices should precede a common/independent pathway comparison so that exploratory factor analyses can assess the structural comparability of these variance components. Because I am using data with only four to six variables, I have skipped this step. I have also avoided model pruning even though some models clearly call for it. Something worth noting: if researchers do prune their models, they should present them with the unrestricted p values to avoid overfitting (Goldberger, 2002); at that point, accounting for parsimony is doubly important. I will not be fitting E-only models or models without E, since neither makes sense. To obtain the A, C, and E variance components, square the standardized loadings (A, C, E) from summary(fit, stand = T), and add the residual variances to E if the model parameterizes them that way. Make sure to follow path-tracing rules for the common pathway models.
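To make this concrete, below is a minimal sketch of how the standardized output can be turned into variance shares for a common pathway ACE model like the ones fitted later on this page; the object name (PMACPACE) and the labels a, d, q, and gl1 through gl4 match the model syntax used here, but treat it as a template rather than as part of the analysis.
# Sketch: variance shares from a fitted common pathway ACE model (e.g., PMACPACE below)
PE <- parameterEstimates(PMACPACE, standardized = TRUE)
a_g <- PE$std.all[PE$label == "a"][1] # standardized A loading on the common factor g (first group)
c_g <- PE$std.all[PE$label == "d"][1] # standardized C loading on g (labelled d in the syntax)
e_g <- PE$std.all[PE$label == "q"][1] # standardized E loading on g (labelled q in the syntax)
gl <- PE$std.all[match(c("gl1", "gl2", "gl3", "gl4"), PE$label)] # g loadings of the four tests
# Path tracing: a test's A share via the common pathway is its squared g loading times a_g squared, etc.
A_common <- gl^2 * a_g^2
C_common <- gl^2 * c_g^2
E_common <- gl^2 * e_g^2
# The specific components (labels a1-a4, d1-d4, e1-e4) are squared and added in the same way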
If you wish to use this code or a modification of it, cite this page, unless the related preprint or paper has come online, in which case cite the latest version of that instead.
First, I want the phenotypic factor loadings, and I want to check whether the tests worked the same way for both members of each twin pair.
# Phenotypic Factor Models
## Twin 1 and 2 Invariance
OSTALL <- read.csv("Osbornetall.csv")
NMTALL <- read.csv("NMTTall.csv")
OSTATALL <- read.csv("OsboAltTall.csv")
OT <- '
g =~ pmanum + pmaverb + pmarea + pmaspa'
OTA <- '
g =~ faces + logical + socperc + cancal + crdrot + ship'
NT <- '
g =~ ENG + MAT + SOC + NAT + VOC'
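# Invariance ladder across twin 1 and twin 2 (grouping variable "nr"): configural (same pattern),
# metric (equal loadings), scalar (+ equal intercepts), strict (+ equal residuals),
# and a final strict model re-fitted with standardized latent variances (std.lv = T)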
GTC <- cfa(OT, data = OSTALL, group = "nr", std.lv = T)
GTM <- cfa(OT, data = OSTALL, group = "nr", std.lv = F, group.equal = "loadings")
GTS <- cfa(OT, data = OSTALL, group = "nr", std.lv = F, group.equal = c("loadings", "intercepts"))
GTR <- cfa(OT, data = OSTALL, group = "nr", std.lv = F, group.equal = c("loadings", "intercepts", "residuals"))
GTL <- cfa(OT, data = OSTALL, group = "nr", std.lv = T, group.equal = c("loadings", "intercepts", "residuals"))
round(cbind("Configural" = fitMeasures(GTC, FITM),
"Metric" = fitMeasures(GTM, FITM),
"Scalar" = fitMeasures(GTS, FITM),
"Strict" = fitMeasures(GTR, FITM),
"Variances" = fitMeasures(GTL, FITM)), 3)
## Configural Metric Scalar Strict Variances
## chisq 2.214 7.714 10.782 13.535 13.535
## df 4.000 7.000 10.000 14.000 14.000
## npar 24.000 21.000 18.000 14.000 14.000
## cfi 1.000 0.999 0.999 1.000 1.000
## rmsea 0.000 0.023 0.020 0.000 0.000
## rmsea.ci.lower 0.000 0.000 0.000 0.000 0.000
## rmsea.ci.upper 0.083 0.094 0.083 0.068 0.068
## aic 12088.775 12088.275 12085.343 12080.097 12080.097
## bic 12183.339 12171.019 12156.267 12135.259 12135.259
## srmr 0.009 0.040 0.043 0.044 0.044
GTC <- cfa(OTA, data = OSTATALL, group = "nr", std.lv = T)
GTM <- cfa(OTA, data = OSTATALL, group = "nr", std.lv = F, group.equal = "loadings")
GTS <- cfa(OTA, data = OSTATALL, group = "nr", std.lv = F, group.equal = c("loadings", "intercepts"))
GTR <- cfa(OTA, data = OSTATALL, group = "nr", std.lv = F, group.equal = c("loadings", "intercepts", "residuals"))
GTL <- cfa(OTA, data = OSTATALL, group = "nr", std.lv = T, group.equal = c("loadings", "intercepts", "residuals"))
round(cbind("Configural" = fitMeasures(GTC, FITM),
"Metric" = fitMeasures(GTM, FITM),
"Scalar" = fitMeasures(GTS, FITM),
"Strict" = fitMeasures(GTR, FITM),
"Variances" = fitMeasures(GTL, FITM)), 3)
## Configural Metric Scalar Strict Variances
## chisq 39.634 41.324 44.445 47.610 47.610
## df 18.000 23.000 28.000 34.000 34.000
## npar 36.000 31.000 26.000 20.000 20.000
## cfi 0.972 0.977 0.979 0.983 0.983
## rmsea 0.066 0.054 0.046 0.038 0.038
## rmsea.ci.lower 0.038 0.025 0.017 0.000 0.000
## rmsea.ci.upper 0.094 0.079 0.070 0.062 0.062
## aic 24915.939 24907.629 24900.749 24891.914 24891.914
## bic 25071.486 25041.572 25013.089 24978.329 24978.329
## srmr 0.033 0.036 0.038 0.039 0.039
GTC <- cfa(NT, data = NMTALL, group = "nr", std.lv = T)
GTM <- cfa(NT, data = NMTALL, group = "nr", std.lv = F, group.equal = "loadings")
GTS <- cfa(NT, data = NMTALL, group = "nr", std.lv = F, group.equal = c("loadings", "intercepts"))
GTR <- cfa(NT, data = NMTALL, group = "nr", std.lv = F, group.equal = c("loadings", "intercepts", "residuals"))
GTL <- cfa(NT, data = NMTALL, group = "nr", std.lv = T, group.equal = c("loadings", "intercepts", "residuals"))
round(cbind("Configural" = fitMeasures(GTC, FITM),
"Metric" = fitMeasures(GTM, FITM),
"Scalar" = fitMeasures(GTS, FITM),
"Strict" = fitMeasures(GTR, FITM),
"Variances" = fitMeasures(GTL, FITM)), 3)
## Configural Metric Scalar Strict Variances
## chisq 217.101 222.760 223.367 231.204 231.204
## df 10.000 14.000 18.000 23.000 23.000
## npar 30.000 26.000 22.000 17.000 17.000
## cfi 0.958 0.957 0.958 0.957 0.957
## rmsea 0.157 0.133 0.117 0.104 0.104
## rmsea.ci.lower 0.139 0.118 0.103 0.092 0.092
## rmsea.ci.upper 0.176 0.149 0.131 0.116 0.116
## aic 46931.210 46928.870 46921.476 46919.313 46919.313
## bic 47093.971 47069.929 47040.834 47011.544 47011.544
## srmr 0.032 0.036 0.037 0.037 0.037
# Common Pathway Models
PMACPACE <- '
# Factors and Loadings
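# Common pathway structure: each twin's tests load on a latent phenotypic factor (g1, g2),
# and A, C, and E act on g (ag, cg, eg) as well as on each test's specific variance (AS, CS, ES)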
g1 =~ NA * pmanum1 + c(gl1, gl1) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(gl2, gl2) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(gl3, gl3) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(gl4, gl4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
g2 =~ NA * pmanum2 + c(gl1, gl1) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(gl2, gl2) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(gl3, gl3) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(gl4, gl4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
cg1 =~ NA * g1 + c(d, d) * g1
cg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * pmanum1 + c(a1, a1) * pmanum1 + c(0.5, 0.5) * pmanum1
AS2 =~ NA * pmanum2 + c(a1, a1) * pmanum2 + c(0.5, 0.5) * pmanum2
CS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
CS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
ES1 =~ NA * pmanum1 + c(e1, e1) * pmanum1
ES2 =~ NA * pmanum2 + c(e1, e1) * pmanum2
AS3 =~ NA * pmaverb1 + c(a2, a2) * pmaverb1 + c(0.5, 0.5) * pmaverb1
AS4 =~ NA * pmaverb2 + c(a2, a2) * pmaverb2 + c(0.5, 0.5) * pmaverb2
CS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
CS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
ES3 =~ NA * pmaverb1 + c(e2, e2) * pmaverb1
ES4 =~ NA * pmaverb2 + c(e2, e2) * pmaverb2
AS5 =~ NA * pmarea1 + c(a3, a3) * pmarea1 + c(0.5, 0.5) * pmarea1
AS6 =~ NA * pmarea2 + c(a3, a3) * pmarea2 + c(0.5, 0.5) * pmarea2
CS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
CS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
ES5 =~ NA * pmarea1 + c(e3, e3) * pmarea1
ES6 =~ NA * pmarea2 + c(e3, e3) * pmarea2
AS7 =~ NA * pmaspa1 + c(a4, a4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AS8 =~ NA * pmaspa2 + c(a4, a4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
CS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
CS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
ES7 =~ NA * pmaspa1 + c(e4, e4) * pmaspa1
ES8 =~ NA * pmaspa2 + c(e4, e4) * pmaspa2
# Covariances
# Much of this section is redundant given orthogonal = T (without that argument it would need to be longer still), but I included it anyway
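# Twin covariance constraints: A correlates 1 in MZ (group 1) and 0.5 in DZ (group 2) pairs, C correlates 1 in both, and E is uncorrelated between twins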
ag1 ~~ c(1, 0.5) * ag2
ag1 ~~ 0 * cg1 + 0 * cg2
ag2 ~~ 0 * cg1 + 0 * cg2
cg1 ~~ c(1, 1) * cg2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * CS1 + 0 * CS2
AS2 ~~ 0 * CS1 + 0 * CS2
CS1 ~~ c(1, 1) * CS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * CS3 + 0 * CS4
AS4 ~~ 0 * CS3 + 0 * CS4
CS3 ~~ c(1, 1) * CS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * CS5 + 0 * CS6
AS6 ~~ 0 * CS5 + 0 * CS6
CS5 ~~ c(1, 1) * CS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * CS7 + 0 * CS8
AS8 ~~ 0 * CS7 + 0 * CS8
CS7 ~~ c(1, 1) * CS8
ES7 ~~ 0 * ES8
# Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
cg1 ~~ 1 * cg1
cg2 ~~ 1 * cg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
pmanum1 ~~ 0 * pmanum1
pmanum2 ~~ 0 * pmanum2
pmaverb1 ~~ 0 * pmaverb1
pmaverb2 ~~ 0 * pmaverb2
pmarea1 ~~ 0 * pmarea1
pmarea2 ~~ 0 * pmarea2
pmaspa1 ~~ 0 * pmaspa1
pmaspa2 ~~ 0 * pmaspa2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
PMACPACE <- sem(PMACPACE, data = OSBO, group = "zygf", orthogonal = T)
PMACPADE <- '
# Factors and Loadings
g1 =~ NA * pmanum1 + c(gl1, gl1) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(gl2, gl2) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(gl3, gl3) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(gl4, gl4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
g2 =~ NA * pmanum2 + c(gl1, gl1) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(gl2, gl2) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(gl3, gl3) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(gl4, gl4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
dg1 =~ NA * g1 + c(d, d) * g1
dg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * pmanum1 + c(a1, a1) * pmanum1 + c(0.5, 0.5) * pmanum1
AS2 =~ NA * pmanum2 + c(a1, a1) * pmanum2 + c(0.5, 0.5) * pmanum2
DS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
DS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
ES1 =~ NA * pmanum1 + c(e1, e1) * pmanum1
ES2 =~ NA * pmanum2 + c(e1, e1) * pmanum2
AS3 =~ NA * pmaverb1 + c(a2, a2) * pmaverb1 + c(0.5, 0.5) * pmaverb1
AS4 =~ NA * pmaverb2 + c(a2, a2) * pmaverb2 + c(0.5, 0.5) * pmaverb2
DS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
DS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
ES3 =~ NA * pmaverb1 + c(e2, e2) * pmaverb1
ES4 =~ NA * pmaverb2 + c(e2, e2) * pmaverb2
AS5 =~ NA * pmarea1 + c(a3, a3) * pmarea1 + c(0.5, 0.5) * pmarea1
AS6 =~ NA * pmarea2 + c(a3, a3) * pmarea2 + c(0.5, 0.5) * pmarea2
DS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
DS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
ES5 =~ NA * pmarea1 + c(e3, e3) * pmarea1
ES6 =~ NA * pmarea2 + c(e3, e3) * pmarea2
AS7 =~ NA * pmaspa1 + c(a4, a4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AS8 =~ NA * pmaspa2 + c(a4, a4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
DS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
DS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
ES7 =~ NA * pmaspa1 + c(e4, e4) * pmaspa1
ES8 =~ NA * pmaspa2 + c(e4, e4) * pmaspa2
# Covariances
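# For the dominance (D) components, the expected twin correlation is 1 in MZ and 0.25 in DZ pairs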
ag1 ~~ c(1, 0.5) * ag2
ag1 ~~ 0 * dg1 + 0 * dg2
ag2 ~~ 0 * dg1 + 0 * dg2
dg1 ~~ c(1, 0.25) * dg2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * DS1 + 0 * DS2
AS2 ~~ 0 * DS1 + 0 * DS2
DS1 ~~ c(1, 0.25) * DS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * DS3 + 0 * DS4
AS4 ~~ 0 * DS3 + 0 * DS4
DS3 ~~ c(1, 0.25) * DS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * DS5 + 0 * DS6
AS6 ~~ 0 * DS5 + 0 * DS6
DS5 ~~ c(1, 0.25) * DS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * DS7 + 0 * DS8
AS8 ~~ 0 * DS7 + 0 * DS8
DS7 ~~ c(1, 0.25) * DS8
ES7 ~~ 0 * ES8
# Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
dg1 ~~ 1 * dg1
dg2 ~~ 1 * dg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
pmanum1 ~~ 0 * pmanum1
pmanum2 ~~ 0 * pmanum2
pmaverb1 ~~ 0 * pmaverb1
pmaverb2 ~~ 0 * pmaverb2
pmarea1 ~~ 0 * pmarea1
pmarea2 ~~ 0 * pmarea2
pmaspa1 ~~ 0 * pmaspa1
pmaspa2 ~~ 0 * pmaspa2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
PMACPADE <- sem(PMACPADE, data = OSBO, group = "zygf", orthogonal = T)
PMACPAE <- '
# Factors and Loadings
g1 =~ NA * pmanum1 + c(gl1, gl1) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(gl2, gl2) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(gl3, gl3) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(gl4, gl4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
g2 =~ NA * pmanum2 + c(gl1, gl1) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(gl2, gl2) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(gl3, gl3) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(gl4, gl4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * pmanum1 + c(a1, a1) * pmanum1 + c(0.5, 0.5) * pmanum1
AS2 =~ NA * pmanum2 + c(a1, a1) * pmanum2 + c(0.5, 0.5) * pmanum2
ES1 =~ NA * pmanum1 + c(e1, e1) * pmanum1
ES2 =~ NA * pmanum2 + c(e1, e1) * pmanum2
AS3 =~ NA * pmaverb1 + c(a2, a2) * pmaverb1 + c(0.5, 0.5) * pmaverb1
AS4 =~ NA * pmaverb2 + c(a2, a2) * pmaverb2 + c(0.5, 0.5) * pmaverb2
ES3 =~ NA * pmaverb1 + c(e2, e2) * pmaverb1
ES4 =~ NA * pmaverb2 + c(e2, e2) * pmaverb2
AS5 =~ NA * pmarea1 + c(a3, a3) * pmarea1 + c(0.5, 0.5) * pmarea1
AS6 =~ NA * pmarea2 + c(a3, a3) * pmarea2 + c(0.5, 0.5) * pmarea2
ES5 =~ NA * pmarea1 + c(e3, e3) * pmarea1
ES6 =~ NA * pmarea2 + c(e3, e3) * pmarea2
AS7 =~ NA * pmaspa1 + c(a4, a4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AS8 =~ NA * pmaspa2 + c(a4, a4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
ES7 =~ NA * pmaspa1 + c(e4, e4) * pmaspa1
ES8 =~ NA * pmaspa2 + c(e4, e4) * pmaspa2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
ES7 ~~ 0 * ES8
# Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
pmanum1 ~~ 0 * pmanum1
pmanum2 ~~ 0 * pmanum2
pmaverb1 ~~ 0 * pmaverb1
pmaverb2 ~~ 0 * pmaverb2
pmarea1 ~~ 0 * pmarea1
pmarea2 ~~ 0 * pmarea2
pmaspa1 ~~ 0 * pmaspa1
pmaspa2 ~~ 0 * pmaspa2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
PMACPAE <- sem(PMACPAE, data = OSBO, group = "zygf", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 3.549457e-13) is close to zero. This may be a symptom that the
## model is not identified.
PMACPCE <- '
# Factors and Loadings
g1 =~ NA * pmanum1 + c(gl1, gl1) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(gl2, gl2) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(gl3, gl3) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(gl4, gl4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
g2 =~ NA * pmanum2 + c(gl1, gl1) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(gl2, gl2) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(gl3, gl3) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(gl4, gl4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
cg1 =~ NA * g1 + c(d, d) * g1
cg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
CS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
CS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
ES1 =~ NA * pmanum1 + c(e1, e1) * pmanum1
ES2 =~ NA * pmanum2 + c(e1, e1) * pmanum2
CS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
CS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
ES3 =~ NA * pmaverb1 + c(e2, e2) * pmaverb1
ES4 =~ NA * pmaverb2 + c(e2, e2) * pmaverb2
CS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
CS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
ES5 =~ NA * pmarea1 + c(e3, e3) * pmarea1
ES6 =~ NA * pmarea2 + c(e3, e3) * pmarea2
CS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
CS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
ES7 =~ NA * pmaspa1 + c(e4, e4) * pmaspa1
ES8 =~ NA * pmaspa2 + c(e4, e4) * pmaspa2
# Covariances
cg1 ~~ c(1, 1) * cg2
eg1 ~~ 0 * eg2
CS1 ~~ c(1, 1) * CS2
ES1 ~~ 0 * ES2
CS3 ~~ c(1, 1) * CS4
ES3 ~~ 0 * ES4
CS5 ~~ c(1, 1) * CS6
ES5 ~~ 0 * ES6
CS7 ~~ c(1, 1) * CS8
ES7 ~~ 0 * ES8
# Variances
cg1 ~~ 1 * cg1
cg2 ~~ 1 * cg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
pmanum1 ~~ 0 * pmanum1
pmanum2 ~~ 0 * pmanum2
pmaverb1 ~~ 0 * pmaverb1
pmaverb2 ~~ 0 * pmaverb2
pmarea1 ~~ 0 * pmarea1
pmarea2 ~~ 0 * pmarea2
pmaspa1 ~~ 0 * pmaspa1
pmaspa2 ~~ 0 * pmaspa2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
PMACPCE <- sem(PMACPCE, data = OSBO, group = "zygf", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 5.035028e-13) is close to zero. This may be a symptom that the
## model is not identified.
PMACPDE <- '
# Factors and Loadings
g1 =~ NA * pmanum1 + c(gl1, gl1) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(gl2, gl2) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(gl3, gl3) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(gl4, gl4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
g2 =~ NA * pmanum2 + c(gl1, gl1) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(gl2, gl2) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(gl3, gl3) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(gl4, gl4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
dg1 =~ NA * g1 + c(d, d) * g1
dg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
DS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
DS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
ES1 =~ NA * pmanum1 + c(e1, e1) * pmanum1
ES2 =~ NA * pmanum2 + c(e1, e1) * pmanum2
DS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
DS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
ES3 =~ NA * pmaverb1 + c(e2, e2) * pmaverb1
ES4 =~ NA * pmaverb2 + c(e2, e2) * pmaverb2
DS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
DS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
ES5 =~ NA * pmarea1 + c(e3, e3) * pmarea1
ES6 =~ NA * pmarea2 + c(e3, e3) * pmarea2
DS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
DS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
ES7 =~ NA * pmaspa1 + c(e4, e4) * pmaspa1
ES8 =~ NA * pmaspa2 + c(e4, e4) * pmaspa2
# Covariances
dg1 ~~ c(1, 0.25) * dg2
eg1 ~~ 0 * eg2
DS1 ~~ c(1, 0.25) * DS2
ES1 ~~ 0 * ES2
DS3 ~~ c(1, 0.25) * DS4
ES3 ~~ 0 * ES4
DS5 ~~ c(1, 0.25) * DS6
ES5 ~~ 0 * ES6
DS7 ~~ c(1, 0.25) * DS8
ES7 ~~ 0 * ES8
# Variances
dg1 ~~ 1 * dg1
dg2 ~~ 1 * dg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
pmanum1 ~~ 0 * pmanum1
pmanum2 ~~ 0 * pmanum2
pmaverb1 ~~ 0 * pmaverb1
pmaverb2 ~~ 0 * pmaverb2
pmarea1 ~~ 0 * pmarea1
pmarea2 ~~ 0 * pmarea2
pmaspa1 ~~ 0 * pmaspa1
pmaspa2 ~~ 0 * pmaspa2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
PMACPDE <- sem(PMACPDE, data = OSBO, group = "zygf", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 6.628897e-13) is close to zero. This may be a symptom that the
## model is not identified.
# Independent Pathway Models
PMAIPACE <- '
# Loadings
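# Independent pathway structure: common A, C, and E factors (AL, CL, EL) load directly on every test,
# test-specific A and C factors (AS, CS) pick up what the common factors do not,
# and test-specific E enters through the residual variances at the end of the model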
AL1 =~ NA * pmanum1 + c(a, a) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(ab, ab) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(ac, ac) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(ad, ad) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AL2 =~ NA * pmanum2 + c(a, a) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(ab, ab) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(ac, ac) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(ad, ad) * pmaspa2 + c(0.5, 0.5) * pmaspa2
CL1 =~ NA * pmanum1 + c(d, d) * pmanum1 + NA * pmaverb1 + c(db, db) * pmaverb1 + NA * pmarea1 + c(dc, dc) * pmarea1 + NA * pmaspa1 + c(dd, dd) * pmaspa1
CL2 =~ NA * pmanum2 + c(d, d) * pmanum2 + NA * pmaverb2 + c(db, db) * pmaverb2 + NA * pmarea2 + c(dc, dc) * pmarea2 + NA * pmaspa2 + c(dd, dd) * pmaspa2
EL1 =~ NA * pmanum1 + c(el, el) * pmanum1 + NA * pmaverb1 + c(elb, elb) * pmaverb1 + NA * pmarea1 + c(elc, elc) * pmarea1 + NA * pmaspa1 + c(eld, eld) * pmaspa1
EL2 =~ NA * pmanum2 + c(el, el) * pmanum2 + NA * pmaverb2 + c(elb, elb) * pmaverb2 + NA * pmarea2 + c(elc, elc) * pmarea2 + NA * pmaspa2 + c(eld, eld) * pmaspa2
AS1 =~ NA * pmanum1 + c(a1, a1) * pmanum1 + c(0.5, 0.5) * pmanum1
AS2 =~ NA * pmanum2 + c(a1, a1) * pmanum2 + c(0.5, 0.5) * pmanum2
CS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
CS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
AS3 =~ NA * pmaverb1 + c(a2, a2) * pmaverb1 + c(0.5, 0.5) * pmaverb1
AS4 =~ NA * pmaverb2 + c(a2, a2) * pmaverb2 + c(0.5, 0.5) * pmaverb2
CS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
CS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
AS5 =~ NA * pmarea1 + c(a3, a3) * pmarea1 + c(0.5, 0.5) * pmarea1
AS6 =~ NA * pmarea2 + c(a3, a3) * pmarea2 + c(0.5, 0.5) * pmarea2
CS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
CS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
AS7 =~ NA * pmaspa1 + c(a4, a4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AS8 =~ NA * pmaspa2 + c(a4, a4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
CS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
CS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
AL1 ~~ 0 * CL1 + 0 * CL2
AL2 ~~ 0 * CL1 + 0 * CL2
CL1 ~~ c(1, 1) * CL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * CS1 + 0 * CS2
AS2 ~~ 0 * CS1 + 0 * CS2
CS1 ~~ c(1, 1) * CS2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * CS3 + 0 * CS4
AS4 ~~ 0 * CS3 + 0 * CS4
CS3 ~~ c(1, 1) * CS4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * CS5 + 0 * CS6
AS6 ~~ 0 * CS5 + 0 * CS6
CS5 ~~ c(1, 1) * CS6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * CS7 + 0 * CS8
AS8 ~~ 0 * CS7 + 0 * CS8
CS7 ~~ c(1, 1) * CS8
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
CL1 ~~ 1 * CL1
CL2 ~~ 1 * CL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
pmanum1 ~~ c(e2, e2) * pmanum1
pmanum2 ~~ c(e2, e2) * pmanum2
pmaverb1 ~~ c(e3, e3) * pmaverb1
pmaverb2 ~~ c(e3, e3) * pmaverb2
pmarea1 ~~ c(e4, e4) * pmarea1
pmarea2 ~~ c(e4, e4) * pmarea2
pmaspa1 ~~ c(e5, e5) * pmaspa1
pmaspa2 ~~ c(e5, e5) * pmaspa2'
PMAIPACE <- sem(PMAIPACE, data = OSBO, group = "zygf", orthogonal = T)
PMAIPADE <- '
# Loadings
AL1 =~ NA * pmanum1 + c(a, a) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(ab, ab) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(ac, ac) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(ad, ad) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AL2 =~ NA * pmanum2 + c(a, a) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(ab, ab) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(ac, ac) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(ad, ad) * pmaspa2 + c(0.5, 0.5) * pmaspa2
DL1 =~ NA * pmanum1 + c(d, d) * pmanum1 + NA * pmaverb1 + c(db, db) * pmaverb1 + NA * pmarea1 + c(dc, dc) * pmarea1 + NA * pmaspa1 + c(dd, dd) * pmaspa1
DL2 =~ NA * pmanum2 + c(d, d) * pmanum2 + NA * pmaverb2 + c(db, db) * pmaverb2 + NA * pmarea2 + c(dc, dc) * pmarea2 + NA * pmaspa2 + c(dd, dd) * pmaspa2
EL1 =~ NA * pmanum1 + c(el, el) * pmanum1 + NA * pmaverb1 + c(elb, elb) * pmaverb1 + NA * pmarea1 + c(elc, elc) * pmarea1 + NA * pmaspa1 + c(eld, eld) * pmaspa1
EL2 =~ NA * pmanum2 + c(el, el) * pmanum2 + NA * pmaverb2 + c(elb, elb) * pmaverb2 + NA * pmarea2 + c(elc, elc) * pmarea2 + NA * pmaspa2 + c(eld, eld) * pmaspa2
AS1 =~ NA * pmanum1 + c(a1, a1) * pmanum1 + c(0.5, 0.5) * pmanum1
AS2 =~ NA * pmanum2 + c(a1, a1) * pmanum2 + c(0.5, 0.5) * pmanum2
DS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
DS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
AS3 =~ NA * pmaverb1 + c(a2, a2) * pmaverb1 + c(0.5, 0.5) * pmaverb1
AS4 =~ NA * pmaverb2 + c(a2, a2) * pmaverb2 + c(0.5, 0.5) * pmaverb2
DS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
DS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
AS5 =~ NA * pmarea1 + c(a3, a3) * pmarea1 + c(0.5, 0.5) * pmarea1
AS6 =~ NA * pmarea2 + c(a3, a3) * pmarea2 + c(0.5, 0.5) * pmarea2
DS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
DS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
AS7 =~ NA * pmaspa1 + c(a4, a4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AS8 =~ NA * pmaspa2 + c(a4, a4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
DS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
DS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
AL1 ~~ 0 * DL1 + 0 * DL2
AL2 ~~ 0 * DL1 + 0 * DL2
DL1 ~~ c(1, 0.25) * DL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * DS1 + 0 * DS2
AS2 ~~ 0 * DS1 + 0 * DS2
DS1 ~~ c(1, 0.25) * DS2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * DS3 + 0 * DS4
AS4 ~~ 0 * DS3 + 0 * DS4
DS3 ~~ c(1, 0.25) * DS4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * DS5 + 0 * DS6
AS6 ~~ 0 * DS5 + 0 * DS6
DS5 ~~ c(1, 0.25) * DS6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * DS7 + 0 * DS8
AS8 ~~ 0 * DS7 + 0 * DS8
DS7 ~~ c(1, 0.25) * DS8
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
DL1 ~~ 1 * DL1
DL2 ~~ 1 * DL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
pmanum1 ~~ c(e2, e2) * pmanum1
pmanum2 ~~ c(e2, e2) * pmanum2
pmaverb1 ~~ c(e3, e3) * pmaverb1
pmaverb2 ~~ c(e3, e3) * pmaverb2
pmarea1 ~~ c(e4, e4) * pmarea1
pmarea2 ~~ c(e4, e4) * pmarea2
pmaspa1 ~~ c(e5, e5) * pmaspa1
pmaspa2 ~~ c(e5, e5) * pmaspa2'
PMAIPADE <- sem(PMAIPADE, data = OSBO, group = "zygf", orthogonal = T)
PMAIPAE <- '
# Loadings
AL1 =~ NA * pmanum1 + c(a, a) * pmanum1 + c(0.5, 0.5) * pmanum1 + NA * pmaverb1 + c(ab, ab) * pmaverb1 + c(0.5, 0.5) * pmaverb1 + NA * pmarea1 + c(ac, ac) * pmarea1 + c(0.5, 0.5) * pmarea1 + NA * pmaspa1 + c(ad, ad) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AL2 =~ NA * pmanum2 + c(a, a) * pmanum2 + c(0.5, 0.5) * pmanum2 + NA * pmaverb2 + c(ab, ab) * pmaverb2 + c(0.5, 0.5) * pmaverb2 + NA * pmarea2 + c(ac, ac) * pmarea2 + c(0.5, 0.5) * pmarea2 + NA * pmaspa2 + c(ad, ad) * pmaspa2 + c(0.5, 0.5) * pmaspa2
EL1 =~ NA * pmanum1 + c(el, el) * pmanum1 + NA * pmaverb1 + c(elb, elb) * pmaverb1 + NA * pmarea1 + c(elc, elc) * pmarea1 + NA * pmaspa1 + c(eld, eld) * pmaspa1
EL2 =~ NA * pmanum2 + c(el, el) * pmanum2 + NA * pmaverb2 + c(elb, elb) * pmaverb2 + NA * pmarea2 + c(elc, elc) * pmarea2 + NA * pmaspa2 + c(eld, eld) * pmaspa2
AS1 =~ NA * pmanum1 + c(a1, a1) * pmanum1 + c(0.5, 0.5) * pmanum1
AS2 =~ NA * pmanum2 + c(a1, a1) * pmanum2 + c(0.5, 0.5) * pmanum2
AS3 =~ NA * pmaverb1 + c(a2, a2) * pmaverb1 + c(0.5, 0.5) * pmaverb1
AS4 =~ NA * pmaverb2 + c(a2, a2) * pmaverb2 + c(0.5, 0.5) * pmaverb2
AS5 =~ NA * pmarea1 + c(a3, a3) * pmarea1 + c(0.5, 0.5) * pmarea1
AS6 =~ NA * pmarea2 + c(a3, a3) * pmarea2 + c(0.5, 0.5) * pmarea2
AS7 =~ NA * pmaspa1 + c(a4, a4) * pmaspa1 + c(0.5, 0.5) * pmaspa1
AS8 =~ NA * pmaspa2 + c(a4, a4) * pmaspa2 + c(0.5, 0.5) * pmaspa2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS3 ~~ c(1, 0.5) * AS4
AS5 ~~ c(1, 0.5) * AS6
AS7 ~~ c(1, 0.5) * AS8
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
pmanum1 ~~ c(e2, e2) * pmanum1
pmanum2 ~~ c(e2, e2) * pmanum2
pmaverb1 ~~ c(e3, e3) * pmaverb1
pmaverb2 ~~ c(e3, e3) * pmaverb2
pmarea1 ~~ c(e4, e4) * pmarea1
pmarea2 ~~ c(e4, e4) * pmarea2
pmaspa1 ~~ c(e5, e5) * pmaspa1
pmaspa2 ~~ c(e5, e5) * pmaspa2'
PMAIPAE <- sem(PMAIPAE, data = OSBO, group = "zygf", orthogonal = T)
PMAIPCE <- '
# Loadings
CL1 =~ NA * pmanum1 + c(d, d) * pmanum1 + NA * pmaverb1 + c(db, db) * pmaverb1 + NA * pmarea1 + c(dc, dc) * pmarea1 + NA * pmaspa1 + c(dd, dd) * pmaspa1
CL2 =~ NA * pmanum2 + c(d, d) * pmanum2 + NA * pmaverb2 + c(db, db) * pmaverb2 + NA * pmarea2 + c(dc, dc) * pmarea2 + NA * pmaspa2 + c(dd, dd) * pmaspa2
EL1 =~ NA * pmanum1 + c(el, el) * pmanum1 + NA * pmaverb1 + c(elb, elb) * pmaverb1 + NA * pmarea1 + c(elc, elc) * pmarea1 + NA * pmaspa1 + c(eld, eld) * pmaspa1
EL2 =~ NA * pmanum2 + c(el, el) * pmanum2 + NA * pmaverb2 + c(elb, elb) * pmaverb2 + NA * pmarea2 + c(elc, elc) * pmarea2 + NA * pmaspa2 + c(eld, eld) * pmaspa2
CS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
CS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
CS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
CS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
CS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
CS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
CS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
CS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
# Covariances
CL1 ~~ c(1, 1) * CL2
EL1 ~~ c(0, 0) * EL2
CS1 ~~ c(1, 1) * CS2
CS3 ~~ c(1, 1) * CS4
CS5 ~~ c(1, 1) * CS6
CS7 ~~ c(1, 1) * CS8
# Variances
CL1 ~~ 1 * CL1
CL2 ~~ 1 * CL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
pmanum1 ~~ c(e2, e2) * pmanum1
pmanum2 ~~ c(e2, e2) * pmanum2
pmaverb1 ~~ c(e3, e3) * pmaverb1
pmaverb2 ~~ c(e3, e3) * pmaverb2
pmarea1 ~~ c(e4, e4) * pmarea1
pmarea2 ~~ c(e4, e4) * pmarea2
pmaspa1 ~~ c(e5, e5) * pmaspa1
pmaspa2 ~~ c(e5, e5) * pmaspa2'
PMAIPCE <- sem(PMAIPCE, data = OSBO, group = "zygf", orthogonal = T)
PMAIPDE <- '
# Loadings
DL1 =~ NA * pmanum1 + c(d, d) * pmanum1 + NA * pmaverb1 + c(db, db) * pmaverb1 + NA * pmarea1 + c(dc, dc) * pmarea1 + NA * pmaspa1 + c(dd, dd) * pmaspa1
DL2 =~ NA * pmanum2 + c(d, d) * pmanum2 + NA * pmaverb2 + c(db, db) * pmaverb2 + NA * pmarea2 + c(dc, dc) * pmarea2 + NA * pmaspa2 + c(dd, dd) * pmaspa2
EL1 =~ NA * pmanum1 + c(el, el) * pmanum1 + NA * pmaverb1 + c(elb, elb) * pmaverb1 + NA * pmarea1 + c(elc, elc) * pmarea1 + NA * pmaspa1 + c(eld, eld) * pmaspa1
EL2 =~ NA * pmanum2 + c(el, el) * pmanum2 + NA * pmaverb2 + c(elb, elb) * pmaverb2 + NA * pmarea2 + c(elc, elc) * pmarea2 + NA * pmaspa2 + c(eld, eld) * pmaspa2
DS1 =~ NA * pmanum1 + c(d1, d1) * pmanum1
DS2 =~ NA * pmanum2 + c(d1, d1) * pmanum2
DS3 =~ NA * pmaverb1 + c(d2, d2) * pmaverb1
DS4 =~ NA * pmaverb2 + c(d2, d2) * pmaverb2
DS5 =~ NA * pmarea1 + c(d3, d3) * pmarea1
DS6 =~ NA * pmarea2 + c(d3, d3) * pmarea2
DS7 =~ NA * pmaspa1 + c(d4, d4) * pmaspa1
DS8 =~ NA * pmaspa2 + c(d4, d4) * pmaspa2
# Covariances
DL1 ~~ c(1, 0.25) * DL2
EL1 ~~ c(0, 0) * EL2
DS1 ~~ c(1, 0.25) * DS2
DS3 ~~ c(1, 0.25) * DS4
DS5 ~~ c(1, 0.25) * DS6
DS7 ~~ c(1, 0.25) * DS8
# Variances
DL1 ~~ 1 * DL1
DL2 ~~ 1 * DL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
pmanum1 ~~ c(e2, e2) * pmanum1
pmanum2 ~~ c(e2, e2) * pmanum2
pmaverb1 ~~ c(e3, e3) * pmaverb1
pmaverb2 ~~ c(e3, e3) * pmaverb2
pmarea1 ~~ c(e4, e4) * pmarea1
pmarea2 ~~ c(e4, e4) * pmarea2
pmaspa1 ~~ c(e5, e5) * pmaspa1
pmaspa2 ~~ c(e5, e5) * pmaspa2'
PMAIPDE <- sem(PMAIPDE, data = OSBO, group = "zygf", orthogonal = T)
round(cbind("CP ACE" = fitMeasures(PMACPACE, GFIT),
"CP ADE" = fitMeasures(PMACPADE, GFIT),
"CP AE" = fitMeasures(PMACPAE, GFIT),
"CP CE" = fitMeasures(PMACPCE, GFIT),
"CP DE" = fitMeasures(PMACPDE, GFIT),
"IP ACE" = fitMeasures(PMAIPACE, GFIT),
"IP ADE" = fitMeasures(PMAIPADE, GFIT),
"IP AE" = fitMeasures(PMAIPAE, GFIT),
"IP CE" = fitMeasures(PMAIPCE, GFIT),
"IP DE" = fitMeasures(PMAIPDE, GFIT)), 3)
## CP ACE CP ADE CP AE CP CE CP DE IP ACE IP ADE
## logl -5777.235 -5786.957 -5787.364 -5785.964 -5800.325 -5775.168 -5786.008
## df 53.000 53.000 58.000 58.000 58.000 48.000 48.000
## npar 35.000 35.000 30.000 30.000 30.000 40.000 40.000
## aic 11624.470 11643.915 11634.729 11631.927 11660.650 11630.337 11652.015
## bic 11737.372 11756.816 11731.501 11728.699 11757.423 11759.366 11781.045
## chisq 49.287 68.732 69.546 66.744 95.467 45.153 66.832
## pvalue 0.620 0.072 0.143 0.202 0.001 0.590 0.037
## tli 1.004 0.983 0.988 0.991 0.962 1.003 0.977
## rmsea 0.000 0.056 0.046 0.040 0.083 0.000 0.065
## IP AE IP CE IP DE
## logl -5786.827 -5784.511 -5799.818
## df 56.000 56.000 56.000
## npar 32.000 32.000 32.000
## aic 11637.655 11633.022 11663.637
## bic 11740.878 11736.246 11766.861
## chisq 68.471 63.839 94.454
## pvalue 0.122 0.220 0.001
## tli 0.987 0.992 0.960
## rmsea 0.049 0.039 0.086
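Since several of these models are nested (the AE, CE, and DE models are restrictions of the ACE or ADE models), likelihood-ratio tests are a useful complement to the information criteria above; for instance, lavaan's anova() method gives the chi-square difference test for dropping the C paths from the common pathway model.
anova(PMACPAE, PMACPACE)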
Another part of the sample did not take Thurstone's Primary Mental Abilities (PMA) test. Instead, they took tests of facial recognition, logical reasoning, social perception, cancellation, card rotation, and reasoning about ship port destinations (read Osborne's descriptions if you want to know more). Age-wise, the samples are fairly comparable, with means of 15.01 (\(\sigma = 1.46\)) versus 15.50 (\(\sigma = 1.58\)).
# Common Pathway Models
ALTCPACE <- '
# Factors and Loadings
g1 =~ NA * faces1 + c(gl1, gl1) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(gl2, gl2) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(gl3, gl3) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(gl4, gl4) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(gl5, gl5) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(gl6, gl6) * ship1 + c(0.5, 0.5) * ship1
g2 =~ NA * faces2 + c(gl1, gl1) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(gl2, gl2) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(gl3, gl3) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(gl4, gl4) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(gl5, gl5) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(gl6, gl6) * ship2 + c(0.5, 0.5) * ship2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
cg1 =~ NA * g1 + c(d, d) * g1
cg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * faces1 + c(a1, a1) * faces1 + c(0.5, 0.5) * faces1
AS2 =~ NA * faces2 + c(a1, a1) * faces2 + c(0.5, 0.5) * faces2
CS1 =~ NA * faces1 + c(d1, d1) * faces1
CS2 =~ NA * faces2 + c(d1, d1) * faces2
ES1 =~ NA * faces1 + c(e1, e1) * faces1
ES2 =~ NA * faces2 + c(e1, e1) * faces2
AS3 =~ NA * logical1 + c(a2, a2) * logical1 + c(0.5, 0.5) * logical1
AS4 =~ NA * logical2 + c(a2, a2) * logical2 + c(0.5, 0.5) * logical2
CS3 =~ NA * logical1 + c(d2, d2) * logical1
CS4 =~ NA * logical2 + c(d2, d2) * logical2
ES3 =~ NA * logical1 + c(e2, e2) * logical1
ES4 =~ NA * logical2 + c(e2, e2) * logical2
AS5 =~ NA * socperc1 + c(a3, a3) * socperc1 + c(0.5, 0.5) * socperc1
AS6 =~ NA * socperc2 + c(a3, a3) * socperc2 + c(0.5, 0.5) * socperc2
CS5 =~ NA * socperc1 + c(d3, d3) * socperc1
CS6 =~ NA * socperc2 + c(d3, d3) * socperc2
ES5 =~ NA * socperc1 + c(e3, e3) * socperc1
ES6 =~ NA * socperc2 + c(e3, e3) * socperc2
AS7 =~ NA * cancal1 + c(a4, a4) * cancal1 + c(0.5, 0.5) * cancal1
AS8 =~ NA * cancal2 + c(a4, a4) * cancal2 + c(0.5, 0.5) * cancal2
CS7 =~ NA * cancal1 + c(d4, d4) * cancal1
CS8 =~ NA * cancal2 + c(d4, d4) * cancal2
ES7 =~ NA * cancal1 + c(e4, e4) * cancal1
ES8 =~ NA * cancal2 + c(e4, e4) * cancal2
AS9 =~ NA * crdrot1 + c(a5, a5) * crdrot1 + c(0.5, 0.5) * crdrot1
AS10 =~ NA * crdrot2 + c(a5, a5) * crdrot2 + c(0.5, 0.5) * crdrot2
CS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
CS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
ES9 =~ NA * crdrot1 + c(e5, e5) * crdrot1
ES10 =~ NA * crdrot2 + c(e5, e5) * crdrot2
AS11 =~ NA * ship1 + c(a6, a6) * ship1 + c(0.5, 0.5) * ship1
AS12 =~ NA * ship2 + c(a6, a6) * ship2 + c(0.5, 0.5) * ship2
CS11 =~ NA * ship1 + c(d6, d6) * ship1
CS12 =~ NA * ship2 + c(d6, d6) * ship2
ES11 =~ NA * ship1 + c(e6, e6) * ship1
ES12 =~ NA * ship2 + c(e6, e6) * ship2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
ag1 ~~ 0 * cg1 + 0 * cg2
ag2 ~~ 0 * cg1 + 0 * cg2
cg1 ~~ c(1, 1) * cg2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * CS1 + 0 * CS2
AS2 ~~ 0 * CS1 + 0 * CS2
CS1 ~~ c(1, 1) * CS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * CS3 + 0 * CS4
AS4 ~~ 0 * CS3 + 0 * CS4
CS3 ~~ c(1, 1) * CS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * CS5 + 0 * CS6
AS6 ~~ 0 * CS5 + 0 * CS6
CS5 ~~ c(1, 1) * CS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * CS7 + 0 * CS8
AS8 ~~ 0 * CS7 + 0 * CS8
CS7 ~~ c(1, 1) * CS8
ES7 ~~ 0 * ES8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * CS9 + 0 * CS10
AS10 ~~ 0 * CS9 + 0 * CS10
CS9 ~~ c(1, 1) * CS10
ES9 ~~ 0 * ES10
AS11 ~~ c(1, 0.5) * AS12
AS11 ~~ 0 * CS11 + 0 * CS12
AS12 ~~ 0 * CS11 + 0 * CS12
CS11 ~~ c(1, 1) * CS12
ES11 ~~ 0 * ES12
# Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
cg1 ~~ 1 * cg1
cg2 ~~ 1 * cg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
AS11 ~~ 1 * AS11
AS12 ~~ 1 * AS12
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
CS11 ~~ 1 * CS11
CS12 ~~ 1 * CS12
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ES11 ~~ 1 * ES11
ES12 ~~ 1 * ES12
faces1 ~~ 0 * faces1
faces2 ~~ 0 * faces2
logical1 ~~ 0 * logical1
logical2 ~~ 0 * logical2
socperc1 ~~ 0 * socperc1
socperc2 ~~ 0 * socperc2
cancal1 ~~ 0 * cancal1
cancal2 ~~ 0 * cancal2
crdrot1 ~~ 0 * crdrot1
crdrot2 ~~ 0 * crdrot2
ship1 ~~ 0 * ship1
ship2 ~~ 0 * ship2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
ALTCPACE <- sem(ALTCPACE, data = OSBOA, group = "zygf", orthogonal = T)
ALTCPADE <- '
# Factors and Loadings
g1 =~ NA * faces1 + c(gl1, gl1) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(gl2, gl2) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(gl3, gl3) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(gl4, gl4) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(gl5, gl5) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(gl6, gl6) * ship1 + c(0.5, 0.5) * ship1
g2 =~ NA * faces2 + c(gl1, gl1) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(gl2, gl2) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(gl3, gl3) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(gl4, gl4) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(gl5, gl5) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(gl6, gl6) * ship2 + c(0.5, 0.5) * ship2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
dg1 =~ NA * g1 + c(d, d) * g1
dg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * faces1 + c(a1, a1) * faces1 + c(0.5, 0.5) * faces1
AS2 =~ NA * faces2 + c(a1, a1) * faces2 + c(0.5, 0.5) * faces2
DS1 =~ NA * faces1 + c(d1, d1) * faces1
DS2 =~ NA * faces2 + c(d1, d1) * faces2
ES1 =~ NA * faces1 + c(e1, e1) * faces1
ES2 =~ NA * faces2 + c(e1, e1) * faces2
AS3 =~ NA * logical1 + c(a2, a2) * logical1 + c(0.5, 0.5) * logical1
AS4 =~ NA * logical2 + c(a2, a2) * logical2 + c(0.5, 0.5) * logical2
DS3 =~ NA * logical1 + c(d2, d2) * logical1
DS4 =~ NA * logical2 + c(d2, d2) * logical2
ES3 =~ NA * logical1 + c(e2, e2) * logical1
ES4 =~ NA * logical2 + c(e2, e2) * logical2
AS5 =~ NA * socperc1 + c(a3, a3) * socperc1 + c(0.5, 0.5) * socperc1
AS6 =~ NA * socperc2 + c(a3, a3) * socperc2 + c(0.5, 0.5) * socperc2
DS5 =~ NA * socperc1 + c(d3, d3) * socperc1
DS6 =~ NA * socperc2 + c(d3, d3) * socperc2
ES5 =~ NA * socperc1 + c(e3, e3) * socperc1
ES6 =~ NA * socperc2 + c(e3, e3) * socperc2
AS7 =~ NA * cancal1 + c(a4, a4) * cancal1 + c(0.5, 0.5) * cancal1
AS8 =~ NA * cancal2 + c(a4, a4) * cancal2 + c(0.5, 0.5) * cancal2
DS7 =~ NA * cancal1 + c(d4, d4) * cancal1
DS8 =~ NA * cancal2 + c(d4, d4) * cancal2
ES7 =~ NA * cancal1 + c(e4, e4) * cancal1
ES8 =~ NA * cancal2 + c(e4, e4) * cancal2
AS9 =~ NA * crdrot1 + c(a5, a5) * crdrot1 + c(0.5, 0.5) * crdrot1
AS10 =~ NA * crdrot2 + c(a5, a5) * crdrot2 + c(0.5, 0.5) * crdrot2
DS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
DS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
ES9 =~ NA * crdrot1 + c(e5, e5) * crdrot1
ES10 =~ NA * crdrot2 + c(e5, e5) * crdrot2
AS11 =~ NA * ship1 + c(a6, a6) * ship1 + c(0.5, 0.5) * ship1
AS12 =~ NA * ship2 + c(a6, a6) * ship2 + c(0.5, 0.5) * ship2
DS11 =~ NA * ship1 + c(d6, d6) * ship1
DS12 =~ NA * ship2 + c(d6, d6) * ship2
ES11 =~ NA * ship1 + c(e6, e6) * ship1
ES12 =~ NA * ship2 + c(e6, e6) * ship2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
ag1 ~~ 0 * dg1 + 0 * dg2
ag2 ~~ 0 * dg1 + 0 * dg2
dg1 ~~ c(1, 0.25) * dg2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * DS1 + 0 * DS2
AS2 ~~ 0 * DS1 + 0 * DS2
DS1 ~~ c(1, 0.25) * DS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * DS3 + 0 * DS4
AS4 ~~ 0 * DS3 + 0 * DS4
DS3 ~~ c(1, 0.25) * DS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * DS5 + 0 * DS6
AS6 ~~ 0 * DS5 + 0 * DS6
DS5 ~~ c(1, 0.25) * DS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * DS7 + 0 * DS8
AS8 ~~ 0 * DS7 + 0 * DS8
DS7 ~~ c(1, 0.25) * DS8
ES7 ~~ 0 * ES8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * DS9 + 0 * DS10
AS10 ~~ 0 * DS9 + 0 * DS10
DS9 ~~ c(1, 0.25) * DS10
ES9 ~~ 0 * ES10
AS11 ~~ c(1, 0.5) * AS12
AS11 ~~ 0 * DS11 + 0 * DS12
AS12 ~~ 0 * DS11 + 0 * DS12
DS11 ~~ c(1, 0.25) * DS12
ES11 ~~ 0 * ES12
# Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
dg1 ~~ 1 * dg1
dg2 ~~ 1 * dg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
AS11 ~~ 1 * AS11
AS12 ~~ 1 * AS12
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
DS11 ~~ 1 * DS11
DS12 ~~ 1 * DS12
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ES11 ~~ 1 * ES11
ES12 ~~ 1 * ES12
faces1 ~~ 0 * faces1
faces2 ~~ 0 * faces2
logical1 ~~ 0 * logical1
logical2 ~~ 0 * logical2
socperc1 ~~ 0 * socperc1
socperc2 ~~ 0 * socperc2
cancal1 ~~ 0 * cancal1
cancal2 ~~ 0 * cancal2
crdrot1 ~~ 0 * crdrot1
crdrot2 ~~ 0 * crdrot2
ship1 ~~ 0 * ship1
ship2 ~~ 0 * ship2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
ALTCPADE <- sem(ALTCPADE, data = OSBOA, group = "zygf", orthogonal = T)
ALTCPAE <- '
# Factors and Loadings
g1 =~ NA * faces1 + c(gl1, gl1) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(gl2, gl2) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(gl3, gl3) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(gl4, gl4) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(gl5, gl5) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(gl6, gl6) * ship1 + c(0.5, 0.5) * ship1
g2 =~ NA * faces2 + c(gl1, gl1) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(gl2, gl2) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(gl3, gl3) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(gl4, gl4) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(gl5, gl5) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(gl6, gl6) * ship2 + c(0.5, 0.5) * ship2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * faces1 + c(a1, a1) * faces1 + c(0.5, 0.5) * faces1
AS2 =~ NA * faces2 + c(a1, a1) * faces2 + c(0.5, 0.5) * faces2
ES1 =~ NA * faces1 + c(e1, e1) * faces1
ES2 =~ NA * faces2 + c(e1, e1) * faces2
AS3 =~ NA * logical1 + c(a2, a2) * logical1 + c(0.5, 0.5) * logical1
AS4 =~ NA * logical2 + c(a2, a2) * logical2 + c(0.5, 0.5) * logical2
ES3 =~ NA * logical1 + c(e2, e2) * logical1
ES4 =~ NA * logical2 + c(e2, e2) * logical2
AS5 =~ NA * socperc1 + c(a3, a3) * socperc1 + c(0.5, 0.5) * socperc1
AS6 =~ NA * socperc2 + c(a3, a3) * socperc2 + c(0.5, 0.5) * socperc2
ES5 =~ NA * socperc1 + c(e3, e3) * socperc1
ES6 =~ NA * socperc2 + c(e3, e3) * socperc2
AS7 =~ NA * cancal1 + c(a4, a4) * cancal1 + c(0.5, 0.5) * cancal1
AS8 =~ NA * cancal2 + c(a4, a4) * cancal2 + c(0.5, 0.5) * cancal2
ES7 =~ NA * cancal1 + c(e4, e4) * cancal1
ES8 =~ NA * cancal2 + c(e4, e4) * cancal2
AS9 =~ NA * crdrot1 + c(a5, a5) * crdrot1 + c(0.5, 0.5) * crdrot1
AS10 =~ NA * crdrot2 + c(a5, a5) * crdrot2 + c(0.5, 0.5) * crdrot2
ES9 =~ NA * crdrot1 + c(e5, e5) * crdrot1
ES10 =~ NA * crdrot2 + c(e5, e5) * crdrot2
AS11 =~ NA * ship1 + c(a6, a6) * ship1 + c(0.5, 0.5) * ship1
AS12 =~ NA * ship2 + c(a6, a6) * ship2 + c(0.5, 0.5) * ship2
ES11 =~ NA * ship1 + c(e6, e6) * ship1
ES12 =~ NA * ship2 + c(e6, e6) * ship2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
ES7 ~~ 0 * ES8
AS9 ~~ c(1, 0.5) * AS10
ES9 ~~ 0 * ES10
AS11 ~~ c(1, 0.5) * AS12
ES11 ~~ 0 * ES12
# Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
AS11 ~~ 1 * AS11
AS12 ~~ 1 * AS12
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ES11 ~~ 1 * ES11
ES12 ~~ 1 * ES12
faces1 ~~ 0 * faces1
faces2 ~~ 0 * faces2
logical1 ~~ 0 * logical1
logical2 ~~ 0 * logical2
socperc1 ~~ 0 * socperc1
socperc2 ~~ 0 * socperc2
cancal1 ~~ 0 * cancal1
cancal2 ~~ 0 * cancal2
crdrot1 ~~ 0 * crdrot1
crdrot2 ~~ 0 * crdrot2
ship1 ~~ 0 * ship1
ship2 ~~ 0 * ship2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
ALTCPAE <- sem(ALTCPAE, data = OSBOA, group = "zygf", orthogonal = T)
ALTCPCE <- '
# Factors and Loadings
g1 =~ NA * faces1 + c(gl1, gl1) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(gl2, gl2) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(gl3, gl3) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(gl4, gl4) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(gl5, gl5) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(gl6, gl6) * ship1 + c(0.5, 0.5) * ship1
g2 =~ NA * faces2 + c(gl1, gl1) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(gl2, gl2) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(gl3, gl3) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(gl4, gl4) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(gl5, gl5) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(gl6, gl6) * ship2 + c(0.5, 0.5) * ship2
cg1 =~ NA * g1 + c(d, d) * g1
cg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
CS1 =~ NA * faces1 + c(d1, d1) * faces1
CS2 =~ NA * faces2 + c(d1, d1) * faces2
ES1 =~ NA * faces1 + c(e1, e1) * faces1
ES2 =~ NA * faces2 + c(e1, e1) * faces2
CS3 =~ NA * logical1 + c(d2, d2) * logical1
CS4 =~ NA * logical2 + c(d2, d2) * logical2
ES3 =~ NA * logical1 + c(e2, e2) * logical1
ES4 =~ NA * logical2 + c(e2, e2) * logical2
CS5 =~ NA * socperc1 + c(d3, d3) * socperc1
CS6 =~ NA * socperc2 + c(d3, d3) * socperc2
ES5 =~ NA * socperc1 + c(e3, e3) * socperc1
ES6 =~ NA * socperc2 + c(e3, e3) * socperc2
CS7 =~ NA * cancal1 + c(d4, d4) * cancal1
CS8 =~ NA * cancal2 + c(d4, d4) * cancal2
ES7 =~ NA * cancal1 + c(e4, e4) * cancal1
ES8 =~ NA * cancal2 + c(e4, e4) * cancal2
CS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
CS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
ES9 =~ NA * crdrot1 + c(e5, e5) * crdrot1
ES10 =~ NA * crdrot2 + c(e5, e5) * crdrot2
CS11 =~ NA * ship1 + c(d6, d6) * ship1
CS12 =~ NA * ship2 + c(d6, d6) * ship2
ES11 =~ NA * ship1 + c(e6, e6) * ship1
ES12 =~ NA * ship2 + c(e6, e6) * ship2
# Covariances
cg1 ~~ c(1, 1) * cg2
eg1 ~~ 0 * eg2
CS1 ~~ c(1, 1) * CS2
ES1 ~~ 0 * ES2
CS3 ~~ c(1, 1) * CS4
ES3 ~~ 0 * ES4
CS5 ~~ c(1, 1) * CS6
ES5 ~~ 0 * ES6
CS7 ~~ c(1, 1) * CS8
ES7 ~~ 0 * ES8
CS9 ~~ c(1, 1) * CS10
ES9 ~~ 0 * ES10
CS11 ~~ c(1, 1) * CS12
ES11 ~~ 0 * ES12
# Variances
cg1 ~~ 1 * cg1
cg2 ~~ 1 * cg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
CS11 ~~ 1 * CS11
CS12 ~~ 1 * CS12
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ES11 ~~ 1 * ES11
ES12 ~~ 1 * ES12
faces1 ~~ 0 * faces1
faces2 ~~ 0 * faces2
logical1 ~~ 0 * logical1
logical2 ~~ 0 * logical2
socperc1 ~~ 0 * socperc1
socperc2 ~~ 0 * socperc2
cancal1 ~~ 0 * cancal1
cancal2 ~~ 0 * cancal2
crdrot1 ~~ 0 * crdrot1
crdrot2 ~~ 0 * crdrot2
ship1 ~~ 0 * ship1
ship2 ~~ 0 * ship2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
ALTCPCE <- sem(ALTCPCE, data = OSBOA, group = "zygf", orthogonal = T)
ALTCPDE <- '
# Factors and Loadings
g1 =~ NA * faces1 + c(gl1, gl1) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(gl2, gl2) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(gl3, gl3) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(gl4, gl4) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(gl5, gl5) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(gl6, gl6) * ship1 + c(0.5, 0.5) * ship1
g2 =~ NA * faces2 + c(gl1, gl1) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(gl2, gl2) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(gl3, gl3) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(gl4, gl4) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(gl5, gl5) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(gl6, gl6) * ship2 + c(0.5, 0.5) * ship2
dg1 =~ NA * g1 + c(d, d) * g1
dg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
DS1 =~ NA * faces1 + c(d1, d1) * faces1
DS2 =~ NA * faces2 + c(d1, d1) * faces2
ES1 =~ NA * faces1 + c(e1, e1) * faces1
ES2 =~ NA * faces2 + c(e1, e1) * faces2
DS3 =~ NA * logical1 + c(d2, d2) * logical1
DS4 =~ NA * logical2 + c(d2, d2) * logical2
ES3 =~ NA * logical1 + c(e2, e2) * logical1
ES4 =~ NA * logical2 + c(e2, e2) * logical2
DS5 =~ NA * socperc1 + c(d3, d3) * socperc1
DS6 =~ NA * socperc2 + c(d3, d3) * socperc2
ES5 =~ NA * socperc1 + c(e3, e3) * socperc1
ES6 =~ NA * socperc2 + c(e3, e3) * socperc2
DS7 =~ NA * cancal1 + c(d4, d4) * cancal1
DS8 =~ NA * cancal2 + c(d4, d4) * cancal2
ES7 =~ NA * cancal1 + c(e4, e4) * cancal1
ES8 =~ NA * cancal2 + c(e4, e4) * cancal2
DS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
DS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
ES9 =~ NA * crdrot1 + c(e5, e5) * crdrot1
ES10 =~ NA * crdrot2 + c(e5, e5) * crdrot2
DS11 =~ NA * ship1 + c(d6, d6) * ship1
DS12 =~ NA * ship2 + c(d6, d6) * ship2
ES11 =~ NA * ship1 + c(e6, e6) * ship1
ES12 =~ NA * ship2 + c(e6, e6) * ship2
# Covariances
dg1 ~~ c(1, 0.25) * dg2
eg1 ~~ 0 * eg2
DS1 ~~ c(1, 0.25) * DS2
ES1 ~~ 0 * ES2
DS3 ~~ c(1, 0.25) * DS4
ES3 ~~ 0 * ES4
DS5 ~~ c(1, 0.25) * DS6
ES5 ~~ 0 * ES6
DS7 ~~ c(1, 0.25) * DS8
ES7 ~~ 0 * ES8
DS9 ~~ c(1, 0.25) * DS10
ES9 ~~ 0 * ES10
DS11 ~~ c(1, 0.25) * DS12
ES11 ~~ 0 * ES12
# Variances
dg1 ~~ 1 * dg1
dg2 ~~ 1 * dg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
DS11 ~~ 1 * DS11
DS12 ~~ 1 * DS12
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ES11 ~~ 1 * ES11
ES12 ~~ 1 * ES12
faces1 ~~ 0 * faces1
faces2 ~~ 0 * faces2
logical1 ~~ 0 * logical1
logical2 ~~ 0 * logical2
socperc1 ~~ 0 * socperc1
socperc2 ~~ 0 * socperc2
cancal1 ~~ 0 * cancal1
cancal2 ~~ 0 * cancal2
crdrot1 ~~ 0 * crdrot1
crdrot2 ~~ 0 * crdrot2
ship1 ~~ 0 * ship1
ship2 ~~ 0 * ship2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
ALTCPDE <- sem(ALTCPDE, data = OSBOA, group = "zygf", orthogonal = T)
# Independent Pathway Models
ALTIPACE <- '
# Loadings
AL1 =~ NA * faces1 + c(a, a) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(ab, ab) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(ac, ac) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(ad, ad) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(ae, ae) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(af, af) * ship1 + c(0.5, 0.5) * ship1
AL2 =~ NA * faces2 + c(a, a) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(ab, ab) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(ac, ac) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(ad, ad) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(ae, ae) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(af, af) * ship2 + c(0.5, 0.5) * ship2
CL1 =~ NA * faces1 + c(d, d) * faces1 + NA * logical1 + c(db, db) * logical1 + NA * socperc1 + c(dc, dc) * socperc1 + NA * cancal1 + c(dd, dd) * cancal1 + NA * crdrot1 + c(de, de) * crdrot1 + NA * ship1 + c(df, df) * ship1
CL2 =~ NA * faces2 + c(d, d) * faces2 + NA * logical2 + c(db, db) * logical2 + NA * socperc2 + c(dc, dc) * socperc2 + NA * cancal2 + c(dd, dd) * cancal2 + NA * crdrot2 + c(de, de) * crdrot2 + NA * ship2 + c(df, df) * ship2
EL1 =~ NA * faces1 + c(el, el) * faces1 + NA * logical1 + c(elb, elb) * logical1 + NA * socperc1 + c(elc, elc) * socperc1 + NA * cancal1 + c(eld, eld) * cancal1 + NA * crdrot1 + c(ee, ee) * crdrot1 + NA * ship1 + c(ef, ef) * ship1
EL2 =~ NA * faces2 + c(el, el) * faces2 + NA * logical2 + c(elb, elb) * logical2 + NA * socperc2 + c(elc, elc) * socperc2 + NA * cancal2 + c(eld, eld) * cancal2 + NA * crdrot2 + c(ee, ee) * crdrot2 + NA * ship2 + c(ef, ef) * ship2
AS1 =~ NA * faces1 + c(a1, a1) * faces1 + c(0.5, 0.5) * faces1
AS2 =~ NA * faces2 + c(a1, a1) * faces2 + c(0.5, 0.5) * faces2
CS1 =~ NA * faces1 + c(d1, d1) * faces1
CS2 =~ NA * faces2 + c(d1, d1) * faces2
AS3 =~ NA * logical1 + c(a2, a2) * logical1 + c(0.5, 0.5) * logical1
AS4 =~ NA * logical2 + c(a2, a2) * logical2 + c(0.5, 0.5) * logical2
CS3 =~ NA * logical1 + c(d2, d2) * logical1
CS4 =~ NA * logical2 + c(d2, d2) * logical2
AS5 =~ NA * socperc1 + c(a3, a3) * socperc1 + c(0.5, 0.5) * socperc1
AS6 =~ NA * socperc2 + c(a3, a3) * socperc2 + c(0.5, 0.5) * socperc2
CS5 =~ NA * socperc1 + c(d3, d3) * socperc1
CS6 =~ NA * socperc2 + c(d3, d3) * socperc2
AS7 =~ NA * cancal1 + c(a4, a4) * cancal1 + c(0.5, 0.5) * cancal1
AS8 =~ NA * cancal2 + c(a4, a4) * cancal2 + c(0.5, 0.5) * cancal2
CS7 =~ NA * cancal1 + c(d4, d4) * cancal1
CS8 =~ NA * cancal2 + c(d4, d4) * cancal2
AS9 =~ NA * crdrot1 + c(a5, a5) * crdrot1 + c(0.5, 0.5) * crdrot1
AS10 =~ NA * crdrot2 + c(a5, a5) * crdrot2 + c(0.5, 0.5) * crdrot2
CS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
CS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
AS11 =~ NA * ship1 + c(a6, a6) * ship1 + c(0.5, 0.5) * ship1
AS12 =~ NA * ship2 + c(a6, a6) * ship2 + c(0.5, 0.5) * ship2
CS11 =~ NA * ship1 + c(d6, d6) * ship1
CS12 =~ NA * ship2 + c(d6, d6) * ship2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
AL1 ~~ 0 * CL1 + 0 * CL2
AL2 ~~ 0 * CL1 + 0 * CL2
CL1 ~~ c(1, 1) * CL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * CS1 + 0 * CS2
AS2 ~~ 0 * CS1 + 0 * CS2
CS1 ~~ c(1, 1) * CS2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * CS3 + 0 * CS4
AS4 ~~ 0 * CS3 + 0 * CS4
CS3 ~~ c(1, 1) * CS4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * CS5 + 0 * CS6
AS6 ~~ 0 * CS5 + 0 * CS6
CS5 ~~ c(1, 1) * CS6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * CS7 + 0 * CS8
AS8 ~~ 0 * CS7 + 0 * CS8
CS7 ~~ c(1, 1) * CS8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * CS9 + 0 * CS10
AS10 ~~ 0 * CS9 + 0 * CS10
CS9 ~~ c(1, 1) * CS10
AS11 ~~ c(1, 0.5) * AS12
AS11 ~~ 0 * CS11 + 0 * CS12
AS12 ~~ 0 * CS11 + 0 * CS12
CS11 ~~ c(1, 1) * CS12
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
CL1 ~~ 1 * CL1
CL2 ~~ 1 * CL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
AS11 ~~ 1 * AS11
AS12 ~~ 1 * AS12
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
CS11 ~~ 1 * CS11
CS12 ~~ 1 * CS12
faces1 ~~ c(e2, e2) * faces1
faces2 ~~ c(e2, e2) * faces2
logical1 ~~ c(e3, e3) * logical1
logical2 ~~ c(e3, e3) * logical2
socperc1 ~~ c(e4, e4) * socperc1
socperc2 ~~ c(e4, e4) * socperc2
cancal1 ~~ c(e5, e5) * cancal1
cancal2 ~~ c(e5, e5) * cancal2
crdrot1 ~~ c(e6, e6) * crdrot1
crdrot2 ~~ c(e6, e6) * crdrot2
ship1 ~~ c(e7, e7) * ship1
ship2 ~~ c(e7, e7) * ship2'
ALTIPACE <- sem(ALTIPACE, data = OSBOA, group = "zygf", orthogonal = T)
ALTIPADE <- '
# Loadings
AL1 =~ NA * faces1 + c(a, a) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(ab, ab) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(ac, ac) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(ad, ad) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(ae, ae) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(af, af) * ship1 + c(0.5, 0.5) * ship1
AL2 =~ NA * faces2 + c(a, a) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(ab, ab) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(ac, ac) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(ad, ad) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(ae, ae) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(af, af) * ship2 + c(0.5, 0.5) * ship2
DL1 =~ NA * faces1 + c(d, d) * faces1 + NA * logical1 + c(db, db) * logical1 + NA * socperc1 + c(dc, dc) * socperc1 + NA * cancal1 + c(dd, dd) * cancal1 + NA * crdrot1 + c(de, de) * crdrot1 + NA * ship1 + c(df, df) * ship1
DL2 =~ NA * faces2 + c(d, d) * faces2 + NA * logical2 + c(db, db) * logical2 + NA * socperc2 + c(dc, dc) * socperc2 + NA * cancal2 + c(dd, dd) * cancal2 + NA * crdrot2 + c(de, de) * crdrot2 + NA * ship2 + c(df, df) * ship2
EL1 =~ NA * faces1 + c(el, el) * faces1 + NA * logical1 + c(elb, elb) * logical1 + NA * socperc1 + c(elc, elc) * socperc1 + NA * cancal1 + c(eld, eld) * cancal1 + NA * crdrot1 + c(ee, ee) * crdrot1 + NA * ship1 + c(ef, ef) * ship1
EL2 =~ NA * faces2 + c(el, el) * faces2 + NA * logical2 + c(elb, elb) * logical2 + NA * socperc2 + c(elc, elc) * socperc2 + NA * cancal2 + c(eld, eld) * cancal2 + NA * crdrot2 + c(ee, ee) * crdrot2 + NA * ship2 + c(ef, ef) * ship2
AS1 =~ NA * faces1 + c(a1, a1) * faces1 + c(0.5, 0.5) * faces1
AS2 =~ NA * faces2 + c(a1, a1) * faces2 + c(0.5, 0.5) * faces2
DS1 =~ NA * faces1 + c(d1, d1) * faces1
DS2 =~ NA * faces2 + c(d1, d1) * faces2
AS3 =~ NA * logical1 + c(a2, a2) * logical1 + c(0.5, 0.5) * logical1
AS4 =~ NA * logical2 + c(a2, a2) * logical2 + c(0.5, 0.5) * logical2
DS3 =~ NA * logical1 + c(d2, d2) * logical1
DS4 =~ NA * logical2 + c(d2, d2) * logical2
AS5 =~ NA * socperc1 + c(a3, a3) * socperc1 + c(0.5, 0.5) * socperc1
AS6 =~ NA * socperc2 + c(a3, a3) * socperc2 + c(0.5, 0.5) * socperc2
DS5 =~ NA * socperc1 + c(d3, d3) * socperc1
DS6 =~ NA * socperc2 + c(d3, d3) * socperc2
AS7 =~ NA * cancal1 + c(a4, a4) * cancal1 + c(0.5, 0.5) * cancal1
AS8 =~ NA * cancal2 + c(a4, a4) * cancal2 + c(0.5, 0.5) * cancal2
DS7 =~ NA * cancal1 + c(d4, d4) * cancal1
DS8 =~ NA * cancal2 + c(d4, d4) * cancal2
AS9 =~ NA * crdrot1 + c(a5, a5) * crdrot1 + c(0.5, 0.5) * crdrot1
AS10 =~ NA * crdrot2 + c(a5, a5) * crdrot2 + c(0.5, 0.5) * crdrot2
DS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
DS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
AS11 =~ NA * ship1 + c(a6, a6) * ship1 + c(0.5, 0.5) * ship1
AS12 =~ NA * ship2 + c(a6, a6) * ship2 + c(0.5, 0.5) * ship2
DS11 =~ NA * ship1 + c(d6, d6) * ship1
DS12 =~ NA * ship2 + c(d6, d6) * ship2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
AL1 ~~ 0 * DL1 + 0 * DL2
AL2 ~~ 0 * DL1 + 0 * DL2
DL1 ~~ c(1, 0.25) * DL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * DS1 + 0 * DS2
AS2 ~~ 0 * DS1 + 0 * DS2
DS1 ~~ c(1, 0.25) * DS2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * DS3 + 0 * DS4
AS4 ~~ 0 * DS3 + 0 * DS4
DS3 ~~ c(1, 0.25) * DS4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * DS5 + 0 * DS6
AS6 ~~ 0 * DS5 + 0 * DS6
DS5 ~~ c(1, 0.25) * DS6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * DS7 + 0 * DS8
AS8 ~~ 0 * DS7 + 0 * DS8
DS7 ~~ c(1, 0.25) * DS8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * DS9 + 0 * DS10
AS10 ~~ 0 * DS9 + 0 * DS10
DS9 ~~ c(1, 0.25) * DS10
AS11 ~~ c(1, 0.5) * AS12
AS11 ~~ 0 * DS11 + 0 * DS12
AS12 ~~ 0 * DS11 + 0 * DS12
DS11 ~~ c(1, 0.25) * DS12
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
DL1 ~~ 1 * DL1
DL2 ~~ 1 * DL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
AS11 ~~ 1 * AS11
AS12 ~~ 1 * AS12
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
DS11 ~~ 1 * DS11
DS12 ~~ 1 * DS12
faces1 ~~ c(e2, e2) * faces1
faces2 ~~ c(e2, e2) * faces2
logical1 ~~ c(e3, e3) * logical1
logical2 ~~ c(e3, e3) * logical2
socperc1 ~~ c(e4, e4) * socperc1
socperc2 ~~ c(e4, e4) * socperc2
cancal1 ~~ c(e5, e5) * cancal1
cancal2 ~~ c(e5, e5) * cancal2
crdrot1 ~~ c(e6, e6) * crdrot1
crdrot2 ~~ c(e6, e6) * crdrot2
ship1 ~~ c(e7, e7) * ship1
ship2 ~~ c(e7, e7) * ship2'
ALTIPADE <- sem(ALTIPADE, data = OSBOA, group = "zygf", orthogonal = T)
ALTIPAE <- '
# Loadings
AL1 =~ NA * faces1 + c(a, a) * faces1 + c(0.5, 0.5) * faces1 + NA * logical1 + c(ab, ab) * logical1 + c(0.5, 0.5) * logical1 + NA * socperc1 + c(ac, ac) * socperc1 + c(0.5, 0.5) * socperc1 + NA * cancal1 + c(ad, ad) * cancal1 + c(0.5, 0.5) * cancal1 + NA * crdrot1 + c(ae, ae) * crdrot1 + c(0.5, 0.5) * crdrot1 + NA * ship1 + c(af, af) * ship1 + c(0.5, 0.5) * ship1
AL2 =~ NA * faces2 + c(a, a) * faces2 + c(0.5, 0.5) * faces2 + NA * logical2 + c(ab, ab) * logical2 + c(0.5, 0.5) * logical2 + NA * socperc2 + c(ac, ac) * socperc2 + c(0.5, 0.5) * socperc2 + NA * cancal2 + c(ad, ad) * cancal2 + c(0.5, 0.5) * cancal2 + NA * crdrot2 + c(ae, ae) * crdrot2 + c(0.5, 0.5) * crdrot2 + NA * ship2 + c(af, af) * ship2 + c(0.5, 0.5) * ship2
EL1 =~ NA * faces1 + c(el, el) * faces1 + NA * logical1 + c(elb, elb) * logical1 + NA * socperc1 + c(elc, elc) * socperc1 + NA * cancal1 + c(eld, eld) * cancal1 + NA * crdrot1 + c(ee, ee) * crdrot1 + NA * ship1 + c(ef, ef) * ship1
EL2 =~ NA * faces2 + c(el, el) * faces2 + NA * logical2 + c(elb, elb) * logical2 + NA * socperc2 + c(elc, elc) * socperc2 + NA * cancal2 + c(eld, eld) * cancal2 + NA * crdrot2 + c(ee, ee) * crdrot2 + NA * ship2 + c(ef, ef) * ship2
AS1 =~ NA * faces1 + c(a1, a1) * faces1 + c(0.5, 0.5) * faces1
AS2 =~ NA * faces2 + c(a1, a1) * faces2 + c(0.5, 0.5) * faces2
AS3 =~ NA * logical1 + c(a2, a2) * logical1 + c(0.5, 0.5) * logical1
AS4 =~ NA * logical2 + c(a2, a2) * logical2 + c(0.5, 0.5) * logical2
AS5 =~ NA * socperc1 + c(a3, a3) * socperc1 + c(0.5, 0.5) * socperc1
AS6 =~ NA * socperc2 + c(a3, a3) * socperc2 + c(0.5, 0.5) * socperc2
AS7 =~ NA * cancal1 + c(a4, a4) * cancal1 + c(0.5, 0.5) * cancal1
AS8 =~ NA * cancal2 + c(a4, a4) * cancal2 + c(0.5, 0.5) * cancal2
AS9 =~ NA * crdrot1 + c(a5, a5) * crdrot1 + c(0.5, 0.5) * crdrot1
AS10 =~ NA * crdrot2 + c(a5, a5) * crdrot2 + c(0.5, 0.5) * crdrot2
AS11 =~ NA * ship1 + c(a6, a6) * ship1 + c(0.5, 0.5) * ship1
AS12 =~ NA * ship2 + c(a6, a6) * ship2 + c(0.5, 0.5) * ship2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS3 ~~ c(1, 0.5) * AS4
AS5 ~~ c(1, 0.5) * AS6
AS7 ~~ c(1, 0.5) * AS8
AS9 ~~ c(1, 0.5) * AS10
AS11 ~~ c(1, 0.5) * AS12
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
AS11 ~~ 1 * AS11
AS12 ~~ 1 * AS12
faces1 ~~ c(e2, e2) * faces1
faces2 ~~ c(e2, e2) * faces2
logical1 ~~ c(e3, e3) * logical1
logical2 ~~ c(e3, e3) * logical2
socperc1 ~~ c(e4, e4) * socperc1
socperc2 ~~ c(e4, e4) * socperc2
cancal1 ~~ c(e5, e5) * cancal1
cancal2 ~~ c(e5, e5) * cancal2
crdrot1 ~~ c(e6, e6) * crdrot1
crdrot2 ~~ c(e6, e6) * crdrot2
ship1 ~~ c(e7, e7) * ship1
ship2 ~~ c(e7, e7) * ship2'
ALTIPAE <- sem(ALTIPAE, data = OSBOA, group = "zygf", orthogonal = T)
ALTIPCE <- '
# Loadings
CL1 =~ NA * faces1 + c(d, d) * faces1 + NA * logical1 + c(db, db) * logical1 + NA * socperc1 + c(dc, dc) * socperc1 + NA * cancal1 + c(dd, dd) * cancal1 + NA * crdrot1 + c(de, de) * crdrot1 + NA * ship1 + c(df, df) * ship1
CL2 =~ NA * faces2 + c(d, d) * faces2 + NA * logical2 + c(db, db) * logical2 + NA * socperc2 + c(dc, dc) * socperc2 + NA * cancal2 + c(dd, dd) * cancal2 + NA * crdrot2 + c(de, de) * crdrot2 + NA * ship2 + c(df, df) * ship2
EL1 =~ NA * faces1 + c(el, el) * faces1 + NA * logical1 + c(elb, elb) * logical1 + NA * socperc1 + c(elc, elc) * socperc1 + NA * cancal1 + c(eld, eld) * cancal1 + NA * crdrot1 + c(ee, ee) * crdrot1 + NA * ship1 + c(ef, ef) * ship1
EL2 =~ NA * faces2 + c(el, el) * faces2 + NA * logical2 + c(elb, elb) * logical2 + NA * socperc2 + c(elc, elc) * socperc2 + NA * cancal2 + c(eld, eld) * cancal2 + NA * crdrot2 + c(ee, ee) * crdrot2 + NA * ship2 + c(ef, ef) * ship2
CS1 =~ NA * faces1 + c(d1, d1) * faces1
CS2 =~ NA * faces2 + c(d1, d1) * faces2
CS3 =~ NA * logical1 + c(d2, d2) * logical1
CS4 =~ NA * logical2 + c(d2, d2) * logical2
CS5 =~ NA * socperc1 + c(d3, d3) * socperc1
CS6 =~ NA * socperc2 + c(d3, d3) * socperc2
CS7 =~ NA * cancal1 + c(d4, d4) * cancal1
CS8 =~ NA * cancal2 + c(d4, d4) * cancal2
CS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
CS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
CS11 =~ NA * ship1 + c(d6, d6) * ship1
CS12 =~ NA * ship2 + c(d6, d6) * ship2
# Covariances
CL1 ~~ c(1, 1) * CL2
EL1 ~~ c(0, 0) * EL2
CS1 ~~ c(1, 1) * CS2
CS3 ~~ c(1, 1) * CS4
CS5 ~~ c(1, 1) * CS6
CS7 ~~ c(1, 1) * CS8
CS9 ~~ c(1, 1) * CS10
CS11 ~~ c(1, 1) * CS12
# Variances
CL1 ~~ 1 * CL1
CL2 ~~ 1 * CL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
CS11 ~~ 1 * CS11
CS12 ~~ 1 * CS12
faces1 ~~ c(e2, e2) * faces1
faces2 ~~ c(e2, e2) * faces2
logical1 ~~ c(e3, e3) * logical1
logical2 ~~ c(e3, e3) * logical2
socperc1 ~~ c(e4, e4) * socperc1
socperc2 ~~ c(e4, e4) * socperc2
cancal1 ~~ c(e5, e5) * cancal1
cancal2 ~~ c(e5, e5) * cancal2
crdrot1 ~~ c(e6, e6) * crdrot1
crdrot2 ~~ c(e6, e6) * crdrot2
ship1 ~~ c(e7, e7) * ship1
ship2 ~~ c(e7, e7) * ship2'
ALTIPCE <- sem(ALTIPCE, data = OSBOA, group = "zygf", orthogonal = T)
ALTIPDE <- '
# Loadings
DL1 =~ NA * faces1 + c(d, d) * faces1 + NA * logical1 + c(db, db) * logical1 + NA * socperc1 + c(dc, dc) * socperc1 + NA * cancal1 + c(dd, dd) * cancal1 + NA * crdrot1 + c(de, de) * crdrot1 + NA * ship1 + c(df, df) * ship1
DL2 =~ NA * faces2 + c(d, d) * faces2 + NA * logical2 + c(db, db) * logical2 + NA * socperc2 + c(dc, dc) * socperc2 + NA * cancal2 + c(dd, dd) * cancal2 + NA * crdrot2 + c(de, de) * crdrot2 + NA * ship2 + c(df, df) * ship2
EL1 =~ NA * faces1 + c(el, el) * faces1 + NA * logical1 + c(elb, elb) * logical1 + NA * socperc1 + c(elc, elc) * socperc1 + NA * cancal1 + c(eld, eld) * cancal1 + NA * crdrot1 + c(ee, ee) * crdrot1 + NA * ship1 + c(ef, ef) * ship1
EL2 =~ NA * faces2 + c(el, el) * faces2 + NA * logical2 + c(elb, elb) * logical2 + NA * socperc2 + c(elc, elc) * socperc2 + NA * cancal2 + c(eld, eld) * cancal2 + NA * crdrot2 + c(ee, ee) * crdrot2 + NA * ship2 + c(ef, ef) * ship2
DS1 =~ NA * faces1 + c(d1, d1) * faces1
DS2 =~ NA * faces2 + c(d1, d1) * faces2
DS3 =~ NA * logical1 + c(d2, d2) * logical1
DS4 =~ NA * logical2 + c(d2, d2) * logical2
DS5 =~ NA * socperc1 + c(d3, d3) * socperc1
DS6 =~ NA * socperc2 + c(d3, d3) * socperc2
DS7 =~ NA * cancal1 + c(d4, d4) * cancal1
DS8 =~ NA * cancal2 + c(d4, d4) * cancal2
DS9 =~ NA * crdrot1 + c(d5, d5) * crdrot1
DS10 =~ NA * crdrot2 + c(d5, d5) * crdrot2
DS11 =~ NA * ship1 + c(d6, d6) * ship1
DS12 =~ NA * ship2 + c(d6, d6) * ship2
# Covariances
DL1 ~~ c(1, 0.25) * DL2
EL1 ~~ c(0, 0) * EL2
DS1 ~~ c(1, 0.25) * DS2
DS3 ~~ c(1, 0.25) * DS4
DS5 ~~ c(1, 0.25) * DS6
DS7 ~~ c(1, 0.25) * DS8
DS9 ~~ c(1, 0.25) * DS10
DS11 ~~ c(1, 0.25) * DS12
# Variances
DL1 ~~ 1 * DL1
DL2 ~~ 1 * DL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
DS11 ~~ 1 * DS11
DS12 ~~ 1 * DS12
faces1 ~~ c(e2, e2) * faces1
faces2 ~~ c(e2, e2) * faces2
logical1 ~~ c(e3, e3) * logical1
logical2 ~~ c(e3, e3) * logical2
socperc1 ~~ c(e4, e4) * socperc1
socperc2 ~~ c(e4, e4) * socperc2
cancal1 ~~ c(e5, e5) * cancal1
cancal2 ~~ c(e5, e5) * cancal2
crdrot1 ~~ c(e6, e6) * crdrot1
crdrot2 ~~ c(e6, e6) * crdrot2
ship1 ~~ c(e7, e7) * ship1
ship2 ~~ c(e7, e7) * ship2'
ALTIPDE <- sem(ALTIPDE, data = OSBOA, group = "zygf", orthogonal = T)
round(cbind("CP ACE" = fitMeasures(ALTCPACE, GFIT),
"CP ADE" = fitMeasures(ALTCPADE, GFIT),
"CP AE" = fitMeasures(ALTCPAE, GFIT),
"CP CE" = fitMeasures(ALTCPCE, GFIT),
"CP DE" = fitMeasures(ALTCPDE, GFIT),
"IP ACE" = fitMeasures(ALTIPACE, GFIT),
"IP ADE" = fitMeasures(ALTIPADE, GFIT),
"IP AE" = fitMeasures(ALTIPAE, GFIT),
"IP CE" = fitMeasures(ALTIPCE, GFIT),
"IP DE" = fitMeasures(ALTIPDE, GFIT)), 3)
## CP ACE CP ADE CP AE CP CE CP DE IP ACE
## logl -11918.311 -11927.645 -11927.828 -11946.286 -11946.694 -11908.541
## df 129.000 129.000 136.000 136.000 136.000 120.000
## npar 51.000 51.000 44.000 44.000 44.000 60.000
## aic 23938.621 23957.291 23943.655 23980.572 23981.388 23937.082
## bic 24122.517 24141.187 24102.310 24139.228 24140.044 24153.430
## chisq 153.376 172.045 172.410 209.327 210.143 133.836
## pvalue 0.070 0.007 0.019 0.000 0.000 0.183
## tli 0.980 0.964 0.971 0.942 0.941 0.988
## rmsea 0.037 0.050 0.044 0.063 0.063 0.029
## IP ADE IP AE IP CE IP DE
## logl -11912.867 -11923.792 -11945.521 -11943.242
## df 120.000 132.000 132.000 132.000
## npar 60.000 48.000 48.000 48.000
## aic 23945.734 23943.584 23987.043 23982.483
## bic 24162.082 24116.663 24160.121 24155.562
## chisq 142.488 164.339 207.797 203.238
## pvalue 0.079 0.030 0.000 0.000
## tli 0.980 0.973 0.938 0.942
## rmsea 0.037 0.042 0.065 0.063
It is difficult to choose between these models, so I will go through my process for adjudicating fits. First, if the faces subtest is dropped, the common pathway models dominate. With 154 + 118 = 272 pairs, our critical p is 0.0006. In the common pathway models, the zero-df comparisons eliminate the ADE, CE, and DE models. The comparison is thus between the ACE and AE models, and the p value for the \(\chi^2_7\) difference test is 0.008, which is conventionally significant, but not with scaling. The \(\Delta\)BIC is -20 for the AE model, but the \(\Delta\)AIC is 5, with shifts in TLI and RMSEA of -0.009 and 0.007. These model fit indices are not particularly useful for this sort of model comparison; all they do is reiterate that one or another model is less parsimonious, so I ascribe much less weight to them. There is only one large specific C effect, so perhaps AE is correct. Within the independent pathway models, the zero-df comparisons eliminate the same models. For the independent pathway models, the p value for the corresponding comparison is 0.002, with a \(\Delta\)BIC of -37 but a \(\Delta\)AIC of 6, and shifts in TLI and RMSEA of -0.015 and 0.013. The AE independent pathway model is relatively worse than the ACE, but it is still difficult to reject. Between the CP-AE and IP-AE, the p value is actually 0.059, so it is not even conventionally significant; thus, with the same AIC and a vastly superior BIC, the CP-AE trounces the IP-AE. Comparing the CP-ACE, CP-AE, and IP-ACE, the IP-ACE enjoys a \(\Delta\)AIC advantage of 1.539. At this level, it is typical to accept the more parsimonious model, but the p value for the \(\chi^2_9\) difference test is 0.021 - barely conventionally significant and not significant with scaling. The CP-ACE's BIC is far better, and the \(\Delta\)TLI and \(\Delta\)RMSEA are -0.008 and 0.008, all of which are considered marginal model differences. The comparison of the CP-AE against the IP-ACE has a p value of 0.001, with a far worse AIC but a better BIC for the CP-AE. It is honestly quite difficult to choose between these models, but it appears that, by diagnostic model fit indices, the CP-ACE is not much worse than the IP-ACE while being far more parsimonious. Substantive considerations also weigh in favor of the CP-ACE over the IP-ACE because the IP-ACE is actually overfitted: none of the common loadings in the CP model are nonsignificant, whereas three E, one C, and arguably (by scaled criteria; p = 0.036) one A common loading in the IP model are. With the AIC conforming to the \(\Delta\) < 2 rule, I think the CP-ACE model is probably the best bet. If there were more subtests and a higher-order model could be fit, this would allow the simultaneous assessment of faces with g and a group factor that might better explain its commonality with the other variables.
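To make the comparisons above concrete, here is a minimal sketch of how the nested tests and parsimony deltas can be reproduced with lavaan's lavTestLRT() and fitMeasures(), using the objects fitted above (the scaled tests mentioned would require refitting with a robust estimator such as MLM):
# Chi-square difference tests for nested models, e.g. CP-AE vs. CP-ACE and CP-ACE vs. IP-ACE
lavTestLRT(ALTCPACE, ALTCPAE)
lavTestLRT(ALTIPACE, ALTCPACE)
# AIC/BIC deltas for the CP-ACE relative to the IP-ACE (negative values favor the CP-ACE)
fitMeasures(ALTCPACE, c("aic", "bic")) - fitMeasures(ALTIPACE, c("aic", "bic"))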
# Common Pathway Models
NMTCPACE <- '
# Factors and Loadings
g1 =~ NA * ENG1 + c(gl1, gl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(gl2, gl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(gl3, gl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(gl4, gl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(gl5, gl5) * VOC1 + c(0.5, 0.5) * VOC1
g2 =~ NA * ENG2 + c(gl1, gl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(gl2, gl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(gl3, gl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(gl4, gl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(gl5, gl5) * VOC2 + c(0.5, 0.5) * VOC2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
cg1 =~ NA * g1 + c(d, d) * g1
cg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * ENG1 + c(a1, a1) * ENG1 + c(0.5, 0.5) * ENG1
AS2 =~ NA * ENG2 + c(a1, a1) * ENG2 + c(0.5, 0.5) * ENG2
CS1 =~ NA * ENG1 + c(d1, d1) * ENG1
CS2 =~ NA * ENG2 + c(d1, d1) * ENG2
ES1 =~ NA * ENG1 + c(e1, e1) * ENG1
ES2 =~ NA * ENG2 + c(e1, e1) * ENG2
AS3 =~ NA * MAT1 + c(a2, a2) * MAT1 + c(0.5, 0.5) * MAT1
AS4 =~ NA * MAT2 + c(a2, a2) * MAT2 + c(0.5, 0.5) * MAT2
CS3 =~ NA * MAT1 + c(d2, d2) * MAT1
CS4 =~ NA * MAT2 + c(d2, d2) * MAT2
ES3 =~ NA * MAT1 + c(e2, e2) * MAT1
ES4 =~ NA * MAT2 + c(e2, e2) * MAT2
AS5 =~ NA * SOC1 + c(a3, a3) * SOC1 + c(0.5, 0.5) * SOC1
AS6 =~ NA * SOC2 + c(a3, a3) * SOC2 + c(0.5, 0.5) * SOC2
CS5 =~ NA * SOC1 + c(d3, d3) * SOC1
CS6 =~ NA * SOC2 + c(d3, d3) * SOC2
ES5 =~ NA * SOC1 + c(e3, e3) * SOC1
ES6 =~ NA * SOC2 + c(e3, e3) * SOC2
AS7 =~ NA * NAT1 + c(a4, a4) * NAT1 + c(0.5, 0.5) * NAT1
AS8 =~ NA * NAT2 + c(a4, a4) * NAT2 + c(0.5, 0.5) * NAT2
CS7 =~ NA * NAT1 + c(d4, d4) * NAT1
CS8 =~ NA * NAT2 + c(d4, d4) * NAT2
ES7 =~ NA * NAT1 + c(e4, e4) * NAT1
ES8 =~ NA * NAT2 + c(e4, e4) * NAT2
AS9 =~ NA * VOC1 + c(a5, a5) * VOC1 + c(0.5, 0.5) * VOC1
AS10 =~ NA * VOC2 + c(a5, a5) * VOC2 + c(0.5, 0.5) * VOC2
CS9 =~ NA * VOC1 + c(d5, d5) * VOC1
CS10 =~ NA * VOC2 + c(d5, d5) * VOC2
ES9 =~ NA * VOC1 + c(e5, e5) * VOC1
ES10 =~ NA * VOC2 + c(e5, e5) * VOC2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
ag1 ~~ 0 * cg1 + 0 * cg2
ag2 ~~ 0 * cg1 + 0 * cg2
cg1 ~~ c(1, 1) * cg2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * CS1 + 0 * CS2
AS2 ~~ 0 * CS1 + 0 * CS2
CS1 ~~ c(1, 1) * CS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * CS3 + 0 * CS4
AS4 ~~ 0 * CS3 + 0 * CS4
CS3 ~~ c(1, 1) * CS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * CS5 + 0 * CS6
AS6 ~~ 0 * CS5 + 0 * CS6
CS5 ~~ c(1, 1) * CS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * CS7 + 0 * CS8
AS8 ~~ 0 * CS7 + 0 * CS8
CS7 ~~ c(1, 1) * CS8
ES7 ~~ 0 * ES8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * CS9 + 0 * CS10
AS10 ~~ 0 * CS9 + 0 * CS10
CS9 ~~ c(1, 1) * CS10
ES9 ~~ 0 * ES10
#Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
cg1 ~~ 1 * cg1
cg2 ~~ 1 * cg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ENG1 ~~ 0 * ENG1
ENG2 ~~ 0 * ENG2
MAT1 ~~ 0 * MAT1
MAT2 ~~ 0 * MAT2
SOC1 ~~ 0 * SOC1
SOC2 ~~ 0 * SOC2
NAT1 ~~ 0 * NAT1
NAT2 ~~ 0 * NAT2
VOC1 ~~ 0 * VOC1
VOC2 ~~ 0 * VOC2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
NMTCPACE <- sem(NMTCPACE, data = NMTP, group = "ZYG", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 3.283539e-13) is close to zero. This may be a symptom that the
## model is not identified.
NMTCPADE <- '
# Factors and Loadings
g1 =~ NA * ENG1 + c(gl1, gl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(gl2, gl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(gl3, gl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(gl4, gl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(gl5, gl5) * VOC1 + c(0.5, 0.5) * VOC1
g2 =~ NA * ENG2 + c(gl1, gl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(gl2, gl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(gl3, gl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(gl4, gl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(gl5, gl5) * VOC2 + c(0.5, 0.5) * VOC2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
dg1 =~ NA * g1 + c(d, d) * g1
dg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * ENG1 + c(a1, a1) * ENG1 + c(0.5, 0.5) * ENG1
AS2 =~ NA * ENG2 + c(a1, a1) * ENG2 + c(0.5, 0.5) * ENG2
DS1 =~ NA * ENG1 + c(d1, d1) * ENG1
DS2 =~ NA * ENG2 + c(d1, d1) * ENG2
ES1 =~ NA * ENG1 + c(e1, e1) * ENG1
ES2 =~ NA * ENG2 + c(e1, e1) * ENG2
AS3 =~ NA * MAT1 + c(a2, a2) * MAT1 + c(0.5, 0.5) * MAT1
AS4 =~ NA * MAT2 + c(a2, a2) * MAT2 + c(0.5, 0.5) * MAT2
DS3 =~ NA * MAT1 + c(d2, d2) * MAT1
DS4 =~ NA * MAT2 + c(d2, d2) * MAT2
ES3 =~ NA * MAT1 + c(e2, e2) * MAT1
ES4 =~ NA * MAT2 + c(e2, e2) * MAT2
AS5 =~ NA * SOC1 + c(a3, a3) * SOC1 + c(0.5, 0.5) * SOC1
AS6 =~ NA * SOC2 + c(a3, a3) * SOC2 + c(0.5, 0.5) * SOC2
DS5 =~ NA * SOC1 + c(d3, d3) * SOC1
DS6 =~ NA * SOC2 + c(d3, d3) * SOC2
ES5 =~ NA * SOC1 + c(e3, e3) * SOC1
ES6 =~ NA * SOC2 + c(e3, e3) * SOC2
AS7 =~ NA * NAT1 + c(a4, a4) * NAT1 + c(0.5, 0.5) * NAT1
AS8 =~ NA * NAT2 + c(a4, a4) * NAT2 + c(0.5, 0.5) * NAT2
DS7 =~ NA * NAT1 + c(d4, d4) * NAT1
DS8 =~ NA * NAT2 + c(d4, d4) * NAT2
ES7 =~ NA * NAT1 + c(e4, e4) * NAT1
ES8 =~ NA * NAT2 + c(e4, e4) * NAT2
AS9 =~ NA * VOC1 + c(a5, a5) * VOC1 + c(0.5, 0.5) * VOC1
AS10 =~ NA * VOC2 + c(a5, a5) * VOC2 + c(0.5, 0.5) * VOC2
DS9 =~ NA * VOC1 + c(d5, d5) * VOC1
DS10 =~ NA * VOC2 + c(d5, d5) * VOC2
ES9 =~ NA * VOC1 + c(e5, e5) * VOC1
ES10 =~ NA * VOC2 + c(e5, e5) * VOC2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
ag1 ~~ 0 * dg1 + 0 * dg2
ag2 ~~ 0 * dg1 + 0 * dg2
dg1 ~~ c(1, 0.25) * dg2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * DS1 + 0 * DS2
AS2 ~~ 0 * DS1 + 0 * DS2
DS1 ~~ c(1, 0.25) * DS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * DS3 + 0 * DS4
AS4 ~~ 0 * DS3 + 0 * DS4
DS3 ~~ c(1, 0.25) * DS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * DS5 + 0 * DS6
AS6 ~~ 0 * DS5 + 0 * DS6
DS5 ~~ c(1, 0.25) * DS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * DS7 + 0 * DS8
AS8 ~~ 0 * DS7 + 0 * DS8
DS7 ~~ c(1, 0.25) * DS8
ES7 ~~ 0 * ES8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * DS9 + 0 * DS10
AS10 ~~ 0 * DS9 + 0 * DS10
DS9 ~~ c(1, 0.25) * DS10
ES9 ~~ 0 * ES10
#Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
dg1 ~~ 1 * dg1
dg2 ~~ 1 * dg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ENG1 ~~ 0 * ENG1
ENG2 ~~ 0 * ENG2
MAT1 ~~ 0 * MAT1
MAT2 ~~ 0 * MAT2
SOC1 ~~ 0 * SOC1
SOC2 ~~ 0 * SOC2
NAT1 ~~ 0 * NAT1
NAT2 ~~ 0 * NAT2
VOC1 ~~ 0 * VOC1
VOC2 ~~ 0 * VOC2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
NMTCPADE <- sem(NMTCPADE, data = NMTP, group = "ZYG", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 3.007012e-13) is close to zero. This may be a symptom that the
## model is not identified.
NMTCPAE <- '
# Factors and Loadings
g1 =~ NA * ENG1 + c(gl1, gl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(gl2, gl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(gl3, gl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(gl4, gl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(gl5, gl5) * VOC1 + c(0.5, 0.5) * VOC1
g2 =~ NA * ENG2 + c(gl1, gl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(gl2, gl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(gl3, gl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(gl4, gl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(gl5, gl5) * VOC2 + c(0.5, 0.5) * VOC2
ag1 =~ NA * g1 + c(a, a) * g1 + c(0.5, 0.5) * g1
ag2 =~ NA * g2 + c(a, a) * g2 + c(0.5, 0.5) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
AS1 =~ NA * ENG1 + c(a1, a1) * ENG1 + c(0.5, 0.5) * ENG1
AS2 =~ NA * ENG2 + c(a1, a1) * ENG2 + c(0.5, 0.5) * ENG2
ES1 =~ NA * ENG1 + c(e1, e1) * ENG1
ES2 =~ NA * ENG2 + c(e1, e1) * ENG2
AS3 =~ NA * MAT1 + c(a2, a2) * MAT1 + c(0.5, 0.5) * MAT1
AS4 =~ NA * MAT2 + c(a2, a2) * MAT2 + c(0.5, 0.5) * MAT2
ES3 =~ NA * MAT1 + c(e2, e2) * MAT1
ES4 =~ NA * MAT2 + c(e2, e2) * MAT2
AS5 =~ NA * SOC1 + c(a3, a3) * SOC1 + c(0.5, 0.5) * SOC1
AS6 =~ NA * SOC2 + c(a3, a3) * SOC2 + c(0.5, 0.5) * SOC2
ES5 =~ NA * SOC1 + c(e3, e3) * SOC1
ES6 =~ NA * SOC2 + c(e3, e3) * SOC2
AS7 =~ NA * NAT1 + c(a4, a4) * NAT1 + c(0.5, 0.5) * NAT1
AS8 =~ NA * NAT2 + c(a4, a4) * NAT2 + c(0.5, 0.5) * NAT2
ES7 =~ NA * NAT1 + c(e4, e4) * NAT1
ES8 =~ NA * NAT2 + c(e4, e4) * NAT2
AS9 =~ NA * VOC1 + c(a5, a5) * VOC1 + c(0.5, 0.5) * VOC1
AS10 =~ NA * VOC2 + c(a5, a5) * VOC2 + c(0.5, 0.5) * VOC2
ES9 =~ NA * VOC1 + c(e5, e5) * VOC1
ES10 =~ NA * VOC2 + c(e5, e5) * VOC2
# Covariances
ag1 ~~ c(1, 0.5) * ag2
eg1 ~~ 0 * eg2
AS1 ~~ c(1, 0.5) * AS2
ES1 ~~ 0 * ES2
AS3 ~~ c(1, 0.5) * AS4
ES3 ~~ 0 * ES4
AS5 ~~ c(1, 0.5) * AS6
ES5 ~~ 0 * ES6
AS7 ~~ c(1, 0.5) * AS8
ES7 ~~ 0 * ES8
AS9 ~~ c(1, 0.5) * AS10
ES9 ~~ 0 * ES10
#Variances
ag1 ~~ 1 * ag1
ag2 ~~ 1 * ag2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ENG1 ~~ 0 * ENG1
ENG2 ~~ 0 * ENG2
MAT1 ~~ 0 * MAT1
MAT2 ~~ 0 * MAT2
SOC1 ~~ 0 * SOC1
SOC2 ~~ 0 * SOC2
NAT1 ~~ 0 * NAT1
NAT2 ~~ 0 * NAT2
VOC1 ~~ 0 * VOC1
VOC2 ~~ 0 * VOC2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
NMTCPAE <- sem(NMTCPAE, data = NMTP, group = "ZYG", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 4.675192e-16) is close to zero. This may be a symptom that the
## model is not identified.
NMTCPCE <- '
# Factors and Loadings
g1 =~ NA * ENG1 + c(gl1, gl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(gl2, gl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(gl3, gl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(gl4, gl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(gl5, gl5) * VOC1 + c(0.5, 0.5) * VOC1
g2 =~ NA * ENG2 + c(gl1, gl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(gl2, gl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(gl3, gl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(gl4, gl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(gl5, gl5) * VOC2 + c(0.5, 0.5) * VOC2
cg1 =~ NA * g1 + c(d, d) * g1
cg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
CS1 =~ NA * ENG1 + c(d1, d1) * ENG1
CS2 =~ NA * ENG2 + c(d1, d1) * ENG2
ES1 =~ NA * ENG1 + c(e1, e1) * ENG1
ES2 =~ NA * ENG2 + c(e1, e1) * ENG2
CS3 =~ NA * MAT1 + c(d2, d2) * MAT1
CS4 =~ NA * MAT2 + c(d2, d2) * MAT2
ES3 =~ NA * MAT1 + c(e2, e2) * MAT1
ES4 =~ NA * MAT2 + c(e2, e2) * MAT2
CS5 =~ NA * SOC1 + c(d3, d3) * SOC1
CS6 =~ NA * SOC2 + c(d3, d3) * SOC2
ES5 =~ NA * SOC1 + c(e3, e3) * SOC1
ES6 =~ NA * SOC2 + c(e3, e3) * SOC2
CS7 =~ NA * NAT1 + c(d4, d4) * NAT1
CS8 =~ NA * NAT2 + c(d4, d4) * NAT2
ES7 =~ NA * NAT1 + c(e4, e4) * NAT1
ES8 =~ NA * NAT2 + c(e4, e4) * NAT2
CS9 =~ NA * VOC1 + c(d5, d5) * VOC1
CS10 =~ NA * VOC2 + c(d5, d5) * VOC2
ES9 =~ NA * VOC1 + c(e5, e5) * VOC1
ES10 =~ NA * VOC2 + c(e5, e5) * VOC2
# Covariances
cg1 ~~ c(1, 1) * cg2
eg1 ~~ 0 * eg2
CS1 ~~ c(1, 1) * CS2
ES1 ~~ 0 * ES2
CS3 ~~ c(1, 1) * CS4
ES3 ~~ 0 * ES4
CS5 ~~ c(1, 1) * CS6
ES5 ~~ 0 * ES6
CS7 ~~ c(1, 1) * CS8
ES7 ~~ 0 * ES8
CS9 ~~ c(1, 1) * CS10
ES9 ~~ 0 * ES10
#Variances
cg1 ~~ 1 * cg1
cg2 ~~ 1 * cg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ENG1 ~~ 0 * ENG1
ENG2 ~~ 0 * ENG2
MAT1 ~~ 0 * MAT1
MAT2 ~~ 0 * MAT2
SOC1 ~~ 0 * SOC1
SOC2 ~~ 0 * SOC2
NAT1 ~~ 0 * NAT1
NAT2 ~~ 0 * NAT2
VOC1 ~~ 0 * VOC1
VOC2 ~~ 0 * VOC2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
NMTCPCE <- sem(NMTCPCE, data = NMTP, group = "ZYG", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 8.965642e-16) is close to zero. This may be a symptom that the
## model is not identified.
NMTCPDE <- '
# Factors and Loadings
g1 =~ NA * ENG1 + c(gl1, gl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(gl2, gl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(gl3, gl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(gl4, gl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(gl5, gl5) * VOC1 + c(0.5, 0.5) * VOC1
g2 =~ NA * ENG2 + c(gl1, gl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(gl2, gl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(gl3, gl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(gl4, gl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(gl5, gl5) * VOC2 + c(0.5, 0.5) * VOC2
dg1 =~ NA * g1 + c(d, d) * g1
dg2 =~ NA * g2 + c(d, d) * g2
eg1 =~ NA * g1 + c(q, q) * g1
eg2 =~ NA * g2 + c(q, q) * g2
DS1 =~ NA * ENG1 + c(d1, d1) * ENG1
DS2 =~ NA * ENG2 + c(d1, d1) * ENG2
ES1 =~ NA * ENG1 + c(e1, e1) * ENG1
ES2 =~ NA * ENG2 + c(e1, e1) * ENG2
DS3 =~ NA * MAT1 + c(d2, d2) * MAT1
DS4 =~ NA * MAT2 + c(d2, d2) * MAT2
ES3 =~ NA * MAT1 + c(e2, e2) * MAT1
ES4 =~ NA * MAT2 + c(e2, e2) * MAT2
DS5 =~ NA * SOC1 + c(d3, d3) * SOC1
DS6 =~ NA * SOC2 + c(d3, d3) * SOC2
ES5 =~ NA * SOC1 + c(e3, e3) * SOC1
ES6 =~ NA * SOC2 + c(e3, e3) * SOC2
DS7 =~ NA * NAT1 + c(d4, d4) * NAT1
DS8 =~ NA * NAT2 + c(d4, d4) * NAT2
ES7 =~ NA * NAT1 + c(e4, e4) * NAT1
ES8 =~ NA * NAT2 + c(e4, e4) * NAT2
DS9 =~ NA * VOC1 + c(d5, d5) * VOC1
DS10 =~ NA * VOC2 + c(d5, d5) * VOC2
ES9 =~ NA * VOC1 + c(e5, e5) * VOC1
ES10 =~ NA * VOC2 + c(e5, e5) * VOC2
# Covariances
dg1 ~~ c(1, 0.25) * dg2
eg1 ~~ 0 * eg2
DS1 ~~ c(1, 0.25) * DS2
ES1 ~~ 0 * ES2
DS3 ~~ c(1, 0.25) * DS4
ES3 ~~ 0 * ES4
DS5 ~~ c(1, 0.25) * DS6
ES5 ~~ 0 * ES6
DS7 ~~ c(1, 0.25) * DS8
ES7 ~~ 0 * ES8
DS9 ~~ c(1, 0.25) * DS10
ES9 ~~ 0 * ES10
#Variances
dg1 ~~ 1 * dg1
dg2 ~~ 1 * dg2
eg1 ~~ 1 * eg1
eg2 ~~ 1 * eg2
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
ES1 ~~ 1 * ES1
ES2 ~~ 1 * ES2
ES3 ~~ 1 * ES3
ES4 ~~ 1 * ES4
ES5 ~~ 1 * ES5
ES6 ~~ 1 * ES6
ES7 ~~ 1 * ES7
ES8 ~~ 1 * ES8
ES9 ~~ 1 * ES9
ES10 ~~ 1 * ES10
ENG1 ~~ 0 * ENG1
ENG2 ~~ 0 * ENG2
MAT1 ~~ 0 * MAT1
MAT2 ~~ 0 * MAT2
SOC1 ~~ 0 * SOC1
SOC2 ~~ 0 * SOC2
NAT1 ~~ 0 * NAT1
NAT2 ~~ 0 * NAT2
VOC1 ~~ 0 * VOC1
VOC2 ~~ 0 * VOC2
g1 ~~ 0 * g1
g2 ~~ 0 * g2'
NMTCPDE <- sem(NMTCPDE, data = NMTP, group = "ZYG", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 5.877268e-16) is close to zero. This may be a symptom that the
## model is not identified.
# Independent Pathway Models
NMTIPACE <- '
# Loadings
AL1 =~ NA * ENG1 + c(al1, al1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(al2, al2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(al3, al3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(al4, al4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(al5, al5) * VOC1 + c(0.5, 0.5) * VOC1
AL2 =~ NA * ENG2 + c(al1, al1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(al2, al2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(al3, al3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(al4, al4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(al5, al5) * VOC2 + c(0.5, 0.5) * VOC2
CL1 =~ NA * ENG1 + c(cl1, cl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(cl2, cl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(cl3, cl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(cl4, cl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(cl5, cl5) * VOC1 + c(0.5, 0.5) * VOC1
CL2 =~ NA * ENG2 + c(cl1, cl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(cl2, cl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(cl3, cl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(cl4, cl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(cl5, cl5) * VOC2 + c(0.5, 0.5) * VOC2
EL1 =~ NA * ENG1 + c(el1, el1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(el2, el2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(el3, el3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(el4, el4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(el5, el5) * VOC1 + c(0.5, 0.5) * VOC1
EL2 =~ NA * ENG2 + c(el1, el1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(el2, el2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(el3, el3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(el4, el4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(el5, el5) * VOC2 + c(0.5, 0.5) * VOC2
AS1 =~ NA * ENG1 + c(a1, a1) * ENG1 + c(0.5, 0.5) * ENG1
AS2 =~ NA * ENG2 + c(a1, a1) * ENG2 + c(0.5, 0.5) * ENG2
CS1 =~ NA * ENG1 + c(d1, d1) * ENG1
CS2 =~ NA * ENG2 + c(d1, d1) * ENG2
AS3 =~ NA * MAT1 + c(a2, a2) * MAT1 + c(0.5, 0.5) * MAT1
AS4 =~ NA * MAT2 + c(a2, a2) * MAT2 + c(0.5, 0.5) * MAT2
CS3 =~ NA * MAT1 + c(d2, d2) * MAT1
CS4 =~ NA * MAT2 + c(d2, d2) * MAT2
AS5 =~ NA * SOC1 + c(a3, a3) * SOC1 + c(0.5, 0.5) * SOC1
AS6 =~ NA * SOC2 + c(a3, a3) * SOC2 + c(0.5, 0.5) * SOC2
CS5 =~ NA * SOC1 + c(d3, d3) * SOC1
CS6 =~ NA * SOC2 + c(d3, d3) * SOC2
AS7 =~ NA * NAT1 + c(a4, a4) * NAT1 + c(0.5, 0.5) * NAT1
AS8 =~ NA * NAT2 + c(a4, a4) * NAT2 + c(0.5, 0.5) * NAT2
CS7 =~ NA * NAT1 + c(d4, d4) * NAT1
CS8 =~ NA * NAT2 + c(d4, d4) * NAT2
AS9 =~ NA * VOC1 + c(a5, a5) * VOC1 + c(0.5, 0.5) * VOC1
AS10 =~ NA * VOC2 + c(a5, a5) * VOC2 + c(0.5, 0.5) * VOC2
CS9 =~ NA * VOC1 + c(d5, d5) * VOC1
CS10 =~ NA * VOC2 + c(d5, d5) * VOC2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
AL1 ~~ 0 * CL1 + 0 * CL2
AL2 ~~ 0 * CL1 + 0 * CL2
CL1 ~~ c(1, 1) * CL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * CS1 + 0 * CS2
AS2 ~~ 0 * CS1 + 0 * CS2
CS1 ~~ c(1, 1) * CS2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * CS3 + 0 * CS4
AS4 ~~ 0 * CS3 + 0 * CS4
CS3 ~~ c(1, 1) * CS4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * CS5 + 0 * CS6
AS6 ~~ 0 * CS5 + 0 * CS6
CS5 ~~ c(1, 1) * CS6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * CS7 + 0 * CS8
AS8 ~~ 0 * CS7 + 0 * CS8
CS7 ~~ c(1, 1) * CS8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * CS9 + 0 * CS10
AS10 ~~ 0 * CS9 + 0 * CS10
CS9 ~~ c(1, 1) * CS10
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
CL1 ~~ 1 * CL1
CL2 ~~ 1 * CL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
ENG1 ~~ c(e2, e2) * ENG1
ENG2 ~~ c(e2, e2) * ENG2
MAT1 ~~ c(e3, e3) * MAT1
MAT2 ~~ c(e3, e3) * MAT2
SOC1 ~~ c(e4, e4) * SOC1
SOC2 ~~ c(e4, e4) * SOC2
NAT1 ~~ c(e5, e5) * NAT1
NAT2 ~~ c(e5, e5) * NAT2
VOC1 ~~ c(e6, e6) * VOC1
VOC2 ~~ c(e6, e6) * VOC2'
NMTIPACE <- sem(NMTIPACE, data = NMTP, group = "ZYG", orthogonal = T)
NMTIPADE <- '
# Loadings
AL1 =~ NA * ENG1 + c(al1, al1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(al2, al2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(al3, al3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(al4, al4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(al5, al5) * VOC1 + c(0.5, 0.5) * VOC1
AL2 =~ NA * ENG2 + c(al1, al1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(al2, al2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(al3, al3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(al4, al4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(al5, al5) * VOC2 + c(0.5, 0.5) * VOC2
DL1 =~ NA * ENG1 + c(dl1, dl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(dl2, dl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(dl3, dl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(dl4, dl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(dl5, dl5) * VOC1 + c(0.5, 0.5) * VOC1
DL2 =~ NA * ENG2 + c(dl1, dl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(dl2, dl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(dl3, dl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(dl4, dl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(dl5, dl5) * VOC2 + c(0.5, 0.5) * VOC2
EL1 =~ NA * ENG1 + c(el1, el1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(el2, el2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(el3, el3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(el4, el4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(el5, el5) * VOC1 + c(0.5, 0.5) * VOC1
EL2 =~ NA * ENG2 + c(el1, el1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(el2, el2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(el3, el3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(el4, el4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(el5, el5) * VOC2 + c(0.5, 0.5) * VOC2
AS1 =~ NA * ENG1 + c(a1, a1) * ENG1 + c(0.5, 0.5) * ENG1
AS2 =~ NA * ENG2 + c(a1, a1) * ENG2 + c(0.5, 0.5) * ENG2
DS1 =~ NA * ENG1 + c(d1, d1) * ENG1
DS2 =~ NA * ENG2 + c(d1, d1) * ENG2
AS3 =~ NA * MAT1 + c(a2, a2) * MAT1 + c(0.5, 0.5) * MAT1
AS4 =~ NA * MAT2 + c(a2, a2) * MAT2 + c(0.5, 0.5) * MAT2
DS3 =~ NA * MAT1 + c(d2, d2) * MAT1
DS4 =~ NA * MAT2 + c(d2, d2) * MAT2
AS5 =~ NA * SOC1 + c(a3, a3) * SOC1 + c(0.5, 0.5) * SOC1
AS6 =~ NA * SOC2 + c(a3, a3) * SOC2 + c(0.5, 0.5) * SOC2
DS5 =~ NA * SOC1 + c(d3, d3) * SOC1
DS6 =~ NA * SOC2 + c(d3, d3) * SOC2
AS7 =~ NA * NAT1 + c(a4, a4) * NAT1 + c(0.5, 0.5) * NAT1
AS8 =~ NA * NAT2 + c(a4, a4) * NAT2 + c(0.5, 0.5) * NAT2
DS7 =~ NA * NAT1 + c(d4, d4) * NAT1
DS8 =~ NA * NAT2 + c(d4, d4) * NAT2
AS9 =~ NA * VOC1 + c(a5, a5) * VOC1 + c(0.5, 0.5) * VOC1
AS10 =~ NA * VOC2 + c(a5, a5) * VOC2 + c(0.5, 0.5) * VOC2
DS9 =~ NA * VOC1 + c(d5, d5) * VOC1
DS10 =~ NA * VOC2 + c(d5, d5) * VOC2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
AL1 ~~ 0 * DL1 + 0 * DL2
AL2 ~~ 0 * DL1 + 0 * DL2
DL1 ~~ c(1, 0.25) * DL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS1 ~~ 0 * DS1 + 0 * DS2
AS2 ~~ 0 * DS1 + 0 * DS2
DS1 ~~ c(1, 0.25) * DS2
AS3 ~~ c(1, 0.5) * AS4
AS3 ~~ 0 * DS3 + 0 * DS4
AS4 ~~ 0 * DS3 + 0 * DS4
DS3 ~~ c(1, 0.25) * DS4
AS5 ~~ c(1, 0.5) * AS6
AS5 ~~ 0 * DS5 + 0 * DS6
AS6 ~~ 0 * DS5 + 0 * DS6
DS5 ~~ c(1, 0.25) * DS6
AS7 ~~ c(1, 0.5) * AS8
AS7 ~~ 0 * DS7 + 0 * DS8
AS8 ~~ 0 * DS7 + 0 * DS8
DS7 ~~ c(1, 0.25) * DS8
AS9 ~~ c(1, 0.5) * AS10
AS9 ~~ 0 * DS9 + 0 * DS10
AS10 ~~ 0 * DS9 + 0 * DS10
DS9 ~~ c(1, 0.25) * DS10
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
DL1 ~~ 1 * DL1
DL2 ~~ 1 * DL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
ENG1 ~~ c(e2, e2) * ENG1
ENG2 ~~ c(e2, e2) * ENG2
MAT1 ~~ c(e3, e3) * MAT1
MAT2 ~~ c(e3, e3) * MAT2
SOC1 ~~ c(e4, e4) * SOC1
SOC2 ~~ c(e4, e4) * SOC2
NAT1 ~~ c(e5, e5) * NAT1
NAT2 ~~ c(e5, e5) * NAT2
VOC1 ~~ c(e6, e6) * VOC1
VOC2 ~~ c(e6, e6) * VOC2'
NMTIPADE <- sem(NMTIPADE, data = NMTP, group = "ZYG", orthogonal = T)
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 1.329562e-12) is close to zero. This may be a symptom that the
## model is not identified.
NMTIPAE <- '
# Loadings
AL1 =~ NA * ENG1 + c(al1, al1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(al2, al2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(al3, al3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(al4, al4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(al5, al5) * VOC1 + c(0.5, 0.5) * VOC1
AL2 =~ NA * ENG2 + c(al1, al1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(al2, al2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(al3, al3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(al4, al4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(al5, al5) * VOC2 + c(0.5, 0.5) * VOC2
EL1 =~ NA * ENG1 + c(el1, el1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(el2, el2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(el3, el3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(el4, el4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(el5, el5) * VOC1 + c(0.5, 0.5) * VOC1
EL2 =~ NA * ENG2 + c(el1, el1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(el2, el2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(el3, el3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(el4, el4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(el5, el5) * VOC2 + c(0.5, 0.5) * VOC2
AS1 =~ NA * ENG1 + c(a1, a1) * ENG1 + c(0.5, 0.5) * ENG1
AS2 =~ NA * ENG2 + c(a1, a1) * ENG2 + c(0.5, 0.5) * ENG2
AS3 =~ NA * MAT1 + c(a2, a2) * MAT1 + c(0.5, 0.5) * MAT1
AS4 =~ NA * MAT2 + c(a2, a2) * MAT2 + c(0.5, 0.5) * MAT2
AS5 =~ NA * SOC1 + c(a3, a3) * SOC1 + c(0.5, 0.5) * SOC1
AS6 =~ NA * SOC2 + c(a3, a3) * SOC2 + c(0.5, 0.5) * SOC2
AS7 =~ NA * NAT1 + c(a4, a4) * NAT1 + c(0.5, 0.5) * NAT1
AS8 =~ NA * NAT2 + c(a4, a4) * NAT2 + c(0.5, 0.5) * NAT2
AS9 =~ NA * VOC1 + c(a5, a5) * VOC1 + c(0.5, 0.5) * VOC1
AS10 =~ NA * VOC2 + c(a5, a5) * VOC2 + c(0.5, 0.5) * VOC2
# Covariances
AL1 ~~ c(1, 0.5) * AL2
EL1 ~~ c(0, 0) * EL2
AS1 ~~ c(1, 0.5) * AS2
AS3 ~~ c(1, 0.5) * AS4
AS5 ~~ c(1, 0.5) * AS6
AS7 ~~ c(1, 0.5) * AS8
AS9 ~~ c(1, 0.5) * AS10
# Variances
AL1 ~~ 1 * AL1
AL2 ~~ 1 * AL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
AS1 ~~ 1 * AS1
AS2 ~~ 1 * AS2
AS3 ~~ 1 * AS3
AS4 ~~ 1 * AS4
AS5 ~~ 1 * AS5
AS6 ~~ 1 * AS6
AS7 ~~ 1 * AS7
AS8 ~~ 1 * AS8
AS9 ~~ 1 * AS9
AS10 ~~ 1 * AS10
ENG1 ~~ c(e2, e2) * ENG1
ENG2 ~~ c(e2, e2) * ENG2
MAT1 ~~ c(e3, e3) * MAT1
MAT2 ~~ c(e3, e3) * MAT2
SOC1 ~~ c(e4, e4) * SOC1
SOC2 ~~ c(e4, e4) * SOC2
NAT1 ~~ c(e5, e5) * NAT1
NAT2 ~~ c(e5, e5) * NAT2
VOC1 ~~ c(e6, e6) * VOC1
VOC2 ~~ c(e6, e6) * VOC2'
NMTIPAE <- sem(NMTIPAE, data = NMTP, group = "ZYG", orthogonal = T)
NMTIPCE <- '
# Loadings
CL1 =~ NA * ENG1 + c(cl1, cl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(cl2, cl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(cl3, cl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(cl4, cl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(cl5, cl5) * VOC1 + c(0.5, 0.5) * VOC1
CL2 =~ NA * ENG2 + c(cl1, cl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(cl2, cl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(cl3, cl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(cl4, cl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(cl5, cl5) * VOC2 + c(0.5, 0.5) * VOC2
EL1 =~ NA * ENG1 + c(el1, el1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(el2, el2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(el3, el3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(el4, el4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(el5, el5) * VOC1 + c(0.5, 0.5) * VOC1
EL2 =~ NA * ENG2 + c(el1, el1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(el2, el2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(el3, el3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(el4, el4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(el5, el5) * VOC2 + c(0.5, 0.5) * VOC2
CS1 =~ NA * ENG1 + c(d1, d1) * ENG1
CS2 =~ NA * ENG2 + c(d1, d1) * ENG2
CS3 =~ NA * MAT1 + c(d2, d2) * MAT1
CS4 =~ NA * MAT2 + c(d2, d2) * MAT2
CS5 =~ NA * SOC1 + c(d3, d3) * SOC1
CS6 =~ NA * SOC2 + c(d3, d3) * SOC2
CS7 =~ NA * NAT1 + c(d4, d4) * NAT1
CS8 =~ NA * NAT2 + c(d4, d4) * NAT2
CS9 =~ NA * VOC1 + c(d5, d5) * VOC1
CS10 =~ NA * VOC2 + c(d5, d5) * VOC2
# Covariances
CL1 ~~ c(1, 1) * CL2
EL1 ~~ c(0, 0) * EL2
CS1 ~~ c(1, 1) * CS2
CS3 ~~ c(1, 1) * CS4
CS5 ~~ c(1, 1) * CS6
CS7 ~~ c(1, 1) * CS8
CS9 ~~ c(1, 1) * CS10
# Variances
CL1 ~~ 1 * CL1
CL2 ~~ 1 * CL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
CS1 ~~ 1 * CS1
CS2 ~~ 1 * CS2
CS3 ~~ 1 * CS3
CS4 ~~ 1 * CS4
CS5 ~~ 1 * CS5
CS6 ~~ 1 * CS6
CS7 ~~ 1 * CS7
CS8 ~~ 1 * CS8
CS9 ~~ 1 * CS9
CS10 ~~ 1 * CS10
ENG1 ~~ c(e2, e2) * ENG1
ENG2 ~~ c(e2, e2) * ENG2
MAT1 ~~ c(e3, e3) * MAT1
MAT2 ~~ c(e3, e3) * MAT2
SOC1 ~~ c(e4, e4) * SOC1
SOC2 ~~ c(e4, e4) * SOC2
NAT1 ~~ c(e5, e5) * NAT1
NAT2 ~~ c(e5, e5) * NAT2
VOC1 ~~ c(e6, e6) * VOC1
VOC2 ~~ c(e6, e6) * VOC2'
NMTIPCE <- sem(NMTIPCE, data = NMTP, group = "ZYG", orthogonal = T)
NMTIPDE <- '
# Loadings
DL1 =~ NA * ENG1 + c(dl1, dl1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(dl2, dl2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(dl3, dl3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(dl4, dl4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(dl5, dl5) * VOC1 + c(0.5, 0.5) * VOC1
DL2 =~ NA * ENG2 + c(dl1, dl1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(dl2, dl2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(dl3, dl3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(dl4, dl4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(dl5, dl5) * VOC2 + c(0.5, 0.5) * VOC2
EL1 =~ NA * ENG1 + c(el1, el1) * ENG1 + c(0.5, 0.5) * ENG1 + NA * MAT1 + c(el2, el2) * MAT1 + c(0.5, 0.5) * MAT1 + NA * SOC1 + c(el3, el3) * SOC1 + c(0.5, 0.5) * SOC1 + NA * NAT1 + c(el4, el4) * NAT1 + c(0.5, 0.5) * NAT1 + NA * VOC1 + c(el5, el5) * VOC1 + c(0.5, 0.5) * VOC1
EL2 =~ NA * ENG2 + c(el1, el1) * ENG2 + c(0.5, 0.5) * ENG2 + NA * MAT2 + c(el2, el2) * MAT2 + c(0.5, 0.5) * MAT2 + NA * SOC2 + c(el3, el3) * SOC2 + c(0.5, 0.5) * SOC2 + NA * NAT2 + c(el4, el4) * NAT2 + c(0.5, 0.5) * NAT2 + NA * VOC2 + c(el5, el5) * VOC2 + c(0.5, 0.5) * VOC2
DS1 =~ NA * ENG1 + c(d1, d1) * ENG1
DS2 =~ NA * ENG2 + c(d1, d1) * ENG2
DS3 =~ NA * MAT1 + c(d2, d2) * MAT1
DS4 =~ NA * MAT2 + c(d2, d2) * MAT2
DS5 =~ NA * SOC1 + c(d3, d3) * SOC1
DS6 =~ NA * SOC2 + c(d3, d3) * SOC2
DS7 =~ NA * NAT1 + c(d4, d4) * NAT1
DS8 =~ NA * NAT2 + c(d4, d4) * NAT2
DS9 =~ NA * VOC1 + c(d5, d5) * VOC1
DS10 =~ NA * VOC2 + c(d5, d5) * VOC2
# Covariances
DL1 ~~ c(1, 0.25) * DL2
EL1 ~~ c(0, 0) * EL2
DS1 ~~ c(1, 0.25) * DS2
DS3 ~~ c(1, 0.25) * DS4
DS5 ~~ c(1, 0.25) * DS6
DS7 ~~ c(1, 0.25) * DS8
DS9 ~~ c(1, 0.25) * DS10
# Variances
DL1 ~~ 1 * DL1
DL2 ~~ 1 * DL2
EL1 ~~ 1 * EL1
EL2 ~~ 1 * EL2
DS1 ~~ 1 * DS1
DS2 ~~ 1 * DS2
DS3 ~~ 1 * DS3
DS4 ~~ 1 * DS4
DS5 ~~ 1 * DS5
DS6 ~~ 1 * DS6
DS7 ~~ 1 * DS7
DS8 ~~ 1 * DS8
DS9 ~~ 1 * DS9
DS10 ~~ 1 * DS10
ENG1 ~~ c(e2, e2) * ENG1
ENG2 ~~ c(e2, e2) * ENG2
MAT1 ~~ c(e3, e3) * MAT1
MAT2 ~~ c(e3, e3) * MAT2
SOC1 ~~ c(e4, e4) * SOC1
SOC2 ~~ c(e4, e4) * SOC2
NAT1 ~~ c(e5, e5) * NAT1
NAT2 ~~ c(e5, e5) * NAT2
VOC1 ~~ c(e6, e6) * VOC1
VOC2 ~~ c(e6, e6) * VOC2'
NMTIPDE <- sem(NMTIPDE, data = NMTP, group = "ZYG", orthogonal = T)
round(cbind("CP ACE" = fitMeasures(NMTCPACE, GFIT),
"CP ADE" = fitMeasures(NMTCPADE, GFIT),
"CP AE" = fitMeasures(NMTCPAE, GFIT),
"CP CE" = fitMeasures(NMTCPCE, GFIT),
"CP DE" = fitMeasures(NMTCPDE, GFIT),
"IP ACE" = fitMeasures(NMTIPACE, GFIT),
"IP ADE" = fitMeasures(NMTIPADE, GFIT),
"IP AE" = fitMeasures(NMTIPAE, GFIT),
"IP CE" = fitMeasures(NMTIPCE, GFIT),
"IP DE" = fitMeasures(NMTIPDE, GFIT)), 3)
## CP ACE CP ADE CP AE CP CE CP DE IP ACE
## logl -22543.998 -22571.511 -22571.679 -22628.638 -22629.882 -22467.297
## df 87.000 87.000 93.000 93.000 93.000 80.000
## npar 43.000 43.000 37.000 37.000 37.000 50.000
## aic 45173.996 45229.022 45217.359 45331.277 45333.765 45034.594
## bic 45377.482 45432.507 45392.451 45506.368 45508.856 45271.204
## chisq 310.060 365.086 365.422 479.340 481.828 156.657
## pvalue 0.000 0.000 0.000 0.000 0.000 0.000
## tli 0.965 0.957 0.960 0.944 0.944 0.987
## rmsea 0.078 0.087 0.084 0.100 0.100 0.048
## IP ADE IP AE IP CE IP DE
## logl -22487.454 -22563.520 -22626.673 -22620.986
## df 80.000 90.000 90.000 90.000
## npar 50.000 40.000 40.000 40.000
## aic 45074.909 45207.040 45333.346 45321.972
## bic 45311.519 45396.328 45522.635 45511.261
## chisq 196.972 349.103 475.410 464.035
## pvalue 0.000 0.000 0.000 0.000
## tli 0.980 0.961 0.942 0.944
## rmsea 0.059 0.083 0.101 0.100
In the Osborne dataset, the best-fitting independent pathway model (CE) fits about as well as the worst-fitting common pathway model (DE). For parsimony reasons, the CE model works better than the ACE model, even though the \(\chi^2\) test says it fits significantly worse (p = 0.004; the scaled critical p is 0.001, however). This may be partially a power issue and may change with parameter pruning. In the National Merit data, however, the ACE independent pathway model was clearly the best-fitting, by a massive margin.
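As a quick sketch of the margin just described, the AIC and BIC gaps between the National Merit IP-ACE and CP-ACE can be pulled directly from the fitted objects above:
# Negative values favor the independent pathway model (roughly -139 for AIC and -106 for BIC here)
fitMeasures(NMTIPACE, c("aic", "bic")) - fitMeasures(NMTCPACE, c("aic", "bic"))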
The ACE effects on g in the Osborne data were \(A^2 = 0.328, C^2 = 0.591\), and \(E^2 = 0.081\); in the second set of Osborne data, they were \(A^2 = 0.399, C^2 = 0.551\), and \(E^2 = 0.050\). The A, C, and E effects on g in the National Merit data would have been \(A^2 = 0.453, C^2 = 0.475\), and \(E^2 = 0.072\). Since the National Merit data did not, in fact, support a common pathway, it might be worthwhile to check how its Jensen effects work out.
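Before turning to the Jensen effects, here is a sketch of how the g variance components just quoted can be recovered from a fitted common pathway model, using the NMTCPACE object above: square the standardized loadings of the ag1/cg1/eg1 factors on g1, which sum to 1 because g1 carries no residual variance in these models.
# Squared standardized loadings of ag1, cg1, and eg1 on g1 give A^2, C^2, and E^2 for g
std <- standardizedSolution(NMTCPACE)
gp <- subset(std, op == "=~" & rhs == "g1" & group == 1)
round(setNames(gp$est.std^2, gp$lhs), 3)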
GL <- c(0.754, 0.716, 0.877, 0.767, 0.847)
AL <- c(0.526, 0.702, 0.611, 0.783, 0.464)
CL <- c(0.481, 0.259, 0.561, 0.279, 0.735)
EL <- c(0.226, 0.145, 0.237, 0.118, 0.286)
TC <- sqrt(AL^2 + CL^2 + EL^2)
dj <- data.frame("GL" = GL, "AL" = AL, "CL" = CL, "EL" = EL, "TC" = TC)
cor(dj); cor(dj, method = "spearman"); CONGO(GL, AL); CONGO(GL, CL); CONGO(GL, EL); CONGO(GL, TC)
## GL AL CL EL TC
## GL 1.0000000 -0.4634474 0.7952988 0.7043148 0.8219664
## AL -0.4634474 1.0000000 -0.8909166 -0.9534249 -0.2737397
## CL 0.7952988 -0.8909166 1.0000000 0.9734197 0.6598588
## EL 0.7043148 -0.9534249 0.9734197 1.0000000 0.4836471
## TC 0.8219664 -0.2737397 0.6598588 0.4836471 1.0000000
## GL AL CL EL TC
## GL 1.0 -0.3 0.8 0.6 0.8
## AL -0.3 1.0 -0.8 -0.9 -0.3
## CL 0.8 -0.8 1.0 0.9 0.7
## EL 0.6 -0.9 0.9 1.0 0.6
## TC 0.8 -0.3 0.7 0.6 1.0
## [1] 0.9737177
## [1] 0.9520201
## [1] 0.9691243
## [1] 0.998979
sum(TC^2)/5
## [1] 0.685281
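The CONGO() calls here use the congruence coefficient function presumably defined earlier on this page; the values above match Tucker's congruence coefficient, a minimal version of which would be:
# Tucker congruence coefficient; reproduces the first value above, CONGO(GL, AL), of about 0.974
congruence <- function(x, y) sum(x * y) / sqrt(sum(x^2) * sum(y^2))
congruence(GL, AL)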
There is an anti-Jensen effect for A (a negative correlation with the g loadings), and yet A was the most congruent with the g loadings; there are Jensen effects for C and E. One problem with Jensen effect analyses is that they confound common with total influences on test scores. So, first with just the residual influences, and then adding them to the common ones:
AL <- c(0.302, 0.380, 0.260, 0, 0.296)
CL <- c(0.395, 0.213, 0, 0.116, 0)
EL <- c(0.440, 0.479, 0.434, 0.531, 0.274)
TR <- sqrt(AL^2 + CL^2 + EL^2)
dj <- data.frame("GL" = GL, "AL" = AL, "CL" = CL, "EL" = EL, "TR" = TR)
cor(dj); cor(dj, method = "spearman"); CONGO(GL, AL); CONGO(GL, CL); CONGO(GL, EL); CONGO(GL, TR)
## GL AL CL EL TR
## GL 1.00000000 -0.05693622 -0.7674796 -0.5743929 -0.8113306
## AL -0.05693622 1.00000000 0.2177369 -0.4559437 0.2099262
## CL -0.76747956 0.21773691 1.0000000 0.3907585 0.8856847
## EL -0.57439291 -0.45594372 0.3907585 1.0000000 0.6773354
## TR -0.81133060 0.20992620 0.8856847 0.6773354 1.0000000
## GL AL CL EL TR
## GL 1.0000000 -0.7000000 -0.8720816 -0.6000000 -0.8000000
## AL -0.7000000 1.0000000 0.5642881 -0.1000000 0.5000000
## CL -0.8720816 0.5642881 1.0000000 0.5642881 0.9746794
## EL -0.6000000 -0.1000000 0.5642881 1.0000000 0.6000000
## TR -0.8000000 0.5000000 0.9746794 0.6000000 1.0000000
## [1] 0.881075
## [1] 0.6549413
## [1] 0.9693679
## [1] 0.9719783
sum(TR^2)/5
## [1] 0.3148208
AL <- c(sqrt(0.526^2 + 0.302^2), sqrt(0.702^2 + 0.380^2), sqrt(0.611^2 + 0.260^2), 0.783, sqrt(0.464^2 + 0.296^2))
CL <- c(sqrt(0.481^2 + 0.395^2), sqrt(0.259^2 + 0.213^2), 0.561, sqrt(0.279^2 + 0.116^2), 0.735)
EL <- c(sqrt(0.226^2 + 0.440^2), sqrt(0.145^2 + 0.479^2), sqrt(0.237^2 + 0.434^2), sqrt(0.118^2 + 0.531^2), sqrt(0.286^2 + 0.274^2))
dj <- data.frame("GL" = GL, "AL" = AL, "CL" = CL, "EL" = EL)
cor(dj); cor(dj, method = "spearman"); CONGO(GL, AL); CONGO(GL, CL); CONGO(GL, EL); CONGO(AL, CL); CONGO(AL, EL); CONGO(CL, EL)
## GL AL CL EL
## GL 1.0000000 -0.5915056 0.6150544 -0.4859072
## AL -0.5915056 1.0000000 -0.9910312 0.7879045
## CL 0.6150544 -0.9910312 1.0000000 -0.8229844
## EL -0.4859072 0.7879045 -0.8229844 1.0000000
## GL AL CL EL
## GL 1.0 -0.5 0.3 -0.6
## AL -0.5 1.0 -0.9 0.8
## CL 0.3 -0.9 1.0 -0.9
## EL -0.6 0.8 -0.9 1.0
## [1] 0.9808275
## [1] 0.9622601
## [1] 0.988522
## [1] 0.8975559
## [1] 0.9961421
## [1] 0.9203698
These correlational results are obvious given the prior result, but they are qualitatively like what Jensen effect analyses typically amount to. The (quite tired) point here is that even though a Jensen effect is observed for A in most cases and not for C or E, some part of the (genetic and) environmental variance will still affect g. More importantly, without a common pathway model, there is no reason to expect a Jensen effect unless it reflects, say, common genetic effects or something else that needs to be explained; but then, justifying g is harder. If no common pathway model holds, it stands to reason that g is somehow formative and the effect has only appeared because of its commonality with a single variance component, which is usually large.
It appears that there is, at least with these simplistic models, mixed evidence that g is "an entity" (see Shikishima et al., 2009). As a result, the concept of "genes for" g may be tenable, at least in some cases. C effects were quite large in these samples, but that may be due to their young ages (Bouchard, 2013); the large C effects are likely to fade and become A effects in older samples. I do not mean to imply that C is generally large: these samples were composed of children, and the best-fitting models may even shift to AE (the more typical result) if parameters are pruned or indicator-specific influences are allowed to covary. Pruned models may show slightly different results and could be pursued, but they are not advisable since they risk overfitting.
The "mixed" part of the evidence comes from the National Merit Twin results (and possibly the second set of Osborne results). They appeared to show evidence for an independent pathway model. In that sample, the test content was not varied, which may mean that the phenotypic factor was not really g (or was too "polluted" with specific variance), per se. Whatever the reason, it indicates that at the very least, the evidence based needs to be expanded to add some sureness. It is possible that the result is explained by the low commonality of some subtests (e.g. WCST, faces, in difference datasets), which may mean that, with a larger model including group factors, that would fit well.
Nikolasevic et al. (2020) recently compared common and independent pathway models in a similarly constrained sample, but their conclusions were too incautious. They had four measures - the WCST, TMT, a fluency measure (these had subtests), and the RAPM - and they found that an independent pathway model fit the data better than a common pathway model. This is fairly unsurprising, but they took it to mean that "the covariation of general cognitive ability and executive functions could be explained by shared genes". It is far more likely to have reflected a measurement problem. Firstly, the WCST is not a very good test. In the VES, I found that it didn't load on any factors and I dropped it from my analysis. In their analysis, the measure had a loading of 0.361 on the common genetic factor but otherwise wholly specific variance, of which only 12% was genetic, which leaves open the possibility of massive measurement error. They did not provide the factor correlations from their phenotypic CFA, so there is not much to do here. Moreover, their sample may have been range restricted (perhaps like the National Merit Twin sample?), resulting in a sort of SLODR effect. Another problem is that the RAPM is interpreted as g; individual indicators are not equivalent to constructs, so this is an error, and it thoroughly undermines the interpretation of the results given in the study. Yet another problem is the neglect or misinterpretation of prior evidence. Many researchers have construed relationships between, say, factors from small batteries, or observed scores from them, as evidence of independence between g and executive function (or, in particular, between g and general executive function), but this is an error: tested with sensible models in decent batteries, the result is that general EF and g are one and the same, and both sets of tests have predominantly general common variance (see, e.g., Floyd et al., 2010; https://rpubs.com/JLLJ/FRE20). This points to a more general problem with a lot of psychological research: when measurement is insufficient, researchers still seem to interpret their results as if they had good measurement models and conditions, as well as theoretical coherence, when they emphatically did not. This is the sort of incaution that pollutes the literature rather than contributing to it, because naive researchers will believe the interpretations, proper or not. It is disappointing that reviewers and editors fail to stop this variety of overreach (and also that they fail to require data reporting - when possible - and code reporting - which is always possible; as the data availability statement makes clear, data could have been reported in this case).
Shikishima, C., Hiraishi, K., Yamagata, S., Sugimoto, Y., Takemura, R., Ozaki, K., Okada, M., Toda, T., & Ando, J. (2009). Is g an entity? A Japanese twin study using syllogisms and intelligence tests. Intelligence, 37(3), 256-267. https://doi.org/10.1016/j.intell.2008.10.010
Franic, S., Dolan, C. V., Borsboom, D., Hudziak, J. J., van Beijsterveldt, C. E. M., & Boomsma, D. I. (2013). Can genetics help psychometrics? Improving dimensionality assessment through genetic factor modeling. Psychological Methods, 18(3), 406-433. https://doi.org/10.1037/a0032755
Franic, S., Borsboom, D., Dolan, C. V., & Boomsma, D. I. (2014). The Big Five Personality Traits: Psychological Entities or Statistical Constructs? Behavior Genetics, 44(6), 591-604. https://doi.org/10.1007/s10519-013-9625-7
Panizzon, M. S., Vuoksimaa, E., Spoon, K. M., Jacobson, K. C., Lyons, M. J., Franz, C. E., Xian, H., Vasilopoulos, T., & Kremen, W. S. (2014). Genetic and Environmental Influences of General Cognitive Ability: Is g a valid latent construct? Intelligence, 43, 65-76. https://doi.org/10.1016/j.intell.2014.01.008
Goldberger, A. S. (2002). Structural equation models in human behavior genetics. In Working papers (No. 22; Working Papers). Wisconsin Madison - Social Systems. https://ideas.repec.org/p/att/wimass/200222.html
Bouchard, T. J. (2013). The Wilson Effect: The increase in heritability of IQ with age. Twin Research and Human Genetics: The Official Journal of the International Society for Twin Studies, 16(5), 923-930. https://doi.org/10.1017/thg.2013.54
Nikolasevic, Z., Smederevac, S., Bugarski Ignjatovic, V., Kodzopeljic, J., Milovanovic, I., Prinz, M., & Budimlija, Z. (2020). Executive functions and intelligence: Are there genetic differences? Intelligence, 82, 101480. https://doi.org/10.1016/j.intell.2020.101480
Floyd, R. G., Bergeron, R., Hamilton, G., & Parra, G. R. (2010). How do executive functions fit with the Cattell-Horn-Carroll model? Some evidence from a joint factor analysis of the Delis-Kaplan Executive Function System and the Woodcock-Johnson III tests of cognitive abilities. Psychology in the Schools, 721-738. https://doi.org/10.1002/pits.20500
One reason C may appear higher than normal is assortative mating. This issue is widely discussed, and several estimates of the spousal correlation for IQ are available, such as Jensen's (1978) estimate of r = 0.42 or Bouchard & McGue's (1981) of 0.33 (see also Watkins & Meredith, 1981). There are even studies that have investigated the issue at the molecular level (Hugh-Jones et al., 2016; Robinson et al., 2017). The gist is that if parents mate to some degree on the basis of their own intelligence (or a correlated trait), they will inflate the resemblance of dizygotic twin pairs for that trait above the random-mating genetic correlation of 0.50. For instance, if heritability were truly 80% and the spouse correlation were 0.35, the actual dizygotic twin genetic correlation would be \(0.5 + 0.5h^2r_p = 0.5 + 0.5 \times 0.8 \times 0.35 = 0.64\) (Loehlin, Harden & Turkheimer, 2009; see the short check after the output below). Correcting for this is not simple, because values for both the expected heritability and the marital correlation are required. Martin (1978) corrected C for assortative mating with the following formula:
\[c^2_{adj} = c^2_R - h^2_R \frac{A}{1-A}\]
where \(h^2_R\) and \(c^2_R\) are the genetic and shared environmental variances estimated by the initial model, and A is the genetic correlation between spouses, given as a function of \(r_p\) (which Martin called \(\mu\)) and \(h^2_R\):
\[A = 0.5 \times \left [1 - \sqrt{1 - 4 \mu h^2_R} \right ]\]
So if we apply Jensen's and Bouchard & McGue's estimates to the Georgia and National Merit Twin results, noting that the variance removed from C is absorbed by A:
MC <- function(c, h, r){
# c = shared environmental variance (c^2_R), h = heritability (h^2_R), r = marital correlation (mu)
temp = max(1 - 4 * r * h, 0) # guard against a negative value under the square root
A = 0.5 * (1 - sqrt(temp)) # spousal genetic correlation
ca = c - h * A/(1-A) # adjusted shared environmental variance
return(ca)}
message(paste0("With a marital correlation of 0.42, C becomes ", MC(0.591, 0.328, 0.42), ". which means that A becomes ", 0.591 - MC(0.591, 0.328, 0.42) + 0.328), "; if the correlation is 0.33, C becomes ", MC(0.591, 0.328, 0.33), " and A becomes ", 0.591 - MC(0.591, 0.328, 0.33) + 0.328)
## With a marital correlation of 0.42, C becomes 0.526196160265843. which means that A becomes 0.392803839734157; if the correlation is 0.33, C becomes 0.5447887637091 and A becomes 0.3742112362909
message(paste0("With a marital correlation of 0.42, C becomes ", MC(0.551, 0.399, 0.42), ". which means that A becomes ", 0.551 - MC(0.551, 0.399, 0.42) + 0.399), "; if the correlation is 0.33, C becomes ", MC(0.551, 0.399, 0.33), " and A becomes ", 0.551 - MC(0.551, 0.399, 0.33) + 0.399)
## With a marital correlation of 0.42, C becomes 0.443068658155341. which means that A becomes 0.506931341844659; if the correlation is 0.33, C becomes 0.477246299876425 and A becomes 0.472753700123575
message(paste0("With a marital correlation of 0.42, C becomes ", MC(0.475, 0.453, 0.42), ". which means that A becomes ", 0.475 - MC(0.475, 0.453, 0.42) + 0.453), "; if the correlation is 0.33, C becomes ", MC(0.475, 0.453, 0.33), " and A becomes ", 0.475 - MC(0.475, 0.453, 0.33) + 0.453)
## With a marital correlation of 0.42, C becomes 0.319470655480787. which means that A becomes 0.608529344519213; if the correlation is 0.33, C becomes 0.37355491302447 and A becomes 0.55444508697553
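As a quick sanity check on the worked example above (a heritability of 0.80 and a marital correlation of 0.35 implying a dizygotic genetic correlation of 0.64), here is a tiny helper of my own; the name dzrg is mine, not from the cited sources:

dzrg <- function(h2, rp) 0.5 + 0.5 * h2 * rp # expected DZ additive genetic correlation under assortative mating
dzrg(0.8, 0.35) # 0.64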
#Osborne
pmanumACE <- '
A1 =~ NA * pmanum1 + c(a, a) * pmanum1 + c(0.5, 0.5) * pmanum1
A2 =~ NA * pmanum2 + c(a, a) * pmanum2 + c(0.5, 0.5) * pmanum2
C1 =~ NA * pmanum1 + c(d, d) * pmanum1
C2 =~ NA * pmanum2 + c(d, d) * pmanum2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
pmanum1 ~~ c(e2, e2) * pmanum1
pmanum2 ~~ c(e2, e2) * pmanum2'
pmaverbACE <- '
A1 =~ NA * pmaverb1 + c(a, a) * pmaverb1 + c(0.5, 0.5) * pmaverb1
A2 =~ NA * pmaverb2 + c(a, a) * pmaverb2 + c(0.5, 0.5) * pmaverb2
C1 =~ NA * pmaverb1 + c(d, d) * pmaverb1
C2 =~ NA * pmaverb2 + c(d, d) * pmaverb2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
pmaverb1 ~~ c(e2, e2) * pmaverb1
pmaverb2 ~~ c(e2, e2) * pmaverb2'
pmareaACE <- '
A1 =~ NA * pmarea1 + c(a, a) * pmarea1 + c(0.5, 0.5) * pmarea1
A2 =~ NA * pmarea2 + c(a, a) * pmarea2 + c(0.5, 0.5) * pmarea2
C1 =~ NA * pmarea1 + c(d, d) * pmarea1
C2 =~ NA * pmarea2 + c(d, d) * pmarea2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
pmarea1 ~~ c(e2, e2) * pmarea1
pmarea2 ~~ c(e2, e2) * pmarea2'
pmaspaACE <- '
A1 =~ NA * pmaspa1 + c(a, a) * pmaspa1 + c(0.5, 0.5) * pmaspa1
A2 =~ NA * pmaspa2 + c(a, a) * pmaspa2 + c(0.5, 0.5) * pmaspa2
C1 =~ NA * pmaspa1 + c(d, d) * pmaspa1
C2 =~ NA * pmaspa2 + c(d, d) * pmaspa2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
pmaspa1 ~~ c(e2, e2) * pmaspa1
pmaspa2 ~~ c(e2, e2) * pmaspa2'
pmanumACE <- sem(pmanumACE, data = OSBO, group = "zygf")
pmaverbACE <- sem(pmaverbACE, data = OSBO, group = "zygf")
pmareaACE <- sem(pmareaACE, data = OSBO, group = "zygf")
pmaspaACE <- sem(pmaspaACE, data = OSBO, group = "zygf")
#Osborne Alternate
facesACE <- '
A1 =~ NA * faces1 + c(a, a) * faces1 + c(0.5, 0.5) * faces1
A2 =~ NA * faces2 + c(a, a) * faces2 + c(0.5, 0.5) * faces2
C1 =~ NA * faces1 + c(d, d) * faces1
C2 =~ NA * faces2 + c(d, d) * faces2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
faces1 ~~ c(e2, e2) * faces1
faces2 ~~ c(e2, e2) * faces2'
logicalACE <- '
A1 =~ NA * logical1 + c(a, a) * logical1 + c(0.5, 0.5) * logical1
A2 =~ NA * logical2 + c(a, a) * logical2 + c(0.5, 0.5) * logical2
C1 =~ NA * logical1 + c(d, d) * logical1
C2 =~ NA * logical2 + c(d, d) * logical2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
logical1 ~~ c(e2, e2) * logical1
logical2 ~~ c(e2, e2) * logical2'
socpercACE <- '
A1 =~ NA * socperc1 + c(a, a) * socperc1 + c(0.5, 0.5) * socperc1
A2 =~ NA * socperc2 + c(a, a) * socperc2 + c(0.5, 0.5) * socperc2
C1 =~ NA * socperc1 + c(d, d) * socperc1
C2 =~ NA * socperc2 + c(d, d) * socperc2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
socperc1 ~~ c(e2, e2) * socperc1
socperc2 ~~ c(e2, e2) * socperc2'
cancalACE <- '
A1 =~ NA * cancal1 + c(a, a) * cancal1 + c(0.5, 0.5) * cancal1
A2 =~ NA * cancal2 + c(a, a) * cancal2 + c(0.5, 0.5) * cancal2
C1 =~ NA * cancal1 + c(d, d) * cancal1
C2 =~ NA * cancal2 + c(d, d) * cancal2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
cancal1 ~~ c(e2, e2) * cancal1
cancal2 ~~ c(e2, e2) * cancal2'
crdrotACE <- '
A1 =~ NA * crdrot1 + c(a, a) * crdrot1 + c(0.5, 0.5) * crdrot1
A2 =~ NA * crdrot2 + c(a, a) * crdrot2 + c(0.5, 0.5) * crdrot2
C1 =~ NA * crdrot1 + c(d, d) * crdrot1
C2 =~ NA * crdrot2 + c(d, d) * crdrot2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
crdrot1 ~~ c(e2, e2) * crdrot1
crdrot2 ~~ c(e2, e2) * crdrot2'
shipACE <- '
A1 =~ NA * ship1 + c(a, a) * ship1 + c(0.5, 0.5) * ship1
A2 =~ NA * ship2 + c(a, a) * ship2 + c(0.5, 0.5) * ship2
C1 =~ NA * ship1 + c(d, d) * ship1
C2 =~ NA * ship2 + c(d, d) * ship2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
ship1 ~~ c(e2, e2) * ship1
ship2 ~~ c(e2, e2) * ship2'
facesACE <- sem(facesACE, data = OSBOA, group = "zygf")
logicalACE <- sem(logicalACE, data = OSBOA, group = "zygf")
## Warning in lav_model_vcov(lavmodel = lavmodel, lavsamplestats = lavsamplestats, : lavaan WARNING:
## The variance-covariance matrix of the estimated parameters (vcov)
## does not appear to be positive definite! The smallest eigenvalue
## (= 1.049085e-12) is close to zero. This may be a symptom that the
## model is not identified.
socpercACE <- sem(socpercACE, data = OSBOA, group = "zygf")
cancalACE <- sem(cancalACE, data = OSBOA, group = "zygf")
crdrotACE <- sem(crdrotACE, data = OSBOA, group = "zygf")
shipACE <- sem(shipACE, data = OSBOA, group = "zygf")
#National Merit
ENGACE <- '
A1 =~ NA * ENG1 + c(a, a) * ENG1 + c(0.5, 0.5) * ENG1
A2 =~ NA * ENG2 + c(a, a) * ENG2 + c(0.5, 0.5) * ENG2
C1 =~ NA * ENG1 + c(d, d) * ENG1
C2 =~ NA * ENG2 + c(d, d) * ENG2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
ENG1 ~~ c(e2, e2) * ENG1
ENG2 ~~ c(e2, e2) * ENG2'
MATACE <- '
A1 =~ NA * MAT1 + c(a, a) * MAT1 + c(0.5, 0.5) * MAT1
A2 =~ NA * MAT2 + c(a, a) * MAT2 + c(0.5, 0.5) * MAT2
C1 =~ NA * MAT1 + c(d, d) * MAT1
C2 =~ NA * MAT2 + c(d, d) * MAT2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
MAT1 ~~ c(e2, e2) * MAT1
MAT2 ~~ c(e2, e2) * MAT2'
SOCACE <- '
A1 =~ NA * SOC1 + c(a, a) * SOC1 + c(0.5, 0.5) * SOC1
A2 =~ NA * SOC2 + c(a, a) * SOC2 + c(0.5, 0.5) * SOC2
C1 =~ NA * SOC1 + c(d, d) * SOC1
C2 =~ NA * SOC2 + c(d, d) * SOC2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
SOC1 ~~ c(e2, e2) * SOC1
SOC2 ~~ c(e2, e2) * SOC2'
NATACE <- '
A1 =~ NA * NAT1 + c(a, a) * NAT1 + c(0.5, 0.5) * NAT1
A2 =~ NA * NAT2 + c(a, a) * NAT2 + c(0.5, 0.5) * NAT2
C1 =~ NA * NAT1 + c(d, d) * NAT1
C2 =~ NA * NAT2 + c(d, d) * NAT2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
NAT1 ~~ c(e2, e2) * NAT1
NAT2 ~~ c(e2, e2) * NAT2'
VOCACE <- '
A1 =~ NA * VOC1 + c(a, a) * VOC1 + c(0.5, 0.5) * VOC1
A2 =~ NA * VOC2 + c(a, a) * VOC2 + c(0.5, 0.5) * VOC2
C1 =~ NA * VOC1 + c(d, d) * VOC1
C2 =~ NA * VOC2 + c(d, d) * VOC2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2
A2 ~~ 0 * C1 + 0 * C2
C1 ~~ c(1, 1) * C2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
VOC1 ~~ c(e2, e2) * VOC1
VOC2 ~~ c(e2, e2) * VOC2'
ENGACE <- sem(ENGACE, data = NMTP, group = "ZYG")
MATACE <- sem(MATACE, data = NMTP, group = "ZYG")
SOCACE <- sem(SOCACE, data = NMTP, group = "ZYG")
NATACE <- sem(NATACE, data = NMTP, group = "ZYG")
VOCACE <- sem(VOCACE, data = NMTP, group = "ZYG")
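As a sketch of one way to recover the standardized A, C, and E values shown in the table below for a single subtest (squared standardized A and C loadings plus the standardized residual variance), lavaan's standardizedSolution() can be subset directly; the subsetting assumes the variable names and group layout above, and the standardized values may differ trivially between the two zygosity groups:

std <- standardizedSolution(pmanumACE)
A <- std[std$lhs == "A1" & std$op == "=~" & std$rhs == "pmanum1" & std$group == 1, "est.std"]^2
C <- std[std$lhs == "C1" & std$op == "=~" & std$rhs == "pmanum1" & std$group == 1, "est.std"]^2
E <- std[std$lhs == "pmanum1" & std$op == "~~" & std$rhs == "pmanum1" & std$group == 1, "est.std"]
round(c(A = A, C = C, E = E), 3)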
| Study | Subtest | A | C | E |
|---|---|---|---|---|
| Osborne | Numerical | 0.282 | 0.424 | 0.294 |
| Osborne | Verbal | 0.203 | 0.430 | 0.366 |
| Osborne | Reasoning | 0.381 | 0.456 | 0.164 |
| Osborne | Spatial | 0.360 | 0.237 | 0.403 |
| Osborne - A | Faces | 0.037 | 0.327 | 0.636 |
| Osborne - A | Logical | 0.000 | 0.454 | 0.546 |
| Osborne - A | Social Perception | 0.291 | 0.310 | 0.398 |
| Osborne - A | Cancellation | 0.331 | 0.365 | 0.304 |
| Osborne - A | Card Rotation | 0.429 | 0.305 | 0.266 |
| Osborne - A | Ships | 0.672 | 0.080 | 0.247 |
| Loehlin & Nichols | English | 0.341 | 0.413 | 0.246 |
| Loehlin & Nichols | Mathematics | 0.545 | 0.208 | 0.248 |
| Loehlin & Nichols | Social Science | 0.498 | 0.276 | 0.226 |
| Loehlin & Nichols | Natural Science | 0.254 | 0.434 | 0.311 |
| Loehlin & Nichols | Word Usage | 0.381 | 0.472 | 0.147 |
The univariate ACE results may clarify why the common and independent pathway models for the Osborne - A dataset were so marginally different: there is near-zero genetic variance for the Faces subtest, so genetic variance shared with all of the other subtests cannot play much of a role. In the independent pathway model, the common C and E factors load nonsignificantly on Faces, and its loading on the common A factor is only 0.365 (very similar to Nikolasevic et al.'s WCST result). This reinforces the point that a sufficient number of indicators and factors is required to compare the common and independent pathway models adequately. With assignment to a group factor, subtests like Faces - which sit at the lower end of g loadings, in many cases partly because of range restriction - would pose no problem. The same is evidently true of the Logical subtest, and perhaps the Ships subtest, if we accept an ACE model.
Something I've never seen used is the method introduced by Molenaar et al. (1990) for assessing the interactive portions of the A, C, and E factors. It requires a slight alteration that yields the same ACE models as above, but with the 'E' in 'ACE' modeled explicitly as a latent factor rather than as a residual. To get E's estimated influence in this parameterization, square its standardized loading, just as for the A and C loadings in the earlier models. Note that I have left parameters like C1 ~~ c(1, 1) \* .... in place for ease of conversion to an ADE model; they can obviously just be changed to C1 ~~ 1 \* ....
The gist here is that the A, C, and E factors, despite being statistically identified, can be the products of an interaction. A, for example, may not be a "pure" additive genetic factor; it may instead represent an interaction with C or E, and the same holds for those factors. This cannot be inferred from second-order moments, nor from the multivariate normality of the scores, which is unaffected. The fourth-order moments of the factor scores, however, can be used to test whether the factors have their nominal additive genetic and environmental interpretations. Molenaar et al. stated that, in "pure" cases (i.e., without interactions), \(E[X^4]\) will equal 3 and \(E[X^2Y^2]\) will equal 1; with interactions, they will equal 9 and 3, respectively. Partially interactive factors are also possible, with intermediate values. To the extent that \(E[X^4]\) lies between the pure and interactive values, the nominal factor's variance will be overestimated, but the factor interacting with it will not be underestimated - the interaction variance should be treated as separate. If an impure factor is found, it can be decomposed into an interactive and a traditionally interpreted component.
When A is the result of an interaction with C, \(E[A^4]\) will approach 9 and \(E[A^2C^2]\) will be around 3. When E results from an \(E \times G\) interaction, \(E[E^4]\) will be near 9 and \(E[E^2A^2]\) near 3; when it results from an \(E \times C\) interaction, \(E[E^4]\) will be near 9 and \(E[E^2C^2]\) near 3. For purely interactive factors, this approach is apparently quite accurate. When A or C result from interactions with E, a second E factor emerges, and similarly, if C is due to an interaction with A, something resembling a second genetic factor appears. These situations are not typically identifiable and would yield poor fit for an ACE model; alternatively, they could be modeled with independent pathways. One would typically perform this whole analysis in the multivariate factor case. The dizygotic twin factor scores are used for the obvious reason that only in the DZ group do A and C have different cross-twin correlations, so only there can their factor scores be distinguished; I have decided to average the twin 1 and twin 2 results after computing their expectations, but that choice is arbitrary.
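Before refitting, a minimal simulation (my own illustration, not part of Molenaar et al.'s procedure) shows where these expected values come from: the product of two independent standard normal variables already has unit variance, its fourth moment is about 9 rather than 3, and its squared cross-product with one of its components is about 3 rather than 1.

set.seed(1)
x <- rnorm(1e6); y <- rnorm(1e6) # two independent, "pure" standardized factors
p <- x * y # an interaction factor; Var(x * y) = 1, so it is already standardized
round(c(mean(x^4), mean(p^4)), 2) # roughly 3 for the pure factor and 9 for the interactive one
round(c(mean(x^2 * y^2), mean(p^2 * x^2)), 2) # roughly 1 for independent factors and 3 for the interaction

The National Merit English model, refit with E as an explicit latent factor, follows.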
ENGACE <- '
A1 =~ NA * ENG1 + c(a, a) * ENG1 + c(0.5, 0.5) * ENG1
A2 =~ NA * ENG2 + c(a, a) * ENG2 + c(0.5, 0.5) * ENG2
C1 =~ NA * ENG1 + c(d, d) * ENG1
C2 =~ NA * ENG2 + c(d, d) * ENG2
E1 =~ NA * ENG1 + c(f, f) * ENG1
E2 =~ NA * ENG2 + c(f, f) * ENG2
A1 ~~ c(1, 0.5) * A2
A1 ~~ 0 * C1 + 0 * C2 + 0 * E1 + 0 * E2
A2 ~~ 0 * C1 + 0 * C2 + 0 * E1 + 0 * E2
C1 ~~ c(1, 1) * C2 + 0 * E1 + 0 * E2
C2 ~~ 0 * E1 + 0 * E2
E1 ~~ 0 * E2
A1 ~~ 1 * A1
A2 ~~ 1 * A2
C1 ~~ 1 * C1
C2 ~~ 1 * C2
E1 ~~ 1 * E1
E2 ~~ 1 * E2
ENG1 ~~ 0 * ENG1
ENG2 ~~ 0 * ENG2'
ENGACE <- sem(ENGACE, data = NMTP, group = "ZYG")
react <- function(d, o = 2){
require(pander)
# regression factor scores; for a multi-group fit, lavPredict() returns one matrix per group
q <- lavPredict(d, method = "regression")
# keep only the o-th group's scores (by default the second group, the dizygotic twins)
names(q)[o] <- "2"
h <- cbind(q$'2')
# standardize the scores; columns follow the order the latent variables were defined: A1, A2, C1, C2, E1, E2
y <- scale(h)
# fourth moments (about 3 for a "pure" factor, approaching 9 for an interactive one)
G4A = mean(y[, 1]^4); C4 = mean(y[, 3]^4); E4A = mean(y[, 5]^4); G4B = mean(y[, 2]^4); E4B = mean(y[, 6]^4)
# cross moments (about 1 for independent factors, approaching 3 when one factor is an interaction with the other)
GCA = mean(y[, 1]^2 * y[, 3]^2); GCB = mean(y[, 2]^2 * y[, 4]^2)
EGA = mean(y[, 1]^2 * y[, 5]^2); EGB = mean(y[, 2]^2 * y[, 6]^2)
ECA = mean(y[, 3]^2 * y[, 5]^2); ECB = mean(y[, 4]^2 * y[, 6]^2)
# average the twin 1 and twin 2 versions of each moment (C4 is computed from C1 only)
G4 = mean(c(G4A, G4B)); E4 = mean(c(E4A, E4B)); GC = mean(c(GCA, GCB)); EG = mean(c(EGA, EGB)); EC = mean(c(ECA, ECB))
od <- data.frame(c(G4, C4, E4, GC, EG, EC)); names(od) <- NULL
row.names(od) <- c("$E[A^4]$", "$E[C^4]$", "$E[E^4]$", "$E[A^2C^2]$", "$E[E^2A^2]$", "$E[E^2C^2]$")
nm <- deparse(substitute(d))
pander(od, caption = nm)}
react(pmanumACE); react(pmaverbACE); react(pmareaACE); react(pmaspaACE)
## Loading required package: pander

| Moment | pmanumACE |
|---|---|
| \(E[A^4]\) | 2.82 |
| \(E[C^4]\) | 3.024 |
| \(E[E^4]\) | 2.802 |
| \(E[A^2C^2]\) | 2.383 |
| \(E[E^2A^2]\) | 2.283 |
| \(E[E^2C^2]\) | 1.31 |

| Moment | pmaverbACE |
|---|---|
| \(E[A^4]\) | 3.165 |
| \(E[C^4]\) | 2.793 |
| \(E[E^4]\) | 4.778 |
| \(E[A^2C^2]\) | 2.317 |
| \(E[E^2A^2]\) | 3.132 |
| \(E[E^2C^2]\) | 1.431 |

| Moment | pmareaACE |
|---|---|
| \(E[A^4]\) | 2.948 |
| \(E[C^4]\) | 2.838 |
| \(E[E^4]\) | 4.352 |
| \(E[A^2C^2]\) | 2.142 |
| \(E[E^2A^2]\) | 2.892 |
| \(E[E^2C^2]\) | 1.17 |

| Moment | pmaspaACE |
|---|---|
| \(E[A^4]\) | 2.524 |
| \(E[C^4]\) | 2.454 |
| \(E[E^4]\) | 2.981 |
| \(E[A^2C^2]\) | 2.115 |
| \(E[E^2A^2]\) | 2.233 |
| \(E[E^2C^2]\) | 1.317 |
react(facesACE); react(logicalACE); react(socpercACE); react(cancalACE); react(crdrotACE); react(shipACE)

| Moment | facesACE |
|---|---|
| \(E[A^4]\) | 2.326 |
| \(E[C^4]\) | 2.397 |
| \(E[E^4]\) | 2.475 |
| \(E[A^2C^2]\) | 2.072 |
| \(E[E^2A^2]\) | 1.931 |
| \(E[E^2C^2]\) | 1.243 |

| Moment | logicalACE |
|---|---|
| \(E[A^4]\) | 2.453 |
| \(E[C^4]\) | 2.577 |
| \(E[E^4]\) | 2.721 |
| \(E[A^2C^2]\) | 2.156 |
| \(E[E^2A^2]\) | 2.042 |
| \(E[E^2C^2]\) | 1.223 |

| Moment | socpercACE |
|---|---|
| \(E[A^4]\) | 3.23 |
| \(E[C^4]\) | 3.603 |
| \(E[E^4]\) | 3.035 |
| \(E[A^2C^2]\) | 2.996 |
| \(E[E^2A^2]\) | 2.46 |
| \(E[E^2C^2]\) | 1.628 |

| Moment | cancalACE |
|---|---|
| \(E[A^4]\) | 3.231 |
| \(E[C^4]\) | 2.884 |
| \(E[E^4]\) | 3.615 |
| \(E[A^2C^2]\) | 2.484 |
| \(E[E^2A^2]\) | 2.846 |
| \(E[E^2C^2]\) | 1.626 |

| Moment | crdrotACE |
|---|---|
| \(E[A^4]\) | 2.116 |
| \(E[C^4]\) | 2.191 |
| \(E[E^4]\) | 2.9 |
| \(E[A^2C^2]\) | 1.789 |
| \(E[E^2A^2]\) | 1.908 |
| \(E[E^2C^2]\) | 0.979 |

| Moment | shipACE |
|---|---|
| \(E[A^4]\) | 2.274 |
| \(E[C^4]\) | 2.728 |
| \(E[E^4]\) | 2.44 |
| \(E[A^2C^2]\) | 2.257 |
| \(E[E^2A^2]\) | 1.696 |
| \(E[E^2C^2]\) | 1.108 |
react(ENGACE); react(MATACE); react(SOCACE); react(NATACE); react(VOCACE)

| Moment | ENGACE |
|---|---|
| \(E[A^4]\) | 2.94 |
| \(E[C^4]\) | 2.943 |
| \(E[E^4]\) | 2.89 |
| \(E[A^2C^2]\) | 2.343 |
| \(E[E^2A^2]\) | 2.44 |
| \(E[E^2C^2]\) | 1.38 |

| Moment | MATACE |
|---|---|
| \(E[A^4]\) | 2.546 |
| \(E[C^4]\) | 2.692 |
| \(E[E^4]\) | 2.969 |
| \(E[A^2C^2]\) | 2.222 |
| \(E[E^2A^2]\) | 2.152 |
| \(E[E^2C^2]\) | 1.235 |

| Moment | SOCACE |
|---|---|
| \(E[A^4]\) | 2.649 |
| \(E[C^4]\) | 2.599 |
| \(E[E^4]\) | 2.859 |
| \(E[A^2C^2]\) | 2.148 |
| \(E[E^2A^2]\) | 2.275 |
| \(E[E^2C^2]\) | 1.293 |

| Moment | VOCACE |
|---|---|
| \(E[A^4]\) | 2.805 |
| \(E[C^4]\) | 2.83 |
| \(E[E^4]\) | 3.059 |
| \(E[A^2C^2]\) | 2.108 |
| \(E[E^2A^2]\) | 2.436 |
| \(E[E^2C^2]\) | 1.195 |
These values are, of course, subject to a lot of both model- and sampling-based error (fourth moments, and these cross-moments, are extremely sensitive to sampling error). With that in mind, there was no clear evidence of interaction in most of the univariate models, though there may have been some evidence from the Primary Mental Abilities (PMA) Verbal and Reasoning subtests, as well as Cancellation, that E was an \(E \times G\) factor. There is comparatively minor evidence that the Social Perception A was an \(A \times C\) factor. As hinted at above, using a multivariate factor will reduce the error somewhat. The multivariate A, C, and E factors are modeled for the common variance (i.e., ACE independent pathways); I will not attempt to examine the unique factor variances.
react(NMTIPACE); react(ALTIPACE); react(PMAIPACE)

| Moment | NMTIPACE |
|---|---|
| \(E[A^4]\) | 2.515 |
| \(E[C^4]\) | 2.993 |
| \(E[E^4]\) | 2.974 |
| \(E[A^2C^2]\) | 0.9765 |
| \(E[E^2A^2]\) | 1.219 |
| \(E[E^2C^2]\) | 1.126 |

| Moment | ALTIPACE |
|---|---|
| \(E[A^4]\) | 2.304 |
| \(E[C^4]\) | 2.778 |
| \(E[E^4]\) | 3.145 |
| \(E[A^2C^2]\) | 1.254 |
| \(E[E^2A^2]\) | 0.92 |
| \(E[E^2C^2]\) | 0.8383 |

| Moment | PMAIPACE |
|---|---|
| \(E[A^4]\) | 3.102 |
| \(E[C^4]\) | 2.945 |
| \(E[E^4]\) | 4.714 |
| \(E[A^2C^2]\) | 1.5 |
| \(E[E^2A^2]\) | 2.192 |
| \(E[E^2C^2]\) | 1.195 |
There is no clear evidence of interaction in the National Merit data or in the second set of Osborne data. In the Primary Mental Abilities data, however, \(E[E^4]\) is higher than expected and \(E[E^2A^2]\) is fairly high, so there may be evidence that the E factor is partially an \(E \times G\) factor there; if this is true, E will be slightly overestimated. Similar fourth-order moment methods can be used to detect dominance when it would normally be unidentifiable alongside C. Once I have access to a much larger sample (sampling error is extreme with these methods), I may add that here, along with the method for calculating the proportion of a factor score that is interactive versus as-intended. This can be extended trivially to the various residual and common factors in the fully parameterized (independent pathway) model or in the common pathway model; try it for the common pathway model - the result for the g-mediated factors does not differ much. For further reading, see Molenaar (2010).
Jensen, A. R. (1978). Genetic and behavioral effects on nonrandom mating. In R. T. Osborne, C. E. Noble, & N. Weyl (Eds.), Human variation: The biopsychology of age, race, and sex. Academic Press. https://www.scirp.org/(S(351jmbntvnsjt1aadkposzje))/reference/ReferencesPapers.aspx?ReferenceID=678803
Bouchard, T. J., & McGue, M. (1981). Familial studies of intelligence: A review. Science (New York, N.Y.), 212(4498), 1055-1059. https://doi.org/10.1126/science.7195071
Watkins, M. P., & Meredith, W. (1981). Spouse similarity in newlyweds with respect to specific cognitive abilities, socioeconomic status, and education. Behavior Genetics, 11(1), 1-21. https://doi.org/10.1007/BF01065824
Hugh-Jones, D., Verweij, K. J. H., St. Pourcain, B., & Abdellaoui, A. (2016). Assortative mating on educational attainment leads to genetic spousal resemblance for polygenic scores. Intelligence, 59, 103-108. https://doi.org/10.1016/j.intell.2016.08.005
Robinson, M. R., Kleinman, A., Graff, M., Vinkhuyzen, A. A. E., Couper, D., Miller, M. B., Peyrot, W. J., Abdellaoui, A., Zietsch, B. P., Nolte, I. M., van Vliet-Ostaptchouk, J. V., Snieder, H., Medland, S. E., Martin, N. G., Magnusson, P. K. E., Iacono, W. G., McGue, M., North, K. E., Yang, J., & Visscher, P. M. (2017). Genetic evidence of assortative mating in humans. Nature Human Behaviour, 1(1), 1-13. https://doi.org/10.1038/s41562-016-0016
Loehlin, J. C., Harden, K. P., & Turkheimer, E. (2009). The Effect of Assumptions About Parental Assortative Mating and Genotype-Income Correlation on Estimates of Genotype-Environment Interaction in the National Merit Twin Study. Behavior Genetics, 39(2), 165-169. https://doi.org/10.1007/s10519-008-9253-9
Martin, N. (1978). Genetics of Sexual and Social Attitudes in Twins. In Twin Research: Psychology and Methodology (pp. 13-23). Alan R. Liss, Inc.
Molenaar, P. C. M., Boomsma, D. I., Neeleman, D., Dolan, C. V., Rao, D. C., & Vogler, G. P. (1990). Using factor scores to detect G \(\times\) E interactive origin of "pure" genetic or environmental factors obtained in genetic covariance structure analysis. Genetic Epidemiology, 7(1), 93-100. https://doi.org/10.1002/gepi.1370070116
Molenaar, P. C. M. (2010). On the Limits of Standard Quantitative Genetic Modeling of Inter-Individual Variation. In Handbook of Developmental Science, Behavior, and Genetics (pp. 626-648). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781444327632.ch19