You will need the following packages for this assignment:
library(psych)
library(lavaan)
library(semTools)
library(semPlot)
library(GPArotation)
library(MCMCpack)
a) Use ‘4FactorCov2.txt’ and run a higher-order and a bifactor model with four lower-order factors.
F4 should be defined by V10, V11, & V12;
Use your output for the higher-order model and do an SL transformation.
Tabulate your results as shown in lab slides.
b) Use the psych package to run a Schmid–Leiman Transformation and report the results. Do the SL results match what you calculated from the higher-order model?
c) Compare your results from the SL model to bifactor models and explain the differences if any.
#Count the fields (columns) in the "4FactorCov2.txt" COVARIANCE matrix
max_col <- max(count.fields("4FactorCov2.txt", sep = " ") ) #returns 12
###Read in actual matrix
mat1 <- read.table("4FactorCov2.txt", sep=" ", fill=TRUE, col.names=1:max_col)
cov <- xpnd(mat1[lower.tri(mat1, diag=T)], max_col) #xpnd creates symmetric matrix nicely
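A quick sanity check (an optional addition, not part of the original lab code) to confirm the reconstructed matrix is 12 x 12 and symmetric before converting it:
dim(cov)          #should return 12 12
isSymmetric(cov)  #xpnd builds a symmetric matrix, so this should be TRUE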
Convert the covariance matrix to a correlation matrix
###Convert covariance matrix to CORRELATION matrix using 'cov2cor'
cor.mat<-cov2cor(cov)
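For intuition (a brief check added here, not in the original write-up): ‘cov2cor’ simply divides each covariance by the product of the corresponding standard deviations, which we can verify directly:
d <- sqrt(diag(cov))                   #standard deviations from the diagonal
all.equal(cor.mat, cov / outer(d, d))  #should be TRUE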
Sometimes R will not keep (or supply default) variable labels for correlation matrices. In that case, we can read in our own. I defined the variable names based on what was originally given in the homework and stored them in an object called ‘labels.’
labels<- c('v1', 'v2', 'v3', 'v4', 'v5', 'v6', 'v7', 'v8', 'v9', 'v10', 'v11', 'v12')
Now, using the ‘colnames’ command, we can assign our ‘labels’ object so that each column in the matrix has an actual variable name. Use ‘head’ to double-check that this worked correctly.
colnames(cor.mat) <- labels #for CORRELATION MATRIX
head(cor.mat)
## v1 v2 v3 v4 v5 v6 v7
## [1,] 1.0000000 0.8205194 0.7342802 0.7379216 0.7074975 0.7380231 0.6847249
## [2,] 0.8205194 1.0000000 0.6916956 0.6663752 0.6497231 0.6683099 0.6177171
## [3,] 0.7342802 0.6916956 1.0000000 0.6436019 0.6079965 0.6430714 0.5951489
## [4,] 0.7379216 0.6663752 0.6436019 1.0000000 0.8173749 0.8539891 0.7529774
## [5,] 0.7074975 0.6497231 0.6079965 0.8173749 1.0000000 0.8003800 0.7012569
## [6,] 0.7380231 0.6683099 0.6430714 0.8539891 0.8003800 1.0000000 0.7447529
## v8 v9 v10 v11 v12
## [1,] 0.6503193 0.7087937 0.6204803 0.5831740 0.4884983
## [2,] 0.5954041 0.6435145 0.5493752 0.5185293 0.4377008
## [3,] 0.5537917 0.6144130 0.4926688 0.4727914 0.3928089
## [4,] 0.7319530 0.7781770 0.5588333 0.4954283 0.4348045
## [5,] 0.6944542 0.7358334 0.5357233 0.4835502 0.4131786
## [6,] 0.7372121 0.7746764 0.5625954 0.5177142 0.4498539
colnames(cov) <- labels #for COVARIANCE MATRIX
HO.model<- 'F1 =~ v1 + v2 + v3
F2 =~ v4 + v5 + v6
F3 =~ v7 + v8 + v9
F4 =~ v10 + v11 + v12
G =~ F1 + F2 + F3 + F4'
HOmodel.cfa<- cfa(HO.model, std.lv=TRUE, orthogonal=TRUE,
sample.cov = cor.mat, sample.nobs = 1000) #remember: when fitting from a covariance/correlation matrix, we have to pass the matrix via 'sample.cov' and tell lavaan the sample size via 'sample.nobs'.
Extract the fit summary of the higher-order model:
summary(HOmodel.cfa, standardized=T, fit.measures=T)
## lavaan 0.6-3 ended normally after 61 iterations
##
## Optimization method NLMINB
## Number of free parameters 28
##
## Number of observations 1000
##
## Estimator ML
## Model Fit Test Statistic 131.236
## Degrees of freedom 50
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 10795.964
## Degrees of freedom 66
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.992
## Tucker-Lewis Index (TLI) 0.990
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -11688.895
## Loglikelihood unrestricted model (H1) -11623.277
##
## Number of free parameters 28
## Akaike (AIC) 23433.791
## Bayesian (BIC) 23571.208
## Sample-size adjusted Bayesian (BIC) 23482.278
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.040
## 90 Percent Confidence Interval 0.032 0.049
## P-value RMSEA <= 0.05 0.971
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.029
##
## Parameter Estimates:
##
## Information Expected
## Information saturated (h1) model Structured
## Standard Errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F1 =~
## v1 0.435 0.018 24.300 0.000 0.939 0.940
## v2 0.402 0.017 23.665 0.000 0.868 0.869
## v3 0.366 0.017 21.955 0.000 0.790 0.791
## F2 =~
## v4 0.215 0.025 8.661 0.000 0.927 0.927
## v5 0.203 0.024 8.634 0.000 0.876 0.876
## v6 0.213 0.025 8.661 0.000 0.920 0.920
## F3 =~
## v7 0.319 0.018 17.610 0.000 0.880 0.880
## v8 0.315 0.018 17.520 0.000 0.868 0.868
## v9 0.332 0.019 17.781 0.000 0.916 0.916
## F4 =~
## v10 0.592 0.023 25.906 0.000 0.868 0.868
## v11 0.531 0.022 24.152 0.000 0.779 0.779
## v12 0.472 0.022 21.370 0.000 0.693 0.693
## G =~
## F1 1.915 0.101 18.994 0.000 0.886 0.886
## F2 4.191 0.514 8.157 0.000 0.973 0.973
## F3 2.570 0.168 15.321 0.000 0.932 0.932
## F4 1.073 0.060 17.774 0.000 0.732 0.732
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .v1 0.117 0.011 10.374 0.000 0.117 0.117
## .v2 0.245 0.014 17.139 0.000 0.245 0.245
## .v3 0.374 0.019 19.706 0.000 0.374 0.375
## .v4 0.140 0.009 15.113 0.000 0.140 0.140
## .v5 0.232 0.012 18.532 0.000 0.232 0.232
## .v6 0.153 0.010 15.848 0.000 0.153 0.153
## .v7 0.225 0.013 17.214 0.000 0.225 0.225
## .v8 0.246 0.014 17.807 0.000 0.246 0.246
## .v9 0.161 0.011 14.507 0.000 0.161 0.161
## .v10 0.246 0.022 11.191 0.000 0.246 0.246
## .v11 0.393 0.024 16.431 0.000 0.393 0.393
## .v12 0.519 0.027 18.989 0.000 0.519 0.519
## F1 1.000 0.214 0.214
## F2 1.000 0.054 0.054
## F3 1.000 0.132 0.132
## F4 1.000 0.465 0.465
## G 1.000 1.000 1.000
semPaths(HOmodel.cfa, what="std", residuals=F)
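To do the SL transformation by hand from the higher-order output (part a), the standardized solution can be post-processed directly. The sketch below (an addition, not from the original lab) applies the usual SL algebra: each item's g loading is its standardized first-order loading times the standardized second-order loading of its factor, and its residualized specific loading is the first-order loading times sqrt(1 - second-order loading^2). It relies only on lavaan's standardizedSolution() output.
std    <- standardizedSolution(HOmodel.cfa)
first  <- subset(std, op == "=~" & lhs %in% c("F1","F2","F3","F4"))  #first-order loadings
second <- subset(std, op == "=~" & lhs == "G")                       #second-order loadings
gamma  <- setNames(second$est.std, second$rhs)                       #F1-F4 loadings on G
sl <- data.frame(item     = first$rhs,
                 factor   = first$lhs,
                 g        = first$est.std * gamma[first$lhs],
                 specific = first$est.std * sqrt(1 - gamma[first$lhs]^2))
sl[, c("g", "specific")] <- round(sl[, c("g", "specific")], 3)
sl
These hand-computed values can then be tabulated alongside the SL loadings from the psych package in part (b).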
Specify the bifactor model:
bfm.model<- 'G =~ v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8 + v9 + v10 + v11 + v12
F1 =~ v1 + v2 + v3
F2 =~ v4 + v5 + v6
F3 =~ v7 + v8 + v9
F4 =~ v10 + v11 + v12'
Fit the bifactor model:
bfm.cfa<- cfa(bfm.model, std.lv=T, orthogonal=T, sample.cov = cor.mat, sample.nobs = 1000)
Extract Results:
summary(bfm.cfa, standardized=T, fit.measures=T)
## lavaan 0.6-3 ended normally after 38 iterations
##
## Optimization method NLMINB
## Number of free parameters 36
##
## Number of observations 1000
##
## Estimator ML
## Model Fit Test Statistic 116.730
## Degrees of freedom 42
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 10795.964
## Degrees of freedom 66
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.993
## Tucker-Lewis Index (TLI) 0.989
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -11681.642
## Loglikelihood unrestricted model (H1) -11623.277
##
## Number of free parameters 36
## Akaike (AIC) 23435.285
## Bayesian (BIC) 23611.964
## Sample-size adjusted Bayesian (BIC) 23497.626
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.042
## 90 Percent Confidence Interval 0.033 0.051
## P-value RMSEA <= 0.05 0.918
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.028
##
## Parameter Estimates:
##
## Information Expected
## Information saturated (h1) model Structured
## Standard Errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## G =~
## v1 0.834 0.026 31.989 0.000 0.834 0.835
## v2 0.756 0.027 27.580 0.000 0.756 0.757
## v3 0.719 0.028 25.719 0.000 0.719 0.720
## v4 0.898 0.025 35.625 0.000 0.898 0.898
## v5 0.853 0.026 32.625 0.000 0.853 0.853
## v6 0.898 0.025 35.654 0.000 0.898 0.898
## v7 0.822 0.026 31.130 0.000 0.822 0.823
## v8 0.801 0.027 29.924 0.000 0.801 0.802
## v9 0.855 0.026 33.152 0.000 0.855 0.856
## v10 0.633 0.029 21.826 0.000 0.633 0.634
## v11 0.578 0.030 19.496 0.000 0.578 0.578
## v12 0.500 0.030 16.433 0.000 0.500 0.500
## F1 =~
## v1 0.413 0.026 16.147 0.000 0.413 0.414
## v2 0.456 0.029 15.673 0.000 0.456 0.457
## v3 0.322 0.028 11.495 0.000 0.322 0.322
## F2 =~
## v4 0.267 0.042 6.418 0.000 0.267 0.267
## v5 0.192 0.041 4.734 0.000 0.192 0.192
## v6 0.177 0.037 4.820 0.000 0.177 0.177
## F3 =~
## v7 0.308 0.028 10.906 0.000 0.308 0.309
## v8 0.351 0.030 11.695 0.000 0.351 0.352
## v9 0.316 0.027 11.664 0.000 0.316 0.316
## F4 =~
## v10 0.600 0.032 19.010 0.000 0.600 0.601
## v11 0.513 0.031 16.416 0.000 0.513 0.513
## v12 0.482 0.032 14.970 0.000 0.482 0.482
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## G ~~
## F1 0.000 0.000 0.000
## F2 0.000 0.000 0.000
## F3 0.000 0.000 0.000
## F4 0.000 0.000 0.000
## F1 ~~
## F2 0.000 0.000 0.000
## F3 0.000 0.000 0.000
## F4 0.000 0.000 0.000
## F2 ~~
## F3 0.000 0.000 0.000
## F4 0.000 0.000 0.000
## F3 ~~
## F4 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .v1 0.132 0.016 8.272 0.000 0.132 0.132
## .v2 0.219 0.021 10.627 0.000 0.219 0.219
## .v3 0.378 0.019 19.700 0.000 0.378 0.378
## .v4 0.122 0.018 6.772 0.000 0.122 0.122
## .v5 0.235 0.014 17.353 0.000 0.235 0.235
## .v6 0.161 0.010 15.395 0.000 0.161 0.162
## .v7 0.228 0.014 15.706 0.000 0.228 0.228
## .v8 0.233 0.017 13.692 0.000 0.233 0.233
## .v9 0.168 0.013 12.729 0.000 0.168 0.168
## .v10 0.238 0.030 8.026 0.000 0.238 0.238
## .v11 0.402 0.027 14.874 0.000 0.402 0.402
## .v12 0.517 0.029 17.718 0.000 0.517 0.517
## G 1.000 1.000 1.000
## F1 1.000 1.000 1.000
## F2 1.000 1.000 1.000
## F3 1.000 1.000 1.000
## F4 1.000 1.000 1.000
Extract the standardized residual (unique) variances from the theta matrix:
inspect(bfm.cfa, what="std")$theta
## v1 v2 v3 v4 v5 v6 v7 v8 v9 v10 v11
## v1 0.132
## v2 0.000 0.219
## v3 0.000 0.000 0.378
## v4 0.000 0.000 0.000 0.122
## v5 0.000 0.000 0.000 0.000 0.235
## v6 0.000 0.000 0.000 0.000 0.000 0.162
## v7 0.000 0.000 0.000 0.000 0.000 0.000 0.228
## v8 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.233
## v9 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.168
## v10 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.238
## v11 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.402
## v12 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
## v12
## v1
## v2
## v3
## v4
## v5
## v6
## v7
## v8
## v9
## v10
## v11
## v12 0.517
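As a side note (an added one-liner, not in the original output), the item communalities implied by the bifactor model are just 1 minus these standardized unique variances; they should match the h2 column in the SL tables later on:
round(1 - diag(inspect(bfm.cfa, what = "std")$theta), 3)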
Plot the model:
semPaths(bfm.cfa, what="std", residuals=F)
Calculate Omega H and Omega S from the bifactor output (using semTools):
reliability(bfm.cfa)
## G F1 F2 F3 F4 total
## alpha 0.9492601 0.8994387 0.93349823 0.9181954 0.8218173 0.9492601
## omega 0.9650485 0.6611272 0.43802662 0.6023332 0.6873941 0.9670682
## omega2 0.9092831 0.1898107 0.05094612 0.1231591 0.3835987 0.9670682
## omega3 0.9066385 0.1898107 0.05094613 0.1231591 0.3835986 0.9642555
## avevar NA NA NA NA NA 0.7471131
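To see where these numbers come from, here is a sketch (added here, not part of the original lab code) of computing omega hierarchical and omega subscale by hand from the standardized bifactor loadings, using the usual sum-of-loadings formulas; the item-to-cluster list simply restates the model.
L <- inspect(bfm.cfa, what = "std")$lambda        #standardized loadings (items x G, F1-F4)
u <- diag(inspect(bfm.cfa, what = "std")$theta)   #standardized unique variances
tot     <- sum(colSums(L)^2) + sum(u)             #variance of the unit-weighted total score
omega_h <- sum(L[, "G"])^2 / tot                  #general-factor saturation of the total
omega_t <- sum(colSums(L)^2) / tot                #omega total
items <- list(F1 = 1:3, F2 = 4:6, F3 = 7:9, F4 = 10:12)
omega_s <- sapply(names(items), function(f) {     #specific-factor saturation of each subscale
  i <- items[[f]]
  sum(L[i, f])^2 / (sum(colSums(L[i, , drop = FALSE])^2) + sum(u[i]))
})
round(c(omega_h = omega_h, omega_total = omega_t, omega_s), 3)
The omega_h and omega_total values here should be close to the omega2 and omega rows for G reported above; omega_s gives the group (specific-factor) portion of each subscale.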
For the code below, we will be using the ‘psych’ package.
We can run the SL transformation from the ground up, as we have done previously, using ‘omega’ or ‘omegaSem.’ We just need to specify the correlation matrix and the number of factors/observations.
#Results match the bifactor model when using this code
omegaSem(m=cor.mat, nfactors=4, fm="minres", flip=TRUE,
digits=3, title="SL Transformation",
sl=TRUE, plot=TRUE, n.obs=1000, rotate="oblimin")
##
## Call: omegaSem(m = cor.mat, nfactors = 4, fm = "minres", flip = TRUE,
## digits = 3, title = "SL Transformation", sl = TRUE, plot = TRUE,
## n.obs = 1000, rotate = "oblimin")
## SL Transformation
## Call: omega(m = m, nfactors = nfactors, fm = fm, key = key, flip = flip,
## digits = digits, title = title, sl = sl, labels = labels,
## plot = plot, n.obs = n.obs, rotate = rotate, Phi = Phi, option = option)
## Alpha: 0.95
## G.6: 0.96
## Omega Hierarchical: 0.9
## Omega H asymptotic: 0.93
## Omega Total 0.97
##
## Schmid Leiman Factor loadings greater than 0.2
## g F1* F2* F3* F4* h2 u2 p2
## v1 0.86 0.36 0.87 0.13 0.85
## v2 0.79 0.39 0.77 0.23 0.81
## v3 0.74 0.30 0.62 0.38 0.87
## v4 0.89 0.28 0.88 0.12 0.91
## v5 0.84 0.24 0.76 0.24 0.93
## v6 0.88 0.24 0.84 0.16 0.93
## v7 0.80 0.37 0.77 0.23 0.82
## v8 0.78 0.41 0.77 0.23 0.78
## v9 0.83 0.38 0.83 0.17 0.83
## v10 0.64 0.59 0.76 0.24 0.54
## v11 0.59 0.49 0.60 0.40 0.58
## v12 0.51 0.48 0.48 0.52 0.53
##
## With eigenvalues of:
## g F1* F2* F3* F4*
## 7.13 0.45 0.83 0.37 0.20
##
## general/max 8.59 max/min = 4.24
## mean percent general = 0.78 with sd = 0.15 and cv of 0.19
## Explained Common Variance of the general factor = 0.79
##
## The degrees of freedom are 24 and the fit is 0.01
## The number of observations was 1000 with Chi Square = 10.49 with prob < 0.99
## The root mean square of the residuals is 0
## The df corrected root mean square of the residuals is 0.01
## RMSEA index = 0 and the 10 % confidence intervals are 0 0
## BIC = -155.29
##
## Compare this with the adequacy of just a general factor and no group factors
## The degrees of freedom for just the general factor are 54 and the fit is 1.44
## The number of observations was 1000 with Chi Square = 1434.47 with prob < 1.5e-264
## The root mean square of the residuals is 0.08
## The df corrected root mean square of the residuals is 0.09
##
## RMSEA index = 0.16 and the 10 % confidence intervals are 0.153 0.167
## BIC = 1061.46
##
## Measures of factor score adequacy
## g F1* F2* F3* F4*
## Correlation of scores with factors 0.96 0.80 0.84 0.69 0.58
## Multiple R square of scores with factors 0.93 0.65 0.70 0.47 0.34
## Minimum correlation of factor score estimates 0.86 0.29 0.40 -0.05 -0.32
##
## Total, General and Subset omega for each subset
## g F1* F2* F3* F4*
## Omega total for total scores and subscales 0.97 0.92 0.82 0.90 0.93
## Omega general for total scores and subscales 0.90 0.75 0.45 0.76 0.86
## Omega group for total scores and subscales 0.06 0.17 0.37 0.14 0.07
##
## The following analyses were done using the lavaan package
##
## Omega Hierarchical from a confirmatory model using sem = 0.91
## Omega Total from a confirmatory model using sem = 0.97
## With loadings of
## g F1* F2* F3* F4* h2 u2 p2
## v1 0.83 0.41 0.87 0.13 0.79
## v2 0.76 0.46 0.78 0.22 0.74
## v3 0.72 0.32 0.62 0.38 0.84
## v4 0.90 0.27 0.88 0.12 0.92
## v5 0.85 0.76 0.24 0.95
## v6 0.90 0.84 0.16 0.96
## v7 0.82 0.31 0.77 0.23 0.87
## v8 0.80 0.35 0.77 0.23 0.83
## v9 0.86 0.32 0.83 0.17 0.89
## v10 0.63 0.60 0.76 0.24 0.52
## v11 0.58 0.51 0.60 0.40 0.56
## v12 0.50 0.48 0.48 0.52 0.52
##
## With eigenvalues of:
## g F1* F2* F3* F4*
## 7.16 0.32 0.86 0.48 0.14
##
## The degrees of freedom of the confimatory model are 42 and the fit is 116.7303 with p = 5.835016e-09
## general/max 8.37 max/min = 6.13
## mean percent general = 0.78 with sd = 0.16 and cv of 0.21
## Explained Common Variance of the general factor = 0.8
##
## Measures of factor score adequacy
## g F1* F2* F3* F4*
## Correlation of scores with factors 0.97 0.69 0.87 0.81 0.52
## Multiple R square of scores with factors 0.94 0.48 0.75 0.65 0.27
## Minimum correlation of factor score estimates 0.88 -0.05 0.50 0.31 -0.46
##
## Total, General and Subset omega for each subset
## g F1* F2* F3* F4*
## Omega total for total scores and subscales 0.97 0.92 0.82 0.90 0.93
## Omega general for total scores and subscales 0.91 0.79 0.44 0.71 0.88
## Omega group for total scores and subscales 0.06 0.12 0.38 0.19 0.05
##
## To get the standard sem fit statistics, ask for summary on the fitted object
To perform an SL transformation on a previously fitted lavaan object, we can use the code below.
Using the fitted bifactor object to perform an SL:
omegaFromSem(bfm.cfa, flip=TRUE, plot=FALSE)
##
##
## The following analyses were done using the lavaan package
##
## Omega Hierarchical from a confirmatory model using sem = 0.91
## Omega Total from a confirmatory model using sem = 0.97
## With loadings of
## g F1* F2* F3* F4* h2 u2 p2
## v1 0.83 0.41 0.87 0.13 0.79
## v2 0.76 0.46 0.78 0.22 0.74
## v3 0.72 0.32 0.62 0.38 0.84
## v4 0.90 0.27 0.88 0.12 0.92
## v5 0.85 0.76 0.24 0.95
## v6 0.90 0.84 0.16 0.96
## v7 0.82 0.31 0.77 0.23 0.87
## v8 0.80 0.35 0.77 0.23 0.83
## v9 0.86 0.32 0.83 0.17 0.89
## v10 0.63 0.60 0.76 0.24 0.52
## v11 0.58 0.51 0.60 0.40 0.56
## v12 0.50 0.48 0.48 0.52 0.52
##
## With eigenvalues of:
## g F1* F2* F3* F4*
## 7.16 0.48 0.14 0.32 0.86
##
## The degrees of freedom of the confimatory model are 42 and the fit is 116.7303 with p = 5.835016e-09
## general/max 8.37 max/min = 6.13
## mean percent general = 0.78 with sd = 0.16 and cv of 0.21
## Explained Common Variance of the general factor = 0.8
##
## Measures of factor score adequacy
## g F1* F2* F3* F4*
## Correlation of scores with factors 0.97 0.81 0.52 0.69 0.87
## Multiple R square of scores with factors 0.94 0.65 0.27 0.48 0.75
## Minimum correlation of factor score estimates 0.88 0.31 -0.46 -0.05 0.50
##
## Total, General and Subset omega for each subset
## g F1* F2* F3* F4*
## Omega total for total scores and subscales 0.97 0.90 0.93 0.92 0.82
## Omega general for total scores and subscales 0.91 0.71 0.88 0.79 0.44
## Omega group for total scores and subscales 0.06 0.19 0.05 0.12 0.38
##
## To get the standard sem fit statistics, ask for summary on the fitted object
Using the fitted Higher-Order Model:
omegaFromSem(HOmodel.cfa, flip=TRUE, plot=FALSE)
##
##
## The following analyses were done using the lavaan package
##
## Omega Hierarchical from a confirmatory model using sem = 0.02
## Omega Total from a confirmatory model using sem = 0.89
## With loadings of
## g F1* F2* F3* F4* h2 u2 p2
## v1 0.43 0.19 0.81 0.97
## v2 0.40 0.16 0.84 1.00
## v3 0.37 0.13 0.87 1.05
## v4 0.22 0.05 0.95 0.00
## v5 0.20 0.04 0.96 0.00
## v6 0.21 0.05 0.95 0.00
## v7 0.32 0.10 0.90 0.00
## v8 0.31 0.10 0.90 0.00
## v9 0.33 0.11 0.89 0.00
## v10 0.59 0.35 0.65 0.00
## v11 0.53 0.28 0.72 0.00
## v12 0.47 0.22 0.78 0.00
##
## With eigenvalues of:
## g F1* F2* F3* F4*
## 0.48 0.13 0.31 0.85 0.00
##
## The degrees of freedom of the confimatory model are 50 and the fit is 131.2363 with p = 3.239155e-09
## general/max 0.57 max/min = Inf
## mean percent general = 0.25 with sd = 0.46 and cv of 1.81
## Explained Common Variance of the general factor = 0.27
##
## Measures of factor score adequacy
## g F1* F2* F3* F4*
## Correlation of scores with factors 0.82 0.49 0.69 0.87 0
## Multiple R square of scores with factors 0.67 0.24 0.47 0.75 0
## Minimum correlation of factor score estimates 0.33 -0.53 -0.05 0.50 -1
##
## Total, General and Subset omega for each subset
## g F1* F2* F3* F4*
## Omega total for total scores and subscales 0.89 0.07 0.12 0.38 NA
## Omega general for total scores and subscales 0.02 0.05 0.00 0.00 NA
## Omega group for total scores and subscales 0.04 0.01 0.12 0.38 NA
##
## To get the standard sem fit statistics, ask for summary on the fitted object
Q1b: Do the SL results match what you calculated from the higher-order model using transformation equations?
Not quite. The Excel calculations gave a higher value for the general factor and lower values for some of the specific factors than what R produced (Excel omega H was .93 versus .90 in R).
Q1c: Compare your results from the SL model to bifactor models and explain the differences if any. Use your results from the bifactor model and calculate omega-h and omega subscale for each cluster.
On the general factor, the SL loadings tended to be slightly higher than the G loadings from the CFA bifactor model. For the specific factors, the SL transformation attenuated the loadings on average compared to the CFA bifactor model.
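As a supplementary check (an addition, not required by the prompt): the higher-order model can be viewed as a constrained (nested) version of the bifactor model, so the two fitted lavaan objects can also be compared formally with a chi-square difference test and side-by-side fit indices:
anova(HOmodel.cfa, bfm.cfa)   #LRT; the df difference here is 50 - 42 = 8
sapply(list(higher.order = HOmodel.cfa, bifactor = bfm.cfa),
       fitMeasures, fit.measures = c("cfi", "tli", "rmsea", "aic", "bic"))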
Q2: HSSchool.csv contains raw data collected from school children. The first column is school type (public=1, private=2), followed by 9 indicators (x1 through x9) thought to measure three domains: Visual (measured by x1-x3), Textual (measured by x4-x6), and Speed (measured by x7-x9). Run a multigroup CFA to test for measurement invariance across school type using Millsap & Tein’s guideline. At what level does invariance hold? Discuss your results and upload output files on Canvas.
Importing the data
mi.data<-read.csv("/Users/alliechoate/Desktop/Psychometric Materials/Psychometrics/HSSchoolupdated.csv", header=F)
#telling R the first variable is categorical
mi.data$V1<- factor(mi.data$V1, levels=c('1','2'), labels=c("public","private"))
###changing the variable names to ones I want
cols <- c("schooltype", "x1", "x2","x3","x4","x5","x6","x7","x8","x9")
colnames(mi.data) <- cols
head(mi.data)
## schooltype x1 x2 x3 x4 x5 x6 x7 x8
## 1 public 3.333333 7.75 0.375 2.333333 5.75 1.2857143 3.391304 5.75
## 2 public 5.333333 5.25 2.125 1.666667 3.00 1.2857143 3.782609 6.25
## 3 public 4.500000 5.25 1.875 1.000000 1.75 0.4285714 3.260870 3.90
## 4 public 5.333333 7.75 3.000 2.666667 4.50 2.4285714 3.000000 5.30
## 5 public 4.833333 4.75 0.875 2.666667 4.00 2.5714286 3.695652 6.30
## 6 public 5.333333 5.00 2.250 1.000000 3.00 0.8571429 4.347826 6.65
## x9
## 1 6.361111
## 2 7.916667
## 3 4.416667
## 4 4.861111
## 5 5.916667
## 6 7.500000
Run the tests of measurement invariance using the ‘measurementInvariance’ command in ‘semTools.’ First, specify the baseline (configural) model:
base.model<- 'visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9'
Now let’s run the series of MI tests. All you need to do is specify the grouping variable in the data (here, ‘schooltype’) and the baseline (configural) model, and R will do the rest for you.
measurementInvariance(model = base.model, group = "schooltype",
data = mi.data) #defaults to using one item per factor as the marker variable
## Warning: The measurementInvariance function is deprecated, and it will
## cease to be included in future versions of semTools. See help('semTools-
## deprecated) for details.
##
## Measurement invariance models:
##
## Model 1 : fit.configural
## Model 2 : fit.loadings
## Model 3 : fit.intercepts
## Model 4 : fit.means
##
## Chi Square Difference Test
##
## Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
## fit.configural 48 7484.4 7706.8 115.85
## fit.loadings 54 7480.6 7680.8 124.04 8.192 6 0.2244
## fit.intercepts 60 7508.6 7686.6 164.10 40.059 6 4.435e-07 ***
## fit.means 63 7543.1 7710.0 204.61 40.502 3 8.338e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## Fit measures:
##
## cfi rmsea cfi.delta rmsea.delta
## fit.configural 0.923 0.097 NA NA
## fit.loadings 0.921 0.093 0.002 0.004
## fit.intercepts 0.882 0.107 0.038 0.015
## fit.means 0.840 0.122 0.042 0.015
In addition to checking whether the chi-square LR test is significant at each level of measurement invariance, it is also recommended to examine the change in alternative fit indices (i.e., CFI & RMSEA).
Overall, the LR tests for each level of invariance suggest that metric invariance holds (factor loadings are equivalent between groups), but each level thereafter failed. When evaluating alternative fit indices, the change in CFI was consistent with the results of the LR test. For RMSEA, it is a bit unclear whether scalar invariance holds, since the change in RMSEA was .015. However, given that the CFI change was roughly .04 and the chi-square LR test was significant, it is probably safe to conclude that this model holds only at the metric level of invariance.
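As the warning in the output notes, measurementInvariance() is deprecated. For reference, here is a sketch (an addition, not part of the original assignment code) of fitting the same sequence of models by hand with lavaan’s group.equal argument and comparing them with a likelihood-ratio test; this reproduces the configural, loadings, intercepts, and means steps above.
fit.configural <- cfa(base.model, data = mi.data, group = "schooltype")
fit.loadings   <- cfa(base.model, data = mi.data, group = "schooltype",
                      group.equal = "loadings")
fit.intercepts <- cfa(base.model, data = mi.data, group = "schooltype",
                      group.equal = c("loadings", "intercepts"))
fit.means      <- cfa(base.model, data = mi.data, group = "schooltype",
                      group.equal = c("loadings", "intercepts", "means"))
lavTestLRT(fit.configural, fit.loadings, fit.intercepts, fit.means)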