We will need three packages.
library(psych)
library(lavaan)
library(semTools)
If they are not installed on your computer, install them first (remove the "#"):
# install.packages('psych')
# install.packages('lavaan')
# install.packages('semTools')
Scree plot
str(bfi)
## 'data.frame': 2800 obs. of 28 variables:
## $ A1 : int 2 2 5 4 2 6 2 4 4 2 ...
## $ A2 : int 4 4 4 4 3 6 5 3 3 5 ...
## $ A3 : int 3 5 5 6 3 5 5 1 6 6 ...
## $ A4 : int 4 2 4 5 4 6 3 5 3 6 ...
## $ A5 : int 4 5 4 5 5 5 5 1 3 5 ...
## $ C1 : int 2 5 4 4 4 6 5 3 6 6 ...
## $ C2 : int 3 4 5 4 4 6 4 2 6 5 ...
## $ C3 : int 3 4 4 3 5 6 4 4 3 6 ...
## $ C4 : int 4 3 2 5 3 1 2 2 4 2 ...
## $ C5 : int 4 4 5 5 2 3 3 4 5 1 ...
## $ E1 : int 3 1 2 5 2 2 4 3 5 2 ...
## $ E2 : int 3 1 4 3 2 1 3 6 3 2 ...
## $ E3 : int 3 6 4 4 5 6 4 4 NA 4 ...
## $ E4 : int 4 4 4 4 4 5 5 2 4 5 ...
## $ E5 : int 4 3 5 4 5 6 5 1 3 5 ...
## $ N1 : int 3 3 4 2 2 3 1 6 5 5 ...
## $ N2 : int 4 3 5 5 3 5 2 3 5 5 ...
## $ N3 : int 2 3 4 2 4 2 2 2 2 5 ...
## $ N4 : int 2 5 2 4 4 2 1 6 3 2 ...
## $ N5 : int 3 5 3 1 3 3 1 4 3 4 ...
## $ O1 : int 3 4 4 3 3 4 5 3 6 5 ...
## $ O2 : int 6 2 2 3 3 3 2 2 6 1 ...
## $ O3 : int 3 4 5 4 4 5 5 4 6 5 ...
## $ O4 : int 4 3 5 3 3 6 6 5 6 5 ...
## $ O5 : int 3 3 2 5 3 1 1 3 1 2 ...
## $ gender : int 1 2 2 2 1 2 1 1 1 2 ...
## $ education: int NA NA NA NA NA 3 NA 2 1 NA ...
## $ age : int 16 18 17 17 17 21 18 19 19 17 ...
fa.parallel(bfi[,1:15], fa='fa')
## Parallel analysis suggests that the number of factors = 5 and the number of components = NA
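The object returned by fa.parallel() can also be stored and inspected; a minimal sketch, assuming the fa.values component of current psych versions, which holds the observed eigenvalues of the factor solution:
pa <- fa.parallel(bfi[, 1:15], fa = 'fa', plot = FALSE)  # same analysis, without redrawing the plot
pa$fa.values  # observed eigenvalues behind the scree plot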
Let's extract three factors: columns 1-15 of bfi are the Agreeableness (A), Conscientiousness (C) and Extraversion (E) items, so a three-factor solution is a natural benchmark even though parallel analysis suggests five.
efa.results<-fa(bfi[,1:15], nfactors = 3, rotate="varimax")
efa.results
## Factor Analysis using method = minres
## Call: fa(r = bfi[, 1:15], nfactors = 3, rotate = "varimax")
## Standardized loadings (pattern matrix) based upon correlation matrix
## MR1 MR2 MR3 h2 u2 com
## A1 -0.01 0.01 -0.41 0.16 0.84 1.0
## A2 0.20 0.13 0.64 0.47 0.53 1.3
## A3 0.30 0.10 0.68 0.56 0.44 1.4
## A4 0.17 0.21 0.42 0.25 0.75 1.8
## A5 0.42 0.10 0.51 0.44 0.56 2.0
## C1 0.08 0.56 0.01 0.31 0.69 1.0
## C2 0.02 0.64 0.10 0.42 0.58 1.0
## C3 0.01 0.55 0.12 0.32 0.68 1.1
## C4 -0.10 -0.63 -0.07 0.41 0.59 1.1
## C5 -0.18 -0.55 -0.08 0.34 0.66 1.3
## E1 -0.60 0.03 -0.09 0.37 0.63 1.0
## E2 -0.71 -0.12 -0.10 0.53 0.47 1.1
## E3 0.54 0.11 0.25 0.37 0.63 1.5
## E4 0.66 0.10 0.25 0.51 0.49 1.3
## E5 0.46 0.32 0.14 0.33 0.67 2.0
##
## MR1 MR2 MR3
## SS loadings 2.19 1.94 1.67
## Proportion Var 0.15 0.13 0.11
## Cumulative Var 0.15 0.28 0.39
## Proportion Explained 0.38 0.33 0.29
## Cumulative Proportion 0.38 0.71 1.00
##
## Mean item complexity = 1.3
## Test of the hypothesis that 3 factors are sufficient.
##
## The degrees of freedom for the null model are 105 and the objective function was 3.81 with Chi Square of 10631.24
## The degrees of freedom for the model are 63 and the objective function was 0.34
##
## The root mean square of the residuals (RMSR) is 0.04
## The df corrected root mean square of the residuals is 0.05
##
## The harmonic number of observations is 2762 with the empirical chi square 806.53 with prob < 1.6e-129
## The total number of observations was 2800 with Likelihood Chi Square = 961.82 with prob < 6.4e-161
##
## Tucker Lewis Index of factoring reliability = 0.858
## RMSEA index = 0.072 and the 90 % confidence intervals are 0.067 0.075
## BIC = 461.77
## Fit based upon off diagonal values = 0.98
## Measures of factor score adequacy
## MR1 MR2 MR3
## Correlation of (regression) scores with factors 0.86 0.85 0.82
## Multiple R square of scores with factors 0.74 0.73 0.68
## Minimum correlation of possible factor scores 0.47 0.46 0.35
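The loadings and factor scores can also be pulled out of the fa() object directly; a brief sketch, assuming the standard $loadings and $scores components of the psych fa object:
# Show only loadings above |0.30|, sorted by factor
print(efa.results$loadings, cutoff = 0.3, sort = TRUE)
# Regression-based factor scores for each respondent
head(efa.results$scores)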
The lavaan package. The package has an excellent website: http://lavaan.org/. A quick reminder of the model syntax: =~ defines a latent variable by its indicators, ~ is a regression, ~~ is a (residual) variance or covariance, ~ 1 is an intercept, and := defines a new parameter from existing ones.
myModel <-
'
# regression
y ~ f1 + f2 + x1 + x2
f1 ~ f2 + f3
f2 ~ f3 + x1 + x2
# latent variables
f1 =~ item1 + item2 + item3
f2 =~ item4 + item5 + item6 + item7
f3 =~ f1 + f2
# (residual) variances and covariances
item1 ~~ item1
item1 ~~ item2
# intercepts
item1 ~ 1
f1 ~ 1
'
Let's look at the HolzingerSwineford1939 data: mental ability test scores of seventh- and eighth-grade children from two schools (Pasteur and Grant-White).
str(HolzingerSwineford1939)
## 'data.frame': 301 obs. of 15 variables:
## $ id : int 1 2 3 4 5 6 7 8 9 11 ...
## $ sex : int 1 2 2 1 2 2 1 2 2 2 ...
## $ ageyr : int 13 13 13 13 12 14 12 12 13 12 ...
## $ agemo : int 1 7 1 2 2 1 1 2 0 5 ...
## $ school: Factor w/ 2 levels "Grant-White",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ grade : int 7 7 7 7 7 7 7 7 7 7 ...
## $ x1 : num 3.33 5.33 4.5 5.33 4.83 ...
## $ x2 : num 7.75 5.25 5.25 7.75 4.75 5 6 6.25 5.75 5.25 ...
## $ x3 : num 0.375 2.125 1.875 3 0.875 ...
## $ x4 : num 2.33 1.67 1 2.67 2.67 ...
## $ x5 : num 5.75 3 1.75 4.5 4 3 6 4.25 5.75 5 ...
## $ x6 : num 1.286 1.286 0.429 2.429 2.571 ...
## $ x7 : num 3.39 3.78 3.26 3 3.7 ...
## $ x8 : num 5.75 6.25 3.9 5.3 6.3 6.65 6.2 5.15 4.65 4.55 ...
## $ x9 : num 6.36 7.92 4.42 4.86 5.92 ...
Let's test a CFA model consisting of three correlated latent variables (factors), each measured by three indicators:
a visual factor measured by x1, x2, x3
a textual factor measured by x4, x5, x6
a speed factor measured by x7, x8, x9
Specify the model.
my_model <- '
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
'
Now we can test how well this model fits the empirical data:
fit1 <- cfa(my_model, data = HolzingerSwineford1939)
NOTE! A number of parameters are included by default. It is always possible, and useful, to look at the table with all model parameters:
parTable(fit1)
## id lhs op rhs user block group free ustart exo label plabel
## 1 1 visual =~ x1 1 1 1 0 1 0 .p1.
## 2 2 visual =~ x2 1 1 1 1 NA 0 .p2.
## 3 3 visual =~ x3 1 1 1 2 NA 0 .p3.
## 4 4 textual =~ x4 1 1 1 0 1 0 .p4.
## 5 5 textual =~ x5 1 1 1 3 NA 0 .p5.
## 6 6 textual =~ x6 1 1 1 4 NA 0 .p6.
## 7 7 speed =~ x7 1 1 1 0 1 0 .p7.
## 8 8 speed =~ x8 1 1 1 5 NA 0 .p8.
## 9 9 speed =~ x9 1 1 1 6 NA 0 .p9.
## 10 10 x1 ~~ x1 0 1 1 7 NA 0 .p10.
## 11 11 x2 ~~ x2 0 1 1 8 NA 0 .p11.
## 12 12 x3 ~~ x3 0 1 1 9 NA 0 .p12.
## 13 13 x4 ~~ x4 0 1 1 10 NA 0 .p13.
## 14 14 x5 ~~ x5 0 1 1 11 NA 0 .p14.
## 15 15 x6 ~~ x6 0 1 1 12 NA 0 .p15.
## 16 16 x7 ~~ x7 0 1 1 13 NA 0 .p16.
## 17 17 x8 ~~ x8 0 1 1 14 NA 0 .p17.
## 18 18 x9 ~~ x9 0 1 1 15 NA 0 .p18.
## 19 19 visual ~~ visual 0 1 1 16 NA 0 .p19.
## 20 20 textual ~~ textual 0 1 1 17 NA 0 .p20.
## 21 21 speed ~~ speed 0 1 1 18 NA 0 .p21.
## 22 22 visual ~~ textual 0 1 1 19 NA 0 .p22.
## 23 23 visual ~~ speed 0 1 1 20 NA 0 .p23.
## 24 24 textual ~~ speed 0 1 1 21 NA 0 .p24.
## start est se
## 1 1.000 1.000 0.000
## 2 0.778 0.554 0.100
## 3 1.107 0.729 0.109
## 4 1.000 1.000 0.000
## 5 1.133 1.113 0.065
## 6 0.924 0.926 0.055
## 7 1.000 1.000 0.000
## 8 1.225 1.180 0.165
## 9 0.854 1.082 0.151
## 10 0.679 0.549 0.114
## 11 0.691 1.134 0.102
## 12 0.637 0.844 0.091
## 13 0.675 0.371 0.048
## 14 0.830 0.446 0.058
## 15 0.598 0.356 0.043
## 16 0.592 0.799 0.081
## 17 0.511 0.488 0.074
## 18 0.508 0.566 0.071
## 19 0.050 0.809 0.145
## 20 0.050 0.979 0.112
## 21 0.050 0.384 0.086
## 22 0.000 0.408 0.074
## 23 0.000 0.262 0.056
## 24 0.000 0.173 0.049
Let's look at the results for the model.
summary(fit1, fit.measures = TRUE)
## lavaan (0.5-23.1097) converged normally after 35 iterations
##
## Number of observations 301
##
## Estimator ML
## Minimum Function Test Statistic 85.306
## Degrees of freedom 24
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 918.852
## Degrees of freedom 36
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.931
## Tucker-Lewis Index (TLI) 0.896
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -3737.745
## Loglikelihood unrestricted model (H1) -3695.092
##
## Number of free parameters 21
## Akaike (AIC) 7517.490
## Bayesian (BIC) 7595.339
## Sample-size adjusted Bayesian (BIC) 7528.739
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.092
## 90 Percent Confidence Interval 0.071 0.114
## P-value RMSEA <= 0.05 0.001
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.065
##
## Parameter Estimates:
##
## Information Expected
## Standard Errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|)
## visual =~
## x1 1.000
## x2 0.554 0.100 5.554 0.000
## x3 0.729 0.109 6.685 0.000
## textual =~
## x4 1.000
## x5 1.113 0.065 17.014 0.000
## x6 0.926 0.055 16.703 0.000
## speed =~
## x7 1.000
## x8 1.180 0.165 7.152 0.000
## x9 1.082 0.151 7.155 0.000
##
## Covariances:
## Estimate Std.Err z-value P(>|z|)
## visual ~~
## textual 0.408 0.074 5.552 0.000
## speed 0.262 0.056 4.660 0.000
## textual ~~
## speed 0.173 0.049 3.518 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## .x1 0.549 0.114 4.833 0.000
## .x2 1.134 0.102 11.146 0.000
## .x3 0.844 0.091 9.317 0.000
## .x4 0.371 0.048 7.779 0.000
## .x5 0.446 0.058 7.642 0.000
## .x6 0.356 0.043 8.277 0.000
## .x7 0.799 0.081 9.823 0.000
## .x8 0.488 0.074 6.573 0.000
## .x9 0.566 0.071 8.003 0.000
## visual 0.809 0.145 5.564 0.000
## textual 0.979 0.112 8.737 0.000
## speed 0.384 0.086 4.451 0.000
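Instead of the full summary, specific fit indices can be extracted with fitMeasures(); for example:
# Selected fit indices for the three-factor CFA
fitMeasures(fit1, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))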
Let's draw the model. The semPlot package is excellent for this; a site with examples: http://sachaepskamp.com/semPlot/examples.
library(semPlot)
semPaths(fit1)
Let's add standardized estimates to the plot.
semPaths(fit1, "std")
Everything can be tweaked to improve the look of the plot.
semPaths(fit1, 'eq', 'std', style = "lisrel", layout = 'tree2', shapeMan = "rectangle",
         edge.color = "black", intercepts = FALSE, rotation = 2, curvature = TRUE,
         sizeLat = 10, sizeMan = 9, sizeMan2 = 5, edge.label.cex = 0.9, fixedStyle = 1,
         mar = c(1, 8, 1, 8), groups = "latents", pastel = TRUE)
If the model fit is poor, you can inspect the modification indices.
MI <- modificationIndices(fit1)
subset(MI, mi > 10)
## lhs op rhs mi epc sepc.lv sepc.all sepc.nox
## 28 visual =~ x7 18.631 -0.422 -0.380 -0.349 -0.349
## 30 visual =~ x9 36.411 0.577 0.519 0.515 0.515
## 76 x7 ~~ x8 34.145 0.536 0.536 0.488 0.488
## 78 x8 ~~ x9 14.946 -0.423 -0.423 -0.415 -0.415
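As an illustration only (any modification should also be theoretically justified), the largest modification index suggests a cross-loading of x9 on the visual factor; a sketch of refitting the model with this extra loading and comparing it to the original:
# Add the cross-loading visual =~ x9 suggested by the modification indices
my_model_mod <- '
visual =~ x1 + x2 + x3 + x9
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
'
fit1_mod <- cfa(my_model_mod, data = HolzingerSwineford1939)
anova(fit1, fit1_mod)  # chi-square difference test of the nested models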
The PoliticalDemocracy dataset contains various measures of democracy and industrialization in developing countries. Let's test a model with three latent variables.
model2 <- '
# measurement model
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8
# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60
# residual covariances
y1 ~~ y5
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
'
fit2 <- sem(model2, data = PoliticalDemocracy)
summary(fit2, standardized = TRUE)
## lavaan (0.5-23.1097) converged normally after 64 iterations
##
## Number of observations 75
##
## Estimator ML
## Minimum Function Test Statistic 45.418
## Degrees of freedom 36
## P-value (Chi-square) 0.135
##
## Parameter Estimates:
##
## Information Expected
## Standard Errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## ind60 =~
## x1 1.000 0.670 0.920
## x2 2.181 0.139 15.720 0.000 1.460 0.973
## x3 1.819 0.152 11.966 0.000 1.218 0.872
## dem60 =~
## y1 1.000 2.214 0.849
## y2 1.260 0.185 6.814 0.000 2.791 0.713
## y3 1.046 0.153 6.855 0.000 2.316 0.711
## y4 1.272 0.147 8.669 0.000 2.817 0.846
## dem65 =~
## y5 1.000 2.046 0.786
## y6 1.288 0.176 7.330 0.000 2.634 0.786
## y7 1.312 0.170 7.719 0.000 2.684 0.820
## y8 1.357 0.166 8.153 0.000 2.775 0.863
##
## Regressions:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## dem60 ~
## ind60 1.471 0.398 3.691 0.000 0.445 0.445
## dem65 ~
## ind60 0.516 0.215 2.398 0.016 0.169 0.169
## dem60 0.809 0.099 8.213 0.000 0.876 0.876
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .y1 ~~
## .y5 0.760 0.361 2.108 0.035 0.760 0.343
## .y2 ~~
## .y4 1.475 0.696 2.120 0.034 1.475 0.302
## .y6 2.194 0.760 2.888 0.004 2.194 0.385
## .y3 ~~
## .y7 1.098 0.616 1.782 0.075 1.098 0.256
## .y4 ~~
## .y8 0.383 0.460 0.833 0.405 0.383 0.133
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.082 0.020 4.180 0.000 0.082 0.154
## .x2 0.120 0.070 1.707 0.088 0.120 0.053
## .x3 0.466 0.090 5.172 0.000 0.466 0.239
## .y1 1.898 0.446 4.256 0.000 1.898 0.279
## .y2 7.551 1.395 5.412 0.000 7.551 0.492
## .y3 5.233 0.972 5.384 0.000 5.233 0.494
## .y4 3.153 0.748 4.215 0.000 3.153 0.284
## .y5 2.587 0.503 5.140 0.000 2.587 0.382
## .y6 4.301 0.825 5.212 0.000 4.301 0.383
## .y7 3.510 0.711 4.935 0.000 3.510 0.328
## .y8 2.651 0.615 4.313 0.000 2.651 0.256
## ind60 0.448 0.087 5.171 0.000 1.000 1.000
## .dem60 3.934 0.918 4.285 0.000 0.802 0.802
## .dem65 0.306 0.198 1.545 0.122 0.073 0.073
Let's add one more residual covariance to the model.
model2_nested <- '
# measurement model
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8
# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60
# residual covariances
y1 ~~ y5
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
y6 ~~ y8 # one more
'
fit2_nested <- sem(model2_nested, data = PoliticalDemocracy)
These models are nested, so they can be compared with each other using the anova() function, which performs a chi-square difference test.
anova(fit2, fit2_nested)
## Chi Square Difference Test
##
## Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
## fit2_nested 35 3157.6 3229.4 38.125
## fit2 36 3162.9 3232.4 45.418 7.2928 1 0.006923 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
SEM models can also be fitted from a covariance matrix rather than from raw data. Let's create a covariance matrix.
cov.matr <- lower2full(c(648.07, 30.05, 8.64, 140.18, 25.57, 233.21))
colnames(cov.matr) <- rownames(cov.matr) <- c("salary", "school", "iq")
cov.matr
## salary school iq
## salary 648.07 30.05 140.18
## school 30.05 8.64 25.57
## iq 140.18 25.57 233.21
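The lower2full() helper used here is not part of recent lavaan releases; in current versions the same matrix can be built with lav_matrix_lower2full() (a sketch of the equivalent call):
# Build the full symmetric matrix from its lower triangle (filled row-wise)
cov.matr <- lavaan::lav_matrix_lower2full(c(648.07,
                                            30.05, 8.64,
                                            140.18, 25.57, 233.21))
colnames(cov.matr) <- rownames(cov.matr) <- c("salary", "school", "iq")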
Let's specify a path model with an indirect effect.
model_ind <- '
salary ~ a*school + c*iq
school ~ b*iq
ind := b*c # indirect effect
'
Estimate the parameters:
fit_ind <- sem(model_ind, sample.cov=cov.matr, sample.nobs=300)
summary(fit_ind)
## lavaan (0.5-23.1097) converged normally after 23 iterations
##
## Number of observations 300
##
## Estimator ML
## Minimum Function Test Statistic 0.000
## Degrees of freedom 0
## Minimum Function Value 0.0000000000000
##
## Parameter Estimates:
##
## Information Expected
## Standard Errors Standard
##
## Regressions:
## Estimate Std.Err z-value P(>|z|)
## salary ~
## school (a) 2.515 0.549 4.585 0.000
## iq (c) 0.325 0.106 3.081 0.002
## school ~
## iq (b) 0.110 0.009 12.005 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|)
## .salary 525.129 42.877 12.247 0.000
## .school 5.817 0.475 12.247 0.000
##
## Defined Parameters:
## Estimate Std.Err z-value P(>|z|)
## ind 0.036 0.012 2.984 0.003
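With raw data (not available here, since the model was fitted from a covariance matrix) a bootstrap confidence interval for the indirect effect could be requested instead of the delta-method standard error; a sketch, where the data frame raw_data is hypothetical:
# Bootstrap the indirect effect; raw_data would need columns salary, school and iq
fit_ind_boot <- sem(model_ind, data = raw_data, se = "bootstrap", bootstrap = 1000)
parameterEstimates(fit_ind_boot, boot.ci.type = "perc")  # percentile bootstrap CIs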
Multiple-group CFA has become the standard way to investigate the degree to which measures are invariant across groups.
Testing for measurement invariance consists of a series of model comparisons that impose increasingly stringent equality constraints.
First, a baseline model is fit in which the loading pattern is the same in all groups but the magnitudes of all parameters (loadings, intercepts, variances, etc.) may vary.
Configural invariance holds if this baseline model fits well and the same loadings are significant in all groups.
Second, a weak (metric) invariance model, in which the factor loadings are constrained to be equal across groups, is fit and compared to the baseline model. Weak invariance holds if its fit is not substantially worse than that of the baseline model.
Third, a strong (scalar) invariance model, in which factor loadings and item intercepts are constrained to be equal, is fit and compared against the weak invariance model. Strong invariance holds if its fit is not substantially worse than that of the weak invariance model.
Fourth, a strict invariance model, in which factor loadings, intercepts, and residual variances are all constrained to be equal, is fit and compared to the strong invariance model.
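The sections below fit the configural, weak and strong models; the strict step is not shown there, so here is a minimal sketch of the whole sequence done by hand with cfa() and its group.equal argument:
# Configural, weak (metric), strong (scalar) and strict invariance models
inv.model <- '
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
'
configural <- cfa(inv.model, data = HolzingerSwineford1939, group = "school")
weak <- cfa(inv.model, data = HolzingerSwineford1939, group = "school",
            group.equal = "loadings")
strong <- cfa(inv.model, data = HolzingerSwineford1939, group = "school",
              group.equal = c("loadings", "intercepts"))
strict <- cfa(inv.model, data = HolzingerSwineford1939, group = "school",
              group.equal = c("loadings", "intercepts", "residuals"))
anova(configural, weak, strong, strict)  # sequence of chi-square difference tests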
Let's take another look at the data.
str(HolzingerSwineford1939)
## 'data.frame': 301 obs. of 15 variables:
## $ id : int 1 2 3 4 5 6 7 8 9 11 ...
## $ sex : int 1 2 2 1 2 2 1 2 2 2 ...
## $ ageyr : int 13 13 13 13 12 14 12 12 13 12 ...
## $ agemo : int 1 7 1 2 2 1 1 2 0 5 ...
## $ school: Factor w/ 2 levels "Grant-White",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ grade : int 7 7 7 7 7 7 7 7 7 7 ...
## $ x1 : num 3.33 5.33 4.5 5.33 4.83 ...
## $ x2 : num 7.75 5.25 5.25 7.75 4.75 5 6 6.25 5.75 5.25 ...
## $ x3 : num 0.375 2.125 1.875 3 0.875 ...
## $ x4 : num 2.33 1.67 1 2.67 2.67 ...
## $ x5 : num 5.75 3 1.75 4.5 4 3 6 4.25 5.75 5 ...
## $ x6 : num 1.286 1.286 0.429 2.429 2.571 ...
## $ x7 : num 3.39 3.78 3.26 3 3.7 ...
## $ x8 : num 5.75 6.25 3.9 5.3 6.3 6.65 6.2 5.15 4.65 4.55 ...
## $ x9 : num 6.36 7.92 4.42 4.86 5.92 ...
Specify the baseline model:
baseline.model <- '
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9
'
measurementInvariance(baseline.model, data=HolzingerSwineford1939, group='school')
##
## Measurement invariance models:
##
## Model 1 : fit.configural
## Model 2 : fit.loadings
## Model 3 : fit.intercepts
## Model 4 : fit.means
##
## Chi Square Difference Test
##
## Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
## fit.configural 48 7484.4 7706.8 115.85
## fit.loadings 54 7480.6 7680.8 124.04 8.192 6 0.2244
## fit.intercepts 60 7508.6 7686.6 164.10 40.059 6 4.435e-07 ***
## fit.means 63 7543.1 7710.0 204.61 40.502 3 8.338e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## Fit measures:
##
## cfi rmsea cfi.delta rmsea.delta
## fit.configural 0.923 0.097 NA NA
## fit.loadings 0.921 0.093 0.002 0.004
## fit.intercepts 0.882 0.107 0.038 0.015
## fit.means 0.840 0.122 0.042 0.015
config <- cfa(baseline.model, data=HolzingerSwineford1939, group="school")
#summary(config, standardized = TRUE, fit.measures = TRUE)
metric <- cfa(baseline.model, data=HolzingerSwineford1939, group="school", group.equal="loadings")
#summary(metric, standardized = TRUE, fit.measures = TRUE)
scalar <- cfa(baseline.model, data=HolzingerSwineford1939, group="school", group.equal = c("loadings", "intercepts"))
summary(scalar, standardized = TRUE, fit.measures = TRUE)
## lavaan (0.5-23.1097) converged normally after 60 iterations
##
## Number of observations per group
## Pasteur 156
## Grant-White 145
##
## Estimator ML
## Minimum Function Test Statistic 164.103
## Degrees of freedom 60
## P-value (Chi-square) 0.000
##
## Chi-square for each group:
##
## Pasteur 90.210
## Grant-White 73.892
##
## Model test baseline model:
##
## Minimum Function Test Statistic 957.769
## Degrees of freedom 72
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.882
## Tucker-Lewis Index (TLI) 0.859
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -3706.323
## Loglikelihood unrestricted model (H1) -3624.272
##
## Number of free parameters 48
## Akaike (AIC) 7508.647
## Bayesian (BIC) 7686.588
## Sample-size adjusted Bayesian (BIC) 7534.359
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.107
## 90 Percent Confidence Interval 0.088 0.127
## P-value RMSEA <= 0.05 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.082
##
## Parameter Estimates:
##
## Information Expected
## Standard Errors Standard
##
##
## Group 1 [Pasteur]:
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual =~
## x1 1.000 0.892 0.768
## x2 (.p2.) 0.576 0.101 5.713 0.000 0.514 0.411
## x3 (.p3.) 0.798 0.112 7.146 0.000 0.712 0.591
## textual =~
## x4 1.000 0.938 0.815
## x5 (.p5.) 1.120 0.066 16.965 0.000 1.050 0.829
## x6 (.p6.) 0.932 0.056 16.608 0.000 0.874 0.862
## speed =~
## x7 1.000 0.568 0.516
## x8 (.p8.) 1.130 0.145 7.786 0.000 0.641 0.657
## x9 (.p9.) 1.009 0.132 7.667 0.000 0.573 0.578
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual ~~
## textual 0.410 0.095 4.293 0.000 0.490 0.490
## speed 0.178 0.066 2.687 0.007 0.351 0.351
## textual ~~
## speed 0.180 0.062 2.900 0.004 0.338 0.338
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 (.25.) 5.001 0.090 55.760 0.000 5.001 4.302
## .x2 (.26.) 6.151 0.077 79.905 0.000 6.151 4.925
## .x3 (.27.) 2.271 0.083 27.387 0.000 2.271 1.885
## .x4 (.28.) 2.778 0.087 31.953 0.000 2.778 2.413
## .x5 (.29.) 4.035 0.096 41.858 0.000 4.035 3.184
## .x6 (.30.) 1.926 0.079 24.426 0.000 1.926 1.900
## .x7 (.31.) 4.242 0.073 57.975 0.000 4.242 3.855
## .x8 (.32.) 5.630 0.072 78.531 0.000 5.630 5.771
## .x9 (.33.) 5.465 0.069 79.016 0.000 5.465 5.516
## visual 0.000 0.000 0.000
## textual 0.000 0.000 0.000
## speed 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.555 0.139 3.983 0.000 0.555 0.411
## .x2 1.296 0.158 8.186 0.000 1.296 0.831
## .x3 0.944 0.136 6.929 0.000 0.944 0.650
## .x4 0.445 0.069 6.430 0.000 0.445 0.336
## .x5 0.502 0.082 6.136 0.000 0.502 0.313
## .x6 0.263 0.050 5.264 0.000 0.263 0.256
## .x7 0.888 0.120 7.416 0.000 0.888 0.734
## .x8 0.541 0.095 5.706 0.000 0.541 0.568
## .x9 0.654 0.096 6.805 0.000 0.654 0.666
## visual 0.796 0.172 4.641 0.000 1.000 1.000
## textual 0.879 0.131 6.694 0.000 1.000 1.000
## speed 0.322 0.082 3.914 0.000 1.000 1.000
##
##
## Group 2 [Grant-White]:
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual =~
## x1 1.000 0.841 0.721
## x2 (.p2.) 0.576 0.101 5.713 0.000 0.484 0.442
## x3 (.p3.) 0.798 0.112 7.146 0.000 0.672 0.643
## textual =~
## x4 1.000 0.933 0.847
## x5 (.p5.) 1.120 0.066 16.965 0.000 1.045 0.862
## x6 (.p6.) 0.932 0.056 16.608 0.000 0.869 0.796
## speed =~
## x7 1.000 0.711 0.668
## x8 (.p8.) 1.130 0.145 7.786 0.000 0.803 0.773
## x9 (.p9.) 1.009 0.132 7.667 0.000 0.717 0.704
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual ~~
## textual 0.427 0.097 4.417 0.000 0.544 0.544
## speed 0.329 0.082 4.006 0.000 0.550 0.550
## textual ~~
## speed 0.236 0.073 3.224 0.001 0.356 0.356
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 (.25.) 5.001 0.090 55.760 0.000 5.001 4.286
## .x2 (.26.) 6.151 0.077 79.905 0.000 6.151 5.618
## .x3 (.27.) 2.271 0.083 27.387 0.000 2.271 2.174
## .x4 (.28.) 2.778 0.087 31.953 0.000 2.778 2.522
## .x5 (.29.) 4.035 0.096 41.858 0.000 4.035 3.330
## .x6 (.30.) 1.926 0.079 24.426 0.000 1.926 1.763
## .x7 (.31.) 4.242 0.073 57.975 0.000 4.242 3.991
## .x8 (.32.) 5.630 0.072 78.531 0.000 5.630 5.422
## .x9 (.33.) 5.465 0.069 79.016 0.000 5.465 5.369
## visual -0.148 0.122 -1.211 0.226 -0.176 -0.176
## textual 0.576 0.117 4.918 0.000 0.618 0.618
## speed -0.177 0.090 -1.968 0.049 -0.250 -0.250
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.654 0.128 5.094 0.000 0.654 0.480
## .x2 0.964 0.123 7.812 0.000 0.964 0.804
## .x3 0.641 0.101 6.316 0.000 0.641 0.587
## .x4 0.343 0.062 5.534 0.000 0.343 0.283
## .x5 0.376 0.073 5.133 0.000 0.376 0.256
## .x6 0.437 0.067 6.559 0.000 0.437 0.366
## .x7 0.625 0.095 6.574 0.000 0.625 0.553
## .x8 0.434 0.088 4.914 0.000 0.434 0.403
## .x9 0.522 0.086 6.102 0.000 0.522 0.504
## visual 0.708 0.160 4.417 0.000 1.000 1.000
## textual 0.870 0.131 6.659 0.000 1.000 1.000
## speed 0.505 0.115 4.379 0.000 1.000 1.000
lavTestScore(scalar)
## $test
##
## total score test:
##
## test X2 df p.value
## 1 score 46.956 15 0
##
## $uni
##
## univariate score tests:
##
## lhs op rhs X2 df p.value
## 1 .p2. == .p38. 0.306 1 0.580
## 2 .p3. == .p39. 1.636 1 0.201
## 3 .p5. == .p41. 2.744 1 0.098
## 4 .p6. == .p42. 2.627 1 0.105
## 5 .p8. == .p44. 0.027 1 0.871
## 6 .p9. == .p45. 0.004 1 0.952
## 7 .p25. == .p61. 5.847 1 0.016
## 8 .p26. == .p62. 6.863 1 0.009
## 9 .p27. == .p63. 19.193 1 0.000
## 10 .p28. == .p64. 2.139 1 0.144
## 11 .p29. == .p65. 1.563 1 0.211
## 12 .p30. == .p66. 0.032 1 0.857
## 13 .p31. == .p67. 15.021 1 0.000
## 14 .p32. == .p68. 4.710 1 0.030
## 15 .p33. == .p69. 1.498 1 0.221
The largest univariate score test statistic is for the constraint .p27. == .p63. (X2 = 19.193).
inspect(scalar, what = "list")
## id lhs op rhs user block group free ustart exo label plabel
## 1 1 visual =~ x1 1 1 1 0 1 0 .p1.
## 2 2 visual =~ x2 1 1 1 1 NA 0 .p2. .p2.
## 3 3 visual =~ x3 1 1 1 2 NA 0 .p3. .p3.
## 4 4 textual =~ x4 1 1 1 0 1 0 .p4.
## 5 5 textual =~ x5 1 1 1 3 NA 0 .p5. .p5.
## 6 6 textual =~ x6 1 1 1 4 NA 0 .p6. .p6.
## 7 7 speed =~ x7 1 1 1 0 1 0 .p7.
## 8 8 speed =~ x8 1 1 1 5 NA 0 .p8. .p8.
## 9 9 speed =~ x9 1 1 1 6 NA 0 .p9. .p9.
## 10 10 x1 ~~ x1 0 1 1 7 NA 0 .p10.
## 11 11 x2 ~~ x2 0 1 1 8 NA 0 .p11.
## 12 12 x3 ~~ x3 0 1 1 9 NA 0 .p12.
## 13 13 x4 ~~ x4 0 1 1 10 NA 0 .p13.
## 14 14 x5 ~~ x5 0 1 1 11 NA 0 .p14.
## 15 15 x6 ~~ x6 0 1 1 12 NA 0 .p15.
## 16 16 x7 ~~ x7 0 1 1 13 NA 0 .p16.
## 17 17 x8 ~~ x8 0 1 1 14 NA 0 .p17.
## 18 18 x9 ~~ x9 0 1 1 15 NA 0 .p18.
## 19 19 visual ~~ visual 0 1 1 16 NA 0 .p19.
## 20 20 textual ~~ textual 0 1 1 17 NA 0 .p20.
## 21 21 speed ~~ speed 0 1 1 18 NA 0 .p21.
## 22 22 visual ~~ textual 0 1 1 19 NA 0 .p22.
## 23 23 visual ~~ speed 0 1 1 20 NA 0 .p23.
## 24 24 textual ~~ speed 0 1 1 21 NA 0 .p24.
## 25 25 x1 ~1 0 1 1 22 NA 0 .p25. .p25.
## 26 26 x2 ~1 0 1 1 23 NA 0 .p26. .p26.
## 27 27 x3 ~1 0 1 1 24 NA 0 .p27. .p27.
## 28 28 x4 ~1 0 1 1 25 NA 0 .p28. .p28.
## 29 29 x5 ~1 0 1 1 26 NA 0 .p29. .p29.
## 30 30 x6 ~1 0 1 1 27 NA 0 .p30. .p30.
## 31 31 x7 ~1 0 1 1 28 NA 0 .p31. .p31.
## 32 32 x8 ~1 0 1 1 29 NA 0 .p32. .p32.
## 33 33 x9 ~1 0 1 1 30 NA 0 .p33. .p33.
## 34 34 visual ~1 0 1 1 0 0 0 .p34.
## 35 35 textual ~1 0 1 1 0 0 0 .p35.
## 36 36 speed ~1 0 1 1 0 0 0 .p36.
## 37 37 visual =~ x1 1 2 2 0 1 0 .p37.
## 38 38 visual =~ x2 1 2 2 31 NA 0 .p2. .p38.
## 39 39 visual =~ x3 1 2 2 32 NA 0 .p3. .p39.
## 40 40 textual =~ x4 1 2 2 0 1 0 .p40.
## 41 41 textual =~ x5 1 2 2 33 NA 0 .p5. .p41.
## 42 42 textual =~ x6 1 2 2 34 NA 0 .p6. .p42.
## 43 43 speed =~ x7 1 2 2 0 1 0 .p43.
## 44 44 speed =~ x8 1 2 2 35 NA 0 .p8. .p44.
## 45 45 speed =~ x9 1 2 2 36 NA 0 .p9. .p45.
## 46 46 x1 ~~ x1 0 2 2 37 NA 0 .p46.
## 47 47 x2 ~~ x2 0 2 2 38 NA 0 .p47.
## 48 48 x3 ~~ x3 0 2 2 39 NA 0 .p48.
## 49 49 x4 ~~ x4 0 2 2 40 NA 0 .p49.
## 50 50 x5 ~~ x5 0 2 2 41 NA 0 .p50.
## 51 51 x6 ~~ x6 0 2 2 42 NA 0 .p51.
## 52 52 x7 ~~ x7 0 2 2 43 NA 0 .p52.
## 53 53 x8 ~~ x8 0 2 2 44 NA 0 .p53.
## 54 54 x9 ~~ x9 0 2 2 45 NA 0 .p54.
## 55 55 visual ~~ visual 0 2 2 46 NA 0 .p55.
## 56 56 textual ~~ textual 0 2 2 47 NA 0 .p56.
## 57 57 speed ~~ speed 0 2 2 48 NA 0 .p57.
## 58 58 visual ~~ textual 0 2 2 49 NA 0 .p58.
## 59 59 visual ~~ speed 0 2 2 50 NA 0 .p59.
## 60 60 textual ~~ speed 0 2 2 51 NA 0 .p60.
## 61 61 x1 ~1 0 2 2 52 NA 0 .p25. .p61.
## 62 62 x2 ~1 0 2 2 53 NA 0 .p26. .p62.
## 63 63 x3 ~1 0 2 2 54 NA 0 .p27. .p63.
## 64 64 x4 ~1 0 2 2 55 NA 0 .p28. .p64.
## 65 65 x5 ~1 0 2 2 56 NA 0 .p29. .p65.
## 66 66 x6 ~1 0 2 2 57 NA 0 .p30. .p66.
## 67 67 x7 ~1 0 2 2 58 NA 0 .p31. .p67.
## 68 68 x8 ~1 0 2 2 59 NA 0 .p32. .p68.
## 69 69 x9 ~1 0 2 2 60 NA 0 .p33. .p69.
## 70 70 visual ~1 0 2 2 61 NA 0 .p70.
## 71 71 textual ~1 0 2 2 62 NA 0 .p71.
## 72 72 speed ~1 0 2 2 63 NA 0 .p72.
## 73 73 .p2. == .p38. 2 0 0 0 NA 0
## 74 74 .p3. == .p39. 2 0 0 0 NA 0
## 75 75 .p5. == .p41. 2 0 0 0 NA 0
## 76 76 .p6. == .p42. 2 0 0 0 NA 0
## 77 77 .p8. == .p44. 2 0 0 0 NA 0
## 78 78 .p9. == .p45. 2 0 0 0 NA 0
## 79 79 .p25. == .p61. 2 0 0 0 NA 0
## 80 80 .p26. == .p62. 2 0 0 0 NA 0
## 81 81 .p27. == .p63. 2 0 0 0 NA 0
## 82 82 .p28. == .p64. 2 0 0 0 NA 0
## 83 83 .p29. == .p65. 2 0 0 0 NA 0
## 84 84 .p30. == .p66. 2 0 0 0 NA 0
## 85 85 .p31. == .p67. 2 0 0 0 NA 0
## 86 86 .p32. == .p68. 2 0 0 0 NA 0
## 87 87 .p33. == .p69. 2 0 0 0 NA 0
## start est se
## 1 1.000 1.000 0.000
## 2 0.769 0.576 0.101
## 3 1.186 0.798 0.112
## 4 1.000 1.000 0.000
## 5 1.237 1.120 0.066
## 6 0.865 0.932 0.056
## 7 1.000 1.000 0.000
## 8 1.227 1.130 0.145
## 9 0.827 1.009 0.132
## 10 0.698 0.555 0.139
## 11 0.752 1.296 0.158
## 12 0.673 0.944 0.136
## 13 0.660 0.445 0.069
## 14 0.854 0.502 0.082
## 15 0.487 0.263 0.050
## 16 0.585 0.888 0.120
## 17 0.476 0.541 0.095
## 18 0.489 0.654 0.096
## 19 0.050 0.796 0.172
## 20 0.050 0.879 0.131
## 21 0.050 0.322 0.082
## 22 0.000 0.410 0.095
## 23 0.000 0.178 0.066
## 24 0.000 0.180 0.062
## 25 4.941 5.001 0.090
## 26 5.984 6.151 0.077
## 27 2.487 2.271 0.083
## 28 2.823 2.778 0.087
## 29 3.995 4.035 0.096
## 30 1.922 1.926 0.079
## 31 4.432 4.242 0.073
## 32 5.563 5.630 0.072
## 33 5.418 5.465 0.069
## 34 0.000 0.000 0.000
## 35 0.000 0.000 0.000
## 36 0.000 0.000 0.000
## 37 1.000 1.000 0.000
## 38 0.896 0.576 0.101
## 39 1.155 0.798 0.112
## 40 1.000 1.000 0.000
## 41 0.991 1.120 0.066
## 42 0.962 0.932 0.056
## 43 1.000 1.000 0.000
## 44 1.282 1.130 0.145
## 45 0.895 1.009 0.132
## 46 0.659 0.654 0.128
## 47 0.613 0.964 0.123
## 48 0.537 0.641 0.101
## 49 0.629 0.343 0.062
## 50 0.671 0.376 0.073
## 51 0.640 0.437 0.067
## 52 0.531 0.625 0.095
## 53 0.547 0.434 0.088
## 54 0.526 0.522 0.086
## 55 0.050 0.708 0.160
## 56 0.050 0.870 0.131
## 57 0.050 0.505 0.115
## 58 0.000 0.427 0.097
## 59 0.000 0.329 0.082
## 60 0.000 0.236 0.073
## 61 4.930 5.001 0.090
## 62 6.200 6.151 0.077
## 63 1.996 2.271 0.083
## 64 3.317 2.778 0.087
## 65 4.712 4.035 0.096
## 66 2.469 1.926 0.079
## 67 3.921 4.242 0.073
## 68 5.488 5.630 0.072
## 69 5.327 5.465 0.069
## 70 0.000 -0.148 0.122
## 71 0.000 0.576 0.117
## 72 0.000 -0.177 0.090
## 73 0.000 0.000 0.000
## 74 0.000 0.000 0.000
## 75 0.000 0.000 0.000
## 76 0.000 0.000 0.000
## 77 0.000 0.000 0.000
## 78 0.000 0.000 0.000
## 79 0.000 0.000 0.000
## 80 0.000 0.000 0.000
## 81 0.000 0.000 0.000
## 82 0.000 0.000 0.000
## 83 0.000 0.000 0.000
## 84 0.000 0.000 0.000
## 85 0.000 0.000 0.000
## 86 0.000 0.000 0.000
## 87 0.000 0.000 0.000
Parameter .p27. is the intercept of x3 (x3 ~ 1) in group 1, and .p63. is the intercept of x3 in group 2, so we free the x3 intercept across groups.
scalar_partial <- cfa(baseline.model, data=HolzingerSwineford1939, group="school", group.equal = c("loadings", "intercepts"), group.partial=c('x3 ~1'))
summary(scalar_partial, standardized = TRUE, fit.measures = TRUE)
## lavaan (0.5-23.1097) converged normally after 63 iterations
##
## Number of observations per group
## Pasteur 156
## Grant-White 145
##
## Estimator ML
## Minimum Function Test Statistic 144.579
## Degrees of freedom 59
## P-value (Chi-square) 0.000
##
## Chi-square for each group:
##
## Pasteur 79.471
## Grant-White 65.108
##
## Model test baseline model:
##
## Minimum Function Test Statistic 957.769
## Degrees of freedom 72
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.903
## Tucker-Lewis Index (TLI) 0.882
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -3696.561
## Loglikelihood unrestricted model (H1) -3624.272
##
## Number of free parameters 49
## Akaike (AIC) 7491.123
## Bayesian (BIC) 7672.771
## Sample-size adjusted Bayesian (BIC) 7517.371
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.098
## 90 Percent Confidence Interval 0.078 0.119
## P-value RMSEA <= 0.05 0.000
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.077
##
## Parameter Estimates:
##
## Information Expected
## Standard Errors Standard
##
##
## Group 1 [Pasteur]:
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual =~
## x1 1.000 0.893 0.766
## x2 (.p2.) 0.606 0.101 5.988 0.000 0.541 0.433
## x3 (.p3.) 0.791 0.109 7.264 0.000 0.706 0.601
## textual =~
## x4 1.000 0.938 0.815
## x5 (.p5.) 1.120 0.066 16.964 0.000 1.050 0.829
## x6 (.p6.) 0.932 0.056 16.604 0.000 0.874 0.862
## speed =~
## x7 1.000 0.568 0.516
## x8 (.p8.) 1.129 0.145 7.788 0.000 0.642 0.658
## x9 (.p9.) 1.008 0.131 7.667 0.000 0.573 0.578
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual ~~
## textual 0.404 0.095 4.248 0.000 0.483 0.483
## speed 0.177 0.066 2.679 0.007 0.349 0.349
## textual ~~
## speed 0.180 0.062 2.900 0.004 0.338 0.338
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 (.25.) 4.914 0.092 53.532 0.000 4.914 4.219
## .x2 (.26.) 6.087 0.079 77.018 0.000 6.087 4.874
## .x3 2.487 0.094 26.476 0.000 2.487 2.120
## .x4 (.28.) 2.778 0.087 31.953 0.000 2.778 2.413
## .x5 (.29.) 4.035 0.096 41.856 0.000 4.035 3.184
## .x6 (.30.) 1.926 0.079 24.426 0.000 1.926 1.900
## .x7 (.31.) 4.242 0.073 57.966 0.000 4.242 3.855
## .x8 (.32.) 5.631 0.072 78.521 0.000 5.631 5.770
## .x9 (.33.) 5.465 0.069 79.039 0.000 5.465 5.517
## visual 0.000 0.000 0.000
## textual 0.000 0.000 0.000
## speed 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.560 0.137 4.084 0.000 0.560 0.413
## .x2 1.267 0.156 8.107 0.000 1.267 0.813
## .x3 0.879 0.128 6.854 0.000 0.879 0.638
## .x4 0.446 0.069 6.431 0.000 0.446 0.336
## .x5 0.502 0.082 6.131 0.000 0.502 0.313
## .x6 0.263 0.050 5.261 0.000 0.263 0.256
## .x7 0.888 0.120 7.415 0.000 0.888 0.734
## .x8 0.540 0.095 5.703 0.000 0.540 0.568
## .x9 0.654 0.096 6.808 0.000 0.654 0.666
## visual 0.797 0.170 4.694 0.000 1.000 1.000
## textual 0.879 0.131 6.693 0.000 1.000 1.000
## speed 0.323 0.082 3.914 0.000 1.000 1.000
##
##
## Group 2 [Grant-White]:
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual =~
## x1 1.000 0.846 0.724
## x2 (.p2.) 0.606 0.101 5.988 0.000 0.512 0.467
## x3 (.p3.) 0.791 0.109 7.264 0.000 0.669 0.652
## textual =~
## x4 1.000 0.933 0.847
## x5 (.p5.) 1.120 0.066 16.964 0.000 1.045 0.862
## x6 (.p6.) 0.932 0.056 16.604 0.000 0.869 0.796
## speed =~
## x7 1.000 0.711 0.669
## x8 (.p8.) 1.129 0.145 7.788 0.000 0.803 0.773
## x9 (.p9.) 1.008 0.131 7.667 0.000 0.717 0.704
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## visual ~~
## textual 0.426 0.097 4.412 0.000 0.540 0.540
## speed 0.327 0.082 3.993 0.000 0.544 0.544
## textual ~~
## speed 0.236 0.073 3.222 0.001 0.356 0.356
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 (.25.) 4.914 0.092 53.532 0.000 4.914 4.204
## .x2 (.26.) 6.087 0.079 77.018 0.000 6.087 5.552
## .x3 1.956 0.108 18.178 0.000 1.956 1.908
## .x4 (.28.) 2.778 0.087 31.953 0.000 2.778 2.522
## .x5 (.29.) 4.035 0.096 41.856 0.000 4.035 3.330
## .x6 (.30.) 1.926 0.079 24.426 0.000 1.926 1.763
## .x7 (.31.) 4.242 0.073 57.966 0.000 4.242 3.991
## .x8 (.32.) 5.631 0.072 78.521 0.000 5.631 5.422
## .x9 (.33.) 5.465 0.069 79.039 0.000 5.465 5.368
## visual 0.051 0.129 0.392 0.695 0.060 0.060
## textual 0.576 0.117 4.918 0.000 0.618 0.618
## speed -0.177 0.090 -1.968 0.049 -0.250 -0.250
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .x1 0.651 0.127 5.135 0.000 0.651 0.476
## .x2 0.939 0.122 7.723 0.000 0.939 0.782
## .x3 0.603 0.096 6.255 0.000 0.603 0.574
## .x4 0.343 0.062 5.535 0.000 0.343 0.283
## .x5 0.376 0.073 5.133 0.000 0.376 0.256
## .x6 0.437 0.067 6.559 0.000 0.437 0.366
## .x7 0.624 0.095 6.568 0.000 0.624 0.553
## .x8 0.433 0.088 4.906 0.000 0.433 0.402
## .x9 0.523 0.086 6.109 0.000 0.523 0.505
## visual 0.716 0.160 4.476 0.000 1.000 1.000
## textual 0.870 0.131 6.659 0.000 1.000 1.000
## speed 0.505 0.115 4.381 0.000 1.000 1.000
anova(scalar, scalar_partial)
## Chi Square Difference Test
##
## Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
## scalar_partial 59 7491.1 7672.8 144.58
## scalar 60 7508.6 7686.6 164.10 19.524 1 9.935e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
measurementInvariance(baseline.model, data=HolzingerSwineford1939, group='school', group.partial=c('x3 ~1'))
##
## Measurement invariance models:
##
## Model 1 : fit.configural
## Model 2 : fit.loadings
## Model 3 : fit.intercepts
## Model 4 : fit.means
##
## Chi Square Difference Test
##
## Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
## fit.configural 48 7484.4 7706.8 115.85
## fit.loadings 54 7480.6 7680.8 124.04 8.192 6 0.2243577
## fit.intercepts 59 7491.1 7672.8 144.58 20.535 5 0.0009912 ***
## fit.means 62 7520.0 7690.6 179.49 34.909 3 1.273e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## Fit measures:
##
## cfi rmsea cfi.delta rmsea.delta
## fit.configural 0.923 0.097 NA NA
## fit.loadings 0.921 0.093 0.002 0.004
## fit.intercepts 0.903 0.098 0.018 0.005
## fit.means 0.867 0.112 0.036 0.014
Let's free one more parameter, the intercept of x7.
measurementInvariance(baseline.model, data=HolzingerSwineford1939, group='school', group.partial=c('x3 ~1', 'x7~1'))
##
## Measurement invariance models:
##
## Model 1 : fit.configural
## Model 2 : fit.loadings
## Model 3 : fit.intercepts
## Model 4 : fit.means
##
## Chi Square Difference Test
##
## Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
## fit.configural 48 7484.4 7706.8 115.85
## fit.loadings 54 7480.6 7680.8 124.04 8.1922 6 0.2244
## fit.intercepts 58 7478.0 7663.3 129.42 5.3789 4 0.2506
## fit.means 61 7501.2 7675.5 158.67 29.2519 3 1.982e-06 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
##
## Fit measures:
##
## cfi rmsea cfi.delta rmsea.delta
## fit.configural 0.923 0.097 NA NA
## fit.loadings 0.921 0.093 0.002 0.004
## fit.intercepts 0.919 0.090 0.002 0.002
## fit.means 0.890 0.103 0.030 0.013
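The same partial-invariance model can also be fitted directly with cfa(), freeing the x3 and x7 intercepts, and compared against the metric (loadings-only) model; a sketch:
# Partial scalar invariance with two freed intercepts
scalar_partial2 <- cfa(baseline.model, data = HolzingerSwineford1939, group = "school",
                       group.equal = c("loadings", "intercepts"),
                       group.partial = c('x3 ~1', 'x7~1'))
anova(metric, scalar_partial2)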