These are notes I put together while following the tutorial ‘Confirmatory factor analysis in R’ (YouTube link) by Mike Crowson. The tutorial uses data from the article ‘Three-factor structure for Epistemic Belief Inventory: A cross-validation study’, by F. Leal-Soto and R. Ferrer-Urbina.
As you can see below, each section of the document is divided into separate tabs. This makes the document a bit more manageable and easier to navigate, especially when switching between R output and notes. The ordering is as you’d expect: read the document top to bottom, going through the tabs left to right. If you are running the code in R as you read, bear in mind that you must also run the code chunks in the order they are presented.
First, make sure that you have the necessary packages installed: we’ll be using lavaan and haven (use install.packages('packagename') if either is missing).
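If you like, you can check for and install any missing packages in one go. The snippet below is just a convenience sketch (not part of Crowson’s tutorial) covering the two packages used in this document:
# Minimal sketch: install lavaan and haven if they are not already present
needed_pkgs <- c("lavaan", "haven")
missing_pkgs <- needed_pkgs[!needed_pkgs %in% rownames(installed.packages())]
if (length(missing_pkgs) > 0) install.packages(missing_pkgs)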
Let’s attach lavaan to R’s search path (so we don’t have to write lavaan::functionname when calling its functions) since we’ll be using it so much:
library(lavaan)
Now let’s load in the data. We’re doing this a bit differently from Crowson: we use the read_sav function from the haven package and load the data directly from the web, by feeding read_sav the URL at which the journal has uploaded the study data. This means that you need a working internet connection when running these commands.
data_url <- 'https://doi.org/10.1371/journal.pone.0173295.s002'
ebi_df <- haven::read_sav(data_url) # produces a tibble
ebi_df <- data.frame(ebi_df, stringsAsFactors = FALSE) # converts from 'tibble' to vanilla data frame (since this is what Crowson uses)
We’ll only use data from entries where the value in the data frame’s submuestra (subsample) column is equal to 1. This is because we’re only doing a CFA and the other data (with submuestra=0) had been used for prior exploratory factor analysis (EFA) by the article authors. This is a common pattern: EFA first with a subset of data to find a model, then CFA with the remaining data to put the model to the test.
ebi1_df <- ebi_df[ebi_df$submuestra==1, ]
rm(ebi_df) # removes the original data frame to make sure we don't accidentally use it
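As a quick sanity check (an extra step, not in the tutorial), we can confirm how many observations remain after subsetting; the number should match the 746 observations reported by str() below.
nrow(ebi1_df) # should be 746, the number of rows with submuestra == 1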
Let’s take a quick look at our data frame.
str(ebi1_df)
## 'data.frame': 746 obs. of 34 variables:
## $ ce1 : num 5 5 4 5 5 5 3 4 3 4 ...
## $ ce2 : num 3 4 3 5 4 5 3 3 4 1 ...
## $ ce3 : num 3 3 4 5 2 1 5 5 4 5 ...
## $ ce4 : num 4 2 3 5 4 5 2 4 3 5 ...
## $ ce5 : num 2 3 2 2 2 2 2 3 3 1 ...
## $ ce6 : num 3 3 3 3 1 4 3 3 3 5 ...
## $ ce7 : num 4 4 4 2 5 5 5 5 4 5 ...
## $ ce8 : num 1 5 5 3 3 4 2 2 3 5 ...
## $ ce9 : num 2 4 3 2 5 1 2 4 4 5 ...
## $ ce10 : num 3 5 4 5 4 1 4 3 4 3 ...
## $ ce11 : num 2 5 4 1 5 4 3 4 2 2 ...
## $ ce12 : num 3 5 3 5 4 3 3 4 3 5 ...
## $ ce13 : num 4 3 2 1 3 4 4 5 3 5 ...
## $ ce14 : num 3 3 2 5 4 3 2 2 2 5 ...
## $ ce15 : num 2 4 3 5 1 5 1 2 2 5 ...
## $ ce16 : num 2 5 4 5 4 3 5 5 4 5 ...
## $ ce17 : num 3 5 3 5 3 4 3 3 4 3 ...
## $ ce18 : num 2 5 4 5 5 5 3 5 4 5 ...
## $ ce19 : num 3 5 4 5 5 4 3 3 3 5 ...
## $ ce20 : num 2 5 1 5 1 2 2 2 2 3 ...
## $ ce21 : num 2 3 3 1 4 4 1 2 3 2 ...
## $ ce22 : num 3 4 3 2 5 4 5 4 4 5 ...
## $ ce23 : num 3 5 2 4 5 4 2 4 2 5 ...
## $ ce24 : num 2 5 2 2 3 5 2 5 3 5 ...
## $ ce25 : num 4 1 2 4 2 5 3 4 3 5 ...
## $ ce26 : num 4 2 2 1 4 3 3 3 3 5 ...
## $ ce27 : num 2 2 3 1 3 1 3 4 2 5 ...
## $ ce28 : num 3 3 4 3 4 3 5 4 4 5 ...
## $ edad : num 12 12 13 12 13 12 12 12 12 12 ...
## $ sexo : dbl+lbl [1:746] 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 1, 1...
## ..@ label : chr "Sexo"
## ..@ format.spss : chr "F1.0"
## ..@ display_width: int 1
## ..@ labels : Named num 1 2
## .. ..- attr(*, "names")= chr [1:2] "Hombre" "Mujer"
## $ origen : dbl+lbl [1:746] 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## ..@ label : chr "Origen de los datos"
## ..@ format.spss : chr "F1.0"
## ..@ display_width: int 1
## ..@ labels : Named num 1 2
## .. ..- attr(*, "names")= chr [1:2] "Iquique" "Arica"
## $ submuestra: dbl+lbl [1:746] 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## ..@ label : chr "Aproximadamente 40% de los casos (SAMPLE)"
## ..@ format.spss : chr "F1.0"
## ..@ display_width: int 10
## ..@ labels : Named num 0 1
## .. ..- attr(*, "names")= chr [1:2] "Not Selected" "Selected"
## $ ce2Rec : num 3 2 3 1 2 1 3 3 2 5 ...
## $ ce22Rec : num 3 2 3 4 1 2 1 2 2 1 ...
Each item in the instrument used in the study is represented in the data frame by a column whose name follows the pattern ce<item_number>; e.g. ce1 holds participants’ responses to the first item of the instrument. The instrument is labelled “Creencias epistemológicas” (epistemological beliefs) in the .sav file, which is why the abbreviation ‘ce’ is used for the items.
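Since all item columns follow this naming pattern, you can pick them out programmatically if you ever need to (an illustration, not a step from the tutorial); note that the recoded columns ce2Rec and ce22Rec are deliberately not matched:
# Illustration only: select the columns whose names match the ce<item_number> pattern
item_columns <- grep("^ce[0-9]+$", names(ebi1_df), value = TRUE)
item_columns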
First, we need to define the model. With lavaan, we do this within a string (a one-element character vector), delimited by ' or " symbols.
The model has three factors (also called latent variables); in the model syntax below they are named IA, OA and SC.
Note: If you want to know how and why the researchers decided on using these factors, you can read the article.
To specify a latent variable and the indicator variables that load onto it, the syntax LV =~ IV1 + IV2 is used, where LV is a latent variable and IV1 and IV2 are indicator variables (e.g. ce5 in our case).
We also specify a correlated error between the ce3 and ce8 items. Crowson doesn’t explain in the video why this is done, and I didn’t find a reason in the article, so perhaps the correlation is just included to show how lavaan works. Specifying a covariance/correlation between two specific variables is done using two tilde signs: ~~.
The same syntax can also be used to tell lavaan to estimate a variance, by placing the same variable on both sides of the ~~ operator: myvar ~~ myvar. In this case we don’t need to do so, because lavaan’s cfa function estimates the error variance of each indicator variable by default (we’ll verify this after fitting the model).
ebi1_model <- '
IA =~ ce3 + ce5 + ce8 + ce9 + ce14 + ce15 + ce20 + ce24 + ce27
OA =~ ce4 + ce25 + ce26
SC =~ ce1 + ce2 + ce11 + ce17 + ce22
ce3 ~~ ce8
'
We use lavaan’s cfa function to fit the model, also passing our data frame to the data argument.
ebi1_model_fit <- cfa(ebi1_model, data=ebi1_df)
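As a side note (not part of the tutorial), you can verify that cfa really did add a residual variance for each indicator by default by looking at the model’s parameter table; variance parameters are the rows where the ~~ operator has the same variable on both sides:
# Optional check: list the variance parameters lavaan set up automatically
pt <- parameterTable(ebi1_model_fit)
pt[pt$op == "~~" & pt$lhs == pt$rhs, c("lhs", "op", "rhs", "free", "est")]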
We set fit.measures=TRUE to obtain various global fit indices, and standardized=TRUE to obtain standardized factor loadings in addition to the unstandardized ones.
summary(ebi1_model_fit, fit.measures=TRUE, standardized=TRUE)
## lavaan 0.6-6 ended normally after 54 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of free parameters 38
##
## Number of observations 746
##
## Model Test User Model:
##
## Test statistic 313.009
## Degrees of freedom 115
## P-value (Chi-square) 0.000
##
## Model Test Baseline Model:
##
## Test statistic 1493.289
## Degrees of freedom 136
## P-value 0.000
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.854
## Tucker-Lewis Index (TLI) 0.827
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -19691.841
## Loglikelihood unrestricted model (H1) -19535.336
##
## Akaike (AIC) 39459.681
## Bayesian (BIC) 39635.041
## Sample-size adjusted Bayesian (BIC) 39514.377
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.048
## 90 Percent confidence interval - lower 0.042
## 90 Percent confidence interval - upper 0.054
## P-value RMSEA <= 0.05 0.684
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.050
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Expected
## Information saturated (h1) model Structured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## IA =~
## ce3 1.000 0.630 0.465
## ce5 1.109 0.122 9.074 0.000 0.699 0.544
## ce8 0.820 0.102 8.038 0.000 0.517 0.386
## ce9 0.571 0.095 5.983 0.000 0.360 0.288
## ce14 1.177 0.127 9.278 0.000 0.742 0.571
## ce15 1.053 0.113 9.285 0.000 0.664 0.572
## ce20 0.881 0.112 7.893 0.000 0.556 0.424
## ce24 0.966 0.113 8.537 0.000 0.609 0.484
## ce27 0.927 0.111 8.386 0.000 0.584 0.469
## OA =~
## ce4 1.000 0.819 0.679
## ce25 0.756 0.095 7.940 0.000 0.619 0.546
## ce26 0.766 0.097 7.921 0.000 0.627 0.522
## SC =~
## ce1 1.000 0.442 0.368
## ce2 0.986 0.199 4.944 0.000 0.435 0.420
## ce11 1.198 0.237 5.043 0.000 0.529 0.483
## ce17 0.692 0.160 4.322 0.000 0.305 0.304
## ce22 0.767 0.172 4.447 0.000 0.338 0.321
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .ce3 ~~
## .ce8 0.206 0.061 3.380 0.001 0.206 0.139
## IA ~~
## OA 0.093 0.030 3.141 0.002 0.180 0.180
## SC 0.038 0.018 2.078 0.038 0.138 0.138
## OA ~~
## SC 0.080 0.027 2.931 0.003 0.221 0.221
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .ce3 1.438 0.083 17.231 0.000 1.438 0.783
## .ce5 1.163 0.071 16.279 0.000 1.163 0.704
## .ce8 1.526 0.085 17.944 0.000 1.526 0.851
## .ce9 1.435 0.077 18.667 0.000 1.435 0.917
## .ce14 1.141 0.072 15.825 0.000 1.141 0.675
## .ce15 0.909 0.057 15.806 0.000 0.909 0.673
## .ce20 1.407 0.079 17.739 0.000 1.407 0.820
## .ce24 1.212 0.071 17.113 0.000 1.212 0.766
## .ce27 1.210 0.070 17.286 0.000 1.210 0.780
## .ce4 0.784 0.090 8.692 0.000 0.784 0.539
## .ce25 0.902 0.066 13.629 0.000 0.902 0.702
## .ce26 1.052 0.073 14.436 0.000 1.052 0.728
## .ce1 1.242 0.078 15.965 0.000 1.242 0.864
## .ce2 0.885 0.060 14.695 0.000 0.885 0.824
## .ce11 0.920 0.072 12.794 0.000 0.920 0.767
## .ce17 0.917 0.053 17.168 0.000 0.917 0.908
## .ce22 0.997 0.059 16.884 0.000 0.997 0.897
## IA 0.397 0.070 5.673 0.000 1.000 1.000
## OA 0.670 0.103 6.536 0.000 1.000 1.000
## SC 0.195 0.058 3.385 0.001 1.000 1.000
Model Test User Model

Historically, a significant p-value for the Chi-square test was taken as an indication of a lack of fit. However, Chi-square tests are affected by sample size, and given that CFA/SEM are generally large-sample procedures, the p-value for the Chi-square test will very often be significant. For this reason, nowadays much less weight is given to this test.
User Model versus Baseline Model

Note: the ‘Tucker-Lewis Index’ is sometimes referred to as the ‘non-normed fit index’.
Typically, CFI and TLI values of .95 or above are interpreted as indicating good fit, and values of around .90 as acceptable fit; values below .90 are taken to indicate a lack of fit. Since the values we get (.854 and .827) are below .90, they indicate a lack of fit.
Root Mean Square Error of Approximation (RMSEA) values at or below 0.05 are generally interpreted as indicating close fit, values up to about 0.08 as reasonable fit, and values of 0.10 or above as poor fit. The RMSEA value we got (0.048) is below 0.05. The ‘PCLOSE test’ of the RMSEA (P-value RMSEA <= 0.05) is non-significant, which is also taken as an indicator of close fit.
Standardized Root Mean Square Residual (SRMR) values at 0.05 or below are generally considered indicative of a well-fitting model.
The statistics point in different directions with regard to fit: the CFI and TLI suggest less than optimal fit, while the RMSEA and SRMR both indicate close fit. One could also note here how this makes it obvious that binary judgments of fit based on fixed cut-offs are inherently problematic.
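If you just want these indices without the full summary output, they can be extracted directly from the fitted object with lavaan’s fitMeasures function (an extra step not shown in the tutorial):
# Pull out selected global fit indices from the fitted model
fitMeasures(ebi1_model_fit, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "rmsea.pvalue", "srmr"))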
The syntax LV1 ~~ LV2 specifies the covariance (and, in the standardized solution, the correlation) between latent variables LV1 and LV2. Note that this is the same syntax as for all other covariances, such as the one we specified between the two individual variables ce3 and ce8.
For example, the covariance between IA and SC (0.038) is statistically significant (p = 0.038); the corresponding correlation (Std.all) is 0.138. The residual covariance between ce3 and ce8 (0.206) is likewise statistically significant (p = 0.001), with a corresponding correlation of 0.139.
Variance estimates (Estimate) for indicator (‘non-latent’) variables such as ce3 or ce5 represent estimates of error variance. For the latent variables, the values in the unstandardized solution represent estimates of factor variance; in the standardized solution (Std.all) the variance of each latent factor is 1.
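If you want the model-implied correlations between the latent variables as a matrix (rather than reading them off the Std.all column), one way to get them is via lavInspect (again, not a step from the tutorial):
# Correlation matrix of the latent variables (IA, OA, SC)
lavInspect(ebi1_model_fit, "cor.lv")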
The analysis we carried out above assumes that the variables are continuous. Sometimes you might want to treat variables as ordered categorical, such as when dealing with Likert type data as we are here. To achieve this, the cfa call can be modified using the ordered argument.
indicator_variables <- c("ce3", "ce5", "ce8", "ce9", "ce14", "ce15", "ce20", "ce24", "ce27",
                         "ce4", "ce25", "ce26", "ce1", "ce2", "ce11", "ce22")
# Note: "ce17" is not included in this vector (an apparent oversight carried over from the
# tutorial), so it is still treated as continuous in the output below.
ebi1_ordered_model_fit <- cfa(ebi1_model, data = ebi1_df, ordered = indicator_variables)
This tells the cfa function to treat the specified variables as ordered categorical. The cfa function will then use diagonally weighted least squares (DWLS) to estimate the model parameters, and you also get robust standard error estimates. Because ce17 is not in the vector above, it remains continuous; you can see this in the output below, where ce17 is the only indicator that gets a non-zero intercept and a residual variance with a standard error.
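To double-check which variables were actually treated as ordered (and to confirm that ce17 was left out), you can ask lavaan for the names of the ordered observed variables (an optional check, not in the tutorial):
# Which observed variables are treated as ordered categorical?
lavNames(ebi1_ordered_model_fit, type = "ov.ord")
# Any indicators still treated as continuous? (here: ce17)
setdiff(lavNames(ebi1_ordered_model_fit, type = "ov"),
        lavNames(ebi1_ordered_model_fit, type = "ov.ord"))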
summary(ebi1_ordered_model_fit, fit.measures=TRUE, standardized=TRUE)
## lavaan 0.6-6 ended normally after 41 iterations
##
## Estimator DWLS
## Optimization method NLMINB
## Number of free parameters 87
##
## Number of observations 746
##
## Model Test User Model:
## Standard Robust
## Test Statistic 422.462 431.605
## Degrees of freedom 115 115
## P-value (Chi-square) 0.000 0.000
## Scaling correction factor 1.040
## Shift parameter 25.329
## simple second-order correction
##
## Model Test Baseline Model:
##
## Test statistic 3685.852 2597.932
## Degrees of freedom 136 136
## P-value 0.000 0.000
## Scaling correction factor 1.442
##
## User Model versus Baseline Model:
##
## Comparative Fit Index (CFI) 0.913 0.871
## Tucker-Lewis Index (TLI) 0.898 0.848
##
## Robust Comparative Fit Index (CFI) NA
## Robust Tucker-Lewis Index (TLI) NA
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.060 0.061
## 90 Percent confidence interval - lower 0.054 0.055
## 90 Percent confidence interval - upper 0.066 0.067
## P-value RMSEA <= 0.05 0.004 0.002
##
## Robust RMSEA NA
## 90 Percent confidence interval - lower NA
## 90 Percent confidence interval - upper NA
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.059 0.059
##
## Parameter Estimates:
##
## Standard errors Robust.sem
## Information Expected
## Information saturated (h1) model Unstructured
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## IA =~
## ce3 1.000 0.512 0.512
## ce5 1.170 0.095 12.348 0.000 0.599 0.599
## ce8 0.805 0.085 9.494 0.000 0.412 0.412
## ce9 0.610 0.078 7.873 0.000 0.312 0.312
## ce14 1.208 0.083 14.532 0.000 0.618 0.618
## ce15 1.272 0.096 13.221 0.000 0.651 0.651
## ce20 0.977 0.091 10.680 0.000 0.500 0.500
## ce24 1.031 0.088 11.728 0.000 0.528 0.528
## ce27 1.011 0.087 11.620 0.000 0.517 0.517
## OA =~
## ce4 1.000 0.697 0.697
## ce25 0.815 0.074 11.059 0.000 0.568 0.568
## ce26 0.851 0.085 10.068 0.000 0.593 0.593
## SC =~
## ce1 1.000 0.488 0.488
## ce2 0.954 0.137 6.979 0.000 0.465 0.465
## ce11 1.037 0.152 6.813 0.000 0.506 0.506
## ce17 0.579 0.110 5.242 0.000 0.282 0.281
## ce22 0.664 0.127 5.247 0.000 0.324 0.324
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .ce3 ~~
## .ce8 0.122 0.030 4.052 0.000 0.122 0.156
## IA ~~
## OA 0.067 0.018 3.720 0.000 0.187 0.187
## SC 0.029 0.014 2.107 0.035 0.115 0.115
## OA ~~
## SC 0.085 0.022 3.923 0.000 0.250 0.250
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .ce3 0.000 0.000 0.000
## .ce5 0.000 0.000 0.000
## .ce8 0.000 0.000 0.000
## .ce9 0.000 0.000 0.000
## .ce14 0.000 0.000 0.000
## .ce15 0.000 0.000 0.000
## .ce20 0.000 0.000 0.000
## .ce24 0.000 0.000 0.000
## .ce27 0.000 0.000 0.000
## .ce4 0.000 0.000 0.000
## .ce25 0.000 0.000 0.000
## .ce26 0.000 0.000 0.000
## .ce1 0.000 0.000 0.000
## .ce2 0.000 0.000 0.000
## .ce11 0.000 0.000 0.000
## .ce17 3.210 0.037 86.981 0.000 3.210 3.193
## .ce22 0.000 0.000 0.000
## IA 0.000 0.000 0.000
## OA 0.000 0.000 0.000
## SC 0.000 0.000 0.000
##
## Thresholds:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## ce3|t1 -0.953 0.054 -17.533 0.000 -0.953 -0.953
## ce3|t2 -0.265 0.047 -5.701 0.000 -0.265 -0.265
## ce3|t3 0.248 0.046 5.337 0.000 0.248 0.248
## ce3|t4 0.932 0.054 17.277 0.000 0.932 0.932
## ce5|t1 -0.433 0.048 -9.116 0.000 -0.433 -0.433
## ce5|t2 0.179 0.046 3.876 0.000 0.179 0.179
## ce5|t3 0.857 0.053 16.292 0.000 0.857 0.857
## ce5|t4 1.342 0.065 20.771 0.000 1.342 1.342
## ce8|t1 -0.728 0.051 -14.383 0.000 -0.728 -0.728
## ce8|t2 0.050 0.046 1.098 0.272 0.050 0.050
## ce8|t3 0.485 0.048 10.127 0.000 0.485 0.485
## ce8|t4 1.172 0.059 19.717 0.000 1.172 1.172
## ce9|t1 -1.059 0.057 -18.705 0.000 -1.059 -1.059
## ce9|t2 -0.307 0.047 -6.576 0.000 -0.307 -0.307
## ce9|t3 0.332 0.047 7.085 0.000 0.332 0.332
## ce9|t4 1.139 0.059 19.448 0.000 1.139 1.139
## ce14|t1 -0.838 0.052 -16.024 0.000 -0.838 -0.838
## ce14|t2 -0.067 0.046 -1.463 0.143 -0.067 -0.067
## ce14|t3 0.463 0.048 9.694 0.000 0.463 0.463
## ce14|t4 1.192 0.060 19.873 0.000 1.192 1.192
## ce15|t1 -0.084 0.046 -1.829 0.067 -0.084 -0.084
## ce15|t2 0.741 0.051 14.591 0.000 0.741 0.741
## ce15|t3 1.139 0.059 19.448 0.000 1.139 1.139
## ce15|t4 1.599 0.075 21.285 0.000 1.599 1.599
## ce20|t1 -0.067 0.046 -1.463 0.143 -0.067 -0.067
## ce20|t2 0.582 0.049 11.918 0.000 0.582 0.582
## ce20|t3 0.917 0.054 17.083 0.000 0.917 0.917
## ce20|t4 1.367 0.065 20.878 0.000 1.367 1.367
## ce24|t1 -0.728 0.051 -14.383 0.000 -0.728 -0.728
## ce24|t2 0.054 0.046 1.171 0.242 0.054 0.054
## ce24|t3 0.651 0.050 13.124 0.000 0.651 0.651
## ce24|t4 1.342 0.065 20.771 0.000 1.342 1.342
## ce27|t1 -0.441 0.048 -9.261 0.000 -0.441 -0.441
## ce27|t2 0.325 0.047 6.939 0.000 0.325 0.325
## ce27|t3 0.917 0.054 17.083 0.000 0.917 0.917
## ce27|t4 1.402 0.067 21.007 0.000 1.402 1.402
## ce4|t1 -1.488 0.070 -21.219 0.000 -1.488 -1.488
## ce4|t2 -0.887 0.053 -16.690 0.000 -0.887 -0.887
## ce4|t3 -0.148 0.046 -3.219 0.001 -0.148 -0.148
## ce4|t4 0.578 0.049 11.847 0.000 0.578 0.578
## ce25|t1 -1.439 0.068 -21.115 0.000 -1.439 -1.439
## ce25|t2 -0.777 0.051 -15.142 0.000 -0.777 -0.777
## ce25|t3 0.077 0.046 1.683 0.092 0.077 0.077
## ce25|t4 0.985 0.055 17.912 0.000 0.985 0.985
## ce26|t1 -1.145 0.059 -19.503 0.000 -1.145 -1.145
## ce26|t2 -0.397 0.047 -8.392 0.000 -0.397 -0.397
## ce26|t3 0.437 0.048 9.189 0.000 0.437 0.437
## ce26|t4 1.126 0.058 19.338 0.000 1.126 1.126
## ce1|t1 -1.393 0.066 -20.976 0.000 -1.393 -1.393
## ce1|t2 -0.809 0.052 -15.619 0.000 -0.809 -0.809
## ce1|t3 -0.027 0.046 -0.585 0.558 -0.027 -0.027
## ce1|t4 0.764 0.051 14.936 0.000 0.764 0.764
## ce2|t1 -1.850 0.090 -20.628 0.000 -1.850 -1.850
## ce2|t2 -1.107 0.058 -19.170 0.000 -1.107 -1.107
## ce2|t3 -0.193 0.046 -4.169 0.000 -0.193 -0.193
## ce2|t4 0.764 0.051 14.936 0.000 0.764 0.764
## ce11|t1 -1.662 0.078 -21.221 0.000 -1.662 -1.662
## ce11|t2 -1.002 0.055 -18.099 0.000 -1.002 -1.002
## ce11|t3 -0.176 0.046 -3.803 0.000 -0.176 -0.176
## ce11|t4 0.759 0.051 14.867 0.000 0.759 0.759
## ce22|t1 -1.850 0.090 -20.628 0.000 -1.850 -1.850
## ce22|t2 -1.249 0.062 -20.264 0.000 -1.249 -1.249
## ce22|t3 -0.570 0.049 -11.704 0.000 -0.570 -0.570
## ce22|t4 0.375 0.047 7.957 0.000 0.375 0.375
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .ce3 0.738 0.738 0.738
## .ce5 0.642 0.642 0.642
## .ce8 0.830 0.830 0.830
## .ce9 0.902 0.902 0.902
## .ce14 0.618 0.618 0.618
## .ce15 0.576 0.576 0.576
## .ce20 0.750 0.750 0.750
## .ce24 0.722 0.722 0.722
## .ce27 0.733 0.733 0.733
## .ce4 0.514 0.514 0.514
## .ce25 0.677 0.677 0.677
## .ce26 0.648 0.648 0.648
## .ce1 0.762 0.762 0.762
## .ce2 0.784 0.784 0.784
## .ce11 0.744 0.744 0.744
## .ce17 0.931 0.051 18.370 0.000 0.931 0.921
## .ce22 0.895 0.895 0.895
## IA 0.262 0.032 8.079 0.000 1.000 1.000
## OA 0.486 0.054 8.976 0.000 1.000 1.000
## SC 0.238 0.046 5.139 0.000 1.000 1.000
##
## Scales y*:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## ce3 1.000 1.000 1.000
## ce5 1.000 1.000 1.000
## ce8 1.000 1.000 1.000
## ce9 1.000 1.000 1.000
## ce14 1.000 1.000 1.000
## ce15 1.000 1.000 1.000
## ce20 1.000 1.000 1.000
## ce24 1.000 1.000 1.000
## ce27 1.000 1.000 1.000
## ce4 1.000 1.000 1.000
## ce25 1.000 1.000 1.000
## ce26 1.000 1.000 1.000
## ce1 1.000 1.000 1.000
## ce2 1.000 1.000 1.000
## ce11 1.000 1.000 1.000
## ce22 1.000 1.000 1.000
Now you get both Standard and Robust goodness-of-fit measures. The Estimator used is DWLS, i.e. diagonally weighted least squares. Under Parameter Estimates, Standard errors is listed as Robust.sem, meaning that the standard error estimates presented in the output are robust.
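If you only want the robust (scaled) versions of the fit indices, they can also be pulled out with fitMeasures; the .scaled suffix refers to the indices based on the scaled test statistic, assuming your lavaan version provides them (again, an extra step not shown in the tutorial):
# Scaled (robust) fit indices for the ordered-categorical model
fitMeasures(ebi1_ordered_model_fit,
            c("chisq.scaled", "df.scaled", "pvalue.scaled", "cfi.scaled", "tli.scaled", "rmsea.scaled", "srmr"))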