Strategic Approaches in SEM: An Example Using the Rosenberg Self-Esteem Scale
Jason Beckstead

Setup

Required Packages

knitr::opts_chunk$set(echo = TRUE)
#--loading my packages
if (!require("pacman")) install.packages("pacman")
library(pacman)
# Load installed packages, install and load the package if not installed
pacman::p_load(here, rio, conflicted, tidyverse, janitor, summarytools, skimr, 
               DataExplorer, inspectdf, Hmisc, readr, dplyr, tinytex,
               # loading SEM related packages
               lavaan, semTools, semoutput, semPlot, sjPlot, psych, 
               GPArotation, psychTools, semptools, lm.beta)

Background

The statistician George Box famously said “All models are wrong, but some models are useful”. This statement is particularly applicable to SEM. By definition, models are simplified representations of phenomena. Models need to be simple enough to facilitate our understanding of the phenomenon under study, but not so simple that we miss important, often subtle, aspects of the phenomenon.

When working within the SEM framework we postulate a model to account for the pattern of variances and covariances we observe in a set of relevant variables. Confirming that our model fits the data is the desired result, but we must keep in mind that our model may be one of several that could explain the data. So, while we can rule out some models because they do not fit the data, we cannot claim that our model is the model simply because it is consistent with the observed data.

In summarizing the strategic framework for testing structural equation models, Karl Joreskog distinguished between three scenarios that he called strictly confirmatory (SC), alternative models (AM), and model generating (MG).

In the SC mode the researcher postulates a single model based on theory, collects the data, then tests the fit of the hypothesized model to the sample data. Based on the results of this test, the researcher either rejects or fails to reject the model; no further modifications to the model are made.

In the AM mode, the researcher proposes several alternative (i.e., competing) models, all of which are grounded in theory. Following analysis of a single sample he/she selects one model as most appropriate.

The model generating (MG) mode represents the case where the researcher, having postulated and rejected a model on the basis of poor fit, proceeds in an exploratory fashion to modify and re-estimate the model. This involves inspection of what are called modification indices. MG mode is typically atheoretic and data-driven.

In this lecture we start with the SC case. We propose and test a model; we will either accept or reject it. (Heads up, the hypothesized model does not fit well.) This will provide an opportunity to demonstrate how one may go about the MG approach. Finally, we examine a second hypothesized model (now we are working from the AM perspective) and compare it to the initial hypothesized model.

The Example

The Rosenberg Self-Esteem Scale contains 10 items hypothesized to measure Self-Esteem as a uni-dimensional construct. Here are the items, response scale options, and citation. Note that to control for response set bias, five of the items are negatively worded (and reverse scored) while five are positively worded. The sample size for analyses is 973.

Reading in the Data

The data are in the form of a correlation matrix plus a vector of standard deviations. These are combined to form a covariance matrix for analysis.

corr <- '
1.000
.158 1.000
.463  .220 1.000
.302  .288  .296 1.000
.407  .293  .422  .348 1.000
.234  .285  .245  .319  .370 1.000
.332  .216  .359  .450  .359  .385 1.000
.388  .134  .417  .246  .354  .213  .313 1.000
.325  .245  .335  .368  .386  .457  .509  .243 1.000
.401  .212  .429  .295  .487  .285  .410  .382  .383 1.000
'
# add variable names and convert to full correlation matrix
corrfull <- lavaan::getCov(corr, names=c('v1','v2','v3','v4','v5','v6','v7','v8','v9','v10'))
# add SDs and convert to full covariance matrix
covfull <- lavaan::cor2cov(corrfull, sds=c(0.732,1.106,0.819,0.948,0.929,0.860,0.824,0.738,0.851,0.864))
# MEANS; not used 3.78 2.96 4.09 4.00 3.80 3.37 4.30 3.92 4.07 4.02   # N = 973
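
As a quick check on what cor2cov() is doing, any element of covfull should equal the corresponding correlation multiplied by the two standard deviations. For example, for v1 and v2 (a hand calculation shown only as a sanity check):

# covariance between v1 and v2 reproduced by hand: r * sd(v1) * sd(v2)
.158 * 0.732 * 1.106   # approximately 0.128
covfull[2, 1]          # should agree, apart from rounding of the inputs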

The Correlations

Here are the observed correlations among the 10 items, labeled v1 to v10.

# Uses sjPlot to print a nice looking correlation table
tab_corr(corrfull, na.deletion = "listwise", digits = 2, triangle = "lower",
         title="Correlations among Self-Esteem Items",
         string.diag=c('1','1','1','1','1','1','1','1','1','1'))
Correlations among Self-Esteem Items
     v1    v2    v3    v4    v5    v6    v7    v8    v9    v10
v1   1
v2   0.16  1
v3   0.46  0.22  1
v4   0.30  0.29  0.30  1
v5   0.41  0.29  0.42  0.35  1
v6   0.23  0.28  0.24  0.32  0.37  1
v7   0.33  0.22  0.36  0.45  0.36  0.38  1
v8   0.39  0.13  0.42  0.25  0.35  0.21  0.31  1
v9   0.32  0.24  0.34  0.37  0.39  0.46  0.51  0.24  1
v10  0.40  0.21  0.43  0.29  0.49  0.28  0.41  0.38  0.38  1
Computed correlation used pearson-method with listwise-deletion.

The CFA Model

The hypothesis to be tested is that the items making up the scale measure a uni-dimensional construct, Self-Esteem. A confirmatory factor model where all 10 items load onto a single latent variable and their residual variance terms are uncorrelated represents this hypothesis. We run the model; if it fits well, we have support for our hypothesis; if the model fit is poor, then our hypothesis of a single underlying factor is rejected.

Specify and run the model:

model1 <- ' SE  =~ v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8 + v9 + v10 '
fit1 <- cfa(model1, sample.cov=covfull, sample.nobs=973, std.lv=F )

Path Diagram Model 1

#--making CFA path diagram---------------------mar(bot,lft,top,rgt)
fig1 <-semPlot::semPaths(fit1, whatLabels = "std", layout = "tree2", 
                         rotation = 1, style = "lisrel", optimizeLatRes = T, 
                         structural = F, layoutSplit = F,
                         intercepts = F, residuals = T, 
                         curve = 1, curvature = 3, nCharNodes = 8, 
                         sizeLat = 8, sizeMan = 6, sizeMan2 = 6, 
                         edge.label.cex = 0.9, label.cex=1.1, residScale=10, mar=c(19,2,19,2), 
                         edge.color = "black", edge.label.position = .60, DoNotPlot=T)
fig1.1 <- fig1 |> mark_sig(fit1, alpha = c("(n.s.)" = 1.00, "*" = .05))
plot(fig1.1)  

Note that the path connecting the latent factor with v1 is dashed. This is to indicate that v1 was used as a “reference variable” to define the metric of the latent variable, so there is no test of significance associated with it. The loadings range from .38 to .66. The residual terms, which estimate the proportion of variance in each item that is not associated with the latent factor, range from .58 to .86. v2 has considerable measurement error in comparison to the other items.
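
As an aside, an alternative way to identify the model is to fix the latent variance to 1 (std.lv=TRUE) instead of fixing the loading of the reference variable v1; the overall fit is identical, only the scaling of the estimates changes. A minimal sketch:

# same model, identified by standardizing the latent variable rather than fixing the v1 loading
fit1b <- cfa(model1, sample.cov=covfull, sample.nobs=973, std.lv=TRUE)
fitMeasures(fit1b, c("chisq", "df"))   # same chi-square and df as fit1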

Full output 1

summary(fit1, fit.measures=T, standardized=T, rsquare=T)
## lavaan 0.6.16 ended normally after 32 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        20
## 
##   Number of observations                           973
## 
## Model Test User Model:
##                                                       
##   Test statistic                               283.142
##   Degrees of freedom                                35
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2603.344
##   Degrees of freedom                                45
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.903
##   Tucker-Lewis Index (TLI)                       0.875
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -11187.810
##   Loglikelihood unrestricted model (H1)     -11046.239
##                                                       
##   Akaike (AIC)                               22415.620
##   Bayesian (BIC)                             22513.228
##   Sample-size adjusted Bayesian (SABIC)      22449.708
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.085
##   90 Percent confidence interval - lower         0.076
##   90 Percent confidence interval - upper         0.095
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    0.838
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.052
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   SE =~                                                                 
##     v1                1.000                               0.430    0.587
##     v2                0.971    0.096   10.097    0.000    0.417    0.377
##     v3                1.177    0.079   14.964    0.000    0.506    0.618
##     v4                1.214    0.088   13.748    0.000    0.521    0.550
##     v5                1.436    0.091   15.726    0.000    0.617    0.664
##     v6                1.057    0.079   13.330    0.000    0.454    0.528
##     v7                1.242    0.080   15.459    0.000    0.533    0.648
##     v8                0.897    0.068   13.206    0.000    0.385    0.522
##     v9                1.247    0.082   15.169    0.000    0.536    0.630
##     v10               1.304    0.084   15.477    0.000    0.560    0.649
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .v1                0.351    0.018   19.895    0.000    0.351    0.655
##    .v2                1.048    0.049   21.377    0.000    1.048    0.858
##    .v3                0.414    0.021   19.517    0.000    0.414    0.618
##    .v4                0.626    0.031   20.274    0.000    0.626    0.697
##    .v5                0.482    0.026   18.804    0.000    0.482    0.559
##    .v6                0.533    0.026   20.467    0.000    0.533    0.721
##    .v7                0.394    0.021   19.083    0.000    0.394    0.581
##    .v8                0.396    0.019   20.520    0.000    0.396    0.727
##    .v9                0.436    0.023   19.349    0.000    0.436    0.603
##    .v10               0.432    0.023   19.065    0.000    0.432    0.579
##     SE                0.185    0.020    9.297    0.000    1.000    1.000
## 
## R-Square:
##                    Estimate
##     v1                0.345
##     v2                0.142
##     v3                0.382
##     v4                0.303
##     v5                0.441
##     v6                0.279
##     v7                0.419
##     v8                0.273
##     v9                0.397
##     v10               0.421

The \(\chi^2_{M1}\) = 283.142 with df = 35, p < .001. Based on the significance of the \(\chi^2_{M1}\) test we reject the model; however, the value of \(\chi^2_{M1}\) is influenced by sample size, so we examine other fit indices.
The RMSEA = .085, TLI = .875, CFI = .903, SRMR = .052. There is considerable room for improvement in fit; the RMSEA and SRMR could be smaller, and the TLI (and CFI) could be larger.
So, from a strictly confirmatory perspective (SC) we conclude that the covariance structure of the Self-Esteem Scale items is more complex than a uni-dimensional model with a single latent variable.
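
If only the indices discussed above are of interest, they can be extracted directly rather than scrolling through the full summary; a small sketch using lavaan's fitMeasures():

# pull out just the fit indices discussed in the text
fitMeasures(fit1, c("chisq", "df", "pvalue", "cfi", "tli", "rmsea", "srmr"))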

Reliability 1

The semTools package provides the compRelSEM() function to obtain reliability estimates such as Cronbach’s \(\alpha\) coefficient.

semTools::compRelSEM(fit1, tau.eq=T, obs.var=T, return.total=F) 
##    SE 
## 0.827

In this sample the reliability of the scale is estimated by \(\alpha\) = .827.
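
Coefficient \(\alpha\) can also be computed directly from the item covariance matrix, which makes explicit what is being estimated here; a sketch of the hand calculation:

# Cronbach's alpha from the covariance matrix:
# (k/(k-1)) * (1 - sum of item variances / variance of the total score)
k <- ncol(covfull)
(k / (k - 1)) * (1 - sum(diag(covfull)) / sum(covfull))   # approximately .827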

Residual Correlations 1

The values below are the residual correlations (i.e., observed correlation minus model-estimated correlation). The average absolute value of these residuals is summarized by the standardized root mean square residual (SRMR). Note that the largest residual correlation (.12) is between v6 and v9. While semoutput::sem_residuals() makes a nice table, I don’t believe that we can change the number of digits; it is limited to only two. More precise values (3 digits) are shown in the output from lavResiduals() below.

semoutput::sem_residuals(fit1)
      v1     v2     v3     v4     v5     v6     v7     v8     v9    v10
v1
v2   -0.06
v3    0.10  -0.01
v4   -0.02   0.08  -0.04
v5    0.02   0.04   0.01  -0.02
v6   -0.08   0.09  -0.08   0.03   0.02
v7   -0.05  -0.03  -0.04   0.09  -0.07   0.04
v8    0.08  -0.06   0.09  -0.04   0.01  -0.06  -0.03
v9   -0.04   0.01  -0.05   0.02  -0.03   0.12   0.10  -0.09
v10   0.02  -0.03   0.03  -0.06   0.06  -0.06  -0.01   0.04  -0.03
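
These values can be reproduced by hand: fitted() returns the model-implied covariance matrix, which we convert to correlations and subtract from the observed correlations. A sketch:

# observed correlations minus model-implied correlations
implied_cor <- cov2cor(fitted(fit1)$cov)
round(corrfull - implied_cor, 3)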

Analysis of Residuals 1

The output below displays residual correlations and z-scores indicating the magnitude of the residuals. Typically, z values > |2.58| (which is the two-tailed critical value for \(\alpha = .01\)) are interpreted as significant and indicate where the model is doing a poor job of reproducing the observed correlations.

lavResiduals(fit1, type = "cor.bollen")
## $type
## [1] "cor.bollen"
## 
## $cov
##         v1     v2     v3     v4     v5     v6     v7     v8     v9    v10
## v1   0.000                                                               
## v2  -0.064  0.000                                                        
## v3   0.100 -0.013  0.000                                                 
## v4  -0.021  0.080 -0.044  0.000                                          
## v5   0.017  0.042  0.012 -0.018  0.000                                   
## v6  -0.076  0.086 -0.082  0.028  0.019  0.000                            
## v7  -0.048 -0.028 -0.041  0.094 -0.071  0.043  0.000                     
## v8   0.081 -0.063  0.094 -0.041  0.007 -0.063 -0.025  0.000              
## v9  -0.045  0.007 -0.054  0.021 -0.033  0.124  0.101 -0.086  0.000       
## v10  0.020 -0.033  0.028 -0.062  0.056 -0.058 -0.010  0.043 -0.026  0.000
## 
## $cov.z
##         v1     v2     v3     v4     v5     v6     v7     v8     v9    v10
## v1   0.000                                                               
## v2  -2.879  0.000                                                        
## v3   5.229 -0.616  0.000                                                 
## v4  -1.094  3.369 -2.408  0.000                                          
## v5   1.015  2.079  0.727 -1.024  0.000                                   
## v6  -3.932  3.512 -4.402  1.345  1.056  0.000                            
## v7  -2.921 -1.394 -2.602  4.926 -5.035  2.292  0.000                     
## v8   3.907 -2.655  4.655 -2.002  0.400 -2.990 -1.385  0.000              
## v9  -2.630  0.341 -3.351  1.154 -2.157  6.111  5.758 -4.706  0.000       
## v10  1.168 -1.615  1.696 -3.600  3.524 -3.265 -0.658  2.303 -1.644  0.000
## 
## $summary
##                            cov
## crmr                     0.057
## crmr.se                  0.002
## crmr.exactfit.z         15.799
## crmr.exactfit.pvalue     0.000
## ucrmr                    0.054
## ucrmr.se                 0.005
## ucrmr.ci.lower           0.047
## ucrmr.ci.upper           0.062
## ucrmr.closefit.h0.value  0.050
## ucrmr.closefit.z         0.918
## ucrmr.closefit.pvalue    0.179

Notice that the largest residual correlation (.124) between v6 and v9 has the largest z value associated with it (6.111). In other words, the model is doing a poor job of reproducing this correlation.
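
Rather than scanning the matrix by eye, we can also list the cells whose z-scores exceed the |2.58| cutoff; a small sketch:

# flag residual z-scores larger than |2.58|, keeping each pair only once
z <- lavResiduals(fit1, type = "cor.bollen")$cov.z
z[upper.tri(z, diag = TRUE)] <- 0
which(abs(z) > 2.58, arr.ind = TRUE)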

Ideally the residual correlations should be normally distributed with a mean of 0.0. One way to examine the distribution of these residuals is with a stem-and-leaf plot which is a type of sideways histogram. The first 2 digits of each residual correlation are to the left of the | symbol and are referred to as the stem. The third digit of each residual correlation is to the right of the | and is referred to as a leaf. The plot shows the number (and value) of the leaves for each stem. Note that there are no decimal points in the plot; the implied decimal point is two digits to the left of the | symbol.

stem( lavResiduals(fit1, output = "table")$res ) # gives stem/leaf plot of resid cors
## 
##   The decimal point is 2 digit(s) to the left of the |
## 
##    -8 | 62
##    -6 | 614332
##    -4 | 8485411
##    -2 | 338651
##    -0 | 830
##     0 | 77279
##     2 | 0188
##     4 | 2336
##     6 | 
##     8 | 01644
##    10 | 01
##    12 | 4

For example, in the first row of the plot there are two residuals for which the first two digits are -.08; the third digits, 6 and 2, are printed to the right of the | symbol. These two residual correlations are thus -.086 and -.082. The largest positive residual correlation, .124, appears in the last row as “12 | 4”. This distribution is far from normal.

Looking at the entire plot shows how many residuals begin with the same stem value. So, there are two that start with -.08, six that start with -.06, seven that start with -.04, five that start with .00, etc.
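
For readers unfamiliar with stem-and-leaf displays, an ordinary histogram of the same residuals gives the same impression; a sketch:

# histogram of the Model 1 residual correlations
res1 <- lavResiduals(fit1, output = "table")$res
hist(res1, breaks = 12, main = "Residual correlations, Model 1", xlab = "residual correlation")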

Modification Indices 1

The values below indicate the drop in \(\chi^2_M\) (i.e., model improvement) anticipated if parameters currently fixed at 0.0 are freed to be estimated; all freely estimated parameters automatically have an MI of zero. Note that lhs op rhs refers to the variable on the left hand side and on the right hand side of the operation symbol ~~ which indicates a correlation between the residual variances of two items. The critical value of \(\chi^2\) with df = 1 and \(\alpha\) = .05 is 3.841, so any MI greater than this value is worth examination.

modificationIndices(fit1, sort. = TRUE, minimum.value = 3)
##    lhs op rhs     mi    epc sepc.lv sepc.all sepc.nox
## 59  v6 ~~  v9 43.804  0.115   0.115    0.239    0.239
## 62  v7 ~~  v9 40.031  0.100   0.100    0.241    0.241
## 23  v1 ~~  v3 31.612  0.079   0.079    0.206    0.206
## 48  v4 ~~  v7 27.683  0.096   0.096    0.193    0.193
## 43  v3 ~~  v8 24.152  0.071   0.071    0.176    0.176
## 53  v5 ~~  v7 22.438 -0.080  -0.080   -0.185   -0.185
## 64  v8 ~~  v9 20.753 -0.068  -0.068   -0.164   -0.164
## 41  v3 ~~  v6 18.246 -0.072  -0.072   -0.154   -0.154
## 28  v1 ~~  v8 16.553  0.054   0.054    0.144    0.144
## 26  v1 ~~  v6 14.713 -0.059  -0.059   -0.136   -0.136
## 56  v5 ~~ v10 13.975  0.066   0.066    0.146    0.146
## 34  v2 ~~  v6 12.909  0.091   0.091    0.122    0.122
## 51  v4 ~~ v10 12.165 -0.067  -0.067   -0.128   -0.128
## 32  v2 ~~  v4 11.887  0.095   0.095    0.118    0.118
## 44  v3 ~~  v9 10.491 -0.052  -0.052   -0.121   -0.121
## 60  v6 ~~ v10 10.097 -0.056  -0.056   -0.116   -0.116
## 58  v6 ~~  v8  8.670 -0.047  -0.047   -0.103   -0.103
## 22  v1 ~~  v2  8.098 -0.059  -0.059   -0.098   -0.098
## 27  v1 ~~  v7  8.044 -0.039  -0.039   -0.106   -0.106
## 36  v2 ~~  v8  6.931 -0.057  -0.057   -0.089   -0.089
## 29  v1 ~~  v9  6.589 -0.037  -0.037   -0.095   -0.095
## 42  v3 ~~  v7  6.385 -0.039  -0.039   -0.096   -0.096
## 65  v8 ~~ v10  5.584  0.036   0.036    0.086    0.086
## 39  v3 ~~  v4  5.569 -0.043  -0.043   -0.085   -0.085
## 57  v6 ~~  v7  5.505  0.039   0.039    0.086    0.086
## 33  v2 ~~  v5  4.467  0.053   0.053    0.075    0.075
## 55  v5 ~~  v9  4.407 -0.037  -0.037   -0.081   -0.081
## 49  v4 ~~  v8  3.909 -0.034  -0.034   -0.069   -0.069
## 45  v3 ~~ v10  3.009  0.028   0.028    0.066    0.066

Here the largest MI is listed first and it suggests that if we re-run the model and allow the residual terms between v6 and v9 to correlate, the \(\chi^2_M\) is expected to decrease by about 43.80. The other columns show the expected value (epc) of the parameter if it were to be estimated. Note that this finding confirms what we saw when inspecting the residual correlations; the residual correlation between v6 and v9 was the largest.
The column heading sepc.lv gives the value if the latent variable(s) are standardized, and sepc.all gives the completely standardized values. So the anticipated covariance and correlation between these two residual terms, if this parameter were to be estimated, are .115 and .239, respectively.
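
Because modificationIndices() returns an ordinary data frame, it can also be filtered and sorted programmatically when only the larger values are of interest; a sketch using dplyr (already loaded above):

# keep only MIs above the df = 1 critical value and sort from largest to smallest
modificationIndices(fit1) |>
  dplyr::filter(mi > 3.841) |>
  dplyr::arrange(dplyr::desc(mi)) |>
  head()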

A Revised Model

Here we are operating in the MG mode; our re-specification is atheoretic and simply data-driven. Nonetheless, we illustrate how one might specify and fit such a model.

Re-Specify and Re-Run the model:
I added one additional parameter, v6 ~~ v9, the correlation between the residual variances of these two items, because the MI from Model 1 indicated that doing so would have the greatest impact on improving model fit. There is no theoretical rationale for adding this type of parameter because in traditional CFA models the residual terms are assumed to be independent (i.e., uncorrelated).

model2 <- ' SE  =~ v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8 + v9 + v10
            v6 ~~ v9 '
fit2 <- cfa(model2, sample.cov=covfull, sample.nobs=973, std.lv=F )

Path Diagram Model 2

#--making CFA path diagram---------------------mar(bot,lft,top,rgt)
fig2 <-semPlot::semPaths(fit2, whatLabels = "std", layout = "tree2", 
                         rotation = 1, style = "lisrel", optimizeLatRes = T, 
                         structural = F, layoutSplit = F,
                         intercepts = F, residuals = T, 
                         curve = 1, curvature = 3, nCharNodes = 8, 
                         sizeLat = 8, sizeMan = 6, sizeMan2 = 6, 
                         edge.label.cex = 0.9, label.cex=1.1, residScale=10, mar=c(19,2,19,2), 
                         edge.color = "black", edge.label.position = .60, DoNotPlot=T)
fig2.1 <- fig2 |> mark_sig(fit2, alpha = c("(n.s.)" = 1.00, "*" = .05))
plot(fig2.1)  

Note that the correlation between the two residuals is .23, as was suggested by the MI from Model 1.

Full output 2

summary(fit2, fit.measures=T, standardized=T, rsquare=T)
## lavaan 0.6.16 ended normally after 32 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        21
## 
##   Number of observations                           973
## 
## Model Test User Model:
##                                                       
##   Test statistic                               239.464
##   Degrees of freedom                                34
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2603.344
##   Degrees of freedom                                45
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.920
##   Tucker-Lewis Index (TLI)                       0.894
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -11165.971
##   Loglikelihood unrestricted model (H1)     -11046.239
##                                                       
##   Akaike (AIC)                               22373.942
##   Bayesian (BIC)                             22476.430
##   Sample-size adjusted Bayesian (SABIC)      22409.734
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.079
##   90 Percent confidence interval - lower         0.070
##   90 Percent confidence interval - upper         0.088
##   P-value H_0: RMSEA <= 0.050                    0.000
##   P-value H_0: RMSEA >= 0.080                    0.430
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.049
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   SE =~                                                                 
##     v1                1.000                               0.438    0.598
##     v2                0.940    0.094   10.000    0.000    0.411    0.372
##     v3                1.179    0.077   15.319    0.000    0.516    0.630
##     v4                1.182    0.086   13.772    0.000    0.517    0.546
##     v5                1.418    0.089   15.956    0.000    0.620    0.668
##     v6                0.968    0.077   12.617    0.000    0.424    0.493
##     v7                1.202    0.078   15.464    0.000    0.526    0.639
##     v8                0.899    0.066   13.523    0.000    0.394    0.534
##     v9                1.171    0.079   14.796    0.000    0.512    0.602
##     v10               1.297    0.082   15.779    0.000    0.568    0.657
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##  .v6 ~~                                                                 
##    .v9                0.117    0.019    6.260    0.000    0.117    0.230
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .v1                0.344    0.017   19.669    0.000    0.344    0.642
##    .v2                1.053    0.049   21.371    0.000    1.053    0.861
##    .v3                0.404    0.021   19.230    0.000    0.404    0.603
##    .v4                0.630    0.031   20.238    0.000    0.630    0.702
##    .v5                0.477    0.026   18.591    0.000    0.477    0.554
##    .v6                0.559    0.027   20.567    0.000    0.559    0.757
##    .v7                0.402    0.021   19.101    0.000    0.402    0.592
##    .v8                0.389    0.019   20.355    0.000    0.389    0.715
##    .v9                0.461    0.024   19.534    0.000    0.461    0.637
##    .v10               0.424    0.023   18.788    0.000    0.424    0.568
##     SE                0.192    0.020    9.475    0.000    1.000    1.000
## 
## R-Square:
##                    Estimate
##     v1                0.358
##     v2                0.139
##     v3                0.397
##     v4                0.298
##     v5                0.446
##     v6                0.243
##     v7                0.408
##     v8                0.285
##     v9                0.363
##     v10               0.432

The \(\chi^2_{M2}\) = 239.464 with df = 34, p < .001.
The RMSEA = .079, TLI = .894, CFI = .920, SRMR = .049. So, the fit is somewhat improved relative to Model 1. A formal test comparing the two models is shown below.
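
A compact side-by-side view of the two models' fit indices can be obtained in one call; a sketch:

# fit indices for Models 1 and 2, side by side
sapply(list(Model1 = fit1, Model2 = fit2), fitMeasures,
       fit.measures = c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))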

Residual Correlations 2

The values below are the residual correlations (i.e., observed correlation minus model-estimated correlation). The average absolute value of these residuals is summarized by the standardized root mean square residual (SRMR). Note that the residual correlation between v6 and v9 is now 0.00.

sem_residuals(fit2)
      v1     v2     v3     v4     v5     v6     v7     v8     v9    v10
v1
v2   -0.06
v3    0.09  -0.01
v4   -0.02   0.08  -0.05
v5    0.01   0.04   0.00  -0.02
v6   -0.06   0.10  -0.07   0.05   0.04
v7   -0.05  -0.02  -0.04   0.10  -0.07   0.07
v8    0.07  -0.06   0.08  -0.05   0.00  -0.05  -0.03
v9   -0.04   0.02  -0.04   0.04  -0.02   0.00   0.12  -0.08
v10   0.01  -0.03   0.01  -0.06   0.05  -0.04  -0.01   0.03  -0.01

Analysis of Residuals 2

The output below displays residual correlations and z-scores indicating the magnitude of the residuals. Typically, z values > |2.58| are interpreted as significant and indicate where the model is doing a poor job of reproducing the observed correlations.

lavResiduals(fit2, type = "cor.bollen")
## $type
## [1] "cor.bollen"
## 
## $cov
##         v1     v2     v3     v4     v5     v6     v7     v8     v9    v10
## v1   0.000                                                               
## v2  -0.065  0.000                                                        
## v3   0.086 -0.015  0.000                                                 
## v4  -0.025  0.085 -0.048  0.000                                          
## v5   0.007  0.044  0.001 -0.017  0.000                                   
## v6  -0.061  0.102 -0.066  0.050  0.041  0.000                            
## v7  -0.050 -0.022 -0.043  0.101 -0.068  0.070  0.000                     
## v8   0.069 -0.065  0.081 -0.045 -0.002 -0.050 -0.028  0.000              
## v9  -0.035  0.021 -0.045  0.039 -0.017  0.000  0.124 -0.078  0.000       
## v10  0.008 -0.033  0.015 -0.064  0.048 -0.039 -0.010  0.031 -0.013  0.000
## 
## $cov.z
##         v1     v2     v3     v4     v5     v6     v7     v8     v9    v10
## v1   0.000                                                               
## v2  -2.972  0.000                                                        
## v3   4.700 -0.693  0.000                                                 
## v4  -1.304  3.532 -2.703  0.000                                          
## v5   0.457  2.192  0.063 -0.999  0.000                                   
## v6  -3.108  4.027 -3.539  2.279  2.225  0.000                            
## v7  -3.062 -1.050 -2.796  5.233 -4.805  3.492  0.000                     
## v8   3.419 -2.752  4.145 -2.231 -0.144 -2.361 -1.537  0.000              
## v9  -2.041  0.937 -2.733  2.005 -1.053  0.000  6.634 -4.214  0.000       
## v10  0.477 -1.633  0.941 -3.800  3.126 -2.174 -0.642  1.731 -0.818  0.000
## 
## $summary
##                            cov
## crmr                     0.054
## crmr.se                  0.002
## crmr.exactfit.z         14.435
## crmr.exactfit.pvalue     0.000
## ucrmr                    0.051
## ucrmr.se                 0.005
## ucrmr.ci.lower           0.043
## ucrmr.ci.upper           0.058
## ucrmr.closefit.h0.value  0.050
## ucrmr.closefit.z         0.115
## ucrmr.closefit.pvalue    0.454

Modification Indices 2

Note that the values may have changed from those derived from the first model. The values below indicate the drop in \(\chi^2_M\) (i.e., model improvement) anticipated if parameters currently fixed at 0.0 are freed to be estimated. Note that lhs op rhs refers to the variable on the left hand side and on the right hand side of the operation symbol ~~ which indicates a correlation between the residual variances of two items. The critical value of \(\chi^2\) with df = 1 and \(\alpha = .05\) is 3.841, so any MI greater than this value is worth examination.

modificationIndices(fit2, sort. = TRUE, minimum.value = 3)
##    lhs op rhs     mi    epc sepc.lv sepc.all sepc.nox
## 62  v7 ~~  v9 43.209  0.102   0.102    0.236    0.236
## 49  v4 ~~  v7 31.570  0.104   0.104    0.207    0.207
## 24  v1 ~~  v3 25.358  0.070   0.070    0.188    0.188
## 54  v5 ~~  v7 20.424 -0.078  -0.078   -0.178   -0.178
## 44  v3 ~~  v8 19.030  0.063   0.063    0.159    0.159
## 35  v2 ~~  v6 15.698  0.098   0.098    0.128    0.128
## 52  v4 ~~ v10 13.446 -0.070  -0.070   -0.136   -0.136
## 33  v2 ~~  v4 13.106  0.101   0.101    0.124    0.124
## 64  v8 ~~  v9 12.614 -0.052  -0.052   -0.122   -0.122
## 29  v1 ~~  v8 12.604  0.047   0.047    0.128    0.128
## 57  v5 ~~ v10 10.927  0.059   0.059    0.132    0.132
## 28  v1 ~~  v7  8.786 -0.041  -0.041   -0.111   -0.111
## 23  v1 ~~  v2  8.611 -0.061  -0.061   -0.102   -0.102
## 42  v3 ~~  v6  8.146 -0.047  -0.047   -0.099   -0.099
## 37  v2 ~~  v8  7.433 -0.059  -0.059   -0.093   -0.093
## 43  v3 ~~  v7  7.313 -0.042  -0.042   -0.103   -0.103
## 40  v3 ~~  v4  6.954 -0.049  -0.049   -0.097   -0.097
## 27  v1 ~~  v6  6.782 -0.039  -0.039   -0.089   -0.089
## 53  v5 ~~  v6  6.304  0.046   0.046    0.089    0.089
## 34  v2 ~~  v5  4.987  0.057   0.057    0.080    0.080
## 50  v4 ~~  v8  4.833 -0.038  -0.038   -0.078   -0.078
## 58  v6 ~~  v7  4.317  0.034   0.034    0.072    0.072
## 60  v6 ~~ v10  3.840 -0.033  -0.033   -0.069   -0.069
## 48  v4 ~~  v6  3.563  0.037   0.037    0.063    0.063
## 45  v3 ~~  v9  3.333 -0.028  -0.028   -0.065   -0.065
## 65  v8 ~~ v10  3.123  0.027   0.027    0.065    0.065

Comparing Models 1 v 2

The difference in the fit of the two models is itself distributed as a \(\chi^2_\Delta\) with df equal to the difference in the number of df for the two models.
Here \(\chi^2_\Delta\) = \(\chi^2_{M1}\) - \(\chi^2_{M2}\).

# comparing nested models via chisq likelihood ratio test
lavTestLRT(fit2,fit1, type = "chisq")
## 
## Chi-Squared Difference Test
## 
##      Df   AIC   BIC  Chisq Chisq diff   RMSEA Df diff Pr(>Chisq)    
## fit2 34 22374 22476 239.46                                          
## fit1 35 22416 22513 283.14     43.678 0.20943       1  3.871e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Note that the model comparison test is significant; \(\chi^2_\Delta\) = 43.678, df = 1, p < .05. (Recall that the anticipated drop in \(\chi^2_{M}\) based on the MI from Model 1 was 43.804.) Allowing the residual terms of v6 and v9 to correlate significantly improved the fit of the model. We could continue in the MG mode by freeing the parameter with the next largest MI and re-running the model to improve model fit, but this strategy is simply data driven and we are left with a model that we cannot defend on theoretical grounds.
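
The difference test can also be reproduced by hand from the two model chi-square statistics; a sketch:

# chi-square difference test computed by hand
chisq_diff <- 283.142 - 239.464              # chi-square for Model 1 minus Model 2
df_diff    <- 35 - 34
pchisq(chisq_diff, df = df_diff, lower.tail = FALSE)   # about 3.9e-11, matching lavTestLRT()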

An Alternative CFA Model

Inspection of the MI and residuals (from Models 1 and 2) led me to formulate a second hypothesis regarding the factor structure of the Self-Esteem Scale. I noticed that pairs of items that are negatively worded had large MI values. This pattern suggested a type of two-factor model referred to in the psychometrics literature as a common method variance model.

My alternative model (Model 3) posits two correlated latent factors. The items that are negatively worded (and reverse scored) load on one factor (SEn) and the items that are positively worded load on the second factor (SEp). All 10 items measure Self-Esteem, but the way that the items are worded contributes systematically to their inter-item covariances/correlations.
Note that Model 3 has only one more parameter than Model 1, and the same number of parameters as Model 2, but unlike Model 2, Model 3 is theoretically based.
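
As a check on the parameter counts: with 10 observed variables there are \(10(10+1)/2 = 55\) unique variances and covariances available. Model 1 estimates 20 parameters (9 free loadings, 10 residual variances, 1 factor variance), so df = 55 - 20 = 35. Model 2 adds the residual covariance (21 parameters, df = 34), and Model 3 estimates 8 free loadings, 10 residual variances, 2 factor variances, and 1 factor covariance, also 21 parameters and df = 34.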

Specify and run the model:

model3 <- ' 
SEp  =~ v1 + v3 + v5 + v8 + v10 
SEn  =~ v2 + v4 + v6 + v7 + v9 
'
fit3 <- cfa(model3, sample.cov=covfull, sample.nobs=973, std.lv=F )

Path Diagram Model 3

#--making CFA path diagram---------------------mar(bot,lft,top,rgt)
fig3 <-semPlot::semPaths(fit3, whatLabels = "std", layout = "tree2", 
                         rotation = 1, style = "lisrel", optimizeLatRes = T, 
                         structural = F, layoutSplit = F,
                         intercepts = F, residuals = T, 
                         curve = 1, curvature = 3, nCharNodes = 8, 
                         sizeLat = 8, sizeMan = 6, sizeMan2 = 6, 
                         edge.label.cex = 0.9, label.cex=1.1, residScale=10, mar=c(19,2,19,2), 
                         edge.color = "black", edge.label.position = .60, DoNotPlot=T)
fig3.1 <- fig3 |> mark_sig(fit3, alpha = c("(n.s.)" = 1.00, "*" = .05))
plot(fig3.1)  

Note that the paths connecting v1 to SEp and v2 to SEn are dashed. This is to indicate that each was used as a “reference variable” to define the metric of the latent variables, so there are no tests of significance associated with them. For the SEp factor the loadings range from .56 to .68, and for the SEn factor, loadings range from .39 to .71.

The correlation between the two latent factors is .77, suggesting that the two factors are not very distinct from one another.
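
The latent correlation shown in the diagram can be extracted directly from the fitted object; a sketch:

# model-implied correlation between the latent factors
lavInspect(fit3, "cor.lv")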

The residual terms, which estimate the proportion of variance in each item that is not associated with its respective latent factor, range from .50 to .85. v2 still has considerable measurement error.

Full output 3

summary(fit3, fit.measures=T, standardized=T, rsquare=T)
## lavaan 0.6.16 ended normally after 33 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        21
## 
##   Number of observations                           973
## 
## Model Test User Model:
##                                                       
##   Test statistic                               119.926
##   Degrees of freedom                                34
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                              2603.344
##   Degrees of freedom                                45
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.966
##   Tucker-Lewis Index (TLI)                       0.956
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)             -11106.202
##   Loglikelihood unrestricted model (H1)     -11046.239
##                                                       
##   Akaike (AIC)                               22254.404
##   Bayesian (BIC)                             22356.892
##   Sample-size adjusted Bayesian (SABIC)      22290.196
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.051
##   90 Percent confidence interval - lower         0.041
##   90 Percent confidence interval - upper         0.061
##   P-value H_0: RMSEA <= 0.050                    0.418
##   P-value H_0: RMSEA >= 0.080                    0.000
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.032
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Expected
##   Information saturated (h1) model          Structured
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   SEp =~                                                                
##     v1                1.000                               0.460    0.629
##     v3                1.178    0.073   16.115    0.000    0.542    0.662
##     v5                1.374    0.084   16.426    0.000    0.632    0.681
##     v8                0.906    0.063   14.294    0.000    0.417    0.565
##     v10               1.274    0.078   16.396    0.000    0.586    0.679
##   SEn =~                                                                
##     v2                1.000                               0.433    0.392
##     v4                1.292    0.128   10.118    0.000    0.559    0.590
##     v6                1.160    0.115   10.080    0.000    0.502    0.584
##     v7                1.350    0.126   10.723    0.000    0.585    0.710
##     v9                1.371    0.128   10.676    0.000    0.594    0.698
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   SEp ~~                                                                
##     SEn               0.153    0.017    8.944    0.000    0.767    0.767
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .v1                0.324    0.017   18.748    0.000    0.324    0.605
##    .v3                0.376    0.021   18.096    0.000    0.376    0.562
##    .v5                0.463    0.026   17.674    0.000    0.463    0.537
##    .v8                0.370    0.019   19.695    0.000    0.370    0.681
##    .v10               0.402    0.023   17.718    0.000    0.402    0.539
##    .v2                1.034    0.049   21.065    0.000    1.034    0.847
##    .v4                0.585    0.031   19.092    0.000    0.585    0.651
##    .v6                0.486    0.025   19.183    0.000    0.486    0.658
##    .v7                0.336    0.021   16.358    0.000    0.336    0.496
##    .v9                0.371    0.022   16.729    0.000    0.371    0.513
##     SEp               0.212    0.021    9.906    0.000    1.000    1.000
##     SEn               0.188    0.033    5.701    0.000    1.000    1.000
## 
## R-Square:
##                    Estimate
##     v1                0.395
##     v3                0.438
##     v5                0.463
##     v8                0.319
##     v10               0.461
##     v2                0.153
##     v4                0.349
##     v6                0.342
##     v7                0.504
##     v9                0.487

The \(\chi^2_{M3}\) = 119.926 with df = 34, p < .001.
The RMSEA = .051, TLI = .956, CFI = .966, SRMR = .032. So, all indices suggest a good fit.
A formal test comparing this alternative model with the original model is shown below.

Residual Correlations 3

The values below are the residual correlations (i.e., observed correlation minus model-estimated correlation). The average absolute value of these residuals is summarized by the standardized root mean square residual (SRMR).

sem_residuals(fit3)
      v1     v3     v5     v8    v10     v2     v4     v6     v7     v9
v1
v3    0.05
v5   -0.02  -0.03
v8    0.03   0.04  -0.03
v10  -0.03  -0.02   0.03   0.00
v2   -0.03   0.02   0.09  -0.04   0.01
v4    0.02   0.00   0.04  -0.01  -0.01   0.06
v6   -0.05  -0.05   0.07  -0.04  -0.02   0.06  -0.03
v7   -0.01   0.00  -0.01   0.01   0.04  -0.06   0.03  -0.03
v9   -0.01  -0.02   0.02  -0.06   0.02  -0.03  -0.04   0.05   0.01

Analysis of Residuals 3

The output below displays residual correlations and z-scores indicating the magnitude of the residuals. Typically, z values > |2.58| are interpreted as significant and indicate where the model is doing a poor job of reproducing the observed correlations.

lavResiduals(fit3, type = "cor.bollen")
## $type
## [1] "cor.bollen"
## 
## $cov
##         v1     v3     v5     v8    v10     v2     v4     v6     v7     v9
## v1   0.000                                                               
## v3   0.047  0.000                                                        
## v5  -0.021 -0.029  0.000                                                 
## v8   0.033  0.043 -0.030  0.000                                          
## v10 -0.026 -0.020  0.025 -0.001  0.000                                   
## v2  -0.031  0.021  0.089 -0.036  0.008  0.000                            
## v4   0.017 -0.004  0.040 -0.010 -0.012  0.057  0.000                     
## v6  -0.048 -0.052  0.065 -0.040 -0.019  0.056 -0.026  0.000              
## v7  -0.010 -0.001 -0.011  0.006  0.041 -0.062  0.031 -0.030  0.000       
## v9  -0.011 -0.019  0.022 -0.059  0.020 -0.028 -0.044  0.049  0.013  0.000
## 
## $cov.z
##         v1     v3     v5     v8    v10     v2     v4     v6     v7     v9
## v1   0.000                                                               
## v3   3.032  0.000                                                        
## v5  -1.527 -2.284  0.000                                                 
## v8   1.849  2.545 -2.007  0.000                                          
## v10 -1.887 -1.604  1.928 -0.093  0.000                                   
## v2  -1.234  0.864  3.746 -1.360  0.345  0.000                            
## v4   0.792 -0.173  1.944 -0.419 -0.594  2.590  0.000                     
## v6  -2.167 -2.433  3.103 -1.714 -0.923  2.541 -1.519  0.000              
## v7  -0.534 -0.074 -0.643  0.268  2.233 -3.779  2.215 -2.307  0.000       
## v9  -0.588 -1.037  1.194 -2.861  1.075 -1.650 -3.381  3.290  1.287  0.000
## 
## $summary
##                            cov
## crmr                     0.036
## crmr.se                  0.003
## crmr.exactfit.z          6.796
## crmr.exactfit.pvalue     0.000
## ucrmr                    0.031
## ucrmr.se                 0.004
## ucrmr.ci.lower           0.024
## ucrmr.ci.upper           0.037
## ucrmr.closefit.h0.value  0.050
## ucrmr.closefit.z        -4.728
## ucrmr.closefit.pvalue    1.000

The z-scores associated with the residual correlations are less extreme than those from Model 1 above.

Ideally the residual correlations should be normally distributed with a mean of 0.0. One way to examine the distribution of these residuals is with a stem-and-leaf plot which is a type of sideways histogram. The first 2 digits of each residual correlation are to the left of the | symbol and are referred to as the stem. The third digit of each residual correlation is to the right of the | and is referred to as a leaf. The plot shows the number (and value) of the leaves for each stem. Note that there are no decimal points in the plot; the implied decimal point is two digits to the left of the | symbol.

stem( lavResiduals(fit3, output = "table")$res ) # gives stem/leaf plot of resid cors
## 
##   The decimal point is 2 digit(s) to the left of the |
## 
##   -6 | 2
##   -4 | 92840
##   -2 | 6100986610
##   -0 | 9921100411
##    0 | 6837
##    2 | 012513
##    4 | 0137967
##    6 | 5
##    8 | 9

Based on the stem-and-leaf plot, the distribution of the residual correlations looks a bit more normal and the values are less extreme than those from Model 1 above. Here the largest share of the residuals falls between about -.04 and 0 (there are more leaves on the two stems -2 and -0 than on any of the other stems).

Modification Indices 3

The values below indicate the drop in \(\chi^2_M\) (i.e., model improvement) anticipated if parameters currently fixed at 0.0 are freed to be estimated. Note that lhs op rhs refers to the variable on the left hand side and on the right hand side of the operation symbol ~~ which indicates a correlation between the residual variances of two items. The critical value of \(\chi^2\) with df = 1 and \(\alpha = .05\) is 3.841, so any MI greater than this value is worth examination.

modificationIndices(fit3, sort. = TRUE, minimum.value = 3)
##    lhs op rhs     mi    epc sepc.lv sepc.all sepc.nox
## 71  v2 ~~  v7 13.233 -0.084  -0.084   -0.142   -0.142
## 55  v5 ~~  v6 13.205  0.065   0.065    0.136    0.136
## 53  v5 ~~  v2 12.593  0.088   0.088    0.127    0.127
## 77  v6 ~~  v9 12.380  0.064   0.064    0.151    0.151
## 75  v4 ~~  v9 10.234 -0.064  -0.064   -0.138   -0.138
## 34  v1 ~~  v3 10.214  0.046   0.046    0.131    0.131
## 31 SEn =~  v5  9.987  0.448   0.194    0.209    0.209
## 69  v2 ~~  v4  7.033  0.074   0.074    0.095    0.095
## 44  v3 ~~  v8  6.987  0.039   0.039    0.103    0.103
## 70  v2 ~~  v6  6.753  0.066   0.066    0.092    0.092
## 56  v5 ~~  v7  6.424 -0.040  -0.040   -0.102   -0.102
## 63  v8 ~~  v9  6.139 -0.035  -0.035   -0.094   -0.094
## 74  v4 ~~  v7  5.394  0.045   0.045    0.102    0.102
## 32 SEn =~  v8  5.204 -0.261  -0.113   -0.153   -0.153
## 67 v10 ~~  v7  5.043  0.033   0.033    0.090    0.090
## 76  v6 ~~  v7  4.938 -0.039  -0.039   -0.097   -0.097
## 43  v3 ~~  v5  4.809 -0.040  -0.040   -0.095   -0.095
## 48  v3 ~~  v6  4.383 -0.033  -0.033   -0.078   -0.078
## 52  v5 ~~ v10  4.039  0.038   0.038    0.089    0.089
## 51  v5 ~~  v8  3.814 -0.032  -0.032   -0.078   -0.078
## 36  v1 ~~  v8  3.592  0.025   0.025    0.072    0.072
## 37  v1 ~~ v10  3.342 -0.028  -0.028   -0.076   -0.076
## 40  v1 ~~  v6  3.135 -0.026  -0.026   -0.065   -0.065

Comparing Models 1 v 3

The difference in the fit of the two models is itself distributed as a \(\chi^2_\Delta\) with df equal to the difference in the number of df for the two models.
Here \(\chi^2_\Delta\) = \(\chi^2_{M1}\) - \(\chi^2_{M3}\).

# comparing nested models via chisq likelihood ratio test
lavTestLRT(fit3,fit1, type = "chisq")
## 
## Chi-Squared Difference Test
## 
##      Df   AIC   BIC  Chisq Chisq diff   RMSEA Df diff Pr(>Chisq)    
## fit3 34 22254 22357 119.93                                          
## fit1 35 22416 22513 283.14     163.22 0.40831       1  < 2.2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

So the \(\chi^2_\Delta\) test indicates that the two-factor common method variance model fits the observed covariances (correlations) among the items better than the initial one-factor model. All 10 items measure Self-Esteem, but the way in which the items are worded contributes systematically to the inter-item correlations.

Reliability 3

semTools::compRelSEM(fit3, tau.eq=T, obs.var=T, return.total=T) 
##   SEp   SEn total 
## 0.778 0.717 0.827

The reliability analysis indicates that if two separate subscales (SEp and SEn) were to be used they would not be as reliable as the total scale score (.778, .717, and .827, respectively). Also, note that the correlation between the two factors is strong (.77), so using the two subscales as predictors in a regression model would likely result in collinearity problems. In practice, researchers are better off using the 10 items to form a single scale score.

Concluding Remarks

In this lecture we worked through the three approaches to SEM: strictly confirmatory, model generating, and testing alternative models. The initial uni-dimensional model of the Rosenberg Self-Esteem Scale (Model 1) was not supported. Guided by the modification indices, we illustrated how to add a correlation between two residual terms (Model 2). Model 2 improved fit, but technically it is not a true CFA measurement model because such models posit that the residual variances of the indicator variables (items) are all uncorrelated. We could have continued to add correlations among residual terms to improve the model fit, but such a model would be difficult to defend on theoretical grounds.
Model 3 was an alternative model, based in theory (in this case the common method variance model), and it proved to be the best-fitting model examined here. This result illustrates that the wording of items (positive or negative) that make up self-report scales can influence how people respond to them, which in turn influences the correlations among the scale items.

Model 2 and Model 3 have the same number of parameters (21) and the same number of df (34), so they cannot be compared using the \(\chi^2_\Delta\) test for nested models. The Akaike information criterion (AIC) can be used to compare the relative fit of two non-nested models; smaller values indicate better fit. (See the full outputs above for the AIC values.) For Model 2, the AIC = 22373.942, and for Model 3 the AIC = 22254.404; so Model 3 fits better than Model 2 even though they have the same number of parameters. As an aside, Model 1 had AIC = 22415.620. Based on the AIC values from all three models, Model 3 is the preferred model.
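
The AIC values quoted above can be collected in a single call; a sketch (the AIC() generic has a method for lavaan fits estimated with ML, as here):

# AIC for the three models; smaller values indicate better relative fit
sapply(list(Model1 = fit1, Model2 = fit2, Model3 = fit3), AIC)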

SEM is a powerful tool allowing researchers to formulate and test hypotheses about the underlying structure of covariance matrices. When used wisely, structural equation models can provide useful insights into the complex relations among a set of variables relevant to some phenomenon of interest.

---
title: "Strategic Approaches in SEM: An Example Using the Rosenburg Self-Esteem Scale."
author: "Jason Beckstead"
date: "`r Sys.Date()`"
output: 
  html_document:
    toc: true
    toc_float:
      collapsed: false
    toc_depth: 3
    code_folding: hide
    code_download: true
editor_options: 
  chunk_output_type: console
---
## Setup
Required Packages
```{r setup, message=FALSE}
knitr::opts_chunk$set(echo = TRUE)
#--loading my packages
if (!require("pacman")) install.packages("pacman")
library(pacman)
# Load installed packages, install and load the package if not installed
pacman::p_load(here, rio, conflicted, tidyverse, janitor, summarytools, skimr, 
               DataExplorer, inspectdf, Hmisc, readr, dplyr, tinytex,
               # loading SEM related packages
               lavaan, semTools, semoutput, semPlot, sjPlot, psych, 
               GPArotation, psychTools, semptools, lm.beta)
```

```{css, echo=FALSE}
.scroll-500 {
  max-height: 500px;
  overflow-y: auto;
  background-color: inherit;
}
```
## Background 
The statistician George Box famously said "*All models are wrong, but some models are useful*". This statement is particularly applicable to SEM. By definition, models are simplified representations of phenomena. Models need to be simple enough to facilitate our understanding of the phenomenon under study, but not so simple that we miss important, often subtle, aspects of the phenomenon.  

When working within the SEM framework we postulate a model to account for the pattern of variances and covariances we observe in a set of relevant variables. Confirming that our model fits the data is the desired result, but we must keep in mind that our model may be one of several that could explain the data. So, while we can rule out some models because they do not fit the data, we cannot claim that our model is *the* model simply because it is consistent with the observed data.

In summarizing the **strategic framework** for testing structural equation models, Karl Joreskog distinguished between three scenarios that he called **strictly confirmatory** (SC), **alternative models** (AM), and **model generating** (MG).  
  
In the SC mode the researcher postulates a single model based on theory, collects the data, then tests the fit of the hypothesized model to the sample data. Based on the results of this test, the researcher either rejects or fails to reject the model; no further modifications to the model are made.  
  
In the AM mode, the researcher proposes several alternative (i.e., competing) models, all of which are grounded in theory. Following analysis of a single sample he/she selects one model as most appropriate.  
  
  
The model generating (MG) mode represents the case where the researcher, having postulated and rejected a model on the basis of poor fit, proceeds in an *exploratory fashion* to modify and re-estimate the model. This involves inspection of what are called modification indices. MG mode is typically atheoretic and data-driven.  
  
  
In this lecture we start with the SC case. We propose and test a model; we will either accept or reject it. (Heads up, the hypothesized model does not fit well.) This will provide opportunity to demonstrate how one may go about the MG approach. Finally, we examine a second hypothesized model (now we are working from the AM perspective) and compare it to the initial hypothesized model.
  

## The Example
The Rosenburg Self-Esteem Scale contains 10 items hypothesized to measure Self-Esteem as a uni-dimensional construct. Here are the items, response scale options and citation. Note that to control for response set bias, five of the items are negatively worded (and reverse scored) while five are positively worded. The sample size for analyses is 973.  
  
  
```{r apx, echo=FALSE, out.width = '90%'}
knitr::include_graphics(here("data","Self-Esteem Scale.png"))
```

## Reading in the Data
The data are in the form of a correlation matrix plus a vector of standard deviations. These are combined to form a covariance matrix for analysis.  
```{r}
corr <- '
1.000
.158 1.000
.463  .220 1.000
.302  .288  .296 1.000
.407  .293  .422  .348 1.000
.234  .285  .245  .319  .370 1.000
.332  .216  .359  .450  .359  .385 1.000
.388  .134  .417  .246  .354  .213  .313 1.000
.325  .245  .335  .368  .386  .457  .509  .243 1.000
.401  .212  .429  .295  .487  .285  .410  .382  .383 1.000
'
# add variable names and convert to full correlation matrix
corrfull <- lavaan::getCov(corr, names=c('v1','v2','v3','v4','v5','v6','v7','v8','v9','v10'))
# add SDs and convert to full covariance matrix
covfull <- lavaan::cor2cov(corrfull, sds=c(0.732,1.106,0.819,0.948,0.929,0.860,0.824,0.738,0.851,0.864))
# MEANS; not used 3.78 2.96 4.09 4.00 3.80 3.37 4.30 3.92 4.07 4.02   # N = 973

```

## The Correlations
Here are the observed correlations among the 10 items, labeled v1 to v10.
```{r}
# Uses sjPlot to print a nice looking correlation table
tab_corr(corrfull, na.deletion = "listwise", digits = 2, triangle = "lower",
         title="Correlations among Self-Esteem Items",
         string.diag=c('1','1','1','1','1','1','1','1','1','1'))
```

## The CFA Model
The hypothesis to be tested is that the items making up the scale measure a uni-dimensional construct, Self-Esteem. A confirmatory factor model where all 10 items load onto a single latent variable and their residual variance terms are uncorrelated represents this hypothesis. We run the model; if it fits well we have support for our hypothesis, if the model fit is poor then our hypothesis of a single underlying factor is rejected.  
  
  
Specify and run the model:
```{r}
model1 <- ' SE  =~ v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8 + v9 + v10 '
fit1 <- cfa(model1, sample.cov=covfull, sample.nobs=973, std.lv=F )

```

### Path Diagram Model 1
```{r}
#--making CFA path diagram---------------------mar(bot,lft,top,rgt)
fig1 <-semPlot::semPaths(fit1, whatLabels = "std", layout = "tree2", 
                         rotation = 1, style = "lisrel", optimizeLatRes = T, 
                         structural = F, layoutSplit = F,
                         intercepts = F, residuals = T, 
                         curve = 1, curvature = 3, nCharNodes = 8, 
                         sizeLat = 8, sizeMan = 6, sizeMan2 = 6, 
                         edge.label.cex = 0.9, label.cex=1.1, residScale=10, mar=c(19,2,19,2), 
                         edge.color = "black", edge.label.position = .60, DoNotPlot=T)
fig1.1 <- fig1 |> mark_sig(fit1, alpha = c("(n.s.)" = 1.00, "*" = .05))
plot(fig1.1)  

```
  
Note that the path connecting the latent factor with v1 is dashed. This indicates that v1 was used as a "reference variable" to define the metric of the latent variable, so there is no test of significance associated with it. The loadings range from .38 to .66. The residual terms, which estimate the proportion of variance in each item that is not associated with the latent factor, range from .58 to .86. v2 has considerable measurement error in comparison to the other items.  
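If you would rather read the standardized loadings from a table than from the diagram, one option is to pull them from lavaan's `standardizedSolution()` and keep only the loading (`=~`) rows; a minimal sketch:
```{r}
# Standardized loadings for Model 1 in table form;
# the est.std column should match the values printed on the path diagram
standardizedSolution(fit1) |> 
  dplyr::filter(op == "=~")
```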


### Full output 1
```{r, class.output="scroll-500"}
summary(fit1, fit.measures=T, standardized=T, rsquare=T)
```
  
  
The $\chi^2_{M1}$ = 283.124 with *df* = 35, *p* < .001. Based on the significance of the $\chi^2_{M1}$ test we reject the model; however, the value of $\chi^2_{M1}$ is influenced by sample size, so we also examine other fit indices.  
The RMSEA = .085, TLI = .875, CFI = .903, SRMR = .052. There is considerable room for improvement in fit; the RMSEA and SRMR could be smaller, and the TLI (and CFI) could be larger.   
So, from a strictly confirmatory perspective (SC) we conclude that the covariance structure of the Self-Esteem Scale items is more complex than a uni-dimensional model with a single latent variable.  
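As an aside, a quick way to pull just these indices, rather than scanning the full summary output, is lavaan's `fitMeasures()`; a minimal sketch:
```{r}
# Selected fit indices for Model 1
fitMeasures(fit1, c("chisq", "df", "pvalue", "rmsea", "tli", "cfi", "srmr"))
```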

  
### Reliability 1
The `semTools` package has a command `compRelSEM` to obtain reliability estimates such as Cronbach's $\alpha$ coefficient.
```{r}
semTools::compRelSEM(fit1, tau.eq=T, obs.var=T, return.total=F) 
```
   
In this sample the reliability of the scale is estimated by $\alpha$ = .827.   
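As a rough cross-check, Cronbach's $\alpha$ can also be computed directly from the item covariance matrix with `psych::alpha()` (the psych package is loaded above); the raw alpha it reports should be close to the estimate from `compRelSEM()`. A sketch:
```{r}
# Cross-check: Cronbach's alpha computed from the item covariance matrix
psych::alpha(covfull)$total
```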

   
### Residual Correlations 1
The values below are the residual correlations (i.e., observed correlation minus model-estimated correlation). The average absolute value of these residuals is summarized by the standardized root mean square residual (SRMR). Note that the largest residual correlation (.12) is between v6 and v9. While `semoutput::sem_residuals()` makes a nice table, I don't believe the number of digits can be changed; it is limited to two. More precise values (3 digits) are shown in the output from `lavResiduals()` below.
```{r}
semoutput::sem_residuals(fit1)
```

### Analysis of Residuals 1
The output below displays residual correlations and z-scores indicating the magnitude of the residuals. Typically, z values > |2.58| (which is the two-tailed critical value for $\alpha = .01$) are interpreted as significant and indicate where the model is doing a poor job of reproducing the observed correlations.
```{r,class.output="scroll-500"}
lavResiduals(fit1, type = "cor.bollen")
```
   
Notice that the largest residual correlation (.124) between v6 and v9 has the largest z value associated with it (6.111). In other words, the model is doing a poor job of reproducing this correlation.  

Ideally the residual correlations should be normally distributed with a mean of 0.0. One way to examine the distribution of these residuals is with a **stem-and-leaf plot** which is a type of sideways histogram. The first 2 digits of each residual correlation are to the left of the | symbol and are referred to as the *stem*. The third digit of each residual correlation is to the right of the | and is referred to as a *leaf*. The plot shows the number (and value) of the leaves for each stem. Note that there are no decimal points in the plot; the implied decimal point is two digits to the left of the | symbol.  

```{r,class.output="scroll-500"}
stem( lavResiduals(fit1, output = "table")$res ) # gives stem/leaf plot of resid cors
```
  
For example, in the first row of the plot there are two residuals for which the first two digits are -.08 and the third digits are 6 and 2, printed to the right of the | symbol. These two residual correlations are thus -.086 and -.082. The largest positive residual correlation, .124, appears in the last row as "12 | 4". This distribution is far from normal.  
  
Looking at the entire plot shows how many residuals begin with the same stem value. So, there are two that start with -.08, six that start with -.06, seven that start with -.04, five that start with .00, etc.  
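If a conventional histogram is easier to read than the stem-and-leaf display, the same vector of residual correlations can be passed to `hist()`; a sketch that simply reuses the extraction from the chunk above:
```{r}
# Histogram of the residual correlations for Model 1
hist(lavResiduals(fit1, output = "table")$res,
     main = "Residual correlations, Model 1",
     xlab = "Residual correlation")
```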




  
### Modification Indices 1
The values below indicate the drop in $\chi^2_M$ (i.e., model improvement) anticipated if parameters currently fixed at 0.0 are freed to be estimated; all freely estimated parameters automatically have an MI of zero. Note that `lhs op rhs` refers to the variables on the left-hand and right-hand sides of the operator **~~**, which indicates a correlation between the residual variances of two items. The critical value of $\chi^2$ with *df* = 1 and $\alpha$ = .05 is 3.841, so any MI greater than this value is worth examination.  
```{r, class.output="scroll-500"}
modificationIndices(fit1, sort. = TRUE, minimum.value = 3)
```
Here the largest MI is listed first; it suggests that if we re-run the model and allow the residual terms of v6 and v9 to correlate, the $\chi^2_M$ is expected to decrease by about 43.80. The other columns show the expected parameter change (**epc**), i.e., the value the parameter is expected to take if it were freed and estimated. Note that this finding confirms what we saw when inspecting the residual correlations; the residual correlation between v6 and v9 was the largest.  
The heading **sepc.lv** gives the epc when the latent variable(s) are standardized, and the heading **sepc.all** gives the completely standardized value. So the anticipated covariance and correlation between these two residual terms, if this parameter were to be estimated, are .115 and .239, respectively.   
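Because only MI values above the critical value are of interest, the table can also be filtered programmatically. A minimal sketch; note that the 3.841 cutoff is simply `qchisq(.95, df = 1)`:
```{r}
# Critical value for a 1-df chi-square test at alpha = .05
qchisq(.95, df = 1)
# Residual covariance MIs from Model 1 that exceed the critical value, largest first
modificationIndices(fit1) |>
  dplyr::filter(op == "~~", mi > qchisq(.95, df = 1)) |>
  dplyr::arrange(dplyr::desc(mi))
```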

## A Revised Model 
Here we are operating in the MG mode; our re-specification is atheoretic and simply data-driven.  Nonetheless, we illustrate how one might specify and fit such a model.
  
Re-Specify and Re-Run the model:  
I added one additional parameter, `v6 ~~ v9`, the correlation between the residual variances of these two items, because the MI from Model 1 indicated that doing so would have the greatest impact on improving model fit. There is no theoretical rationale for adding this type of parameter because in traditional CFA models the residual terms are assumed to be independent (i.e., uncorrelated).  
```{r}
model2 <- ' SE  =~ v1 + v2 + v3 + v4 + v5 + v6 + v7 + v8 + v9 + v10
            v6 ~~ v9 '
fit2 <- cfa(model2, sample.cov=covfull, sample.nobs=973, std.lv=F )
```

### Path Diagram Model 2
```{r}
#--making CFA path diagram---------------------mar(bot,lft,top,rgt)
fig2 <-semPlot::semPaths(fit2, whatLabels = "std", layout = "tree2", 
                         rotation = 1, style = "lisrel", optimizeLatRes = T, 
                         structural = F, layoutSplit = F,
                         intercepts = F, residuals = T, 
                         curve = 1, curvature = 3, nCharNodes = 8, 
                         sizeLat = 8, sizeMan = 6, sizeMan2 = 6, 
                         edge.label.cex = 0.9, label.cex=1.1, residScale=10, mar=c(19,2,19,2), 
                         edge.color = "black", edge.label.position = .60, DoNotPlot=T)
fig2.1 <- fig2 |> mark_sig(fit2, alpha = c("(n.s.)" = 1.00, "*" = .05))
plot(fig2.1)  

```
  
Note that the correlation between the two residuals is .23, as was suggested by the MI from Model 1.  
  

### Full output 2
```{r, class.output="scroll-500"}
summary(fit2, fit.measures=T, standardized=T, rsquare=T)
```
  
The $\chi^2_{M2}$ = 239.464 with *df* = 34, *p* < .001.  
The RMSEA = .079, TLI = .894, CFI = .920, SRMR = .049. So the fit is somewhat improved relative to Model 1. A formal test comparing the two models is shown below.


### Residual Correlations 2
The values below are the residual correlations (i.e., observed correlation minus model-estimated correlation). The average absolute value of these residuals is summarized by the standardized root mean square residual (SRMR). Note that the residual correlation between v6 and v9 is now 0.00.
```{r}
sem_residuals(fit2)
```

### Analysis of Residuals 2
The output below displays residual correlations and z-scores indicating the magnitude of the residuals. Typically, z values > |2.58| are interpreted as significant and indicate where the model is doing a poor job of reproducing the observed correlations.
```{r,class.output="scroll-500"}
lavResiduals(fit2, type = "cor.bollen")
```

### Modification Indices 2
Note that the values may have changed from those derived from the first model. The values below indicate the drop in $\chi^2_M$ (i.e., model improvement) anticipated if parameters currently fixed at 0.0 are freed to be estimated. As before, `lhs op rhs` refers to the variables on the left-hand and right-hand sides of the operator **~~**, which indicates a correlation between the residual variances of two items. The critical value of $\chi^2$ with *df* = 1 and $\alpha = .05$ is 3.841, so any MI greater than this value is worth examination.  
```{r, class.output="scroll-500"}
modificationIndices(fit2, sort. = TRUE, minimum.value = 3)
```

### Comparing Models 1 v 2
The difference in the fit of the two models is itself distributed as a $\chi^2_\Delta$ with *df* equal to the difference in the number of *df* for the two models.  
Here  $\chi^2_\Delta$ = $\chi^2_{M1}$ - $\chi^2_{M2}$.  
```{r}
# comparing nested models via chisq likelihood ratio test
lavTestLRT(fit2,fit1, type = "chisq")

```
Note that the model comparison test is significant: $\chi^2_\Delta$ = 43.678, *df* = 1, *p* < .05. (Recall that the anticipated drop in $\chi^2_{M}$ based on the MI from Model 1 was 43.804.) Allowing the residual terms of v6 and v9 to correlate significantly improved the fit of the model. We could continue in the MG mode by freeing the parameter with the next largest MI and re-running the model, but this strategy is purely data-driven and would leave us with a model that we cannot defend on theoretical grounds.
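As a check on the likelihood ratio test above, the same $\chi^2_\Delta$ and its *p*-value can be computed by hand from the two model chi-square values; a minimal sketch:
```{r}
# Chi-square difference test computed directly from the two fitted models
chisq_diff <- unname(fitMeasures(fit1, "chisq") - fitMeasures(fit2, "chisq"))
df_diff    <- unname(fitMeasures(fit1, "df") - fitMeasures(fit2, "df"))
c(chisq_diff = chisq_diff, df_diff = df_diff,
  p = pchisq(chisq_diff, df = df_diff, lower.tail = FALSE))
```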

## An Alternative CFA Model
Inspection of the MIs and residuals (from Models 1 and 2) led me to formulate **a second hypothesis** regarding the factor structure of the Self-Esteem Scale. I noticed that pairs of negatively worded items had large MI values. This pattern suggested a type of two-factor model referred to in the psychometrics literature as a common method variance model.  
  
My alternative model (Model 3) posits two correlated latent factors. The items that are negatively worded (and reverse scored) load on one factor (SEn) and the items that are positively worded load on the second factor (SEp). All 10 items measure Self-Esteem, but the way that the items are worded contributes systematically to their inter-item covariances/correlations.  
Note that Model 3 has only one more parameter than Model 1, and the same number of parameters as Model 2, but unlike Model 2, Model 3 is theoretically based.  
  
  
Specify and run the model:
```{r}
model3 <- ' 
SEp  =~ v1 + v3 + v5 + v8 + v10 
SEn  =~ v2 + v4 + v6 + v7 + v9 
'
fit3 <- cfa(model3, sample.cov=covfull, sample.nobs=973, std.lv=F )
```

### Path Diagram Model 3
```{r}
#--making CFA path diagram---------------------mar(bot,lft,top,rgt)
fig3 <-semPlot::semPaths(fit3, whatLabels = "std", layout = "tree2", 
                         rotation = 1, style = "lisrel", optimizeLatRes = T, 
                         structural = F, layoutSplit = F,
                         intercepts = F, residuals = T, 
                         curve = 1, curvature = 3, nCharNodes = 8, 
                         sizeLat = 8, sizeMan = 6, sizeMan2 = 6, 
                         edge.label.cex = 0.9, label.cex=1.1, residScale=10, mar=c(19,2,19,2), 
                         edge.color = "black", edge.label.position = .60, DoNotPlot=T)
fig3.1 <- fig3 |> mark_sig(fit3, alpha = c("(n.s.)" = 1.00, "*" = .05))
plot(fig3.1)  

```
  

Note that the paths connecting v1 to SEp and v2 to SEn are dashed. This is to indicate that each was used as a "reference variable" to define the metric of the latent variables, so there are no tests of significance associated with them. For the SEp factor the loadings range from .56 to .68, and for the SEn factor, loadings range from .39 to .71.   

The correlation between the two latent factors is .77, suggesting that the two factors are not very distinct from one another.  
  
The residual terms, which estimate the proportion of variance in each item that is not associated with its respective latent factor, range from .50 to .85. v2 still has considerable measurement error.  
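Rather than reading the factor correlation and R-squared values off the diagram, they can be extracted directly from the fitted model; a minimal sketch using lavaan's `lavInspect()`:
```{r}
# Model-implied correlation between the two latent factors (SEp and SEn)
lavInspect(fit3, "cor.lv")
# R-squared for each item (1 minus its standardized residual variance)
lavInspect(fit3, "rsquare")
```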
  

### Full output 3
```{r, class.output="scroll-500"}
summary(fit3, fit.measures=T, standardized=T, rsquare=T)
```
  
The $\chi^2_{M3}$ = 119.926 with *df* = 34, *p* < .001.  
The RMSEA = .051, TLI = .956, CFI = .966, SRMR = .032. So all indices suggest a good fit.   
A formal test comparing this alternative model with the original model is shown below.

### Residual Correlations 3
The values below are the residual correlations (i.e., observed correlation minus model-estimated correlation). The average absolute value of these residuals is summarized by the standardized root mean square residual (SRMR).
```{r}
sem_residuals(fit3)
```

### Analysis of Residuals 3
The output below displays residual correlations and z-scores indicating the magnitude of the residuals. Typically, z values > |2.58| are interpreted as significant and indicate where the model is doing a poor job of reproducing the observed correlations.
```{r,class.output="scroll-500"}
lavResiduals(fit3, type = "cor.bollen")
```
The z-scores associated with the residual correlations are less extreme than those from Model 1 above.
  
As before, we examine the distribution of the residual correlations with a **stem-and-leaf plot** (see the Model 1 analysis above for a description of how to read the plot); ideally the residuals should be approximately normally distributed with a mean of 0.0.  


```{r,class.output="scroll-500"}
stem( lavResiduals(fit3, output = "table")$res ) # gives stem/leaf plot of resid cors
```

Based on the stem-and-leaf plot, the distribution of the residual correlations looks a bit more normal and the values are less extreme than those from Model 1 above. Here most of the residuals are between -.029 and -.009 (there are more leaves on the two stems -2 and -0 than there are on any of the other stems).

### Modification Indices 3
The values below indicate the drop in $\chi^2_M$ (i.e., model improvement) anticipated if parameters currently fixed at 0.0 are freed to be estimated. Note that `lhs op rhs` refers to the variables on the left-hand and right-hand sides of the operator **~~**, which indicates a correlation between the residual variances of two items. The critical value of $\chi^2$ with *df* = 1 and $\alpha = .05$ is 3.841, so any MI greater than this value is worth examination.  
```{r, class.output="scroll-500"}
modificationIndices(fit3, sort. = TRUE, minimum.value = 3)
```

### Comparing Models 1 v 3
The difference in the fit of the two models is itself distributed as a $\chi^2_\Delta$ with *df* equal to the difference in the number of *df* for the two models.  
Here $\chi^2_\Delta$ = $\chi^2_{M1}$ - $\chi^2_{M3}$.  

```{r}
# comparing nested models via chisq likelihood ratio test
lavTestLRT(fit3,fit1, type = "chisq")

```
So the $\chi^2_\Delta$ test indicates that the two-factor common method variance model fits the observed covariances (correlations) among the items better than the initial one-factor model. All 10 items measure Self-Esteem, but the way in which the items are worded contributes systematically to the inter-item correlations.  

### Reliability 3
```{r}
semTools::compRelSEM(fit3, tau.eq=T, obs.var=T, return.total=T) 
```
The reliability analysis indicates that if two separate subscales (SEp and SEn) were to be used, they would not be as reliable as the total scale score (.778, .717, and .827, respectively). Also, note that the correlation between the two factors is strong (.77), so using the two subscales as predictors in a regression model would likely result in collinearity problems. In practice, researchers are better off using all 10 items to form a single scale score.


## Concluding Remarks
In this lecture we worked through the three approaches to SEM: strictly confirmatory (SC), model generating (MG), and alternative models (AM). The initial uni-dimensional model of the Rosenberg Self-Esteem Scale (Model 1) was not supported. Guided by the modification indices, we illustrated how to add a correlation between two residual terms (Model 2). Model 2 improved fit, but technically it is not a true CFA measurement model because such models posit that the residual variances of the indicator variables (items) are all uncorrelated. We could have continued to add correlations among residual terms to improve the fit, but such a model would be difficult to defend on theoretical grounds.    
Model 3 was an alternative model, grounded in theory (in this case, a common method variance model), and it proved to be the best-fitting model examined here. This result illustrates that the wording (positive or negative) of the items that make up self-report scales can influence how people respond to them, which in turn influences the correlations among the scale items.  
  
Model 2 and Model 3 have the same number of parameters (21) and the same number of *df* (34), so they cannot be compared using the $\chi^2_\Delta$ test for nested models. The Akaike information criterion (AIC) can be used to compare the relative fit of two non-nested models; smaller values indicate better fit. (See the full outputs above for the AIC values.) For Model 2 the AIC = 22373.942, and for Model 3 the AIC = 22254.404; so Model 3 fits better than Model 2 even though they have the same number of parameters. As an aside, Model 1 had AIC = 22415.620. Based on the AIC values from all three models, Model 3 is the preferred model.  
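A compact way to collect the information criteria for all three models in a single table; a minimal sketch:
```{r}
# AIC and BIC for the three models, side by side
sapply(list(Model1 = fit1, Model2 = fit2, Model3 = fit3),
       fitMeasures, fit.measures = c("aic", "bic"))
```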

SEM is a powerful tool that allows researchers to formulate and test hypotheses about the underlying structure of covariance matrices. When used wisely, structural equation models can provide useful insights into the complex relations among a set of variables relevant to some phenomenon of interest. 
