This tutorial walks you through fitting a multilevel confirmatory
factor analysis (CFA) model with the lavaan package in R. It is
intended for people who are already familiar with CFA and want to
extend their knowledge to multilevel CFA.
library(tidyverse)
library(tidylog)
library(lavaan)
df <- read_table("tech_distract_sim.csv", col_names = FALSE)
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## X1 = col_double(),
## X2 = col_double(),
## X3 = col_double(),
## X4 = col_double(),
## X5 = col_double(),
## X6 = col_double(),
## X7 = col_double()
## )
colnames(df) <- c("tiktok", "insta", "candy", "netflix", "youtube", "email", "id")
Always do preliminary data exploration before fitting a model. This includes plotting the data at the group and individual levels, checking for missing values and outliers, looking at basic descriptive statistics, examining the distribution of cluster sizes, and so on. (A full treatment of these preliminaries is outside the scope of this tutorial, but a few illustrative one-liners are sketched after the data preview below.)
The data in this example are simulated, and the parameter values are in no way grounded in reality, so don't take them seriously. The dataset resembles one that could be collected in an experience sampling design, where people report multiple times per day, for several days, the extent to which they used different technologies to distract themselves from negative emotions. The data are therefore multilevel, with repeated measures nested within people. They consist of six variables representing different activities people might engage in with technology to distract themselves from negative emotions: TikTok, Instagram, Candy Crush, Netflix, YouTube, and Email. The data are in long format, with each row representing one measurement occasion and each column representing a different activity, plus an ID variable identifying the person.
(Note that this design could also be considered a three-level design,
where repeated measures are nested within days and days are nested
within people, but we will treat it as a two-level design because
lavaan currently handles only two levels. If you wanted to be very
rigorous, you could check the day-level variance for each variable
separately in three-level models using lmer, as sketched below.)
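A minimal sketch of that check, assuming a hypothetical day column identifying the day of each observation (this column is not part of the simulated data used here):
library(lme4)
# hypothetical: df$day would identify the day of each observation (not in the simulated data)
fit_3lvl <- lmer(tiktok ~ 1 + (1 | id/day), data = df)
VarCorr(fit_3lvl) # if the day-level variance component is near zero, two levels are enough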
Here is a preview of the data:
head(df)
## # A tibble: 6 × 7
## tiktok insta candy netflix youtube email id
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.415 1.07 2.11 1.62 -1.29 1.75 1
## 2 1.28 2.09 2.58 0.193 -0.212 1.21 1
## 3 -0.382 0.975 -0.130 -1.41 -3.19 -0.161 1
## 4 0.214 -0.104 -2.29 -0.899 -4.08 -0.368 1
## 5 -0.736 -0.558 -0.402 -1.18 -0.910 0.827 1
## 6 1.71 0.333 -0.787 0.534 -1.28 2.89 1
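For orientation, a few of the preliminary checks mentioned above might look like this (a minimal sketch, not an exhaustive exploration):
summary(df) # basic descriptive statistics; also reveals missing values
colSums(is.na(df)) # number of missing values per variable
table(table(df$id)) # distribution of cluster (person) sizes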
In our proposed model, we hypothesize different factor structures at the within-person and between-person levels.
At the within-person level, we hypothesize two factors: one characterized by distracting yourself on a phone and one characterized by distracting yourself on a computer. The idea here is that, in moment-to-moment experience, people will distract themselves either with a variety of activities on their phone or with a variety of activities on their computer.
At the between-person level, we hypothesize that there is a single factor that represents the general tendency to distract oneself with technology. This factor is represented by all 6 activities. The idea here is that some people are more likely to distract themselves with technology in general, regardless of the specific activity.
For this first step, it would probably be easier to establish your model in diagram form rather than code, but we will write out the model syntax here for demonstration purposes. You could also hypothesize two competing models (e.g., a model with equivalent factor structures at both levels versus a model with different factor structures at each level) and evaluate which model fits the data better; a sketch of such a comparison appears after we examine the overall fit below.
model <- '
level: 1
# Define within-person factors
phone =~ NA*tiktok + NA*insta + NA*candy
laptop =~ NA*netflix + NA*youtube + NA*email
# Within-person factor covariances
phone ~~ laptop
# Within-person factor variances (fix to 1 for identification)
phone ~~ 1*phone
laptop ~~ 1*laptop
# Within-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
level: 2
# Define between-person factor
tech =~ NA*tiktok + NA*insta + NA*candy + NA*netflix + NA*youtube + NA*email
# Between-person factor variances (fix to 1 for identification)
tech ~~ 1*tech
# Between-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
Next, we will check the ICCs for each of our indicator variables to determine whether the data indeed have a multilevel structure. This is done by fitting a model with only residual variances at each level plus the between-person intercepts, and then examining the ICCs. ICCs above .10 are generally considered indicative of a multilevel structure.
model_icc <- '
level: 1
# Within-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
level: 2
# Between-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
fit_icc <- sem(model_icc, data=df, cluster="id")
lavInspect(fit_icc, "icc")
## tiktok insta candy netflix youtube email
## 0.552 0.572 0.482 0.407 0.549 0.535
Here we find substantial variance between individuals. All ICCs are above our cutoff value of .10. This suggests that a multilevel model is appropriate for these data.
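As a sanity check, you can reproduce these ICCs by hand from the unrestricted (h1) variance estimates; this sketch should match lavInspect's values up to rounding:
h1_icc <- lavInspect(fit_icc, "h1")
between_var <- diag(h1_icc$id$cov) # between-person variances
within_var <- diag(h1_icc$within$cov) # within-person variances
round(between_var / (between_var + within_var), 3) # ICC = between / (between + within)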
Now we will fit our model to the data, letting lavaan know which variable represents the clustering variable.
fit <- sem(model, data=df, cluster="id")
The model fits without any errors or warnings. Yay!
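You can also confirm convergence programmatically (a quick check via lavInspect):
lavInspect(fit, "converged") # returns TRUE when the optimizer converged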
fitMeasures(fit)
## npar fmin chisq
## 31.000 2.758 15.343
## df pvalue baseline.chisq
## 17.000 0.571 6148.349
## baseline.df baseline.pvalue cfi
## 30.000 0.000 1.000
## tli nnfi rfi
## 1.000 1.000 0.996
## nfi pnfi ifi
## 0.998 0.565 1.000
## rni logl unrestricted.logl
## 1.000 -20679.942 -20672.271
## aic bic ntotal
## 41421.884 41602.430 2500.000
## bic2 rmsea rmsea.ci.lower
## 41503.935 0.000 0.000
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.016 0.900 1.000
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.000 0.080
## srmr srmr_within srmr_between
## 0.033 0.005 0.029
Overall, we find a non-significant chi-square value, indicating that the model-implied covariance matrix does not deviate significantly from the observed covariance matrix.
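As an aside, if you had also fit the competing specification mentioned earlier (a hypothetical fit_alt object, not created in this tutorial), the two models would generally not be nested, so you would compare them on information criteria rather than with a chi-square difference test:
fitMeasures(fit, c("aic", "bic")) # our hypothesized model
# hypothetical second fit with a competing model string, e.g.:
# fit_alt <- sem(model_alt, data = df, cluster = "id")
# fitMeasures(fit_alt, c("aic", "bic")) # lower AIC/BIC is preferred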
In the first approach to evaluating level-specific fit, we take the
covariance matrix estimated at each level of an unrestricted
(saturated) multilevel model and fit single-level models to those
matrices. We can then use traditional fit indices to evaluate the fit
of the model at each level. lavaan automatically estimates this
unrestricted model whenever you fit any model, and you can access it
with lavInspect(fit, "h1").
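For example, here is a quick look at what the unrestricted model returns (the same elements we use in the calls below):
h1 <- lavInspect(fit, "h1")
names(h1) # one element per level: "within" and "id"
round(h1$within$cov, 2) # within-level covariance matrix
round(h1$id$cov, 2) # between-level covariance matrix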
model_level1 <- '
# Define within-person factors
phone =~ NA*tiktok + NA*insta + NA*candy
laptop =~ NA*netflix + NA*youtube + NA*email
# Within-person factor covariances
phone ~~ laptop
# Within-person factor variances (fix to 1 for identification)
phone ~~ 1*phone
laptop ~~ 1*laptop
# Within-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
'
fit_sigma_h1_w <- sem(model_level1,
                      sample.cov = lavInspect(fit, "h1")$within$cov,
                      sample.mean = lavInspect(fit, "h1")$within$mean,
                      sample.nobs = nrow(df))
fitMeasures(fit_sigma_h1_w)
## npar fmin chisq
## 19.000 0.001 5.036
## df pvalue baseline.chisq
## 8.000 0.754 6026.094
## baseline.df baseline.pvalue cfi
## 15.000 0.000 1.000
## tli nnfi rfi
## 1.001 1.001 0.998
## nfi pnfi ifi
## 0.999 0.533 1.000
## rni logl unrestricted.logl
## 1.000 -19736.460 -19733.942
## aic bic ntotal
## 39510.921 39621.578 2500.000
## bic2 rmsea rmsea.ci.lower
## 39561.210 0.000 0.000
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.017 0.900 1.000
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.000 0.080
## rmr rmr_nomean srmr
## 0.006 0.006 0.004
## srmr_bentler srmr_bentler_nomean crmr
## 0.004 0.005 0.005
## crmr_nomean srmr_mplus srmr_mplus_nomean
## 0.006 0.004 0.005
## cn_05 cn_01 gfi
## 7699.430 9974.570 0.999
## agfi pgfi mfi
## 0.998 0.296 1.001
## ecvi
## 0.017
# fitMeasures() above shows all fit statistics; you can also select them individually
fitMeasures(fit_sigma_h1_w)["pvalue"]
## pvalue
## 0.7537357
fitMeasures(fit_sigma_h1_w)["rmsea"]
## rmsea
## 0
fitMeasures(fit_sigma_h1_w)["cfi"]
## cfi
## 1
fitMeasures(fit_sigma_h1_w)["tli"]
## tli
## 1.000925
fitMeasures(fit_sigma_h1_w)["srmr"]
## srmr
## 0.004339135
Fit stats look great!!
model_level2 <- '
# Define between-person factor
tech =~ NA*tiktok + NA*insta + NA*candy + NA*netflix + NA*youtube + NA*email
# Between-person factor variances (fix to 1 for identification)
tech ~~ 1*tech
# Between-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
fit_sigma_h1_b <- sem(model_level2,
                      sample.cov = lavInspect(fit, "h1")$id$cov,
                      sample.mean = lavInspect(fit, "h1")$id$mean,
                      sample.nobs = 100) # 100 clusters at level 2
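Rather than hard-coding the 100, you can compute the number of clusters from the data (a small convenience; the model summary later also reports 100 clusters):
n_clusters <- length(unique(df$id)) # number of level-2 units (people)
n_clusters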
fitMeasures(fit_sigma_h1_b)
## npar fmin chisq
## 18.000 0.061 12.290
## df pvalue baseline.chisq
## 9.000 0.197 385.988
## baseline.df baseline.pvalue cfi
## 15.000 0.000 0.991
## tli nnfi rfi
## 0.985 0.985 0.947
## nfi pnfi ifi
## 0.968 0.581 0.991
## rni logl unrestricted.logl
## 0.991 -739.571 -733.426
## aic bic ntotal
## 1515.141 1562.034 100.000
## bic2 rmsea rmsea.ci.lower
## 1505.186 0.060 0.000
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.136 0.900 0.365
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.394 0.080
## rmr rmr_nomean srmr
## 0.037 0.042 0.026
## srmr_bentler srmr_bentler_nomean crmr
## 0.026 0.030 0.030
## crmr_nomean srmr_mplus srmr_mplus_nomean
## 0.035 0.026 0.030
## cn_05 cn_01 gfi
## 138.664 177.288 0.963
## agfi pgfi mfi
## 0.889 0.321 0.984
## ecvi
## 0.483
fitMeasures(fit_sigma_h1_b)["pvalue"]
## pvalue
## 0.197445
fitMeasures(fit_sigma_h1_b)["rmsea"]
## rmsea
## 0.06046203
fitMeasures(fit_sigma_h1_b)["cfi"]
## cfi
## 0.9911315
fitMeasures(fit_sigma_h1_b)["tli"]
## tli
## 0.9852192
fitMeasures(fit_sigma_h1_b)["srmr"]
## srmr
## 0.02608581
Very well-fitting model! We are great scientists.
Now let’s see what happens when we mis-specify the model at level 2.
model_mis <- '
level: 1
# Define within-person factors
phone =~ NA*tiktok + NA*insta + NA*candy
laptop =~ NA*netflix + NA*youtube + NA*email
# Within-person factor covariances
phone ~~ laptop
# Within-person factor variances (fix to 1 for identification)
phone ~~ 1*phone
laptop ~~ 1*laptop
# Within-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
level: 2
# Define between-person factor
phone =~ NA*tiktok + NA*insta + NA*candy
laptop =~ NA*netflix + NA*youtube + NA*email
# Between-person factor covariances
phone ~~ laptop
# Between-person factor variances (fix to 1 for identification)
phone ~~ 1*phone
laptop ~~ 1*laptop
# Between-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
# first fit the multilevel model to derive the covariance matrices
fit_mis <- sem(model_mis, data=df, cluster="id")
model_level2_mis <- '
# Define between-person factor
phone =~ NA*tiktok + NA*insta + NA*candy
laptop =~ NA*netflix + NA*youtube + NA*email
# Between-person factor covariances
phone ~~ laptop
# Between-person factor variances (fix to 1 for identification)
phone ~~ 1*phone
laptop ~~ 1*laptop
# Between-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
fit_sigma_h1_b_mis <- sem(model_level2_mis,
                          sample.cov = lavInspect(fit_mis, "h1")$id$cov,
                          sample.mean = lavInspect(fit_mis, "h1")$id$mean,
                          sample.nobs = 100)
## Warning in lav_object_post_check(object): lavaan WARNING: covariance matrix of latent variables
## is not positive definite;
## use lavInspect(fit, "cov.lv") to investigate.
fitMeasures(fit_sigma_h1_b_mis)
## npar fmin chisq
## 19.000 0.048 9.591
## df pvalue baseline.chisq
## 8.000 0.295 385.988
## baseline.df baseline.pvalue cfi
## 15.000 0.000 0.996
## tli nnfi rfi
## 0.992 0.992 0.953
## nfi pnfi ifi
## 0.975 0.520 0.996
## rni logl unrestricted.logl
## 0.996 -738.221 -733.426
## aic bic ntotal
## 1514.442 1563.940 100.000
## bic2 rmsea rmsea.ci.lower
## 1503.934 0.045 0.000
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.131 0.900 0.469
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.312 0.080
## rmr rmr_nomean srmr
## 0.036 0.041 0.025
## srmr_bentler srmr_bentler_nomean crmr
## 0.025 0.028 0.028
## crmr_nomean srmr_mplus srmr_mplus_nomean
## 0.034 0.025 0.028
## cn_05 cn_01 gfi
## 162.683 210.466 0.973
## agfi pgfi mfi
## 0.907 0.288 0.992
## ecvi
## 0.476
lavInspect(fit_sigma_h1_b_mis, "cov.lv")
## phone laptop
## phone 1.000
## laptop 1.047 1.000
Interestingly, when you fit the mis-specified model, the fit indices
are as good as or better than those of the correctly specified model.
However, lavaan warns that the covariance matrix of the latent
variables is not positive definite. Examining that matrix (psi) shows
a factor covariance greater than 1; since both factor variances are
fixed to 1, that is an implied correlation above 1. A good
demonstration of why you should not ignore warnings!
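A quick way to see the problem directly, using base R on the matrix lavaan already gave us:
psi <- lavInspect(fit_sigma_h1_b_mis, "cov.lv")
eigen(psi)$values # one eigenvalue is negative, so the matrix is not positive definite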
Now we will evaluate level-specific fit using the method proposed by Ryu and West (2009). In this method, we fit two multilevel models. The first leaves level 2 unrestricted (saturated) in order to evaluate the fit of our hypothesized model at level 1; the second leaves level 1 unrestricted to evaluate the fit of our hypothesized model at level 2. Because the other level is saturated, the fit statistics reflect misfit only at the level where the hypothesized model is specified.
model_level2_unrestricted <- '
level: 1
# Define within-person factors
phone =~ NA*tiktok + NA*insta + NA*candy
laptop =~ NA*netflix + NA*youtube + NA*email
# Within-person factor covariances
phone ~~ laptop
# Within-person factor variances (fix to 1 for identification)
phone ~~ 1*phone
laptop ~~ 1*laptop
# Within-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
level: 2
# Between-person covariances and variances
tiktok ~~ tiktok + insta + candy + netflix + youtube + email
insta ~~ insta + candy + netflix + youtube + email
candy ~~ candy + netflix + youtube + email
netflix ~~ netflix + youtube + email
youtube ~~ youtube + email
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
fit_eval_level1 <- sem(model_level2_unrestricted, data=df, cluster="id")
fitMeasures(fit_eval_level1)
## npar fmin chisq
## 40.000 2.756 4.834
## df pvalue baseline.chisq
## 8.000 0.775 6148.349
## baseline.df baseline.pvalue cfi
## 30.000 0.000 1.000
## tli nnfi rfi
## 1.002 1.002 0.997
## nfi pnfi ifi
## 0.999 0.266 1.001
## rni logl unrestricted.logl
## 1.001 -20674.688 -20672.271
## aic bic ntotal
## 41429.376 41662.338 2500.000
## bic2 rmsea rmsea.ci.lower
## 41535.248 0.000 0.000
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.016 0.900 1.000
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.000 0.080
## srmr srmr_within srmr_between
## 0.005 0.005 0.000
Fit indices look good. Now let’s evaluate the fit of the hypothesized model at level 2.
model_level1_unrestricted <- '
level: 1
# Within-person variances and covariances
tiktok ~~ tiktok + insta + candy + netflix + youtube + email
insta ~~ insta + candy + netflix + youtube + email
candy ~~ candy + netflix + youtube + email
netflix ~~ netflix + youtube + email
youtube ~~ youtube + email
email ~~ email
level: 2
# Define between-person factor
tech =~ NA*tiktok + NA*insta + NA*candy + NA*netflix + NA*youtube + NA*email
# Between-person factor variance (fix to 1 for identification)
tech ~~ 1*tech
# Between-person residual variances
tiktok ~~ tiktok
insta ~~ insta
candy ~~ candy
netflix ~~ netflix
youtube ~~ youtube
email ~~ email
# Between-person intercepts
tiktok ~ 1
insta ~ 1
candy ~ 1
netflix ~ 1
youtube ~ 1
email ~ 1
'
fit_eval_level2 <- sem(model_level1_unrestricted, data=df, cluster="id")
fitMeasures(fit_eval_level2)
## npar fmin chisq
## 39.000 2.757 10.517
## df pvalue baseline.chisq
## 9.000 0.310 6148.349
## baseline.df baseline.pvalue cfi
## 30.000 0.000 1.000
## tli nnfi rfi
## 0.999 0.999 0.994
## nfi pnfi ifi
## 0.998 0.299 1.000
## rni logl unrestricted.logl
## 1.000 -20677.529 -20672.271
## aic bic ntotal
## 41433.059 41660.197 2500.000
## bic2 rmsea rmsea.ci.lower
## 41536.284 0.008 0.000
## rmsea.ci.upper rmsea.ci.level rmsea.pvalue
## 0.025 0.900 1.000
## rmsea.close.h0 rmsea.notclose.pvalue rmsea.notclose.h0
## 0.050 0.000 0.080
## srmr srmr_within srmr_between
## 0.029 0.000 0.029
Yay, our hypothesized model fits well at both levels!
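For convenience, you can collect the key statistics from the two partially saturated models into one small table (a sketch; choose whichever measures you prefer to report):
rbind(
  level1 = fitMeasures(fit_eval_level1, c("chisq", "df", "pvalue", "rmsea", "cfi", "srmr")),
  level2 = fitMeasures(fit_eval_level2, c("chisq", "df", "pvalue", "rmsea", "cfi", "srmr"))
)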
Below are three different ways to query the parameters of the model;
choose whichever suits your needs best. The summary function is a
generic function that prints the model output in a human-readable
format, and you can add standardized=TRUE to include the standardized
estimates. The parameterestimates and standardizedSolution functions
return the estimates as data frames, which can be flexibly manipulated
and used for visualization or other purposes (an example of such
manipulation follows the output below).
summary(fit, standardized=TRUE)
## lavaan 0.6.15 ended normally after 51 iterations
##
## Estimator ML
## Optimization method NLMINB
## Number of model parameters 31
##
## Number of observations 2500
## Number of clusters [id] 100
##
## Model Test User Model:
##
## Test statistic 15.343
## Degrees of freedom 17
## P-value (Chi-square) 0.571
##
## Parameter Estimates:
##
## Standard errors Standard
## Information Observed
## Observed information based on Hessian
##
##
## Level 1 [within]:
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## phone =~
## tiktok 0.850 0.024 35.883 0.000 0.850 0.732
## insta 0.742 0.019 38.312 0.000 0.742 0.782
## candy 0.963 0.028 34.890 0.000 0.963 0.712
## laptop =~
## netflix 0.928 0.021 43.243 0.000 0.928 0.783
## youtube 0.844 0.020 42.433 0.000 0.844 0.771
## email 0.873 0.016 55.361 0.000 0.873 0.940
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## phone ~~
## laptop -0.008 0.024 -0.353 0.724 -0.008 -0.008
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .tiktok 0.000 0.000 0.000
## .insta 0.000 0.000 0.000
## .candy 0.000 0.000 0.000
## .netflix 0.000 0.000 0.000
## .youtube 0.000 0.000 0.000
## .email 0.000 0.000 0.000
## phone 0.000 0.000 0.000
## laptop 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## phone 1.000 1.000 1.000
## laptop 1.000 1.000 1.000
## .tiktok 0.624 0.028 22.629 0.000 0.624 0.464
## .insta 0.350 0.019 18.550 0.000 0.350 0.388
## .candy 0.902 0.037 24.125 0.000 0.902 0.493
## .netflix 0.545 0.021 26.141 0.000 0.545 0.388
## .youtube 0.486 0.018 26.965 0.000 0.486 0.406
## .email 0.101 0.012 8.097 0.000 0.101 0.117
##
##
## Level 2 [id]:
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## tech =~
## tiktok 0.923 0.118 7.849 0.000 0.923 0.717
## insta 0.928 0.093 9.982 0.000 0.928 0.843
## candy 0.822 0.125 6.580 0.000 0.822 0.628
## netflix 0.765 0.088 8.671 0.000 0.765 0.778
## youtube 0.925 0.108 8.594 0.000 0.925 0.765
## email 0.941 0.079 11.925 0.000 0.941 0.943
##
## Intercepts:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .tiktok 0.355 0.131 2.715 0.007 0.355 0.276
## .insta 0.368 0.112 3.292 0.001 0.368 0.334
## .candy 0.456 0.134 3.408 0.001 0.456 0.348
## .netflix 0.284 0.101 2.805 0.005 0.284 0.288
## .youtube 0.353 0.123 2.877 0.004 0.353 0.292
## .email 0.394 0.102 3.884 0.000 0.394 0.395
## tech 0.000 0.000 0.000
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## tech 1.000 1.000 1.000
## .tiktok 0.804 0.132 6.076 0.000 0.804 0.485
## .insta 0.350 0.067 5.228 0.000 0.350 0.289
## .candy 1.040 0.165 6.319 0.000 1.040 0.606
## .netflix 0.382 0.066 5.774 0.000 0.382 0.395
## .youtube 0.605 0.101 5.996 0.000 0.605 0.414
## .email 0.111 0.039 2.811 0.005 0.111 0.111
parameterestimates(fit, standardized=TRUE)
## lhs op rhs block level est se z pvalue ci.lower ci.upper
## 1 phone =~ tiktok 1 1 0.850 0.024 35.883 0.000 0.803 0.896
## 2 phone =~ insta 1 1 0.742 0.019 38.312 0.000 0.704 0.780
## 3 phone =~ candy 1 1 0.963 0.028 34.890 0.000 0.909 1.017
## 4 laptop =~ netflix 1 1 0.928 0.021 43.243 0.000 0.886 0.970
## 5 laptop =~ youtube 1 1 0.844 0.020 42.433 0.000 0.805 0.883
## 6 laptop =~ email 1 1 0.873 0.016 55.361 0.000 0.842 0.904
## 7 phone ~~ laptop 1 1 -0.008 0.024 -0.353 0.724 -0.056 0.039
## 8 phone ~~ phone 1 1 1.000 0.000 NA NA 1.000 1.000
## 9 laptop ~~ laptop 1 1 1.000 0.000 NA NA 1.000 1.000
## 10 tiktok ~~ tiktok 1 1 0.624 0.028 22.629 0.000 0.570 0.678
## 11 insta ~~ insta 1 1 0.350 0.019 18.550 0.000 0.313 0.387
## 12 candy ~~ candy 1 1 0.902 0.037 24.125 0.000 0.829 0.975
## 13 netflix ~~ netflix 1 1 0.545 0.021 26.141 0.000 0.504 0.586
## 14 youtube ~~ youtube 1 1 0.486 0.018 26.965 0.000 0.451 0.521
## 15 email ~~ email 1 1 0.101 0.012 8.097 0.000 0.076 0.125
## 16 tiktok ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 17 insta ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 18 candy ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 19 netflix ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 20 youtube ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 21 email ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 22 phone ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 23 laptop ~1 1 1 0.000 0.000 NA NA 0.000 0.000
## 24 tech =~ tiktok 2 2 0.923 0.118 7.849 0.000 0.693 1.154
## 25 tech =~ insta 2 2 0.928 0.093 9.982 0.000 0.746 1.110
## 26 tech =~ candy 2 2 0.822 0.125 6.580 0.000 0.577 1.067
## 27 tech =~ netflix 2 2 0.765 0.088 8.671 0.000 0.592 0.938
## 28 tech =~ youtube 2 2 0.925 0.108 8.594 0.000 0.714 1.136
## 29 tech =~ email 2 2 0.941 0.079 11.925 0.000 0.787 1.096
## 30 tech ~~ tech 2 2 1.000 0.000 NA NA 1.000 1.000
## 31 tiktok ~~ tiktok 2 2 0.804 0.132 6.076 0.000 0.544 1.063
## 32 insta ~~ insta 2 2 0.350 0.067 5.228 0.000 0.219 0.482
## 33 candy ~~ candy 2 2 1.040 0.165 6.319 0.000 0.717 1.363
## 34 netflix ~~ netflix 2 2 0.382 0.066 5.774 0.000 0.252 0.511
## 35 youtube ~~ youtube 2 2 0.605 0.101 5.996 0.000 0.407 0.803
## 36 email ~~ email 2 2 0.111 0.039 2.811 0.005 0.033 0.188
## 37 tiktok ~1 2 2 0.355 0.131 2.715 0.007 0.099 0.611
## 38 insta ~1 2 2 0.368 0.112 3.292 0.001 0.149 0.587
## 39 candy ~1 2 2 0.456 0.134 3.408 0.001 0.194 0.718
## 40 netflix ~1 2 2 0.284 0.101 2.805 0.005 0.085 0.482
## 41 youtube ~1 2 2 0.353 0.123 2.877 0.004 0.113 0.594
## 42 email ~1 2 2 0.394 0.102 3.884 0.000 0.195 0.593
## 43 tech ~1 2 2 0.000 0.000 NA NA 0.000 0.000
## std.lv std.all std.nox
## 1 0.850 0.732 0.732
## 2 0.742 0.782 0.782
## 3 0.963 0.712 0.712
## 4 0.928 0.783 0.783
## 5 0.844 0.771 0.771
## 6 0.873 0.940 0.940
## 7 -0.008 -0.008 -0.008
## 8 1.000 1.000 1.000
## 9 1.000 1.000 1.000
## 10 0.624 0.464 0.464
## 11 0.350 0.388 0.388
## 12 0.902 0.493 0.493
## 13 0.545 0.388 0.388
## 14 0.486 0.406 0.406
## 15 0.101 0.117 0.117
## 16 0.000 0.000 0.000
## 17 0.000 0.000 0.000
## 18 0.000 0.000 0.000
## 19 0.000 0.000 0.000
## 20 0.000 0.000 0.000
## 21 0.000 0.000 0.000
## 22 0.000 0.000 0.000
## 23 0.000 0.000 0.000
## 24 0.923 0.717 0.717
## 25 0.928 0.843 0.843
## 26 0.822 0.628 0.628
## 27 0.765 0.778 0.778
## 28 0.925 0.765 0.765
## 29 0.941 0.943 0.943
## 30 1.000 1.000 1.000
## 31 0.804 0.485 0.485
## 32 0.350 0.289 0.289
## 33 1.040 0.606 0.606
## 34 0.382 0.395 0.395
## 35 0.605 0.414 0.414
## 36 0.111 0.111 0.111
## 37 0.355 0.276 0.276
## 38 0.368 0.334 0.334
## 39 0.456 0.348 0.348
## 40 0.284 0.288 0.288
## 41 0.353 0.292 0.292
## 42 0.394 0.395 0.395
## 43 0.000 0.000 0.000
standardizedSolution(fit)
## lhs op rhs est.std se z pvalue ci.lower ci.upper
## 1 phone =~ tiktok 0.732 0.014 51.595 0.000 0.705 0.760
## 2 phone =~ insta 0.782 0.014 56.636 0.000 0.755 0.809
## 3 phone =~ candy 0.712 0.014 49.480 0.000 0.684 0.740
## 4 laptop =~ netflix 0.783 0.010 77.771 0.000 0.763 0.802
## 5 laptop =~ youtube 0.771 0.010 74.860 0.000 0.751 0.791
## 6 laptop =~ email 0.940 0.008 119.853 0.000 0.924 0.955
## 7 phone ~~ laptop -0.008 0.024 -0.353 0.724 -0.056 0.039
## 8 phone ~~ phone 1.000 0.000 NA NA 1.000 1.000
## 9 laptop ~~ laptop 1.000 0.000 NA NA 1.000 1.000
## 10 tiktok ~~ tiktok 0.464 0.021 22.291 0.000 0.423 0.504
## 11 insta ~~ insta 0.388 0.022 17.987 0.000 0.346 0.431
## 12 candy ~~ candy 0.493 0.020 24.049 0.000 0.453 0.533
## 13 netflix ~~ netflix 0.388 0.016 24.620 0.000 0.357 0.419
## 14 youtube ~~ youtube 0.406 0.016 25.550 0.000 0.375 0.437
## 15 email ~~ email 0.117 0.015 7.933 0.000 0.088 0.146
## 16 tiktok ~1 0.000 0.000 NA NA 0.000 0.000
## 17 insta ~1 0.000 0.000 NA NA 0.000 0.000
## 18 candy ~1 0.000 0.000 NA NA 0.000 0.000
## 19 netflix ~1 0.000 0.000 NA NA 0.000 0.000
## 20 youtube ~1 0.000 0.000 NA NA 0.000 0.000
## 21 email ~1 0.000 0.000 NA NA 0.000 0.000
## 22 phone ~1 0.000 0.000 NA NA 0.000 0.000
## 23 laptop ~1 0.000 0.000 NA NA 0.000 0.000
## 24 tech =~ tiktok 0.717 0.055 12.960 0.000 0.609 0.826
## 25 tech =~ insta 0.843 0.036 23.419 0.000 0.772 0.914
## 26 tech =~ candy 0.628 0.067 9.332 0.000 0.496 0.759
## 27 tech =~ netflix 0.778 0.046 16.814 0.000 0.687 0.869
## 28 tech =~ youtube 0.765 0.048 16.056 0.000 0.672 0.859
## 29 tech =~ email 0.943 0.022 42.637 0.000 0.900 0.986
## 30 tech ~~ tech 1.000 0.000 NA NA 1.000 1.000
## 31 tiktok ~~ tiktok 0.485 0.079 6.109 0.000 0.330 0.641
## 32 insta ~~ insta 0.289 0.061 4.766 0.000 0.170 0.408
## 33 candy ~~ candy 0.606 0.084 7.179 0.000 0.441 0.772
## 34 netflix ~~ netflix 0.395 0.072 5.486 0.000 0.254 0.536
## 35 youtube ~~ youtube 0.414 0.073 5.679 0.000 0.271 0.557
## 36 email ~~ email 0.111 0.042 2.660 0.008 0.029 0.193
## 37 tiktok ~1 0.276 0.104 2.663 0.008 0.073 0.479
## 38 insta ~1 0.334 0.104 3.202 0.001 0.130 0.539
## 39 candy ~1 0.348 0.105 3.306 0.001 0.142 0.554
## 40 netflix ~1 0.288 0.105 2.745 0.006 0.082 0.495
## 41 youtube ~1 0.292 0.104 2.816 0.005 0.089 0.496
## 42 email ~1 0.395 0.106 3.736 0.000 0.188 0.602
## 43 tech ~1 0.000 0.000 NA NA 0.000 0.000
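As an example of that kind of manipulation, here is one way to pull out just the between-person loadings together with their standardized values (a sketch using the tidyverse functions loaded earlier):
parameterestimates(fit, standardized = TRUE) %>%
  filter(op == "=~", level == 2) %>%
  select(lhs, rhs, est, se, pvalue, std.all)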
Don’t forget to include version info in any published markdown so people can reproduce your work!
sessionInfo()
## R version 4.2.2 (2022-10-31)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur ... 10.16
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] lavaan_0.6-15 tidylog_1.0.2 lubridate_1.9.2 forcats_1.0.0
## [5] stringr_1.5.0 dplyr_1.1.3 purrr_1.0.2 readr_2.1.4
## [9] tidyr_1.3.0 tibble_3.2.1 ggplot2_3.4.1 tidyverse_2.0.0
##
## loaded via a namespace (and not attached):
## [1] clisymbols_1.2.0 tidyselect_1.2.0 xfun_0.37 bslib_0.4.2
## [5] colorspace_2.1-0 vctrs_0.6.3 generics_0.1.3 htmltools_0.5.5
## [9] stats4_4.2.2 yaml_2.3.7 utf8_1.2.3 rlang_1.1.1
## [13] jquerylib_0.1.4 pillar_1.9.0 glue_1.6.2 withr_2.5.0
## [17] lifecycle_1.0.3 munsell_0.5.0 gtable_0.3.3 evaluate_0.20
## [21] knitr_1.42 tzdb_0.4.0 fastmap_1.1.1 parallel_4.2.2
## [25] fansi_1.0.4 scales_1.2.1 cachem_1.0.7 jsonlite_1.8.7
## [29] mnormt_2.1.1 hms_1.1.3 digest_0.6.31 stringi_1.7.12
## [33] grid_4.2.2 quadprog_1.5-8 cli_3.6.1 tools_4.2.2
## [37] magrittr_2.0.3 sass_0.4.5 crayon_1.5.2 pbivnorm_0.6.0
## [41] pkgconfig_2.0.3 MASS_7.3-58.3 timechange_0.2.0 rmarkdown_2.20
## [45] rstudioapi_0.14 R6_2.5.1 compiler_4.2.2