The file “HW6amod.csv” contains continuous propensity response data for 1000 individuals for two constructs.
1. Perform a confirmatory factor analysis of the two independent clusters, allowing the latent variables to correlate.
Run the model:

library(lavaan)
data.p1 <- read.csv("HW6amod.csv")

CFA.model <- 'F1 =~ V1 + V2 + V3 + V4 + V5
              F2 =~ V6 + V7 + V8 + V9 + V10'
cfa.q1 <- cfa(CFA.model, data = data.p1, std.lv = TRUE)

Extract the results:
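A sketch of the likely call (the output below matches lavaan's summary() with fit measures, standardized estimates, and R-square requested):

summary(cfa.q1, fit.measures = TRUE, standardized = TRUE, rsquare = TRUE)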
## lavaan 0.6-3 ended normally after 21 iterations
##
## Optimization method NLMINB
## Number of free parameters 21
##
## Number of observations 1000
##
## Estimator ML
## Model Fit Test Statistic 97.619
## Degrees of freedom 34
## P-value (Chi-square) 0.000
##
## Model test baseline model:
##
## Minimum Function Test Statistic 6075.203
## Degrees of freedom 45
## P-value 0.000
##
## User model versus baseline model:
##
## Comparative Fit Index (CFI) 0.989
## Tucker-Lewis Index (TLI) 0.986
##
## Loglikelihood and Information Criteria:
##
## Loglikelihood user model (H0) -11807.027
## Loglikelihood unrestricted model (H1) -11758.218
##
## Number of free parameters 21
## Akaike (AIC) 23656.054
## Bayesian (BIC) 23759.117
## Sample-size adjusted Bayesian (BIC) 23692.420
##
## Root Mean Square Error of Approximation:
##
## RMSEA 0.043
## 90 Percent Confidence Interval 0.033 0.053
## P-value RMSEA <= 0.05 0.857
##
## Standardized Root Mean Square Residual:
##
## SRMR 0.026
##
## Parameter Estimates:
##
## Information Expected
## Information saturated (h1) model Structured
## Standard Errors Standard
##
## Latent Variables:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F1 =~
## V1 0.872 0.029 30.440 0.000 0.872 0.815
## V2 0.845 0.028 30.336 0.000 0.845 0.813
## V3 0.804 0.030 26.722 0.000 0.804 0.745
## V4 0.701 0.029 23.798 0.000 0.701 0.685
## V5 0.887 0.027 32.888 0.000 0.887 0.857
## F2 =~
## V6 0.856 0.028 30.423 0.000 0.856 0.814
## V7 0.899 0.030 30.369 0.000 0.899 0.814
## V8 0.838 0.030 28.292 0.000 0.838 0.775
## V9 0.709 0.031 22.526 0.000 0.709 0.656
## V10 0.914 0.028 32.996 0.000 0.914 0.858
##
## Covariances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## F1 ~~
## F2 0.760 0.017 44.149 0.000 0.760 0.760
##
## Variances:
## Estimate Std.Err z-value P(>|z|) Std.lv Std.all
## .V1 0.384 0.022 17.770 0.000 0.384 0.335
## .V2 0.366 0.021 17.833 0.000 0.366 0.338
## .V3 0.517 0.027 19.489 0.000 0.517 0.444
## .V4 0.556 0.027 20.344 0.000 0.556 0.531
## .V5 0.284 0.018 15.938 0.000 0.284 0.265
## .V6 0.372 0.021 17.865 0.000 0.372 0.337
## .V7 0.413 0.023 17.897 0.000 0.413 0.338
## .V8 0.466 0.025 18.938 0.000 0.466 0.399
## .V9 0.664 0.032 20.663 0.000 0.664 0.569
## .V10 0.299 0.019 15.960 0.000 0.299 0.263
## F1 1.000 1.000 1.000
## F2 1.000 1.000 1.000
##
## R-Square:
## Estimate
## V1 0.665
## V2 0.662
## V3 0.556
## V4 0.469
## V5 0.735
## V6 0.663
## V7 0.662
## V8 0.601
## V9 0.431
## V10 0.737
Report the Chi-square value, df, its significance, the CFI, and the RMSEA.

The model yields a Chi-square of 97.619 on 34 df, p < .001, so the exact-fit test is significant; however, CFI = .989 and RMSEA = .043 (90% CI [.033, .053]) are both well within conventional cutoffs for good approximate fit.
2. Report the factor loading and item uniqueness information obtained from the CFA.
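Both matrices below can be extracted from the fitted model; a minimal sketch using lavaan's lavInspect(), whose "std" output holds the standardized loadings ($lambda) and residual (co)variances ($theta):

std <- lavInspect(cfa.q1, what = "std")
round(std$lambda, 3)  # standardized factor loadings
round(std$theta, 3)   # standardized residual variances (item uniquenesses)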
Loadings (standardized)
## F1 F2
## V1 0.815 0.000
## V2 0.813 0.000
## V3 0.745 0.000
## V4 0.685 0.000
## V5 0.857 0.000
## V6 0.000 0.814
## V7 0.000 0.814
## V8 0.000 0.775
## V9 0.000 0.656
## V10 0.000 0.858
Item uniqueness (error variance of the indicators)
## V1 V2 V3 V4 V5 V6 V7 V8 V9 V10
## V1 0.335
## V2 0.000 0.338
## V3 0.000 0.000 0.444
## V4 0.000 0.000 0.000 0.531
## V5 0.000 0.000 0.000 0.000 0.265
## V6 0.000 0.000 0.000 0.000 0.000 0.337
## V7 0.000 0.000 0.000 0.000 0.000 0.000 0.338
## V8 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.399
## V9 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.569
## V10 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.263
3. Calculate and report McDonald’s Omega for each factor. Also determine the internal consistency of the items for each factor. How do these alpha coefficients compare to the Omega values?
The package ‘semTools’ works with lavaan and is handy for a variety of functions, including extracting McDonald’s Omega; simply use the ‘reliability()’ command to do so.
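A minimal sketch of the call, passing the fitted lavaan object from Question 1:

library(semTools)
reliability(cfa.q1)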
## F1 F2 total
## alpha 0.8874650 0.8870474 0.9193022
## omega 0.8890688 0.8892868 0.9338467
## omega2 0.8890688 0.8892868 0.9338467
## omega3 0.8894135 0.8907433 0.9315403
## avevar 0.6173671 0.6180819 0.6177336
##
## | Internal Consistency | F1   | F2   |
## |----------------------|:----:|:----:|
## | Alpha                | .887 | .887 |
## | Omega                | .889 | .889 |

The alpha coefficients (.887 for both factors) are nearly identical to, and slightly lower than, the omega values (.889). This is expected: alpha and omega converge when the items load on their factor with similar magnitudes, as they do here.
4. Construct and present a factorial validity table of convergent and discriminant validity coefficients using McDonald’s Omega. Do these validities appear appropriately high or low?
# factor 1 (square root of omega)
sqrt(0.8890688)
## [1] 0.9429044
# factor 2 (square root of omega)
sqrt(0.8892868)
## [1] 0.94302

Now take the above values and multiply them by the correlation of the factors.
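Using the factor correlation of .760 from the CFA output, the discriminant validity coefficients are the square roots of omega scaled by that correlation (a quick sketch):

# discriminant validity: sqrt(omega) multiplied by the factor correlation
sqrt(0.8890688) * 0.760  # F1: ~0.717
sqrt(0.8892868) * 0.760  # F2: ~0.717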
When the numbers are rounded, you get essentially the same convergent and discriminant validity values for both F1 and F2. The convergent validities (.943) are appropriately high, and the discriminant validities (.717) are lower, as they should be, although the sizable factor correlation (.760) keeps them from being truly low.
##
## |                   | F1    | F2    |
## |-------------------|:-----:|:-----:|
## | Y1 (convergent)   | 0.943 | 0.943 |
## | Y2 (discriminant) | 0.717 | 0.717 |
5. Calculate and present the item information. For each factor, which item is the most informative?
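One common way to quantify item information in a linear factor model is the ratio of the squared standardized loading to the item's uniqueness, lambda^2 / theta (a sketch, reusing the fitted cfa.q1 object from Question 1):

# item information approximated as lambda^2 / theta
std  <- lavInspect(cfa.q1, what = "std")
info <- std$lambda^2 / diag(std$theta)
round(info, 2)
# name of the most informative item on each factor
apply(info, 2, function(x) rownames(info)[which.max(x)])

By this measure, V5 is the most informative item for F1 and V10 for F2, consistent with their having the largest loadings and smallest uniquenesses.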
Three different traits (anxiety, depression, aggression) were measured by three different methods (Questionnaire, Interview, Observation).
1. Create a table like the one shown in the lab slide.
2. Is there evidence of convergent validity? Explain.
Trait factor loadings that are large and statistically significant indicate good convergent validity.
3. Is there evidence of discriminant validity? Explain.
Small correlations among the different trait factors indicate good discriminant validity.
4. Is there evidence of “method effects”? Explain.
To investigate whether common method effects are present, we want to examine the residual correlations: if measures that share a method remain correlated after the trait factors are accounted for, a method effect is present.
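In lavaan, residual correlations can be pulled from a fitted model; a minimal sketch, where fit.traits is a hypothetical name for a fitted trait-only CFA of the nine measures (that model is not shown here):

# correlations left over after the trait factors are removed;
# 'fit.traits' is a placeholder for the fitted trait-only model
lavResiduals(fit.traits, type = "cor")$cov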
The ‘MTMM’ command requires the columns in the data to be ordered in sets such that the first set contains the first trait as rated by each method, followed by the second trait rated by each method, and so on.
# group items by trait
methQanx <- data.p2[1]
methIanx <- data.p2[4]
methOanx <- data.p2[7]
methQdep <- data.p2[2]
methIdep <- data.p2[5]
methOdep <- data.p2[8]
methQagg <- data.p2[3]
methIagg <- data.p2[6]
methOagg <- data.p2[9]
# Merge the subsets back into one data frame using the 'cbind.data.frame' command
mtmm.revise <- cbind.data.frame(methQanx, methIanx, methOanx, methQdep,
                                methIdep, methOdep, methQagg, methIagg, methOagg)

Arguments for ‘MTMM’:
## SameTrait SameMethod DiffDiff
## Results 0.4582781 0.2748662 0.2750668
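For reference, these three averages can be reproduced directly from the correlation matrix; a minimal sketch, under the assumption that the ‘MTMM’ command averages the same-trait/different-method, different-trait/same-method, and different-trait/different-method correlations:

R <- cor(mtmm.revise)
trait  <- rep(c("anx", "dep", "agg"), each = 3)  # column order after reordering
method <- rep(c("Q", "I", "O"), times = 3)
sameT  <- outer(trait, trait, "==")
sameM  <- outer(method, method, "==")
c(SameTrait  = mean(R[sameT & !sameM]),   # same trait, different method (convergent)
  SameMethod = mean(R[!sameT & sameM]),   # different trait, same method
  DiffDiff   = mean(R[!sameT & !sameM]))  # different trait, different method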
Interpreting the output from the MTMM command, our results suggest the following:
Results from both McDonald’s Correlated Uniqueness approach and the Multi-Trait Multi-Method (MTMM) command yielded similar conclusions, though more information can be extracted with McDonald’s method. Overall, the model shows good convergent validity and fair discriminant validity, although the MTMM command indicated better discriminant validity than McDonald’s method did. The latter suggests that anxiety, depression, and aggression are still fairly highly correlated, and ideally we would want a smaller association among them. However, the method bias was small in both analyses, suggesting that only a small portion of the variance is an artifact of the measurement method used.