# Start Using the `neg.semfit()` Function

*From the [`negligible`](https://cran.r-project.org/web/packages/negligible/index.html) R package, by Nataly Beribisky and Rob Cribbie*


## **Introduction**

### **What is the purpose/goal of `neg.semfit()`?**

The `neg.semfit()` function performs three equivalence tests, one for each of the fit indices RMSEA, CFI, and SRMR. It can be used in place of calling `neg.rmsea()`, `neg.cfi()`, and `neg.srmr()` separately.
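For instance (a sketch only: `fit1` is a fitted lavaan object as created in Example 1 below, and we assume each individual function accepts the fitted model through a `mod` argument, analogous to `neg.semfit()`):

```r
# One combined call...
neg.semfit(mod = fit1)

# ...in place of three separate calls (the 'mod' argument name is assumed
# to be analogous to neg.semfit(); see each function's help page for its
# full argument list and defaults).
neg.rmsea(mod = fit1)
neg.cfi(mod = fit1)
neg.srmr(mod = fit1)
```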

### **What is the theory behind `neg.semfit()`?**

For each of the three equivalence tests, the function compares one bound of the confidence interval for the fit index to one bound of an equivalence interval (also known as an equivalence bound).

For RMSEA and SRMR (where lower values indicate better fit), the upper bound of the confidence interval is compared to the equivalence bound.

For CFI (where higher values indicate better fit), the lower bound of the confidence interval is compared to the equivalence bound.
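Expressed in code, each test reduces to a one-sided comparison. Below is a minimal sketch of the decision rule (illustrative values taken from Example 1 below; this is not package code):

```r
# "Lower is better" indices (RMSEA, SRMR): reject H0 (unsatisfactory fit)
# when the upper bound of the CI falls below the equivalence bound.
rmsea.ci.upper <- 0.114  # upper end of the 90% CI for RMSEA (Example 1)
rmsea.eq.bound <- 0.05   # default equivalence bound for RMSEA
rmsea.ci.upper < rmsea.eq.bound  # FALSE -> fail to reject H0

# "Higher is better" index (CFI): reject H0 when the lower bound of the
# CI exceeds the equivalence bound.
cfi.ci.lower <- 0.899    # lower end of the 90% CI for CFI (Example 1)
cfi.eq.bound <- 0.95     # default equivalence bound for CFI
cfi.ci.lower > cfi.eq.bound      # FALSE -> fail to reject H0
```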

For more information on the theory behind these tests and on choices of equivalence bounds, see:

Beribisky, N., & Cribbie, R. A. (2023). Evaluating the performance of existing and novel equivalence tests for fit indices in structural equation modelling. British Journal of Mathematical and Statistical Psychology, 77(1), 103–129. https://doi.org/10.1111/bmsp.12317

MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130–149. https://doi.org/10.1037/1082-989X.1.2.130

Maydeu-Olivares, A. (2017). Assessing the size of model misfit in structural equation models. Psychometrika, 82(3), 533–558. https://doi.org/10.1007/s11336-016-9552-7

Shi, D., Maydeu-Olivares, A., & DiStefano, C. (2018). The relationship between the standardized root mean square residual and model misspecification in factor analysis models. Multivariate Behavioral Research, 53(5), 676–694. https://doi.org/10.1080/00273171.2018.1476221

Yuan, K. H., Chan, W., Marcoulides, G. A., & Bentler, P. M. (2016). Assessing structural equation models by equivalence testing with adjusted fit indexes. Structural Equation Modeling: A Multidisciplinary Journal, 23(3), 319–330. https://doi.org/10.1080/10705511.2015.1065414

#### *Null and Alternate Hypotheses of the Procedure*

RMSEA: $H_{0}: RMSEA_{pop} \ge MMES$ | $H_{1}: RMSEA_{pop} < MMES$

SRMR: $H_{0}: SRMR_{pop} \ge MMES$ | $H_{1}: SRMR_{pop} < MMES$

CFI: $H_{0}: CFI_{pop} \le MMES$ | $H_{1}: CFI_{pop} > MMES$

Note that the MMES is the minimally meaningful effect size, as approximated by the equivalence bound.

### **Using `neg.semfit()`**

Now let's use the function. Its only required argument is a fitted model object from lavaan.

#### *Required arguments (no default)*

*mod*: the fitted model object (from lavaan)

#### *Optional arguments (each has a default)*

*alpha*: the alpha level for the equivalence tests (default is .05)

*round*: the number of digits to which the equivalence bound and the confidence interval bounds are rounded (default is 3)

*rmsea.eq.bound*: the upper bound of the equivalence interval for RMSEA. Note that if rmsea.modif.eq.bound = TRUE, this value must be one of .01, .05, .08, or .10 (default is .05)

*rmsea.modif.eq.bound*: should the upper bound of the equivalence interval for RMSEA be modified? (default is FALSE)

*rmsea.ci.method*: method used to calculate the confidence interval for RMSEA; options are "not.close" or "yhy.boot"; "not.close" corresponds to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default is "not.close")

*rmsea.nboot*: number of bootstrap samples if "yhy.boot" is selected as rmsea.ci.method (default is 250L)

*cfi.eq.bound*: the lower bound of the equivalence interval for CFI. Note that if cfi.modif.eq.bound = TRUE, this value must be one of .99, .95, .92, or .90 (default is .95)

*cfi.modif.eq.bound*: should the lower bound of the equivalence interval for CFI be modified? (default is FALSE)

*cfi.ci.method*: method used to calculate the confidence interval for CFI; options are "yuan", "equiv", or "yhy.boot"; "yuan" corresponds to a (1-alpha) percent CI, "equiv" to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default is "yhy.boot")

*cfi.nboot*: number of bootstrap samples if "yhy.boot" is selected as cfi.ci.method (default is 250L)

*srmr.eq.bound*: the upper bound of the equivalence interval for SRMR. Note that if srmr.modif.eq.bound = TRUE, this value must be one of .05 or .10 (default is .08)

*srmr.modif.eq.bound*: should the upper bound of the equivalence interval for SRMR be modified? (default is FALSE)

*srmr.ci.method*: method used to calculate the confidence interval for SRMR; options are "MO" or "yhy.boot"; "MO" corresponds to a (1-2alpha) percent CI, "yhy.boot" to a (1-2alpha) percent bootstrap CI (default is "MO")

*usrmr*: which SRMR variant the equivalence test is structured around. When usrmr = TRUE, the unbiased SRMR (uSRMR) from Maydeu-Olivares (2017) is used; otherwise, the srmr from lavaan's fitmeasures() output is used (default is TRUE)

*srmr.nboot*: number of bootstrap samples if "yhy.boot" is selected as srmr.ci.method (default is 250L)
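To make the calling pattern concrete, here is one illustrative way of combining several optional arguments (the bound and bootstrap values are arbitrary choices for illustration, not recommendations; `fit1` is the fitted lavaan object created in Example 1 below):

```r
# Illustrative call: arbitrary bounds and bootstrap settings.
neg.semfit(mod = fit1,
           alpha = .05,                   # nominal Type I error rate
           rmsea.eq.bound = .08,          # RMSEA equivalence bound
           rmsea.ci.method = "yhy.boot",  # bootstrap CI for RMSEA
           rmsea.nboot = 500L,            # bootstrap samples for RMSEA CI
           cfi.eq.bound = .90,            # CFI equivalence bound
           srmr.eq.bound = .05,           # SRMR equivalence bound
           usrmr = TRUE)                  # test the unbiased SRMR
```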

## **Examples**

### **Example 1**

First we need to create a fitted model object using lavaan. Let's use the Holzinger and Swineford (1939) dataset that ships with the lavaan package.

```r
library(negligible)
library(lavaan)

d <- lavaan::HolzingerSwineford1939
hs.mod <- 'visual  =~ x1 + x2 + x3
           textual =~ x4 + x5 + x6
           speed   =~ x7 + x8 + x9'
fit1 <- lavaan::cfa(hs.mod, data = d)
```

Now we can use the function. Let's first just go with the defaults.

```r
neg.semfit(mod = fit1)
```

```
** Equivalence/Negligible Effect Tests for Evaluating Model Fit **


* RMSEA-Based Test: *

---- EBF-RMSEA: Equivalence Based Fit Test for RMSEA; the Not-Close Fit Test for RMSEA by MacCallum et al. (1996) ----

RMSEA index: 0.09212148
*************************************
Confidence Interval Method Selected: not.close
Upper end of 90% CI for RMSEA: 0.114
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.05
*************************************
Test Decision (comparing confidence interval to equivalence bound):
 FAIL TO REJECT HO: We fail to find evidence to reject the hypothesis of not-close fit.



* CFI-Based Test: *
---- EBFB-CFI: Equivalence Based Fit Test for CFI using YHY Bootstrap for CI ----

CFI index: 0.9305597
*************************************
Confidence Interval Method Selected: yhy.boot
Lower end of 90% CI for CFI: 0.899
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.95
*************************************
Test Decision (comparing confidence interval to equivalence bound): FAIL TO REJECT HO: We fail to reject the hypothesis that the specified model is not substantially better fitting than the baseline model.



* SRMR-Based Test: *
---- Equivalence Based Fit Test for Unbiased SRMR ----

uSRMR index: 0.05800319
*************************************
Confidence Interval Method Selected: MO
Upper bound of 90% CI for SRMR: 0.074
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.08
*************************************
Test Decision (comparing confidence interval to equivalence bound):
 REJECT HO: The null hypothesis that the population SRMR exceeds the equivalence bound can be rejected. There is evidence to support satisfactory fit, given the value of the equivalence bound.
```

### **Example 2**

Of course, we don't have to rely on the defaults; the equivalence bounds should be informed by the smallest misspecification a researcher would deem important. Let's change some of them.

```r
neg.semfit(mod = fit1,
           rmsea.eq.bound = .10,
           cfi.eq.bound = .92,
           srmr.eq.bound = .10)
```

```
** Equivalence/Negligible Effect Tests for Evaluating Model Fit **


* RMSEA-Based Test: *

---- EBF-RMSEA: Equivalence Based Fit Test for RMSEA; the Not-Close Fit Test for RMSEA by MacCallum et al. (1996) ----

RMSEA index: 0.09212148
*************************************
Confidence Interval Method Selected: not.close
Upper end of 90% CI for RMSEA: 0.114
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.1
*************************************
Test Decision (comparing confidence interval to equivalence bound):
 FAIL TO REJECT HO: We fail to find evidence to reject the hypothesis of not-close fit.



* CFI-Based Test: *
---- EBFB-CFI: Equivalence Based Fit Test for CFI using YHY Bootstrap for CI ----

CFI index: 0.9305597
*************************************
Confidence Interval Method Selected: yhy.boot
Lower end of 90% CI for CFI: 0.901
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.92
*************************************
Test Decision (comparing confidence interval to equivalence bound): FAIL TO REJECT HO: We fail to reject the hypothesis that the specified model is not substantially better fitting than the baseline model.



* SRMR-Based Test: *
---- Equivalence Based Fit Test for Unbiased SRMR ----

uSRMR index: 0.05800319
*************************************
Confidence Interval Method Selected: MO
Upper bound of 90% CI for SRMR: 0.074
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.1
*************************************
Test Decision (comparing confidence interval to equivalence bound):
 REJECT HO: The null hypothesis that the population SRMR exceeds the equivalence bound can be rejected. There is evidence to support satisfactory fit, given the value of the equivalence bound.
```

### **Example 3**

We can also use modified equivalence bounds. For instance, to modify the bound according to the adapted cutoff formula provided by Shi et al. (2018), we set srmr.modif.eq.bound = TRUE. Note that we can only do this because our equivalence bound for SRMR is .10 (it must be either .05 or .10 for this modification to apply).

```r
neg.semfit(mod = fit1,
           rmsea.eq.bound = .10,
           cfi.eq.bound = .92,
           srmr.eq.bound = .10,
           srmr.modif.eq.bound = TRUE)
```

```
** Equivalence/Negligible Effect Tests for Evaluating Model Fit **


* RMSEA-Based Test: *

---- EBF-RMSEA: Equivalence Based Fit Test for RMSEA; the Not-Close Fit Test for RMSEA by MacCallum et al. (1996) ----

RMSEA index: 0.09212148
*************************************
Confidence Interval Method Selected: not.close
Upper end of 90% CI for RMSEA: 0.114
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.1
*************************************
Test Decision (comparing confidence interval to equivalence bound):
 FAIL TO REJECT HO: We fail to find evidence to reject the hypothesis of not-close fit.



* CFI-Based Test: *
---- EBFB-CFI: Equivalence Based Fit Test for CFI using YHY Bootstrap for CI ----

CFI index: 0.9305597
*************************************
Confidence Interval Method Selected: yhy.boot
Lower end of 90% CI for CFI: 0.898
*************************************
Modified Equivalence Bound: no
Equivalence Bound: 0.92
*************************************
Test Decision (comparing confidence interval to equivalence bound): FAIL TO REJECT HO: We fail to reject the hypothesis that the specified model is not substantially better fitting than the baseline model.



* SRMR-Based Test: *
---- Equivalence Based Fit Test for Unbiased SRMR and Modified Equivalence Interval ----

uSRMR index: 0.05800319
*************************************
Confidence Interval Method Selected: MO
Upper bound of 90% CI for SRMR: 0.074
*************************************
Modified Equivalence Bound: yes
Equivalence Bound: 0.051
*************************************
Test Decision (comparing confidence interval to equivalence bound):
 FAIL TO REJECT HO: The null hypothesis that the population SRMR exceeds the equivalence bound cannot be rejected.
```

## **Extractable Elements**

A number of elements of the output can be extracted. Extraction follows the named object containing the results: if the results of neg.semfit() are saved to an object named x, then x$ followed by any of the names below returns the corresponding element.

*rmsea.res$rmsea_index*: The RMSEA value

*rmsea.res$ci.method*: The confidence interval method selected for computing the upper bound of the confidence interval for the RMSEA

*rmsea.res$alpha*: Nominal Type I error rate (this value is the same for all RMSEA/CFI/SRMR tests)

*rmsea.res$rmsea_eq*: The value of the upper bound of the confidence interval for RMSEA

*rmsea.res$modif.eq.bound*: Whether a modified equivalence bound was used for the RMSEA equivalence test

*rmsea.res$eq.bound*: The value of the equivalence bound used for the RMSEA equivalence test

*rmsea.res$decision*: NHST decision for the RMSEA equivalence test

*cfi.res$cfi_index*: The CFI value

*cfi.res$ci.method*: The confidence interval method selected for computing the lower bound of the confidence interval for the CFI

*cfi.res$alpha*: Nominal Type I error rate (this value is the same for all RMSEA/CFI/SRMR tests)

*cfi.res$cfi_eq*: The value of the lower bound of the confidence interval for CFI

*cfi.res$modif.eq.bound*: Whether a modified equivalence bound was used for the CFI equivalence test

*cfi.res$eq.bound*: The value of the equivalence bound used for the CFI equivalence test

*cfi.res$decision*: NHST decision for the CFI equivalence test

*srmr.res$usrmr*: Whether the unbiased SRMR is used for the equivalence test

*srmr.res$srmr_index*: The SRMR index (uSRMR or original SRMR, depending on whether usrmr is TRUE or FALSE)

*srmr.res$ci.method*: The confidence interval method selected for computing the upper bound of the confidence interval for the SRMR

*srmr.res$alpha*: Nominal Type I error rate (this value is the same for all RMSEA/CFI/SRMR tests)

*srmr.res$srmr_ci*: The value of the upper bound of the confidence interval for SRMR

*srmr.res$modif.eq.bound*: Whether a modified equivalence bound was used for the SRMR equivalence test

*srmr.res$eq.bound*: The value of the equivalence bound used for the SRMR equivalence test

*srmr.res$decision*: NHST decision for the SRMR equivalence test
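As a quick usage sketch (element names as listed above; `fit1` is the fitted lavaan object from Example 1):

```r
# Save the results to an object, then extract individual elements with $.
x <- neg.semfit(mod = fit1)

x$rmsea.res$rmsea_index  # the RMSEA value
x$cfi.res$eq.bound       # equivalence bound used in the CFI test
x$srmr.res$decision      # NHST decision for the SRMR equivalence test
```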
