Department of Industrial Psychology

Stellenbosch University

South Africa

Analysing survey-type data with the rating scale model

This is a demonstration of how R can be used to analyse scales whose items employ ordered response categories (e.g. Likert-type items). I use the rating scale model (Andrich, 1978; Wright & Masters, 1982), one of a family of Rasch models, to analyse the responses of 1377 persons to the items of the General Work Stress Scale (de Bruin & Taylor, 2006). The analysis is performed with the TAM package (Robitzsch et al., 2022). It is assumed that readers have a basic understanding of Rasch models.

There are two Rasch models that can be used for the analysis of scales with items that employ ordered response categories. The rating scale model can only be used if all the items share the same set of response categories, labelled in the same way (e.g. Likert-type items). The model estimates one set of category thresholds that holds across all the items. The partial credit model can be used with this type of data, but can also be employed when the ordered response categories differ across the items. In contrast to the rating scale model, the partial credit model estimates a unique set of thresholds for each item. Because it estimates more parameters, the partial credit model will in general provide better fit; the rating scale model, however, is simpler and easier to interpret. A brief comparison of the two models is sketched below, after the rating scale model has been fitted.

The R code below can be used to perform a basic rating scale model analysis of any unidimensional scale with ordinal response categories. Users should import their data and store it as a data frame named raschdata, which should contain only the items of interest. Square brackets can be used to select items for inclusion or exclusion.
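
For instance, the sketch below keeps the first nine columns of an imported data frame and drops a hypothetical tenth column (the column positions are illustrative only and should be adjusted to the user's data):

# Keep only the item columns (illustrative column positions)
raschdata <- raschdata[, 1:9]
# Alternatively, drop an unwanted column by position
# raschdata <- raschdata[, -10]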

Users should ensure that the items are all scored in the same direction (i.e. if required, items should be reverse scored before the analysis begins).
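
As a minimal sketch, a negatively worded item scored from 0 to 4 can be reverse scored by subtracting the observed responses from the maximum category; the item name used below is hypothetical:

# Reverse score a hypothetical item i10 scored 0-4
# (reversed score = maximum category minus observed score)
raschdata$i10 <- 4 - raschdata$i10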

Users must have the TAM and psych packages installed on their machines.
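
If the packages are not yet installed, they can be obtained from CRAN:

# Install the required packages (only needed once)
install.packages(c("TAM", "psych"))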

Store the data and find descriptive statistics

library(TAM)
library(psych)
raschdata <- read.csv("stressdata.csv")

nitems <- ncol(raschdata)

describe(raschdata)
##    vars    n mean   sd median trimmed  mad min max range skew kurtosis   se
## i1    1 1377 1.60 1.06      2    1.56 1.48   0   4     4 0.25    -0.35 0.03
## i2    2 1377 1.15 1.11      1    1.01 1.48   0   4     4 0.74    -0.19 0.03
## i3    3 1377 1.23 1.06      1    1.11 1.48   0   4     4 0.69    -0.07 0.03
## i4    4 1377 1.24 1.09      1    1.14 1.48   0   4     4 0.54    -0.48 0.03
## i5    5 1377 1.04 0.93      1    0.95 1.48   0   4     4 0.70     0.13 0.03
## i6    6 1377 1.08 0.90      1    1.01 1.48   0   4     4 0.65     0.18 0.02
## i7    7 1377 1.67 1.05      2    1.66 1.48   0   4     4 0.16    -0.66 0.03
## i8    8 1377 0.97 0.94      1    0.87 1.48   0   4     4 0.86     0.49 0.03
## i9    9 1377 0.99 0.96      1    0.88 1.48   0   4     4 0.91     0.59 0.03

Find response frequencies across response categories

alpha(raschdata)$response.freq
##            0         1         2          3          4 miss
## i1 0.1728395 0.2716049 0.3907044 0.11111111 0.05374001    0
## i2 0.3565723 0.2984749 0.2265795 0.07988381 0.03848947    0
## i3 0.2817720 0.3645606 0.2360203 0.08133624 0.03631082    0
## i4 0.3071895 0.3028322 0.2599855 0.10021786 0.02977487    0
## i5 0.3202614 0.3972404 0.2171387 0.05228758 0.01307190    0
## i6 0.2824982 0.4306463 0.2244009 0.05083515 0.01161946    0
## i7 0.1408860 0.3144517 0.3195352 0.18736383 0.03776325    0
## i8 0.3623820 0.3812636 0.1982571 0.04066812 0.01742919    0
## i9 0.3536674 0.3870733 0.1931736 0.04429920 0.02178649    0

Fit the data to the rating scale model

# Estimate the item parameters of the rating scale model
myTAM <- tam.mml(raschdata, 
                 irtmodel = "RSM")
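
As noted in the introduction, the partial credit model estimates a unique set of thresholds for each item and will in general fit better than the rating scale model. The sketch below fits the partial credit model with the TAM defaults and compares it with the rating scale model; the likelihood ratio test and information criteria are obtained with TAM's anova() method:

# Fit the partial credit model for comparison with the rating scale model
myPCM <- tam.mml(raschdata, 
                 irtmodel = "PCM")

# Likelihood ratio test and information criteria for the two models
anova(myTAM, myPCM)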

Inspect the item parameters and centralized category thresholds

myTAM$item_irt
##   item alpha      beta  tau.Cat1   tau.Cat2  tau.Cat3 tau.Cat4
## 1   i1     1 0.6418076 -2.273421 -0.7179472 0.9974021 1.993966
## 2   i2     1 1.5021335 -2.273421 -0.7179472 0.9974021 1.993966
## 3   i3     1 1.3414514 -2.273421 -0.7179472 0.9974021 1.993966
## 4   i4     1 1.3087310 -2.273421 -0.7179472 0.9974021 1.993966
## 5   i5     1 1.7184458 -2.273421 -0.7179472 0.9974021 1.993966
## 6   i6     1 1.6392014 -2.273421 -0.7179472 0.9974021 1.993966
## 7   i7     1 0.5258696 -2.273421 -0.7179472 0.9974021 1.993966
## 8   i8     1 1.8717364 -2.273421 -0.7179472 0.9974021 1.993966
## 9   i9     1 1.8195057 -2.273421 -0.7179472 0.9974021 1.993966
deltas <- myTAM$xsi[1:nitems, ]

Plot the option characteristic curves

# Plot the option characteristic curves
plot(myTAM, 
     type = "items", 
     export = FALSE, 
     package = "graphics", 
     observed = TRUE, 
     low = -6, 
     high = 6)

Plot the empirical and theoretical expected scores curves

plot(myTAM, 
     type = "expected", 
     ngroups = 6, 
     low  = -6, 
     high = 6,
     package = "lattice", overlay = FALSE)

Overlay the expected score curves

plot(myTAM, 
     type = "expected", 
     ngroups = 6, 
     low  = -6, 
     high = 6,
     package = "lattice", 
     overlay = TRUE,
     observed = FALSE)
## Iteration in WLE/MLE estimation  1   | Maximal change  2.0815 
## Iteration in WLE/MLE estimation  2   | Maximal change  0.8564 
## Iteration in WLE/MLE estimation  3   | Maximal change  0.4896 
## Iteration in WLE/MLE estimation  4   | Maximal change  0.1308 
## Iteration in WLE/MLE estimation  5   | Maximal change  0.0037 
## Iteration in WLE/MLE estimation  6   | Maximal change  1e-04 
## Iteration in WLE/MLE estimation  7   | Maximal change  0 
## ----
##  WLE Reliability= 0.867

## ....................................................
##  Plots exported in png format into folder:
##  C:/Users/deondb/OneDrive - Stellenbosch University/Documents/Rpubs/Plots

Inspect the infit and outfit statistics

fit <- msq.itemfit(myTAM)$itemfit
(fit <- data.frame(fit[1], round(fit[3:8], 2)))
##   item Outfit Outfit_t Outfit_p Infit Infit_t Infit_p
## 1   i1   0.96    -1.05     0.30  0.94   -1.72    0.08
## 2   i2   0.97    -0.66     0.51  1.02    0.57    0.57
## 3   i3   1.03     0.67     0.50  1.04    0.93    0.35
## 4   i4   1.15     3.54     0.00  1.17    4.29    0.00
## 5   i5   1.00    -0.04     0.97  1.00    0.01    0.99
## 6   i6   0.77    -5.67     0.00  0.76   -6.65    0.00
## 7   i7   1.10     2.52     0.01  1.08    2.20    0.03
## 8   i8   0.90    -2.15     0.03  0.92   -2.11    0.03
## 9   i9   1.10     2.17     0.03  1.12    2.97    0.00

Inspect Yen’s Q3 and adjusted Q3 statistics

q3 <- tam.modelfit(myTAM)$stat.itempair
## **** Calculate Residuals 
## **** Calculate Counts 
## **** Calculate Covariances
q3 <- data.frame(q3[1:2], round(q3[5:6], 2), round(q3[7:8], 4))
q3
##    item1 item2    Q3   aQ3      p p.holm
## 1     i1    i2  0.40  0.51 0.0000 0.0000
## 27    i5    i6  0.23  0.34 0.0000 0.0000
## 24    i4    i7  0.16  0.27 0.0000 0.0000
## 13    i2    i7 -0.35 -0.24 0.0000 0.0000
## 9     i2    i3  0.12  0.23 0.0000 0.0000
## 6     i1    i7 -0.31 -0.19 0.0000 0.0000
## 3     i1    i4 -0.29 -0.18 0.0000 0.0000
## 2     i1    i3  0.04  0.15 0.0000 0.0000
## 11    i2    i5 -0.25 -0.14 0.0000 0.0000
## 10    i2    i4 -0.24 -0.13 0.0000 0.0001
## 4     i1    i5 -0.24 -0.12 0.0000 0.0001
## 19    i3    i7 -0.22 -0.11 0.0000 0.0009
## 5     i1    i6 -0.22 -0.11 0.0001 0.0015
## 21    i3    i9 -0.21 -0.10 0.0002 0.0041
## 18    i3    i6 -0.20 -0.09 0.0005 0.0121
## 20    i3    i8 -0.20 -0.09 0.0011 0.0234
## 26    i4    i9 -0.20 -0.09 0.0012 0.0237
## 34    i7    i8 -0.02  0.09 0.0012 0.0237
## 32    i6    i8 -0.03  0.08 0.0018 0.0317
## 36    i8    i9 -0.03  0.08 0.0024 0.0410
## 12    i2    i6 -0.19 -0.08 0.0034 0.0541
## 30    i5    i9 -0.05  0.06 0.0180 0.2703
## 23    i4    i6 -0.17 -0.06 0.0242 0.3388
## 35    i7    i9 -0.17 -0.06 0.0276 0.3587
## 7     i1    i8 -0.17 -0.05 0.0430 0.5162
## 16    i3    i4 -0.06  0.05 0.0525 0.5775
## 17    i3    i5 -0.16 -0.05 0.0628 0.6283
## 33    i6    i9 -0.06  0.05 0.0660 0.6283
## 29    i5    i8 -0.15 -0.04 0.1386 1.0000
## 8     i1    i9 -0.15 -0.04 0.1595 1.0000
## 15    i2    i9 -0.08  0.03 0.2436 1.0000
## 28    i5    i7 -0.14 -0.03 0.3095 1.0000
## 31    i6    i7 -0.08  0.03 0.3401 1.0000
## 25    i4    i8 -0.09  0.02 0.3892 1.0000
## 22    i4    i5 -0.13 -0.02 0.3997 1.0000
## 14    i2    i8 -0.10  0.02 0.5690 1.0000

Inspect Root Mean Squared Discrepancy statistics

(item.RMSD <- IRT.itemfit(myTAM)$RMSD_bc)
##   item     Group1
## 1   i1 0.04523986
## 2   i2 0.06536369
## 3   i3 0.03361483
## 4   i4 0.03904145
## 5   i5 0.02867551
## 6   i6 0.05839923
## 7   i7 0.04668713
## 8   i8 0.02544132
## 9   i9 0.02677296

Find and store the standardized residuals

myTAMresid <- IRT.residuals(myTAM)$stand_residuals

Scree plot of the standardized residuals

psych::scree(myTAMresid, 
             factors = FALSE)

Perform unrotated principal components analysis of the residuals

(pcaresid <- pca(myTAMresid,
                 nfactors = ncol(myTAMresid),
                 rotate = "none"))
## Principal Components Analysis
## Call: principal(r = r, nfactors = nfactors, residuals = residuals, 
##     rotate = rotate, n.obs = n.obs, covar = covar, scores = scores, 
##     missing = missing, impute = impute, oblique.scores = oblique.scores, 
##     method = method, use = use, cor = cor, correct = 0.5, weight = NULL)
## Standardized loadings (pattern matrix) based upon correlation matrix
##      PC1   PC2   PC3   PC4   PC5   PC6   PC7   PC8  PC9 h2       u2 com
## i1  0.74  0.03 -0.06  0.17 -0.42  0.02 -0.13 -0.47 0.06  1  5.6e-16 2.6
## i2  0.77  0.03 -0.07  0.10 -0.11  0.16  0.11  0.58 0.05  1 -1.8e-15 2.1
## i3  0.40 -0.30  0.48 -0.12  0.61 -0.35  0.03 -0.08 0.06  1 -6.7e-16 4.1
## i4 -0.38 -0.63  0.19 -0.14 -0.04  0.60  0.19 -0.07 0.06  1 -6.7e-16 3.2
## i5 -0.35  0.55  0.49 -0.02 -0.03  0.16 -0.54  0.11 0.05  1 -1.6e-15 4.0
## i6 -0.35  0.57  0.23  0.38 -0.03 -0.08  0.59 -0.06 0.05  1 -4.4e-16 3.8
## i7 -0.55 -0.47 -0.14  0.09 -0.38 -0.52 -0.10  0.16 0.06  1 -1.6e-15 4.3
## i8 -0.19  0.03 -0.67  0.46  0.50  0.13 -0.19 -0.04 0.05  1  7.8e-16 3.3
## i9 -0.07  0.40 -0.45 -0.78  0.04 -0.05  0.12 -0.04 0.06  1 -6.7e-16 2.3
## 
##                        PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8  PC9
## SS loadings           2.04 1.50 1.23 1.05 0.96 0.82 0.76 0.61 0.03
## Proportion Var        0.23 0.17 0.14 0.12 0.11 0.09 0.08 0.07 0.00
## Cumulative Var        0.23 0.39 0.53 0.65 0.75 0.85 0.93 1.00 1.00
## Proportion Explained  0.23 0.17 0.14 0.12 0.11 0.09 0.08 0.07 0.00
## Cumulative Proportion 0.23 0.39 0.53 0.65 0.75 0.85 0.93 1.00 1.00
## 
## Mean item complexity =  3.3
## Test of the hypothesis that 9 components are sufficient.
## 
## The root mean square of the residuals (RMSR) is  0 
##  with the empirical chi square  0  with prob <  NA 
## 
## Fit based upon off diagonal values = 1

Plot the test information curve

### Plot the test information curve
info.myTAM <- IRT.informationCurves(myTAM)

plot(info.myTAM, xlab = "Theta", 
     ylab = "Test information", 
     main = "Test Information Curve")

Plot the test standard error curve

plot(info.myTAM, 
     curve_type = "se", 
     xlab = "Theta", 
     ylab = "Standard error", 
     main = "Test Standard Error Curve")

Find and store the estimated person measures

Users can choose “MLE”, “WLE” or “EAP”.

pmod1 <- IRT.factor.scores(myTAM, type = "WLE")
## ***** Estimate WLE *****
## Iteration in WLE/MLE estimation  1   | Maximal change  2.0815 
## Iteration in WLE/MLE estimation  2   | Maximal change  0.8564 
## Iteration in WLE/MLE estimation  3   | Maximal change  0.4896 
## Iteration in WLE/MLE estimation  4   | Maximal change  0.1308 
## Iteration in WLE/MLE estimation  5   | Maximal change  0.0037 
## Iteration in WLE/MLE estimation  6   | Maximal change  1e-04 
## Iteration in WLE/MLE estimation  7   | Maximal change  0 
## ----
##  WLE Reliability= 0.867
#pmod1 <- IRT.factor.scores(myTAM, type = "EAP")
#pmod1 <- IRT.factor.scores(myTAM, type = "MLE")

Distribution of person measures

### Histogram and boxplot of the WLE person measures 
hist(pmod1$theta, 
     xlab = "Person measures in logits")

boxplot(pmod1$theta, 
        ylab = "Person measures in logits")

### Histogram and boxplot of the EAP person measures 
#hist(pmod1$EAP, 
#     xlab = "Person measures in logits")

#boxplot(pmod1$EAP, 
#        ylab = "Person measures in logits")

Construct alley using outfit

### Plot a construct alley using outfit
plot(msq.itemfit(myTAM)$itemfit$Outfit, 
     deltas$xsi[1:nitems], 
     xlim = c(0.5, 1.5), type = "n",
     xlab = "Outfit mean square", 
     ylab = "Item location in logits")

text(msq.itemfit(myTAM)$itemfit$Outfit, 
     deltas$xsi[1:nitems],
     labels = colnames(raschdata), 
     cex = 0.9, 
     font = 2)

abline(v = 1)
abline(v = 1.30, col = "blue")
abline(v = 0.70, col = "blue")

Construct alley using infit

### Plot a construct alley using infit
plot(msq.itemfit(myTAM)$itemfit$Infit, 
     deltas$xsi[1:nitems], 
     xlim = c(0.5, 1.5), 
     type = "n",
     xlab = "Infit mean square", 
     ylab = "Item location in logits")
  
text(msq.itemfit(myTAM)$itemfit$Infit, 
     deltas$xsi[1:nitems],
     labels = colnames(raschdata), 
     cex = 0.9, font = 2)

abline(v = 1)
abline(v = 1.30, col = "blue")
abline(v = 0.70, col = "blue")

Wright map with Thurstonian thresholds

IRT.WrightMap(myTAM)

Descriptive statistics of person measures

psych::describe(pmod1$theta)
##    vars    n mean   sd median trimmed  mad   min  max range  skew kurtosis   se
## X1    1 1377    0 1.53   0.01    0.03 1.28 -3.94 6.37 10.31 -0.13     0.84 0.04

Reliability of the person measures

WLErel(pmod1$theta, 
       pmod1$error)
## [1] 0.8665239

Coefficients alpha, beta, and omega

The reliability() function yields three estimates of test reliability: Cronbach’s alpha, McDonald’s omega, and Revelle’s beta coefficients. Coefficient beta is the minimum split-half reliability. The function also yields three experimental unidimensionality statistics: (a) the fit of a single factor model, (b) the fit of a model in which the mean item correlation is used as the “model”, and (c) a unidimensionality statistic that is the product of (a) and (b). See the help pages of the psych package for more information (Revelle, 2023).

If the number of factors is changed to two or more, coefficient omega_hierarchical represents the proportion of observed variance that is explained by the general factor of a Schmid-Leiman transformation of the oblique factor solution, and omega_total represents the proportion of observed variance that is accounted for by all the common factors (Revelle & Condon, 2019).

reliability(raschdata, nfactors = 1)
## Measures of reliability 
## reliability(keys = raschdata, nfactors = 1)
##           omega_h alpha omega.tot  Uni r.fit fa.fit max.split min.split mean.r
## All_items     0.9   0.9       0.9 0.96  0.98   0.99      0.92      0.84   0.51
##           med.r n.items
## All_items   0.5       9
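
As described above, requesting two or more factors yields omega_hierarchical and omega_total. A minimal sketch follows (the choice of two factors is purely illustrative):

# Hierarchical omega based on a two-factor solution (illustrative)
reliability(raschdata, nfactors = 2)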

References

Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43, 561-573.

de Bruin, G. P., & Taylor, N. (2005). Development of the Sources of Work Stress Inventory. South African Journal of Psychology, 35, 748-765.

de Bruin, G. P. (2006). Dimensionality of the General Work Stress Scale. SA Journal of Industrial Psychology, 32(4), 68-75.

Revelle, W. (2023). psych: Procedures for Psychological, Psychometric, and Personality Research. Northwestern University, Evanston, Illinois. R package version 2.3.3, https://CRAN.R-project.org/package=psych.

Revelle, W., & Condon, D. (2019). Reliability from \(\alpha\) to \(\omega\). Psychological Assessment, 31, 1395-1411.

Robitzsch, A., Kiefer, T., & Wu, M. (2022). TAM: Test Analysis Modules. R package version 4.1-4, https://CRAN.R-project.org/package=TAM.

Wright, B. D., & Masters, G. N. (1982). Rating scale analysis. Chicago: MESA Press.

Zinbarg, R. E., Revelle, W., Yovel, I., & Li, W. (2005). Cronbach’s \(\alpha\), Revelle’s \(\beta\), and McDonald’s \(\omega_H\): Their relations with each other and two alternative conceptualizations of reliability. Psychometrika, 70, 123-133. http://dx.doi.org/10.1007/s11336-003-0974-7