We’ll revisit the eRm package to run some basic differential item functioning (DIF) analyses.
library(tidyverse)
library(eRm)
Our dataset comes from McNamara et al. (2019). It’s an L2 listening test taken by roughly 400 examinees.
d <- read_csv("DIF analysis.csv")
##
## -- Column specification --------------------------------------------------------
## cols(
## .default = col_double()
## )
## i Use `spec()` for the full column specifications.
items <- select(d, 3:22)  # the 20 item-response columns
gender <- d$Gender        # grouping variable for the DIF analyses
I’ve gone ahead and saved the item responses and the group identifier separately; it’s not strictly necessary, but it keeps the later calls tidy.
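Purely as a sanity check before modeling, it’s worth confirming the pieces line up:

# Expect 20 item columns; check the group sizes for the split variable
dim(items)
table(gender, useNA = "ifany")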
We’ll run a Rasch model just to get a sense of how the items work.
rasch <- eRm::RM(items, sum0 = T, se = T)
summary(rasch)
##
## Results of RM estimation:
##
## Call: eRm::RM(X = items, se = T, sum0 = T)
##
## Conditional log-likelihood: -3927.053
## Number of iterations: 15
## Number of parameters: 19
##
## Item (Category) Difficulty Parameters (eta): with 0.95 CI:
## Estimate Std. Error lower CI upper CI
## ITEM02 -0.431 0.106 -0.638 -0.223
## ITEM03 -0.350 0.105 -0.556 -0.145
## ITEM04 -0.260 0.104 -0.464 -0.056
## ITEM05 0.372 0.103 0.171 0.573
## ITEM06 0.394 0.103 0.192 0.595
## ITEM07 0.080 0.102 -0.121 0.281
## ITEM08 0.123 0.102 -0.078 0.324
## ITEM09 -0.362 0.105 -0.568 -0.156
## ITEM10 0.350 0.103 0.149 0.552
## ITEM11 1.384 0.117 1.154 1.613
## ITEM12 0.515 0.104 0.312 0.717
## ITEM13 0.285 0.103 0.084 0.486
## ITEM14 0.892 0.107 0.681 1.102
## ITEM15 0.416 0.103 0.214 0.617
## ITEM16 1.089 0.111 0.872 1.305
## ITEM17 0.123 0.102 -0.078 0.324
## ITEM18 -0.745 0.111 -0.962 -0.528
## ITEM19 0.740 0.106 0.533 0.946
## ITEM20 -1.204 0.121 -1.442 -0.966
##
## Item Easiness Parameters (beta) with 0.95 CI:
## Estimate Std. Error lower CI upper CI
## beta ITEM01 3.409 0.272 2.876 3.942
## beta ITEM02 0.431 0.106 0.223 0.638
## beta ITEM03 0.350 0.105 0.145 0.556
## beta ITEM04 0.260 0.104 0.056 0.464
## beta ITEM05 -0.372 0.103 -0.573 -0.171
## beta ITEM06 -0.394 0.103 -0.595 -0.192
## beta ITEM07 -0.080 0.102 -0.281 0.121
## beta ITEM08 -0.123 0.102 -0.324 0.078
## beta ITEM09 0.362 0.105 0.156 0.568
## beta ITEM10 -0.350 0.103 -0.552 -0.149
## beta ITEM11 -1.384 0.117 -1.613 -1.154
## beta ITEM12 -0.515 0.104 -0.717 -0.312
## beta ITEM13 -0.285 0.103 -0.486 -0.084
## beta ITEM14 -0.892 0.107 -1.102 -0.681
## beta ITEM15 -0.416 0.103 -0.617 -0.214
## beta ITEM16 -1.089 0.111 -1.305 -0.872
## beta ITEM17 -0.123 0.102 -0.324 0.078
## beta ITEM18 0.745 0.111 0.528 0.962
## beta ITEM19 -0.740 0.106 -0.946 -0.533
## beta ITEM20 1.204 0.121 0.966 1.442
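A note on signs: the difficulty (eta) and easiness (beta) tables above are mirror images, since difficulty is just the negative of easiness. If you want difficulties as a plain named vector, a sign flip on the fitted object’s betapar slot does it (a minimal sketch):

# betapar holds the easiness parameters; negate to get difficulties
item_diff <- -rasch$betapar
sort(item_diff)  # easiest to hardest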
Chapter 7 discusses unidimensionality tests. Aside from a PCA of Rasch residuals, it’s also common to conduct Andersen’s Likelihood Ratio Test on a median split of the data. The idea is that, if the Rasch model holds, item parameters should be invariant across any grouping of the test takers: low and high scorers should yield the same estimates, within error.
We can explore that using the LRtest function in eRm. By default, it uses a median split based on total scores.
summary(LRtest(rasch, se = T))
##
## Andersen LR-test:
## LR-value: 57.801
## Chi-square df: 19
## p-value: 0
##
##
## Subject Subgroup: Raw Scores <= Median:
## Log-likelihood: -2061.359
##
## Beta Parameters:
## beta ITEM01 beta ITEM02 beta ITEM03 beta ITEM04 beta ITEM05
## Estimate 3.2336629 0.3984213 0.1483465 0.2450110 -0.2985310
## Std.Err. 0.2866946 0.1361359 0.1370902 0.1364766 0.1440495
## beta ITEM06 beta ITEM07 beta ITEM08 beta ITEM09 beta ITEM10
## Estimate -0.4313368 -0.4313368 -0.1923826 0.2257487 -0.1923826
## Std.Err. 0.1474718 0.1474719 0.1417690 0.1365742 0.1417691
## beta ITEM11 beta ITEM12 beta ITEM13 beta ITEM14 beta ITEM15
## Estimate -1.0657013 -0.4313368 -0.5239593 -1.0341380 -0.5717236
## Std.Err. 0.1731488 0.1474719 0.1502420 0.1714829 0.1517961
## beta ITEM16 beta ITEM17 beta ITEM18 beta ITEM19 beta ITEM20
## Estimate -0.8016802 0.08981432 0.7666005 -0.4087284 1.275632
## Std.Err. 0.1605091 0.13761313 0.1384496 0.1468439 0.148977
##
##
## Subject Subgroup: Raw Scores > Median:
## Log-likelihood: -1836.794
##
## Beta Parameters:
## beta ITEM01 beta ITEM02 beta ITEM03 beta ITEM04 beta ITEM05
## Estimate 4.3784055 0.4180568 0.6088446 0.2190507 -0.5035908
## Std.Err. 0.9528137 0.1754755 0.1847247 0.1674354 0.1506343
## beta ITEM06 beta ITEM07 beta ITEM08 beta ITEM09 beta ITEM10
## Estimate -0.4187078 0.2737924 -0.1064138 0.5106807 -0.5664614
## Std.Err. 0.1516714 0.1694895 0.1575757 0.1797719 0.1500188
## beta ITEM11 beta ITEM12 beta ITEM13 beta ITEM14 beta ITEM15
## Estimate -1.6508915 -0.6494782 -0.1064138 -0.8545678 -0.3322365
## Std.Err. 0.1590055 0.1494007 0.1575759 0.1488114 0.1529734
## beta ITEM16 beta ITEM17 beta ITEM18 beta ITEM19 beta ITEM20
## Estimate -1.3557733 -0.4187078 0.6429697 -1.0805098 0.9919522
## Std.Err. 0.1528941 0.1516712 0.1865454 0.1496848 0.2082697
The test is significant (LR = 57.80, df = 19, p < .001), so there’s some kind of problem. Comparing the two blocks of output, you can see that several difficulty estimates are not the same for the low- and high-scoring groups (e.g., ITEM03 and ITEM11).
Maybe it’s gender? We can look at an LR test based on a gender split.
rasch.lr <- LRtest(rasch, splitcr = gender, se = T)
summary(rasch.lr)
##
## Andersen LR-test:
## LR-value: 63.892
## Chi-square df: 19
## p-value: 0
##
##
## Subject Subgroup: gender 1:
## Log-likelihood: -1844.842
##
## Beta Parameters:
## beta ITEM01 beta ITEM02 beta ITEM03 beta ITEM04 beta ITEM05
## Estimate 3.6357633 0.5738635 0.6280113 0.3922672 -0.6648501
## Std.Err. 0.4845486 0.1612232 0.1628193 0.1565697 0.1491787
## beta ITEM06 beta ITEM07 beta ITEM08 beta ITEM09 beta ITEM10
## Estimate -0.1273997 0.1726447 -0.1047875 0.2686720 -0.8951881
## Std.Err. 0.1488717 0.1523279 0.1490406 0.1540001 0.1518674
## beta ITEM11 beta ITEM12 beta ITEM13 beta ITEM14 beta ITEM15
## Estimate -1.398660 -0.3513689 -0.2621182 -0.9424530 -0.4405148
## Std.Err. 0.163426 0.1479943 0.1481717 0.1526143 0.1480444
## beta ITEM16 beta ITEM17 beta ITEM18 beta ITEM19 beta ITEM20
## Estimate -1.162362 -0.2844567 0.6555219 -0.9902330 1.2976482
## Std.Err. 0.156993 0.1481061 0.1636679 0.1534383 0.1912338
##
##
## Subject Subgroup: gender 2:
## Log-likelihood: -2050.265
##
## Beta Parameters:
## beta ITEM01 beta ITEM02 beta ITEM03 beta ITEM04 beta ITEM05
## Estimate 3.2866577 0.3095301 0.1219248 0.1426628 -0.1059930
## Std.Err. 0.3298964 0.1424512 0.1415282 0.1415839 0.1416718
## beta ITEM06 beta ITEM07 beta ITEM08 beta ITEM09 beta ITEM10
## Estimate -0.6654164 -0.3155829 -0.1476054 0.4365964 0.1426628
## Std.Err. 0.1478423 0.1430134 0.1418461 0.1436213 0.1415839
## beta ITEM11 beta ITEM12 beta ITEM13 beta ITEM14 beta ITEM15
## Estimate -1.3817645 -0.6882103 -0.3155829 -0.8520396 -0.400908
## Std.Err. 0.1684268 0.1482713 0.1430135 0.1517731 0.143891
## beta ITEM16 beta ITEM17 beta ITEM18 beta ITEM19 beta ITEM20
## Estimate -1.0250390 0.01838527 0.8143389 -0.5093284 1.1347121
## Std.Err. 0.1562915 0.14142298 0.1498100 0.1452843 0.1584571
plotGOF(rasch.lr)
The plot may need resizing to render cleanly, but it’s informative when formatted correctly: it graphs each item’s estimate in one subgroup against its estimate in the other, so items far from the 45-degree identity line are the ones that deviate most across groups.
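Because we ran LRtest() with se = T, plotGOF() can also draw 95% confidence ellipses around each item, which makes the deviating items easier to pick out (passing an empty list uses the default ellipse settings):

# Items whose ellipses sit off the identity line stand out as suspect
plotGOF(rasch.lr, conf = list())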
For item-level DIF analyses, the eRm package implements a Wald test. This is roughly equivalent to the Rasch-Welch test in Winsteps, but it returns a z value and associated p-value instead of a t value.
rasch.wald <- Waldtest(rasch, splitcr = gender)
print(rasch.wald)
##
## Wald test on item level (z-values):
##
## z-statistic p-value
## beta ITEM01 0.596 0.551
## beta ITEM02 1.229 0.219
## beta ITEM03 2.346 0.019
## beta ITEM04 1.182 0.237
## beta ITEM05 -2.716 0.007
## beta ITEM06 2.564 0.010
## beta ITEM07 2.337 0.019
## beta ITEM08 0.208 0.835
## beta ITEM09 -0.797 0.425
## beta ITEM10 -4.999 0.000
## beta ITEM11 -0.072 0.943
## beta ITEM12 1.608 0.108
## beta ITEM13 0.260 0.795
## beta ITEM14 -0.420 0.674
## beta ITEM15 -0.192 0.848
## beta ITEM16 -0.620 0.535
## beta ITEM17 -1.479 0.139
## beta ITEM18 -0.716 0.474
## beta ITEM19 -2.276 0.023
## beta ITEM20 0.656 0.512
We need better output, though, to evaluate the size of the DIF rather than just its significance. A little manual work sets up a more helpful dataframe.
# coef.table holds the z statistics and p-values; betapar1 and betapar2 are
# the per-group easiness estimates, so we negate them to get difficulties
DIF <- tibble(item  = row.names(rasch.wald$coef.table),
              z     = rasch.wald$coef.table[, 1],
              p     = rasch.wald$coef.table[, 2],
              m_est = -1 * rasch.wald$betapar1,      # difficulty, group 1 (males)
              f_est = -1 * rasch.wald$betapar2) %>%  # difficulty, group 2 (females)
  mutate(contrast = m_est - f_est)
Now we can get a sense of the size of the DIF for each item. We can also make a neat-looking plot:
DIF %>%
  ggplot(aes(x = item, y = contrast, fill = p < .05)) +
  geom_col() +
  geom_hline(yintercept = c(-.5, .5), linetype = "dashed") +
  labs(y = "DIF contrast (logits)\nHarder for Females <- -> Harder for Males",
       x = "") +
  coord_flip() +
  theme_bw()
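The dashed lines mark the conventional half-logit cutoff. Combining that with the Wald p-value gives a quick flagging rule (the 0.5-logit threshold is a rule of thumb, not a hard criterion):

# Items with a statistically significant and sizable contrast
DIF %>% filter(p < .05, abs(contrast) >= 0.5)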