This analysis aims to evaluate the quality of test items in assessing students’ understanding of mathematical concepts using the Rasch Model. This model was chosen because it can assess item fit and estimate students’ abilities objectively along a single ability dimension.
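For reference, the Rasch Model expresses the probability that student $i$ answers item $j$ correctly as a logistic function of the gap between the student's ability $\theta_i$ and the item's difficulty $b_j$:

$$
P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}
$$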
### Data Description
#>  item1  item2  item3  item4  item5  item6  item7  item8  item9 item10 item11
#>      0      0      0      0      0      0      0      0      0      0      0
#> item12 item13 item14 item15
#>      0      0      0      0
Based on the output above, none of the 15 items contain NA or missing values.
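For reproducibility, per-item missing-value counts like those above can be generated with a one-liner; a minimal sketch, assuming the responses are stored in a data frame named df, as in the model-fitting code below:

# Count NA values in each item column
colSums(is.na(df))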
library(mirt)
library(dplyr)

# Fit a Rasch (1PL) model: one latent dimension, discrimination fixed at 1
df_rasch <- mirt(data = df, model = 1, itemtype = "Rasch",
                 technical = list(NCYCLES = 1000000),
                 verbose = FALSE)
# save model
saveRDS(df_rasch, file = "model_rasch.RDS")
### Load Model
model_rasch <- readRDS("model_rasch.RDS")
# Extract IRT parameters; $items holds the per-item estimates (a, b, g, u)
df_rasch_param <- coef(model_rasch, IRTpars = TRUE, simplify = TRUE)$items %>%
  as.data.frame() %>%
  rename(Difficulty_Level = b,      # item difficulty
         Discrimination = a) %>%    # fixed at 1 under the Rasch model
  select(Difficulty_Level, Discrimination)
df_rasch_param
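As an optional extra step (not part of the original output), the table can be sorted so the easiest and hardest items stand out:

# Keep item names as a column, then order from easiest (lowest b) to hardest
df_rasch_param %>%
  tibble::rownames_to_column("item") %>%
  arrange(Difficulty_Level)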
### Interpretation
Higher values of Difficulty_Level (b) indicate harder items. Because the Rasch Model constrains every item to the same discrimination, the Discrimination column is constant across items.
item_fit <- itemfit(model_rasch)

# Flag items whose S-X2 p-value signals misfit at alpha = 0.05
item_fit %>%
  mutate(fit_status = case_when(p.S_X2 < 0.05 ~ "Misfit",
                                TRUE ~ "Fit"))
### Interpretation
Items 1, 2, 4, 5, 8, 11, 13, 14, and 15 show misfit (p < 0.05), meaning that students' responses to these items do not align with the predictions of the Rasch Model.
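The same set of flagged items can be extracted programmatically rather than read off the table, using the p.S_X2 column returned by itemfit():

# Items flagged as misfitting at alpha = 0.05
item_fit %>%
  filter(p.S_X2 < 0.05) %>%
  pull(item)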
# Estimate each student's ability (theta); mirt defaults to EAP scoring
theta_est <- fscores(model_rasch)
theta_est %>%
  data.frame() %>%
  rename(theta = F1) %>%
  head(15)
# Attach ability estimates to the response data and summarise their distribution
df_theta <- cbind(df, theta_est)
df_theta %>%
  rename(theta = F1) %>%
  summarise(Min = min(theta),
            Max = max(theta),
            Mean = mean(theta),
            SD = sd(theta))
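Beyond the summary statistics, a quick histogram shows the shape of the ability distribution; a minimal sketch in base R (theta_est keeps mirt's default column name F1):

# Histogram of estimated student abilities
hist(df_theta$F1, breaks = 20,
     main = "Distribution of ability estimates", xlab = "theta")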
### Interpretation
# Item characteristic curves (ICCs) for all 15 items
plot(model_rasch, type = "trace")
This graph shows the probability of a correct response as a function of ability (theta). Each curve is the item characteristic curve (ICC) of a single item; under the Rasch Model all curves share the same slope and differ only in their horizontal position, which reflects item difficulty.
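To examine the flagged items more closely, mirt's plot method accepts a which.items argument that restricts the traces to a subset, shown here for the first two misfitting items:

# ICCs for items 1 and 2 only
plot(model_rasch, type = "trace", which.items = c(1, 2))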
Replace or revise the nine misfitting items (1, 2, 4, 5, 8, 11, 13, 14, and 15), as their observed response patterns deviate from the Rasch Model's predictions.