library(ltm)
library(mirt)
data("LSAT")
head(LSAT)
LSAT.model <- ltm(LSAT ~ z1, IRT.param = TRUE)
coef(LSAT.model)
Dffclt Dscrmn
Item 1 -3.3597341 0.8253715
Item 2 -1.3696497 0.7229499
Item 3 -0.2798983 0.8904748
Item 4 -1.8659189 0.6885502
Item 5 -3.1235725 0.6574516
#Items 1 and 5 are easy (difficulty well below 0); most people are getting them right. Item 3 is close to the mean of the trait. So far there are no difficult items. Discrimination tells us how well an item separates respondents of lower and higher ability; values closer to 1 (or above) are better. These five items are all easy.
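#Since mirt is also loaded, a 2PL fit there can serve as a rough cross-check on these estimates (a sketch; mirt labels the parameters a and b, and small numerical differences between packages are expected).
LSAT.mirt <- mirt(LSAT, 1, itemtype = "2PL")
coef(LSAT.mirt, IRTpars = TRUE, simplify = TRUE)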
plot(LSAT.model, type = "ICC")
#The curves sit toward the low end of the trait and are fairly flat, far from what we want for measuring the trait well. Item 3 is the best of the five. We want steeper, more S-shaped (logistic) curves.
plot(LSAT.model, type = "ICC", items = 3)
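#Item information curves show where on the trait each item is most informative; with type = "IIC", setting items = 0 plots the total test information (both are standard options of plot() for ltm fits).
plot(LSAT.model, type = "IIC")
plot(LSAT.model, type = "IIC", items = 0)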
factor.scores(LSAT.model)
Call:
ltm(formula = LSAT ~ z1, IRT.param = TRUE)
Scoring Method: Empirical Bayes
Factor-Scores for observed response patterns:
(table of observed response patterns, their frequencies, and ability estimates omitted)
#The output lists each observed response pattern, the number of respondents showing that pattern, and the Empirical Bayes estimate of each pattern's latent trait score.
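#factor.scores() can also score specific response patterns via resp.patterns; for example, all-incorrect versus all-correct (illustrative patterns, not from the original analysis).
factor.scores(LSAT.model, resp.patterns = rbind(c(0, 0, 0, 0, 0), c(1, 1, 1, 1, 1)))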
LSAT.model2 <- tpm(LSAT, type="latent.trait", IRT.param=TRUE)
#Does adding the guessing parameter help?
coef(LSAT.model2)
Gussng Dffclt Dscrmn
Item 1 0.03738668 -3.2964761 0.8286287
Item 2 0.07770994 -1.1451487 0.7603748
Item 3 0.01178206 -0.2490144 0.9015777
Item 4 0.03529306 -1.7657862 0.7006545
Item 5 0.05315665 -2.9902046 0.6657969
#The guessing estimates are low, pretty close to 0: the items are not very hard, and correct responses are not being driven by guessing.
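#Plotting the 3PL ICCs shows the lower asymptotes (guessing floors) directly; they should sit near 0, consistent with the small Gussng estimates.
plot(LSAT.model2, type = "ICC")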
factor.scores(LSAT.model2)
Call:
tpm(data = LSAT, type = "latent.trait", IRT.param = TRUE)
Scoring Method: Empirical Bayes
Factor-Scores for observed response patterns:
(table of observed response patterns, their frequencies, and ability estimates omitted)
anova(LSAT.model, LSAT.model2)
Likelihood Ratio Table
(AIC, BIC, log-likelihood, and LRT comparison omitted)
Warning message: either the two models are not nested or the model represented by 'object2' fell on a local maxima.
#Go with the model with the lower AIC; here, adding the guessing parameter does not improve fit.
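#If the likelihood ratio table is not handy, the AIC and BIC printed by summary() give the same comparison (the 2PL summary should show the smaller AIC here).
summary(LSAT.model)
summary(LSAT.model2)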