Problem 1: Create a rasch() model based on the data, fitting a single discriminability parameter for the entire test and different difficulty parameters for each question. Be sure to include only the questions, and not the participant code, in the model.
Solution: In the fitted Rasch model, the item with the most negative difficulty coefficient is the easiest and the item with the largest coefficient is the hardest: B1 (-1.39) is the easiest question and B4 (1.32) the hardest, with a single shared discrimination of 1.15.
B1 is arguably redundant: it is so easy that it separates only the very lowest-ability respondents and adds little to the test.
In item.fit, the alternative hypothesis is that an item does not fit the model, so B1 is the only item whose misfit is not significant (p = 0.0841 > 0.05); every other item shows significant misfit (p < 0.05).
In margins, only the pair (6,7), i.e. (C2, C3), is flagged: its chi-squared residual of 6.92 in the (1,0) cell exceeds the 3.5 threshold.
In person.fit, response pattern 23 (0 1 1 1 0 0 0) fits worst, with the smallest p-value (0.0005), followed by pattern 9 (p = 0.002); both answered hard items such as B4 correctly while missing the easy B1.
Code:
library(ltm)
## Warning: package 'ltm' was built under R version 3.6.2
## Loading required package: MASS
## Loading required package: msm
## Warning: package 'msm' was built under R version 3.6.2
## Loading required package: polycor
## Warning: package 'polycor' was built under R version 3.6.2
library(reshape2)
library(ggplot2)
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:MASS':
##
## select
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
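# Read the BNT/CRT responses; columns 2-8 hold the seven questions
# (column 1 is the participant code, which we drop), then order
# respondents by their total score.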
data <- read.csv("bntcrt.csv")
answers <- data[,2:8]
total <- rowSums(answers)
answers <- answers[order(total),]
v1<-rasch(answers)
summary(v1)
##
## Call:
## rasch(data = answers)
##
## Model Summary:
## log.Lik AIC BIC
## -1121.318 2258.636 2287.453
##
## Coefficients:
## value std.err z.vals
## Dffclt.B1 -1.3894 0.1791 -7.7563
## Dffclt.B2 -0.4069 0.1380 -2.9488
## Dffclt.B3 0.6383 0.1448 4.4086
## Dffclt.B4 1.3199 0.1758 7.5064
## Dffclt.C1 0.1338 0.1347 0.9938
## Dffclt.C2 0.1175 0.1345 0.8733
## Dffclt.C3 -0.9935 0.1577 -6.3005
## Dscrmn 1.1451 0.0998 11.4765
##
## Integration:
## method: Gauss-Hermite
## quadrature points: 21
##
## Optimization:
## Convergence: 0
## max(|grad|): 0.00074
## quasi-Newton: BFGS
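# As a quick check (not in the original output), the items can be ranked
# from easiest to hardest; coef() on a rasch fit returns the
# difficulty/discrimination matrix:
sort(coef(v1)[, "Dffclt"])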
item.fit(v1)
##
## Item-Fit Statistics and P-values
##
## Call:
## rasch(data = answers)
##
## Alternative: Items do not fit the model
## Ability Categories: 10
##
## X^2 Pr(>X^2)
## B1 8.2110 0.0841
## B2 14.7782 0.0052
## B3 16.7186 0.0022
## B4 37.6743 <0.0001
## C1 33.0166 <0.0001
## C2 32.0904 <0.0001
## C3 51.5432 <0.0001
margins(v1)
##
## Call:
## rasch(data = answers)
##
## Fit on the Two-Way Margins
##
## Response: (0,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 6 7 66 52.73 3.34
## 2 1 2 28 34.16 1.11
## 3 1 3 40 46.56 0.92
##
## Response: (1,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 6 7 12 25.21 6.92 ***
## 2 3 4 55 64.88 1.50
## 3 5 7 19 24.95 1.42
##
## Response: (0,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 1 3 19 12.47 3.42
## 2 3 4 20 29.72 3.18
## 3 2 3 36 27.25 2.81
##
## Response: (1,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 3 4 42 32.40 2.84
## 2 6 7 116 103.09 1.62
## 3 2 3 61 70.03 1.17
##
## '***' denotes a chi-squared residual greater than 3.5
person.fit(v1)
##
## Person-Fit Statistics and P-values
##
## Call:
## rasch(data = answers)
##
## Alternative: Inconsistent response pattern under the estimated model
##
## B1 B2 B3 B4 C1 C2 C3 L0 Lz Pr(<Lz)
## 1 0 0 0 0 0 0 0 -1.6370 0.9711 0.8343
## 2 0 0 0 0 0 0 1 -2.6414 0.7417 0.7709
## 3 0 0 0 0 0 1 1 -4.1499 -0.1634 0.4351
## 4 0 0 0 0 1 1 1 -5.0106 -0.8135 0.208
## 5 0 0 1 0 0 0 0 -4.5099 -0.7241 0.2345
## 6 0 0 1 0 0 1 1 -5.5882 -1.3567 0.0874
## 7 0 0 1 0 1 0 0 -6.0371 -1.8152 0.0347
## 8 0 0 1 0 1 1 1 -5.8253 -1.5897 0.056
## 9 0 0 1 1 1 0 1 -7.2022 -2.8825 0.002
## 10 0 1 0 0 0 0 0 -3.3131 0.2148 0.5851
## 11 0 1 0 0 0 0 1 -3.5494 0.3622 0.6414
## 12 0 1 0 0 0 1 1 -4.3913 -0.2312 0.4086
## 13 0 1 0 0 1 0 0 -4.8402 -0.7676 0.2214
## 14 0 1 0 0 1 0 1 -4.4101 -0.2487 0.4018
## 15 0 1 0 0 1 1 0 -5.6822 -1.4451 0.0742
## 16 0 1 0 0 1 1 1 -4.6285 -0.4659 0.3206
## 17 0 1 0 1 0 0 0 -6.1984 -1.9564 0.0252
## 18 0 1 0 1 0 1 1 -5.9867 -1.7412 0.0408
## 19 0 1 0 1 1 0 1 -6.0054 -1.7588 0.0393
## 20 0 1 1 0 0 0 0 -5.4178 -1.2732 0.1015
## 21 0 1 1 0 0 0 1 -4.9877 -0.7919 0.2142
## 22 0 1 1 0 1 1 1 -4.8183 -0.7781 0.2183
## 23 0 1 1 1 0 0 0 -7.6367 -3.2831 0.0005
## 24 0 1 1 1 0 1 1 -6.1765 -1.9570 0.0252
## 25 1 0 0 0 0 0 0 -2.1881 1.0974 0.8638
## 26 1 0 0 0 0 0 1 -2.4244 1.3469 0.911
## 27 1 0 0 0 0 1 0 -3.6965 0.2334 0.5923
## 28 1 0 0 0 0 1 1 -3.2663 0.8268 0.7958
## 29 1 0 0 0 1 0 0 -3.7152 0.2171 0.5859
## 30 1 0 0 0 1 0 1 -3.2850 0.8092 0.7908
## 31 1 0 0 0 1 1 0 -4.5572 -0.3871 0.3493
## 32 1 0 0 0 1 1 1 -3.5035 0.5904 0.7225
## 33 1 0 0 1 0 0 1 -4.6432 -0.4680 0.3199
## 34 1 0 0 1 0 1 1 -4.8617 -0.6849 0.2467
## 35 1 0 0 1 1 0 1 -4.8804 -0.7024 0.2412
## 36 1 0 0 1 1 1 1 -4.4739 -0.4791 0.3159
## 37 1 0 1 0 0 0 0 -4.2928 -0.2885 0.3865
## 38 1 0 1 0 0 0 1 -3.8627 0.2660 0.6049
## 39 1 0 1 0 0 1 1 -4.0811 0.0481 0.5192
## 40 1 0 1 0 1 0 1 -4.0998 0.0305 0.5122
## 41 1 0 1 0 1 1 1 -3.6933 0.1984 0.5786
## 42 1 0 1 1 0 1 1 -5.0515 -0.9805 0.1634
## 43 1 0 1 1 1 0 1 -5.0702 -0.9967 0.1594
## 44 1 0 1 1 1 1 1 -3.9915 -0.3677 0.3566
## 45 1 1 0 0 0 0 0 -3.0960 0.7591 0.7761
## 46 1 1 0 0 0 0 1 -2.6658 1.3915 0.918
## 47 1 1 0 0 0 1 0 -3.9380 0.1952 0.5774
## 48 1 1 0 0 0 1 1 -2.8843 1.1718 0.8794
## 49 1 1 0 0 1 0 0 -3.9567 0.1776 0.5705
## 50 1 1 0 0 1 0 1 -2.9030 1.1543 0.8758
## 51 1 1 0 0 1 1 0 -4.1751 -0.0402 0.484
## 52 1 1 0 0 1 1 1 -2.4965 1.2372 0.892
## 53 1 1 0 1 0 0 0 -5.3149 -1.0996 0.1357
## 54 1 1 0 1 0 0 1 -4.2611 -0.1210 0.4518
## 55 1 1 0 1 0 1 1 -3.8547 0.0583 0.5233
## 56 1 1 0 1 1 0 0 -5.5520 -1.3331 0.0913
## 57 1 1 0 1 1 0 1 -3.8734 0.0421 0.5168
## 58 1 1 0 1 1 1 0 -5.1455 -1.0621 0.1441
## 59 1 1 0 1 1 1 1 -2.7947 0.5552 0.7106
## 60 1 1 1 0 0 0 0 -4.5343 -0.3656 0.3573
## 61 1 1 1 0 0 0 1 -3.4806 0.6119 0.7297
## 62 1 1 1 0 0 1 1 -3.0741 0.7358 0.7691
## 63 1 1 1 0 1 0 0 -4.7714 -0.6001 0.2742
## 64 1 1 1 0 1 0 1 -3.0928 0.7196 0.7641
## 65 1 1 1 0 1 1 1 -2.0141 1.1570 0.8764
## 66 1 1 1 1 0 0 1 -4.4510 -0.4593 0.323
## 67 1 1 1 1 0 1 0 -5.7231 -1.5635 0.059
## 68 1 1 1 1 0 1 1 -3.3723 0.1098 0.5437
## 69 1 1 1 1 1 0 0 -5.7418 -1.5797 0.0571
## 70 1 1 1 1 1 0 1 -3.3910 0.0954 0.538
## 71 1 1 1 1 1 1 1 -1.5296 0.9568 0.8307
Problem 2: Create a constrained rasch model with the discriminability parameter identified in the previous model. That is, if the discriminability was 1.5, use constraint=cbind(8,1.5) in the argument. Then, create a second model with a fixed constraint of 3.0, which would indicate much sharper discriminability. Examine this new model via item.fit, margins, and person.fit. Again, describe what happens to the worst-fit few people, and identify if there are any substantial mis-fits. Finally, compare the two models using anova and AIC, and discuss the model comparison.
Fit an ltm model with one component, allowing each question to have its own discriminability. Plot the item curves, and examine item.fit, margins(), and person.fit() to determine whether the model fits well and whether there are any problems. Discuss the different questions in terms of difficulty and discriminability, and discuss whether the B and C questions appear to differ in systematic ways. Which are harder? Which are more discriminative?
Solution:
First, note a problem with the constraint as coded: in a seven-item Rasch model the discrimination is parameter 8 (as the problem statement's cbind(8, 1.5) example indicates), but the code uses cbind(7, ...), which instead fixes the seventh easiness parameter (C3) on the model's underlying linear scale. This is visible in the output: model2's Dscrmn is estimated at 1.499 rather than fixed at 3, and Dffclt.C3 = -3/1.499 = -2.001.
With the models as fitted: in model2, B1 is again the only item without significant misfit (p = 0.1337), while C3 now misfits drastically (X^2 = 212.6). The two-way margins of model2 fit badly; every listed pair is flagged, with chi-squared residuals up to 110, far worse than model1. The worst-fitting respondents become even more extreme, e.g. pattern 23 (0 1 1 1 0 0 0) drops from p = 0.0005 under model1 to p < 0.0001. Both anova() and AIC() favor model1 (AIC 2256.6 versus 2353.4); the anova() warning reflects that the two fits are not nested in the usual sense.
In the one-factor ltm model (model3), B1 is again the only item without significant misfit (p = 0.0913). The margins flag the pairs (3,4) and (2,4), i.e. (B3, B4) and (B2, B4), whose residuals just exceed the 3.5 threshold.
Code:
model1 <- rasch(answers, constraint =cbind(7,1.1451))
model1
##
## Call:
## rasch(data = answers, constraint = cbind(7, 1.1451))
##
## Coefficients:
## Dffclt.B1 Dffclt.B2 Dffclt.B3 Dffclt.B4 Dffclt.C1 Dffclt.C2 Dffclt.C3
## -1.390 -0.408 0.637 1.318 0.133 0.116 -0.999
## Dscrmn
## 1.146
##
## Log.Lik: -1121.319
model2<-rasch(answers, constraint =cbind(7,3))
model2
##
## Call:
## rasch(data = answers, constraint = cbind(7, 3))
##
## Coefficients:
## Dffclt.B1 Dffclt.B2 Dffclt.B3 Dffclt.B4 Dffclt.C1 Dffclt.C2 Dffclt.C3
## -1.411 -0.595 0.279 0.851 -0.143 -0.157 -2.001
## Dscrmn
## 1.499
##
## Log.Lik: -1169.7
item.fit(model2)
##
## Item-Fit Statistics and P-values
##
## Call:
## rasch(data = answers, constraint = cbind(7, 3))
##
## Alternative: Items do not fit the model
## Ability Categories: 10
##
## X^2 Pr(>X^2)
## B1 7.0425 0.1337
## B2 9.6618 0.0465
## B3 9.9548 0.0412
## B4 27.0373 <0.0001
## C1 22.8072 0.0001
## C2 20.8802 0.0003
## C3 212.6378 <0.0001
margins(model2)
##
## Call:
## rasch(data = answers, constraint = cbind(7, 3))
##
## Fit on the Two-Way Margins
##
## Response: (0,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 6 7 66 19.57 110.12 ***
## 2 4 7 69 23.81 85.77 ***
## 3 5 7 59 19.65 78.79 ***
##
## Response: (1,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 2 7 39 9.37 93.73 ***
## 2 1 7 49 15.30 74.24 ***
## 3 3 7 17 4.38 36.40 ***
##
## Response: (0,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 1 3 19 9.26 10.25 ***
## 2 2 3 36 22.71 7.78 ***
## 3 6 7 77 104.70 7.33 ***
##
## Response: (1,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 2 7 121 167.69 13.00 ***
## 2 2 3 61 92.84 10.92 ***
## 3 3 6 53 81.77 10.12 ***
##
## '***' denotes a chi-squared residual greater than 3.5
person.fit(model2)
##
## Person-Fit Statistics and P-values
##
## Call:
## rasch(data = answers, constraint = cbind(7, 3))
##
## Alternative: Inconsistent response pattern under the estimated model
##
## B1 B2 B3 B4 C1 C2 C3 L0 Lz Pr(<Lz)
## 1 0 0 0 0 0 0 0 -1.5829 0.6360 0.7376
## 2 0 0 0 0 0 0 1 -1.7702 0.9251 0.8225
## 3 0 0 0 0 0 1 1 -3.7137 -0.1531 0.4392
## 4 0 0 0 0 1 1 1 -4.8343 -0.8832 0.1886
## 5 0 0 1 0 0 0 0 -5.1891 -1.4755 0.07
## 6 0 0 1 0 0 1 1 -5.4675 -1.3931 0.0818
## 7 0 0 1 0 1 0 0 -7.1531 -2.6887 0.0036
## 8 0 0 1 0 1 1 1 -5.8292 -1.7897 0.0367
## 9 0 0 1 1 1 0 1 -7.3398 -3.0883 0.001
## 10 0 1 0 0 0 0 0 -3.8786 -0.5554 0.2893
## 11 0 1 0 0 0 0 1 -3.0569 0.3311 0.6297
## 12 0 1 0 0 0 1 1 -4.1570 -0.3378 0.3677
## 13 0 1 0 0 1 0 0 -5.8426 -1.7226 0.0425
## 14 0 1 0 0 1 0 1 -4.1775 -0.3543 0.3615
## 15 0 1 0 0 1 1 0 -6.9428 -2.5810 0.0049
## 16 0 1 0 0 1 1 1 -4.5187 -0.6632 0.2536
## 17 0 1 0 1 0 0 0 -7.3328 -2.8212 0.0024
## 18 0 1 0 1 0 1 1 -6.0088 -1.9442 0.0259
## 19 0 1 0 1 1 0 1 -6.0293 -1.9618 0.0249
## 20 0 1 1 0 0 0 0 -6.4758 -2.1894 0.0143
## 21 0 1 1 0 0 0 1 -4.8107 -0.8642 0.1937
## 22 0 1 1 0 1 1 1 -4.7709 -1.0615 0.1442
## 23 0 1 1 1 0 0 0 -9.0866 -4.3072 <0.0001
## 24 0 1 1 1 0 1 1 -6.2611 -2.3081 0.0105
## 25 1 0 0 0 0 0 0 -2.6543 0.3043 0.6196
## 26 1 0 0 0 0 0 1 -1.8325 1.2337 0.8913
## 27 1 0 0 0 0 1 0 -4.5978 -0.8049 0.2104
## 28 1 0 0 0 0 1 1 -2.9327 0.6480 0.7415
## 29 1 0 0 0 1 0 0 -4.6183 -0.8200 0.2061
## 30 1 0 0 0 1 0 1 -2.9532 0.6315 0.7361
## 31 1 0 0 0 1 1 0 -5.7184 -1.5951 0.0553
## 32 1 0 0 0 1 1 1 -3.2943 0.3893 0.6515
## 33 1 0 0 1 0 0 1 -4.4434 -0.5684 0.2849
## 34 1 0 0 1 0 1 1 -4.7845 -0.8917 0.1863
## 35 1 0 0 1 1 0 1 -4.8050 -0.9093 0.1816
## 36 1 0 0 1 1 1 1 -4.4036 -0.7542 0.2254
## 37 1 0 1 0 0 0 0 -5.2515 -1.2868 0.0991
## 38 1 0 1 0 0 0 1 -3.5864 0.1216 0.5484
## 39 1 0 1 0 0 1 1 -3.9276 -0.1551 0.4384
## 40 1 0 1 0 1 0 1 -3.9480 -0.1727 0.4315
## 41 1 0 1 0 1 1 1 -3.5466 -0.0373 0.4851
## 42 1 0 1 1 0 1 1 -5.0368 -1.2839 0.0996
## 43 1 0 1 1 1 0 1 -5.0573 -1.3010 0.0966
## 44 1 0 1 1 1 1 1 -3.8463 -0.6537 0.2567
## 45 1 1 0 0 0 0 0 -3.9410 -0.3207 0.3742
## 46 1 1 0 0 0 0 1 -2.2759 1.1769 0.8804
## 47 1 1 0 0 0 1 0 -5.0412 -1.0497 0.1469
## 48 1 1 0 0 0 1 1 -2.6171 0.9715 0.8343
## 49 1 1 0 0 1 0 0 -5.0617 -1.0662 0.1432
## 50 1 1 0 0 1 0 1 -2.6375 0.9539 0.8299
## 51 1 1 0 0 1 1 0 -5.4028 -1.4232 0.0773
## 52 1 1 0 0 1 1 1 -2.2361 1.0590 0.8552
## 53 1 1 0 1 0 0 0 -6.5518 -2.2662 0.0117
## 54 1 1 0 1 0 0 1 -4.1277 -0.3271 0.3718
## 55 1 1 0 1 0 1 1 -3.7263 -0.1876 0.4256
## 56 1 1 0 1 1 0 0 -6.9134 -2.7218 0.0032
## 57 1 1 0 1 1 0 1 -3.7468 -0.2047 0.4189
## 58 1 1 0 1 1 1 0 -6.5120 -2.5180 0.0059
## 59 1 1 0 1 1 1 1 -2.5358 0.3343 0.6309
## 60 1 1 1 0 0 0 0 -5.6949 -1.5761 0.0575
## 61 1 1 1 0 0 0 1 -3.2708 0.4095 0.6589
## 62 1 1 1 0 0 1 1 -2.8693 0.5293 0.7017
## 63 1 1 1 0 1 0 0 -6.0565 -1.9851 0.0236
## 64 1 1 1 0 1 0 1 -2.8898 0.5122 0.6957
## 65 1 1 1 0 1 1 1 -1.6789 0.9803 0.8365
## 66 1 1 1 1 0 0 1 -4.3800 -0.7344 0.2313
## 67 1 1 1 1 0 1 0 -7.1452 -3.0477 0.0012
## 68 1 1 1 1 0 1 1 -3.1690 -0.1431 0.4431
## 69 1 1 1 1 1 0 0 -7.1657 -3.0649 0.0011
## 70 1 1 1 1 1 0 1 -3.1895 -0.1585 0.437
## 71 1 1 1 1 1 1 1 -0.9559 0.8287 0.7964
anova(model1,model2)
## Warning in anova.rasch(model1, model2): either the two models are not nested or the model represented by 'object2' fell on a local maxima.
##
## Likelihood Ratio Table
## AIC BIC log.Lik LRT df p.value
## model1 2256.64 2281.85 -1121.32
## model2 2353.40 2378.62 -1169.70 -96.76 0 1
AIC(model1,model2)
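# Per the problem statement, the discrimination is parameter 8 in this
# seven-item Rasch model, so the intended constrained fits would look
# like the following (a sketch; model1b/model2b are new names and their
# output is not part of the original):
model1b <- rasch(answers, constraint = cbind(8, 1.1451))
model2b <- rasch(answers, constraint = cbind(8, 3))
anova(model1b, model2b)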
model3<-ltm(answers~z1)
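# The problem also asks for the item curves of this one-factor model;
# the following calls (outputs not shown in the original) would plot the
# ICCs and list each item's difficulty and discrimination for the
# B-versus-C comparison:
plot(model3, type = "ICC")
coef(model3)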
item.fit(model3)
##
## Item-Fit Statistics and P-values
##
## Call:
## ltm(formula = answers ~ z1)
##
## Alternative: Items do not fit the model
## Ability Categories: 10
##
## X^2 Pr(>X^2)
## B1 13.6509 0.0913
## B2 18.0131 0.0211
## B3 29.5962 0.0002
## B4 37.3032 <0.0001
## C1 34.1678 <0.0001
## C2 35.1342 <0.0001
## C3 37.1948 <0.0001
margins(model3)
##
## Call:
## ltm(formula = answers ~ z1)
##
## Fit on the Two-Way Margins
##
## Response: (0,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 3 4 154 142.47 0.93
## 2 2 4 101 92.64 0.75
## 3 2 7 39 43.30 0.43
##
## Response: (1,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 3 4 55 66.36 1.94
## 2 4 7 9 5.78 1.79
## 3 3 6 44 38.34 0.84
##
## Response: (0,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 3 4 20 31.47 4.18 ***
## 2 2 4 10 18.34 3.79 ***
## 3 1 6 23 19.14 0.78
##
## Response: (1,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 3 4 42 30.70 4.16 ***
## 2 2 4 52 43.83 1.52
## 3 3 6 53 58.73 0.56
##
## '***' denotes a chi-squared residual greater than 3.5
person.fit(model3)
##
## Person-Fit Statistics and P-values
##
## Call:
## ltm(formula = answers ~ z1)
##
## Alternative: Inconsistent response pattern under the estimated model
##
## B1 B2 B3 B4 C1 C2 C3 L0 Lz Pr(<Lz)
## 1 0 0 0 0 0 0 0 -1.8958 0.6833 0.7528
## 2 0 0 0 0 0 0 1 -3.4974 0.3285 0.6287
## 3 0 0 0 0 0 1 1 -4.6324 -0.4910 0.3117
## 4 0 0 0 0 1 1 1 -4.8114 -0.7169 0.2367
## 5 0 0 1 0 0 0 0 -3.9267 -0.4613 0.3223
## 6 0 0 1 0 0 1 1 -5.3720 -1.1894 0.1171
## 7 0 0 1 0 1 0 0 -5.6480 -1.5460 0.0611
## 8 0 0 1 0 1 1 1 -5.1802 -1.1461 0.1259
## 9 0 0 1 1 1 0 1 -6.8229 -2.5714 0.0051
## 10 0 1 0 0 0 0 0 -2.6689 0.4088 0.6587
## 11 0 1 0 0 0 0 1 -3.5712 0.3870 0.6506
## 12 0 1 0 0 0 1 1 -4.3114 -0.2122 0.416
## 13 0 1 0 0 1 0 0 -4.4610 -0.5856 0.2791
## 14 0 1 0 0 1 0 1 -4.2513 -0.1422 0.4435
## 15 0 1 0 0 1 1 0 -5.8977 -1.6545 0.049
## 16 0 1 0 0 1 1 1 -4.1817 -0.2349 0.4071
## 17 0 1 0 1 0 0 0 -5.9871 -1.8306 0.0336
## 18 0 1 0 1 0 1 1 -5.5875 -1.5033 0.0664
## 19 0 1 0 1 1 0 1 -5.8008 -1.6350 0.051
## 20 0 1 1 0 0 0 0 -4.4157 -0.6535 0.2567
## 21 0 1 1 0 0 0 1 -4.6133 -0.4823 0.3148
## 22 0 1 1 0 1 1 1 -4.3327 -0.5183 0.3021
## 23 0 1 1 1 0 0 0 -7.3093 -2.9390 0.0016
## 24 0 1 1 1 0 1 1 -5.7103 -1.7205 0.0427
## 25 1 0 0 0 0 0 0 -1.7558 1.0975 0.8638
## 26 1 0 0 0 0 0 1 -2.5973 1.2652 0.8971
## 27 1 0 0 0 0 1 0 -3.9996 -0.0814 0.4676
## 28 1 0 0 0 0 1 1 -3.2993 0.7101 0.7612
## 29 1 0 0 0 1 0 0 -3.5122 0.2090 0.5828
## 30 1 0 0 0 1 0 1 -3.2509 0.7726 0.7801
## 31 1 0 0 0 1 1 0 -4.9103 -0.7550 0.2251
## 32 1 0 0 0 1 1 1 -3.1386 0.6861 0.7537
## 33 1 0 0 1 0 0 1 -4.7097 -0.5630 0.2867
## 34 1 0 0 1 0 1 1 -4.5422 -0.5806 0.2808
## 35 1 0 0 1 1 0 1 -4.7695 -0.7076 0.2396
## 36 1 0 0 1 1 1 1 -3.6423 -0.0932 0.4629
## 37 1 0 1 0 0 0 0 -3.4790 0.0965 0.5384
## 38 1 0 1 0 0 0 1 -3.6233 0.4218 0.6634
## 39 1 0 1 0 0 1 1 -3.8361 0.1416 0.5563
## 40 1 0 1 0 1 0 1 -3.9415 0.1053 0.5419
## 41 1 0 1 0 1 1 1 -3.2675 0.3849 0.6499
## 42 1 0 1 1 0 1 1 -4.6426 -0.8154 0.2074
## 43 1 0 1 1 1 0 1 -5.0548 -1.0743 0.1413
## 44 1 0 1 1 1 1 1 -3.2262 -0.0085 0.4966
## 45 1 1 0 0 0 0 0 -2.2646 0.9996 0.8412
## 46 1 1 0 0 0 0 1 -2.5157 1.4220 0.9225
## 47 1 1 0 0 0 1 0 -4.0504 -0.0265 0.4894
## 48 1 1 0 0 0 1 1 -2.8093 1.0905 0.8623
## 49 1 1 0 0 1 0 0 -3.6837 0.2138 0.5847
## 50 1 1 0 0 1 0 1 -2.8891 1.0771 0.8593
## 51 1 1 0 0 1 1 0 -4.6864 -0.5410 0.2943
## 52 1 1 0 0 1 1 1 -2.3097 1.2488 0.8941
## 53 1 1 0 1 0 0 0 -5.1896 -1.0866 0.1386
## 54 1 1 0 1 0 0 1 -4.3292 -0.2452 0.4031
## 55 1 1 0 1 0 1 1 -3.6896 0.0329 0.5131
## 56 1 1 0 1 1 0 0 -6.0182 -1.7586 0.0393
## 57 1 1 0 1 1 0 1 -4.0705 -0.1820 0.4278
## 58 1 1 0 1 1 1 0 -6.1842 -1.9921 0.0232
## 59 1 1 0 1 1 1 1 -2.3622 0.7220 0.7649
## 60 1 1 1 0 0 0 0 -3.7683 0.0373 0.5149
## 61 1 1 1 0 0 0 1 -3.3720 0.6596 0.7452
## 62 1 1 1 0 0 1 1 -3.1453 0.6447 0.7404
## 63 1 1 1 0 1 0 0 -4.8375 -0.7104 0.2387
## 64 1 1 1 0 1 0 1 -3.3917 0.5265 0.7007
## 65 1 1 1 0 1 1 1 -2.1926 1.0796 0.8598
## 66 1 1 1 1 0 0 1 -4.8078 -0.7631 0.2227
## 67 1 1 1 1 0 1 0 -6.8005 -2.5134 0.006
## 68 1 1 1 1 0 1 1 -3.5403 -0.0583 0.4768
## 69 1 1 1 1 1 0 0 -6.8221 -2.5046 0.0061
## 70 1 1 1 1 1 0 1 -4.1289 -0.4028 0.3435
## 71 1 1 1 1 1 1 1 -1.6170 0.9616 0.8319
Problem 3:
The B and C questions are hypothesized to measure different information. Fit a two-factor LTM model (~z1+z2). Examine the margins, and use anova() and AIC() to compare this to the one-factor latent model. Does the two-factor model appear to improve on the smaller model? Examine the model with plot(model, type="loadings"), which will show where each question fits in the two-dimensional space. Do you see B and C questions falling on different dimensions?
Solution: The two-way margins of model4 show no flagged pairs (all chi-squared residuals are well below the 3.5 threshold), so the two-factor model reproduces the pairwise margins well. It improves on the one-factor model: the likelihood-ratio test is significant (LRT = 26.79, df = 7, p < 0.001) and AIC drops from 2247.0 to 2234.2, although BIC (2297.4 versus 2309.9) still favors the smaller model. In the loadings plot, the B and C questions fall on different dimensions.
Code:
model4<-ltm(answers~z1+z2)
margins(model4)
##
## Call:
## ltm(formula = answers ~ z1 + z2)
##
## Fit on the Two-Way Margins
##
## Response: (0,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 1 6 36 40.35 0.47
## 2 2 3 75 80.58 0.39
## 3 1 7 29 26.93 0.16
##
## Response: (1,0)
## Item i Item j Obs Exp (O-E)^2/E
## 1 2 3 99 93.05 0.38
## 2 3 7 17 19.12 0.23
## 3 1 6 107 102.65 0.18
##
## Response: (0,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 2 3 36 30.16 1.13
## 2 1 6 23 18.64 1.02
## 3 2 4 10 11.86 0.29
##
## Response: (1,1)
## Item i Item j Obs Exp (O-E)^2/E
## 1 2 3 61 67.20 0.57
## 2 1 6 105 109.37 0.17
## 3 3 6 53 55.96 0.16
anova(model3,model4)
##
## Likelihood Ratio Table
## AIC BIC log.Lik LRT df p.value
## model3 2247.01 2297.44 -1109.50
## model4 2234.21 2309.86 -1096.11 26.79 7 <0.001
AIC(model3,model4)
plot(model4,type="loadings")
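# The loadings can also be read numerically (not in the original
# output): coef(model4) lists each item's coefficients on z1 and z2,
# which should separate the B items from the C items if they load on
# different dimensions.
coef(model4)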
Problem 4: Now, return to the one-factor ltm model from part 2. Plot the total information in the test using plot(type="IIC", items=0). Then compute total information using information(), giving a range of -10 to +10. Suppose we wanted to separate the B and C questions, and determine for each subset whether they are better at discriminating people below average or above average. Compute information in the range (-10,0) and (0,10) for the B questions (1 to 4) and the C questions (5 to 7), and discuss whether each test is more informative for above-average people, below-average people, or if it is balanced.
Solution:
The information in (-10, 0) is the share of the test's total information for below-average abilities, and (0, 10) the share for above-average abilities. The B questions are heavily skewed toward above-average respondents: 94.47% of their information lies above 0 and only 5.53% below. The C questions lean the other way but are closer to balanced: 59% below 0 versus 41% above.
Neither subtest is balanced; the B subtest is informative mainly about above-average people, while the C subtest is somewhat more informative about below-average people.
Code:
plot(model3,type="IIC",items=0)
information(model3,c(-10,10))
##
## Call:
## ltm(formula = answers ~ z1)
##
## Total Information = 9.01
## Information in (-10, 10) = 9.01 (100%)
## Based on all the items
b1answers <- answers[, 1:4] # B questions (B1-B4)
c1answers <- answers[, 5:7] # C questions (C1-C3)
modelb<-ltm(b1answers~z1)
modelc<-ltm(c1answers~z1)
information(modelb,c(-10,0))
##
## Call:
## ltm(formula = b1answers ~ z1)
##
## Total Information = 19.95
## Information in (-10, 0) = 1.1 (5.53%)
## Based on all the items
information(modelb,c(0,10))
##
## Call:
## ltm(formula = b1answers ~ z1)
##
## Total Information = 19.95
## Information in (0, 10) = 18.85 (94.47%)
## Based on all the items
information(modelc,c(-10,0))
##
## Call:
## ltm(formula = c1answers ~ z1)
##
## Total Information = 6.03
## Information in (-10, 0) = 3.56 (59%)
## Based on all the items
information(modelc,c(0,10))
##
## Call:
## ltm(formula = c1answers ~ z1)
##
## Total Information = 6.03
## Information in (0, 10) = 2.47 (41%)
## Based on all the items
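An alternative that avoids refitting separate models is the items argument of information(), applied to the full one-factor fit; a sketch (outputs not shown in the original, and the refit subset models above give somewhat different numbers):
information(model3, c(-10, 0), items = 1:4) # B questions, below average
information(model3, c(0, 10), items = 1:4) # B questions, above average
information(model3, c(-10, 0), items = 5:7) # C questions, below average
information(model3, c(0, 10), items = 5:7) # C questions, above average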
Problem 5:
The CRT questions were open-response, but the BNT questions were multiple-choice, which means that people had a 25% chance of getting these right just by guessing. Use a three-parameter model (tpm) and constrain the guessing parameter to be .25 using constraint=cbind(1:4, 1, .25), for just the BNT questions (answers[,1:4]). Examine the resulting item characteristic curves. Then, make a corresponding ltm model for these four questions (like the model in part 2, but for just the BNT questions). Compare these two models using AIC, and by plotting. Describe the differences in the item curves. Does adding a guessing parameter considerably change the interpretation of these items (consider the difficulty parameters and/or the curves)?
Solution:
AIC shows little difference between the two models; the log-likelihoods are nearly identical (-621.79 for the tpm versus -620.37 for the ltm). Adding a guessing parameter does not considerably change the interpretation of the items: the difficulty estimates are similar in both fits (e.g. B4 at 0.79 versus 0.71), and B4 is extremely discriminative in both (15.5 versus 17.7, the latter with a huge standard error). There is a slight difference in the curves for B3 and B4, but nothing that changes the overall picture.
Code:
model5 <- tpm(answers[,1:4], type = "latent.trait", max.guessing = 0.25,constraint=cbind(1:4,1, .25))
model5
##
## Call:
## tpm(data = answers[, 1:4], type = "latent.trait", constraint = cbind(1:4,
## 1, 0.25), max.guessing = 0.25)
##
## Coefficients:
## Gussng Dffclt Dscrmn
## B1 0.062 -3.403 0.360
## B2 0.062 -0.312 0.927
## B3 0.062 0.766 1.439
## B4 0.062 0.794 15.549
##
## Log.Lik: -621.791
model6<-ltm(answers[,1:4]~z1)
summary(model6)
##
## Call:
## ltm(formula = answers[, 1:4] ~ z1)
##
## Model Summary:
## log.Lik AIC BIC
## -620.3723 1256.745 1285.562
##
## Coefficients:
## value std.err z.vals
## Dffclt.B1 -3.6786 2.1374 -1.7211
## Dffclt.B2 -0.5050 0.2028 -2.4907
## Dffclt.B3 0.6695 0.1809 3.7007
## Dffclt.B4 0.7094 0.3650 1.9435
## Dscrmn.B1 0.3571 0.2146 1.6640
## Dscrmn.B2 0.8323 0.2348 3.5444
## Dscrmn.B3 1.0797 0.2538 4.2543
## Dscrmn.B4 17.7158 205.6412 0.0861
##
## Integration:
## method: Gauss-Hermite
## quadrature points: 21
##
## Optimization:
## Convergence: 0
## max(|grad|): 0.0019
## quasi-Newton: BFGS
AIC(model5,model6)
plot(model5)
grid(10, 10, lty = 6, col = "cornsilk2")
par(new = TRUE)
plot(model6)
grid(10, 10, lty = 6, col = "cornsilk2")
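For a compact numeric comparison of the two fits (a sketch, not part of the original output; it assumes coef() returns the printed coefficient tables as matrices):
# Difficulty estimates side by side: guessing-constrained tpm vs. 2PL ltm
cbind(tpm = coef(model5)[, "Dffclt"], ltm = coef(model6)[, "Dffclt"])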