(4, 2, 4, 5, 6, 5, 4, 0, 5, 4, 5, 3, 3, 3, 5, 4, 3, 3, 6, 4, 4, 6, 2, 3, 3, 3, 4, 5, 5, 5, 4, 3, 3, 4, 7, 5, 6, 6, 7, 4, 6, 4, 6, 5, 4, 3, 4, 6, 6, 4).
Estimates from 10 simulated samples of size n = 50:
## khat phat
## [1,] 11.90 0.36
## [2,] 10.89 0.39
## [3,] 19.06 0.23
## [4,] 11.37 0.38
## [5,] 10.64 0.40
## [6,] 9.46 0.45
## [7,] 8.25 0.52
## [8,] 11.21 0.38
## [9,] 7.06 0.61
## [10,] 6.64 0.65
Estimates from 10 simulated samples of size n = 100:
## khat phat
## [1,] 7.78 0.55
## [2,] 9.48 0.45
## [3,] 11.18 0.38
## [4,] 10.15 0.42
## [5,] 8.68 0.50
## [6,] 9.72 0.44
## [7,] 8.86 0.49
## [8,] 9.16 0.47
## [9,] 11.15 0.39
## [10,] 12.51 0.34
Estimates from 10 simulated samples of size n = 250:
## khat phat
## [1,] 10.22 0.42
## [2,] 9.21 0.47
## [3,] 10.91 0.39
## [4,] 10.96 0.39
## [5,] 10.54 0.41
## [6,] 9.36 0.46
## [7,] 11.59 0.37
## [8,] 9.06 0.47
## [9,] 9.88 0.44
## [10,] 9.75 0.44
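The code that produced these tables is not shown here. As a minimal sketch, rows like the ones above could be generated under the assumption that each sample is binomial(k, p) and that \(\hat k\) and \(\hat p\) are the method-of-moments estimators \(\hat p = 1 - s^2/\bar x\) and \(\hat k = \bar x/\hat p\); the true values k = 10 and p = 0.4 used below are illustrative assumptions, not values taken from the output.

```r
# Hedged sketch (not the original code): method-of-moments estimates of the
# binomial parameters k and p from simulated samples.
# True values k = 10 and p = 0.4 are illustrative assumptions.
set.seed(1)

mom_binom <- function(x) {
  xbar <- mean(x)
  s2   <- var(x)
  phat <- 1 - s2 / xbar   # MoM: p-hat = 1 - s^2 / xbar
  khat <- xbar / phat     # MoM: k-hat = xbar / p-hat
  c(khat = khat, phat = phat)
}

# Ten samples of size n = 50, one row of (khat, phat) per sample
round(t(replicate(10, mom_binom(rbinom(50, size = 10, prob = 0.4)))), 2)
```

The method-of-moments fit is used here only because it needs nothing beyond the sample mean and variance; if the original analysis used a different estimator (for example maximum likelihood), the individual estimates would differ somewhat.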
For \(\hat k\):
n = 50: bias = 0.89 and MSE = 23.31
n = 100: bias = 0.56 and MSE = 4.95
n = 250: bias = 0.33 and MSE = 1.49

For \(\hat p\):
n = 50: bias = 0.0288 and MSE = 0.0118
n = 100: bias = 0.022 and MSE = 0.0061
n = 250: bias = 0.0215 and MSE = 0.0025
The \(\hat k\) estimator overestimates k at every sample size n. In contrast, \(\hat p\) shows only very small over- or underestimates of p; its bias and mean squared error (MSE) are approximately 0 at every sample size, so it is much closer to unbiased. For both \(\hat k\) and \(\hat p\), the bias and MSE decrease toward 0 as the sample size increases.
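Under the same assumptions (binomial samples, method-of-moments estimators, illustrative true values k = 10 and p = 0.4, and an assumed number of Monte Carlo replicates), bias and MSE summaries of the kind reported above could be computed along these lines, reusing mom_binom() from the earlier sketch:

```r
# Hedged sketch: Monte Carlo bias and MSE of the MoM estimators across the
# three sample sizes. reps, k, and p are assumptions, not the original settings.
sim_bias_mse <- function(n, k = 10, p = 0.4, reps = 1000) {
  est <- t(replicate(reps, mom_binom(rbinom(n, size = k, prob = p))))
  c(bias_khat = mean(est[, "khat"]) - k,
    mse_khat  = mean((est[, "khat"] - k)^2),
    bias_phat = mean(est[, "phat"]) - p,
    mse_phat  = mean((est[, "phat"] - p)^2))
}

round(sapply(c(`n=50` = 50, `n=100` = 100, `n=250` = 250), sim_bias_mse), 4)
```

Because the method-of-moments \(\hat k = \bar x^2/(\bar x - s^2)\) blows up whenever the sample variance comes close to the sample mean, it can be very unstable in small samples, which may help explain why its MSE is so much larger at n = 50 than at n = 250.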