Training Set: QDA is expected to fit the training data better because its extra flexibility lets it chase patterns in the sample even when they are not real. Test Set: LDA is expected to perform better, since a linear decision boundary matches the true Bayes boundary and QDA's added flexibility only increases variance.
QDA is expected to perform better on both the training and test sets, since its quadratic decision boundary can capture the non-linearity of the true Bayes boundary.
It should improve. As n increases, the variance of QDA decreases, leading to better performance relative to LDA.
False. While QDA can model a linear decision boundary, it introduces unnecessary variance when the true boundary is linear. LDA will likely yield a better test error rate in this case.
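As an optional sanity check, here is a minimal simulation sketch of my own (not part of the exercise) using MASS::lda and MASS::qda: when the true boundary is linear, LDA's test error should match or beat QDA's.
library(MASS)
set.seed(1)
n  <- 1000
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- factor(ifelse(x1 + x2 + rnorm(n) > 0, "Yes", "No"))  # linear true boundary
df <- data.frame(x1, x2, y)
train <- sample(n, n / 2)
lda.fit <- lda(y ~ x1 + x2, data = df, subset = train)
qda.fit <- qda(y ~ x1 + x2, data = df, subset = train)
mean(predict(lda.fit, df[-train, ])$class != df$y[-train])  # LDA test error
mean(predict(qda.fit, df[-train, ])$class != df$y[-train])  # QDA test error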
b0 <- -6     # intercept
b1 <- 0.05   # coefficient for hours studied (X1)
b2 <- 1      # coefficient for undergrad GPA (X2)
# P(A | hours = 40, GPA = 3.5) from the fitted logistic model
exp(b0 + b1*40 + b2*3.5) / (1 + exp(b0 + b1*40 + b2*3.5))
## [1] 0.3775407
37.75%
Set p(X) = 0.5. At p(X) = 0.5 the log-odds equals log(0.5/0.5) = 0, so 0 = -6 + 0.05*X1 + 1*3.5, which simplifies to 0.05*X1 = 2.5, i.e. X1 = 50. The student needs to study 50 hours to have a 50% chance of getting an A.
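The same result can be checked in R, reusing b0, b1, and b2 from above (a quick sketch of my own, not part of the original solution):
hours <- (log(0.5 / (1 - 0.5)) - b0 - b2*3.5) / b1  # solve b0 + b1*hours + b2*3.5 = 0
hours
## [1] 50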
KNN's test error is not reported separately, only an average of its training and test error. With K = 1, the training error is 0%, so KNN's test error must be twice the reported average, which works out to be higher than logistic regression's test error. Since test error is what matters for new observations, logistic regression is preferable.
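As a numeric illustration of that argument (the figures below are assumed for illustration, not taken from the text above):
knn_avg_err  <- 0.18                # assumed averaged (train + test) error for 1-NN
log_test_err <- 0.30                # assumed test error for logistic regression
knn_test_err <- 2*knn_avg_err - 0   # 1-NN training error is 0, so test error is twice the average
knn_test_err > log_test_err         # TRUE: prefer logistic regression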
p = odds/(1 + odds) = 0.37/1.37 ≈ 0.27, so about 27% of people with odds of 0.37 of defaulting will in fact default.
odds = p/(1 - p) = 0.16/0.84 ≈ 0.19, so the odds of defaulting are 0.19.
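Both conversions are easy to wrap in small helper functions; here is a short sketch of my own for checking the two parts:
odds_to_prob <- function(odds) odds / (1 + odds)  # probability from odds
prob_to_odds <- function(p) p / (1 - p)           # odds from probability
odds_to_prob(0.37)  # ~0.27: fraction of people with odds 0.37 who default
prob_to_odds(0.16)  # ~0.19: odds for a person with a 16% chance of default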