Question #2. For parts (a) through (c), indicate which of i. through iv. is correct. Justify your answer.
i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
(a) The lasso, relative to least squares, is:
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
Why?: Making a model less flexible increases its bias but can substantially reduce its variance. By shrinking coefficient estimates toward zero and setting some of them exactly to zero (dropping non-essential variables), the lasso produces a model with lower variance and higher bias than least squares, so it improves prediction accuracy when the increase in bias is smaller than the decrease in variance.
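A minimal illustration of this shrinkage on hypothetical simulated data (not part of the original answer; x, y, and lasso.demo are names made up for the demo), showing that the lasso sets some coefficients exactly to zero:
library(glmnet)
set.seed(1)
x = matrix(rnorm(100 * 10), 100, 10)   # 10 predictors, only the first 2 matter
y = 3 * x[, 1] - 2 * x[, 2] + rnorm(100)
lasso.demo = glmnet(x, y, alpha = 1)   # alpha = 1 gives the lasso
coef(lasso.demo, s = 0.5)              # at this penalty, several coefficients are exactly 0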
(b) Repeat (a) for ridge regression relative to least squares.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
Why?: Shrinking coefficient estimates increases a model’s bias while substantially reducing its variance. Unlike the lasso, ridge regression shrinks coefficients toward zero without setting any of them exactly to zero, so every predictor stays in the model and some of the extra variance contributed by non-essential variables is retained. Ridge still yields a model that is more biased and has lower variance than least squares, and the strength of this trade-off depends on the penalty: the larger the penalty, the higher the bias and the lower the variance.
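A minimal sketch on the same kind of hypothetical simulated data (all names are made up for the demo), showing that a larger ridge penalty shrinks every coefficient further toward zero but never sets one exactly to zero:
library(glmnet)
set.seed(1)
x = matrix(rnorm(100 * 10), 100, 10)
y = 3 * x[, 1] - 2 * x[, 2] + rnorm(100)
ridge.demo = glmnet(x, y, alpha = 0)   # alpha = 0 gives ridge regression
cbind(lambda_small = coef(ridge.demo, s = 0.1)[, 1],
      lambda_large = coef(ridge.demo, s = 10)[, 1])   # all non-zero, but much smaller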
(c) Repeat (a) for non-linear methods relative to least squares.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
Why?: Non-linear methods are more flexible than least squares, so they have lower bias but higher variance. They improve prediction accuracy when the increase in variance is smaller than the decrease in bias, which happens when the true relationship between the predictors and the response is substantially non-linear.
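A minimal sketch of this trade-off on hypothetical simulated data with a curved signal (all names below are made up for the demo): the flexible polynomial fit has much lower bias than the straight line, so its test error should typically be lower because the added variance is small relative to the bias it removes:
set.seed(1)
x = runif(200, -2, 2)
y = sin(2 * x) + rnorm(200, sd = 0.3)          # truly non-linear relationship
idx = sample(200, 100)                         # training indices
lin.fit  = lm(y ~ x, subset = idx)             # rigid: low variance, high bias here
poly.fit = lm(y ~ poly(x, 5), subset = idx)    # flexible: higher variance, low bias
mean((y[-idx] - predict(lin.fit,  data.frame(x = x[-idx])))^2)   # test MSE, linear
mean((y[-idx] - predict(poly.fit, data.frame(x = x[-idx])))^2)   # test MSE, polynomial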
Question #9. In this exercise, we will predict the number of applications received using the other variables in the College data set.
library(ISLR2)
attach(College)
(a) Split the data set into a training set and a test set.
set.seed(1)
train = sample(c(TRUE, FALSE), nrow(College), replace = TRUE)
test = (!train)
train.college = College[train, ]
test.college = College[test, ]
(b) Fit a linear model using least squares on the training set, and report the test error obtained.
lm.fit = lm(Apps~., data=train.college)
pred.lm = predict(lm.fit, test.college, type="response")
mse.lm = mean((pred.lm - test.college$Apps)^2)
mse.lm
## [1] 984743.1
(c) Fit a ridge regression model on the training set, with λ chosen by cross-validation. Report the test error obtained.
library(glmnet)
## Loading required package: Matrix
## Loaded glmnet 4.1-3
train.mat = model.matrix(Apps ~ ., data = train.college)
test.mat = model.matrix(Apps ~ ., data = test.college)
grid = 10 ^ seq(4, -2, length = 100)
fit.ridge = glmnet(train.mat, train.college$Apps, alpha = 0, lambda = grid, thresh = 1e-12)
cv.ridge = cv.glmnet(train.mat, train.college$Apps, alpha = 0, lambda = grid, thresh = 1e-12)
best.ridge = cv.ridge$lambda.min
best.ridge
## [1] 0.01
pred.ridge <- predict(fit.ridge, s = best.ridge, newx = test.mat)
mean((pred.ridge - test.college$Apps)^2)
## [1] 984731.2
(d) Fit a lasso model on the training set, with λ chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
fit.lasso <- glmnet(train.mat, train.college$Apps, alpha = 1, lambda = grid, thresh = 1e-12)
cv.lasso <- cv.glmnet(train.mat, train.college$Apps, alpha = 1, lambda = grid, thresh = 1e-12)
best.lasso <- cv.lasso$lambda.min
best.lasso
## [1] 0.01
pred.lasso <- predict(fit.lasso, s = best.lasso, newx = test.mat)
mean((pred.lasso - test.college$Apps)^2)
## [1] 984715.2
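The question also asks for the number of non-zero coefficient estimates, which the output above does not show. A minimal sketch, reusing fit.lasso and best.lasso from above:
lasso.coef = predict(fit.lasso, s = best.lasso, type = "coefficients")
lasso.coef                 # coefficients at the CV-chosen lambda; '.' entries are exactly 0
sum(lasso.coef[-1] != 0)   # number of non-zero coefficient estimates (excluding the intercept)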
(e) Fit a PCR model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.
library(pls)
##
## Attaching package: 'pls'
## The following object is masked from 'package:stats':
##
## loadings
fit.pcr <- pcr(Apps ~ ., data = train.college, scale = TRUE, validation = "CV")
pred.pcr <- predict(fit.pcr, test.college, ncomp = 10)
mean((pred.pcr - test.college$Apps)^2)
## [1] 1682909
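The exercise asks for M chosen by cross-validation, while the code above simply fixes ncomp = 10. A minimal sketch for inspecting the cross-validation results stored by the pls package and checking that choice:
validationplot(fit.pcr, val.type = "MSEP")   # CV estimate of MSE versus number of components
summary(fit.pcr)                             # prints the CV RMSEP for each number of components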
(f) Fit a PLS model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.
fit.pls <- plsr(Apps ~ ., data = train.college, scale = TRUE, validation = "CV")
pred.pls <- predict(fit.pls, test.college, ncomp = 10)
mean((pred.pls - test.college$Apps)^2)
## [1] 994703.4
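As with PCR, the choice ncomp = 10 can be checked against the cross-validation results; a minimal sketch:
validationplot(fit.pls, val.type = "MSEP")   # CV estimate of MSE versus number of components
summary(fit.pls)                             # prints the CV RMSEP for each number of components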
(g) Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?
avg <- mean(test.college$Apps)
lm.rsq <- 1 - mean((pred.lm - test.college$Apps)^2) / mean((avg - test.college$Apps)^2)
ridge.rsq <- 1 - mean((pred.ridge - test.college$Apps)^2) / mean((avg - test.college$Apps)^2)
lasso.rsq <- 1 - mean((pred.lasso - test.college$Apps)^2) / mean((avg - test.college$Apps)^2)
pcr.rsq <- 1 - mean((pred.pcr - test.college$Apps)^2) / mean((avg - test.college$Apps)^2)
pls.rsq <- 1 - mean((pred.pls - test.college$Apps)^2) / mean((avg - test.college$Apps)^2)
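The R² values above are computed but never displayed; a small sketch to print them side by side (only values already computed above):
data.frame(method   = c("lm", "ridge", "lasso", "pcr", "pls"),
           test.rsq = c(lm.rsq, ridge.rsq, lasso.rsq, pcr.rsq, pls.rsq))
Judging from the test MSEs reported above, least squares, ridge, the lasso, and PLS predict the number of applications about equally well (test MSEs near 9.8–9.9 × 10^5), while PCR with 10 components does noticeably worse (about 1.68 × 10^6).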
Question 11. We will now try to predict per capita crime rate in the Boston data set.
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:ISLR2':
##
## Boston
library(leaps)
data(Boston)
set.seed(1)
(a) Try out some of the regression methods explored in this chapter, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider.
# helper so that predict() works on regsubsets objects: rebuild the model matrix
# from the stored formula and multiply by the coefficients of the id-variable model
predict.regsubsets = function(object, newdata, id, ...) {
  form = as.formula(object$call[[2]])
  mat = model.matrix(form, newdata)
  coefi = coef(object, id = id)
  mat[, names(coefi)] %*% coefi
}
k = 10
p = ncol(Boston) - 1
folds = sample(rep(1:k, length = nrow(Boston)))
cv.errors = matrix(NA, k, p)
# 10-fold CV: for each fold, fit best subset models of every size on the
# remaining folds and record the MSE on the held-out fold
for (i in 1:k) {
  best.fit = regsubsets(crim ~ ., data = Boston[folds != i, ], nvmax = p)
  for (j in 1:p) {
    pred = predict(best.fit, Boston[folds == i, ], id = j)
    cv.errors[i, j] = mean((Boston$crim[folds == i] - pred)^2)
  }
}
cv.means = sqrt(apply(cv.errors, 2, mean))  # CV RMSE for each model size
plot(cv.means, type = "b", xlab = "Number of variables", ylab = "CV RMSE")
which.min(cv.means)
## [1] 9
best.subset=cv.means[which.min(cv.means)]
best.subset
## [1] 6.543281
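To see which predictors the nine-variable model actually uses, one can refit best subset selection on the full data set and inspect its coefficients; a minimal sketch (full.fit is a name introduced here, not part of the original code):
full.fit = regsubsets(crim ~ ., data = Boston, nvmax = p)
coef(full.fit, id = which.min(cv.means))   # coefficients of the CV-selected 9-variable model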
x = model.matrix(crim ~ . - 1, data = Boston)
y = Boston$crim
cv.lasso = cv.glmnet(x, y, type.measure = "mse")
cv.lasso
##
## Call: cv.glmnet(x = x, y = y, type.measure = "mse")
##
## Measure: Mean-Squared Error
##
## Lambda Index Measure SE Nonzero
## min 0.207 36 44.84 18.13 7
## 1se 4.066 4 62.75 23.76 1
coef(cv.lasso)
## 14 x 1 sparse Matrix of class "dgCMatrix"
## s1
## (Intercept) 2.176491
## zn .
## indus .
## chas .
## nox .
## rm .
## age .
## dis .
## rad 0.150484
## tax .
## ptratio .
## black .
## lstat .
## medv .
lasso.rmse=sqrt(cv.lasso$cvm[cv.lasso$lambda == cv.lasso$lambda.1se])
lasso.rmse
## [1] 7.921353
x = model.matrix(crim ~ . - 1, data = Boston)
y = Boston$crim
cv.ridge = cv.glmnet(x, y, type.measure = "mse", alpha = 0)
plot(cv.ridge)
coef(cv.ridge)
## 14 x 1 sparse Matrix of class "dgCMatrix"
## s1
## (Intercept) 1.523899542
## zn -0.002949852
## indus 0.029276741
## chas -0.166526007
## nox 1.874769665
## rm -0.142852604
## age 0.006207995
## dis -0.094547258
## rad 0.045932737
## tax 0.002086668
## ptratio 0.071258052
## black -0.002605281
## lstat 0.035745604
## medv -0.023480540
ridge_rmse=sqrt(cv.ridge$cvm[cv.ridge$lambda == cv.ridge$lambda.1se])
ridge_rmse
## [1] 7.669133
(b) Propose a model (or set of models) that seem to perform well on this data set, and justify your answer. Make sure that you are evaluating model performance using validation set error, cross-validation, or some other reasonable alternative, as opposed to using training error.
all.models = rbind(best.subset, lasso.rmse, ridge_rmse)
all.models
## [,1]
## best.subset 6.543281
## lasso.rmse 7.921353
## ridge_rmse 7.669133
(c) Does your chosen model involve all of the features in the data set? Why or why not?
No. Based on the cross-validated RMSEs in (b), the best-performing model is the best subset model, which uses 9 of the 13 predictors. It leaves out the features that add little predictive value once the selected variables are included, since keeping them would increase variance without a meaningful reduction in bias.