a) The lasso, relative to least squares, is:
i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
b) Repeat a) for ridge regression relative to least squares.
c) Repeat a) for non-linear methods relative to least squares.
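For parts a) and b), recall that the lasso and ridge regression both add a shrinkage penalty to the least-squares criterion, so both are less flexible than least squares; as \(\lambda\) grows, variance falls and bias rises:
\[
\hat\beta^{\text{lasso}} = \arg\min_\beta \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 + \lambda\sum_{j=1}^{p}|\beta_j|,
\qquad
\hat\beta^{\text{ridge}} = \arg\min_\beta \sum_{i=1}^{n}\Big(y_i - \beta_0 - \sum_{j=1}^{p} x_{ij}\beta_j\Big)^2 + \lambda\sum_{j=1}^{p}\beta_j^2.
\]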
9. In this exercise, we will predict the number of applications received using the other variables in the College data set.
library(ISLR2)
library(glmnet)
sum(is.na(College))
## [1] 0
a) Split the data into a training set and a test set.
set.seed(123)
index <- sample(1:nrow(College), 0.5 * nrow(College))
college_train <- College[index, ]
college_test <- College[-index, ]
b) Fit a linear model using least squares on the training set, and report the test error obtained.
ols_fit <- lm(Apps ~., data = college_train)
summary(ols_fit)
##
## Call:
## lm(formula = Apps ~ ., data = college_train)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2623.9 -472.8 -64.4 319.2 6042.3
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 7.778e+01 5.815e+02 0.134 0.89366
## PrivateYes -8.146e+02 2.002e+02 -4.069 5.77e-05 ***
## Accept 1.350e+00 7.318e-02 18.448 < 2e-16 ***
## Enroll -1.455e-01 2.627e-01 -0.554 0.58006
## Top10perc 3.784e+01 7.111e+00 5.322 1.79e-07 ***
## Top25perc -8.680e+00 5.646e+00 -1.538 0.12502
## F.Undergrad 2.157e-02 4.778e-02 0.451 0.65192
## P.Undergrad -2.767e-03 5.049e-02 -0.055 0.95632
## Outstate -5.351e-02 2.467e-02 -2.169 0.03075 *
## Room.Board 1.709e-01 6.102e-02 2.801 0.00536 **
## Books 5.341e-02 3.183e-01 0.168 0.86682
## Personal -9.869e-02 8.594e-02 -1.148 0.25157
## PhD -6.503e+00 6.215e+00 -1.046 0.29608
## Terminal -6.148e+00 7.156e+00 -0.859 0.39083
## S.F.Ratio -8.663e+00 1.953e+01 -0.443 0.65769
## perc.alumni -8.923e+00 5.544e+00 -1.610 0.10835
## Expend 7.938e-02 1.951e-02 4.068 5.80e-05 ***
## Grad.Rate 1.115e+01 3.966e+00 2.811 0.00520 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 965.7 on 370 degrees of freedom
## Multiple R-squared: 0.9158, Adjusted R-squared: 0.9119
## F-statistic: 236.6 on 17 and 370 DF, p-value: < 2.2e-16
ols_pred <- predict(ols_fit, college_test)
ols_err <- mean((college_test$Apps - ols_pred)^2)
ols_err
## [1] 1373995
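The test MSE is on a squared scale; its square root (not part of the original output) expresses the typical prediction error in applications received:
# test RMSE for the least-squares fit, in units of applications
sqrt(ols_err)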
c) Fit a ridge regression model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained.
train_mat <- model.matrix(Apps ~., data = college_train)[, -1]
test_mat <- model.matrix(Apps ~., data = college_test)[, -1]
grid <- 10^seq(10, -5, length = 1000)
ridge_fit <- cv.glmnet(train_mat, college_train$Apps, alpha = 0, lambda = grid, thresh = 1e-12)
ridge_bestlam <- ridge_fit$lambda.min
ridge_bestlam
## [1] 18.24993
ridge_pred <- predict(ridge_fit, newx = test_mat, s = ridge_bestlam)
ridge_err <- mean((college_test$Apps - ridge_pred)^2)
ridge_err
## [1] 1430032
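For completeness, the ridge coefficients at the selected \(\lambda\) can be inspected in the same way as for the lasso below; this step was not part of the original output:
# ridge shrinks all coefficients toward zero but does not set any exactly to zero
predict(ridge_fit, s = ridge_bestlam, type = "coefficients")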
d) Fit a lasso model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
lasso_fit <- cv.glmnet(train_mat, college_train$Apps, alpha = 1, lambda = grid, thresh = 1e-12)
lasso_bestlam <- lasso_fit$lambda.min
lasso_bestlam
## [1] 22.45698
lasso_pred <- predict(lasso_fit, newx = test_mat, s = lasso_bestlam)
lasso_err <- mean((college_test$Apps - lasso_pred)^2)
lasso_err
## [1] 1397791
lasso_coef <- predict(lasso_fit, s = lasso_bestlam, type = "coefficients")
lasso_coef
## 18 x 1 sparse Matrix of class "dgCMatrix"
## s1
## (Intercept) -426.85650324
## PrivateYes -714.47829898
## Accept 1.33112833
## Enroll .
## Top10perc 26.89109257
## Top25perc .
## F.Undergrad .
## P.Undergrad .
## Outstate -0.02973659
## Room.Board 0.12629635
## Books .
## Personal -0.04100342
## PhD -3.35442007
## Terminal -5.81868568
## S.F.Ratio .
## perc.alumni -7.04257724
## Expend 0.07598420
## Grad.Rate 7.50196560
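Part d) also asks for the number of non-zero coefficient estimates; counting the non-dot entries above gives 11 predictors, which can be checked directly from the sparse coefficient matrix:
# number of predictors with non-zero coefficients at the selected lambda (intercept excluded)
sum(lasso_coef[-1, ] != 0)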
e) Fit a PCR model on the training set, with \(M\) chosen by cross-validation. Report the test error obtained, along with the value of \(M\) selected by cross-validation.
library(pls)
pcr_fit <- pcr(Apps ~., data = college_train, scale = TRUE, validation = "CV")
validationplot(pcr_fit, val.type = "MSEP")
# M = 9 components chosen from the validation plot
pcr_pred <- predict(pcr_fit, college_test, ncomp = 9)
pcr_err <- mean((college_test$Apps - pcr_pred)^2)
pcr_err
## [1] 2889578
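Here \(M = 9\) was read off the validation plot; the number of components that minimizes the cross-validated MSEP can also be extracted programmatically, for example (a sketch using the pls accessors):
# index of the smallest CV MSEP; subtract 1 because the first entry is the intercept-only model
which.min(MSEP(pcr_fit)$val["CV", 1, ]) - 1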
f) Fit a PLS model on the training set, with \(M\) chosen by cross-validation. Report the test error obtained, along with the value of \(M\) selected by cross-validation.
pls_fit <- plsr(Apps ~., data = college_train, scale = TRUE, validation = "CV")
validationplot(pls_fit, val.type = "MSEP")
# M = 5 components chosen from the validation plot
pls_pred <- predict(pls_fit, college_test, ncomp = 5)
pls_err <- mean((college_test$Apps - pls_pred)^2)
pls_err
## [1] 1948087
g) Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?
To compare the five approaches, calculate the test \(R^2\) for each model:
test_avg <- mean(college_test$Apps)
lm_r2 <- 1 - mean((ols_pred - college_test$Apps)^2) / mean((test_avg - college_test$Apps)^2)
ridge_r2 <- 1 - mean((ridge_pred - college_test$Apps)^2) / mean((test_avg - college_test$Apps)^2)
lasso_r2 <- 1 - mean((lasso_pred - college_test$Apps)^2) / mean((test_avg - college_test$Apps)^2)
pcr_r2 <- 1 - mean((pcr_pred - college_test$Apps)^2) / mean((test_avg - college_test$Apps)^2)
pls_r2 <- 1 - mean((pls_pred - college_test$Apps)^2) / mean((test_avg - college_test$Apps)^2)
results_r2 <- rbind(lm_r2, ridge_r2, lasso_r2, pcr_r2, pls_r2)
results_r2
## [,1]
## lm_r2 0.9289176
## ridge_r2 0.9260186
## lasso_r2 0.9276866
## pcr_r2 0.8505103
## pls_r2 0.8992175
results_err <- rbind(ols_err, ridge_err, lasso_err, pcr_err, pls_err)
results_err
## [,1]
## ols_err 1373995
## ridge_err 1430032
## lasso_err 1397791
## pcr_err 2889578
## pls_err 1948087
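To make the comparison easier to read, the test errors can also be expressed relative to least squares (values above 1 indicate a larger test error than OLS); this ratio was not computed in the original:
# test MSE of each method relative to the least-squares test MSE
round(results_err / ols_err, 3)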
In this exercise, we will predict the per capita crime rate using the other variables in the Boston data set.
a) Try out some of the regression methods explored in this chapter, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider.
library(MASS)
library(leaps)
set.seed(123)
Best Subset Selection
# Prediction method for regsubsets objects (not provided by the leaps package):
# rebuild the model matrix from the original formula and multiply by the
# coefficients of the model with `id` predictors.
predict.regsubsets = function(object, newdata, id, ...) {
  form = as.formula(object$call[[2]])
  mat = model.matrix(form, newdata)
  coefi = coef(object, id = id)
  mat[, names(coefi)] %*% coefi
}
k = 10
p = ncol(Boston) - 1
# assign each observation to one of the k folds
folds = sample(rep(1:k, length = nrow(Boston)))
cv.errors = matrix(NA, k, p)
# 10-fold cross-validation: fit best subsets on the training folds and record
# the held-out MSE for every model size from 1 to p predictors
for (i in 1:k) {
  best.fit = regsubsets(crim ~ ., data = Boston[folds != i, ], nvmax = p)
  for (j in 1:p) {
    pred = predict(best.fit, Boston[folds == i, ], id = j)
    cv.errors[i, j] = mean((Boston$crim[folds == i] - pred)^2)
  }
}
# average the fold MSEs for each model size and convert to RMSE
rmse.cv = sqrt(apply(cv.errors, 2, mean))
plot(rmse.cv, pch = 19, type = "b")
which.min(rmse.cv)
## [1] 12
best_rmse = rmse.cv[which.min(rmse.cv)]
best_rmse
## [1] 6.535771
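Cross-validation favours the model with 12 of the 13 predictors; to see which variable is dropped, one could refit best subsets on the full data and inspect the 12-variable coefficients, for instance (reg_best below is a new fit introduced only for this check):
# coefficients of the best 12-predictor subset fit to the full Boston data
reg_best = regsubsets(crim ~ ., data = Boston, nvmax = p)
coef(reg_best, 12)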
Lasso
x = model.matrix(crim ~ . - 1, data = Boston)
y = Boston$crim
# alpha defaults to 1, so cv.glmnet fits the lasso here
cv.lasso = cv.glmnet(x, y, type.measure = "mse")
plot(cv.lasso)
coef(cv.lasso)
## 14 x 1 sparse Matrix of class "dgCMatrix"
## s1
## (Intercept) 1.4186414
## zn .
## indus .
## chas .
## nox .
## rm .
## age .
## dis .
## rad 0.2298449
## tax .
## ptratio .
## black .
## lstat .
## medv .
# CV RMSE at lambda.1se, the value also used by coef() above
lasso_rmse = sqrt(cv.lasso$cvm[cv.lasso$lambda == cv.lasso$lambda.1se])
lasso_rmse
## [1] 7.572919
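Note that both coef() above and this RMSE use lambda.1se, the more heavily penalized value that cv.glmnet reports alongside lambda.min; the CV error at lambda.min could be compared in the same way:
# CV RMSE at the lambda that minimizes the cross-validated error
sqrt(cv.lasso$cvm[cv.lasso$lambda == cv.lasso$lambda.min])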
Ridge
x = model.matrix(crim ~ . - 1, data = Boston)
y = Boston$crim
cv.ridge = cv.glmnet(x, y, type.measure = "mse", alpha = 0)
plot(cv.ridge)
coef(cv.ridge)
## 14 x 1 sparse Matrix of class "dgCMatrix"
## s1
## (Intercept) 0.887549636
## zn -0.002684451
## indus 0.035521503
## chas -0.242070224
## nox 2.338520336
## rm -0.166158479
## age 0.007601568
## dis -0.120012848
## rad 0.063692316
## tax 0.002813534
## ptratio 0.090098298
## black -0.003542339
## lstat 0.046779459
## medv -0.030585796
# CV RMSE at lambda.1se, as for the lasso
ridge_rmse = sqrt(cv.ridge$cvm[cv.ridge$lambda == cv.ridge$lambda.1se])
ridge_rmse
## [1] 7.403883
PCR
pcr.fit = pcr(crim ~ ., data = Boston, scale = TRUE, validation = "CV")
summary(pcr.fit)
## Data: X dimension: 506 13
## Y dimension: 506 1
## Fit method: svdpc
## Number of components considered: 13
##
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
## (Intercept) 1 comps 2 comps 3 comps 4 comps 5 comps 6 comps
## CV 8.61 7.190 7.189 6.743 6.722 6.731 6.751
## adjCV 8.61 7.188 7.187 6.740 6.716 6.728 6.747
## 7 comps 8 comps 9 comps 10 comps 11 comps 12 comps 13 comps
## CV 6.746 6.638 6.646 6.639 6.638 6.568 6.499
## adjCV 6.741 6.631 6.639 6.632 6.631 6.561 6.492
##
## TRAINING: % variance explained
## 1 comps 2 comps 3 comps 4 comps 5 comps 6 comps 7 comps 8 comps
## X 47.70 60.36 69.67 76.45 82.99 88.00 91.14 93.45
## crim 30.69 30.87 39.27 39.61 39.61 39.86 40.14 42.47
## 9 comps 10 comps 11 comps 12 comps 13 comps
## X 95.40 97.04 98.46 99.52 100.0
## crim 42.55 42.78 43.04 44.13 45.4
pcr_rmse = sqrt(pcr.fit$validation$adj[12])
b) Propose a model (or a set of models) that seem to perform well on this data set, and justify your answer. Make sure that you are evaluating model performance using validation set error, cross-validation, or some other reasonable alternative, as opposed to using training error.
results <- rbind(best_rmse, lasso_rmse, ridge_rmse, pcr_rmse)
results
## [,1]
## best_rmse 6.535771
## lasso_rmse 7.572919
## ridge_rmse 7.403883
## pcr_rmse 6.430667
c) Did your chosen model involve all of the features in the data set? Why or why not?