Question 2.

For parts (a) through (c), indicate which of i. through iv. is correct. Justify your answer.

(a) The lasso, relative to least squares, is:

i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.
iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.

Answer: iii. The lasso constrains the coefficient estimates relative to least squares, so it is less flexible: the penalty increases bias but reduces variance, and prediction accuracy improves when the increase in bias is smaller than the decrease in variance.

(b) Repeat (a) for ridge regression relative to least squares.

Answer: iii. Ridge regression likewise shrinks the coefficients, so it is less flexible than least squares, and the same bias-variance reasoning as in (a) applies.

(c) Repeat (a) for non-linear methods relative to least squares.

Answer: ii. Non-linear methods are more flexible than least squares; they reduce bias at the cost of higher variance, so they improve prediction accuracy when the increase in variance is less than the decrease in bias.
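To make the bias-variance tradeoff in (a) concrete, here is a minimal simulation sketch (not part of the original exercise; all object names are new): with many weak predictors and few observations, the lasso's added bias is outweighed by its variance reduction, so it beats least squares on test MSE.

library(glmnet)
set.seed(1)
n=50
p=45
x=matrix(rnorm(n * p), n, p)
beta=c(rep(1, 5), rep(0, p - 5))   # only 5 predictors carry signal
y=x %*% beta + rnorm(n)
xtest=matrix(rnorm(1000 * p), 1000, p)
ytest=xtest %*% beta + rnorm(1000)
olsfit=lm(y ~ x)   # least squares with p close to n is highly variable
mean((cbind(1, xtest) %*% coef(olsfit) - ytest)^2)   # OLS test MSE
cvlasso=cv.glmnet(x, y, alpha = 1)
mean((predict(cvlasso, xtest, s = "lambda.min") - ytest)^2)   # lasso test MSE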

Question 9.

In this exercise, we will predict the number of applications received using the other variables in the College data set.

library(ISLR)
data(College)

(a) Split the data set into a training set and a test set.

set.seed(10)
Q9train=sample(1:dim(College)[1], dim(College)[1] / 2)
Q9test=-Q9train
Coltrain=College[Q9train, ]
Coltest=College[Q9test, ]

(b) Fit a linear model using least squares on the training set, and report the test error obtained.

Q9fit=lm(Apps ~ ., data=Coltrain)
Q9pred=predict(Q9fit, Coltest)
mean((Q9pred - Coltest$Apps)^2)
## [1] 1020100

The test MSE obtained with least squares is 1020100.

(c) Fit a ridge regression model on the training set, with λ chosen by cross-validation. Report the test error obtained.

library(glmnet)
Q9trainmat=model.matrix(Apps ~ ., data = Coltrain)
Q9testmat=model.matrix(Apps ~ ., data = Coltest)
Q9grid=10 ^ seq(4, -2, length = 100)
Q9fitlam=glmnet(Q9trainmat, Coltrain$Apps, alpha = 0, lambda = Q9grid, thresh = 1e-12)
Q9cvlam=cv.glmnet(Q9trainmat, Coltrain$Apps, alpha = 0, lambda = Q9grid, thresh = 1e-12)
Q9lam=Q9cvlam$lambda.min
Q9lam
## [1] 0.01
Q9predridge=predict(Q9fitlam, s = Q9lam, newx = Q9testmat)
mean((Q9predridge - Coltest$Apps)^2)
## [1] 1020090

The test MSE is 1020090, essentially identical to least squares. The selected λ = 0.01 is the smallest value in Q9grid, so cross-validation pushed the penalty to the grid boundary, where ridge regression nearly reproduces the least squares fit.
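Since λ landed on the boundary of the grid, a reasonable check (a sketch, not in the original; Q9grid2 and Q9cvlam2 are new names) is to extend the grid to smaller values and re-run the cross-validation:

Q9grid2=10 ^ seq(4, -4, length = 100)   # extend the grid below 0.01
Q9cvlam2=cv.glmnet(Q9trainmat, Coltrain$Apps, alpha = 0, lambda = Q9grid2, thresh = 1e-12)
Q9cvlam2$lambda.min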

(d) Fit a lasso model on the training set, with λ chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.

Q9fitlasso=glmnet(Q9trainmat, Coltrain$Apps, alpha = 1, lambda = Q9grid, thresh = 1e-12)
Q9cvlasso=cv.glmnet(Q9trainmat, Coltrain$Apps, alpha = 1, lambda = Q9grid, thresh = 1e-12)
Q9lasso=Q9cvlasso$lambda.min
Q9lasso
## [1] 0.01
Q9predlasso=predict(Q9fitlasso, s = Q9lasso, newx = Q9testmat)
mean((Q9predlasso - Coltest$Apps)^2)
## [1] 1020097
predict(Q9fitlasso, s = Q9lasso, type = "coefficients")
## 19 x 1 sparse Matrix of class "dgCMatrix"
##                        s1
## (Intercept) -629.58395718
## (Intercept)    .         
## PrivateYes  -647.53846353
## Accept         1.68906292
## Enroll        -1.02324702
## Top10perc     48.18302370
## Top25perc    -10.50909239
## F.Undergrad    0.01982727
## P.Undergrad    0.04214021
## Outstate      -0.09488417
## Room.Board     0.14547537
## Books          0.06661918
## Personal       0.05663252
## PhD          -10.11314460
## Terminal      -2.29132336
## S.F.Ratio     22.06683385
## perc.alumni    2.07798052
## Expend         0.07653891
## Grad.Rate      9.99638694
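The prompt also asks for the number of non-zero coefficient estimates. From the table above, all 17 predictor coefficients are non-zero, so at this small λ the lasso performs essentially no variable selection. (The duplicated (Intercept) row appears because the intercept column of model.matrix was not dropped before calling glmnet; using model.matrix(...)[, -1], as in Question 11, avoids it.) A short count along these lines reports the number directly (a sketch; Q9coeflasso is a new name):

Q9coeflasso=predict(Q9fitlasso, s = Q9lasso, type = "coefficients")
sum(Q9coeflasso != 0) - 1   # subtract 1 for the intercept row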

(e) Fit a PCR model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.

library(pls)
Q9fitpcr=pcr(Apps ~ ., data = Coltrain, scale = TRUE, validation = "CV")
validationplot(Q9fitpcr, val.type = "MSEP")

Q9predpcr=predict(Q9fitpcr, Coltest, ncomp = 10)
mean((Q9predpcr - Coltest$Apps)^2)
## [1] 1422699

With M = 10 components (chosen from the validation plot), the PCR test MSE is 1422699, noticeably higher than the least squares error.
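To extract the CV-minimizing M programmatically rather than reading it off the plot, the pls cross-validation results can be queried directly (a sketch; Q9msep is a new name):

Q9msep=MSEP(Q9fitpcr)
which.min(Q9msep$val["adjCV", 1, ]) - 1   # subtract 1 for the intercept-only model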

(f) Fit a PLS model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.

Q9fitpls=plsr(Apps ~ ., data = Coltrain, scale = TRUE, validation = "CV")
validationplot(Q9fitpls, val.type = "MSEP")

Q9predpls=predict(Q9fitpls, Coltest, ncomp = 10)
mean((Q9predpls - Coltest$Apps)^2)
## [1] 1029442

With M = 10 components, the PLS test MSE is 1029442, close to the least squares error.
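The same MSEP-based extraction sketched for PCR applies to the PLS fit:

which.min(MSEP(Q9fitpls)$val["adjCV", 1, ]) - 1   # M minimizing adjusted CV error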

(g) Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?

Q9testavg=mean(Coltest$Apps)
Q9lmr2=1 - mean((Q9pred - Coltest$Apps)^2) / mean((Q9testavg - Coltest$Apps)^2)
Q9ridger2=1 - mean((Q9predridge - Coltest$Apps)^2) / mean((Q9testavg - Coltest$Apps)^2)
Q9lassor2=1 - mean((Q9predlasso - Coltest$Apps)^2) / mean((Q9testavg - Coltest$Apps)^2)
Q9pcrr2=1 - mean((Q9predpcr - Coltest$Apps)^2) / mean((Q9testavg - Coltest$Apps)^2)
Q9plsr2=1 - mean((Q9predpls - Coltest$Apps)^2) / mean((Q9testavg - Coltest$Apps)^2)
barplot(c(Q9lmr2, Q9ridger2, Q9lassor2, Q9pcrr2, Q9plsr2), col="grey", names.arg=c("OLS", "Ridge", "Lasso", "PCR", "PLS"), main="Test R-squared")

Apart from PCR, there is little difference among the five approaches: the test MSEs for least squares (1020100), ridge (1020090), and the lasso (1020097) are nearly identical, PLS (1029442) is only slightly higher, and PCR with M = 10 is clearly worse (1422699). This is unsurprising, since cross-validation chose λ at the small end of the grid for both penalized methods, leaving them close to the least squares fit; the test R² values plotted above tell the same story.

Question 11.

We will now try to predict per capita crime rate in the Boston data set.

(a) Try out some of the regression methods explored in this chapter, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider.

set.seed(1)
library(MASS)
library(leaps)
library(glmnet)
library(pls)
attach(Boston)

Best subset selection

# leaps provides no predict() method for regsubsets objects, so define one.
predict.regsubsets=function(object, newdata, id, ...) {
    form=as.formula(object$call[[2]])   # recover the formula used in the fit
    mat=model.matrix(form, newdata)     # design matrix for the new data
    coefi=coef(object, id = id)         # coefficients of the id-variable model
    xvars=names(coefi)
    mat[, xvars] %*% coefi              # predict using only the selected columns
}

k = 10
folds=sample(1:k, nrow(Boston), replace = TRUE)   # assign each observation to a fold
cv.errors=matrix(NA, k, 13, dimnames = list(NULL, paste(1:13)))
for (j in 1:k) {
    best.fit=regsubsets(crim ~ ., data = Boston[folds != j, ], nvmax = 13)
    for (i in 1:13) {
        pred=predict(best.fit, Boston[folds == j, ], id = i)
        cv.errors[j, i]=mean((Boston$crim[folds == j] - pred)^2)   # held-out MSE
    }
}
mean.cv.errors <- apply(cv.errors, 2, mean)
plot(mean.cv.errors, type = "b", xlab = "Number of variables", ylab = "CV error")
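To report the subset size the plot points to, the CV errors can be queried directly (a small sketch, not in the original):

which.min(mean.cv.errors)   # number of predictors with the lowest CV error
min(mean.cv.errors)         # the corresponding CV estimate of test MSE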

Lasso

Q11x=model.matrix(crim ~ ., Boston)[, -1]
Q11y=Boston$crim
Q11lasso=cv.glmnet(Q11x, Q11y, alpha = 1, type.measure = "mse")
plot(Q11lasso)

Ridge regression

Q11ridge=cv.glmnet(Q11x, Q11y, alpha = 0, type.measure = "mse")
plot(Q11ridge)
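To compare the two penalized fits, the minimum cross-validated MSEs can be reported directly (a sketch, not in the original):

Q11lasso$lambda.min; min(Q11lasso$cvm)   # lasso: selected lambda and its CV MSE
Q11ridge$lambda.min; min(Q11ridge$cvm)   # ridge: selected lambda and its CV MSE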

PCR

Q11pcrfit=pcr(crim ~ ., data = Boston, scale = TRUE, validation = "CV")
summary(Q11pcrfit)
## Data:    X dimension: 506 13 
##  Y dimension: 506 1
## Fit method: svdpc
## Number of components considered: 13
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV            8.61    7.198    7.198    6.786    6.762    6.790    6.821
## adjCV         8.61    7.195    7.195    6.780    6.753    6.784    6.813
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV       6.822    6.689    6.712     6.720     6.712     6.664     6.593
## adjCV    6.812    6.679    6.701     6.708     6.700     6.651     6.580
## 
## TRAINING: % variance explained
##       1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps
## X       47.70    60.36    69.67    76.45    82.99    88.00    91.14    93.45
## crim    30.69    30.87    39.27    39.61    39.61    39.86    40.14    42.47
##       9 comps  10 comps  11 comps  12 comps  13 comps
## X       95.40     97.04     98.46     99.52     100.0
## crim    42.55     42.78     43.04     44.13      45.4
validationplot(Q11pcrfit, val.type = "MSEP")

The cross-validated RMSEP declines only slowly and is lowest with all 13 components (CV 6.593), yet even the full model explains just 45.4% of the variance in crim, so PCR achieves little dimension reduction here.

(b) Propose a model (or set of models) that seem to perform well on this data set, and justify your answer. Make sure that you are evaluating model performance using validation set error, cross-validation, or some other reasonable alternative, as opposed to using training error.

Of the approaches considered, best subset selection attains the lowest cross-validation error, so it is the model proposed here.

(c) Does your chosen model involve all of the features in the data set? Why or why not?

No, it does not: cross-validation selects a subset with fewer than the full 13 predictors, because dropping the weakest predictors reduces variance by more than it increases bias.