6. Linear Model Selection and Regularization

 

Conceptual

 

1. We perform best subset, forward stepwise, and backward stepwise selection on a single data set. For each approach, we obtain p+1 models, containing 0,1,2,…,p predictors. Explain your answers:

 

  1. Which of the three models with k predictors has the smallest training RSS? The model found by best subset selection has the smallest training RSS, since for each k it considers every possible model with k predictors. The other two approaches may happen to pick the same model, but they are not guaranteed to.
  2. Which of the three models with k predictors has the smallest test RSS? Hard to say with the given information. Best subset selection may overfit if n is relatively small compared to p, and one of the other two methods may, by chance, pick a model that happens to perform better on the test set.
  3. True or False:
  1. The predictors in the k-variable model identified by forward stepwise are a subset of the predictors in the (k+1)-variable model identified by forward stepwise selection. TRUE
  2. The predictors in the k-variable model identified by backward stepwise are a subset of the predictors in the (k+1)-variable model identified by backward stepwise selection. TRUE
  3. The predictors in the k-variable model identified by backward stepwise are a subset of the predictors in the (k+1)-variable model identified by forward stepwise selection. FALSE
  4. The predictors in the k-variable model identified by forward stepwise are a subset of the predictors in the (k+1)-variable model identified by backward stepwise selection. FALSE
  5. The predictors in the k-variable model identified by best subset are a subset of the predictors in the (k+1)-variable model identified by best subset selection. FALSE (illustrated in the sketch below)
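
A small simulation sketch (data, seed, and sizes are arbitrary choices, not part of the exercise) of the claims above: forward stepwise models are nested by construction, while best subset models of different sizes need not be.
library(leaps)
set.seed(1)
n = 100; p = 8
x = matrix(rnorm(n * p), n, p)
y = drop(x %*% rnorm(p) + rnorm(n))
dat = data.frame(y = y, x)
fwd  = summary(regsubsets(y ~ ., data = dat, nvmax = p, method = "forward"))
best = summary(regsubsets(y ~ ., data = dat, nvmax = p, method = "exhaustive"))
nested = function(s) all(sapply(1:(p - 1), function(k)
  all(which(s$which[k, ]) %in% which(s$which[k + 1, ]))))
nested(fwd)   # always TRUE: each forward step only adds one predictor
nested(best)  # not guaranteed; whether it holds depends on the particular data set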

 

2. For parts (a) through (c), indicate which of i. through iv. is correct. Justify your answer.

  1. The lasso, relative to least squares, is: iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance. As \(\lambda\) increases, the variance initially decreases faster than the squared bias increases, producing the minimum of the U-shaped test MSE curve (see the simulation sketch after this list).

  2. Repeat (a) for ridge regression relative to least squares. iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance. The same reasoning applies: as \(\lambda\) increases, the variance initially decreases faster than the squared bias increases, giving a U-shaped test MSE curve.

  3. Repeat (a) for non-linear methods relative to least squares. ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.
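
A hedged simulation sketch (all settings arbitrary): fitting the lasso over its \(\lambda\) path on simulated data and evaluating on a held-out set typically produces the U-shaped test MSE curve described above.
library(glmnet)
set.seed(1)
n = 100; p = 50
x = matrix(rnorm(2 * n * p), 2 * n, p)
beta = c(rep(1, 5), rep(0, p - 5))        # only a few predictors truly matter
y = drop(x %*% beta + rnorm(2 * n))
train = 1:n
fit = glmnet(x[train, ], y[train], alpha = 1)                       # lasso path over lambda
test.mse = colMeans((y[-train] - predict(fit, newx = x[-train, ]))^2)
plot(log(fit$lambda), test.mse, type = "b", xlab = "log(lambda)", ylab = "Test MSE")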

 

3. Suppose we estimate the regression coefficients in a linear regression model by minimizing

\[\sum_{i=1}^n\Biggl(y_i - \beta_0 - \sum_{j=1}^p\beta_jx_{ij}\Biggr)^2\text{ subject to }\sum_{j=1}^p|\beta_j|\le s\]

for a particular value of s. For parts (a) through (e), indicate which of i. through v. is correct. Justify your answer.

  1. As we increase s from 0, the training RSS will: steadily decrease. Starting from s = 0 we begin with the most restricted model (all coefficients forced to zero) and end at the least squares fit; as the model becomes more flexible, the training RSS steadily decreases (illustrated in the sketch after this list).

  2. Repeat (a) for test RSS. Test RSS will decrease initially and then start increasing again once the model becomes flexible enough to overfit the training data.

  3. Repeat (a) for variance. Again we move from the most restricted model at s = 0 to the least squares fit; with increasing flexibility the variance steadily increases.

  4. Repeat (a) for (squared) bias. Squared bias steadily decreases with increased flexibility.

  5. Repeat (a) for the irreducible error. The irreducible error is a constant, independent of the model, so it remains unchanged.
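
A self-contained sketch of (a) on simulated data (sizes arbitrary): along a lasso path the training RSS falls steadily as the budget s grows, i.e. as \(\lambda\) shrinks.
library(glmnet)
set.seed(2)
x = matrix(rnorm(100 * 10), 100, 10)
y = drop(x %*% rnorm(10) + rnorm(100))
fit = glmnet(x, y, alpha = 1)                        # lambda stored from largest to smallest
train.rss = colSums((y - predict(fit, newx = x))^2)  # training RSS at each point on the path
plot(train.rss, type = "l", xlab = "Path index (s increasing)", ylab = "Training RSS")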

 

4. Suppose we estimate the regression coefficients in a linear regression model by minimizing

\[\sum_{i=1}^n\Biggl(y_i - \beta_0 - \sum_{j=1}^p\beta_jx_{ij}\Biggr)^2 + \lambda\sum_{j=1}^p\beta_j^2\]

for a particular value of \(\lambda\). For parts (a) through (e), indicate which of i. through v. is correct. Justify your answer.

 

  1. As we increase λ from 0, the training RSS will:

Training RSS will increase steadily as the growing penalty makes the model less and less flexible.

  2. Repeat (a) for test RSS.

Test error decreases initially and then starts to increase again as the model gets less and less flexible.

  3. Repeat (a) for variance.

Variance decreases with less and less flexibility.

  4. Repeat (a) for (squared) bias.

Squared bias increases as the model becomes less and less flexible; the simulation sketch below illustrates both the variance and bias behaviour.

  5. Repeat (a) for the irreducible error.

The irreducible error is a constant, independent of the model.
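
A hedged simulation sketch (all settings arbitrary) of the variance and squared-bias behaviour claimed above: refit ridge regression on many simulated training sets and examine the prediction at one fixed test point as \(\lambda\) grows.
library(glmnet)
set.seed(3)
p = 10
beta = rnorm(p)
x0 = rnorm(p)                                   # one fixed test point
f0 = sum(x0 * beta)                             # true mean response at x0
lambdas = 10^seq(3, -2, length.out = 20)        # decreasing, as glmnet expects
preds = replicate(200, {
  x = matrix(rnorm(50 * p), 50, p)
  y = drop(x %*% beta + rnorm(50))
  fit = glmnet(x, y, alpha = 0, lambda = lambdas)
  drop(predict(fit, newx = matrix(x0, nrow = 1)))
})                                              # rows = lambda values, columns = simulations
variance = apply(preds, 1, var)
bias2 = (rowMeans(preds) - f0)^2
matplot(log10(lambdas), cbind(variance, bias2), type = "l", lty = 1, col = c("blue", "red"),
        xlab = "log10(lambda)", ylab = "")
legend("topleft", legend = c("variance", "squared bias"), col = c("blue", "red"), lty = 1)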

 

5. It is well known that ridge regression tends to give similar coefficient values to correlated variables, whereas the lasso may give quite different coefficient values to correlated variables. We will now explore this property in a very simple setting. Suppose that \(n=2, p=2, x_{11}=x_{12}, x_{21}=x_{22}\). Furthermore, suppose that \(y_1+y_2=0\) and \(x_{11}+x_{21}=0\) and \(x_{12}+x_{22}=0\), so that the estimate for the intercept in a least squares, ridge regression, or lasso model is zero: \(\hat\beta_0=0\).

 

  1. Write out the ridge regression optimization problem in this setting.

With the information from above and \(x_{11} = x_{12} = x_1\), \(x_{21} = x_{22} = x_2\) we can write:

\[(y_1 - \hat{\beta}_1x_1 - \hat{\beta}_2x_1)^2 + (y_2 - \hat{\beta}_1x_2 - \hat{\beta}_2x_2)^2 + \lambda(\hat{\beta}_1^2 + \hat{\beta}_2^2)\]

  2. Argue that in this setting, the ridge coefficient estimates satisfy \(\hat\beta_1 = \hat\beta_2\).

Differentiating the expression in (a) with respect to \(\hat\beta_1\) and \(\hat\beta_2\) and setting the derivatives equal to zero yields:

\[\frac{\partial }{\partial \hat\beta_1}: -2x_1(y_1 - \hat\beta_1x_1 - \hat\beta_2x_1) - 2x_2(y_2 - \hat\beta_1x_2 - \hat\beta_2x_2) + 2\lambda\hat\beta_1 = 0\] and \[\frac{\partial }{\partial \hat\beta_2}: -2x_1(y_1 - \hat\beta_1x_1 - \hat\beta_2x_1) - 2x_2(y_2 - \hat\beta_1x_2 - \hat\beta_2x_2) + 2\lambda\hat\beta_2 = 0\]
The two equations are identical except for the penalty term, so subtracting one from the other gives \(2\lambda(\hat\beta_1 - \hat\beta_2) = 0\),

which yields \(\hat\beta_1 = \hat\beta_2\) (for \(\lambda > 0\)).
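
A quick numerical check (toy numbers chosen only to satisfy the stated constraints): minimizing the ridge objective from (a) with optim() returns essentially equal coefficients.
x1 = 2; x2 = -x1          # x11 = x12 = x1, x21 = x22 = x2, so x11 + x21 = 0
y1 = 3; y2 = -y1          # y1 + y2 = 0
lambda = 1
ridge.obj = function(b) (y1 - b[1]*x1 - b[2]*x1)^2 + (y2 - b[1]*x2 - b[2]*x2)^2 + lambda*sum(b^2)
optim(c(0, 0), ridge.obj)$par   # both estimates are approximately 12/17 = 0.706 (up to optimizer tolerance)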

  3. Write out the lasso optimization problem in this setting.

\[(y_1 - \hat{\beta}_1x_1 - \hat{\beta}_2x_1)^2 + (y_2 - \hat{\beta}_1x_2 - \hat{\beta}_2x_2)^2 + \lambda(|\hat{\beta}_1| + |\hat{\beta}_2|)\]

  4. Argue that in this setting, the lasso coefficients \(\hat\beta_1\) and \(\hat\beta_2\) are not unique; in other words, there are many possible solutions to the optimization problem in (c). Describe these solutions. See the numerical sketch below.
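
For (d), a numerical sketch (same kind of toy numbers as above): evaluating the lasso objective from (c) on a grid of \((\hat\beta_1, \hat\beta_2)\) values shows a whole segment of minimizers. They all share the same value of \(\hat\beta_1 + \hat\beta_2\), with both coefficients of the same sign, so the lasso solution is not unique.
x1 = 2; x2 = -x1; y1 = 3; y2 = -y1; lambda = 1       # toy values satisfying the constraints
lasso.obj = function(b1, b2)
  (y1 - b1*x1 - b2*x1)^2 + (y2 - b1*x2 - b2*x2)^2 + lambda*(abs(b1) + abs(b2))
gr = expand.grid(b1 = seq(-1, 2, 0.01), b2 = seq(-1, 2, 0.01))
gr$obj = mapply(lasso.obj, gr$b1, gr$b2)
minimizers = gr[gr$obj <= min(gr$obj) + 1e-8, ]
nrow(minimizers)                           # many (beta1, beta2) pairs attain the minimum
range(minimizers$b1 + minimizers$b2)       # but they all share the same value of beta1 + beta2
range(pmin(minimizers$b1, minimizers$b2))  # and both coefficients stay non-negative here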

 

6. We will now explore (6.12) and (6.13) further.

  1. Consider (6.12) with \(p=1\). For some choice of \(y_1\) and \(\lambda>0\), plot (6.12) as a function of \(\beta_1\). Your plot should confirm that (6.12) is solved by (6.14).
lambda = 3
y = 4
betas = seq(-10, 10, 0.2)
# ridge criterion (6.12) with p = 1: (y - beta)^2 + lambda * beta^2
ridge = function(beta, y, lambda) (y - beta)^2 + lambda * beta^2
plot(betas, ridge(betas, y, lambda), pch = 20, xlab = "beta", ylab = "Ridge optimization")
# closed-form minimizer from (6.14): beta = y / (1 + lambda)
est.beta = y / (1 + lambda)
points(est.beta, ridge(est.beta, y, lambda), col = "red", pch = 20, lwd = 5)

  2. Consider (6.13) with \(p=1\). For some choice of \(y_1\) and \(\lambda>0\), plot (6.13) as a function of \(\beta_1\). Your plot should confirm that (6.13) is solved by (6.15).
lambda = 3
y = 4
betas = seq(-10, 10, 0.2)
# lasso criterion (6.13) with p = 1: (y - beta)^2 + lambda * |beta|
lasso = function(beta, y, lambda) (y - beta)^2 + lambda * abs(beta)
plot(betas, lasso(betas, y, lambda), xlab = "beta", main = "Lasso Regression Optimization", pch = 20)
# closed-form minimizer from (6.15) when y > lambda / 2: beta = y - lambda / 2
est.beta = y - (lambda / 2)
points(est.beta, lasso(est.beta, y, lambda), col = "red", pch = 20, lwd = 5)

 

Applied

 

8. In this exercise, we will generate simulated data, and will then use this data to perform best subset selection.

  1. Use the rnorm() function to generate a predictor X of length n=100, as well as a noise vector \(\epsilon\) of length n=100.
set.seed(1)
X = rnorm(100)
eps = rnorm(100)
  2. Generate a response vector \(Y\) of length \(n=100\) according to the model \(Y = \beta_0 + \beta_1X + \beta_2X^2 + \beta_3X^3 + \varepsilon\) where \(\beta_0\), \(\beta_1\), \(\beta_2\), \(\beta_3\) are constants of your choice.
b0 = 4
b1 = 2
b2 = 3
b3 = 2
Y = b0 + b1 * X + b2 * X^2 + b3 * X^3 + eps
  3. Use the regsubsets() function to perform best subset selection in order to choose the best model containing the predictors \(X, X^2, \dots, X^{10}\). What is the best model obtained according to \(C_p\), \(BIC\), and adjusted \(R^2\)? Show some plots to provide evidence for your answer, and report the coefficients of the best model obtained. Note you will need to use the data.frame() function to create a single data set containing both \(X\) and \(Y\).
library(leaps)
data.full = data.frame(Y, X)
regfit.full = regsubsets(Y~poly(X,10,raw=T), data=data.frame(Y,X), nvmax=10)
summary = summary(regfit.full)
par(mfrow=c(2,2))
plot(summary$cp, xlab ="Number of variables", ylab="C_p", type="l")
points(which.min(summary$cp), summary$cp[which.min(summary$cp)], col = "red", cex = 2, pch = 20)
plot(summary$bic, xlab ="Number of variables", ylab="BIC", type="l")
points(which.min(summary$bic), summary$bic[which.min(summary$bic)], col = "red", cex = 2, pch = 20)
plot(summary$adjr2, xlab ="Number of variables", ylab="Adjusted R^2", type="l")
points(which.max(summary$adjr2), summary$adjr2[which.max(summary$adjr2)], col = "red", cex = 2, pch = 20)
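
To directly report what each criterion selects (the exact sizes and coefficients depend on the simulation above), one can add:
c(cp = which.min(summary$cp), bic = which.min(summary$bic), adjr2 = which.max(summary$adjr2))
coef(regfit.full, which.min(summary$cp))   # coefficients of the C_p-selected model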

  4. Repeat (c), using forward stepwise selection and also using backwards stepwise selection. How does your answer compare to the results in (c)?
library(leaps)
data.full = data.frame(Y, X)
regfit.fwd = regsubsets(Y~poly(X,10,raw=T), data=data.frame(Y,X), nvmax=10, method="forward")
summary = summary(regfit.fwd)
par(mfrow=c(2,2))
plot(summary$cp, xlab ="Number of variables", ylab="C_p", type="l")
points(which.min(summary$cp), summary$cp[which.min(summary$cp)], col = "red", cex = 2, pch = 20)
plot(summary$bic, xlab ="Number of variables", ylab="BIC", type="l")
points(which.min(summary$bic), summary$bic[which.min(summary$bic)], col = "red", cex = 2, pch = 20)
plot(summary$adjr2, xlab ="Number of variables", ylab="Adjusted R^2", type="l")
points(which.max(summary$adjr2), summary$adjr2[which.max(summary$adjr2)], col = "red", cex = 2, pch = 20)

library(leaps)
data.full = data.frame(Y, X)
regfit.back = regsubsets(Y~poly(X,10,raw=T), data=data.frame(Y,X), nvmax=10, method="backward")
summary = summary(regfit.back)
par(mfrow=c(2,2))
plot(summary$cp, xlab ="Number of variables", ylab="C_p", type="l")
points(which.min(summary$cp), summary$cp[which.min(summary$cp)], col = "red", cex = 2, pch = 20)
plot(summary$bic, xlab ="Number of variables", ylab="BIC", type="l")
points(which.min(summary$bic), summary$bic[which.min(summary$bic)], col = "red", cex = 2, pch = 20)
plot(summary$adjr2, xlab ="Number of variables", ylab="Adjusted R^2", type="l")
points(which.max(summary$adjr2), summary$adjr2[which.max(summary$adjr2)], col = "red", cex = 2, pch = 20)

coef(regfit.full, which.max(summary(regfit.full)$adjr2))
##           (Intercept) poly(X, 10, raw = T)1 poly(X, 10, raw = T)2 
##            4.07200775            2.38745596            2.84575641 
## poly(X, 10, raw = T)3 poly(X, 10, raw = T)5 
##            1.55797426            0.08072292
coef(regfit.fwd, which.max(summary(regfit.fwd)$adjr2))
##           (Intercept) poly(X, 10, raw = T)1 poly(X, 10, raw = T)2 
##            4.07200775            2.38745596            2.84575641 
## poly(X, 10, raw = T)3 poly(X, 10, raw = T)5 
##            1.55797426            0.08072292
coef(regfit.back, which.max(summary(regfit.back)$adjr2))
##           (Intercept) poly(X, 10, raw = T)1 poly(X, 10, raw = T)2 
##           4.079236362           2.231905828           2.833494180 
## poly(X, 10, raw = T)3 poly(X, 10, raw = T)9 
##           1.819555807           0.001290827
  • \(C_p\) and adjusted \(R^2\) pick 4-variable models while BIC picks a 3-variable model; this is the same for all three selection methods.
  • One difference: best subset and forward stepwise include \(X^5\) as the fourth term, whereas backward stepwise includes \(X^9\) instead.
  5. Now fit a lasso model to the simulated data, again using \(X, X^2, \dots, X^{10}\) as predictors. Use cross-validation to select the optimal value of \(\lambda\). Create plots of the cross-validation error as a function of \(\lambda\). Report the resulting coefficient estimates, and discuss the results obtained.
library(glmnet)
## Loading required package: Matrix
## Loaded glmnet 3.0-2
xmat = model.matrix(Y~poly(X,10,raw=T))[,-1]
lasso.mod = cv.glmnet(xmat, Y, alpha=1)
(lambda = lasso.mod$lambda.min)
## [1] 0.04924644
par(mfrow=c(1,1))
plot(lasso.mod)

predict(lasso.mod, s=lambda, type="coefficients")
## 11 x 1 sparse Matrix of class "dgCMatrix"
##                                  1
## (Intercept)            4.160843693
## poly(X, 10, raw = T)1  2.203516696
## poly(X, 10, raw = T)2  2.650263654
## poly(X, 10, raw = T)3  1.765931598
## poly(X, 10, raw = T)4  0.040178155
## poly(X, 10, raw = T)5  0.021903164
## poly(X, 10, raw = T)6  .          
## poly(X, 10, raw = T)7  0.003685793
## poly(X, 10, raw = T)8  .          
## poly(X, 10, raw = T)9  .          
## poly(X, 10, raw = T)10 .
  6. Now generate a response vector Y according to the model \(Y = \beta_0 + \beta_7X^7 + \varepsilon\), and perform best subset selection and the lasso. Discuss the results obtained.
Y2 = 4 + 7*X^7 + eps
data.full2 = data.frame(y = Y2,x = X)
regfit.full2 = regsubsets(Y2~poly(X,10,raw=T), data=data.full2, nvmax=10)
summary2 = summary(regfit.full2)
par(mfrow=c(2,2))
plot(summary2$cp, xlab ="Number of variables", ylab="C_p", type="l")
points(which.min(summary2$cp), summary2$cp[which.min(summary2$cp)], col = "red", cex = 2, pch = 20)
plot(summary2$bic, xlab ="Number of variables", ylab="BIC", type="l")
points(which.min(summary2$bic), summary2$bic[which.min(summary2$bic)], col = "red", cex = 2, pch = 20)
plot(summary2$adjr2, xlab ="Number of variables", ylab="Adjusted R^2", type="l")
points(which.max(summary2$adjr2), summary2$adjr2[which.max(summary2$adjr2)], col = "red", cex = 2, pch = 20)

coef(regfit.full2, which.min(summary2$cp))
##           (Intercept) poly(X, 10, raw = T)1 poly(X, 10, raw = T)2 
##            4.07200775            2.38745596            2.84575641 
## poly(X, 10, raw = T)3 poly(X, 10, raw = T)5 
##            1.55797426            0.08072292
coef(regfit.full2, which.min(summary2$bic))
##           (Intercept) poly(X, 10, raw = T)1 poly(X, 10, raw = T)2 
##              4.061507              1.975280              2.876209 
## poly(X, 10, raw = T)3 
##              2.017639
coef(regfit.full2, which.max(summary2$adjr2))
##           (Intercept) poly(X, 10, raw = T)3 
##              6.437156              2.828270

With \(C_p\) we pick a 4 variable model, with BIC a 3 variable model and with adjusted \(R^2\) a 1 variable model.

xmat = model.matrix(Y2~poly(X,10,raw=T))[,-1]
lasso = cv.glmnet(xmat, Y2, alpha=1)
(lambda = lasso$lambda.min)
## [1] 12.36884
par(mfrow=c(1,1))
plot(lasso)

predict(lasso, s=lambda, type="coefficients")
## 11 x 1 sparse Matrix of class "dgCMatrix"
##                               1
## (Intercept)            4.820215
## poly(X, 10, raw = T)1  .       
## poly(X, 10, raw = T)2  .       
## poly(X, 10, raw = T)3  .       
## poly(X, 10, raw = T)4  .       
## poly(X, 10, raw = T)5  .       
## poly(X, 10, raw = T)6  .       
## poly(X, 10, raw = T)7  6.796694
## poly(X, 10, raw = T)8  .       
## poly(X, 10, raw = T)9  .       
## poly(X, 10, raw = T)10 .

The lasso picks the 1-variable model containing only \(X^7\). The coefficient estimates differ considerably from those obtained by best subset selection.

 

9. In this exercise, we will predict the number of applications received using the other variables in the College data set.

  1. Split the data set into a training set and a test set.
library(ISLR)
data(College)
set.seed(1)
trainid = sample(1:nrow(College), nrow(College)/2)
train = College[trainid,]
test = College[-trainid,]
  2. Fit a linear model using least squares on the training set, and report the test error obtained.
lm.fit = lm(Apps~., data=train)
lm.pred = predict(lm.fit, test)
lm.err = mean((test$Apps - lm.pred)^2)
lm.err
## [1] 1135758
  3. Fit a ridge regression model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained.
library(glmnet)
train.X = model.matrix(Apps ~ ., data = train)
train.Y = train[, "Apps"]
test.X = model.matrix(Apps ~ ., data = test)
test.Y = test[, "Apps"]
grid = 10 ^ seq(4, -2, length=100)
ridge.mod = glmnet(train.X, train.Y, alpha =0, lambda=grid, thresh = 1e-12)
# choose lambda by cross-validation (a plain glmnet fit has no lambda.min component)
ridge.cv = cv.glmnet(train.X, train.Y, alpha =0, lambda=grid, thresh = 1e-12)
lambda.best = ridge.cv$lambda.min
ridge.pred = predict(ridge.mod, newx= test.X, s=lambda.best)
(ridge.err = mean((test.Y - ridge.pred)^2))
## [1] 1164319
  4. Fit a lasso model on the training set, with \(\lambda\) chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.
lasso.mod = glmnet(train.X, train.Y, alpha =1, lambda=grid, thresh = 1e-12)
lasso.cv = cv.glmnet(train.X, train.Y, alpha =1, lambda=grid, thresh = 1e-12)
lambda.best = lasso.cv$lambda.min
lasso.pred = predict(lasso.mod, newx= test.X, s=lambda.best)
(lasso.err = mean((test.Y - lasso.pred)^2))
## [1] 1135660
predict(lasso.mod, s = lambda.best, type="coefficients")
## 19 x 1 sparse Matrix of class "dgCMatrix"
##                         1
## (Intercept) -7.900363e+02
## (Intercept)  .           
## PrivateYes  -3.070103e+02
## Accept       1.779328e+00
## Enroll      -1.469508e+00
## Top10perc    6.672214e+01
## Top25perc   -2.230442e+01
## F.Undergrad  9.258974e-02
## P.Undergrad  9.408838e-03
## Outstate    -1.083495e-01
## Room.Board   2.115147e-01
## Books        2.912105e-01
## Personal     6.120406e-03
## PhD         -1.547200e+01
## Terminal     6.409503e+00
## S.F.Ratio    2.282638e+01
## perc.alumni  1.130498e+00
## Expend       4.856697e-02
## Grad.Rate    7.488081e+00
  5. Fit a PCR model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.
library(pls)
## 
## Attaching package: 'pls'
## The following object is masked from 'package:stats':
## 
##     loadings
pcr.fit = pcr(Apps~., data=train, scale=TRUE, validation="CV")
validationplot(pcr.fit, val.type="MSEP")

summary(pcr.fit)
## Data:    X dimension: 388 17 
##  Y dimension: 388 1
## Fit method: svdpc
## Number of components considered: 17
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV            4288     4027     2351     2355     2046     1965     1906
## adjCV         4288     4031     2347     2353     2014     1955     1899
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV        1910     1913     1871      1799      1799      1802      1800
## adjCV     1903     1908     1866      1790      1792      1795      1793
##        14 comps  15 comps  16 comps  17 comps
## CV         1832      1728      1310      1222
## adjCV      1829      1702      1296      1212
## 
## TRAINING: % variance explained
##       1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps
## X       32.20    57.78    65.31    70.99    76.37    81.27     84.8    87.85
## Apps    13.44    70.93    71.07    79.87    81.15    82.25     82.3    82.33
##       9 comps  10 comps  11 comps  12 comps  13 comps  14 comps  15 comps
## X       90.62     92.91     94.98     96.74     97.79     98.72     99.42
## Apps    83.38     84.76     84.80     84.84     85.11     85.14     90.55
##       16 comps  17 comps
## X        99.88    100.00
## Apps     93.42     93.89
pcr.pred = predict(pcr.fit, test, ncomp=10)
(pcr.err = mean((test$Apps - pcr.pred)^2))
## [1] 1723100
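
The fit above reports the error for M = 10 components; to let cross-validation itself pick M, one could (assuming the usual pls accessors) read off the minimum of the CV curve:
msep = MSEP(pcr.fit)
which.min(msep$val["CV", 1, ])   # the name of the returned index gives the M with the lowest CV MSEP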
  6. Fit a PLS model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.
pls.fit = plsr(Apps~., data=train, scale=TRUE, validation="CV")
validationplot(pls.fit, val.type="MSEP")

pls.pred = predict(pls.fit, test, ncomp=10)
(pls.err = mean((test$Apps - pls.pred)^2))
## [1] 1131661
  7. Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?
test.avg = mean(test$Apps)
lm.r2 =1 - lm.err/mean((test.avg - test$Apps)^2)
ridge.r2 =1 - ridge.err/mean((test.avg - test$Apps)^2)
lasso.r2 =1 - lasso.err/mean((test.avg - test$Apps)^2)
pcr.r2 =1 - pcr.err/mean((test.avg - test$Apps)^2)
pls.r2 =1 - pls.err/mean((test.avg - test$Apps)^2)
all.r2 = c(lm.r2, ridge.r2, lasso.r2, pcr.r2, pls.r2)
names(all.r2) = c("lm", "ridge", "lasso", "pcr", "pls")
barplot(all.r2 )

All but the PCR model predict the number of college applications with similarly high accuracy.

 

10. We have seen that as the number of features used in a model increases, the training error will necessarily decrease, but the test error may not. We will now explore this in a simulated data set.

  1. Generate a data set with p=20 features, n=1000 observations, and an associated quantitative response vector generated according to the model \(Y = X\beta + \epsilon\), where \(\beta\) has some elements that are exactly equal to zero.
set.seed(1)
x = matrix(rnorm(1000 * 20), 1000, 20)
b = rnorm(20)
b[3] = 0
b[4] = 0
b[9] = 0
b[11] = 0
b[13] = 0
b[14] = 0
b[7] = 0
b[19] = 0

eps = rnorm(1000)
y = x %*% b + eps
  2. Split your data set into a training set containing 100 observations and a test set containing 900 observations.
trainid = sample(1:nrow(x), nrow(x)/10)   # 100 observations for training
X.train = x[trainid, ]
Y.train = y[trainid, ]
X.test = x[-trainid, ]
Y.test = y[-trainid, ]
  3. Perform best subset selection on the training set, and plot the training set MSE associated with the best model of each size.
data.train = data.frame(y = Y.train, x = X.train)
regfit.full = regsubsets(y ~ ., data = data.train, nvmax = 20)
train.mat = model.matrix(y ~ ., data = data.train, nvmax = 20)
val.errors = rep(NA, 20)
for (i in 1:20) {
    coefi = coef(regfit.full, id = i)
    pred = train.mat[, names(coefi)] %*% coefi
    val.errors[i] = mean((pred - Y.train)^2)
}
plot(val.errors, xlab = "Number of predictors", ylab = "Training MSE", pch = 19, type = "b")

  4. Plot the test set MSE associated with the best model of each size.
data.test = data.frame(y = Y.test, x = X.test)
test.mat = model.matrix(y ~ ., data = data.test, nvmax = 20)
for (i in 1:20) {
    coefi = coef(regfit.full, id = i)
    pred = test.mat[, names(coefi)] %*% coefi
    val.errors[i] = mean((pred - Y.test)^2)
}
plot(val.errors, xlab = "Number of predictors", ylab = "Test MSE", pch = 19, type = "b")

  5. For which model size does the test set MSE take on its minimum value? Comment on your results. If it takes on its minimum value for a model containing only an intercept or a model containing all of the features, then play around with the way that you are generating the data in (a) until you come up with a scenario in which the test set MSE is minimized for an intermediate model size.
which.min(val.errors)
## [1] 12
  6. How does the model at which the test set MSE is minimized compare to the true model used to generate the data? Comment on the coefficient values.
coef(regfit.full, which.min(val.errors))
## (Intercept)         x.1         x.2         x.5         x.6         x.8 
##  0.06067403  0.19513841  0.23488764  1.06526308 -0.26722541  0.73552035 
##        x.10        x.12        x.15        x.16        x.17        x.18 
##  0.71666624  0.53548818 -0.72346236 -0.27004140  0.34920967  1.65652430 
##        x.20 
## -1.01865533
  • The selected model excludes exactly the predictors whose true coefficients were set to zero (a quick check is shown below).
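
A quick check of this claim, reusing the objects defined above:
which(b == 0)                                     # indices of predictors whose true coefficient is zero
names(coef(regfit.full, which.min(val.errors)))   # predictors retained by the best model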
  7. Create a plot displaying \(\sqrt{\sum_{j=1}^p\bigl(\beta_j - \hat\beta_j^{\,r}\bigr)^2}\) for a range of values of \(r\), where \(\hat\beta_j^{\,r}\) is the jth coefficient estimate for the best model containing \(r\) coefficients. Comment on what you observe. How does this compare to the test MSE plot from (d)?
val.errors = rep(NA, 20)
x_cols = colnames(x, do.NULL = FALSE, prefix = "x.")
for (i in 1:20) {
    coefi = coef(regfit.full, id = i)
    val.errors[i] = sqrt(sum((b[x_cols %in% names(coefi)] - coefi[names(coefi) %in% x_cols])^2) + sum(b[!(x_cols %in% names(coefi))]^2))
}
plot(val.errors, xlab = "Number of coefficients", ylab = "Error between estimated and true coefficients", pch = 19, type = "b")

which.min(val.errors)
## [1] 12
  • The coefficient error reaches its minimum at the same model size as the test MSE in (d).

 

11. We will now try to predict per capita crime rate in the Boston data set.

  1. Try out some of the regression methods explored in this chapter, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider.
Best subset selection
library(MASS) 
library(leaps)   
library(glmnet)  
data(Boston)
set.seed(1)
predict.regsubsets = function(object, newdata, id, ...) {
    form = as.formula(object$call[[2]])
    mat = model.matrix(form, newdata)
    coefi = coef(object, id = id)
    xvars = names(coefi)
    mat[, xvars] %*% coefi
}

k = 10
folds = sample(1:k, nrow(Boston), replace = TRUE)
cv.errors = matrix(NA, k, 13, dimnames = list(NULL, paste(1:13)))
for (j in 1:k) {
    best.fit = regsubsets(crim ~ ., data = Boston[folds != j, ], nvmax = 13)
    for (i in 1:13) {
        pred = predict(best.fit, Boston[folds == j, ], id = i)
        cv.errors[j, i] = mean((Boston$crim[folds == j] - pred)^2)
    }
}
mean.cv.errors = apply(cv.errors, 2, mean)
plot(mean.cv.errors, type = "b", xlab = "Number of variables", ylab = "CV error")

min(mean.cv.errors)
## [1] 42.46014
regfit.best = regsubsets(crim~., data=Boston, nvmax=13)
coef(regfit.best, 12)
##   (Intercept)            zn         indus          chas           nox 
##  16.985713928   0.044673247  -0.063848469  -0.744367726 -10.202169211 
##            rm           dis           rad           tax       ptratio 
##   0.439588002  -0.993556631   0.587660185  -0.003767546  -0.269948860 
##         black         lstat          medv 
##  -0.007518904   0.128120290  -0.198877768
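
The 12-variable model is shown above presumably because it minimized the cross-validated error; with the objects from the loop this can be confirmed directly:
which.min(mean.cv.errors)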
Lasso
x = model.matrix(crim ~ ., Boston)[, -1]
y = Boston$crim
cv.out = cv.glmnet(x, y, alpha = 1, type.measure = "mse")
plot(cv.out)

cv.out
## 
## Call:  cv.glmnet(x = x, y = y, type.measure = "mse", alpha = 1) 
## 
## Measure: Mean-Squared Error 
## 
##     Lambda Measure    SE Nonzero
## min  0.051   43.11 14.16      11
## 1se  3.376   56.89 17.29       1
Ridge
cv.out = cv.glmnet(x, y, alpha = 0, type.measure = "mse")
cv.out
## 
## Call:  cv.glmnet(x = x, y = y, type.measure = "mse", alpha = 0) 
## 
## Measure: Mean-Squared Error 
## 
##     Lambda Measure    SE Nonzero
## min   0.54   43.48 14.33      13
## 1se  74.44   57.69 16.76      13
PCR
pcr.fit = pcr(crim ~ ., data = Boston, scale = TRUE, validation = "CV")
validationplot(pcr.fit, val.type = "MSEP")

  2. Propose a model (or set of models) that seem to perform well on this data set, and justify your answer. Make sure that you are evaluating model performance using validation set error, cross-validation, or some other reasonable alternative, as opposed to using training error.
  • Best subset selection performs best here: its minimum cross-validated MSE (about 42.5) is slightly lower than that of the lasso (about 43.1) and ridge regression (about 43.5).
  3. Does your chosen model involve all of the features in the data set? Why or why not?
  • No. The chosen model is the 12-feature model from best subset selection; one feature (age) is left out.