knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE, eval = TRUE)

Question 2. For parts (a) through (c), indicate which of i. through iv. is correct. Justify your answer.

  i. More flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.

  ii. More flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.

  iii. Less flexible and hence will give improved prediction accuracy when its increase in bias is less than its decrease in variance.

  iv. Less flexible and hence will give improved prediction accuracy when its increase in variance is less than its decrease in bias.

#(a) The lasso, relative to least squares, is: iii is the correct answer.

Lasso's advantage over least squares is rooted in the bias-variance trade-off. The lasso shrinks the coefficient estimates and can set some of them exactly to zero, removing non-essential variables; this lowers variance at the cost of some additional bias, and when the decrease in variance outweighs the increase in bias, prediction accuracy improves. Because it performs variable selection, the lasso also yields models that are easier to interpret than ridge regression.
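
A quick illustration of this point (a minimal sketch on simulated data, not the assignment data): the lasso sets some coefficient estimates exactly to zero, so irrelevant predictors drop out of the model.

library(glmnet)
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
beta <- c(3, -2, 1.5, rep(0, p - 3))           # only the first 3 predictors matter
Y <- as.numeric(X %*% beta + rnorm(n))
lasso.demo <- glmnet(X, Y, alpha = 1)          # alpha = 1 fits the lasso
# at a moderate lambda, most of the 17 irrelevant coefficients should be exactly zero
sum(as.numeric(coef(lasso.demo, s = 0.5))[-1] != 0)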

#(b) Repeat (a) for ridge regression relative to least squares. iii is the correct answer.

Ridge regression's advantage over least squares is also rooted in the bias-variance trade-off. Ridge regression shrinks the coefficient estimates toward zero: as λ increases, the flexibility of the fit decreases, lowering variance but increasing bias. It therefore works best in situations where the least squares estimates have high variance.
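
The same kind of sketch for ridge regression (again simulated data, only for illustration): increasing λ shrinks every coefficient toward zero, lowering variance at the cost of bias, but never sets a coefficient exactly to zero.

library(glmnet)
set.seed(2)
X <- matrix(rnorm(100 * 20), 100, 20)
Y <- as.numeric(X[, 1:3] %*% c(3, -2, 1.5) + rnorm(100))
ridge.demo <- glmnet(X, Y, alpha = 0)                    # alpha = 0 fits ridge
b.small <- as.numeric(coef(ridge.demo, s = 1))[-1]       # smaller lambda
b.large <- as.numeric(coef(ridge.demo, s = 100))[-1]     # larger lambda
c(sum(abs(b.small)), sum(abs(b.large)))   # overall coefficient size shrinks as lambda grows
sum(b.large == 0)                         # but no coefficient is exactly zero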

#(c) Repeat (a) for non-linear methods relative to least squares. ii is the correct answer.

Non-linear methods are generally more flexible than least squares. They perform better when the linearity assumption is strongly violated, and their more sensitive fits to the training data come with higher variance; accuracy improves only when the increase in variance is smaller than the decrease in bias.
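
A small simulated sketch of this point (hypothetical data): when the true relationship is strongly non-linear, a flexible fit achieves a lower test error than least squares despite its higher variance.

set.seed(3)
x <- runif(200, -2, 2)
y <- x^3 + rnorm(200)                        # the truth is cubic, not linear
tr <- sample(200, 100)                       # train/test split
fit.lin  <- lm(y ~ x, subset = tr)           # least squares (linear)
fit.flex <- lm(y ~ poly(x, 5), subset = tr)  # a more flexible polynomial fit
mean((y[-tr] - predict(fit.lin,  data.frame(x = x[-tr])))^2)   # linear test MSE
mean((y[-tr] - predict(fit.flex, data.frame(x = x[-tr])))^2)   # flexible test MSE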

##9. In this exercise, we will predict the number of applications received using the other variables in the College data set.

#(a) Split the data set into a training set and a test set.

library(ISLR)
library(caret)
library(pls)
library(tidyverse)
data("College")
attach(College)
college=na.omit(College)
names(college)
##  [1] "Private"     "Apps"        "Accept"      "Enroll"      "Top10perc"  
##  [6] "Top25perc"   "F.Undergrad" "P.Undergrad" "Outstate"    "Room.Board" 
## [11] "Books"       "Personal"    "PhD"         "Terminal"    "S.F.Ratio"  
## [16] "perc.alumni" "Expend"      "Grad.Rate"
train<-sample(nrow(college), size=0.7*nrow(college))
test=(-train)
TRAIN=college[train,]
TEST=college[-train,] # the test set is every row not sampled into the training set
dim(college)
## [1] 777  18
dim(TRAIN)
## [1] 543  18
dim(TEST)
## [1] 234  18
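
As a side note, the split above is not reproducible because no seed is set before sample(); a hedged alternative, using caret (already loaded) with a seed, is sketched below. createDataPartition() and the names TRAIN2/TEST2 are only illustrative.

set.seed(4)
idx <- createDataPartition(college$Apps, p = 0.7, list = FALSE)  # ~70% of rows for training
TRAIN2 <- college[idx, ]
TEST2  <- college[-idx, ]
dim(TRAIN2); dim(TEST2)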

#(b) Fit a linear model using least squares on the training set, and report the test error obtained.

set.seed(4)
lm.college=lm(Apps~., data=college,subset=train)
lm.pred<-predict(lm.college, TEST)
lm.err <- mean((TEST$Apps-lm.pred)^2)
lm.err
## [1] 981856.1

The linear model test error (test MSE) is 981,856.1, i.e. a test RMSE of roughly 991 applications.

#(c) Fit a ridge regression model on the training set, with λ chosen by cross-validation. Report the test error obtained.

library(glmnet)
x=model.matrix(Apps~.,college[,-2])  
y=college$Apps
grid=10^seq(10,-2,length=100)
ridge.mod=glmnet(x,y,lambda = grid, alpha = 0)
dim(coef(ridge.mod))
## [1]  19 100
set.seed(4)
y.test=y[test]
ridge.mod <- glmnet(x[train,], y[train], alpha = 0, lambda = grid, thresh = 1e-12)
plot(ridge.mod)

cv.out=cv.glmnet(x[train,],y[train], alpha=0)
plot(cv.out)

best.lambda <- cv.out$lambda.min
best.lambda
## [1] 385.2118
ridge.pred=predict(ridge.mod,s=best.lambda,newx = x[test,])
ridge.err=mean((ridge.pred-y.test)^2)
ridge.err
## [1] 855103.2
out=glmnet(x,y,alpha = 0)
predict(out,type='coefficients', s=best.lambda)[1:18,]
##   (Intercept)   (Intercept)    PrivateYes        Accept        Enroll 
## -1.494932e+03  0.000000e+00 -5.287085e+02  9.894508e-01  4.515255e-01 
##     Top10perc     Top25perc   F.Undergrad   P.Undergrad      Outstate 
##  2.533056e+01  8.392125e-01  7.489374e-02  2.435065e-02 -2.252780e-02 
##    Room.Board         Books      Personal           PhD      Terminal 
##  1.993583e-01  1.323914e-01 -8.614583e-03 -3.881481e+00 -4.755417e+00 
##     S.F.Ratio   perc.alumni        Expend 
##  1.291136e+01 -8.708573e+00  7.553949e-02

With the best λ = 385.21 chosen by cross-validation, the ridge test error is 855,103.2, an improvement on the least squares test error of 981,856.1.
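
One note on the design matrix used above: model.matrix(Apps~., college[,-2]) removes the Apps column from the data (the formula still finds Apps only because College is attached) and keeps the intercept column in x, which is why a second, all-zero (Intercept) row shows up in the coefficient output. A sketch of the more conventional construction, which avoids the redundant column:

x2 <- model.matrix(Apps ~ ., college)[, -1]   # drop the intercept column instead
dim(x2)                                       # 777 rows, 17 predictor columns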

#(d) Fit a lasso model on the training set, with λ chosen by cross-validation. Report the test error obtained, along with the number of non-zero coefficient estimates.

set.seed(4)
lasso.mod=glmnet(x[train,],y[train], alpha=1, lambda=grid) # alpha = 1 fits the lasso
plot(lasso.mod)

cv.out=cv.glmnet(x[train,],y[train], alpha=1)
plot(cv.out)

bestlam=cv.out$lambda.min
bestlam
## [1] 2.05576
lasso.pred=predict(lasso.mod,s= bestlam,newx = x[test,])
lasso.err <- mean((lasso.pred-y.test)^2)
lasso.err
## [1] 855484.8
out=glmnet(x,y,alpha = 1, lambda = grid)
lasso.coef=predict(out,type='coefficients', s=bestlam)[1:18,]
lasso.coef
##   (Intercept)   (Intercept)    PrivateYes        Accept        Enroll 
## -469.61520967    0.00000000 -491.37359524    1.57069206   -0.76472898 
##     Top10perc     Top25perc   F.Undergrad   P.Undergrad      Outstate 
##   48.19620155  -12.90006983    0.04242692    0.04405439   -0.08331476 
##    Room.Board         Books      Personal           PhD      Terminal 
##    0.14955224    0.01529045    0.02907184   -8.41507750   -3.26360015 
##     S.F.Ratio   perc.alumni        Expend 
##   14.57582070   -0.03145777    0.07715099

With the best λ = 2.06 chosen by cross-validation, the lasso test error is 855,484.8, essentially the same as (very slightly higher than) the ridge model. At such a small λ virtually every predictor keeps a non-zero coefficient; the count is computed below.
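
Part (d) also asks for the number of non-zero coefficient estimates. A one-line sketch using the full coefficient vector (the [1:18,] index above truncates the printout, so counting from the complete vector is safer):

lasso.full <- predict(out, type = 'coefficients', s = bestlam)
sum(as.numeric(lasso.full)[-c(1, 2)] != 0)   # predictors with non-zero estimates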

#(e) Fit a PCR model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.

library(pls)
set.seed(4)
pcr.fit=pcr(Apps~.,data=college,scale=TRUE,validation="CV")
summary(pcr.fit)
## Data:    X dimension: 777 17 
##  Y dimension: 777 1
## Fit method: svdpc
## Number of components considered: 17
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV            3873     3840     2035     2037     1877     1587     1578
## adjCV         3873     3840     2033     2037     1735     1577     1576
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV        1568     1542     1493      1486      1493      1492      1497
## adjCV     1568     1537     1491      1484      1490      1489      1494
##        14 comps  15 comps  16 comps  17 comps
## CV         1496      1428      1154      1120
## adjCV      1494      1409      1148      1115
## 
## TRAINING: % variance explained
##       1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps
## X      31.670    57.30    64.30    69.90    75.39    80.38    83.99    87.40
## Apps    2.316    73.06    73.07    82.08    84.08    84.11    84.32    85.18
##       9 comps  10 comps  11 comps  12 comps  13 comps  14 comps  15 comps
## X       90.50     92.91     95.01     96.81      97.9     98.75     99.36
## Apps    85.88     86.06     86.06     86.10      86.1     86.13     90.32
##       16 comps  17 comps
## X        99.84    100.00
## Apps     92.52     92.92
validationplot(pcr.fit, val.type = "MSEP")

pcr.fit=pcr(Apps~.,data=college[train,],scale=TRUE,validation="CV")
validationplot(pcr.fit, val.type = "MSEP")

pcr.pred=predict(pcr.fit,college[test,], ncomp = 8)
mean((pcr.pred-college$Apps[test])^2)
## [1] 1734897
pcr.pred=predict(pcr.fit,college[test,], ncomp = 9)
mean((pcr.pred-college$Apps[test])^2)
## [1] 1350903
pcr.pred10=predict(pcr.fit,college[test,], ncomp = 10)
pcr.err <- mean((pcr.pred10-college$Apps[test])^2)

Based on the validation plot, M = 10 components were selected; the corresponding test error (pcr.err) is roughly 1.4 million (as implied by the test R² computed in part (g)), noticeably higher than the errors from least squares, ridge and the lasso.
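
Instead of reading M off the validation plot, the cross-validated error curve of the training-set fit can be inspected directly; a short sketch using pls::MSEP (the first entry of the curve is the intercept-only model, hence the minus one):

cv.msep <- MSEP(pcr.fit, estimate = "CV")$val
which.min(cv.msep) - 1   # number of components minimizing the CV error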

#(f) Fit a PLS model on the training set, with M chosen by cross-validation. Report the test error obtained, along with the value of M selected by cross-validation.

set.seed(4)
pls.fit=plsr(Apps~.,data=college[train,],scale=TRUE, validation="CV")
summary(pls.fit)
## Data:    X dimension: 543 17 
##  Y dimension: 543 1
## Fit method: kernelpls
## Number of components considered: 17
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV            4090     2016     1775     1594     1508     1335     1294
## adjCV         4090     2012     1774     1587     1491     1323     1282
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV        1278     1267     1259      1260      1262      1260      1257
## adjCV     1267     1257     1248      1249      1250      1249      1245
##        14 comps  15 comps  16 comps  17 comps
## CV         1257      1257      1257      1257
## adjCV      1246      1246      1246      1246
## 
## TRAINING: % variance explained
##       1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps
## X       26.24    45.27    63.00    65.60    68.69    73.64    77.53    81.09
## Apps    77.07    83.24    87.18    90.53    92.41    92.92    93.02    93.09
##       9 comps  10 comps  11 comps  12 comps  13 comps  14 comps  15 comps
## X       83.19     85.73     88.13     91.48     92.35     94.12     96.24
## Apps    93.15     93.18     93.21     93.22     93.23     93.23     93.23
##       16 comps  17 comps
## X        98.39    100.00
## Apps     93.23     93.23
validationplot(pls.fit)

pls.pred=predict(pls.fit,college[test,],ncomp = 7)
mean((pls.pred-college$Apps[test])^2)
## [1] 944748.2
pls.pred=predict(pls.fit,college[test,],ncomp = 10)
mean((pls.pred-college$Apps[test])^2)
## [1] 968917.4
pls.pred=predict(pls.fit,college[test,],ncomp = 5)
mean((pls.pred-college$Apps[test])^2)
## [1] 1151579
pls.pred=predict(pls.fit,college[test,],ncomp = 6)
mean((pls.pred-college$Apps[test])^2)
## [1] 945524
pls.pred10=predict(pls.fit,college[test,],ncomp = 10)
pls.err <- mean((pls.pred10-college$Apps[test])^2)

The cross-validated error flattens out at around M = 10 components; with M = 10 the PLS test error is 968,917.4 (with M = 7 it is slightly lower, at 944,748.2).

#(g) Comment on the results obtained. How accurately can we predict the number of college applications received? Is there much difference among the test errors resulting from these five approaches?

barplot(c(lm.err,ridge.err,lasso.err,pcr.err,pls.err),
        col = "gray",
        xlab = "Regression Methods",
        ylab = "Test Error",
        main = "Test Errors for All Methods",
        names.arg = c("LM", "Ridge", "Lasso", "PCR", "PLS"))

avg.Apps=mean(TEST$Apps)
lm.r2=1-mean((lm.pred-TEST$Apps)^2)/mean((avg.Apps-TEST$Apps)^2)
lm.r2
## [1] 0.9098118
ridge.r2=1-mean((ridge.pred-TEST$Apps)^2)/mean((avg.Apps-TEST$Apps)^2)
ridge.r2
## [1] 0.9214546
lasso.r2=1-mean((lasso.pred-TEST$Apps)^2)/mean((avg.Apps-TEST$Apps)^2)
lasso.r2
## [1] 0.9214196
pcr.r2=1-mean((pcr.pred10-TEST$Apps)^2)/mean((avg.Apps-TEST$Apps)^2)
pcr.r2
## [1] 0.8712233
pls.r2=1-mean((pls.pred10-TEST$Apps)^2)/mean((avg.Apps-TEST$Apps)^2)
pls.r2
## [1] 0.9110002
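
To make the comparison concrete, the test errors and test R² values can be gathered into a single table (a small sketch reusing the objects computed above):

data.frame(method  = c("LM", "Ridge", "Lasso", "PCR", "PLS"),
           testMSE = c(lm.err, ridge.err, lasso.err, pcr.err, pls.err),
           testR2  = c(lm.r2, ridge.r2, lasso.r2, pcr.r2, pls.r2))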

From the results above, ridge and the lasso give the lowest test errors and the highest test R² (about 0.921), followed closely by least squares and PLS (about 0.91); PCR with 10 components does noticeably worse (R² about 0.87). All five methods predict the number of applications reasonably accurately, and apart from PCR the differences among them are small.

##11. We will now try to predict the per capita crime rate in the Boston data set.

#(a) Try out some of the regression methods explored in this chapter, such as best subset selection, the lasso, ridge regression, and PCR. Present and discuss results for the approaches that you consider.

library(ISLR)
library(MASS)
library(glmnet)
library(leaps)
detach(College)
attach(Boston)
data=Boston
set.seed(8)
names(Boston)
##  [1] "crim"    "zn"      "indus"   "chas"    "nox"     "rm"      "age"    
##  [8] "dis"     "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"
sum(is.na(Boston))
## [1] 0
Boston=na.omit(Boston)
dim(Boston)
## [1] 506  14
x=model.matrix(crim~.,Boston[,-1])  
y=Boston$crim
grid=10^seq(10,-2,length=100)
ridge.mod=glmnet(x,y,lambda = grid, alpha = 0)
dim(coef(ridge.mod))
## [1]  15 100
train<-sample(nrow(Boston), size=0.6*nrow(Boston))
test=(-train)
y.test=y[test]
TRAIN=Boston[train,]
TEST=Boston[-train,]
dim(Boston)
## [1] 506  14
dim(TRAIN)
## [1] 303  14
dim(TEST)
## [1] 203  14

Best Subset Selection

regfit.full=regsubsets(crim~.,Boston,nvmax = 13)
reg.summary=summary(regfit.full)
names(reg.summary)
## [1] "which"  "rsq"    "rss"    "adjr2"  "cp"     "bic"    "outmat" "obj"
reg.summary$rsq
##  [1] 0.3912567 0.4207965 0.4286123 0.4334892 0.4392738 0.4440173 0.4476594
##  [8] 0.4504606 0.4524408 0.4530572 0.4535605 0.4540031 0.4540104
par(mfrow=c(2,2))
plot(reg.summary$rss,xlab = "Number of Variables",ylab = "RSS",type = "l")
plot(reg.summary$adjr2,xlab = "Number of Variables",ylab = "Adjr2",type = "l")
points(9,reg.summary$adjr2[9],col="blue",cex=2,pch=20)
plot(reg.summary$cp,xlab = "Number of Variables",ylab = "cp",type = "l")
points(8,reg.summary$cp[8],col="blue",cex=2,pch=20)
plot(reg.summary$bic,xlab = "Number of Variables",ylab = "bic",type = "l")
points(3,reg.summary$bic[3],col="blue",cex=2,pch=20)

which.max(reg.summary$adjr2)
## [1] 9
which.min(reg.summary$cp)
## [1] 8
which.min(reg.summary$bic)
## [1] 3
boston.bsm = regsubsets(crim ~ .,data = Boston[train,], nvmax = 13)
summary(boston.bsm)
## Subset selection object
## Call: regsubsets.formula(crim ~ ., data = Boston[train, ], nvmax = 13)
## 13 Variables  (and intercept)
##         Forced in Forced out
## zn          FALSE      FALSE
## indus       FALSE      FALSE
## chas        FALSE      FALSE
## nox         FALSE      FALSE
## rm          FALSE      FALSE
## age         FALSE      FALSE
## dis         FALSE      FALSE
## rad         FALSE      FALSE
## tax         FALSE      FALSE
## ptratio     FALSE      FALSE
## black       FALSE      FALSE
## lstat       FALSE      FALSE
## medv        FALSE      FALSE
## 1 subsets of each size up to 13
## Selection Algorithm: exhaustive
##           zn  indus chas nox rm  age dis rad tax ptratio black lstat medv
## 1  ( 1 )  " " " "   " "  " " " " " " " " "*" " " " "     " "   " "   " " 
## 2  ( 1 )  " " " "   " "  " " " " " " " " "*" " " " "     " "   "*"   " " 
## 3  ( 1 )  "*" " "   " "  " " " " " " " " "*" " " " "     " "   "*"   " " 
## 4  ( 1 )  "*" " "   " "  " " " " " " "*" "*" " " " "     " "   "*"   " " 
## 5  ( 1 )  "*" " "   " "  " " " " "*" "*" "*" " " " "     " "   "*"   " " 
## 6  ( 1 )  "*" "*"   " "  " " " " "*" "*" "*" " " " "     " "   "*"   " " 
## 7  ( 1 )  "*" "*"   "*"  " " " " "*" "*" "*" " " " "     " "   "*"   " " 
## 8  ( 1 )  "*" "*"   "*"  "*" " " "*" "*" "*" " " " "     " "   "*"   " " 
## 9  ( 1 )  "*" "*"   "*"  "*" " " "*" "*" "*" " " " "     " "   "*"   "*" 
## 10  ( 1 ) "*" " "   "*"  "*" " " "*" "*" "*" "*" "*"     " "   "*"   "*" 
## 11  ( 1 ) "*" "*"   "*"  "*" " " "*" "*" "*" "*" "*"     " "   "*"   "*" 
## 12  ( 1 ) "*" "*"   "*"  "*" " " "*" "*" "*" "*" "*"     "*"   "*"   "*" 
## 13  ( 1 ) "*" "*"   "*"  "*" "*" "*" "*" "*" "*" "*"     "*"   "*"   "*"
Train <- sample(c(TRUE, FALSE), nrow(Boston), replace = TRUE)
Test <- (!Train)
regfit.best <- regsubsets(crim ~ ., data = Boston[Train, ], nvmax = 13)
test.mat <- model.matrix(crim ~ ., data = Boston[Test, ])
val.errors <- rep(NA, 12)
for (i in 1:12) {
  coefi <- coef(regfit.best, id = i)
  pred <- test.mat[, names(coefi)] %*% coefi
  val.errors[i] <- mean((Boston$crim[Test] - pred)^2)
}
val.errors
##  [1] 58.76859 57.21571 56.45847 56.72875 57.49931 56.15835 57.11990 56.89021
##  [9] 56.07045 55.85418 55.84659 55.79470
which.min(val.errors)
## [1] 12
plot(val.errors, xlab = "Number of predictors in model", ylab = "Test MSE", pch = 19, type = "b")

# regsubsets() has no predict() method, so we define one for it here
predict.regsubsets=function(object, newdata, id, ...){
  form=as.formula(object$call[[2]])
  mat=model.matrix(form, newdata)
  coefi=coef(object, id=id)
  xvars=names(coefi)
  mat[, xvars] %*% coefi
}
regfit.best <- regsubsets (crim~., data = Boston ,nvmax = 12)
coef (regfit.best , 7)
##   (Intercept)            zn           nox           dis           rad 
##  22.711289450   0.044886656 -12.185035028  -1.017202266   0.541197849 
##       ptratio         black          medv 
##  -0.331185681  -0.008097571  -0.228833182
k <- 10
n <- nrow(Boston)
set.seed(8)
folds <- sample(rep(1:k, length = n))
cv.errors <- matrix(NA, k, 12, dimnames = list(NULL, paste(1:12)))
for (j in 1:k) {
  best.fit <- regsubsets(crim ~ ., data = Boston[folds != j, ], nvmax = 12)
  for (i in 1:12) {
    pred <- predict(best.fit, Boston[folds == j, ], id = i)
    cv.errors[j, i] <- mean((Boston$crim[folds == j] - pred)^2)
  }
}
mean.cv.errors <- apply(cv.errors, 2, mean)
mean.cv.errors
##        1        2        3        4        5        6        7        8 
## 45.48858 43.82839 44.30610 43.34916 43.98018 43.98637 43.27593 43.46415 
##        9       10       11       12 
## 42.82560 42.62594 42.60005 42.64713
which.min(mean.cv.errors)
## 11 
## 11
min(mean.cv.errors)
## [1] 42.60005
reg.best <- regsubsets (crim~., data =Boston ,nvmax = 12)
coef (reg.best , 11)
##   (Intercept)            zn         indus           nox            rm 
##  17.096652918   0.044858511  -0.069176572 -10.458590328   0.445708393 
##           dis           rad           tax       ptratio         black 
##  -0.997154027   0.583934313  -0.003454533  -0.265327998  -0.007599276 
##         lstat          medv 
##   0.127214918  -0.204431117
par (mfrow = c(1, 1))
plot (mean.cv.errors , type = "b")

Best subset selection reaches its minimum cross-validated MSE of 42.60 with the 11-variable model.

The Lasso

lasso.mod=glmnet(x[train,],y[train], alpha=1, lambda=grid) # alpha = 1 fits the lasso
plot(lasso.mod)

cv.out=cv.glmnet(x[train,],y[train], alpha=1)
plot(cv.out)

bestlam=cv.out$lambda.min
bestlam
## [1] 0.07349932
lasso.pred=predict(lasso.mod,s= bestlam,newx = x[test,])
mean((lasso.pred-y.test)^2)
## [1] 66.17522
lasso.err <- mean((lasso.pred-y.test)^2)
out=glmnet(x,y,alpha = 1, lambda = grid)
lasso.coef=predict(out,type='coefficients', s=bestlam)[1:13,]
lasso.coef
##  (Intercept)  (Intercept)           zn        indus         chas          nox 
## 11.172523225  0.000000000  0.034069571 -0.061464622 -0.550895956 -5.521435537 
##           rm          age          dis          rad          tax      ptratio 
##  0.139045671  0.000000000 -0.703531828  0.505446805  0.000000000 -0.151824863 
##        black 
## -0.007552768
lasso.err
## [1] 66.17522

The lasso test error is 66.18; at this λ the coefficients for age and tax are shrunk exactly to zero.

Ridge Regression

ridge.mod <- glmnet(x[train,], y[train], alpha = 0, lambda = grid, thresh = 1e-12)
plot(ridge.mod)

cv.out=cv.glmnet(x[train,],y[train], alpha=0)
plot(cv.out)

best.lambda <- cv.out$lambda.min
best.lambda
## [1] 0.5307245
ridge.pred=predict(ridge.mod,s=best.lambda,newx = x[test,])
mean((ridge.pred-y.test)^2)
## [1] 66.17445
ridge.err=mean((ridge.pred-y.test)^2)

The ridge regression test error is 66.17, essentially identical to the lasso.

PCR

library(pls)
pcr.fit=pcr(crim~.,data=Boston,scale=TRUE,validation="CV")
summary(pcr.fit)
## Data:    X dimension: 506 13 
##  Y dimension: 506 1
## Fit method: svdpc
## Number of components considered: 13
## 
## VALIDATION: RMSEP
## Cross-validated using 10 random segments.
##        (Intercept)  1 comps  2 comps  3 comps  4 comps  5 comps  6 comps
## CV            8.61    7.198    7.189    6.761    6.769    6.766    6.780
## adjCV         8.61    7.196    7.187    6.755    6.761    6.762    6.775
##        7 comps  8 comps  9 comps  10 comps  11 comps  12 comps  13 comps
## CV       6.772    6.630    6.657     6.651     6.662     6.616     6.552
## adjCV    6.766    6.623    6.650     6.643     6.653     6.606     6.541
## 
## TRAINING: % variance explained
##       1 comps  2 comps  3 comps  4 comps  5 comps  6 comps  7 comps  8 comps
## X       47.70    60.36    69.67    76.45    82.99    88.00    91.14    93.45
## crim    30.69    30.87    39.27    39.61    39.61    39.86    40.14    42.47
##       9 comps  10 comps  11 comps  12 comps  13 comps
## X       95.40     97.04     98.46     99.52     100.0
## crim    42.55     42.78     43.04     44.13      45.4
validationplot(pcr.fit, val.type = "MSEP")

pcr.fit=pcr(crim~.,data=Boston[train,],scale=TRUE,validation="CV")
validationplot(pcr.fit, val.type = "MSEP")

pcr.pred=predict(pcr.fit,Boston[test,], ncomp = 8)
mean((pcr.pred-Boston$crim[test])^2)
## [1] 67.26048
pcr.err8=mean((pcr.pred-Boston$crim[test])^2)
pcr.pred=predict(pcr.fit,Boston[test,], ncomp = 9)
mean((pcr.pred-Boston$crim[test])^2)
## [1] 65.35002
pcr.err9=mean((pcr.pred-Boston$crim[test])^2)
pcr.pred=predict(pcr.fit,Boston[test,], ncomp = 10)
mean((pcr.pred-Boston$crim[test])^2)
## [1] 68.12076
pcr.err10=mean((pcr.pred-Boston$crim[test])^2)

Among the values tried, PCR gives its lowest test error, 65.35, with M = 9 components.

#(b) Propose a model (or set of models) that seem to perform well on this data set, and justify your answer. Make sure that you are evaluating model performance using validation set error, cross-validation, or some other reasonable alternative, as opposed to using training error.

barplot(c(min(mean.cv.errors),ridge.err,lasso.err,pcr.err9),
        col = "gray",
        xlab = "Models",
        ylab = "Test Error",
        main = "Test Errors for models",
        names.arg = c("BSS (CV)", "Ridge", "Lasso", "PCR"))

#(c) Does your chosen model involve all of the features in the dataset? Why or why not?

No. The best subset selection model, which had the lowest cross-validated MSE, uses 11 of the 13 predictors (chas and age are dropped). The lasso, ridge regression and PCR give similar validation-set test errors (roughly 65 to 68); the lasso also removes variables (age and tax), while PCR needs only about 9 components, although each component is still a linear combination of all 13 predictors, so it does not simplify the model in terms of individual variables.