6. In this exercise, you will further analyze the Wage data set considered throughout this chapter.

(a) Perform polynomial regression to predict wage using age. Use cross-validation to select the optimal degree d for the polynomial. What degree was chosen, and how does this compare to the results of hypothesis testing using ANOVA? Make a plot of the resulting polynomial fit to the data.

First, I will use the k-fold cross-validation approach to estimate the test error for each polynomial degree.

library(ISLR)
library(boot)
attach(Wage)
set.seed(4)
# 10-fold CV error for polynomial degrees 1 through 10
cv.error.10 = rep(0, 10)
for (i in 1:10) {
  glm.fit = glm(wage ~ poly(age, i), data = Wage)
  cv.error.10[i] = cv.glm(Wage, glm.fit, K = 10)$delta[1]
}
print(cv.error.10)
##  [1] 1676.668 1601.984 1596.690 1594.591 1596.010 1594.852 1594.793 1596.763
##  [9] 1594.941 1595.876
plot(1:10, cv.error.10, xlab = "Degree", ylab = "Test MSE", type = "l")
d.min = which.min(cv.error.10)
points(d.min, cv.error.10[d.min], col = "red", cex = 2, pch = 20)

According to my CV error plot, d = 4 is the optimal degree to choose.

fit.1 = lm(wage~poly(age, 1), data=Wage)
fit.2 = lm(wage~poly(age, 2), data=Wage)
fit.3 = lm(wage~poly(age, 3), data=Wage)
fit.4 = lm(wage~poly(age, 4), data=Wage)
fit.5 = lm(wage~poly(age, 5), data=Wage)

anova(fit.1, fit.2, fit.3, fit.4, fit.5)
## Analysis of Variance Table
## 
## Model 1: wage ~ poly(age, 1)
## Model 2: wage ~ poly(age, 2)
## Model 3: wage ~ poly(age, 3)
## Model 4: wage ~ poly(age, 4)
## Model 5: wage ~ poly(age, 5)
##   Res.Df     RSS Df Sum of Sq        F    Pr(>F)    
## 1   2998 5022216                                    
## 2   2997 4793430  1    228786 143.5931 < 2.2e-16 ***
## 3   2996 4777674  1     15756   9.8888  0.001679 ** 
## 4   2995 4771604  1      6070   3.8098  0.051046 .  
## 5   2994 4770322  1      1283   0.8050  0.369682    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

How does this compare with the ANOVA output? The ANOVA table provides evidence that a cubic or quartic polynomial gives a reasonable fit to the data, while lower- or higher-order terms are not justified. Overall, the two approaches point to a similar degree, although one could reasonably opt for d = 3 instead of d = 4, since the CV errors for those degrees are nearly identical (see the quick check below).
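As a quick numerical check (not required by the exercise, and reusing the cv.error.10 vector computed above), the CV errors for degrees 3 through 10 all lie within a couple of MSE units of the minimum:

# Difference of each degree's CV error from the minimum, and the minimizing degree
round(cv.error.10 - min(cv.error.10), 1)
which.min(cv.error.10)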

fit = lm(wage~poly(age,4),data = Wage)
agelims = range(age)
age.grid = seq(from = agelims[1],to = agelims[2])
preds = predict(fit,newdata = list(age = age.grid),se = TRUE)
se.bands = cbind(preds$fit+2*preds$se.fit,preds$fit-2*preds$se.fit)
par(mfrow = c(1,1),mar = c(4.5,4.5,1,1),oma = c(0,0,2,0))
plot(age,wage,xlim=agelims,cex =.5,col = "darkgrey")
lines(age.grid,preds$fit,lwd = 2,col = "darkblue")
matlines(age.grid,se.bands,lwd = 1,col = "lightblue",lty = 3)
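Since par() above reserves space in the outer margin, an overall title can optionally be added, as in the corresponding ISLR lab:

title("Degree-4 Polynomial Fit", outer = TRUE)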

(b) Fit a step function to predict wage using age, and perform cross-validation to choose the optimal number of cuts. Make a plot of the fit obtained.

# 10-fold CV error for step functions with 2 through 10 cuts
cvs <- rep(NA, 10)
for (i in 2:10) {
    Wage$age.cut = cut(Wage$age, i)
    fit = glm(wage ~ age.cut, data = Wage)
    cvs[i] = cv.glm(Wage, fit, K = 10)$delta[1]
}

plot(2:10, cvs[-1], xlab = "Cuts", ylab = "Test MSE", type = "l")
d.min = which.min(cvs)
points(d.min, cvs[d.min], col = "red", cex = 2, pch = 20)

The CV plot suggests that the optimal number of cuts is 8.

plot(wage ~ age, data = Wage, col = "darkgrey")
agelims = range(Wage$age)
age.grid = seq(from = agelims[1], to = agelims[2])
# Refit the 8-cut step function on the full data and overlay the fitted values
fit = glm(wage ~ cut(age, 8), data = Wage)
preds = predict(fit, data.frame(age = age.grid))
lines(age.grid, preds, col = "red", lwd = 2)
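For reference (a quick sanity check, not asked for by the exercise), the fitted mean wage in each age bin and the bin boundaries can be inspected directly:

coef(fit)                # intercept = mean wage in the first bin; other coefficients are offsets
table(cut(Wage$age, 8))  # number of observations in each age interval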

10. This question relates to the College data set.

(a) Split the data into a training set and a test set. Using out-of-state tuition as the response and the other variables as the predictors, perform forward stepwise selection on the training set in order to identify a satisfactory model that uses just a subset of the predictors.

library(leaps)
set.seed(5)
attach(College)
train = sample(length(Outstate), length(Outstate) / 2)
test = -train
College.train = College[train, ]
College.test = College[test, ]
# Forward stepwise selection on the training half, allowing up to 17 predictors
fit = regsubsets(Outstate ~ ., data = College.train, nvmax = 17, method = "forward")
fit.summary = summary(fit)
par(mfrow = c(1, 3))
# Dashed red lines mark a band of 0.2 standard deviations around the best value
# of each criterion, used as a "close enough to optimal" threshold
plot(fit.summary$cp, xlab = "Number of variables", ylab = "Cp", type = "l")
min.cp = min(fit.summary$cp)
std.cp = sd(fit.summary$cp)
abline(h = min.cp + 0.2 * std.cp, col = "red", lty = 2)
abline(h = min.cp - 0.2 * std.cp, col = "red", lty = 2)
plot(fit.summary$bic, xlab = "Number of variables", ylab = "BIC", type='l')
min.bic = min(fit.summary$bic)
std.bic = sd(fit.summary$bic)
abline(h = min.bic + 0.2 * std.bic, col = "red", lty = 2)
abline(h = min.bic - 0.2 * std.bic, col = "red", lty = 2)
plot(fit.summary$adjr2, xlab = "Number of variables", ylab = "Adjusted R2", type = "l", ylim = c(0.4, 0.84))
max.adjr2 = max(fit.summary$adjr2)
std.adjr2 = sd(fit.summary$adjr2)
abline(h = max.adjr2 + 0.2 * std.adjr2, col = "red", lty = 2)
abline(h = max.adjr2 - 0.2 * std.adjr2, col = "red", lty = 2)

Small values of Cp and BIC indicate a low estimated test error, and a large adjusted R2 indicates a better model. According to the Cp, BIC, and adjusted R2 plots (each drawn against the number of variables), we can build a satisfactory model from the 6-predictor subset produced by forward selection. A model with more variables might achieve a slightly higher adjusted R2 or lower Cp, but the gains diminish quickly and we would be needlessly sacrificing interpretability for very minor improvements. A sketch of this "within a threshold of the best" rule is given below.
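A minimal sketch of that rule, reusing the fit.summary, min.cp, std.cp, min.bic, std.bic, max.adjr2, and std.adjr2 objects computed above. Each line returns the smallest model size whose criterion falls within 0.2 standard deviations of the best value (the dashed lines in the plots):

# Smallest model size within 0.2 sd of the best Cp, BIC, and adjusted R2
which(fit.summary$cp < min.cp + 0.2 * std.cp)[1]
which(fit.summary$bic < min.bic + 0.2 * std.bic)[1]
which(fit.summary$adjr2 > max.adjr2 - 0.2 * std.adjr2)[1]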

# Refit forward selection on the full data set and report the chosen 6-variable model
fit = regsubsets(Outstate ~ ., data = College, method = "forward")
coef(fit, 6)
##   (Intercept)    PrivateYes    Room.Board           PhD   perc.alumni 
## -3553.2345268  2768.6347025     0.9679086    35.5283359    48.4221031 
##        Expend     Grad.Rate 
##     0.2210255    29.7119093

(b) Fit a GAM on the training data, using out-of-state tuition as the response and the features selected in the previous step as the predictors. Plot the results, and explain your findings.

library(gam)
## Loading required package: splines
## Loading required package: foreach
## Loaded gam 1.20
# GAM with smoothing splines on the continuous predictors selected above
fit = gam(Outstate ~ Private + s(Room.Board, df = 2) + s(PhD, df = 2) + s(perc.alumni, df = 2) + s(Expend, df = 5) + s(Grad.Rate, df = 2), data = College.train)
par(mfrow = c(2, 3))
plot(fit, se = T, col = "blue")

(c) Evaluate the model obtained on the test set, and explain the results obtained.

preds = predict(fit, College.test)
err = mean((College.test$Outstate - preds)^2)                          # test MSE
tss = mean((College.test$Outstate - mean(College.test$Outstate))^2)    # total variation in the test response
test.r2 = 1 - err / tss                                                # test R-squared
test.r2
## [1] 0.7682627

We obtain a test R-squared of about 0.77 using the GAM with 6 predictors.
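For context (not part of the exercise, and assuming the same six predictors), an ordinary least squares fit provides a baseline test R-squared to compare the GAM against:

# Baseline: plain linear model with the same six predictors
lm.fit = lm(Outstate ~ Private + Room.Board + PhD + perc.alumni + Expend + Grad.Rate,
            data = College.train)
lm.preds = predict(lm.fit, College.test)
1 - mean((College.test$Outstate - lm.preds)^2) / tss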

(d) For which variables, if any, is there evidence of a non-linear relationship with the response?

summary(fit)
## 
## Call: gam(formula = Outstate ~ Private + s(Room.Board, df = 2) + s(PhD, 
##     df = 2) + s(perc.alumni, df = 2) + s(Expend, df = 5) + s(Grad.Rate, 
##     df = 2), data = College.train)
## Deviance Residuals:
##      Min       1Q   Median       3Q      Max 
## -7269.18 -1055.46    81.02  1249.63  4367.45 
## 
## (Dispersion Parameter for gaussian family taken to be 3442511)
## 
##     Null Deviance: 6473143184 on 387 degrees of freedom
## Residual Deviance: 1284057185 on 373.0002 degrees of freedom
## AIC: 6957.863 
## 
## Number of Local Scoring Iterations: NA 
## 
## Anova for Parametric Effects
##                         Df     Sum Sq    Mean Sq F value    Pr(>F)    
## Private                  1 1818777379 1818777379 528.329 < 2.2e-16 ***
## s(Room.Board, df = 2)    1 1309744031 1309744031 380.462 < 2.2e-16 ***
## s(PhD, df = 2)           1  544317371  544317371 158.116 < 2.2e-16 ***
## s(perc.alumni, df = 2)   1  387170220  387170220 112.467 < 2.2e-16 ***
## s(Expend, df = 5)        1  436629462  436629462 126.835 < 2.2e-16 ***
## s(Grad.Rate, df = 2)     1  103952245  103952245  30.197 7.246e-08 ***
## Residuals              373 1284057185    3442511                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Anova for Nonparametric Effects
##                        Npar Df  Npar F     Pr(F)    
## (Intercept)                                         
## Private                                             
## s(Room.Board, df = 2)        1  0.8452    0.3585    
## s(PhD, df = 2)               1  0.2365    0.6270    
## s(perc.alumni, df = 2)       1  0.2416    0.6233    
## s(Expend, df = 5)            4 14.4479 5.608e-11 ***
## s(Grad.Rate, df = 2)         1  1.1919    0.2757    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

According to the summary output, the one variable with strong evidence of a non-linear relationship with the response is Expend (the "Anova for Nonparametric Effects" p-value is 5.608e-11). The non-parametric tests for the other smoothed terms are not significant, suggesting linear terms would suffice for them.
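One way to double-check this finding (a sketch, not the textbook's prescribed approach) is to refit the GAM with Expend entered linearly and compare the two models with an F-test; a small p-value would again favor the non-linear term:

# GAM identical to 'fit' above except that Expend enters linearly
fit.lin.expend = gam(Outstate ~ Private + s(Room.Board, df = 2) + s(PhD, df = 2) +
                       s(perc.alumni, df = 2) + Expend + s(Grad.Rate, df = 2),
                     data = College.train)
anova(fit.lin.expend, fit, test = "F")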