6

In this exercise, you will further analyze the Wage data set considered throughout this chapter.

library(ISLR)   # Wage and College data sets
library(boot)   # cv.glm()
library(leaps)  # regsubsets()
library(gam)    # gam() and s()

Wage <- Wage    # copy the data set into the workspace

6-A)

Perform polynomial regression to predict wage using age. Use cross-validation to select the optimal degree d for the polynomial. What degree was chosen, and how does this compare to the results of hypothesis testing using ANOVA? Make a plot of the resulting polynomial fit to the data.

set.seed(1)

cv.error <- rep(NA, 10)
# 10-fold CV error for polynomial degrees 1 through 10
for (i in 1:10) {
  glm.fit <- glm(wage ~ poly(age, i), data = Wage)
  cv.error[i] <- cv.glm(Wage, glm.fit, K = 10)$delta[1]  # raw CV estimate
}
#cv.error

plot(1:10, cv.error, xlab = 'Polynomial Degree', ylab = 'CV Error', type = 'l')
minpoint <- which.min(cv.error)
points(minpoint, cv.error[minpoint], col = 'red', cex = 2, pch=20)

cat('Polynomial Degree with lowest CV error: ', minpoint)
## Polynomial Degree with lowest CV error:  9
cat("\nLowest CV Error: ", cv.error[minpoint])
## 
## Lowest CV Error:  1593.913

The degree with the lowest CV error is 9, with a CV error of 1593.913.
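Before accepting a degree-9 polynomial, it is worth checking how close the lower-degree CV errors are. A quick look at the stored errors is enough (no refitting required; the exact values depend on the random fold assignment):

# Compare a few lower-degree CV errors with the degree-9 minimum
cv.error[c(2, 3, 4)]              # degrees 2-4
cv.error[minpoint]                # degree 9 (the minimum)
cv.error[3] - cv.error[minpoint]  # how much degree 9 improves on a cubic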

Let’s compare this with hypothesis testing using ANOVA.

fit.1 = lm(wage~poly(age, 1), data = Wage)
fit.2 = lm(wage~poly(age, 2), data = Wage)
fit.3 = lm(wage~poly(age, 3), data = Wage)
fit.4 = lm(wage~poly(age, 4), data = Wage)
fit.5 = lm(wage~poly(age, 5), data = Wage)
fit.6 = lm(wage~poly(age, 6), data = Wage)
fit.7 = lm(wage~poly(age, 7), data = Wage)
fit.8 = lm(wage~poly(age, 8), data = Wage)
fit.9 = lm(wage~poly(age, 9), data = Wage)
fit.10 = lm(wage~poly(age, 10), data = Wage)
anova(fit.1, fit.2, fit.3, fit.4, fit.5, fit.6, fit.7, fit.8, fit.9, fit.10)
## Analysis of Variance Table
## 
## Model  1: wage ~ poly(age, 1)
## Model  2: wage ~ poly(age, 2)
## Model  3: wage ~ poly(age, 3)
## Model  4: wage ~ poly(age, 4)
## Model  5: wage ~ poly(age, 5)
## Model  6: wage ~ poly(age, 6)
## Model  7: wage ~ poly(age, 7)
## Model  8: wage ~ poly(age, 8)
## Model  9: wage ~ poly(age, 9)
## Model 10: wage ~ poly(age, 10)
##    Res.Df     RSS Df Sum of Sq        F    Pr(>F)    
## 1    2998 5022216                                    
## 2    2997 4793430  1    228786 143.7638 < 2.2e-16 ***
## 3    2996 4777674  1     15756   9.9005  0.001669 ** 
## 4    2995 4771604  1      6070   3.8143  0.050909 .  
## 5    2994 4770322  1      1283   0.8059  0.369398    
## 6    2993 4766389  1      3932   2.4709  0.116074    
## 7    2992 4763834  1      2555   1.6057  0.205199    
## 8    2991 4763707  1       127   0.0796  0.777865    
## 9    2990 4756703  1      7004   4.4014  0.035994 *  
## 10   2989 4756701  1         3   0.0017  0.967529    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The ANOVA comparisons show that adding the quadratic and cubic terms gives significant improvements at the p = 0.05 threshold, and the step from degree 8 to degree 9 is also significant (p ≈ 0.036), while degree 4 is only marginal (p ≈ 0.051). The degree-9 fit becomes unstable at the ends of the age range (see the figure below), and a degree-2 model may be too simplistic, so a cubic (degree-3) model looks like the best compromise.

agelims <- range(Wage$age)
plot(wage~age, data=Wage, col="gray", main ="Polynomial Degree 2,3,9")
age.grid <- seq(from=agelims[1], to=agelims[2])
lm.fit0 <- lm(wage~poly(age, 2), data = Wage)
lm.fit1 <- lm(wage~poly(age, 3), data = Wage)
lm.pred0 <- predict(lm.fit0, data.frame(age = age.grid))
lm.pred1 <- predict(lm.fit1, data.frame(age = age.grid))
lines(age.grid, lm.pred0, col = "green", lwd = 2)
lines(age.grid, lm.pred1, col = "blue", lwd = 2)
lm.fit2 <- lm(wage~poly(age, 9), data = Wage)
lm.pred2 <- predict(lm.fit2, data.frame(age = age.grid))
lines(age.grid, lm.pred2, col = "red", lwd = 2)
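A legend is not part of the original chunk, but one could be added while the plotting device is still open, for example:

legend("topright", legend = c("Degree 2", "Degree 3", "Degree 9"),
       col = c("green", "blue", "red"), lwd = 2)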

6-B)

Fit a step function to predict wage using age, and perform cross-validation to choose the optimal number of cuts. Make a plot of the fit obtained.

set.seed(1)
cv.err <- rep(NA, 10)
# 10-fold CV error for step functions with 2 through 10 cuts
# (a single interval has no predictor, so we start at 2 cuts)
for (i in 2:10) {
  Wage$age.cut <- cut(Wage$age, i)
  cv.fit <- glm(wage ~ age.cut, data = Wage)
  cv.err[i] <- cv.glm(Wage, cv.fit, K = 10)$delta[2]  # bias-corrected CV estimate
}
#cv.err
plot(2:10, cv.err[-1], main = "CV Error vs. Number of Cuts", xlab = "Number of Cuts", ylab ="CV Error", type ="l")
cv.err.min <- which.min(cv.err)
points(cv.err.min, cv.err[cv.err.min], col ="blue", cex = 2, pch = 20)

cat("Optimal number of cuts: ", cv.err.min)
## Optimal number of cuts:  8
cat("\nCV Error at Optimal Point: ", cv.err[cv.err.min])
## 
## CV Error at Optimal Point:  1600.896

The optimal number of cuts is 8, with a CV Error of 1600.896.
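For a rough comparison with part A (rough because part A stored the raw CV estimate, delta[1], while this part stored the bias-corrected delta[2]), the two minima can be printed side by side:

# Best polynomial CV error (part A) vs. best step-function CV error (part B)
c(polynomial = min(cv.error), step = min(cv.err, na.rm = TRUE))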

The plot using 8 cuts follows:

lm.fit.step <- glm(wage~cut(age, 8), data=Wage)
agelims = range(Wage$age)
age.grid = seq(from=agelims[1], to=agelims[2])
lm.pred.step = predict(lm.fit.step, data.frame(age=age.grid))
plot(wage~age, data=Wage, col="grey")
lines(age.grid, lm.pred.step, col="red", lwd=2)
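To see where the eight age intervals fall, and how many observations land in each, we can tabulate the cut factor:

table(cut(Wage$age, 8))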

10

This question relates to the College data set.

10-A) Split the data into a training set and a test set. Using out-of-state tuition as the response and the other variables as the predictors, perform forward stepwise selection on the training set in order to identify a satisfactory model that uses just a subset of the predictors.

set.seed(1)
College <- College  # copy the data set into the workspace
# 80/20 train/test split
Index <- sample(length(College$Outstate), 0.8 * length(College$Outstate))
College.train <- College[Index, ]
College.test <- College[-Index, ]
reg.fit <- regsubsets(Outstate ~ ., data = College.train, nvmax = 17, method = "forward")
reg.fit.summary <- summary(reg.fit)
reg.fit.summary
## Subset selection object
## Call: regsubsets.formula(Outstate ~ ., data = College.train, nvmax = 17, 
##     method = "forward")
## 17 Variables  (and intercept)
##             Forced in Forced out
## PrivateYes      FALSE      FALSE
## Apps            FALSE      FALSE
## Accept          FALSE      FALSE
## Enroll          FALSE      FALSE
## Top10perc       FALSE      FALSE
## Top25perc       FALSE      FALSE
## F.Undergrad     FALSE      FALSE
## P.Undergrad     FALSE      FALSE
## Room.Board      FALSE      FALSE
## Books           FALSE      FALSE
## Personal        FALSE      FALSE
## PhD             FALSE      FALSE
## Terminal        FALSE      FALSE
## S.F.Ratio       FALSE      FALSE
## perc.alumni     FALSE      FALSE
## Expend          FALSE      FALSE
## Grad.Rate       FALSE      FALSE
## 1 subsets of each size up to 17
## Selection Algorithm: forward
##           PrivateYes Apps Accept Enroll Top10perc Top25perc F.Undergrad
## 1  ( 1 )  " "        " "  " "    " "    " "       " "       " "        
## 2  ( 1 )  "*"        " "  " "    " "    " "       " "       " "        
## 3  ( 1 )  "*"        " "  " "    " "    " "       " "       " "        
## 4  ( 1 )  "*"        " "  " "    " "    " "       " "       " "        
## 5  ( 1 )  "*"        " "  " "    " "    " "       " "       " "        
## 6  ( 1 )  "*"        " "  " "    " "    " "       " "       " "        
## 7  ( 1 )  "*"        " "  " "    " "    " "       " "       " "        
## 8  ( 1 )  "*"        " "  "*"    " "    " "       " "       " "        
## 9  ( 1 )  "*"        "*"  "*"    " "    " "       " "       " "        
## 10  ( 1 ) "*"        "*"  "*"    " "    "*"       " "       " "        
## 11  ( 1 ) "*"        "*"  "*"    "*"    "*"       " "       " "        
## 12  ( 1 ) "*"        "*"  "*"    "*"    "*"       " "       " "        
## 13  ( 1 ) "*"        "*"  "*"    "*"    "*"       " "       " "        
## 14  ( 1 ) "*"        "*"  "*"    "*"    "*"       " "       "*"        
## 15  ( 1 ) "*"        "*"  "*"    "*"    "*"       "*"       "*"        
## 16  ( 1 ) "*"        "*"  "*"    "*"    "*"       "*"       "*"        
## 17  ( 1 ) "*"        "*"  "*"    "*"    "*"       "*"       "*"        
##           P.Undergrad Room.Board Books Personal PhD Terminal S.F.Ratio
## 1  ( 1 )  " "         " "        " "   " "      " " " "      " "      
## 2  ( 1 )  " "         " "        " "   " "      " " " "      " "      
## 3  ( 1 )  " "         "*"        " "   " "      " " " "      " "      
## 4  ( 1 )  " "         "*"        " "   " "      " " " "      " "      
## 5  ( 1 )  " "         "*"        " "   " "      "*" " "      " "      
## 6  ( 1 )  " "         "*"        " "   " "      "*" " "      " "      
## 7  ( 1 )  " "         "*"        " "   "*"      "*" " "      " "      
## 8  ( 1 )  " "         "*"        " "   "*"      "*" " "      " "      
## 9  ( 1 )  " "         "*"        " "   "*"      "*" " "      " "      
## 10  ( 1 ) " "         "*"        " "   "*"      "*" " "      " "      
## 11  ( 1 ) " "         "*"        " "   "*"      "*" " "      " "      
## 12  ( 1 ) " "         "*"        " "   "*"      "*" "*"      " "      
## 13  ( 1 ) " "         "*"        " "   "*"      "*" "*"      "*"      
## 14  ( 1 ) " "         "*"        " "   "*"      "*" "*"      "*"      
## 15  ( 1 ) " "         "*"        " "   "*"      "*" "*"      "*"      
## 16  ( 1 ) "*"         "*"        " "   "*"      "*" "*"      "*"      
## 17  ( 1 ) "*"         "*"        "*"   "*"      "*" "*"      "*"      
##           perc.alumni Expend Grad.Rate
## 1  ( 1 )  " "         "*"    " "      
## 2  ( 1 )  " "         "*"    " "      
## 3  ( 1 )  " "         "*"    " "      
## 4  ( 1 )  "*"         "*"    " "      
## 5  ( 1 )  "*"         "*"    " "      
## 6  ( 1 )  "*"         "*"    "*"      
## 7  ( 1 )  "*"         "*"    "*"      
## 8  ( 1 )  "*"         "*"    "*"      
## 9  ( 1 )  "*"         "*"    "*"      
## 10  ( 1 ) "*"         "*"    "*"      
## 11  ( 1 ) "*"         "*"    "*"      
## 12  ( 1 ) "*"         "*"    "*"      
## 13  ( 1 ) "*"         "*"    "*"      
## 14  ( 1 ) "*"         "*"    "*"      
## 15  ( 1 ) "*"         "*"    "*"      
## 16  ( 1 ) "*"         "*"    "*"      
## 17  ( 1 ) "*"         "*"    "*"

Let’s compare the candidate models on three performance metrics: adjusted R^2, Mallow’s Cp, and BIC.

#Adjusted R^2
reg.fit.summary$adjr2
##  [1] 0.4444624 0.5948054 0.6731989 0.7172554 0.7345580 0.7430221 0.7460647
##  [8] 0.7481164 0.7506451 0.7536999 0.7566230 0.7581630 0.7585695 0.7584475
## [15] 0.7582785 0.7579074 0.7575276
cat("\nModel with highest adjusted R^2:", which.max(reg.fit.summary$adjr2))
## 
## Model with highest adjusted R^2: 13
cat("\nValue of maximum adjusted R^2: ", reg.fit.summary$adjr2[which.max(reg.fit.summary$adjr2)])
## 
## Value of maximum adjusted R^2:  0.7585695
#Mallow's Cp
reg.fit.summary$cp
##  [1] 801.21399 417.73714 218.58432 107.31125  64.25938  43.73149  36.97962
##  [8]  32.75369  27.34288  20.62951  14.27192  11.40667  11.39167  12.70098
## [15]  14.12632  16.05374  18.00000
cat("\nModel with lowest Mallow's Cp:", which.min(reg.fit.summary$cp))
## 
## Model with lowest Mallow's Cp: 13
cat("\nValue of lowest Mallow's Cp: ", reg.fit.summary$cp[which.min(reg.fit.summary$cp)])
## 
## Value of lowest Mallow's Cp:  11.39167
#BIC
reg.fit.summary$bic
##  [1] -353.1753 -543.7163 -671.8155 -756.3170 -790.1093 -804.8128 -806.7901
##  [8] -806.4106 -807.2607 -809.5011 -811.5029 -810.0340 -805.6696 -799.9483
## [15] -794.1084 -787.7518 -781.3758
cat("\nModel with lowest BIC:", which.min(reg.fit.summary$bic))
## 
## Model with lowest BIC: 11
cat("\nValue of lowest BIC: ", reg.fit.summary$bic[which.min(reg.fit.summary$bic)])
## 
## Value of lowest BIC:  -811.5029

Let’s plot these.

par(mfrow = c(1, 3))
plot(reg.fit.summary$adjr2, xlab = "Number of Variables", ylab = "Adjusted R^2", type = "l")
points(13, reg.fit.summary$adjr2[13], col ='blue', cex = 2, pch = 20)
abline(h = max(reg.fit.summary$adjr2) + 0.2 * sd(reg.fit.summary$adjr2), col = "red", lty = 2)
abline(h = max(reg.fit.summary$adjr2) - 0.2 * sd(reg.fit.summary$adjr2), col = "red", lty = 2)

plot(reg.fit.summary$cp, xlab = "Number of Variables", ylab = "Mallow's Cp", type = "l")
points(13, reg.fit.summary$cp[13], col ='blue', cex = 2, pch = 20)
abline(h = min(reg.fit.summary$cp) + 0.2 * sd(reg.fit.summary$cp), col = "red", lty = 2)
abline(h = min(reg.fit.summary$cp) - 0.2 * sd(reg.fit.summary$cp), col = "red", lty = 2)

plot(reg.fit.summary$bic, xlab = "Number of Variables", ylab = "BIC", type = "l")
points(11, reg.fit.summary$bic[11], col ='blue', cex = 2, pch = 20)
abline(h = min(reg.fit.summary$bic) + 0.2 * sd(reg.fit.summary$bic), col = "red", lty = 2)
abline(h = min(reg.fit.summary$bic) - 0.2 * sd(reg.fit.summary$bic), col = "red", lty = 2)

Based on the minimum/maximum values above, we could go with a 13-variable model, but that is fairly complex. If we instead accept any model whose scores fall within 0.2 standard deviations of the best value (the dashed bands in the plots), a model with around 8 variables is enough; one way to check that rule is sketched below. The next step is then to identify which variables go into the 8-variable model.
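As a sketch (not part of the original analysis), the 0.2-standard-deviation rule can be applied across all three criteria at once by asking for the smallest model whose adjusted R^2, Cp and BIC all lie within the bands:

# Smallest model size whose three criteria are all within 0.2 SD of the best value
within.band <- (max(reg.fit.summary$adjr2) - reg.fit.summary$adjr2 <= 0.2 * sd(reg.fit.summary$adjr2)) &
  (reg.fit.summary$cp - min(reg.fit.summary$cp) <= 0.2 * sd(reg.fit.summary$cp)) &
  (reg.fit.summary$bic - min(reg.fit.summary$bic) <= 0.2 * sd(reg.fit.summary$bic))
which(within.band)[1]

The variables selected for the 8-variable model are given by its coefficients: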

coef(reg.fit, 8)
##   (Intercept)    PrivateYes        Accept    Room.Board      Personal 
## -2721.1783874  2938.1932552     0.1041922     0.9261394    -0.4080624 
##           PhD   perc.alumni        Expend     Grad.Rate 
##    34.6535029    55.7425360     0.2089285    23.2330647

We would therefore include Private (the PrivateYes dummy), Accept, Room.Board, Personal, PhD, perc.alumni, Expend, and Grad.Rate in the simplified model.
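Rather than typing the names by hand, the formula for this model could also be built directly from the regsubsets coefficients (a small sketch; note that the PrivateYes dummy must be mapped back to the Private factor):

vars.8 <- names(coef(reg.fit, 8))[-1]              # drop "(Intercept)"
vars.8 <- sub("^PrivateYes$", "Private", vars.8)   # dummy level back to the factor name
reformulate(vars.8, response = "Outstate")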

10-B)

Fit a GAM on the training data, using out-of-state tuition as the response and the features selected in the previous step as the predictors. Plot the results, and explain your findings.

gam.1 <- gam(Outstate ~ Private + s(Accept, 2) + s(Room.Board, 2) + s(Personal, 2) + s(PhD, 2) + s(perc.alumni, 2) + 
               s(Expend, 2) + s(Grad.Rate, 2), data = College.train)

par(mfrow = c(2, 4))  # eight panels: one per term in the model
plot(gam.1, se = TRUE, col = 'green')

The plots suggest non-linear relationships with Outstate for Personal, PhD and Grad.Rate.

10-C)

Evaluate the model obtained on the test set, and explain the results obtained.

gam.1.pred <- predict(gam.1, College.test)
gam.1.err <- mean((College.test$Outstate - gam.1.pred)^2)
cat("GAM mean squared error: ", gam.1.err)
## GAM mean squared error:  3251932
gam.1.tss <- mean((College.test$Outstate - mean(College.test$Outstate))^2)
gam.1.r2 <- 1 - gam.1.err / gam.1.tss
cat("\nGAM test R^2: ", gam.1.r2)
## 
## GAM test R^2:  0.7681916

With a model of 8 predictors, we obtain a test R^2 of approximately 77% and a test mean squared error of 3,251,932.
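To gauge how much the smooth terms actually help, one could compare against an ordinary least squares fit on the same eight predictors, evaluated on the same test set (results not shown here; the gap will depend on the train/test split):

lm.8 <- lm(Outstate ~ Private + Accept + Room.Board + Personal + PhD +
             perc.alumni + Expend + Grad.Rate, data = College.train)
lm.8.pred <- predict(lm.8, College.test)
lm.8.err <- mean((College.test$Outstate - lm.8.pred)^2)
cat("OLS test MSE: ", lm.8.err, "\nGAM test MSE: ", gam.1.err)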

10-D)

For which variables, if any, is there evidence of a non-linear relationship with the response?

Let’s take a look at the summary of the model:

summary(gam.1)
## 
## Call: gam(formula = Outstate ~ Private + s(Accept, 2) + s(Room.Board, 
##     2) + s(Personal, 2) + s(PhD, 2) + s(perc.alumni, 2) + s(Expend, 
##     2) + s(Grad.Rate, 2), data = College.train)
## Deviance Residuals:
##      Min       1Q   Median       3Q      Max 
## -6092.87 -1263.01   -44.62  1244.02  8611.64 
## 
## (Dispersion Parameter for gaussian family taken to be 3648208)
## 
##     Null Deviance: 10354544612 on 620 degrees of freedom
## Residual Deviance: 2207166384 on 605.0001 degrees of freedom
## AIC: 11163.26 
## 
## Number of Local Scoring Iterations: NA 
## 
## Anova for Parametric Effects
##                    Df     Sum Sq    Mean Sq F value    Pr(>F)    
## Private             1 2861142601 2861142601 784.260 < 2.2e-16 ***
## s(Accept, 2)        1  690143357  690143357 189.173 < 2.2e-16 ***
## s(Room.Board, 2)    1 1583140176 1583140176 433.950 < 2.2e-16 ***
## s(Personal, 2)      1  121916453  121916453  33.418 1.191e-08 ***
## s(PhD, 2)           1  638747450  638747450 175.085 < 2.2e-16 ***
## s(perc.alumni, 2)   1  479664895  479664895 131.480 < 2.2e-16 ***
## s(Expend, 2)        1  621717460  621717460 170.417 < 2.2e-16 ***
## s(Grad.Rate, 2)     1   61078099   61078099  16.742 4.863e-05 ***
## Residuals         605 2207166384    3648208                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Anova for Nonparametric Effects
##                   Npar Df Npar F     Pr(F)    
## (Intercept)                                   
## Private                                       
## s(Accept, 2)            1  9.436  0.002224 ** 
## s(Room.Board, 2)        1  2.864  0.091112 .  
## s(Personal, 2)          1  7.169  0.007619 ** 
## s(PhD, 2)               1  3.498  0.061923 .  
## s(perc.alumni, 2)       1  0.366  0.545502    
## s(Expend, 2)            1 68.880 6.661e-16 ***
## s(Grad.Rate, 2)         1  2.256  0.133644    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Based on the “Anova for Nonparametric Effects” section, we look for terms whose p-value falls below the 0.05 threshold. Three variables show statistically significant evidence of a non-linear relationship with the response (Outstate): Accept, Personal and Expend.
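As a follow-up sketch (not part of the original exercise), one could give those three terms more flexible smooths and test whether the extra flexibility is worthwhile:

# Allow 4 degrees of freedom for the three terms with significant non-linear effects
gam.2 <- gam(Outstate ~ Private + s(Accept, 4) + s(Room.Board, 2) + s(Personal, 4) +
               s(PhD, 2) + s(perc.alumni, 2) + s(Expend, 4) + s(Grad.Rate, 2),
             data = College.train)
anova(gam.1, gam.2, test = "F")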