In this exercise, you will further analyze the Wage data set considered throughout this chapter.
(a) Perform polynomial regression to predict wage using age.
set.seed(1)
library(ISLR)
attach(Wage)
library(boot)
all.deltas = rep(NA, 10)
# 10-fold CV error for polynomial degrees 1 through 10
for (i in 1:10) {
  glm.fit = glm(wage~poly(age, i), data=Wage)
  all.deltas[i] = cv.glm(Wage, glm.fit, K=10)$delta[2]
}
plot(1:10, all.deltas, xlab="Degree", ylab="CV error", type="l", pch=20, lwd=2, ylim=c(1590, 1700))
min.point = min(all.deltas)
sd.points = sd(all.deltas)
abline(h=min.point + 0.2 * sd.points, col="red", lty="dashed")
abline(h=min.point - 0.2 * sd.points, col="red", lty="dashed")
legend("topright", "0.2-standard deviation lines", lty="dashed", col="red")
To determine the optimal degree d, K-fold cross-validation was used with K=10. Based on the plot, the optimal degree is d=3; this value has the lowest CV error.
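As a quick programmatic check (a minimal sketch using the all.deltas vector computed above), the degree with the smallest estimated CV error can be extracted directly; it should agree with the value read off the plot.
# Degree whose 10-fold CV error estimate is smallest
which.min(all.deltas)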
What degree was chosen, and how does this compare to the results of hypothesis testing using ANOVA?
wage.fit1=lm(wage~ poly(age,1), data=Wage)
wage.fit2=lm(wage~ poly(age,2), data=Wage)
wage.fit3=lm(wage~ poly(age,3), data=Wage)
wage.fit4=lm(wage~ poly(age,4), data=Wage)
wage.fit5=lm(wage~ poly(age,5), data=Wage)
wage.fit6=lm(wage~ poly(age,6), data=Wage)
wage.fit7=lm(wage~ poly(age,7), data=Wage)
wage.fit8=lm(wage~ poly(age,8), data=Wage)
wage.fit9=lm(wage~ poly(age,9), data=Wage)
wage.fit10=lm(wage~ poly(age,10), data=Wage)
anova(wage.fit1,wage.fit2,wage.fit3,wage.fit4,wage.fit5,wage.fit6,wage.fit7,wage.fit8,wage.fit9,wage.fit10)
## Analysis of Variance Table
##
## Model 1: wage ~ poly(age, 1)
## Model 2: wage ~ poly(age, 2)
## Model 3: wage ~ poly(age, 3)
## Model 4: wage ~ poly(age, 4)
## Model 5: wage ~ poly(age, 5)
## Model 6: wage ~ poly(age, 6)
## Model 7: wage ~ poly(age, 7)
## Model 8: wage ~ poly(age, 8)
## Model 9: wage ~ poly(age, 9)
## Model 10: wage ~ poly(age, 10)
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 2998 5022216
## 2 2997 4793430 1 228786 143.7638 < 2.2e-16 ***
## 3 2996 4777674 1 15756 9.9005 0.001669 **
## 4 2995 4771604 1 6070 3.8143 0.050909 .
## 5 2994 4770322 1 1283 0.8059 0.369398
## 6 2993 4766389 1 3932 2.4709 0.116074
## 7 2992 4763834 1 2555 1.6057 0.205199
## 8 2991 4763707 1 127 0.0796 0.777865
## 9 2990 4756703 1 7004 4.4014 0.035994 *
## 10 2989 4756701 1 3 0.0017 0.967529
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The ANOVA results agree with the K-fold cross-validation: polynomial terms of degree higher than 3 are statistically insignificant at a significance level of \(\alpha\) = 0.01.
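Because poly() generates orthogonal polynomials, the same conclusions can also be read off the coefficient t-tests of a single degree-10 fit (a minimal sketch; each F statistic above equals the square of the corresponding t-statistic).
# Coefficient t-tests of one degree-10 fit reproduce the sequential ANOVA p-values
coef(summary(lm(wage~poly(age, 10), data=Wage)))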
Make a plot of the resulting polynomial fit to the data.
plot(wage~age, data=Wage, col="grey")
agelims=range(age)
age.grid=seq(from=agelims[1], to=agelims[2])
wage.lm=lm(wage~poly(age,3), data=Wage)
lm.pred = predict(wage.lm, data.frame(age=age.grid))
lines(age.grid, lm.pred, col="blue", lwd=2)
(b) Fit a step function to predict wage using age, and perform crossvalidation to choose the optimal number of cuts. Make a plot of the fit obtained.
all.cvs = rep(NA, 10)
# 10-fold CV error for step functions with 2 through 10 cuts
for (i in 2:10) {
  Wage$age.cut = cut(Wage$age, i)
  lm.fit = glm(wage~age.cut, data=Wage)
  all.cvs[i] = cv.glm(Wage, lm.fit, K=10)$delta[2]
}
plot(2:10, all.cvs[-1], xlab="Number of cuts", ylab="CV error", type="l", pch=20, lwd=2)
The K-fold cross-validation plot shows that 8 cuts result in the minimum CV error.
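The number of cuts minimizing the estimated CV error can also be extracted directly (a minimal sketch; all.cvs[1] is NA because only 2 to 10 cuts were evaluated, and which.min ignores it).
# Number of cuts with the smallest estimated CV error
which.min(all.cvs)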
wage.lm2 = glm(wage~cut(age, 8), data=Wage)
agelims=range(age)
age.grid=seq(from=agelims[1], to=agelims[2])
lm.pred = predict(wage.lm2, data.frame(age=age.grid))
plot(wage~age,data=Wage,col="grey")
lines(age.grid, lm.pred, col="yellow", lwd=2)
detach(Wage)
This question relates to the College data set.
(a) Split the data into a training set and a test set. Using out-of-state tuition as the response and the other variables as the predictors, perform forward stepwise selection on the training set in order to identify a satisfactory model that uses just a subset of the predictors.
library(ISLR)
library(leaps)
set.seed(1)
attach(College)
#Splitting Dataset
train=sample(length(Outstate), length(Outstate)/2)
test=-train
ctrain=College[train,]
ctest=College[test,]
#Plots CP, AdjR2, and BIC
reg.fit=regsubsets(Outstate~., data=ctrain, nvmax=17,method='forward')
reg.summary<-summary(reg.fit)
par(mfrow=c(1,3))
plot(reg.summary$cp, xlab='Number of Variables',ylab = 'Cp', type='l')
min.cp=min(reg.summary$cp)
std.cp=sd(reg.summary$cp)
abline(h=min.cp+0.2*std.cp, col='red', lty=2)
plot(reg.summary$bic, xlab='Number of Variables',ylab = 'BIC', type='l')
min.bic=min(reg.summary$bic)
std.bic=sd(reg.summary$bic)
abline(h=min.bic+0.2*std.bic, col='red', lty=2)
plot(reg.summary$adjr2, xlab='Number of Variables',ylab = 'Adj R2', type='l')
max.adjr2=max(reg.summary$adjr2)
std.adjr2=sd(reg.summary$adjr2)
abline(h=max.adjr2-0.2*std.adjr2, col='red', lty=2)
#Coefficients for the selected model
#Based on the Cp, BIC, and adjusted R2 plots, a model with about 6 predictors is chosen
coefi=coef(reg.fit, id=6)
names(coefi)
## [1] "(Intercept)" "PrivateYes" "Room.Board" "Terminal" "perc.alumni"
## [6] "Expend" "Grad.Rate"
As shown above, the selected predictors are Private, Room.Board, Terminal, perc.alumni, Expend, and Grad.Rate; these six variables are used to build the prediction model.
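As a cross-check (a minimal sketch), the regsubsets plot method can display which predictors enter the model at each size, ranked here by BIC.
#Predictors included at each model size, ordered by BIC
plot(reg.fit, scale='bic')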
(b) Fit a GAM on the training data, using out-of-state tuition as the response and the features selected in the previous step as the predictors. Plot the results, and explain your findings.
library(gam)
## Warning: package 'gam' was built under R version 3.5.3
## Loading required package: splines
## Loading required package: foreach
## Warning: package 'foreach' was built under R version 3.5.3
## Loaded gam 1.16
#Smoothing df for each term chosen to approximate the GCV-selected df from smooth.spline (computed below); note that PhD is used here in place of Terminal
gam.fit<-gam(Outstate~Private+s(Room.Board,df=8)+s(PhD,df=5)+s(perc.alumni,df=2)+s(Expend, df=10)+s(Grad.Rate, df=7), data=ctrain)
plot(Room.Board, Outstate, col='darkgrey')
fit=smooth.spline(Room.Board, Outstate, cv=FALSE)
fit$df
## [1] 8.804822
fit=smooth.spline(PhD, Outstate, cv=FALSE)
fit$df
## [1] 5.028726
fit=smooth.spline(perc.alumni, Outstate, cv=FALSE)
fit$df
## [1] 2.000204
fit=smooth.spline(Expend, Outstate, cv=FALSE)
fit$df
## [1] 10.22874
fit=smooth.spline(Grad.Rate, Outstate, cv=FALSE)
fit$df
## [1] 7.166701
par(mfrow=c(2,3))
plot(gam.fit, se=T, col="red")
(c) Evaluate the model obtained on the test set, and explain the results obtained.
preds2=predict(gam.fit,ctest)
gam.err=mean((ctest$Outstate-preds2)^2)
gam.err
## [1] 3944784
gam.tss = mean((ctest$Outstate - mean(ctest$Outstate))^2)
test.r2 = 1 - gam.err/gam.tss
test.r2
## [1] 0.7574351
The test MSE for this model is 3,944,784 and the test \(R^2\) for the six-variable model is approximately 0.757. This means that roughly 76% of the variance in out-of-state tuition is explained by the selected predictors.
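For comparison, a plain linear model with the same predictors used in gam.fit could be scored on the test set in the same way (a sketch only; ols.fit, ols.pred, and ols.err are illustrative names, and the resulting numbers are not shown here).
#Hypothetical comparison: ordinary least squares with the same predictors, evaluated on the test set
ols.fit = lm(Outstate~Private+Room.Board+PhD+perc.alumni+Expend+Grad.Rate, data=ctrain)
ols.pred = predict(ols.fit, ctest)
ols.err = mean((ctest$Outstate - ols.pred)^2)
1 - ols.err/gam.tss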
(d) For which variables, if any, is there evidence of a non-linear relationship with the response?
summary(gam.fit)
##
## Call: gam(formula = Outstate ~ Private + s(Room.Board, df = 8) + s(PhD,
## df = 5) + s(perc.alumni, df = 2) + s(Expend, df = 10) + s(Grad.Rate,
## df = 7), data = ctrain)
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -5192.43 -1139.59 47.77 1095.82 6606.40
##
## (Dispersion Parameter for gaussian family taken to be 3195461)
##
## Null Deviance: 6221998532 on 387 degrees of freedom
## Residual Deviance: 1131190964 on 353.9993 degrees of freedom
## AIC: 6946.684
##
## Number of Local Scoring Iterations: 2
##
## Anova for Parametric Effects
## Df Sum Sq Mean Sq F value Pr(>F)
## Private 1 1726646042 1726646042 540.343 < 2.2e-16 ***
## s(Room.Board, df = 8) 1 1212684616 1212684616 379.502 < 2.2e-16 ***
## s(PhD, df = 5) 1 376119840 376119840 117.704 < 2.2e-16 ***
## s(perc.alumni, df = 2) 1 311440781 311440781 97.463 < 2.2e-16 ***
## s(Expend, df = 10) 1 352370023 352370023 110.272 < 2.2e-16 ***
## s(Grad.Rate, df = 7) 1 58755160 58755160 18.387 2.328e-05 ***
## Residuals 354 1131190964 3195461
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Anova for Nonparametric Effects
## Npar Df Npar F Pr(F)
## (Intercept)
## Private
## s(Room.Board, df = 8) 7 2.0001 0.05433 .
## s(PhD, df = 5) 4 2.6990 0.03060 *
## s(perc.alumni, df = 2) 1 2.2723 0.13260
## s(Expend, df = 10) 9 7.9894 8.815e-11 ***
## s(Grad.Rate, df = 7) 6 1.5566 0.15894
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Based on the ANOVA for nonparametric effects, there is strong evidence of a non-linear relationship between Expend and out-of-state tuition (p < 0.001), moderate evidence for PhD (p ≈ 0.031), and only marginal evidence for Room.Board (p ≈ 0.054); perc.alumni and Grad.Rate show no significant evidence of non-linearity.
detach(College)
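One way to probe the non-linearity of a single term more formally is to compare the fitted GAM against a version with that term entered linearly (a minimal sketch for Expend; gam.lin is an illustrative name, and the code uses the ctrain data frame directly so it does not depend on College being attached).
#Compare a GAM with a linear Expend term against the fitted GAM via an F test
gam.lin = gam(Outstate~Private+s(Room.Board,df=8)+s(PhD,df=5)+s(perc.alumni,df=2)+Expend+s(Grad.Rate, df=7), data=ctrain)
anova(gam.lin, gam.fit, test='F')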