Problem #7.9.6
The CV plot, with lines marking one standard deviation around each error estimate, shows that d=3 is the smallest degree achieving (essentially) the minimum cross-validation error.
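A minimal sketch of the cross-validation step, assuming the Wage data from the ISLR package; note that `cv.glm` from the boot package returns the CV error but not per-fold standard deviations, so the one-SD bars on the actual plot would need to be computed separately.

```r
library(ISLR)
library(boot)

# 10-fold CV error for polynomial degrees 1 through 10
set.seed(1)
cv.errors <- rep(NA, 10)
for (d in 1:10) {
  fit <- glm(wage ~ poly(age, d), data = Wage)
  cv.errors[d] <- cv.glm(Wage, fit, K = 10)$delta[1]
}
plot(1:10, cv.errors, type = "b", xlab = "Degree", ylab = "CV error")
```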
I now confirm the choice of degree using ANOVA on the nested polynomial fits.
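The nested fits can be compared in a single `anova()` call; a sketch of the code that produces the table below:

```r
# Fit degrees 1..10 and run the sequential F-tests
fits <- lapply(1:10, function(d) lm(wage ~ poly(age, d), data = Wage))
do.call(anova, fits)
```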
## Analysis of Variance Table
##
## Model 1: wage ~ poly(age, 1)
## Model 2: wage ~ poly(age, 2)
## Model 3: wage ~ poly(age, 3)
## Model 4: wage ~ poly(age, 4)
## Model 5: wage ~ poly(age, 5)
## Model 6: wage ~ poly(age, 6)
## Model 7: wage ~ poly(age, 7)
## Model 8: wage ~ poly(age, 8)
## Model 9: wage ~ poly(age, 9)
## Model 10: wage ~ poly(age, 10)
## Res.Df RSS Df Sum of Sq F Pr(>F)
## 1 2998 5022216
## 2 2997 4793430 1 228786 143.7638 < 2.2e-16 ***
## 3 2996 4777674 1 15756 9.9005 0.001669 **
## 4 2995 4771604 1 6070 3.8143 0.050909 .
## 5 2994 4770322 1 1283 0.8059 0.369398
## 6 2993 4766389 1 3932 2.4709 0.116074
## 7 2992 4763834 1 2555 1.6057 0.205199
## 8 2991 4763707 1 127 0.0796 0.777865
## 9 2990 4756703 1 7004 4.4014 0.035994 *
## 10 2989 4756701 1 3 0.0017 0.967529
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The ANOVA shows that all polynomial terms above degree 3 are insignificant at the 1% significance level, with the exception of the degree-9 term.
Both the ANOVA and cross-validation agree that the optimal polynomial degree is d=3.
Below is a plot of the degree-3 polynomial fit overlaid on the data.
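A sketch of the fit and plot, assuming the same Wage data as above:

```r
# Fit the degree-3 polynomial on all the data and overlay the fitted curve
fit3 <- lm(wage ~ poly(age, 3), data = Wage)
age.grid <- seq(min(Wage$age), max(Wage$age))
preds <- predict(fit3, newdata = list(age = age.grid))
plot(Wage$age, Wage$wage, col = "darkgrey", xlab = "age", ylab = "wage")
lines(age.grid, preds, col = "blue", lwd = 2)
```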
Cross-validation shows that the test error is minimized at K=8 cuts.
I can now fit a step function with 8 cuts on the full data and plot it.
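A sketch of both steps; note that `cut()` is applied to the full data before cross-validation, as in the standard treatment of this exercise:

```r
# CV error for step functions with 2 to 10 cuts
set.seed(1)
cv.errors <- rep(NA, 9)
for (k in 2:10) {
  Wage$age.cut <- cut(Wage$age, k)
  fit <- glm(wage ~ age.cut, data = Wage)
  cv.errors[k - 1] <- cv.glm(Wage, fit, K = 10)$delta[1]
}
which.min(cv.errors) + 1  # number of cuts minimizing the CV error

# Fit the 8-cut step function on all the data and plot it
fit.step <- lm(wage ~ cut(age, 8), data = Wage)
age.grid <- seq(min(Wage$age), max(Wage$age))
preds <- predict(fit.step, newdata = data.frame(age = age.grid))
plot(Wage$age, Wage$wage, col = "darkgrey", xlab = "age", ylab = "wage")
lines(age.grid, preds, col = "red", lwd = 2)
```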
Problem #7.9.10
Cp, BIC, and adjusted R^2 all show that 6 is the smallest subset size whose score is within 0.2 standard deviations of the optimum. I therefore pick 6 as the best subset size and extract the corresponding 6 variables (a sketch of the selection step follows the output below).
## [1] "(Intercept)" "PrivateYes" "Room.Board" "PhD" "perc.alumni"
## [6] "Expend" "Grad.Rate"
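A sketch of the selection step, assuming `College.train` is a training split of the ISLR College data (the split itself is not shown here):

```r
library(ISLR)
library(leaps)

# Forward stepwise selection over all 17 predictors of Outstate
fit.fwd <- regsubsets(Outstate ~ ., data = College.train,
                      nvmax = 17, method = "forward")
reg.summary <- summary(fit.fwd)

# Smallest size whose Cp is within 0.2 sd of the minimum
# (same idea applies to BIC and adjusted R-squared)
which(reg.summary$cp < min(reg.summary$cp) + 0.2 * sd(reg.summary$cp))[1]

# Variable names of the best 6-predictor model
names(coef(fit.fwd, id = 6))
```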
These plots show smoothing splines fit by backfitting, the iterative procedure GAMs use to estimate each smooth term. This allows us to model non-linear relationships that standard linear regression would miss, and to examine the effect of each predictor on the response individually while holding all other variables fixed, which is what these partial-effect plots display.
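The fit that produces these plots and the summary below (the formula is taken from the `gam` call shown in the output):

```r
library(gam)

# Smoothing-spline GAM on the training data; s() terms are fit by backfitting
gam.fit <- gam(Outstate ~ Private + s(Room.Board, df = 2) + s(PhD, df = 2) +
                 s(perc.alumni, df = 2) + s(Expend, df = 5) +
                 s(Grad.Rate, df = 2), data = College.train)

# One partial-effect plot per term, with standard-error bands
par(mfrow = c(2, 3))
plot(gam.fit, se = TRUE, col = "blue")
```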
## [1] 3745460
## [1] 0.7696916
Using a GAM with the 6 selected predictors, I find a test R-squared of 0.77. This is a small improvement over the test R-squared of 0.74 obtained with OLS.
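A sketch of the test-set evaluation, assuming `College.test` is the held-out split and that the first number above is the test MSE:

```r
gam.pred <- predict(gam.fit, College.test)
gam.mse <- mean((College.test$Outstate - gam.pred)^2)  # first number above (assumed test MSE)
gam.tss <- mean((College.test$Outstate - mean(College.test$Outstate))^2)
1 - gam.mse / gam.tss                                  # test R-squared, ~0.77
```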
##
## Call: gam(formula = Outstate ~ Private + s(Room.Board, df = 2) + s(PhD,
## df = 2) + s(perc.alumni, df = 2) + s(Expend, df = 5) + s(Grad.Rate,
## df = 2), data = College.train)
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -4977.74 -1184.52 58.33 1220.04 7688.30
##
## (Dispersion Parameter for gaussian family taken to be 3300711)
##
## Null Deviance: 6221998532 on 387 degrees of freedom
## Residual Deviance: 1231165118 on 373 degrees of freedom
## AIC: 6941.542
##
## Number of Local Scoring Iterations: 2
##
## Anova for Parametric Effects
## Df Sum Sq Mean Sq F value Pr(>F)
## Private 1 1779433688 1779433688 539.106 < 2.2e-16 ***
## s(Room.Board, df = 2) 1 1221825562 1221825562 370.171 < 2.2e-16 ***
## s(PhD, df = 2) 1 382472137 382472137 115.876 < 2.2e-16 ***
## s(perc.alumni, df = 2) 1 328493313 328493313 99.522 < 2.2e-16 ***
## s(Expend, df = 5) 1 416585875 416585875 126.211 < 2.2e-16 ***
## s(Grad.Rate, df = 2) 1 55284580 55284580 16.749 5.232e-05 ***
## Residuals 373 1231165118 3300711
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Anova for Nonparametric Effects
## Npar Df Npar F Pr(F)
## (Intercept)
## Private
## s(Room.Board, df = 2) 1 3.5562 0.06010 .
## s(PhD, df = 2) 1 4.3421 0.03786 *
## s(perc.alumni, df = 2) 1 1.9158 0.16715
## s(Expend, df = 5) 4 16.8636 1.016e-12 ***
## s(Grad.Rate, df = 2) 1 3.7208 0.05450 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The ANOVA for nonparametric effects shows strong evidence of a non-linear relationship between Expend and the response; it also shows weaker evidence of non-linearity for PhD and Grad.Rate.
Problem #8.3.3