Chapter 08 (page 332): 3, 8, 9
library(ISLR)
library(tree)
library(rpart)
library(caret)
## Loading required package: lattice
## Loading required package: ggplot2
library(randomForest)
## randomForest 4.6-14
## Type rfNews() to see new features/changes/bug fixes.
##
## Attaching package: 'randomForest'
## The following object is masked from 'package:ggplot2':
##
## margin
3. Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of p̂_m1. The x-axis should display p̂_m1, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy. Hint: In a setting with two classes, p̂_m1 = 1 − p̂_m2. You could make this plot by hand, but it will be much easier to make in R.
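With two classes, write p = p̂_m1, so that p̂_m2 = 1 − p. The three measures from the chapter then reduce to:

Gini index: G = p(1 − p) + (1 − p)p = 2p(1 − p)
Classification error: E = 1 − max(p, 1 − p)
Entropy: D = −p log(p) − (1 − p) log(1 − p)

These are exactly the quantities computed and plotted below.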
p = seq(0, 1, 0.001)
gini.index = 2 * p * (1 - p)
class.error = 1 - pmax(p, 1 - p)
# 0 * log(0) evaluates to NaN in R, so the entropy curve is undefined at the
# endpoints p = 0 and p = 1; matplot() simply skips those two points
cross.entropy = -(p * log(p) + (1 - p) * log(1 - p))
matplot(p, cbind(gini.index, class.error, cross.entropy), type = "l", lty = 1,
        col = c("green", "blue", "orange"), xlab = "p_m1", ylab = "Value")
legend("topright", legend = c("Gini index", "Classification error", "Entropy"),
       col = c("green", "blue", "orange"), lty = 1)
8. In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.
(a) Split the data set into a training set and a test set.
set.seed(1)
train = sample(1:nrow(Carseats), nrow(Carseats)/2)
strain = Carseats[train, ]
stest = Carseats[-train, ]
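Since caret is already loaded, an equivalent split could also be drawn with createDataPartition(), which stratifies on the response. A minimal sketch, using the illustrative names strain2/stest2 (the answers below use the sample() split above):

set.seed(1)
idx = createDataPartition(Carseats$Sales, p = 0.5, list = FALSE)  # stratified indices
strain2 = Carseats[idx, ]   # illustrative alternative training half
stest2 = Carseats[-idx, ]   # illustrative alternative test half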
(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?
The tree uses ShelveLoc, Price, Age, Advertising, CompPrice, and US, so shelving quality and price are the main drivers of predicted sales. The test MSE is 4.922039.
tree.seats = tree(Sales ~ ., data = strain)
summary(tree.seats)
##
## Regression tree:
## tree(formula = Sales ~ ., data = strain)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "CompPrice"
## [6] "US"
## Number of terminal nodes: 18
## Residual mean deviance: 2.167 = 394.3 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -3.88200 -0.88200 -0.08712 0.00000 0.89590 4.09900
plot(tree.seats)
text(tree.seats, pretty = 0)
treeseat.pred = predict(tree.seats, newdata = stest)
mean((treeseat.pred - stest$Sales)^2)
## [1] 4.922039
(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?
Yes, slightly: pruning to 10 terminal nodes reduced the test MSE from 4.922 to 4.918.
set.seed(1)
cv.seats = cv.tree(tree.seats)
plot(cv.seats$size, cv.seats$dev, type = "b")
prune.car = prune.tree(tree.seats, best = 10)
plot(prune.car)
text(prune.car,pretty=0)
treeseat.pred = predict(prune.car, newdata = stest)
mean((treeseat.pred - stest$Sales)^2)
## [1] 4.918134
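Rather than reading the best size off the plot, the size minimizing the cross-validated deviance can be selected programmatically; a small sketch with the illustrative names best.size and prune.auto:

best.size = cv.seats$size[which.min(cv.seats$dev)]  # size with lowest CV deviance
prune.auto = prune.tree(tree.seats, best = best.size)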
(d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.
The test MSE is 2.60. The variables that are most important are Price and ShelveLoc.
set.seed(1)
bag.seats = randomForest(Sales~., data = strain, mtry = 10, ntree = 551, importance = TRUE)
bagseat.pred = predict(bag.seats, newdata = stest)
mean((bagseat.pred - stest$Sales)^2)
## [1] 2.599099
importance(bag.seats)
## %IncMSE IncNodePurity
## CompPrice 26.18616309 170.781666
## Income 5.25063979 90.717958
## Advertising 13.25673204 97.498810
## Population -2.14346969 58.289311
## Price 60.58241525 503.478806
## ShelveLoc 50.77308639 380.258594
## Age 19.03720001 158.282846
## Education 1.24264920 44.834257
## Urban -0.08461165 9.883299
## US 4.71515903 17.907727
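The same two importance measures can be displayed graphically with randomForest's varImpPlot():

varImpPlot(bag.seats)  # dotcharts of %IncMSE and IncNodePurity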
(e) Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained.
The test MSE is 2.61, essentially the same as bagging, and the most important variables are still Price and ShelveLoc. Note that mtry = 10 considers all ten predictors at each split, so this fit is really bagging again; a genuine random forest would use m < 10 (the regression default is p/3). In general, a smaller m decorrelates the trees, which typically reduces the variance of the ensemble: the test MSE usually falls as m grows from 1, then flattens, and intermediate values of m often do as well as or better than full bagging (see the sketch after the importance output below).
set.seed(1)
rando.seats = randomForest(Sales~., data = strain, mtry = 10, importance = TRUE)
randseat.pred = predict(rando.seats, newdata = stest)
mean((randseat.pred - stest$Sales)^2)
## [1] 2.605253
importance(rando.seats)
## %IncMSE IncNodePurity
## CompPrice 24.8888481 170.182937
## Income 4.7121131 91.264880
## Advertising 12.7692401 97.164338
## Population -1.8074075 58.244596
## Price 56.3326252 502.903407
## ShelveLoc 48.8886689 380.032715
## Age 17.7275460 157.846774
## Education 0.5962186 44.598731
## Urban 0.1728373 9.822082
## US 4.2172102 18.073863
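To see the effect of m directly, one could refit the forest for each m = 1, ..., 10 and trace the test MSE; a sketch with the illustrative name mse.by.m (output not shown):

mse.by.m = sapply(1:10, function(m) {
  set.seed(1)
  fit = randomForest(Sales ~ ., data = strain, mtry = m)
  mean((predict(fit, newdata = stest) - stest$Sales)^2)
})
plot(1:10, mse.by.m, type = "b", xlab = "m (mtry)", ylab = "Test MSE")

Typically the test MSE falls sharply as m increases from 1 and then levels off, with intermediate values of m often matching or beating full bagging because the individual trees are less correlated.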
9. This problem involves the OJ data set which is part of the ISLR package.
(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(1)
train = sample(1:nrow(OJ), 800)
OJtrain = OJ[train, ]
OJtest = OJ[-train, ]
(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?
Five variables were used in constructing the tree (LoyalCH, PriceDiff, SpecialCH, ListPriceDiff, and PctDiscMM). The training error rate is 0.1588, and the tree has 9 terminal nodes.
tree.OJ = tree(Purchase ~ ., data = OJtrain)
summary(tree.OJ)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJtrain)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
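The reported misclassification rate can be verified directly from the fitted values; a quick check using the illustrative name train.pred:

train.pred = predict(tree.OJ, type = "class")  # fitted classes on the training set
mean(train.pred != OJtrain$Purchase)           # reproduces 127/800 = 0.1588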
plot(tree.OJ)
text(tree.OJ, pretty = 0)
(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.
Take terminal node 9, reached when 0.0356415 ≤ LoyalCH < 0.280875. It contains 118 observations with a deviance of 116.40; about 80.5% of them take the value MM and about 19.5% take the value CH, so the predicted class for this node is MM.
tree.OJ
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1073.00 CH ( 0.60625 0.39375 )
## 2) LoyalCH < 0.5036 365 441.60 MM ( 0.29315 0.70685 )
## 4) LoyalCH < 0.280875 177 140.50 MM ( 0.13559 0.86441 )
## 8) LoyalCH < 0.0356415 59 10.14 MM ( 0.01695 0.98305 ) *
## 9) LoyalCH > 0.0356415 118 116.40 MM ( 0.19492 0.80508 ) *
## 5) LoyalCH > 0.280875 188 258.00 MM ( 0.44149 0.55851 )
## 10) PriceDiff < 0.05 79 84.79 MM ( 0.22785 0.77215 )
## 20) SpecialCH < 0.5 64 51.98 MM ( 0.14062 0.85938 ) *
## 21) SpecialCH > 0.5 15 20.19 CH ( 0.60000 0.40000 ) *
## 11) PriceDiff > 0.05 109 147.00 CH ( 0.59633 0.40367 ) *
## 3) LoyalCH > 0.5036 435 337.90 CH ( 0.86897 0.13103 )
## 6) LoyalCH < 0.764572 174 201.00 CH ( 0.73563 0.26437 )
## 12) ListPriceDiff < 0.235 72 99.81 MM ( 0.50000 0.50000 )
## 24) PctDiscMM < 0.196196 55 73.14 CH ( 0.61818 0.38182 ) *
## 25) PctDiscMM > 0.196196 17 12.32 MM ( 0.11765 0.88235 ) *
## 13) ListPriceDiff > 0.235 102 65.43 CH ( 0.90196 0.09804 ) *
## 7) LoyalCH > 0.764572 261 91.20 CH ( 0.95785 0.04215 ) *
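As a check on the printout, a node's deviance is −2 Σ_k n_k log(p̂_k). For terminal node 9, the printed proportions imply 23 CH and 95 MM observations out of 118:

-2 * (23 * log(23/118) + 95 * log(95/118))  # approximately 116.4, as printed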
(d) Create a plot of the tree, and interpret the results.
LoyalCH is clearly the most important variable: it appears in the root split and in the next two levels of the tree. PriceDiff, SpecialCH, PctDiscMM, and ListPriceDiff appear in splits further down.
plot(tree.OJ)
text(tree.OJ, pretty = 0)
(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?
The test error rate is 0.1704 (46 of the 270 test observations are misclassified).
treeOJ.pred = predict(tree.OJ, newdata = OJtest, type = "class")
table(treeOJ.pred, OJtest$Purchase)
##
## treeOJ.pred CH MM
## CH 160 38
## MM 8 64
(38 + 8) / 270
## [1] 0.1703704
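The same rate can be computed without reading counts off the table:

mean(treeOJ.pred != OJtest$Purchase)  # fraction of test observations misclassified
## [1] 0.1703704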
(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.
OJcv = cv.tree(tree.OJ, FUN = prune.misclass)
OJcv
## $size
## [1] 9 8 7 4 2 1
##
## $dev
## [1] 150 150 149 158 172 315
##
## $k
## [1] -Inf 0.000000 3.000000 4.333333 10.500000 151.000000
##
## $method
## [1] "misclass"
##
## attr(,"class")
## [1] "prune" "tree.sequence"
(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.
plot(OJcv$size, OJcv$dev, type = "b", xlab = "Tree Size", ylab = "cv classification error rate")
(h) Which tree size corresponds to the lowest cross-validated classification error rate?
A tree size of 7 corresponds to the lowest cross-validated classification error rate (149 misclassified observations).
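This can be confirmed programmatically from the cv.tree() output:

OJcv$size[which.min(OJcv$dev)]  # size with the fewest CV misclassifications
## [1] 7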
(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
prune.OJ=prune.tree(tree.OJ,best=7)
(j) Compare the training error rates between the pruned and un-pruned trees. Which is higher?
The pruned tree has a higher training error rate (0.1625) than the unpruned tree (0.1588).
summary(tree.OJ)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJtrain)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
summary(prune.OJ)
##
## Classification tree:
## snip.tree(tree = tree.OJ, nodes = c(10L, 4L))
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "ListPriceDiff" "PctDiscMM"
## Number of terminal nodes: 7
## Residual mean deviance: 0.7748 = 614.4 / 793
## Misclassification error rate: 0.1625 = 130 / 800
(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?
The unpruned tree has the higher test error rate: 0.1704, versus 0.1630 for the pruned tree.
treeOJ.pred = predict(tree.OJ, newdata = OJtest, type = "class")
table(treeOJ.pred, OJtest$Purchase)
##
## treeOJ.pred CH MM
## CH 160 38
## MM 8 64
unprunedOJvalerr = (38 + 8) / 270
unprunedOJvalerr
## [1] 0.1703704
pruneOJ.pred = predict(prune.OJ, newdata = OJtest, type = "class")
table(pruneOJ.pred, OJtest$Purchase)
##
## pruneOJ.pred CH MM
## CH 160 36
## MM 8 66
prunedOJvalerr = (36 + 8) / 270
prunedOJvalerr
## [1] 0.162963
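For a compact side-by-side view, the two rates can also be computed directly from the predictions above:

c(unpruned = mean(treeOJ.pred != OJtest$Purchase),
  pruned = mean(pruneOJ.pred != OJtest$Purchase))
##  unpruned    pruned
## 0.1703704 0.1629630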