Consider the Gini index, classification error, and entropy in
a simple classification setting with two classes. Create a single plot
that displays each of these quantities as a function of
\(\hat{p}_{m1}\). The x-axis should display \(\hat{p}_{m1}\), ranging
from 0 to 1, and the y-axis should display the value of the Gini index,
classification error, and entropy.
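For two classes, writing \(\hat{p}\) for \(\hat{p}_{m1}\) (so that \(\hat{p}_{m2} = 1 - \hat{p}\)), the three impurity measures reduce to

\[
G = 2\hat{p}(1-\hat{p}), \qquad
D = -\hat{p}\log\hat{p} - (1-\hat{p})\log(1-\hat{p}), \qquad
E = 1 - \max(\hat{p},\, 1-\hat{p}),
\]

which are exactly the quantities computed in the code below.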
p<-seq(0,1,0.01)
gini_index<-2*p*(1-p)
entropy<--(p*log(p)+(1-p)*log(1-p))
entropy[is.nan(entropy)]<-0 # adopt the convention 0*log(0)=0 at the endpoints p=0 and p=1
class_error<-1-pmax(p,1-p)
matplot(p,cbind(gini_index,entropy,class_error),pch=c(15,17,19),ylab="Gini index, entropy, classification error",col=c("red","blue","green"),type="b")
legend('bottom',inset=0.01,legend=c('entropy','gini_index','class_error'),col=c("blue","red","green"),pch=c(17,15,19))
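Note that with the natural log, entropy attains its maximum of \(\log 2 \approx 0.693\) at \(\hat{p}_{m1} = 0.5\), while the Gini index and classification error both peak at 0.5; all three vanish at \(\hat{p}_{m1} = 0\) and \(\hat{p}_{m1} = 1\), where the node is pure.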
In the lab, a classification tree was applied to the
Carseats data set after converting Sales into
a qualitative response variable. Now we will seek to predict
Sales using regression trees and related approaches,
treating the response as a quantitative variable.
library(ISLR)
library(tree)
library(dplyr)
library(randomForest)
attach(Carseats)
str(Carseats)
## 'data.frame': 400 obs. of 11 variables:
## $ Sales : num 9.5 11.22 10.06 7.4 4.15 ...
## $ CompPrice : num 138 111 113 117 141 124 115 136 132 132 ...
## $ Income : num 73 48 35 100 64 113 105 81 110 113 ...
## $ Advertising: num 11 16 10 4 3 13 0 15 0 0 ...
## $ Population : num 276 260 269 466 340 501 45 425 108 131 ...
## $ Price : num 120 83 80 97 128 72 108 120 124 124 ...
## $ ShelveLoc : Factor w/ 3 levels "Bad","Good","Medium": 1 2 3 3 1 1 3 2 3 3 ...
## $ Age : num 42 65 59 55 38 78 71 67 76 76 ...
## $ Education : num 17 10 12 14 13 16 15 10 10 17 ...
## $ Urban : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 2 2 1 1 ...
## $ US : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 1 2 1 2 ...
(a) Split the data set into a training set and a test set.
set.seed(1)
train<-sample(1:nrow(Carseats),nrow(Carseats)/2)
Car_train<-Carseats[train, ]
Car_test<-Carseats[-train, ]
(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?
tree_carseats<-tree(Sales~.,Carseats,subset=train)
summary(tree_carseats)
##
## Regression tree:
## tree(formula = Sales ~ ., data = Carseats, subset = train)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "CompPrice"
## [6] "US"
## Number of terminal nodes: 18
## Residual mean deviance: 2.167 = 394.3 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -3.88200 -0.88200 -0.08712 0.00000 0.89590 4.09900
Below is a plot of the regression tree fit to the training set, which
has 18 terminal nodes. From the plot, we can see that shelving location
(ShelveLoc) is the most important indicator of
Sales, since it is used for the first split. Price
appears to be the second most important indicator: the tree splits on
whether Price is above or below 94.50 for car seats with bad
or medium shelf locations, and above or below 135.00 for car seats with
good shelf locations.
plot(tree_carseats)
text(tree_carseats,pretty=0)
Using the regression tree fit to the training set to predict on the test set yields a test MSE of 4.922; equivalently, the predictions are off by roughly \(\sqrt{4.922} \approx 2.2\) thousand unit sales on average.
pred_carseats<-predict(tree_carseats,newdata=Car_test)
mean((pred_carseats-Car_test$Sales)^2)
## [1] 4.922039
(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?
From the plot of cross-validation errors below, it appears that a tree with 14 terminal nodes achieves the minimum error while slightly reducing tree complexity. Since simpler models are generally preferred, and the plot shows that a tree with 10 terminal nodes attains a similar level of error, we prune to 10 nodes for a further reduction in complexity.
set.seed(1)
cv_carseats<-cv.tree(tree_carseats,FUN=prune.tree)
plot(cv_carseats$size,cv_carseats$dev,xlab='Size',ylab='CV Error',type="b")
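As a quick programmatic check, the size with the lowest cross-validated deviance can be read directly off the cv.tree() object (its $size and $dev components) rather than off the plot; per the plot above, this is 14.

cv_carseats$size[which.min(cv_carseats$dev)] # 14 per the plot above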
prune_carseats<-prune.tree(tree_carseats,best=10)
plot(prune_carseats)
text(prune_carseats,pretty=0)
Pruning the regression tree to 10 nodes yields a test MSE of 4.91813, a slight improvement on the test MSE of 4.922 from the more complex 18-node regression tree.
yhat<-predict(prune_carseats,newdata=Car_test) # predict() is deterministic, so no seed is needed
mean((yhat-Car_test$Sales)^2)
## [1] 4.918134
(d) Use the bagging approach in order to analyze this data.
What test MSE do you obtain? Use the importance() function
to determine which variables are most important.
Bagging is a special case of a random forest in which all of the
predictors are considered at each split. In our case this means
\(m=p=10\), so we set the
mtry argument equal to 10.
set.seed(1)
bag_carseats<-randomForest(Sales~.,data=Car_train,mtry=10,importance=TRUE)
bag_carseats
##
## Call:
## randomForest(formula = Sales ~ ., data = Car_train, mtry = 10, importance = TRUE)
## Type of random forest: regression
## Number of trees: 500
## No. of variables tried at each split: 10
##
## Mean of squared residuals: 2.889221
## % Var explained: 63.26
The test MSE yielded by the bagged regression trees is 2.60525, a substantial improvement over the test MSE of 4.91813 from the pruned 10-node regression tree. (The "Mean of squared residuals" of 2.889 reported above is the out-of-bag estimate of this error, computed for each observation using only the trees that did not sample it.)
yhat_bag<-predict(bag_carseats,newdata=Car_test)
mean((yhat_bag-Car_test$Sales)^2)
## [1] 2.605253
By applying the importance() function to our bagged
regression tree, we see that Price, ShelveLoc,
and CompPrice are the three most important indicators of
Sales.
importance(bag_carseats)
## %IncMSE IncNodePurity
## CompPrice 24.8888481 170.182937
## Income 4.7121131 91.264880
## Advertising 12.7692401 97.164338
## Population -1.8074075 58.244596
## Price 56.3326252 502.903407
## ShelveLoc 48.8886689 380.032715
## Age 17.7275460 157.846774
## Education 0.5962186 44.598731
## Urban 0.1728373 9.822082
## US 4.2172102 18.073863
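The same importance measures can also be visualized with varImpPlot() from the randomForest package, which draws a dotchart of the predictors sorted by each measure:

varImpPlot(bag_carseats) # sorted dotcharts of %IncMSE and IncNodePurity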
(e) Use random forests to analyze this data. What test MSE do
you obtain? Use the importance() function to determine
which variables are most important. Describe the effect of m, the number
of variables considered at each split, on the error rate
obtained.
When using random forests to build the regression trees we reduce
m, the number of variables considered at each split. By
default, for regression the randomForest() function uses
\(p/3\) variables; in our case this is
\(10/3 \approx 3\), so we use the
argument mtry=3.
The test MSE obtained is 2.96056, which is slightly worse than our result from bagging.
set.seed(1)
rf_carseats<-randomForest(Sales~.,data=Car_train,mtry=3,importance=TRUE)
yhat_rf<-predict(rf_carseats,newdata=Car_test)
mean((yhat_rf-Car_test$Sales)^2)
## [1] 2.960559
rf_carseats
##
## Call:
## randomForest(formula = Sales ~ ., data = Car_train, mtry = 3, importance = TRUE)
## Type of random forest: regression
## Number of trees: 500
## No. of variables tried at each split: 3
##
## Mean of squared residuals: 3.363781
## % Var explained: 57.22
By applying the importance() function to our random
forest, we see that Price, ShelveLoc, and
CompPrice are still the three most important indicators of
Sales.
importance(rf_carseats)
## %IncMSE IncNodePurity
## CompPrice 14.8840765 158.82956
## Income 4.3293950 125.64850
## Advertising 8.2215192 107.51700
## Population -0.9488134 97.06024
## Price 34.9793386 385.93142
## ShelveLoc 34.9248499 298.54210
## Age 14.3055912 178.42061
## Education 1.3117842 70.49202
## Urban -1.2680807 17.39986
## US 6.1139696 33.98963
Increasing the value of m to 5 yields a reduction in
the test MSE to 2.71417, but this still underperforms the bagged
regression trees. We also see that Price,
ShelveLoc, and CompPrice remain the three most
important indicators of Sales.
set.seed(1)
rf_carseats2<-randomForest(Sales~.,data=Car_train,mtry=5,importance=TRUE)
yhat_rf2<-predict(rf_carseats2,newdata=Car_test)
mean((yhat_rf2-Car_test$Sales)^2)
## [1] 2.714168
rf_carseats2
##
## Call:
## randomForest(formula = Sales ~ ., data = Car_train, mtry = 5, importance = TRUE)
## Type of random forest: regression
## Number of trees: 500
## No. of variables tried at each split: 5
##
## Mean of squared residuals: 3.060785
## % Var explained: 61.08
importance(rf_carseats2)
## %IncMSE IncNodePurity
## CompPrice 17.4126238 157.53631
## Income 2.9969399 110.40731
## Advertising 11.0485672 105.75049
## Population -1.5321044 80.73318
## Price 43.3572135 452.02367
## ShelveLoc 44.4474163 331.64508
## Age 14.5322339 176.64252
## Education 0.8237454 55.91141
## Urban -2.7805788 11.07321
## US 3.7773881 23.75322
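To describe the effect of m more systematically, the sketch below sweeps mtry over 1 through 10 on the same training/test split and records the test MSE at each value (with 500 trees per fit, this takes a moment to run). Based on the fits above, we would expect the error to fall as m grows toward p on this data.

set.seed(1)
m_values<-1:10
test_mse<-sapply(m_values,function(m){
  rf<-randomForest(Sales~.,data=Car_train,mtry=m) # one forest per candidate m
  mean((predict(rf,newdata=Car_test)-Car_test$Sales)^2) # test MSE for this m
})
plot(m_values,test_mse,type="b",xlab="m (mtry)",ylab="Test MSE")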
detach(Carseats)
This problem involves the OJ data set which is
part of the ISLR package.
library(ISLR)
attach(OJ)
str(OJ)
## 'data.frame': 1070 obs. of 18 variables:
## $ Purchase : Factor w/ 2 levels "CH","MM": 1 1 1 2 1 1 1 1 1 1 ...
## $ WeekofPurchase: num 237 239 245 227 228 230 232 234 235 238 ...
## $ StoreID : num 1 1 1 1 7 7 7 7 7 7 ...
## $ PriceCH : num 1.75 1.75 1.86 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
## $ PriceMM : num 1.99 1.99 2.09 1.69 1.69 1.99 1.99 1.99 1.99 1.99 ...
## $ DiscCH : num 0 0 0.17 0 0 0 0 0 0 0 ...
## $ DiscMM : num 0 0.3 0 0 0 0 0.4 0.4 0.4 0.4 ...
## $ SpecialCH : num 0 0 0 0 0 0 1 1 0 0 ...
## $ SpecialMM : num 0 1 0 0 0 1 1 0 0 0 ...
## $ LoyalCH : num 0.5 0.6 0.68 0.4 0.957 ...
## $ SalePriceMM : num 1.99 1.69 2.09 1.69 1.69 1.99 1.59 1.59 1.59 1.59 ...
## $ SalePriceCH : num 1.75 1.75 1.69 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
## $ PriceDiff : num 0.24 -0.06 0.4 0 0 0.3 -0.1 -0.16 -0.16 -0.16 ...
## $ Store7 : Factor w/ 2 levels "No","Yes": 1 1 1 1 2 2 2 2 2 2 ...
## $ PctDiscMM : num 0 0.151 0 0 0 ...
## $ PctDiscCH : num 0 0 0.0914 0 0 ...
## $ ListPriceDiff : num 0.24 0.24 0.23 0 0 0.3 0.3 0.24 0.24 0.24 ...
## $ STORE : num 1 1 1 1 0 0 0 0 0 0 ...
(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(1)
train<-sample(nrow(OJ),800)
OJ_train<-OJ[train, ]
OJ_test<-OJ[-train, ]
(b) Fit a tree to the training data, with
Purchase as the response and the other variables as
predictors. Use the summary() function to produce summary
statistics about the tree, and describe the results obtained. What is
the training error rate? How many terminal nodes does the tree
have?
The tree uses the following five variables as indicators of whether
a customer purchased Citrus Hill or Minute Maid orange juice:
LoyalCH, PriceDiff, SpecialCH,
ListPriceDiff, and PctDiscMM. The tree has
nine terminal nodes and yields a training error rate of 0.1588.
oj_tree<-tree(Purchase~.,data=OJ_train)
summary(oj_tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJ_train)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.
The output below provides detailed information for each node in the
tree fit to the training data. Looking closer at node number eight, we
can see that the splitting variable leading to this node is
LoyalCH, a measure of customer brand loyalty for Citrus Hill
orange juice, with a splitting value of 0.0356. The asterisk indicates
that this is a terminal node. At this point in the tree, if customer
brand loyalty for Citrus Hill is less than 0.0356, the prediction for
Purchase is Minute Maid (MM); node nine shows that the
prediction is also MM on the other side of this split. There are 59
observations in the branch at node eight, with a deviance of 10.14;
smaller deviance values indicate a purer node. The fraction of
observations in this branch that take on the value CH is
1.695%, and the fraction that take on the value MM is
98.305%.
oj_tree
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1073.00 CH ( 0.60625 0.39375 )
## 2) LoyalCH < 0.5036 365 441.60 MM ( 0.29315 0.70685 )
## 4) LoyalCH < 0.280875 177 140.50 MM ( 0.13559 0.86441 )
## 8) LoyalCH < 0.0356415 59 10.14 MM ( 0.01695 0.98305 ) *
## 9) LoyalCH > 0.0356415 118 116.40 MM ( 0.19492 0.80508 ) *
## 5) LoyalCH > 0.280875 188 258.00 MM ( 0.44149 0.55851 )
## 10) PriceDiff < 0.05 79 84.79 MM ( 0.22785 0.77215 )
## 20) SpecialCH < 0.5 64 51.98 MM ( 0.14062 0.85938 ) *
## 21) SpecialCH > 0.5 15 20.19 CH ( 0.60000 0.40000 ) *
## 11) PriceDiff > 0.05 109 147.00 CH ( 0.59633 0.40367 ) *
## 3) LoyalCH > 0.5036 435 337.90 CH ( 0.86897 0.13103 )
## 6) LoyalCH < 0.764572 174 201.00 CH ( 0.73563 0.26437 )
## 12) ListPriceDiff < 0.235 72 99.81 MM ( 0.50000 0.50000 )
## 24) PctDiscMM < 0.196196 55 73.14 CH ( 0.61818 0.38182 ) *
## 25) PctDiscMM > 0.196196 17 12.32 MM ( 0.11765 0.88235 ) *
## 13) ListPriceDiff > 0.235 102 65.43 CH ( 0.90196 0.09804 ) *
## 7) LoyalCH > 0.764572 261 91.20 CH ( 0.95785 0.04215 ) *
(d) Create a plot of the tree, and interpret the results.
The plot below suggests that customer brand loyalty for Citrus Hill
is the most important indicator of Purchase, given that the
first branch of the tree splits on whether the
LoyalCH variable is less than or greater than 0.5036.
LoyalCH is also used to split the tree at the next level:
if LoyalCH is less than 0.2809, the tree's prediction for
Purchase is Minute Maid, and if LoyalCH is greater
than 0.7646, the prediction is Citrus Hill. The PriceDiff
variable, the sale price of Minute Maid less the sale price of Citrus
Hill, appears to be the second most important indicator of
Purchase.
plot(oj_tree)
text(oj_tree,pretty = 0)
(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?
From the confusion matrix below, we can calculate the test error rate to be \((38+8)/270 = 0.17037\), or 17.037%.
oj_pred<-predict(oj_tree,OJ_test,type="class")
table(OJ_test$Purchase,oj_pred)
## oj_pred
## CH MM
## CH 160 8
## MM 38 64
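Equivalently, the test error rate can be computed directly as the fraction of mismatched predictions:

mean(oj_pred!=OJ_test$Purchase) # 46/270 = 0.17037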
(f) Apply the cv.tree() function to the training
set in order to determine the optimal tree size.
set.seed(1)
cv_oj<-cv.tree(oj_tree,FUN=prune.tree) # FUN=prune.misclass would guide pruning by misclassification error rather than deviance
(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.
plot(cv_oj$size,cv_oj$dev,type="b",xlab="Tree Size",ylab="CV Error (deviance)")
(h) Which tree size corresponds to the lowest cross-validated classification error rate?
From the plot produced above, it appears that the lowest cross-validated error is achieved at a tree size of five. While larger trees may yield a slightly smaller error, a simpler model is usually preferred, and there does not seem to be a meaningful decrease in error beyond a tree size of five.
(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
prune_oj<-prune.tree(oj_tree,best=5)
plot(prune_oj)
text(prune_oj,pretty=0)
(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?
From the summaries below, we can see that the training error rate of the pruned tree (0.205) is higher than the training error rate of the unpruned tree (0.1588).
summary(oj_tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJ_train)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
summary(prune_oj)
##
## Classification tree:
## snip.tree(tree = oj_tree, nodes = c(4L, 12L, 5L))
## Variables actually used in tree construction:
## [1] "LoyalCH" "ListPriceDiff"
## Number of terminal nodes: 5
## Residual mean deviance: 0.8239 = 655 / 795
## Misclassification error rate: 0.205 = 164 / 800
(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?
From the confusion matrix below, we can calculate the test error rate of the pruned tree to be \((18+33)/270 = 0.18889\), or 18.889%. In this case the test error rate of the pruned tree is higher than the 17.037% error rate of the unpruned tree.
prune_pred<-predict(prune_oj,OJ_test,type="class") # predict() is deterministic, so no seed is needed
table(OJ_test$Purchase,prune_pred)
## prune_pred
## CH MM
## CH 135 33
## MM 18 84
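As before, the pruned tree's test error rate can also be computed directly:

mean(prune_pred!=OJ_test$Purchase) # 51/270 = 0.18889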