Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of \(\hat{p}_{m1}\). The x-axis should display \(\hat{p}_{m1}\), ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.
Hint: In a setting with two classes, \(\hat{p}_{m1}= 1-\hat{p}_{m2}\). You could make this plot by hand, but it will be much easier to make in R.
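For two classes, writing \(\hat{p}_{m1}\) for the proportion of class-1 observations in node \(m\), the three measures reduce to
\[G = 2\,\hat{p}_{m1}(1-\hat{p}_{m1}), \qquad E = 1 - \max(\hat{p}_{m1},\, 1-\hat{p}_{m1}), \qquad D = -\hat{p}_{m1}\log\hat{p}_{m1} - (1-\hat{p}_{m1})\log(1-\hat{p}_{m1}),\]
which is what the code below plots (the natural log is used for the entropy).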
p <- seq(0, 1, 0.01)
gini.index <- 2 * p * (1 - p)
class.error <- 1 - pmax(p, 1 - p)
entropy <- -(p * log(p) + (1 - p) * log(1 - p))  # NaN at p = 0 and 1; those points are simply not drawn
matplot(p, cbind(gini.index, class.error, entropy), type = "l", lty = 1,
        col = c("red", "yellow", "orange"), xlab = expression(hat(p)[m1]), ylab = "Impurity")
legend("topright", legend = c("Gini index", "Classification error", "Entropy"),
       col = c("red", "yellow", "orange"), lty = 1)
In the lab, a classification tree was applied to the Carseats dataset after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.
library(ISLR)
attach(Carseats)
set.seed(1)  # set the seed before sampling so the train/test split is reproducible
train <- sample(nrow(Carseats), 0.75 * nrow(Carseats))
train.seats <- Carseats[train, ]
test.seats <- Carseats[-train, ]
library(tree)
# Fit a regression tree to the training data
tree.seats<-tree(Sales ~.,data = train.seats)
summary(tree.seats)
##
## Regression tree:
## tree(formula = Sales ~ ., data = train.seats)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Income" "CompPrice" "Age"
## [6] "Advertising" "Population" "Education"
## Number of terminal nodes: 17
## Residual mean deviance: 2.363 = 668.6 / 283
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -5.260000 -0.947800 -0.009382 0.000000 1.098000 4.011000
#plot tree
plot(tree.seats)
text(tree.seats, pretty = 0)
#test MSE
y.tree<-predict(tree.seats,newdata=test.seats)
mean((y.tree-test.seats$Sales)^2)
## [1] 5.449054
+ The test MSE for this tree is approximately *5.45*.
cv.carseats=cv.tree(tree.seats,FUN=prune.tree)
names(cv.carseats)
## [1] "size" "dev" "k" "method"
par(mfrow = c(1, 2))
plot(cv.carseats$size, cv.carseats$dev, type = "b")
plot(cv.carseats$k, cv.carseats$dev, type = "b")
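To double-check the value read off the plot, the tree size with the smallest cross-validated deviance can also be extracted directly (a small sketch; note that which.min() returns the first size if several are tied):
# Tree size with the smallest cross-validated deviance
best.size <- cv.carseats$size[which.min(cv.carseats$dev)]
best.size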
#Best size is 9
pruned.seats<-prune.tree(tree.seats, best=9)
par(mfrow = c(1, 1))
plot(pruned.seats)
text(pruned.seats, pretty = 0)
#MSE after Pruning
y.prune<-predict(pruned.seats, newdata=test.seats)
mean((y.prune-test.seats$Sales)^2)
## [1] 5.917148
+ After pruning the tree, the test MSE increased to *5.92*.
library(randomForest)
# Bagging: mtry = 10 uses all ten predictors at each split
bag.seats <- randomForest(Sales ~ ., data = train.seats, mtry = 10, ntree = 500, importance = TRUE)
y.bag<-predict(bag.seats, newdata=test.seats)
mean((y.bag-test.seats$Sales)^2)
## [1] 2.883449
# Most important variables
importance(bag.seats)
## %IncMSE IncNodePurity
## CompPrice 31.8364205 214.802718
## Income 10.9120827 116.934983
## Advertising 16.5399070 117.376062
## Population 4.2840043 91.612292
## Price 63.9327158 635.373780
## ShelveLoc 82.3137911 808.214599
## Age 18.8535884 170.811015
## Education 4.0079371 65.033426
## Urban -1.3454466 8.863985
## US 0.2531485 9.871925
+ Using the bagging approach to analyze the data results in the lowest test MSE so far (*2.88*). The *importance()* function revealed that **ShelveLoc**, **Price**, and **CompPrice** are the three best predictors of Sales.
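These importance measures can also be displayed graphically; a minimal sketch using varImpPlot() from the randomForest package:
# Dot charts of %IncMSE and IncNodePurity for the bagged model
varImpPlot(bag.seats, main = "Variable importance (bagging)")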
# Random forest: consider only m = 5 of the 10 predictors at each split
rf.seats <- randomForest(Sales ~ ., data = train.seats, mtry = 5, ntree = 500, importance = TRUE)
y.rf<-predict(rf.seats, newdata=test.seats)
mean((y.rf-test.seats$Sales)^2)
## [1] 2.973095
# Most important variables
importance(rf.seats)
## %IncMSE IncNodePurity
## CompPrice 21.4632180 207.30060
## Income 9.0879961 139.58852
## Advertising 15.6531826 141.39932
## Population 0.1021106 114.54354
## Price 53.1284355 591.02551
## ShelveLoc 68.9037466 729.74969
## Age 14.6971590 202.74460
## Education 4.5377837 84.85002
## Urban -1.1036286 12.87725
## US 3.0146113 15.69638
detach(Carseats)
+ Using a random forest with *m* = 5 results in a test MSE of *2.97*, slightly higher than the bagging result. As with bagging, **ShelveLoc**, **Price**, and **CompPrice** are the three best predictors of Sales; the main difference is that **Age** is now nearly as important as **CompPrice**.
+ Changing the value of *m* shifts the test MSE somewhat; a sketch for comparing several values of *m* is given below.
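One way to run that comparison is a short loop over mtry values (the exact MSE values will depend on the seed and the train/test split):
# Test MSE for random forests with m = 1, ..., 10 predictors tried at each split
mse.by.m <- sapply(1:10, function(m) {
  rf <- randomForest(Sales ~ ., data = train.seats, mtry = m, ntree = 500)
  mean((predict(rf, newdata = test.seats) - test.seats$Sales)^2)
})
plot(1:10, mse.by.m, type = "b", xlab = "m (mtry)", ylab = "Test MSE")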
This problem involves the OJ data set, which is part of the ISLR package.
attach(OJ)
set.seed(1)
train<-sample(dim(OJ)[1], 800)
train.OJ<-OJ[train,]
test.OJ<-OJ[-train,]
oj.tree<-tree(Purchase~ .,data=train.OJ)
summary(oj.tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = train.OJ)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## Number of terminal nodes: 8
## Residual mean deviance: 0.7305 = 578.6 / 792
## Misclassification error rate: 0.165 = 132 / 800
+ The variables used to construct the tree are *LoyalCH*, *PriceDiff*, *SpecialCH*, and *ListPriceDiff*. The number of terminal nodes in this tree is 8. The training error rate is **0.165**.
oj.tree
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1064.00 CH ( 0.61750 0.38250 )
## 2) LoyalCH < 0.508643 350 409.30 MM ( 0.27143 0.72857 )
## 4) LoyalCH < 0.264232 166 122.10 MM ( 0.12048 0.87952 )
## 8) LoyalCH < 0.0356415 57 10.07 MM ( 0.01754 0.98246 ) *
## 9) LoyalCH > 0.0356415 109 100.90 MM ( 0.17431 0.82569 ) *
## 5) LoyalCH > 0.264232 184 248.80 MM ( 0.40761 0.59239 )
## 10) PriceDiff < 0.195 83 91.66 MM ( 0.24096 0.75904 )
## 20) SpecialCH < 0.5 70 60.89 MM ( 0.15714 0.84286 ) *
## 21) SpecialCH > 0.5 13 16.05 CH ( 0.69231 0.30769 ) *
## 11) PriceDiff > 0.195 101 139.20 CH ( 0.54455 0.45545 ) *
## 3) LoyalCH > 0.508643 450 318.10 CH ( 0.88667 0.11333 )
## 6) LoyalCH < 0.764572 172 188.90 CH ( 0.76163 0.23837 )
## 12) ListPriceDiff < 0.235 70 95.61 CH ( 0.57143 0.42857 ) *
## 13) ListPriceDiff > 0.235 102 69.76 CH ( 0.89216 0.10784 ) *
## 7) LoyalCH > 0.764572 278 86.14 CH ( 0.96403 0.03597 ) *
+ The terminal node picked here is the node labelled **20)**; terminal nodes in this tree are marked with an asterisk. The splitting variable for this node is *SpecialCH*, with a splitting value of *0.5*. There are *70* observations in this branch, with a deviance of *60.89*. This node predicts that Minute Maid is purchased; this is true for 84.3% of the observations, while the remaining 15.7% purchased Citrus Hill.
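As a sanity check, the reported deviance for node 20) can be reproduced from its class counts (11 CH and 59 MM out of 70, which follow from the printed proportions):
\[-2\sum_k n_{mk}\log\hat{p}_{mk} = -2\,(11\log 0.15714 + 59\log 0.84286) \approx 60.89.\]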
plot(oj.tree)
text(oj.tree, pretty=0)
+ The tree shows that the most important variable in the choice between Minute Maid and Citrus Hill is customer loyalty to Citrus Hill (*LoyalCH*): the top splits of the tree are all based on this variable. Other variables that affect *Purchase* are *PriceDiff*, *ListPriceDiff*, and *SpecialCH*.
oj.pred=predict(oj.tree, newdata=test.OJ, type="class")
table(test.OJ$Purchase, oj.pred)
## oj.pred
## CH MM
## CH 147 12
## MM 49 62
mean(oj.pred != test.OJ$Purchase)
## [1] 0.2259259
+ The test error rate is (12 + 49)/270 ≈ 22.6%.
cv.oj<-cv.tree(oj.tree, FUN=prune.misclass)
cv.oj
## $size
## [1] 8 5 2 1
##
## $dev
## [1] 156 156 156 306
##
## $k
## [1] -Inf 0.000000 4.666667 160.000000
##
## $method
## [1] "misclass"
##
## attr(,"class")
## [1] "prune" "tree.sequence"
plot(cv.oj$size, cv.oj$dev, type = "b", xlab = "Tree Size", ylab = "Deviance")
points(cv.oj$size[which.min(cv.oj$dev)], min(cv.oj$dev), col = "red", pch = 19)
(h) Which tree size corresponds to the lowest cross-validated classification error rate?
+ Tree sizes 8, 5, and 2 all achieve the minimum cross-validated error (156 misclassifications); size 2 is the smallest tree attaining that minimum, so it is used for pruning below.
(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
oj.pruned<-prune.misclass(oj.tree, best=2)
plot(oj.pruned)
text(oj.pruned, pretty=0)
summary(oj.pruned)
##
## Classification tree:
## snip.tree(tree = oj.tree, nodes = 3:2)
## Variables actually used in tree construction:
## [1] "LoyalCH"
## Number of terminal nodes: 2
## Residual mean deviance: 0.9115 = 727.4 / 798
## Misclassification error rate: 0.1825 = 146 / 800
+ The training error rate for the pruned tree is 0.1825, which is higher than that of the original tree (0.165).
ojprune.pred=predict(oj.pruned, newdata=test.OJ, type="class")
table(test.OJ$Purchase, ojprune.pred)
## ojprune.pred
## CH MM
## CH 119 40
## MM 30 81
mean(ojprune.pred != test.OJ$Purchase)
## [1] 0.2592593
+ The test error rate for the pruned tree is 25.9%, which again is higher than that of the original tree (22.6%).
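For a side-by-side view, both test error rates can be computed in one place (a small sketch reusing the predictions above):
# Compare the unpruned (8 terminal nodes) and pruned (2 terminal nodes) trees on the test set
data.frame(tree = c("unpruned", "pruned"),
           test.error = c(mean(oj.pred != test.OJ$Purchase),
                          mean(ojprune.pred != test.OJ$Purchase)))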