Chapter 08: 3, 8, 9
library(ISLR)
attach(Carseats)
library(tree)
library(rpart)
library(caret)
## Loading required package: ggplot2
## Warning: package 'ggplot2' was built under R version 4.3.2
## Loading required package: lattice
library(randomForest)
## randomForest 4.7-1.1
## Type rfNews() to see new features/changes/bug fixes.
##
## Attaching package: 'randomForest'
## The following object is masked from 'package:ggplot2':
##
## margin
library(car)
## Loading required package: carData
library(tree)
# Gini index, classification error, and cross-entropy as functions of p (Exercise 3)
p = seq(0.001, 0.999, 0.001)   # avoid p = 0 and p = 1, where log(p) is undefined
gini.index = 2 * p * (1 - p)
class.error = 1 - pmax(p, 1 - p)
cross.entropy = -(p * log(p) + (1 - p) * log(1 - p))
matplot(p, cbind(gini.index, class.error, cross.entropy), type = "l", lty = 1,
        ylab = "Impurity measure", col = c("yellow", "blue", "red"))
legend("top", legend = c("Gini index", "Classification error", "Cross-entropy"),
       col = c("yellow", "blue", "red"), lty = 1)
set.seed(1)
train = sample(1:nrow(Carseats), nrow(Carseats)/2)
strain = Carseats[train, ]
stest = Carseats[-train, ]
tree.seats = tree(Sales ~ ., data = strain)
summary(tree.seats)
##
## Regression tree:
## tree(formula = Sales ~ ., data = strain)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "CompPrice"
## [6] "US"
## Number of terminal nodes: 18
## Residual mean deviance: 2.167 = 394.3 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -3.88200 -0.88200 -0.08712 0.00000 0.89590 4.09900
plot(tree.seats)
text(tree.seats, pretty = 0)
treeseat.pred = predict(tree.seats, newdata = stest)
mean((treeseat.pred - stest$Sales)^2)
## [1] 4.922039
The test MSE is 4.922039.
(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?
set.seed(1)
cv.seats = cv.tree(tree.seats)
plot(cv.seats$size, cv.seats$dev, type = "b")
prune.car = prune.tree(tree.seats, best = 10)
plot(prune.car)
text(prune.car, pretty = 0)
treeseat.pred = predict(prune.car, newdata = stest)
mean((treeseat.pred - stest$Sales)^2)
## [1] 4.918134
Pruning the tree (to 10 terminal nodes) slightly improves the test MSE: 4.918134, compared with 4.922039 for the unpruned tree.
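Rather than reading the best size off the plot, the size minimizing the cross-validated deviance can also be extracted programmatically. A minimal sketch, reusing cv.seats, tree.seats, and stest from above (output not reproduced here):
best.size = cv.seats$size[which.min(cv.seats$dev)]      # size with the lowest CV deviance
prune.best = prune.tree(tree.seats, best = best.size)
mean((predict(prune.best, newdata = stest) - stest$Sales)^2)   # test MSE at that size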
set.seed(1)
bag.seats = randomForest(Sales~., data = strain, mtry = 10, ntree = 551, importance = TRUE)
bagseat.pred = predict(bag.seats, newdata = stest)
mean((bagseat.pred - stest$Sales)^2)
## [1] 2.599099
With bagging (mtry = 10, so all predictors are considered at each split), the test MSE drops to 2.599099.
importance(bag.seats)
## %IncMSE IncNodePurity
## CompPrice 26.18616309 170.781666
## Income 5.25063979 90.717958
## Advertising 13.25673204 97.498810
## Population -2.14346969 58.289311
## Price 60.58241525 503.478806
## ShelveLoc 50.77308639 380.258594
## Age 19.03720001 158.282846
## Education 1.24264920 44.834257
## Urban -0.08461165 9.883299
## US 4.71515903 17.907727
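The same importance measures can also be viewed graphically; a small sketch using the bag.seats fit from above:
varImpPlot(bag.seats)   # dotchart of %IncMSE and IncNodePurity for each predictor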
set.seed(1)
rando.seats = randomForest(Sales~., data = strain, mtry = 10, importance = TRUE)
randseat.pred = predict(rando.seats, newdata = stest)
mean((randseat.pred - stest$Sales)^2)
## [1] 2.605253
The test MSE is 2.605253.
importance(rando.seats)
## %IncMSE IncNodePurity
## CompPrice 24.8888481 170.182937
## Income 4.7121131 91.264880
## Advertising 12.7692401 97.164338
## Population -1.8074075 58.244596
## Price 56.3326252 502.903407
## ShelveLoc 48.8886689 380.032715
## Age 17.7275460 157.846774
## Education 0.5962186 44.598731
## Urban 0.1728373 9.822082
## US 4.2172102 18.073863
The random forest gives a slightly higher test MSE than bagging (2.605253 vs. 2.599099); note that with mtry = 10 every split still considers all 10 predictors, so this fit differs from the bagged model only in the number of trees grown. Price, ShelveLoc, and Age are the three most important predictors of Sales.
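To see the effect of the number of variables considered at each split, one could refit over a grid of mtry values. A rough sketch, reusing strain and stest from above (results not shown; this refits the forest ten times, so it takes a little while):
set.seed(1)
mtry.mse = sapply(1:10, function(m) {
  fit = randomForest(Sales ~ ., data = strain, mtry = m)
  mean((predict(fit, newdata = stest) - stest$Sales)^2)   # test MSE for this mtry
})
plot(1:10, mtry.mse, type = "b", xlab = "mtry", ylab = "Test MSE")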
library(BART)
## Warning: package 'BART' was built under R version 4.3.2
## Loading required package: nlme
## Loading required package: nnet
## Loading required package: survival
##
## Attaching package: 'survival'
## The following object is masked from 'package:caret':
##
## cluster
xtrain <- Carseats[train, 2:11]
ytrain <- Carseats[train, "Sales"]
xtest <- Carseats[-train, 2:11]
set.seed(1)
bartfit <- gbart(xtrain, ytrain, x.test = xtest)
## *****Calling gbart: type=1
## *****Data:
## data:n,p,np: 200, 14, 200
## y1,yn: 2.781850, 1.091850
## x1,x[n*p]: 107.000000, 1.000000
## xp1,xp[np*p]: 111.000000, 1.000000
## *****Number of Trees: 200
## *****Number of Cut Points: 63 ... 1
## *****burn,nd,thin: 100,1000,1
## *****Prior:beta,alpha,tau,nu,lambda,offset: 2,0.95,0.273474,3,0.23074,7.57815
## *****sigma: 1.088371
## *****w (weights): 1.000000 ... 1.000000
## *****Dirichlet:sparse,theta,omega,a,b,rho,augment: 0,0,1,0.5,1,14,0
## *****printevery: 100
##
## MCMC
## done 0 (out of 1100)
## done 100 (out of 1100)
## done 200 (out of 1100)
## done 300 (out of 1100)
## done 400 (out of 1100)
## done 500 (out of 1100)
## done 600 (out of 1100)
## done 700 (out of 1100)
## done 800 (out of 1100)
## done 900 (out of 1100)
## done 1000 (out of 1100)
## time: 7s
## trcnt,tecnt: 1000,1000
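The chunk above stops at the fit; to complete the comparison, the BART test MSE can be computed from the posterior mean predictions. A minimal sketch (ytest is defined here for illustration, and the numeric result is not reproduced):
ytest = Carseats[-train, "Sales"]
yhat.bart = bartfit$yhat.test.mean   # posterior mean prediction at each test point
mean((ytest - yhat.bart)^2)          # BART test MSE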
set.seed(1)
train = sample(1:nrow(OJ), 800)
OJtrain = OJ[train, ]
OJtest = OJ[-train, ]
tree.OJ = tree(Purchase ~ ., data = OJtrain)
summary(tree.OJ)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJtrain)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
plot(tree.OJ)
text(tree.OJ, pretty = 0)
Five variables (LoyalCH, PriceDiff, SpecialCH, ListPriceDiff, and PctDiscMM) are used in the tree, the training error rate is 0.1588, and there are 9 terminal nodes.
tree.OJ
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1073.00 CH ( 0.60625 0.39375 )
## 2) LoyalCH < 0.5036 365 441.60 MM ( 0.29315 0.70685 )
## 4) LoyalCH < 0.280875 177 140.50 MM ( 0.13559 0.86441 )
## 8) LoyalCH < 0.0356415 59 10.14 MM ( 0.01695 0.98305 ) *
## 9) LoyalCH > 0.0356415 118 116.40 MM ( 0.19492 0.80508 ) *
## 5) LoyalCH > 0.280875 188 258.00 MM ( 0.44149 0.55851 )
## 10) PriceDiff < 0.05 79 84.79 MM ( 0.22785 0.77215 )
## 20) SpecialCH < 0.5 64 51.98 MM ( 0.14062 0.85938 ) *
## 21) SpecialCH > 0.5 15 20.19 CH ( 0.60000 0.40000 ) *
## 11) PriceDiff > 0.05 109 147.00 CH ( 0.59633 0.40367 ) *
## 3) LoyalCH > 0.5036 435 337.90 CH ( 0.86897 0.13103 )
## 6) LoyalCH < 0.764572 174 201.00 CH ( 0.73563 0.26437 )
## 12) ListPriceDiff < 0.235 72 99.81 MM ( 0.50000 0.50000 )
## 24) PctDiscMM < 0.196196 55 73.14 CH ( 0.61818 0.38182 ) *
## 25) PctDiscMM > 0.196196 17 12.32 MM ( 0.11765 0.88235 ) *
## 13) ListPriceDiff > 0.235 102 65.43 CH ( 0.90196 0.09804 ) *
## 7) LoyalCH > 0.764572 261 91.20 CH ( 0.95785 0.04215 ) *
Terminal node 9 contains 118 observations and corresponds to LoyalCH > 0.0356415 (with LoyalCH < 0.280875 from its parent split). Its predicted class is MM: about 80.5% of the observations in this node are MM and about 19.5% are CH.
plot(tree.OJ)
text(tree.OJ, pretty = 0)
LoyalCH, which appears at the top splits, is clearly the most important variable; PriceDiff, SpecialCH, ListPriceDiff, and PctDiscMM are the other variables used in the tree.
treeOJ.pred = predict(tree.OJ, newdata = OJtest, type = "class")
table(treeOJ.pred, OJtest$Purchase)
##
## treeOJ.pred CH MM
## CH 160 38
## MM 8 64
(38 + 8) / 270
## [1] 0.1703704
The test error rate is 0.1703704.
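The same rate can be obtained directly from the predictions instead of the confusion-matrix counts; a one-line check using treeOJ.pred and OJtest from above:
mean(treeOJ.pred != OJtest$Purchase)   # should match 0.1703704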
OJcv = cv.tree(tree.OJ, FUN = prune.misclass)
OJcv
## $size
## [1] 9 8 7 4 2 1
##
## $dev
## [1] 150 150 149 158 172 315
##
## $k
## [1] -Inf 0.000000 3.000000 4.333333 10.500000 151.000000
##
## $method
## [1] "misclass"
##
## attr(,"class")
## [1] "prune" "tree.sequence"
plot(OJcv$size, OJcv$dev, type = "b", xlab = "Tree Size", ylab = "cv classification error rate")
Which tree size corresponds to the lowest cross-validated classification error rate?
A tree size of 7 corresponds to the lowest cross-validated classification error rate (149 misclassifications), with sizes 8 and 9 essentially tied at 150.
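As a check, the minimizing size can be read out of the cv.tree object (with FUN = prune.misclass, the $dev component holds the number of cross-validation misclassifications):
OJcv$size[which.min(OJcv$dev)]   # tree size with the fewest CV misclassifications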
Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
prune.OJ = prune.tree(tree.OJ, best = 7)
summary(tree.OJ)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJtrain)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
summary(prune.OJ)
##
## Classification tree:
## snip.tree(tree = tree.OJ, nodes = c(10L, 4L))
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "ListPriceDiff" "PctDiscMM"
## Number of terminal nodes: 7
## Residual mean deviance: 0.7748 = 614.4 / 793
## Misclassification error rate: 0.1625 = 130 / 800
The pruned tree has a higher training error rate (0.1625) than the unpruned tree (0.1588).
treeOJ.pred = predict(tree.OJ, newdata = OJtest, type = "class")
table(treeOJ.pred, OJtest$Purchase)
##
## treeOJ.pred CH MM
## CH 160 38
## MM 8 64
unprunedOJvalerr = (38 + 8) / 270
unprunedOJvalerr
## [1] 0.1703704
pruneOJ.pred = predict(prune.OJ, newdata = OJtest, type = "class")
table(pruneOJ.pred, OJtest$Purchase)
##
## pruneOJ.pred CH MM
## CH 160 36
## MM 8 66
prunedOJvalerr = (36 + 8) / 270
prunedOJvalerr
## [1] 0.162963
The pruned tree therefore has a slightly lower test error rate (0.162963) than the unpruned tree (0.1703704).