Question 3. Consider the Gini index, classification error, and
entropy in a simple classification setting with two classes. Create a
single plot that displays each of these quantities.
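For two classes, writing p for the proportion of observations in a node that belong to the first class, the three measures are

Gini index: G = 2 p (1 - p)
Classification error: E = 1 - max(p, 1 - p)
Entropy: D = -[ p log(p) + (1 - p) log(1 - p) ]

The code below evaluates each of these on a grid of p values between 0 and 1.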
p = seq(0, 1, 0.001)
gini.index = 2 * p * (1 - p)
class.error = 1 - pmax(p, 1 - p)
entropy = - (p * log(p) + (1 - p) * log(1 - p)) # NaN at p = 0 and p = 1 (0 * log(0)); those endpoints are simply not plotted
matplot(p, cbind(gini.index, class.error, entropy), pch = c(15, 17, 19), ylab = "gini.index, class.error, entropy", col = c("green", "blue", "purple"))
legend('bottom', inset = .01, legend = c('gini.index', 'class.error', 'entropy'), col = c("green", "blue", "purple"), pch = c(15, 17, 19))
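As a quick sanity check (a small sketch reusing the vectors defined above), all three curves are symmetric about p = 0.5, where the classification error and Gini index both equal 0.5 and the entropy equals log(2) ≈ 0.693.
max(gini.index)              # 0.5, attained at p = 0.5
max(class.error)             # 0.5, attained at p = 0.5
max(entropy, na.rm = TRUE)   # log(2) ≈ 0.693, attained at p = 0.5
p[which.max(gini.index)]     # 0.5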

Question 8. In the lab, a classification tree was applied to the
Carseats data set after converting Sales into a qualitative response
variable. Now we will seek to predict Sales using regression trees and
related approaches, treating the response as a quantitative
variable.
a. Split the data into a training set and a test set.
library(tree)
library(rpart)
library(ISLR2)
attach(Carseats)
set.seed(1)
train<- sample(1:nrow(Carseats), nrow(Carseats)/2)
carseats.train <- Carseats[train,]
carseats.test<- Carseats[-train,]
b. Fit a regression tree to the training set. Plot the tree, and
interpret the results. What test MSE did you obtain?
The fitted tree has 18 terminal nodes and uses ShelveLoc, Price, Age, Advertising, CompPrice, and US, with shelf location and price driving the top splits. I obtain a test MSE of 4.922.
tree.carseats <- tree(Sales~., data=carseats.train)
summary(tree.carseats)
##
## Regression tree:
## tree(formula = Sales ~ ., data = carseats.train)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "CompPrice"
## [6] "US"
## Number of terminal nodes: 18
## Residual mean deviance: 2.167 = 394.3 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -3.88200 -0.88200 -0.08712 0.00000 0.89590 4.09900
plot(tree.carseats)
text(tree.carseats, pretty=0)

yhat <- predict(tree.carseats, newdata = carseats.test)
car.test <- Carseats[-train,"Sales"]
mean((yhat-car.test)^2)
## [1] 4.922039
c. Use cross-validation in order to determine the optimal level of
tree complexity. Does pruning the tree improve test MSE?
Cross-validation favors the unpruned tree of 18 terminal nodes (it has the lowest deviance), so pruning is not expected to help. Pruning to 10 terminal nodes for illustration gives a test MSE of 4.918, essentially the same as the 4.922 obtained without pruning.
set.seed(1)
cv.carseats <- cv.tree(tree.carseats)
cv.carseats
## $size
## [1] 18 17 16 15 14 13 12 11 10 8 7 6 5 4 3 2 1
##
## $dev
## [1] 984.3936 1031.3372 1036.0021 1027.2166 1027.2166 1055.8168 1044.6955
## [8] 1061.0899 1061.0899 1225.5973 1221.3487 1219.0219 1231.6886 1337.3952
## [15] 1300.0524 1338.3702 1605.0221
##
## $k
## [1] -Inf 16.99544 20.56322 25.01730 25.57104 28.01938 30.36962
## [8] 31.56747 31.80816 40.75445 44.44673 52.57126 76.21881 99.59459
## [15] 116.69889 159.79501 337.60153
##
## $method
## [1] "deviance"
##
## attr(,"class")
## [1] "prune" "tree.sequence"
plot(cv.carseats$size, cv.carseats$dev, type = "b")

prune.carseats <- prune.tree(tree.carseats, best = 10)
plot(prune.carseats)
text(prune.carseats, pretty =0)

carseat.pred <- predict(prune.carseats, newdata = carseats.test)
mean((carseat.pred -carseats.test$Sales)^2)
## [1] 4.918134
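The size could also be chosen programmatically (a minimal sketch reusing the cv.carseats object above); here the smallest cross-validated deviance occurs at the full tree, which is consistent with pruning not improving the test MSE.
best.size <- cv.carseats$size[which.min(cv.carseats$dev)]
best.size   # 18 for this run, i.e. the unpruned tree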
d. Use the bagging approach in order to analyze this data. What test
MSE do you obtain? Use the importance function to determine which
variables are most important.
The test MSE is 2.605 with the bagging approach. The most important variables are Price and ShelveLoc.
library(randomForest)
## randomForest 4.7-1.1
## Type rfNews() to see new features/changes/bug fixes.
set.seed(1)
car.bag <- randomForest(Sales~., data=carseats.train, mtry = 10, importance = TRUE)
yhat.bag <- predict(car.bag, newdata = carseats.test)
mean((yhat.bag-carseats.test$Sales)^2)
## [1] 2.605253
importance(car.bag)
## %IncMSE IncNodePurity
## CompPrice 24.8888481 170.182937
## Income 4.7121131 91.264880
## Advertising 12.7692401 97.164338
## Population -1.8074075 58.244596
## Price 56.3326252 502.903407
## ShelveLoc 48.8886689 380.032715
## Age 17.7275460 157.846774
## Education 0.5962186 44.598731
## Urban 0.1728373 9.822082
## US 4.2172102 18.073863
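The same ranking can be shown graphically with varImpPlot() from the randomForest package:
varImpPlot(car.bag)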
e. Use random forests to analyze this data set. What test MSE do you obtain?
Use the importance function to determine which variables are most
important. Describe the effect of m, the number of variables considered
at each split, on the error rate obtained.
With mtry = 10 (all of the predictors), the random forest is equivalent to bagging, so the test MSE is again roughly 2.6. The two most important variables are again Price and ShelveLoc. Decreasing m decorrelates the trees: moderate values (the regression default is about p/3) typically give a test MSE similar to or slightly better than bagging, while very small m can increase the error because strong predictors are often unavailable at a split.
rf.carseats <- randomForest(Sales~., data = carseats.train, mtry = 10, importance = TRUE)
yhat.rf <- predict(rf.carseats, newdata = carseats.test)
mean((yhat.rf - carseats.test$Sales)^2)
importance(rf.carseats)
## %IncMSE IncNodePurity
## CompPrice 25.9874368 166.421253
## Income 5.4460933 88.845318
## Advertising 13.4960704 102.994131
## Population -1.1630794 57.365541
## Price 56.5693519 501.992734
## ShelveLoc 43.7210975 381.998662
## Age 19.5610068 155.641228
## Education 2.2878692 46.703185
## Urban 0.9424674 9.320797
## US 5.3414975 18.107439
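To see the effect of m directly, one can refit the forest for each value of mtry and track the test MSE (a sketch; the exact values depend on the random seed):
test.mse <- sapply(1:10, function(m) {
  set.seed(1)
  fit <- randomForest(Sales ~ ., data = carseats.train, mtry = m)
  yhat <- predict(fit, newdata = carseats.test)
  mean((yhat - carseats.test$Sales)^2)
})
plot(1:10, test.mse, type = "b", xlab = "m (mtry)", ylab = "Test MSE")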
Question 9. This problem involves the OJ data set which is part of
the ISLR2 package.
a. Create a training set containing a random sample of 800
observations, and a test set containing the remaining observations.
library(ISLR2)
attach(OJ)
set.seed(3)
train <- sample(1:nrow(OJ), 800)
OJtrain <- OJ[train,]
OJtest <- OJ[-train,]
b. Fit a tree to the training data, with Purchase as the response
and the other variables as predictors. Use the summary() function to
produce summary statistics about the tree, and describe the results
obtained. What is the training error rate? How many terminal nodes does
the tree have?
The training error rate is 18.1% (145/800). Four variables (LoyalCH, PriceDiff, PriceMM, and SalePriceMM) were used, and the tree has 9 terminal nodes.
OJ.tree <- tree(Purchase~., data = OJtrain)
summary(OJ.tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJtrain)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "PriceMM" "SalePriceMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7247 = 573.2 / 791
## Misclassification error rate: 0.1812 = 145 / 800
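The 18.1% training error reported by summary() can also be confirmed directly (a small check against the training data):
mean(predict(OJ.tree, OJtrain, type = "class") != OJtrain$Purchase)   # 0.18125 = 145/800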
c. Type in the name of the tree object in order to get a detailed
text output. Pick one of the terminal nodes, and interpret the
information displayed.
Terminal node 20 (marked with an asterisk) contains the 89 training observations with LoyalCH between 0.036 and 0.504, PriceDiff < 0.05, and PriceMM < 2.11. Its deviance is 94.84, the predicted class is MM, and about 78% of the observations in this node are MM purchases.
OJ.tree
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1068.00 CH ( 0.61250 0.38750 )
## 2) LoyalCH < 0.5036 346 414.30 MM ( 0.28613 0.71387 )
## 4) LoyalCH < 0.0356415 57 0.00 MM ( 0.00000 1.00000 ) *
## 5) LoyalCH > 0.0356415 289 371.50 MM ( 0.34256 0.65744 )
## 10) PriceDiff < 0.05 114 105.90 MM ( 0.17544 0.82456 )
## 20) PriceMM < 2.11 89 94.84 MM ( 0.22472 0.77528 ) *
## 21) PriceMM > 2.11 25 0.00 MM ( 0.00000 1.00000 ) *
## 11) PriceDiff > 0.05 175 240.90 MM ( 0.45143 0.54857 )
## 22) LoyalCH < 0.277221 62 66.24 MM ( 0.22581 0.77419 ) *
## 23) LoyalCH > 0.277221 113 154.10 CH ( 0.57522 0.42478 ) *
## 3) LoyalCH > 0.5036 454 365.70 CH ( 0.86123 0.13877 )
## 6) LoyalCH < 0.764572 187 221.10 CH ( 0.72193 0.27807 )
## 12) PriceDiff < 0.265 113 154.70 CH ( 0.56637 0.43363 )
## 24) SalePriceMM < 2.155 102 141.20 CH ( 0.51961 0.48039 ) *
## 25) SalePriceMM > 2.155 11 0.00 CH ( 1.00000 0.00000 ) *
## 13) PriceDiff > 0.265 74 25.11 CH ( 0.95946 0.04054 ) *
## 7) LoyalCH > 0.764572 267 91.71 CH ( 0.95880 0.04120 ) *
d. Create a plot of the tree, and interpret the results.
The plot shows that LoyalCH is the most important indicator of Purchase: the top splits are all on LoyalCH, with highly loyal Citrus Hill customers (LoyalCH > 0.76) predicted to buy CH, and the remaining splits driven mainly by price variables (PriceDiff, PriceMM, SalePriceMM).
plot(OJ.tree)
text(OJ.tree, pretty = 0)
e. Predict the response on the test data, and produce a confusion
matrix comparing the test labels to the predicted test labels. What is
the test error rate?
The confusion matrix below gives a test accuracy of 0.8296, so the test error rate is 1 - 0.8296 ≈ 0.1704.
tree.pred <- predict(OJ.tree, OJtest, type = "class")
table(tree.pred, OJtest$Purchase)
##
## tree.pred CH MM
## CH 148 31
## MM 15 76
(148+76)/270
## [1] 0.8296296
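Equivalently, the test error rate can be computed directly from the predictions (1 minus the accuracy above):
mean(tree.pred != OJtest$Purchase)   # (31 + 15) / 270 ≈ 0.1704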
f. Apply the cv.tree() function to the training set in order to
determine the optimal tree size.
set.seed(3)
OJ.cv <- cv.tree(OJ.tree, FUN = prune.misclass)
OJ.cv
## $size
## [1] 9 5 2 1
##
## $dev
## [1] 175 175 169 310
##
## $k
## [1] -Inf 0.000000 5.666667 148.000000
##
## $method
## [1] "misclass"
##
## attr(,"class")
## [1] "prune" "tree.sequence"
g. Produce a plot with tree size on the x-axis and cross-validated
classification rate on the y-axis.
plot(OJ.cv$size, OJ.cv$dev, type = "b", xlab = "Tree Size", ylab = "CV Misclassifications")
h. Which tree size corresponds to the lowest cross-validated
classification error rate?
The tree of size 2 has the lowest cross-validated error (169 misclassifications), with sizes 5 and 9 tied just behind at 175; the pruned tree in part (i) uses five terminal nodes, which performs nearly as well while keeping more of the tree's structure.
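The same conclusion can be read off programmatically from the OJ.cv object:
OJ.cv$size[which.min(OJ.cv$dev)]   # 2 for this run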
i. Produce a pruned tree corresponding to the optimal tree size
obtained using cross-validation. If cross-validation does not lead to a
selection of a pruned tree, then create a pruned tree with five terminal
nodes.
OJprune <- prune.misclass(OJ.tree, best = 5)
plot(OJprune)
text(OJprune, pretty = 0)
j. Compare the training error rates between the pruned and unpruned
trees. Which is higher?
Neither is higher: the pruned and unpruned trees both have a training error rate of 18.1% (145/800), although the pruned tree has a larger residual mean deviance (0.870 versus 0.725).
summary(OJprune)
##
## Classification tree:
## snip.tree(tree = OJ.tree, nodes = c(3L, 10L))
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff"
## Number of terminal nodes: 5
## Residual mean deviance: 0.8703 = 691.9 / 795
## Misclassification error rate: 0.1812 = 145 / 800
summary(OJ.tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = OJtrain)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "PriceMM" "SalePriceMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7247 = 573.2 / 791
## Misclassification error rate: 0.1812 = 145 / 800
k. Compare the test error rates between the pruned and unpruned
trees. Which is higher?
The test error rates are also identical, at about 0.1704 (accuracy 0.8296). This is expected here: the subtrees removed by pruning (below nodes 3 and 10) contained leaves that all predicted the same class as their parent, so the pruned and unpruned trees produce exactly the same test predictions.
OJtree.pred = predict(OJprune, newdata = OJtest, type = "class")
table(OJtree.pred, OJtest$Purchase)
##
## OJtree.pred CH MM
## CH 148 31
## MM 15 76
(148+76)/270
## [1] 0.8296296
OJtree.pred2 <- predict(OJ.tree, newdata = OJtest, type = "class")
table(OJtree.pred2, OJtest$Purchase)
##
## OJtree.pred2 CH MM
## CH 148 31
## MM 15 76
(148+76)/270
## [1] 0.8296296
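A more direct comparison of the two test error rates (a small sketch reusing the predictions above):
mean(OJtree.pred != OJtest$Purchase)    # pruned tree,   about 0.1704
mean(OJtree.pred2 != OJtest$Purchase)   # unpruned tree, about 0.1704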