Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of \(\hat{p}_{m1}\). The \(x\)-axis should display \(\hat{p}_{m1}\), ranging from 0 to 1, and the \(y\)-axis should display the value of the Gini index, classification error, and entropy.
library(tidyverse)
# For two classes, Gini = 2p(1 - p), classification error = 1 - max(p, 1 - p),
# and entropy = -[p log(p) + (1 - p) log(1 - p)].
# Entropy is NaN at p = 0 and p = 1 (0 * -Inf), so ggplot drops those endpoints.
data.frame(p = seq(0, 1, 0.0001)) %>%
  mutate("Gini Index" = p * (1 - p) * 2,
         "Classification Error" = 1 - pmax(p, 1 - p),
         "Entropy" = -(p * log(p) + (1 - p) * log(1 - p))) %>%
  pivot_longer(!p, names_to = "name", values_to = "value") %>%
  mutate(name = fct_relevel(name, "Entropy", "Gini Index", "Classification Error")) %>%
  ggplot(aes(x = p, y = value, color = name)) +
  geom_line(size = 1.25) +
  scale_color_manual(values = c("#A0A0D0", "#95B2E5", "#98D0CB")) +
  labs(x = expression(paste("Proportion of Observations,"~hat(p)[m1])),
       y = "Entropy, Gini Index, and Classification Error \n",
       color = "Splitting Criterion",
       title = "Classification Tree Measures of Node Purity \n") +
  theme(legend.position = c(.9, .9), legend.title.align = 0.1,
        legend.background = element_rect(fill = "white", linetype = 1, color = "grey70"))
In the lab, a classification tree was applied to the Carseats data set after converting Sales
into a qualitative response variable. Now we will seek to predict Sales
using regression trees and related approaches, treating the response as a quantitative variable.
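The exploratory chunk isn't shown in this write-up; a minimal sketch that would reproduce the output below (assuming the ISLR package, which supplies Carseats):
library(ISLR)
dim(Carseats)         # 400 observations, 11 variables
str(Carseats)         # variable types and example values
sum(is.na(Carseats))  # 0: no missing values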
## [1] 400 11
## 'data.frame': 400 obs. of 11 variables:
## $ Sales : num 9.5 11.22 10.06 7.4 4.15 ...
## $ CompPrice : num 138 111 113 117 141 124 115 136 132 132 ...
## $ Income : num 73 48 35 100 64 113 105 81 110 113 ...
## $ Advertising: num 11 16 10 4 3 13 0 15 0 0 ...
## $ Population : num 276 260 269 466 340 501 45 425 108 131 ...
## $ Price : num 120 83 80 97 128 72 108 120 124 124 ...
## $ ShelveLoc : Factor w/ 3 levels "Bad","Good","Medium": 1 2 3 3 1 1 3 2 3 3 ...
## $ Age : num 42 65 59 55 38 78 71 67 76 76 ...
## $ Education : num 17 10 12 14 13 16 15 10 10 17 ...
## $ Urban : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 2 2 1 1 ...
## $ US : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 1 2 1 2 ...
## [1] 0
library(caret)
set.seed(8)
# Stratified 50/50 split on the response
inTrain <- Carseats$Sales %>%
  createDataPartition(p = 0.5, list = FALSE)
car.train <- Carseats[inTrain, ]
car.test <- Carseats[-inTrain, ]
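The fitting chunk is omitted, but the call echoed in the summary below indicates it was:
library(tree)
car.tree <- tree(Sales ~ ., data = car.train)
summary(car.tree)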
##
## Regression tree:
## tree(formula = Sales ~ ., data = car.train)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "CompPrice"
## Number of terminal nodes: 19
## Residual mean deviance: 1.985 = 361.3 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -3.4630 -0.9433 0.0000 0.0000 0.9500 3.9020
Only five variables have been used in constructing the tree: ShelveLoc, Price, Age, Advertising, and CompPrice. The tree has 19 terminal nodes, and the test MSE is 5.27.
plot(car.tree)
text(car.tree, pretty = 0, cex = 0.75)
mtext("Regression Tree to Predict Sales \n", side = 3)
## [1] 5.26838
car.cv <- cv.tree(car.tree, FUN = prune.tree)
par(mfrow = c(1, 2))
plot(car.cv$size, car.cv$dev, type = "b")   # deviance vs. number of terminal nodes
plot(car.cv$k, car.cv$dev, type = "b")      # deviance vs. cost-complexity parameter k
A tree size of 9 gives the lowest cross-validation error. Pruning the tree to 9 terminal nodes decreases the test MSE slightly, to 5.23.
car.prune <- prune.tree(car.tree, best = 9)
plot(car.prune)
text(car.prune, pretty = 0, cex = 0.75)
mtext("Pruned Tree \n", side = 3)
## [1] 5.231176
Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.
library(randomForest)
car.bag <- randomForest(Sales ~., data = car.train, mtry = 10, importance = TRUE)
car.bag
##
## Call:
## randomForest(formula = Sales ~ ., data = car.train, mtry = 10, importance = TRUE)
## Type of random forest: regression
## Number of trees: 500
## No. of variables tried at each split: 10
##
## Mean of squared residuals: 2.541699
## % Var explained: 67.63
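The bagging test MSE below was presumably obtained as follows (bag.preds is a hypothetical name):
bag.preds <- predict(car.bag, car.test)
mean((car.test$Sales - bag.preds)^2)   # test MSE with all 10 predictors (bagging)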
## [1] 2.746171
Bagging reduces the test MSE to 2.75, almost half that of the pruned tree. The three most important predictors of Sales are ShelveLoc, Price, and CompPrice.
library(knitr)       # kable()
library(kableExtra)  # kable_styling()
data.frame(importance(car.bag)) %>%
  rownames_to_column(var = "Variable") %>%
  arrange(desc(X.IncMSE)) %>%   # importance()'s %IncMSE column becomes X.IncMSE
  mutate(X.IncMSE = scales::percent(X.IncMSE, scale = 1, accuracy = 0.01)) %>%
  rename("Increase in MSE" = X.IncMSE, "Increase in Node Purity" = IncNodePurity) %>%
  kable(align = c("l", "c", "c"), digits = 2) %>%
  kable_styling(bootstrap_options = c("striped", "hover"))
Variable | Increase in MSE | Increase in Node Purity |
---|---|---|
ShelveLoc | 59.31% | 509.32 |
Price | 51.55% | 390.87 |
CompPrice | 29.74% | 185.86 |
Age | 18.94% | 148.15 |
Advertising | 15.66% | 108.60 |
Income | 7.50% | 70.10 |
Education | 2.27% | 39.26 |
US | 1.92% | 5.54 |
Population | 1.51% | 66.42 |
Urban | -0.54% | 6.24 |
Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of \(m\), the number of variables considered at each split, on the error rate obtained.
MSE <- c()
# Fit a random forest for each m = 1, ..., 9 and record its test MSE
for (i in 1:9) {
  set.seed(1)
  rf.fit <- randomForest(Sales ~ ., data = car.train, mtry = i, importance = TRUE)
  rf.preds <- predict(rf.fit, car.test)
  MSE[i] <- mean((car.test$Sales - rf.preds)^2)
}
data.frame(MSE) %>%
  mutate("Number of Variables" = 1:9) %>%
  relocate(MSE, .after = last_col()) %>%
  rename("Test Error" = MSE) %>%
  kable(align = c("c", "c"), digits = 2) %>%
  kable_styling(bootstrap_options = c("striped", "hover"), full_width = F)
Number of Variables | Test Error |
---|---|
1 | 5.20 |
2 | 3.89 |
3 | 3.46 |
4 | 3.15 |
5 | 3.05 |
6 | 2.88 |
7 | 2.88 |
8 | 2.79 |
9 | 2.78 |
As \(m\) increases, the test MSE decreases from 5.20 to 2.78, which is still slightly higher than the bagging result. The three most important predictors of Sales in the random forest model are also ShelveLoc, Price, and CompPrice.
rf.fit <- randomForest(Sales ~ ., data = car.train, mtry = 9, importance = TRUE)
data.frame(importance(rf.fit)) %>%
  rownames_to_column(var = "Variable") %>%
  arrange(desc(X.IncMSE)) %>%
  mutate(X.IncMSE = scales::percent(X.IncMSE, scale = 1, accuracy = 0.01)) %>%
  rename("Increase in MSE" = X.IncMSE, "Increase in Node Purity" = IncNodePurity) %>%
  kable(align = c("l", "c", "c"), digits = 2) %>%
  kable_styling(bootstrap_options = c("striped", "hover"))
Variable | Increase in MSE | Increase in Node Purity |
---|---|---|
ShelveLoc | 56.97% | 506.51 |
Price | 49.62% | 385.90 |
CompPrice | 30.14% | 186.06 |
Age | 18.28% | 148.86 |
Advertising | 16.96% | 113.28 |
Income | 6.72% | 76.15 |
Education | 3.40% | 41.40 |
US | 2.13% | 6.58 |
Population | 1.75% | 69.75 |
Urban | -0.94% | 5.84 |
This problem involves the OJ data set, which is part of the ISLR package.
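As with Carseats, the exploratory chunk isn't shown; a sketch that would reproduce the output below:
dim(OJ)          # 1070 observations, 18 variables
str(OJ)          # variable types and example values
sum(is.na(OJ))   # 0: no missing values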
## [1] 1070 18
## 'data.frame': 1070 obs. of 18 variables:
## $ Purchase : Factor w/ 2 levels "CH","MM": 1 1 1 2 1 1 1 1 1 1 ...
## $ WeekofPurchase: num 237 239 245 227 228 230 232 234 235 238 ...
## $ StoreID : num 1 1 1 1 7 7 7 7 7 7 ...
## $ PriceCH : num 1.75 1.75 1.86 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
## $ PriceMM : num 1.99 1.99 2.09 1.69 1.69 1.99 1.99 1.99 1.99 1.99 ...
## $ DiscCH : num 0 0 0.17 0 0 0 0 0 0 0 ...
## $ DiscMM : num 0 0.3 0 0 0 0 0.4 0.4 0.4 0.4 ...
## $ SpecialCH : num 0 0 0 0 0 0 1 1 0 0 ...
## $ SpecialMM : num 0 1 0 0 0 1 1 0 0 0 ...
## $ LoyalCH : num 0.5 0.6 0.68 0.4 0.957 ...
## $ SalePriceMM : num 1.99 1.69 2.09 1.69 1.69 1.99 1.59 1.59 1.59 1.59 ...
## $ SalePriceCH : num 1.75 1.75 1.69 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
## $ PriceDiff : num 0.24 -0.06 0.4 0 0 0.3 -0.1 -0.16 -0.16 -0.16 ...
## $ Store7 : Factor w/ 2 levels "No","Yes": 1 1 1 1 2 2 2 2 2 2 ...
## $ PctDiscMM : num 0 0.151 0 0 0 ...
## $ PctDiscCH : num 0 0 0.0914 0 0 ...
## $ ListPriceDiff : num 0.24 0.24 0.23 0 0 0.3 0.3 0.24 0.24 0.24 ...
## $ STORE : num 1 1 1 1 0 0 0 0 0 0 ...
## [1] 0
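No split chunk appears in this write-up, but the tree summaries below use 800 training cases and the confusion matrix later covers the remaining 270, so the split was presumably along these lines (the seed is an assumption):
set.seed(8)                       # hypothetical seed
oj.idx <- sample(nrow(OJ), 800)   # random sample of 800 rows for training
oj.train <- OJ[oj.idx, ]
oj.test <- OJ[-oj.idx, ]          # remaining 270 rows for testing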
Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?
The classification tree uses only three variables: LoyalCH, PriceDiff, and StoreID. It has 8 terminal nodes, and the training error rate (misclassification rate) is 15.88%.
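Based on the call echoed in the summary below, the tree was fit with:
oj.tree <- tree(Purchase ~ ., data = oj.train)
summary(oj.tree)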
##
## Classification tree:
## tree(formula = Purchase ~ ., data = oj.train)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "StoreID"
## Number of terminal nodes: 8
## Residual mean deviance: 0.7679 = 608.2 / 792
## Misclassification error rate: 0.1588 = 127 / 800
Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.
Let’s look at terminal node 4. The split that creates this node is on LoyalCH, at a value of 0.0356415: customers in this node satisfy the root split (LoyalCH < 0.48285) and then LoyalCH < 0.0356415. The predicted class at this node is a Minute Maid purchase, MM. About 98.33% of the 60 customers in this node purchased Minute Maid, while the remaining customers purchased Citrus Hill.
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1067.000 CH ( 0.61375 0.38625 )
## 2) LoyalCH < 0.48285 297 329.000 MM ( 0.24242 0.75758 )
## 4) LoyalCH < 0.0356415 60 10.170 MM ( 0.01667 0.98333 ) *
## 5) LoyalCH > 0.0356415 237 289.400 MM ( 0.29958 0.70042 )
## 10) PriceDiff < 0.49 228 268.800 MM ( 0.27632 0.72368 )
## 20) PriceDiff < -0.34 16 0.000 MM ( 0.00000 1.00000 ) *
## 21) PriceDiff > -0.34 212 258.000 MM ( 0.29717 0.70283 ) *
## 11) PriceDiff > 0.49 9 6.279 CH ( 0.88889 0.11111 ) *
## 3) LoyalCH > 0.48285 503 453.800 CH ( 0.83300 0.16700 )
## 6) LoyalCH < 0.764572 233 288.100 CH ( 0.69099 0.30901 )
## 12) PriceDiff < 0.015 83 113.600 MM ( 0.43373 0.56627 )
## 24) StoreID < 3.5 44 49.490 MM ( 0.25000 0.75000 ) *
## 25) StoreID > 3.5 39 50.920 CH ( 0.64103 0.35897 ) *
## 13) PriceDiff > 0.015 150 135.200 CH ( 0.83333 0.16667 ) *
## 7) LoyalCH > 0.764572 270 98.180 CH ( 0.95556 0.04444 ) *
Create a plot of the tree, and interpret the results.
The most important indicator of Purchase is brand loyalty to Citrus Hill, since the first three splits in the tree are all on LoyalCH. The next most important variable is PriceDiff, followed by StoreID.
plot(oj.tree)
text(oj.tree, cex = 0.75)
mtext("Classification Tree to Predict Purchases \n", side = 3)
Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?
The overall accuracy of the model on the test data is 80.74%, which gives a test error rate of 19.26%.
# Note: confusionMatrix(data, reference) expects the predictions first;
# passing the true labels first swaps the row/column roles in the printout,
# though the overall accuracy is unaffected.
oj.preds <- oj.tree %>%
  predict(oj.test, type = "class")
confusionMatrix(oj.test$Purchase, oj.preds)
## Confusion Matrix and Statistics
##
## Reference
## Prediction CH MM
## CH 134 28
## MM 24 84
##
## Accuracy : 0.8074
## 95% CI : (0.7552, 0.8527)
## No Information Rate : 0.5852
## P-Value [Acc > NIR] : 6.519e-15
##
## Kappa : 0.6012
##
## Mcnemar's Test P-Value : 0.6774
##
## Sensitivity : 0.8481
## Specificity : 0.7500
## Pos Pred Value : 0.8272
## Neg Pred Value : 0.7778
## Prevalence : 0.5852
## Detection Rate : 0.4963
## Detection Prevalence : 0.6000
## Balanced Accuracy : 0.7991
##
## 'Positive' Class : CH
##
Apply the cv.tree() function to the training set in order to determine the optimal tree size. Which tree size corresponds to the lowest cross-validated classification error rate?
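The cross-validation chunk isn't shown; a sketch using misclassification error as the pruning criterion (FUN = prune.misclass and the seed are assumptions):
set.seed(8)                                       # hypothetical seed
oj.cv <- cv.tree(oj.tree, FUN = prune.misclass)
oj.cv$size[which.min(oj.cv$dev)]                  # size with the lowest CV error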
A tree size of 6 minimizes the cross-validation error.
Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
Compare the training error rates between the pruned and unpruned trees. Which is higher?
The training error rate of the pruned tree (17.25%) is about 1.37 percentage points higher than that of the unpruned tree (15.88%).
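Given the six-node tree summarized below, the pruning step was presumably:
oj.prune <- prune.misclass(oj.tree, best = 6)
summary(oj.prune)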
##
## Classification tree:
## snip.tree(tree = oj.tree, nodes = c(10L, 12L))
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff"
## Number of terminal nodes: 6
## Residual mean deviance: 0.7962 = 632.2 / 794
## Misclassification error rate: 0.1725 = 138 / 800
Compare the test error rates between the pruned and unpruned trees. Which is higher?
The unpruned tree's test error rate of 19.26% is higher; pruning lowers it to 18.52%.
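The two rates below were presumably computed as follows, reusing oj.preds from above (prune.preds is a hypothetical name):
prune.preds <- predict(oj.prune, oj.test, type = "class")
mean(prune.preds != oj.test$Purchase)   # pruned tree test error rate
mean(oj.preds != oj.test$Purchase)      # unpruned tree test error rate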
## [1] 0.1851852
## [1] 0.1925926