3. Consider the Gini index, classification error, and entropy in a
simple classification setting with two classes. Create a single plot
that displays each of these quantities as a function of p̂_m1. The x-axis
should display p̂_m1, ranging from 0 to 1, and the y-axis should display
the value of the Gini index, classification error, and entropy. Hint: In
a setting with two classes, p̂_m1 = 1 − p̂_m2. You could make this plot by
hand, but it will be much easier to make in R.
p <- seq(0, 1, 0.0001)
# Gini index
G <- 2 * p * (1 - p)
# Classification error
E <- 1 - pmax(p, 1 - p)
# Entropy (natural log; NaN at p = 0 and p = 1, which plot() simply omits)
D <- -(p * log(p) + (1 - p) * log(1 - p))
plot(p, D, type = "l", col = "red", xlab = "p_m1", ylab = "Value")
lines(p, E, col = "green")
lines(p, G, col = "blue")
legend(0.3, 0.15, c("Entropy", "Misclassification error", "Gini"),
       lty = c(1, 1, 1), lwd = c(2.5, 2.5, 2.5), col = c("red", "green", "blue"))
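All three measures peak at p̂_m1 = 0.5, with the Gini index and classification error reaching 0.5 and the (natural-log) entropy reaching log 2 ≈ 0.693. A quick numerical check, using only base R:
# Evaluate each impurity measure at p = 0.5, where all three curves attain their maximum
p_half <- 0.5
c(Gini = 2 * p_half * (1 - p_half),          # 0.5
  ClassError = 1 - max(p_half, 1 - p_half),  # 0.5
  Entropy = -2 * p_half * log(p_half))       # log(2), about 0.693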

8. In the lab, a classification tree was applied to the Carseats
data set after converting Sales into a qualitative response variable.
Now we will seek to predict Sales using regression trees and related
approaches, treating the response as a quantitative variable.
- Split the data set into a training set and a test set.
library(ISLR2)
library(tree)
library(caret)
## Loading required package: ggplot2
## Loading required package: lattice
set.seed(1)
train_index<-sample(1:nrow(Carseats),nrow(Carseats)/2)
train_data<-Carseats[train_index,]
test_data<-Carseats[-train_index,]
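Since caret is already loaded, an equivalent split could also be produced with createDataPartition(), which stratifies on the response; a minimal sketch not used for the results below (the names part_index, train_alt, and test_alt are illustrative):
# Alternative 50/50 split with caret; stratifies on quantile groups of Sales
set.seed(1)
part_index <- createDataPartition(Carseats$Sales, p = 0.5, list = FALSE)
train_alt <- Carseats[part_index, ]
test_alt <- Carseats[-part_index, ]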
- Fit a regression tree to the training set. Plot the tree, and
interpret the results.
reg_tree <- tree(Sales ~ ., data = train_data)
summary(reg_tree)
##
## Regression tree:
## tree(formula = Sales ~ ., data = train_data)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "CompPrice"
## [6] "US"
## Number of terminal nodes: 18
## Residual mean deviance: 2.167 = 394.3 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -3.88200 -0.88200 -0.08712 0.00000 0.89590 4.09900
plot(reg_tree)
text(reg_tree, pretty = 0, cex = 0.4)

pred_tree <- predict(reg_tree, newdata = test_data)
mse_tree <- mean((pred_tree - test_data$Sales)^2)
mse_tree
## [1] 4.922039
What test MSE do you obtain?
The test MSE is approximately 4.92. From the summary output, the fitted
tree has 18 terminal nodes and uses ShelveLoc, Price, Age, Advertising,
CompPrice, and US, suggesting that shelf location and price-related
variables are the main drivers of the predicted sales.
- Use cross-validation in order to determine the optimal level of tree
complexity.
cv_tree <- cv.tree(reg_tree)
plot(cv_tree$size, cv_tree$dev, type = "b")

best_size <- cv_tree$size[which.min(cv_tree$dev)]
pruned_tree <- prune.tree(reg_tree, best = best_size)
plot(pruned_tree)
text(pruned_tree, pretty = 0, cex = 0.5)

pred_pruned <- predict(pruned_tree, newdata = test_data)
mse_pruned <- mean((pred_pruned - test_data$Sales)^2)
mse_pruned
## [1] 4.922039
Does pruning the tree improve the test MSE?
No. With this seed, cross-validation selects the full tree, so the
"pruned" tree is identical to the unpruned one and the test MSE remains
4.922.
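Since cross-validation keeps the full tree here, the size/MSE trade-off can still be examined directly by computing the test MSE for each candidate size returned by cv.tree(); a minimal sketch (the names sizes, pruned_mse, and pt are illustrative, and no particular outcome is claimed):
# Test MSE for pruned trees of each size considered during cross-validation
sizes <- cv_tree$size[cv_tree$size > 1]   # prune.tree() needs at least 2 terminal nodes
pruned_mse <- sapply(sizes, function(s) {
  pt <- prune.tree(reg_tree, best = s)
  mean((predict(pt, newdata = test_data) - test_data$Sales)^2)
})
plot(sizes, pruned_mse, type = "b", xlab = "Terminal nodes", ylab = "Test MSE")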
- Use the bagging approach in order to analyze this data.
library(randomForest)
## randomForest 4.7-1.2
## Type rfNews() to see new features/changes/bug fixes.
##
## Attaching package: 'randomForest'
## The following object is masked from 'package:ggplot2':
##
## margin
set.seed(1)
bag_model <- randomForest(Sales ~ ., data = train_data, mtry = ncol(train_data) - 1, importance = TRUE)
pred_bag <- predict(bag_model, newdata = test_data)
mse_bag <- mean((pred_bag - test_data$Sales)^2)
mse_bag
## [1] 2.605253
importance(bag_model)
## %IncMSE IncNodePurity
## CompPrice 24.8888481 170.182937
## Income 4.7121131 91.264880
## Advertising 12.7692401 97.164338
## Population -1.8074075 58.244596
## Price 56.3326252 502.903407
## ShelveLoc 48.8886689 380.032715
## Age 17.7275460 157.846774
## Education 0.5962186 44.598731
## Urban 0.1728373 9.822082
## US 4.2172102 18.073863
What test MSE do you obtain? Use the importance() function to
determine which variables are most important.
Test MSE decreased to 2.605 using bagging. Price, ShelveLoc, and
Age are the most influential predictors of sales.
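The same importance measures can be displayed graphically for the bagged model, analogous to the varImpPlot() call used for the random forest below:
# Dot charts of %IncMSE and IncNodePurity for the bagged model
varImpPlot(bag_model)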
- Use random forests to analyze this data.
set.seed(1)
rf_model <- randomForest(Sales ~ ., data = train_data, mtry = 4, importance = TRUE)
pred_rf <- predict(rf_model, newdata = test_data)
mse_rf <- mean((pred_rf - test_data$Sales)^2)
mse_rf
## [1] 2.787584
importance(rf_model)
## %IncMSE IncNodePurity
## CompPrice 15.7891655 160.57944
## Income 4.1275509 121.12953
## Advertising 9.6425758 111.54581
## Population -1.3596645 85.92575
## Price 43.4055391 423.06225
## ShelveLoc 37.8850232 311.97119
## Age 13.8924424 174.18229
## Education 0.1960888 62.77782
## Urban 0.1393816 12.92952
## US 6.3532441 30.42255
varImpPlot(rf_model)

m_values <- 1:(ncol(train_data) - 1)
rf_errors <- sapply(m_values, function(m) {
  model <- randomForest(Sales ~ ., data = train_data, mtry = m)
  mean((predict(model, newdata = test_data) - test_data$Sales)^2)
})
plot(m_values, rf_errors, type = "b", xlab = "mtry", ylab = "Test MSE")

What test MSE do you obtain? Use the importance() function to
determine which variables are most important. Describe the effect of m,
the number of variables considered at each split, on the error rate
obtained.
Using random forests with mtry = 4, the test MSE is 2.788, slightly
worse than bagging (2.605) but substantially better than the single
regression tree (4.92). Price, ShelveLoc, and Age remain the most
important variables. The plot of test MSE against mtry (above) shows how
the error rate changes with m, the number of variables considered at
each split.
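The value of m that minimizes the test MSE in the sweep above can be read off from the objects already computed; a short sketch (the exact winner depends on the random forests' seeds, so no particular value is claimed):
# Tabulate the test MSE for each mtry and report the best one from the sweep
data.frame(mtry = m_values, test_mse = round(rf_errors, 3))
m_values[which.min(rf_errors)]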
- Now analyze the data using BART, and report your results.
library(BART)
## Loading required package: nlme
## Loading required package: survival
##
## Attaching package: 'survival'
## The following object is masked from 'package:caret':
##
## cluster
x_train <- train_data[, -which(names(train_data) == "Sales")]
y_train <- train_data$Sales
x_test <- test_data[, -which(names(test_data) == "Sales")]
set.seed(1)
bart_fit <- gbart(x.train = x_train, y.train = y_train, x.test = x_test)
## *****Calling gbart: type=1
## *****Data:
## data:n,p,np: 200, 14, 200
## y1,yn: 2.781850, 1.091850
## x1,x[n*p]: 107.000000, 1.000000
## xp1,xp[np*p]: 111.000000, 1.000000
## *****Number of Trees: 200
## *****Number of Cut Points: 63 ... 1
## *****burn,nd,thin: 100,1000,1
## *****Prior:beta,alpha,tau,nu,lambda,offset: 2,0.95,0.273474,3,0.23074,7.57815
## *****sigma: 1.088371
## *****w (weights): 1.000000 ... 1.000000
## *****Dirichlet:sparse,theta,omega,a,b,rho,augment: 0,0,1,0.5,1,14,0
## *****printevery: 100
##
## MCMC
## done 0 (out of 1100)
## done 100 (out of 1100)
## done 200 (out of 1100)
## done 300 (out of 1100)
## done 400 (out of 1100)
## done 500 (out of 1100)
## done 600 (out of 1100)
## done 700 (out of 1100)
## done 800 (out of 1100)
## done 900 (out of 1100)
## done 1000 (out of 1100)
## time: 4s
## trcnt,tecnt: 1000,1000
mean((bart_fit$yhat.test.mean - test_data$Sales)^2)
## [1] 1.450842
The BART model gives a test MSE of 1.451, substantially lower than the
single regression tree (4.92), the pruned tree (4.92), bagging (2.61),
and random forests (2.79), suggesting that BART is particularly
effective at capturing the nonlinear relationships in the Carseats data.
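As in the ISLR2 lab, a rough notion of variable importance under BART is how often each predictor is used in the sum-of-trees model; the fitted object's varcount.mean component gives the average number of splits on each variable per posterior draw:
# Predictors ordered by how frequently BART splits on them
ord <- order(bart_fit$varcount.mean, decreasing = TRUE)
bart_fit$varcount.mean[ord]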
9. This problem involves the OJ data set which is part of the ISLR2
package.
- Create a training set containing a random sample of 800
observations, and a test set containing the remaining observations.
set.seed(1)
train_index <- sample(1:nrow(OJ), 800)
train_data <- OJ[train_index, ]
test_data <- OJ[-train_index, ]
- Fit a tree to the training data, with Purchase as the response and
the other variables as predictors. Use the summary() function to produce
summary statistics about the tree, and..
oj_tree <- tree(Purchase ~ ., data = train_data)
summary(oj_tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = train_data)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
..describe the results obtained. What is the training error rate? How
many terminal nodes does the tree have?
The tree uses LoyalCH, PriceDiff, SpecialCH, ListPriceDiff, and
PctDiscMM to make its splits, suggesting that customer brand loyalty,
price differences, and promotional activity play key roles in
distinguishing between Citrus Hill and Minute Maid purchases. The
training error rate is 15.88% (127/800), and the tree has 9 terminal
nodes.
- Type in the name of the tree object in order to get a detailed text
output.
oj_tree
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1073.00 CH ( 0.60625 0.39375 )
## 2) LoyalCH < 0.5036 365 441.60 MM ( 0.29315 0.70685 )
## 4) LoyalCH < 0.280875 177 140.50 MM ( 0.13559 0.86441 )
## 8) LoyalCH < 0.0356415 59 10.14 MM ( 0.01695 0.98305 ) *
## 9) LoyalCH > 0.0356415 118 116.40 MM ( 0.19492 0.80508 ) *
## 5) LoyalCH > 0.280875 188 258.00 MM ( 0.44149 0.55851 )
## 10) PriceDiff < 0.05 79 84.79 MM ( 0.22785 0.77215 )
## 20) SpecialCH < 0.5 64 51.98 MM ( 0.14062 0.85938 ) *
## 21) SpecialCH > 0.5 15 20.19 CH ( 0.60000 0.40000 ) *
## 11) PriceDiff > 0.05 109 147.00 CH ( 0.59633 0.40367 ) *
## 3) LoyalCH > 0.5036 435 337.90 CH ( 0.86897 0.13103 )
## 6) LoyalCH < 0.764572 174 201.00 CH ( 0.73563 0.26437 )
## 12) ListPriceDiff < 0.235 72 99.81 MM ( 0.50000 0.50000 )
## 24) PctDiscMM < 0.196196 55 73.14 CH ( 0.61818 0.38182 ) *
## 25) PctDiscMM > 0.196196 17 12.32 MM ( 0.11765 0.88235 ) *
## 13) ListPriceDiff > 0.235 102 65.43 CH ( 0.90196 0.09804 ) *
## 7) LoyalCH > 0.764572 261 91.20 CH ( 0.95785 0.04215 ) *
Pick one of the terminal nodes, and interpret the information
displayed.
Node 8 is a terminal node reached when LoyalCH < 0.0356. It contains 59
observations, and the predicted class is MM with a very high class
probability of 98.3%, indicating that almost all customers with
extremely low Citrus Hill loyalty chose Minute Maid. The node deviance
of 10.14 is low, indicating a very pure (confident and consistent) node.
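The same numbers can be pulled programmatically from the tree object's frame, whose rows are indexed by node number (a minimal sketch):
# Frame row for node 8: n, deviance, fitted class (yval), and class probabilities (yprob)
oj_tree$frame["8", ]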
- Create a plot of the tree, and..
plot(oj_tree)
text(oj_tree, pretty = 0, cex = 0.7)

..interpret the results.
When LoyalCH<0.5036, customers are more likely to purchase
Minute Maid, while higher loyalty scores are classified as choosing
Citrus Hill. The tree shows that brand loyalty is the strongest
predictor, with price and promotion effects modifying the decision in
specific loyalty segments.
- Predict the response on the test data, and produce a confusion
matrix comparing the test labels to the predicted test labels.
oj_pred <- predict(oj_tree, test_data, type = "class")
confusion <- table(Predicted = oj_pred, Actual = test_data$Purchase)
confusion
## Actual
## Predicted CH MM
## CH 160 38
## MM 8 64
test_error <- 1 - sum(diag(confusion)) / sum(confusion)
test_error
## [1] 0.1703704
What is the test error rate?
The test error rate is 17.04% (46 of 270 test observations misclassified).
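Equivalently, the test error rate can be computed directly from the predicted and actual labels as a one-line cross-check:
# Proportion of test observations whose predicted Purchase differs from the actual label
mean(oj_pred != test_data$Purchase)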
- Apply the cv.tree() function to the training set in order to
determine the optimal tree size.
cv_oj <- cv.tree(oj_tree, FUN = prune.misclass)
- Produce a plot with tree size on the x-axis and cross-validated
classification error rate on the y-axis.
plot(cv_oj$size, cv_oj$dev / nrow(train_data), type = "b", xlab = "Tree Size", ylab = "CV Classification Error Rate")

- Which tree size corresponds to the lowest cross-validated
classification error rate?
best_size <- cv_oj$size[which.min(cv_oj$dev)]
best_size
## [1] 7
The tree size with the lowest cross-validated classification error rate
is 7.
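Note that which.min() returns only the first size attaining the minimum; if several sizes tie, the smallest such tree could be preferred. A minimal sketch (best_size_small is an illustrative name):
# Smallest tree size among all sizes achieving the minimum CV misclassification count
best_size_small <- min(cv_oj$size[cv_oj$dev == min(cv_oj$dev)])
best_size_small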
- Produce a pruned tree corresponding to the optimal tree size
obtained using cross-validation. If cross-validation does not lead to
selection of a pruned tree, then create a pruned tree with five terminal
nodes.
pruned_oj <- prune.misclass(oj_tree, best = best_size)
plot(pruned_oj)
text(pruned_oj, pretty = 0)

- Compare the training error rates between the pruned and unpruned
trees.
summary(oj_tree)$misclass
## [1] 127 800
summary(pruned_oj)$misclass
## [1] 130 800
Which is higher?
The pruned tree's training error rate (130/800 = 16.25%) is slightly
higher than the unpruned tree's (127/800 = 15.88%), as expected since
pruning reduces the tree's flexibility.
- Compare the test error rates between the pruned and unpruned
trees.
oj_pred_pruned <- predict(pruned_oj, test_data, type = "class")
conf_pruned <- table(Predicted = oj_pred_pruned, Actual = test_data$Purchase)
test_error_pruned <- 1 - sum(diag(conf_pruned)) / sum(conf_pruned)
test_error_pruned
## [1] 0.162963
Which is higher?
The unpruned tree's test error rate (17.04%) is slightly higher than the
pruned tree's (16.30%), so pruning improves test performance slightly
here.
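For a compact summary, the two test error rates computed above can be placed side by side:
# Test error rates of the unpruned and pruned OJ classification trees
c(unpruned = test_error, pruned = test_error_pruned)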