libraries:
library(ggplot2)
library(ISLR)
library(tree)
library(randomForest)
## randomForest 4.7-1.2
## Type rfNews() to see new features/changes/bug fixes.
##
## Attaching package: 'randomForest'
## The following object is masked from 'package:ggplot2':
##
## margin
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:randomForest':
##
## combine
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(caret)
## Loading required package: lattice
library(dbarts)
Chapter 08 (page 361): 3, 8, 9
Chapter 08 Problem 3: Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of p̂_m1. The x-axis should display p̂_m1, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy. Hint: In a setting with two classes, p̂_m1 = 1 − p̂_m2. You could make this plot by hand, but it will be much easier to make in R.
Writing p = p̂_m1, the two-class measures are: Gini index G = 2p(1 − p), classification error E = min(p, 1 − p), and entropy D = −p log2(p) − (1 − p) log2(1 − p).
pm1 <- seq(0, 1, length.out = 100)
giniidx <- 2 * pm1 * (1 - pm1)
class_error <- pmin(pm1, 1 - pm1)
entropy <- -(pm1 * log2(pm1) + (1 - pm1) * log2(1 - pm1))
entropy[is.nan(entropy)] <- 0
dataframe <- data.frame(pm1, giniidx, class_error, entropy)
ggplot(dataframe, aes(x = pm1)) +
geom_line(aes(y = giniidx, color = "Gini Index")) +
geom_line(aes(y = class_error, color = "Classification Error")) +
geom_line(aes(y = entropy, color = "Entropy")) +
labs(title = "Gini Index, Classification Error, and Entropy as a function of Pm1",
x = "P",
y = "Value",
color = "Measure") +
theme_minimal() +
scale_color_manual(values = c("lightblue", "lightpink", "lightgrey"))

Chapter 08 Problem 8: In the lab, a classification tree was applied
to the Carseats data set after converting Sales into a qualitative
response variable. Now we will seek to predict Sales using regression
trees and related approaches, treating the response as a quantitative
variable.
data(Carseats)
set.seed(123)
a) Split the data set into a training set and a test set.
sample = sample(nrow(Carseats), nrow(Carseats) / 2)
train = Carseats[sample, ]
test = Carseats[-sample, ]
b) Fit a regression tree to the training set. Plot the tree, and
interpret the results. What test MSE do you obtain?
reg_tree <- tree(Sales~., data = train)
summary(reg_tree)
##
## Regression tree:
## tree(formula = Sales ~ ., data = train)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Income" "Age" "Population"
## [6] "Education" "CompPrice" "Advertising"
## Number of terminal nodes: 18
## Residual mean deviance: 2.132 = 388.1 / 182
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -4.08000 -0.92870 0.06244 0.00000 0.87020 3.71700
plot(reg_tree)
text(reg_tree)

preds = predict(reg_tree, test)
MSE = mean((test$Sales - preds)^2)
MSE
## [1] 4.395357
Answer: The test MSE is 4.395357.
c) Use cross-validation in order to determine the optimal level of
tree complexity. Does pruning the tree improve the test MSE?
cv = cv.tree(reg_tree, FUN = prune.tree)
par(mfrow = c(1,2))
plot(cv$size, cv$dev, type = "b")
plot(cv$k, cv$dev, type = "b")

prune = prune.tree(reg_tree, best = cv$size[which.min(cv$dev)])
par(mfrow = c(1, 1))
plot(prune)
text(prune)

preds2 = predict(prune, test)
mse2=mean((test$Sales - preds2)^2)
mse2
## [1] 4.591618
Answer: The pruned tree's test MSE is 4.591618, a slight increase over the unpruned tree's 4.395357, so pruning does not improve the test MSE here.
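To double-check whether any pruned size would beat the full tree on the test set, one option is to loop over candidate sizes and compute the test MSE for each subtree. A minimal sketch, assuming the reg_tree, train, and test objects from above (the full tree has 18 terminal nodes):
sizes <- 2:18
test_mse_by_size <- sapply(sizes, function(s) {
  pruned_s <- prune.tree(reg_tree, best = s)   # subtree with (about) s terminal nodes
  mean((test$Sales - predict(pruned_s, test))^2)
})
plot(sizes, test_mse_by_size, type = "b",
     xlab = "Terminal nodes", ylab = "Test MSE")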
d) Use the bagging approach in order to analyze this data. What test
MSE do you obtain? Use the importance() function to determine which
variables are most important.
bagged = randomForest(Sales ~ ., data = train, mtry = 10, ntree = 500, importance = T)
preds3 = predict(bagged, test)
mse3=mean((test$Sales - preds3)^2)
mse3
## [1] 2.706945
importance <- importance(bagged) |>
as.data.frame() |>
arrange(desc(`%IncMSE`))
print(importance)
## %IncMSE IncNodePurity
## ShelveLoc 49.31816789 391.948958
## Price 46.01586429 395.251820
## CompPrice 20.45893952 163.315084
## Age 17.74691675 171.659574
## Advertising 6.70900949 73.007073
## Income 5.99352172 88.626184
## Education 2.98753578 57.308595
## US 0.05207339 6.011265
## Urban 0.04864498 7.721022
## Population -1.84004720 53.079505
Answer: The test MSE is 2.706945 and the most important variables
are ShelveLoc, Price, CompPrice, and Age.
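For a visual check of the same two importance measures, the randomForest package provides varImpPlot(); a quick sketch using the bagged model fit above:
varImpPlot(bagged, main = "Variable importance, bagged trees")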
e) Use random forests to analyze this data. What test MSE do you
obtain? Use the importance() function to determine which variables are
most important. Describe the effect of m, the number of variables
considered at each split, on the error rate obtained.
m = 7
forest = randomForest(Sales ~ ., data = train, mtry = m, ntree = 500, importance = T)
preds4 = predict(forest, test)
mse4 = mean((test$Sales - preds4)^2)
mse4
## [1] 2.873842
importance2 <- importance(forest) |>
as.data.frame() |>
arrange(desc(`%IncMSE`))
print(importance2)
## %IncMSE IncNodePurity
## ShelveLoc 46.8288658 361.226223
## Price 38.3186008 360.315314
## Age 18.8177150 192.886612
## CompPrice 18.2257655 156.625792
## Advertising 7.6316599 78.322723
## Income 6.3861965 99.436639
## Education 2.5929683 60.280418
## Population 1.1198324 65.874513
## US 0.1613835 7.243356
## Urban -0.5099159 9.332069
m = 9
forest2 = randomForest(Sales ~ ., data = train, mtry = m, ntree = 500, importance = T)
preds5 = predict(forest2, test)
mse5 = mean((test$Sales - preds5)^2)
mse5
importance3 <- importance(forest2) |>
as.data.frame() |>
arrange(desc(`%IncMSE`))
print(importance3)
## %IncMSE IncNodePurity
## ShelveLoc 49.7269895 377.473482
## Price 44.6936356 370.573559
## Age 20.3770121 185.693076
## CompPrice 19.5581082 159.471501
## Advertising 8.1381939 76.571983
## Income 7.3959069 96.017796
## Education 3.6988296 60.338195
## US 1.8455998 6.926799
## Urban -0.1967108 9.040539
## Population -0.4675407 55.809690
Answer: With m = 7 the test MSE is 2.873842, slightly higher than the 2.706945 obtained with bagging (m = 10). The most important variables are again ShelveLoc, Price, Age, and CompPrice. The parameter m controls how many predictors are considered at each split: a smaller m de-correlates the trees, which can reduce the variance of the averaged prediction, but if m is too small each tree may repeatedly miss the strongest predictors. For this data set, values of m near the full number of predictors give slightly lower test error, and increasing m also concentrates the splits on the dominant variables, which lowers the measured importance of the weaker ones.
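To see the effect of m more systematically, the forest can be refit over a grid of mtry values while recording the test MSE for each; a sketch assuming the same train/test split (results will vary slightly with the random seed):
set.seed(123)
mtry_grid <- 1:10   # Carseats has 10 predictors besides Sales
mse_by_m <- sapply(mtry_grid, function(m) {
  fit <- randomForest(Sales ~ ., data = train, mtry = m, ntree = 500)
  mean((test$Sales - predict(fit, test))^2)
})
plot(mtry_grid, mse_by_m, type = "b",
     xlab = "m (predictors tried at each split)", ylab = "Test MSE")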
f) Now analyze the data using BART, and report your results.
set.seed(123)
x_train <- train[, -which(names(train) == "Sales")]
x_test <- test[, -which(names(test) == "Sales")]
y_train <- train$Sales
bart_model <- bart(x.train = x_train, y.train = y_train, x.test = x_test)
##
## Running BART with numeric y
##
## number of trees: 200
## number of chains: 1, default number of threads 1
## tree thinning rate: 1
## Prior:
## k prior fixed to 2.000000
## degrees of freedom in sigma prior: 3.000000
## quantile in sigma prior: 0.900000
## scale in sigma prior: 0.000882
## power and base for tree prior: 2.000000 0.950000
## use quantiles for rule cut points: false
## proposal probabilities: birth/death 0.50, swap 0.10, change 0.40; birth 0.50
## data:
## number of training observations: 200
## number of test observations: 200
## number of explanatory variables: 12
## init sigma: 0.991574, curr sigma: 0.991574
##
## Cutoff rules c in x<=c vs x>c
## Number of cutoffs: (var: number of possible c):
## (1: 100) (2: 100) (3: 100) (4: 100) (5: 100)
## (6: 100) (7: 100) (8: 100) (9: 100) (10: 100)
## (11: 100) (12: 100)
## Running mcmc loop:
## iteration: 100 (of 1000)
## iteration: 200 (of 1000)
## iteration: 300 (of 1000)
## iteration: 400 (of 1000)
## iteration: 500 (of 1000)
## iteration: 600 (of 1000)
## iteration: 700 (of 1000)
## iteration: 800 (of 1000)
## iteration: 900 (of 1000)
## iteration: 1000 (of 1000)
## total seconds in loop: 0.848662
##
## Tree sizes, last iteration:
## [1] 2 3 3 2 3 2 2 2 3 3 2 2 2 3 3 2 2 2
## 2 2 2 2 2 2 3 3 2 3 3 3 2 3 2 2 3 2 4 5
## 2 3 1 3 3 2 2 2 2 3 2 2 2 3 2 2 3 2 3 2
## 2 4 1 4 2 2 2 2 2 1 2 3 3 3 2 3 1 3 3 3
## 2 2 3 3 2 3 2 4 3 2 2 3 2 1 2 2 2 2 3 2
## 4 3 2 2 2 3 2 2 3 2 2 2 2 1 2 2 2 2 3 2
## 2 2 3 3 3 3 2 2 3 1 3 3 2 3 1 2 3 2 2 2
## 4 2 2 2 3 2 2 2 4 4 2 2 6 3 3 3 2 2 2 2
## 2 2 3 1 2 2 2 3 3 3 2 3 2 2 2 2 2 2 2 2
## 2 2 2 2 4 1 2 2 2 4 3 2 2 2 2 3 2 2 3 2
## 3 2
##
## Variable Usage, last iteration (var:count):
## (1: 27) (2: 27) (3: 23) (4: 16) (5: 29)
## (6: 24) (7: 31) (8: 17) (9: 18) (10: 23)
## (11: 20) (12: 22)
## DONE BART
pred_bart <- bart_model$yhat.test.mean
mse_bart <- mean((pred_bart - test$Sales)^2)
mse_bart
## [1] 1.56543
Answer: The BART model, run with 200 trees and the 1,000 MCMC iterations shown in the log above, gives a test MSE of 1.56543, the lowest of all the approaches tried on this data set.
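The fitted dbarts object also records how often each predictor is used as a splitting variable at each MCMC iteration (the varcount component); averaging those counts gives a rough analogue of the variable importance reported by the forests. A minimal sketch using bart_model from above:
# average number of splitting rules per predictor across the kept MCMC draws
usage <- sort(colMeans(bart_model$varcount), decreasing = TRUE)
usage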
Chapter 08 Problem 9: This problem involves the OJ data set which is
part of the ISLR2 package.
attach(OJ)
set.seed(1)
a) Create a training set containing a random sample of 800
observations, and a test set containing the remaining observations.
sample = sample(nrow(OJ), 800)
train = OJ[sample,]
test = OJ[-sample,]
b) Fit a tree to the training data, with Purchase as the response
and the other variables as predictors. Use the summary() function to
produce summary statistics about the tree, and describe the results
obtained. What is the training error rate? How many terminal nodes does
the tree have?
tree = tree(Purchase ~., data = train)
summary(tree)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = train)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff" "SpecialCH" "ListPriceDiff"
## [5] "PctDiscMM"
## Number of terminal nodes: 9
## Residual mean deviance: 0.7432 = 587.8 / 791
## Misclassification error rate: 0.1588 = 127 / 800
Answer: The tree used 5 of the variables: LoyalCH, PriceDiff, SpecialCH, ListPriceDiff, and PctDiscMM. It obtained a training error rate of 0.1588 and has 9 terminal nodes.
c) Type in the name of the tree object in order to get a detailed
text output. Pick one of the terminal nodes, and interpret the
information displayed.
tree
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1073.00 CH ( 0.60625 0.39375 )
## 2) LoyalCH < 0.5036 365 441.60 MM ( 0.29315 0.70685 )
## 4) LoyalCH < 0.280875 177 140.50 MM ( 0.13559 0.86441 )
## 8) LoyalCH < 0.0356415 59 10.14 MM ( 0.01695 0.98305 ) *
## 9) LoyalCH > 0.0356415 118 116.40 MM ( 0.19492 0.80508 ) *
## 5) LoyalCH > 0.280875 188 258.00 MM ( 0.44149 0.55851 )
## 10) PriceDiff < 0.05 79 84.79 MM ( 0.22785 0.77215 )
## 20) SpecialCH < 0.5 64 51.98 MM ( 0.14062 0.85938 ) *
## 21) SpecialCH > 0.5 15 20.19 CH ( 0.60000 0.40000 ) *
## 11) PriceDiff > 0.05 109 147.00 CH ( 0.59633 0.40367 ) *
## 3) LoyalCH > 0.5036 435 337.90 CH ( 0.86897 0.13103 )
## 6) LoyalCH < 0.764572 174 201.00 CH ( 0.73563 0.26437 )
## 12) ListPriceDiff < 0.235 72 99.81 MM ( 0.50000 0.50000 )
## 24) PctDiscMM < 0.196196 55 73.14 CH ( 0.61818 0.38182 ) *
## 25) PctDiscMM > 0.196196 17 12.32 MM ( 0.11765 0.88235 ) *
## 13) ListPriceDiff > 0.235 102 65.43 CH ( 0.90196 0.09804 ) *
## 7) LoyalCH > 0.764572 261 91.20 CH ( 0.95785 0.04215 ) *
Answer: Terminal node 7) contains the 261 training observations with LoyalCH > 0.764572. The deviance for the points in this node is 91.20, and the * marks it as a terminal node. The predicted class at this node is Purchase = CH: about 95.8% of the observations in the node bought CH and about 4.2% bought MM.
d) Create a plot of the tree, and interpret the results.
plot(tree)
text(tree)

Answer: The most important variable is LoyalCH, which defines the first three splits. If LoyalCH is less than 0.280875, the tree always predicts MM. For LoyalCH between 0.280875 and 0.5036, the tree predicts CH when PriceDiff is greater than 0.05, or when PriceDiff is below 0.05 but SpecialCH is greater than 0.5; otherwise it predicts MM. For LoyalCH greater than 0.5036 the tree generally predicts CH, except when LoyalCH is below 0.764572, ListPriceDiff is less than 0.235, and PctDiscMM is greater than 0.196196, in which case it predicts MM.
e) Predict the response on the test data, and produce a confusion
matrix comparing the test labels to the predicted test labels. What is
the test error rate?
preds = predict(tree, test, type = "class")
confusionMatrix(test$Purchase, preds)
## Confusion Matrix and Statistics
##
## Reference
## Prediction CH MM
## CH 160 8
## MM 38 64
##
## Accuracy : 0.8296
## 95% CI : (0.7794, 0.8725)
## No Information Rate : 0.7333
## P-Value [Acc > NIR] : 0.0001259
##
## Kappa : 0.6154
##
## Mcnemar's Test P-Value : 1.904e-05
##
## Sensitivity : 0.8081
## Specificity : 0.8889
## Pos Pred Value : 0.9524
## Neg Pred Value : 0.6275
## Prevalence : 0.7333
## Detection Rate : 0.5926
## Detection Prevalence : 0.6222
## Balanced Accuracy : 0.8485
##
## 'Positive' Class : CH
##
Answer: The test error rate is 1 − 0.8296 = 0.1704 (46 of the 270 test observations are misclassified).
f) Apply the cv.tree() function to the training set in order to
determine the optimal tree size.
cv= cv.tree(tree, FUN = prune.tree)
g) Produce a plot with tree size on the x-axis and cross-validated
classification error rate on the y-axis.
plot(cv$size, cv$dev, type = "b")

h) Which tree size corresponds to the lowest cross-validated
classification error rate?
Answer: A tree size of 9, i.e. the full unpruned tree, has the lowest cross-validated error.
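Since the question asks about the classification error rate specifically, cv.tree() can also be run with FUN = prune.misclass, in which case cv$dev holds the cross-validated misclassification counts; a short sketch for locating the best size (the selected size can vary with the random folds):
set.seed(1)
cv_misclass <- cv.tree(tree, FUN = prune.misclass)
cv_misclass$size[which.min(cv_misclass$dev)]   # size with the lowest CV misclassification count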
i) Produce a pruned tree corresponding to the optimal tree size
obtained using cross-validation. If cross-validation does not lead to
selection of a pruned tree, then create a pruned tree with five terminal
nodes.
pruned = prune.tree(tree, best = 5)
summary(pruned)
##
## Classification tree:
## snip.tree(tree = tree, nodes = c(4L, 12L, 5L))
## Variables actually used in tree construction:
## [1] "LoyalCH" "ListPriceDiff"
## Number of terminal nodes: 5
## Residual mean deviance: 0.8239 = 655 / 795
## Misclassification error rate: 0.205 = 164 / 800
preds2 = predict(pruned, test, type = "class")
confusionMatrix(test$Purchase, preds2)
## Confusion Matrix and Statistics
##
## Reference
## Prediction CH MM
## CH 133 35
## MM 17 85
##
## Accuracy : 0.8074
## 95% CI : (0.7552, 0.8527)
## No Information Rate : 0.5556
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.6041
##
## Mcnemar's Test P-Value : 0.0184
##
## Sensitivity : 0.8867
## Specificity : 0.7083
## Pos Pred Value : 0.7917
## Neg Pred Value : 0.8333
## Prevalence : 0.5556
## Detection Rate : 0.4926
## Detection Prevalence : 0.6222
## Balanced Accuracy : 0.7975
##
## 'Positive' Class : CH
##
j) Compare the training error rates between the pruned and unpruned
trees. Which is higher?
Answer: The training error rate for the pruned tree is 0.205, while the training error rate for the unpruned tree is 0.1588, so the pruned tree's training error rate is higher.
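These training error rates can also be recomputed directly from each tree's predictions on the training set (using the tree and pruned objects from above):
# training misclassification rates: unpruned vs. pruned
c(unpruned = mean(predict(tree, train, type = "class") != train$Purchase),
  pruned   = mean(predict(pruned, train, type = "class") != train$Purchase))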
k) Compare the test error rates between the pruned and unpruned
trees. Which is higher?
Answer: The test error rate for the pruned tree is 1 − 0.8074 = 0.1926, while the test error rate for the unpruned tree is 0.1704, so the pruned tree's test error rate is also higher.
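And the corresponding test error rates, computed from the stored test-set predictions (preds from part (e) and preds2 from part (i)):
# test misclassification rates: unpruned vs. pruned
c(unpruned = mean(preds  != test$Purchase),
  pruned   = mean(preds2 != test$Purchase))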