Load Libraries and attach data sets

library(ISLR2)
library(caret)
library(tree)
library(randomForest)
library(pander)
library(BART)
library(tidyverse)
attach(Carseats)
attach(OJ)





Problem 3.

Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of \(\hat{p}_{m1}\). The x-axis should display \(\hat{p}_{m1}\), ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.
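In this two-class setting, writing \(\hat{p}_{m1}\) for the proportion of class-1 observations in region \(m\) (so the class-2 proportion is \(1 - \hat{p}_{m1}\)), the three quantities plotted below take the forms computed in the code:

\[
G = 2\,\hat{p}_{m1}(1 - \hat{p}_{m1}), \qquad
E = 1 - \max(\hat{p}_{m1},\, 1 - \hat{p}_{m1}), \qquad
D = -\hat{p}_{m1}\log \hat{p}_{m1} - (1 - \hat{p}_{m1})\log(1 - \hat{p}_{m1}).
\]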


Plot quantities as a function of \(\hat{p}_{m1}\)

p <- seq(0, 1, 0.001)
gini <- 2 * p * (1 - p)                       # Gini index for two classes
class_error <- 1 - pmax(p, 1 - p)             # classification error
ent <- -(p * log(p) + (1 - p) * log(1 - p))   # entropy (NaN at p = 0 and p = 1, which plot() simply omits)
plot(p, ent, type = 'l', col = 'Gold', lwd = 4, xlab = 'Probability', ylab = 'Values', col.lab = 'Dark Blue', col.axis = 'Dark Blue')
box(col = 'Dark Blue')
lines(p, class_error, col = 'Red', lwd = 4)
lines(p, gini, col = 'Black', lwd = 4)
title(main = 'Value of Gini Index, Classification Error, and Entropy', col.main = 'Dark Blue')
legend(.33, .2, legend = c('Entropy', 'Gini Index', 'Classification Error'),
       col = c('Gold', 'Black', 'Red'), lwd = 3, cex = 1, bty = 'n')







Problem 9.

This problem involves the OJ data set, which is part of the ISLR2 package.


(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.


Split data into train & test

set.seed(22)

in_train <- sample(nrow(OJ), 800)
train <- OJ[in_train,]
test <- OJ[-in_train, ]
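A quick sanity check that the split has the expected sizes (OJ has 1,070 rows, so the test set should have 270):

dim(train)   # 800 rows
dim(test)    # 270 rows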


(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

The tree had a training misclassification error rate of 0.1638, used three predictors (LoyalCH, ListPriceDiff, and PriceDiff), and had 6 terminal nodes.


Fit the model and print the tree summary

train_tree <- tree(Purchase~., data = train)

summary(train_tree)
## 
## Classification tree:
## tree(formula = Purchase ~ ., data = train)
## Variables actually used in tree construction:
## [1] "LoyalCH"       "ListPriceDiff" "PriceDiff"    
## Number of terminal nodes:  6 
## Residual mean deviance:  0.7857 = 623.8 / 794 
## Misclassification error rate: 0.1638 = 131 / 800
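As a quick cross-check on the reported rate, the training error can also be computed directly from the fitted tree's predictions (train_pred is just a temporary name used here):

# Compare predicted classes on the training set to the observed Purchase labels
train_pred <- predict(train_tree, newdata = train, type = 'class')
mean(train_pred != train$Purchase)   # should match the 0.1638 reported by summary()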


(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

Node 7 is a terminal node reached by the split LoyalCH > 0.753545. It contains 261 observations with a deviance of 84.85; about 96% of the observations in this node are CH and 4% are MM, so the predicted class for the node is CH.


Print tree object

train_tree
## node), split, n, deviance, yval, (yprob)
##       * denotes terminal node
## 
##  1) root 800 1073.000 CH ( 0.60625 0.39375 )  
##    2) LoyalCH < 0.48285 294  308.100 MM ( 0.21769 0.78231 )  
##      4) LoyalCH < 0.035047 51    9.844 MM ( 0.01961 0.98039 ) *
##      5) LoyalCH > 0.035047 243  278.100 MM ( 0.25926 0.74074 ) *
##    3) LoyalCH > 0.48285 506  458.100 CH ( 0.83202 0.16798 )  
##      6) LoyalCH < 0.753545 245  301.800 CH ( 0.69388 0.30612 )  
##       12) ListPriceDiff < 0.18 66   89.300 MM ( 0.40909 0.59091 ) *
##       13) ListPriceDiff > 0.18 179  179.700 CH ( 0.79888 0.20112 )  
##         26) PriceDiff < -0.165 8    6.028 MM ( 0.12500 0.87500 ) *
##         27) PriceDiff > -0.165 171  155.700 CH ( 0.83041 0.16959 ) *
##      7) LoyalCH > 0.753545 261   84.850 CH ( 0.96169 0.03831 ) *


(d) Create a plot of the tree, and interpret the results.

The plot shows the splits and their cut points. LoyalCH is the most important predictor, appearing in the top two levels of the tree, with ListPriceDiff and then PriceDiff used further down.


Create a plot of the model

plot(train_tree, col = 'Light Blue')
text(train_tree, pretty = 1, cex = .9, col = 'dark blue')


(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

The tree misclassified 50 of the 270 test observations, for a test error rate of about 18.5% (50/270).


Predict on the test set and create a confusion matrix

oj_pred <-  predict(train_tree, test, type = 'class')
(table(test$Purchase, oj_pred))
##     oj_pred
##       CH  MM
##   CH 132  36
##   MM  14  88
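Equivalently, the test error rate can be computed directly from the predictions rather than by hand from the table:

# Proportion of test observations misclassified by the unpruned tree
mean(oj_pred != test$Purchase)   # (36 + 14) / 270 = 0.185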


(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.


Apply cross-validation to the fitted tree

set.seed(22)

cv_train_tree <- cv.tree(train_tree)
summary(cv_train_tree)
##        Length Class  Mode     
## size   6      -none- numeric  
## dev    6      -none- numeric  
## k      6      -none- numeric  
## method 1      -none- character
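Note that cv.tree() defaults to deviance as the cross-validation criterion (the $method element above). Since parts (g) and (h) refer to the classification error rate, one option, sketched below with an illustrative object name, is to pass FUN = prune.misclass so the cross-validation is done on misclassifications instead; the resulting curve will depend on the random seed.

set.seed(22)
cv_train_misclass <- cv.tree(train_tree, FUN = prune.misclass)
cv_train_misclass$size   # candidate tree sizes
cv_train_misclass$dev    # here 'dev' is the number of CV misclassifications, not deviance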


(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.


Plot cross-validated deviance against tree size

plot(cv_train_tree$size, cv_train_tree$dev, xlab = 'Size', ylab = 'CV Deviance', col.lab = 'Dark Blue', col = 'Dark Blue', col.axis = 'Dark Blue')
box(col = 'Dark Blue')
abline(h = min(cv_train_tree$dev) + .2 * sd(cv_train_tree$dev), col = 'Red')   # reference line just above the minimum deviance


(h) Which tree size corresponds to the lowest cross-validated classification error rate?

A tree size of 6 had the smallest cross-validated deviance, so cross-validation does not select a pruned tree. Following the instructions in part (i), I will therefore prune to five terminal nodes.


Print cross validation object

(cv_train_tree)
## $size
## [1] 6 5 4 3 2 1
## 
## $dev
## [1]  699.9453  725.8748  722.5750  733.4394  791.1645 1074.3597
## 
## $k
## [1]      -Inf  17.97797  20.11905  32.82322  71.43514 306.43464
## 
## $method
## [1] "deviance"
## 
## attr(,"class")
## [1] "prune"         "tree.sequence"


(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.


Prune the tree to five terminal nodes

set.seed(22)

prune_tree <- prune.tree(train_tree, best = 5)
plot(prune_tree, col = 'Light Blue')
text(prune_tree,pretty=1, col = 'Dark Blue', cex = .9)

(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?

The pruned tree had a slightly higher training error rate of 0.1713, compared to the unpruned tree's 0.1638.


Print summary of the pruned tree

summary(prune_tree)
## 
## Classification tree:
## snip.tree(tree = train_tree, nodes = 13L)
## Variables actually used in tree construction:
## [1] "LoyalCH"       "ListPriceDiff"
## Number of terminal nodes:  5 
## Residual mean deviance:  0.8073 = 641.8 / 795 
## Misclassification error rate: 0.1713 = 137 / 800


(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?

The pruned tree misclassified 54 of the 270 test observations, for a test error rate of 20% (54/270), which is higher than the unpruned tree's test error rate of 18.5%.


Create confusion matrix for the pruned tree

prune_pred <-  predict(prune_tree, test, type = 'class')
(table(test$Purchase, prune_pred))
##     prune_pred
##       CH  MM
##   CH 132  36
##   MM  18  84
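As before, the pruned tree's test error rate can be computed directly for comparison:

# Proportion of test observations misclassified by the pruned tree
mean(prune_pred != test$Purchase)   # (36 + 18) / 270 = 0.20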