---
title: "STA 6543 Assignment 7"
author: "Allyssa Weinbrecht"
date: "2025-05-02"
output:
  html_notebook:
    toc: true
    toc_float: true
  html_document:
    toc: true
    df_print: paged
---

```{r echo=FALSE, warning=FALSE, include=FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  fig.align="center",
  fig.pos="b",
  strip.white = TRUE
)
```

**1. (Exercise 8.4.3) Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of $\hat{p}_{m1}$. The $x$-axis should display $\hat{p}_{m1}$, ranging from 0 to 1, and the $y$-axis should display the value of the Gini index, classification error, and entropy.**
*Hint: In a setting with two classes, $\hat{p}_{m1} = 1 - \hat{p}_{m2}$. You could make this plot by hand, but it will be much easier to make in `R`.*

``` {r Prob1}
# define a sequence of values, p, between 0 and 1 in increments of 0.001 (1,001 points)
p <- seq(0, 1, 0.001)

# define our metrics (entropy uses the natural log, so it peaks at log(2) ~ 0.693
# and is NaN at p = 0 and p = 1, points that lines() silently skips)
g.index <- 2*p*(1-p)
entr <- -(p*log(p) + (1-p)*log(1-p))
c.e <- 1 - pmax(p, 1-p)

plot(x = NA, y = NA, xlim = c(0,1), ylim = c(0,1), xlab = expression(hat(p)[m1]), ylab = 'value of metric')

lines(p, g.index, col = 'lightpink')
lines(p, entr, col='firebrick4')
lines(p, c.e, col='mediumpurple')
legend(x = 'topleft', legend=c('gini index','entropy','classification error'),
       col=c('lightpink','firebrick4','mediumpurple'),lty=1)

```
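
Because the entropy above uses the natural log, it peaks at $\log(2) \approx 0.693$ rather than 1. Purely as an optional tweak (not required by the exercise), it can be rescaled to base 2 so its maximum is also 1; a minimal sketch, reusing `p` and `entr` from the chunk above:

```{r Prob1s}
# base-2 entropy: divide the natural-log entropy by log(2) so the maximum is 1
entr2 <- entr / log(2)
plot(p, entr2, type = 'l', col = 'firebrick4',
     xlab = expression(hat(p)[m1]), ylab = 'scaled entropy')
```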


**2. (Exercise 8.4.8) In the lab, a classification tree was applied to the `Carseats` data set after converting `Sales` into a qualitative response variable. Now we will seek to predict `Sales` using regression trees and related approaches, treating the response as a quantitative variable.**

(a) Split the data set into a training set and a test set.

``` {r Prob2a, message = FALSE}
library(ISLR2)
data("Carseats")

set.seed(848)
library(caret)
index.carseats <- createDataPartition(Carseats$Sales, p = 0.7, list = FALSE)
train.carseats <- Carseats[index.carseats,]
test.carseats <- Carseats[-index.carseats,]
```
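
For reference, the same 70/30 split could be done in base R without `caret`; a sketch, with hypothetical names, under the assumption that stratification on `Sales` is not needed (`createDataPartition()` stratifies on the response, so the two partitions will differ slightly):

```{r Prob2as}
# simple (non-stratified) random 70/30 split
set.seed(848)
n <- nrow(Carseats)
idx <- sample(seq_len(n), size = round(0.7 * n))
train.alt <- Carseats[idx, ]
test.alt  <- Carseats[-idx, ]
```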

(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?

``` {r Prob2b}
library(rpart)
library(rpart.plot)
car.tree <- rpart(Sales ~., data = train.carseats)
summary(car.tree)
rpart.plot(car.tree)

car.preds <- predict(car.tree, test.carseats)
MSE <- mean((car.preds - test.carseats$Sales)^2)
MSE
```
The most important variable is `ShelveLoc` with an importance value of 36, followed by `Price` with a value of 23, while the least important variable is `Education` with a value of 1. The plot also shows that 6\% of the observations in the sample have a `ShelveLoc` class of `Good` AND a `Price` $< \$100.00$; for these, the predicted `Sales` is 12, i.e., 12,000 units, since `Sales` is recorded in thousands.

The test MSE obtained is 5.05543.
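
Since `Sales` is recorded in thousands of units, the square root of this MSE is easier to read as a typical prediction error; a quick check:

```{r Prob2bs}
# RMSE in Sales units (thousands): sqrt(5.05543) is about 2.25, i.e., roughly 2,250 units
sqrt(MSE)
```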

(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?

```{r Prob2c}
plotcp(car.tree)

#get min cv error and corresponding cp & tree size
optimal.cp.car <- car.tree$cptable[which.min(car.tree$cptable[,"xerror"]), "CP"]
car.tree$cptable[which.min(car.tree$cptable[,"xerror"]), "nsplit"] + 1

#prune and test
car.prune <- prune(car.tree, cp = optimal.cp.car)
car.preds.prune <- predict(car.prune, test.carseats)
MSE.prune <- mean((car.preds.prune - test.carseats$Sales)^2)
MSE.prune
```
The test MSE of the pruned tree does not change from the un-pruned tree. This is to be expected: `rpart()` already performs internal cross-validation when building the `cptable`, and here the CP with the lowest cross-validated error corresponds to the full tree, so pruning at that CP removes nothing. Had the minimum cross-validated error occurred at a smaller tree, we would expect the pruned tree's test MSE to be slightly lower.
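
A common alternative worth noting (an aside, not required here) is the 1-SE rule, which picks the smallest tree whose cross-validated error is within one standard error of the minimum and so tends to yield a more parsimonious tree; a sketch using the `cptable` already computed above:

```{r Prob2cs}
# 1-SE rule: smallest tree with xerror within one standard error of the minimum
cp.tab <- car.tree$cptable
best <- which.min(cp.tab[, "xerror"])
thresh <- cp.tab[best, "xerror"] + cp.tab[best, "xstd"]
cp.1se <- cp.tab[which(cp.tab[, "xerror"] <= thresh)[1], "CP"]
car.prune.1se <- prune(car.tree, cp = cp.1se)
```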

(d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the `importance()` function to determine which variables are most important.

```{r Prob2d}
library(ipred)
set.seed(8482)
bag.car <- bagging(Sales ~., data = train.carseats, coob = TRUE)
bag.car

bag.preds <- predict(bag.car, test.carseats)
MSE.bag <- mean((bag.preds - test.carseats$Sales)^2)
MSE.bag

varImp(bag.car)
```
The test MSE obtained is 2.935237, which is lower than that obtained with the single decision tree. Additionally, the most important variables in this model are `Age` and `Price`.
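
The `importance()` function the exercise refers to lives in the `randomForest` package rather than `ipred`. Because bagging is a random forest with $m = p$, an equivalent fit that exposes `importance()` directly looks like the sketch below (the exact numbers will differ from the `ipred` fit):

```{r Prob2ds}
library(randomForest)
set.seed(8482)
# bagging = random forest with mtry equal to the number of predictors (10 here)
bag.rf <- randomForest(Sales ~ ., data = train.carseats, mtry = 10, importance = TRUE)
importance(bag.rf)
```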

(e) Use random forests to analyze this data. What test MSE do you obtain? Use the `importance()` function to determine which variables are most important. Describe the effect of $m$, the number of variables considered at each split, on the error rate obtained.

```{r Prob2e}
library(randomForest)
set.seed(8483)
car.rf <- train(Sales ~., data = train.carseats, method = 'rf', 
                trControl = trainControl("cv", number = 10), importance = TRUE)
car.rf

rf.preds <- predict(car.rf, test.carseats)
MSE.rf <- mean((rf.preds - test.carseats$Sales)^2)
MSE.rf

varImp(car.rf)
plot(varImp(car.rf))
```
The test MSE obtained is 2.442664, the lowest so far. The most important variables in this model are the `ShelveLocGood` dummy variable (i.e., `ShelveLoc = Good`) and `Price`; the plot above shows the full ranking.

We note that as $m$, the number of variables considered at each split, increases, both the RMSE and the MAE decrease while the $R^2$ value increases. Increasing $m$ lowers bias, since each split can choose among more of the strong predictors, at the cost of more highly correlated trees, which limits how much variance the averaging removes; for this data set, the bias reduction evidently outweighs the added correlation over the range of $m$ that caret tried.
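
To see the effect of $m$ on the test error directly (rather than through caret's CV metrics), one could sweep `mtry` over its full range; a sketch reusing the seed from above:

```{r Prob2es}
# test MSE as a function of m, the number of variables tried at each split
mse.by.m <- sapply(1:10, function(m) {
  set.seed(8483)
  fit <- randomForest(Sales ~ ., data = train.carseats, mtry = m)
  mean((predict(fit, test.carseats) - test.carseats$Sales)^2)
})
plot(1:10, mse.by.m, type = 'b', xlab = 'm (mtry)', ylab = 'test MSE')
```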

(f) Now analyze the data using BART, and report your results.

```{r Prob2f, message=FALSE}
library(BART)
x.tr <- train.carseats[,2:11]
y.tr <- train.carseats[,1]
x.te <- test.carseats[,2:11]
y.te <- test.carseats[,1]
set.seed(8484)
car.bart <- gbart(x.tr, y.tr, x.test = x.te)

bart.preds <- car.bart$yhat.test.mean
MSE.bart <- mean((bart.preds - y.te)^2)
MSE.bart
```
The test MSE obtained using BART is 1.547553. This is the lowest test MSE obtained using any of the above methods.
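
As a rough variable-importance measure for BART, the ISLR2 lab inspects how often each predictor appears across the sampled trees:

```{r Prob2fs}
# mean number of times each variable is used per posterior draw, most-used first
ord <- order(car.bart$varcount.mean, decreasing = TRUE)
car.bart$varcount.mean[ord]
```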

**3. (Exercise 8.4.9) This problem involves the `OJ` data set which is part of the `ISLR2` package.**

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

```{r Prob9a, message = FALSE}
library(ISLR2)
data(OJ)

set.seed(849)
index.p9 <- sample(1:nrow(OJ), size = 800, replace = FALSE)
train.oj <- OJ[index.p9,]
test.oj <- OJ[-index.p9,]
```

(b) Fit a tree to the training data, with `Purchase` as the response and the other variables as predictors. Use the `summary()` function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

```{r Prob9b}
#get tree model & overall summary
library(rpart)
# fit the model; note: cp = 0.007 (below rpart's 0.01 default) allows a slightly larger tree
oj.tree <- rpart(Purchase ~ ., data = train.oj, method = "class", cp = 0.007)
summary(oj.tree)

#compute training error rate
oj.train.preds <- predict(oj.tree, type = 'class')
table(oj.train.preds, train.oj$Purchase)
train.error <- mean(oj.train.preds != train.oj$Purchase)
train.error
```

From the model summary we have that `LoyalCH` ranks highest in variable importance with a value of 58, while `ListPriceDiff` is next with a value of 6, and `STORE`, `SpecialMM`, and `SalePriceCH` are the least important, each with a value of 1. The training error rate is 0.151 and the tree has 10 terminal nodes.
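
The terminal-node count can also be read directly off the `rpart` object as a quick check:

```{r Prob9bs}
# leaves are the rows of the frame whose splitting variable is "<leaf>"
sum(oj.tree$frame$var == "<leaf>")
```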

(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

``` {r Prob9c}
oj.tree
```

Looking at terminal node 10, the split criterion leading to it is `ListPriceDiff` $\geq 0.235$: among the observations routed here, the list price of Minute Maid brand orange juice is at least \$0.24 higher than that of Citrus Hill, and the tree predicts that such consumers purchase Citrus Hill. The node contains 122 observations, its predicted `Purchase` class is `CH`, and its loss is 22, meaning that if we assign `CH` to all 122 observations at this node, 22 of them are misclassified. Finally, (0.81967213 0.18032787) gives the observed class proportions at the node: 81.97\% of the training observations reaching node 10 actually purchased `CH`, while the remaining 18.03\% purchased `MM`.

(d) Create a plot of the tree, and interpret the results.

``` {r Prob9d}
library(rpart.plot)
rpart.plot(oj.tree, extra = 108)
```
From this plot we can see that:

* at the root, 62\% of the training observations are `CH` and 38\% are `MM`,
* 37\% of all observations in the sample have `LoyalCH` $\geq 0.71$, for which the model will predict `CH` with probability 0.95,
* 21\% of all observations in the sample have `LoyalCH` $< 0.28$, for which the model will predict `MM` with probability 0.88, 
* 15\% of all observations in the sample have $0.48 \leq$ `LoyalCH` $< 0.71$ AND `ListPriceDiff` $\geq 0.24$, for which the model will predict `CH` with probability 0.82, etc.
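
These paths can be cross-checked with `rpart.plot::rpart.rules()`, which prints one human-readable rule per leaf together with its coverage:

```{r Prob9ds}
# one row per terminal node: predicted class, probability, conditions, and coverage
rpart.rules(oj.tree, cover = TRUE)
```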


(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

```{r Prob9e}
preds.oj <- predict(oj.tree, test.oj, type = "class")
table(preds.oj, test.oj$Purchase)
error.oj <- mean(preds.oj != test.oj$Purchase)
cat("The test error rate is:",error.oj,"\n")
```
The test error rate is just slightly higher than the training error rate (0.15125), suggesting the model generalizes well and is not badly overfit.
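
For a richer summary than the raw table, `caret::confusionMatrix()` (caret was already loaded in Problem 2) reports accuracy, sensitivity, and specificity in one call:

```{r Prob9es}
caret::confusionMatrix(preds.oj, test.oj$Purchase)
```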

(f) Apply the `cv.tree()` function to the training set in order to determine the optimal tree size.

``` {r Prob9f}
#First using the 'tree' package
library(tree)
fit.tree <- tree(Purchase ~ ., data = train.oj)
oj.cv <- cv.tree(fit.tree)
oj.cv

# Using the complexity-parameter table from 'rpart' instead of a 'tree' object
printcp(oj.tree)
```
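
Note that `cv.tree()` cross-validates the deviance by default; since parts (g) and (h) ask about the classification error rate, passing `FUN = prune.misclass` (as in the ISLR lab) switches the criterion to the misclassification count:

```{r Prob9fs}
# cross-validate misclassification error instead of deviance
oj.cv.miss <- cv.tree(fit.tree, FUN = prune.misclass)
plot(oj.cv.miss$size, oj.cv.miss$dev, type = 'b',
     xlab = 'tree size', ylab = 'CV misclassifications')
```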


(g) Produce a plot with tree size on the $x$-axis and cross-validated classification error rate on the $y$-axis.

``` {r Prob9g}
#plot using the model fit with 'tree' (dev here is cross-validated deviance)
plot(oj.cv$size, oj.cv$dev, type = 'b', xlab = 'tree size', ylab = 'CV deviance')

#plot using model fit with 'rpart'
plotcp(oj.tree)
```

(h) Which tree size corresponds to the lowest cross-validated classification error rate?

``` {r Prob9h}
#get minimum deviance value from 'tree' and corresponding tree size
oj.cv$size[which.min(oj.cv$dev)]

#get min cv error from 'rpart' and corresponding cp & tree size
optimal.cp.oj <- oj.tree$cptable[which.min(oj.tree$cptable[,"xerror"]), "CP"]
oj.tree$cptable[which.min(oj.tree$cptable[,"xerror"]), "nsplit"] + 1
```
Based on the first plot (from `cv.tree()`), a tree size of 8 terminal nodes corresponds to the lowest cross-validated deviance.

The second plot (from `plotcp()` on the `rpart` fit) shows that a tree size of 5 corresponds to the lowest cross-validated error.

(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

```{r Prob9i}
# We use the rpart model; optimal.cp.oj from (h) corresponds to a 5-leaf tree
oj.prune <- prune(oj.tree, cp = optimal.cp.oj)
rpart.plot(oj.prune)
```
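
Had we stayed with the `tree`-package fit from part (f), the equivalent call would be `prune.misclass()` with `best` set to the desired number of leaves:

```{r Prob9is}
# prune the tree-package fit to five terminal nodes
oj.prune.tree <- prune.misclass(fit.tree, best = 5)
```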

(j) Compare the training error rates between the pruned and un-pruned trees. Which is higher?

``` {r Prob9j}
oj.prune.preds <- predict(oj.prune, type = 'class')
table(oj.prune.preds, train.oj$Purchase)
prune.error <- mean(oj.prune.preds != train.oj$Purchase)
prune.error
```
The training error rate of the pruned tree is slightly higher than that of the un-pruned tree (0.16625 vs. 0.15125).

(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?

``` {r Prob9k}
test.prune.preds <- predict(oj.prune, test.oj, type = 'class')
table(test.prune.preds, test.oj$Purchase)
prune.test.error <- mean(test.prune.preds != test.oj$Purchase)
prune.test.error
```
The test error rate of the pruned tree is slightly higher than that of the un-pruned tree (0.1888889 vs. 0.1814815).
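
Collecting the four error rates reported above into one table makes the comparison explicit:

```{r Prob9ks}
# training and test error rates, un-pruned vs. pruned
data.frame(unpruned = c(train = train.error, test = error.oj),
           pruned   = c(prune.error, prune.test.error))
```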