Question 3.

Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of \(\hat{p}_{m1}\). The x-axis should display \(\hat{p}_{m1}\), ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy. Hint: In a setting with two classes, \(\hat{p}_{m1} = 1 - \hat{p}_{m2}\). You could make this plot by hand, but it will be much easier to make in R.
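
With two classes, write \(p = \hat{p}_{m1}\), so that \(\hat{p}_{m2} = 1 - p\). The three node-impurity measures then reduce to:

Gini index: \(G = p(1-p) + (1-p)p = 2p(1-p)\)
Classification error: \(E = 1 - \max(p, 1-p)\)
Entropy: \(D = -p\log p - (1-p)\log(1-p)\)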

prob <- (0:100) / 100                                 # grid of p-hat values from 0 to 1
Gini_index <- prob * (1 - prob) + (1 - prob) * prob   # 2p(1-p)
Classify_error <- 1 - pmax(prob, 1 - prob)
# Entropy is the *negative* sum of p*log(p); note that 0*log(0) evaluates to NaN
# at the endpoints, and matplot() simply omits those points.
Cross_entropy <- -(prob * log(prob) + (1 - prob) * log(1 - prob))
matplot(prob, cbind(Gini_index, Classify_error, Cross_entropy),
        col = c("purple", "maroon", "darkgreen"),
        xlab = "Probability", ylab = "Impurity measure",
        main = "Gini index, classification error, and entropy vs probability",
        cex = 1, pch = c(20, 21, 22))
legend("bottom", cex = 0.75, inset = 0.05, legend = c("Gini", "C_error", "Entropy"),
       pch = c(20, 21, 22), col = c("purple", "maroon", "darkgreen"), horiz = TRUE)
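All three measures are maximized at \(p = 0.5\) and vanish as \(p\) approaches 0 or 1. The entropy (in nats) peaks at \(\log 2 \approx 0.693\), while the Gini index and classification error both peak at 0.5, so the entropy curve lies above the other two.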

Question 8.

In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.

  (a) Split the data set into a training set and a test set.
library(ISLR)
library(tree)
library(MASS)
library(randomForest)
set.seed(1)
train <- sample(1:nrow(Carseats), nrow(Carseats) / 2)
  (b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?
tree.carseats <- tree(Sales ~ ., data = Carseats, subset = train)
plot(tree.carseats)
text(tree.carseats, pretty = 0)

tree.pred <- predict(tree.carseats, Carseats[-train, ])
mean((Carseats[-train, 'Sales'] - tree.pred) ^ 2)
## [1] 4.922039
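The test MSE of the unpruned regression tree is about 4.92.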
  (c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?
set.seed(21)
cv.carseats <- cv.tree(tree.carseats)
plot(cv.carseats, type = "b")
# Best size = 8
abline(h = min(cv.carseats$dev) + 0.2 * sd(cv.carseats$dev), col = "red", lty = 2)
points(cv.carseats$size[which.min(cv.carseats$dev)], min(cv.carseats$dev), 
       col = "#BC3C29FF", cex = 2, pch = 20)

prune.carseats <- prune.tree(tree.carseats, best = 8)
plot(prune.carseats)
text(prune.carseats, pretty = 0)

tree.pred <- predict(prune.carseats, Carseats[-train, ])
mean((tree.pred - Carseats[-train, 'Sales'])^2)
## [1] 5.113254

In this case, pruning the tree increases the test MSE (from 4.92 to 5.11).

  (d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.
set.seed(1)
# mtry = 10 (all predictors considered at each split) makes this bagging
bag.carseats <- randomForest(Sales ~ ., data = Carseats, subset = train,
                             mtry = ncol(Carseats) - 1, importance = TRUE)
yhat.bag <- predict(bag.carseats, newdata = Carseats[-train, ])
mean((yhat.bag - Carseats[-train, 'Sales']) ^ 2)
## [1] 2.605253
importance(bag.carseats)
##                %IncMSE IncNodePurity
## CompPrice   24.8888481    170.182937
## Income       4.7121131     91.264880
## Advertising 12.7692401     97.164338
## Population  -1.8074075     58.244596
## Price       56.3326252    502.903407
## ShelveLoc   48.8886689    380.032715
## Age         17.7275460    157.846774
## Education    0.5962186     44.598731
## Urban        0.1728373      9.822082
## US           4.2172102     18.073863

The two most important predictors are Price and ShelveLoc; the test MSE is about 2.61.
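
As an optional visual check, the randomForest package's varImpPlot() displays both importance measures side by side:

varImpPlot(bag.carseats)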

  (e) Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained.
# With mtry unspecified, randomForest uses the regression default m = p/3
rf.carseats <- randomForest(Sales ~ ., data = Carseats, subset = train, importance = TRUE)
yhat.rf <- predict(rf.carseats, Carseats[-train, ])
mean((yhat.rf - Carseats[-train, 'Sales']) ^ 2)
## [1] 3.054306
importance(rf.carseats)
##                %IncMSE IncNodePurity
## CompPrice   12.9540442     157.53376
## Income       2.1683293     129.18612
## Advertising  8.7289900     111.38250
## Population  -2.5290493     102.78681
## Price       33.9482500     393.61313
## ShelveLoc   34.1358807     289.28756
## Age         12.0804387     172.03776
## Education    0.2213600      72.02479
## Urban        0.9793293      14.73763
## US           4.1072742      33.91622

The default m for a regression problem is \(p/3\), where \(p\) is the number of predictors. In this setting, the test MSE is 3.05: higher than with bagging, but lower than with the full and pruned regression trees. Decreasing m decorrelates the trees at the cost of making each individual tree weaker; here the decorrelation does not pay off, and bagging (m = p) achieves the lower test error.

The two most important predictors are the same as with bagging: Price and ShelveLoc.
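
To see the effect of m directly, one can refit the forest for each value of mtry and plot the resulting test MSE (a minimal sketch; the exact curve will vary with the seed):

# Test MSE for each candidate m from 1 (maximal decorrelation) to 10 (bagging)
mse.by.m <- sapply(1:10, function(m) {
  set.seed(1)
  fit <- randomForest(Sales ~ ., data = Carseats, subset = train, mtry = m)
  mean((predict(fit, Carseats[-train, ]) - Carseats[-train, 'Sales'])^2)
})
plot(1:10, mse.by.m, type = "b", xlab = "m (mtry)", ylab = "Test MSE")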

Question 9.

This problem involves the OJ data set, which is part of the ISLR package.

  (a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

library(ISLR)
library(tree)
set.seed(1)
train <- sample(1 : nrow(OJ), 800)
oj.train <- OJ[train, ]
oj.test <- OJ[-train, ]
  (b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?
oj.tree <- tree(Purchase ~ ., OJ, subset = train)
summary(oj.tree)
## 
## Classification tree:
## tree(formula = Purchase ~ ., data = OJ, subset = train)
## Variables actually used in tree construction:
## [1] "LoyalCH"       "PriceDiff"     "SpecialCH"     "ListPriceDiff"
## [5] "PctDiscMM"    
## Number of terminal nodes:  9 
## Residual mean deviance:  0.7432 = 587.8 / 791 
## Misclassification error rate: 0.1588 = 127 / 800

The tree uses 5 variables: LoyalCH, PriceDiff, SpecialCH, ListPriceDiff, and PctDiscMM. The training error rate is 0.1588 (127 of 800 training observations misclassified), and the tree has 9 terminal nodes.

  (c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.
oj.tree
## node), split, n, deviance, yval, (yprob)
##       * denotes terminal node
## 
##  1) root 800 1073.00 CH ( 0.60625 0.39375 )  
##    2) LoyalCH < 0.5036 365  441.60 MM ( 0.29315 0.70685 )  
##      4) LoyalCH < 0.280875 177  140.50 MM ( 0.13559 0.86441 )  
##        8) LoyalCH < 0.0356415 59   10.14 MM ( 0.01695 0.98305 ) *
##        9) LoyalCH > 0.0356415 118  116.40 MM ( 0.19492 0.80508 ) *
##      5) LoyalCH > 0.280875 188  258.00 MM ( 0.44149 0.55851 )  
##       10) PriceDiff < 0.05 79   84.79 MM ( 0.22785 0.77215 )  
##         20) SpecialCH < 0.5 64   51.98 MM ( 0.14062 0.85938 ) *
##         21) SpecialCH > 0.5 15   20.19 CH ( 0.60000 0.40000 ) *
##       11) PriceDiff > 0.05 109  147.00 CH ( 0.59633 0.40367 ) *
##    3) LoyalCH > 0.5036 435  337.90 CH ( 0.86897 0.13103 )  
##      6) LoyalCH < 0.764572 174  201.00 CH ( 0.73563 0.26437 )  
##       12) ListPriceDiff < 0.235 72   99.81 MM ( 0.50000 0.50000 )  
##         24) PctDiscMM < 0.196197 55   73.14 CH ( 0.61818 0.38182 ) *
##         25) PctDiscMM > 0.196197 17   12.32 MM ( 0.11765 0.88235 ) *
##       13) ListPriceDiff > 0.235 102   65.43 CH ( 0.90196 0.09804 ) *
##      7) LoyalCH > 0.764572 261   91.20 CH ( 0.95785 0.04215 ) *

Node 7 is a terminal node (marked with *) reached when LoyalCH > 0.5036 and then LoyalCH > 0.764572. It contains 261 training observations with a total deviance of 91.20, and its predicted value of Purchase is CH because about 95.8% of the observations in this node have Purchase = CH.

  (d) Create a plot of the tree, and interpret the results.
plot(oj.tree)
text(oj.tree, pretty = 0)

LoyalCH is the most important variable: the top three splits all use it. When LoyalCH < 0.5036 the tree mostly predicts MM, although two terminal nodes in that branch (SpecialCH > 0.5, and PriceDiff > 0.05) predict CH; the further splits below a node exist because they separate regions with different class proportions even when the predicted label does not change. When LoyalCH > 0.764572 the prediction is always CH, and for intermediate loyalty the prediction depends on ListPriceDiff and PctDiscMM.

  (e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?
oj.pred <- predict(oj.tree, oj.test, type = 'class')
table(oj.pred, oj.test$Purchase)
##        
## oj.pred  CH  MM
##      CH 160  38
##      MM   8  64
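
From the confusion matrix, the test error rate is (38 + 8) / 270 \(\approx\) 0.1704, or equivalently:

mean(oj.pred != oj.test$Purchase)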
  (f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.
set.seed(1)
oj.cv <- cv.tree(oj.tree, FUN = prune.misclass)
oj.cv
## $size
## [1] 9 8 7 4 2 1
## 
## $dev
## [1] 145 145 146 146 167 315
## 
## $k
## [1]       -Inf   0.000000   3.000000   4.333333  10.500000 151.000000
## 
## $method
## [1] "misclass"
## 
## attr(,"class")
## [1] "prune"         "tree.sequence"
  (g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.
plot(oj.cv$size, oj.cv$dev, xlab = "Size of the Tree", ylab = "Cross-validated error rate", type = "b")
points(oj.cv$size[which.min(oj.cv$dev)], min(oj.cv$dev), col = "red", cex = 2, pch = 20)

Tree sizes 9 and 8 are tied for the lowest cross-validated classification error rate (dev = 145); which.min() returns the first, size 9.

  (h) Which tree size corresponds to the lowest cross-validated classification error rate?
s <- oj.cv$size[which.min(oj.cv$dev)]
s
## [1] 9
  (i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
Cross-validation selects the full tree (size 9), so, as instructed, we prune to five terminal nodes:
prune.OJ <- prune.tree(oj.tree, best = 5)
plot(prune.OJ)
text(prune.OJ, pretty = 0)

  (j) Compare the training error rates between the pruned and unpruned trees. Which is higher?
summary(prune.OJ)

The unpruned tree's training error rate is 0.1588 (from part (b)); summary(prune.OJ) reports the misclassification error rate of the five-node pruned tree, which is higher, since pruning a subtree can only increase the training error.

  (k) Compare the test error rates between the pruned and unpruned trees. Which is higher?
oj.pred <- predict(oj.tree, oj.test, type = 'class')
mean(oj.pred != oj.test$Purchase)

0.1703704

oj.pruned.pred <- predict(prune.OJ, oj.test, type = 'class')
mean(oj.pruned.pred != oj.test$Purchase)

0.1851852

In this case, the test error rate of the pruned tree is also higher.