Exercise 3

Consider the Gini index, classification error, and cross-entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of $\hat{p}_{m1}$. The x-axis should display $\hat{p}_{m1}$, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy. Hint: In a setting with two classes, $\hat{p}_{m1} = 1 - \hat{p}_{m2}$. You could make this plot by hand, but it will be much easier to make in R.
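
For reference, writing p for $\hat{p}_{m1}$ (so that $\hat{p}_{m2} = 1 - p$), the three measures reduce to the expressions used in the code below:

$$G = 2p(1-p), \qquad E = 1 - \max(p,\, 1-p), \qquad D = -p\log p - (1-p)\log(1-p)$$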

p <- seq(0, 1, 0.01)
gini <- 2 * p * (1 - p)
classerror <- 1 - pmax(p, 1 - p)
# cross-entropy is NaN at p = 0 and p = 1; lines() simply skips those points
crossentropy <- -(p * log(p) + (1 - p) * log(1 - p))

plot(NA, NA, xlim = c(0, 1), ylim = c(0, 1), xlab = 'p', ylab = 'f')
lines(p, gini)
lines(p, classerror, col = 'blue')
lines(p, crossentropy, col = 'red')

legend(x = 'top', legend = c('gini', 'classification error', 'cross entropy'),
       col = c('black', 'blue', 'red'), lty = 1, text.width = 0.22)

Exercise 8

In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.

(a) Split the data set into a training set and a test set.

library(ISLR)          # Carseats and OJ data sets
library(tree)          # tree(), cv.tree(), prune.tree()
library(randomForest)  # randomForest(), importance()
library(caret)         # createDataPartition(), confusionMatrix()
library(dplyr)         # %>%, mutate()
library(ggplot2)       # plotting the CV results

attach(Carseats)
str(Carseats)
## 'data.frame':    400 obs. of  11 variables:
##  $ Sales      : num  9.5 11.22 10.06 7.4 4.15 ...
##  $ CompPrice  : num  138 111 113 117 141 124 115 136 132 132 ...
##  $ Income     : num  73 48 35 100 64 113 105 81 110 113 ...
##  $ Advertising: num  11 16 10 4 3 13 0 15 0 0 ...
##  $ Population : num  276 260 269 466 340 501 45 425 108 131 ...
##  $ Price      : num  120 83 80 97 128 72 108 120 124 124 ...
##  $ ShelveLoc  : Factor w/ 3 levels "Bad","Good","Medium": 1 2 3 3 1 1 3 2 3 3 ...
##  $ Age        : num  42 65 59 55 38 78 71 67 76 76 ...
##  $ Education  : num  17 10 12 14 13 16 15 10 10 17 ...
##  $ Urban      : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 2 2 1 1 ...
##  $ US         : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 1 2 1 2 ...
set.seed(1)
train.Index <- createDataPartition(Sales, p=0.8, list = FALSE)
train <- Carseats[train.Index,]
test <- Carseats[-train.Index,]

(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?

tree.fit <- tree(Sales ~ ., data = train)
summary(tree.fit)
## 
## Regression tree:
## tree(formula = Sales ~ ., data = train)
## Variables actually used in tree construction:
## [1] "ShelveLoc"   "Price"       "CompPrice"   "Age"         "Advertising"
## Number of terminal nodes:  18 
## Residual mean deviance:  2.39 = 724.3 / 303 
## Distribution of residuals:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## -4.9380 -0.9822  0.1214  0.0000  0.9908  3.3760
plot(tree.fit)
text(tree.fit, pretty = 0, cex = 0.55)

tree.pred <- predict(tree.fit, newdata = test)
(mse <- mean((test$Sales - tree.pred) ^2))
## [1] 3.947858

The test mean squared error from the above model is 3.95. ShelveLoc and Price appear to be the most important predictors, since they are used for the splits at and near the root of the tree.

(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test error rate?

set.seed(1)
cv_tree_model <- cv.tree(tree.fit, K = 10)

data.frame(n_leaves = cv_tree_model$size,
           CV_RSS = cv_tree_model$dev) %>% 
  mutate(min_CV_RSS = as.numeric(min(CV_RSS) == CV_RSS)) %>%
  ggplot(aes(x = n_leaves, y = CV_RSS)) +
  geom_line(col = "grey55") +
  geom_point(size = 2, aes(col = factor(min_CV_RSS))) +
  scale_x_continuous(breaks = seq(1, 17, 2)) +
  scale_y_continuous(labels = scales::comma_format()) +
  scale_color_manual(values = c("deepskyblue3", "green")) +
  theme(legend.position = "none") +
  labs(title = "Carseats Dataset - Regression Tree",
       subtitle = "Selecting the complexity parameter with cross-validation",
       x = "Terminal Nodes",
       y = "CV RSS")

From the plot, we can see that the optimal tree is the fully grown tree without pruning, since the best number of terminal nodes is 18. We verify this in the code below.

which.min(cv_tree_model$dev)
## [1] 1
cv_tree_model$size[1]
## [1] 18

Now we check whether the test MSE changes when we prune with best = 18.

prune.model = prune.tree(tree.fit, best = 18)

prune.pred <- predict(prune.model, test)
mean((prune.pred - test$Sales)^2)
## [1] 3.947858

There is no difference in the test MSE between unpruned and pruned trees. The fully grown tree is the optimal tree in this case.
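
For comparison, one could also prune to a smaller tree, say five terminal nodes (an arbitrary size, not chosen by cross-validation), to see how the test MSE changes; a minimal sketch:

prune.model.5 <- prune.tree(tree.fit, best = 5)  # smaller tree for comparison
prune.pred.5 <- predict(prune.model.5, test)
mean((prune.pred.5 - test$Sales)^2)              # typically somewhat higher than the 3.95 from the full tree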

(d) Use the bagging approach in order to analyze this data. What test error rate do you obtain? Use the importance() function to determine which variables are most important.

Bagged trees can be implemented with the randomForest() function by setting mtry equal to the number of predictors (here 10) and choosing ntree; importance = TRUE is set so that variable importance can be examined.

set.seed(1)
rf.model <- randomForest(Sales ~ ., data = train, mtry = 10, ntree = 500, importance = T)
rf.model
## 
## Call:
##  randomForest(formula = Sales ~ ., data = train, mtry = 10, ntree = 500,      importance = T) 
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 10
## 
##           Mean of squared residuals: 2.369798
##                     % Var explained: 70.16
rf.pred <- predict(rf.model, test)
(rf.mse = mean((test$Sales - rf.pred)^2))
## [1] 2.582276
importance(rf.model)
##                %IncMSE IncNodePurity
## CompPrice   31.3603728    275.790941
## Income       8.7053235    119.085007
## Advertising 28.1912322    230.285218
## Population   0.8051243     79.085278
## Price       76.3883650    760.305034
## ShelveLoc   76.8909144    736.914127
## Age         21.3180134    213.014350
## Education   -0.2134194     58.939705
## Urban        1.4416115      9.827247
## US           4.6036647     11.115611

Interpretation: The test MSE obtained after bagging is 2.58, which is lower than the 3.95 from the single regression tree. The most important variables are Price, ShelveLoc and CompPrice.
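
The same importance information can be visualized with varImpPlot() from the randomForest package:

varImpPlot(rf.model)  # dot charts of %IncMSE and IncNodePurity for each predictor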

(e) Use random forests to analyze this data. What test error rate do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained.

set.seed(1)
rf.model.1 <- randomForest(Sales ~ ., data = train, mtry = sqrt(10), importance = T)
rf.model.1
## 
## Call:
##  randomForest(formula = Sales ~ ., data = train, mtry = sqrt(10),      importance = T) 
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 3
## 
##           Mean of squared residuals: 2.835411
##                     % Var explained: 64.29
rf.pred.1 <- predict(rf.model.1, test)
(rf.mse.1 = mean((test$Sales - rf.pred.1)^2))
## [1] 2.852874
importance(rf.model.1)
##                %IncMSE IncNodePurity
## CompPrice   15.6293440     235.11380
## Income       4.8484597     194.86910
## Advertising 22.2381062     228.48286
## Population   0.5125705     147.98453
## Price       46.1057561     616.33231
## ShelveLoc   48.9926391     582.05252
## Age         15.4368685     243.69660
## Education    3.2150359     102.48429
## Urban       -0.8527220      20.24973
## US           5.8796360      38.82538

The random forest with m = 3 gives a test MSE of 2.85, so bagging (m = 10, test MSE 2.58) still has the lowest test error, with ShelveLoc and Price remaining the most important variables. Reducing m decorrelates the trees, but in this case it increases the test MSE.
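
To see the effect of m more systematically, a minimal sketch (using the same train/test split as above) that refits the forest for each m from 1 to 10 and records the test MSE:

set.seed(1)
mse_by_m <- sapply(1:10, function(m) {
  fit <- randomForest(Sales ~ ., data = train, mtry = m, ntree = 500)
  mean((predict(fit, test) - test$Sales)^2)
})
plot(1:10, mse_by_m, type = "b",
     xlab = "m (variables considered at each split)", ylab = "Test MSE")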

Exercise 9

This problem involves the OJ data set which is part of the ISLR package.

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

detach(Carseats)
attach(OJ)
str(OJ)
## 'data.frame':    1070 obs. of  18 variables:
##  $ Purchase      : Factor w/ 2 levels "CH","MM": 1 1 1 2 1 1 1 1 1 1 ...
##  $ WeekofPurchase: num  237 239 245 227 228 230 232 234 235 238 ...
##  $ StoreID       : num  1 1 1 1 7 7 7 7 7 7 ...
##  $ PriceCH       : num  1.75 1.75 1.86 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
##  $ PriceMM       : num  1.99 1.99 2.09 1.69 1.69 1.99 1.99 1.99 1.99 1.99 ...
##  $ DiscCH        : num  0 0 0.17 0 0 0 0 0 0 0 ...
##  $ DiscMM        : num  0 0.3 0 0 0 0 0.4 0.4 0.4 0.4 ...
##  $ SpecialCH     : num  0 0 0 0 0 0 1 1 0 0 ...
##  $ SpecialMM     : num  0 1 0 0 0 1 1 0 0 0 ...
##  $ LoyalCH       : num  0.5 0.6 0.68 0.4 0.957 ...
##  $ SalePriceMM   : num  1.99 1.69 2.09 1.69 1.69 1.99 1.59 1.59 1.59 1.59 ...
##  $ SalePriceCH   : num  1.75 1.75 1.69 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
##  $ PriceDiff     : num  0.24 -0.06 0.4 0 0 0.3 -0.1 -0.16 -0.16 -0.16 ...
##  $ Store7        : Factor w/ 2 levels "No","Yes": 1 1 1 1 2 2 2 2 2 2 ...
##  $ PctDiscMM     : num  0 0.151 0 0 0 ...
##  $ PctDiscCH     : num  0 0 0.0914 0 0 ...
##  $ ListPriceDiff : num  0.24 0.24 0.23 0 0 0.3 0.3 0.24 0.24 0.24 ...
##  $ STORE         : num  1 1 1 1 0 0 0 0 0 0 ...
set.seed(1)
train.Index <- sample(nrow(OJ), 800)
train.OJ <- OJ[train.Index,]
test.OJ <- OJ[-train.Index,]

(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

tree.fit.OJ <- tree(Purchase ~ ., data = train.OJ)
summary(tree.fit.OJ)
## 
## Classification tree:
## tree(formula = Purchase ~ ., data = train.OJ)
## Variables actually used in tree construction:
## [1] "LoyalCH"       "PriceDiff"     "SpecialCH"     "ListPriceDiff"
## [5] "PctDiscMM"    
## Number of terminal nodes:  9 
## Residual mean deviance:  0.7432 = 587.8 / 791 
## Misclassification error rate: 0.1588 = 127 / 800

The tree uses only five variables (LoyalCH, PriceDiff, SpecialCH, ListPriceDiff and PctDiscMM) despite the data set having 17 predictors. The training error rate is 15.88%, and the tree has 9 terminal nodes.

(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

tree.fit.OJ
## node), split, n, deviance, yval, (yprob)
##       * denotes terminal node
## 
##  1) root 800 1073.00 CH ( 0.60625 0.39375 )  
##    2) LoyalCH < 0.5036 365  441.60 MM ( 0.29315 0.70685 )  
##      4) LoyalCH < 0.280875 177  140.50 MM ( 0.13559 0.86441 )  
##        8) LoyalCH < 0.0356415 59   10.14 MM ( 0.01695 0.98305 ) *
##        9) LoyalCH > 0.0356415 118  116.40 MM ( 0.19492 0.80508 ) *
##      5) LoyalCH > 0.280875 188  258.00 MM ( 0.44149 0.55851 )  
##       10) PriceDiff < 0.05 79   84.79 MM ( 0.22785 0.77215 )  
##         20) SpecialCH < 0.5 64   51.98 MM ( 0.14062 0.85938 ) *
##         21) SpecialCH > 0.5 15   20.19 CH ( 0.60000 0.40000 ) *
##       11) PriceDiff > 0.05 109  147.00 CH ( 0.59633 0.40367 ) *
##    3) LoyalCH > 0.5036 435  337.90 CH ( 0.86897 0.13103 )  
##      6) LoyalCH < 0.764572 174  201.00 CH ( 0.73563 0.26437 )  
##       12) ListPriceDiff < 0.235 72   99.81 MM ( 0.50000 0.50000 )  
##         24) PctDiscMM < 0.196197 55   73.14 CH ( 0.61818 0.38182 ) *
##         25) PctDiscMM > 0.196197 17   12.32 MM ( 0.11765 0.88235 ) *
##       13) ListPriceDiff > 0.235 102   65.43 CH ( 0.90196 0.09804 ) *
##      7) LoyalCH > 0.764572 261   91.20 CH ( 0.95785 0.04215 ) *

The node I picked to interpret is 11) PriceDiff > 0.05 109 147.00 CH ( 0.59633 0.40367 ) *. The asterisk marks it as a terminal node. It is reached when 0.280875 < LoyalCH < 0.5036 and PriceDiff > 0.05; it contains 109 training observations with a deviance of 147.00 and predicts CH, with about 59.6% of the observations in this node buying CH and 40.4% buying MM.

(d) Create a plot of the tree, and interpret the results.

plot(tree.fit.OJ)
text(tree.fit.OJ, pretty = 0, cex = 0.55)

The top three splits all use LoyalCH, so it is clearly the most important variable: if LoyalCH < 0.28 the tree predicts MM, and if LoyalCH > 0.76 it predicts CH. In summary, customers tend to buy the brand they are more loyal to, while in the intermediate cases factors such as discounts and relative prices can pull them toward the other brand.

(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

pred.oj <- predict(tree.fit.OJ, test.OJ, type = 'class')
confusionMatrix(test.OJ$Purchase, pred.oj)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  CH  MM
##         CH 160   8
##         MM  38  64
##                                           
##                Accuracy : 0.8296          
##                  95% CI : (0.7794, 0.8725)
##     No Information Rate : 0.7333          
##     P-Value [Acc > NIR] : 0.0001259       
##                                           
##                   Kappa : 0.6154          
##                                           
##  Mcnemar's Test P-Value : 1.904e-05       
##                                           
##             Sensitivity : 0.8081          
##             Specificity : 0.8889          
##          Pos Pred Value : 0.9524          
##          Neg Pred Value : 0.6275          
##              Prevalence : 0.7333          
##          Detection Rate : 0.5926          
##    Detection Prevalence : 0.6222          
##       Balanced Accuracy : 0.8485          
##                                           
##        'Positive' Class : CH              
## 
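
The test error rate is 1 - 0.8296 ≈ 0.17. It can also be computed directly from the predictions (this matches the 0.1703704 obtained in part (k) below):

mean(pred.oj != test.OJ$Purchase)  # proportion of misclassified test observations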

(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

cv_oj = cv.tree(tree.fit.OJ, FUN = prune.tree)
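
Note: FUN = prune.tree cross-validates on deviance. Since the question asks about the classification error rate, one could instead pass FUN = prune.misclass (also from the tree package); the plot and selection below are based on deviance. A sketch of the alternative:

cv_oj_misclass <- cv.tree(tree.fit.OJ, FUN = prune.misclass)  # CV on misclassification error (not used below)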

(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.

plot(cv_oj$size, cv_oj$dev, type = "b", xlab = "Tree Size", ylab = "Deviance")

(h) Which tree size corresponds to the lowest cross-validated classification error rate?

which.min(cv_oj$dev)
## [1] 1
cv_oj$size[1]
## [1] 9

The tree size of 9 gives the lowest cross-validation error.

(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

prune_oj = prune.tree(tree.fit.OJ, best = 9)
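
Since cross-validation did not select a smaller tree (the optimal size of 9 is the full tree), the exercise's fallback of a five-terminal-node tree could be produced as sketched below; the comparisons in (j) and (k) use the 9-node tree above.

prune_oj_5 <- prune.misclass(tree.fit.OJ, best = 5)  # five-terminal-node tree (fallback suggested by the exercise)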

(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?

summary(prune_oj)
## 
## Classification tree:
## tree(formula = Purchase ~ ., data = train.OJ)
## Variables actually used in tree construction:
## [1] "LoyalCH"       "PriceDiff"     "SpecialCH"     "ListPriceDiff"
## [5] "PctDiscMM"    
## Number of terminal nodes:  9 
## Residual mean deviance:  0.7432 = 587.8 / 791 
## Misclassification error rate: 0.1588 = 127 / 800

The training error is the same for the pruned and unpruned trees, about 15.88%. This is expected, because the selected size of 9 terminal nodes corresponds to the full tree, so pruning leaves it unchanged.

(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?

unpruned_pred = predict(tree.fit.OJ, test.OJ, type = "class")
unpruned_error = sum(test.OJ$Purchase != unpruned_pred)
unpruned_error/length(unpruned_pred)
## [1] 0.1703704
pruned_pred = predict(prune_oj, test.OJ, type = "class")
pruned_error = sum(test.OJ$Purchase != pruned_pred)
pruned_error/length(pruned_pred)
## [1] 0.1703704

There is no difference between the test error rates of the pruned and unpruned trees (both about 17.04%), since the "pruned" tree with 9 terminal nodes is identical to the unpruned tree.