Q.3

Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of \(\hat{p}_{m1}\). The x-axis should display \(\hat{p}_{m1}\), ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.

Hint: In a setting with two classes, \(\hat{p}_{m1} = 1 - \hat{p}_{m2}\). You could make this plot by hand, but it will be much easier to make in R.
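Written out for two classes with \(p = \hat{p}_{m1}\), the three quantities plotted by the code below reduce to

\[
G = 2p(1-p), \qquad
D = -p\log p - (1-p)\log(1-p), \qquad
E = 1 - \max(p,\ 1-p).
\]

(The code uses the natural logarithm for the entropy, so its peak is \(\log 2 \approx 0.69\) rather than 1.)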

p = seq(0, 1, 0.01)
gini = p * (1 - p) * 2
entropy = -(p * log(p) + (1 - p) * log(1 - p))
class.err = 1 - pmax(p, 1 - p)
matplot(p, cbind(gini, entropy, class.err), type = "l", lty = 1, lwd = 2,
        col = c("pink", "red", "purple"),
        xlab = expression(hat(p)[m1]), ylab = "Value")
legend("topright", legend = c("Gini index", "Entropy", "Classification error"),
       col = c("pink", "red", "purple"), lty = 1, lwd = 2)

Q.8

library(ISLR)
data(Carseats)

## Converting Sales into a Qualitative Response Variable
# Yes if Sales > 8 and No otherwise
Carseats$Sales=as.factor(ifelse(Carseats$Sales > 8,"Yes","No"))
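A quick sanity check of the resulting class balance (output omitted here; it simply tabulates the two levels):

table(Carseats$Sales)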

(a) Split the data set into a training set and a test set.

library(caret)

inTrain = createDataPartition(y=Carseats$Sales,p=0.75,list = FALSE)
train = Carseats[inTrain,]
test = Carseats[-inTrain,]

(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?

# fit a regression tree
library(rpart)

tree.carseats=rpart(Sales ~ ., data=train, method="anova", control=rpart.control(minsplit=15, cp=0.01))

Plot the tree

library(rattle)
fancyRpartPlot(tree.carseats, main = "Tree Plot of Carseats", sub="", cex=0.5, palettes=c("Blues")) 

The most important indicator of Sales appears to be shelving location, since the first branch differentiates Good locations from Bad and Medium locations.
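One way to check this numerically is the importance scores stored in the fitted rpart object (output omitted here; the exact values depend on the random training split above):

# overall importance of each predictor, as computed by rpart
tree.carseats$variable.importance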

Test MSE

library(ModelMetrics)   # assumed source of the mse() helper used below

pred1=predict(tree.carseats, newdata=test)
TestMSE1=mse(test$Sales,pred1)
TestMSE1
## [1] 0.2144515

(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?

printcp(tree.carseats)
## 
## Regression tree:
## rpart(formula = Sales ~ ., data = train, method = "anova", control = rpart.control(minsplit = 15, 
##     cp = 0.01))
## 
## Variables actually used in tree construction:
## [1] Advertising Age         CompPrice   Income      Price       ShelveLoc  
## [7] US         
## 
## Root node error: 72.57/300 = 0.2419
## 
## n= 300 
## 
##          CP nsplit rel error  xerror     xstd
## 1  0.151618      0   1.00000 1.01014 0.021501
## 2  0.081840      1   0.84838 0.86950 0.047181
## 3  0.071033      2   0.76654 0.90486 0.056551
## 4  0.051328      3   0.69551 0.87484 0.061682
## 5  0.034360      4   0.64418 0.82117 0.065181
## 6  0.033913      5   0.60982 0.84569 0.068591
## 7  0.029488      8   0.50808 0.81937 0.068484
## 8  0.027188      9   0.47859 0.81004 0.068948
## 9  0.024043     10   0.45141 0.80325 0.071034
## 10 0.023488     12   0.40332 0.79430 0.072405
## 11 0.018603     13   0.37983 0.78512 0.071709
## 12 0.011024     14   0.36123 0.77119 0.073577
## 13 0.010000     15   0.35021 0.77862 0.073729
plotcp(tree.carseats, col = 4, pch=16, cex=1)

Lowest Cross-validated Error

tree.carseats$cptable[which.min(tree.carseats$cptable[,"xerror"]),"xerror"]
## [1] 0.7711895

The tree with 14 splits (cp ≈ 0.011, i.e. 15 terminal nodes) has the lowest cross-validated error of all.

Optimal Level of Tree Complexity

tree.carseats$cptable[which.min(tree.carseats$cptable[,"xerror"]),"CP"]
## [1] 0.01102384
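An alternative to taking the minimum xerror is the one-standard-error rule, which picks the simplest tree whose cross-validated error is within one standard error of that minimum. A minimal sketch using the cptable above (cp.tab and cp.1se are illustrative names); the rest of the analysis below keeps the minimum-xerror choice:

cp.tab = tree.carseats$cptable
min.xerr = min(cp.tab[, "xerror"])
xstd.at.min = cp.tab[which.min(cp.tab[, "xerror"]), "xstd"]
# first (i.e. simplest) tree whose xerror is within one SE of the minimum
cp.1se = cp.tab[which(cp.tab[, "xerror"] <= min.xerr + xstd.at.min)[1], "CP"]
cp.1se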

Pruning the Tree

carseats.prune=prune(tree.carseats,cp=tree.carseats$cptable[which.min(tree.carseats$cptable[,"xerror"]),"CP"])

fancyRpartPlot(carseats.prune, uniform=TRUE, main="Pruned Tree", sub="", palettes=c("Oranges"))

Test MSE

pred2=predict(carseats.prune, newdata=test)
TestMSE2=mse(test$Sales,pred2)
TestMSE2
## [1] 0.2168515

Pruning the tree did not improve the test MSE here; it rose slightly, from 0.2145 for the full tree to 0.2169 for the pruned tree.

(d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.

library(randomForest)

# bagging = a random forest that tries all 10 predictors at every split (mtry = p)
Carseats.bag = randomForest(Sales~.,data=train,mtry = 10,importance = TRUE)

Test MSE

pred3 = predict(Carseats.bag,newdata=test)
TestMSE3 = mse(test$Sales, pred3)
TestMSE3
## [1] 0.25
# important variables
importance(Carseats.bag)
##                     No        Yes MeanDecreaseAccuracy MeanDecreaseGini
## CompPrice    7.8394961 10.2935689          12.83924013       16.2478329
## Income       3.3928095  2.7049537           4.14660222       11.6397918
## Advertising 14.5436419 16.7306440          21.61109016       20.1478430
## Population  -0.9800967 -5.0741421          -4.01906273        8.1456945
## Price       31.1404945 33.7299992          43.33675112       38.6462901
## ShelveLoc   33.3149468 28.8125463          39.86600448       27.3276185
## Age          7.5351530  8.8450096          11.12405479       14.2695766
## Education    1.0633392 -1.0829264           0.08616897        5.2471448
## Urban        1.2487407 -0.6639458           0.41999398        0.7714795
## US           5.4930214  6.8601409           8.70186544        2.1589283
library(colorspace)

barplot(importance(Carseats.bag), beside = TRUE, border = TRUE,
        col = rainbow_hcl(10), axisnames = TRUE,
        legend.text = rownames(importance(Carseats.bag)),
        args.legend = list(x = "topright"), width = 1, xlim = c(3, 52))
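randomForest also ships its own dot-chart of the same importance measures, which can be easier to read than the grouped barplot:

varImpPlot(Carseats.bag, main = "Bagged Trees: Variable Importance")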

(e) Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained

# fit random forest
Carseats.rf=train(Sales ~ .,data=train,method='rf',trControl = trainControl("cv", number = 10),importance = TRUE)

Carseats.rf
## Random Forest 
## 
## 300 samples
##  10 predictor
##   2 classes: 'No', 'Yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold) 
## Summary of sample sizes: 270, 270, 270, 270, 270, 271, ... 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##    2    0.8002929  0.5710437
##    6    0.8001928  0.5764407
##   11    0.7834112  0.5423412
## 
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 2.

By default, 500 trees are grown. Cross-validated accuracy is highest at mtry = 2 (0.8003, essentially tied with mtry = 6), so the selected value is m = 2. Accuracy drops when all of the (dummy-coded) predictors are considered at each split (mtry = 11): smaller values of m de-correlate the individual trees, which tends to lower the error rate relative to bagging, as the sketch below illustrates on the test set.
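A minimal sketch of the effect of m, refitting the forest over a range of mtry values and recording the test error (m.values, err.by.m, and rf.m are illustrative names; results will vary with the random split and the forests' internal randomness):

m.values = 1:10
err.by.m = rep(NA, length(m.values))
for (m in m.values) {
  rf.m = randomForest(Sales ~ ., data = train, mtry = m, ntree = 500)
  err.by.m[m] = mean(predict(rf.m, newdata = test) != test$Sales)
}
plot(m.values, err.by.m, type = "b", pch = 16,
     xlab = "m (variables tried at each split)", ylab = "Test error rate")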

Test MSE

pred4=predict(Carseats.rf, newdata=test)
TestMSE4=mse(test$Sales,pred4)
TestMSE4
## [1] 0.21
# important variables
varImp(Carseats.rf)
## rf variable importance
## 
##                 Importance
## ShelveLocGood      100.000
## Price               91.465
## Advertising         51.791
## ShelveLocMedium     32.042
## Age                 25.728
## USYes               23.342
## CompPrice           14.377
## Education           12.188
## Income              10.536
## UrbanYes             8.718
## Population           0.000
plot(varImp(Carseats.rf))

The results indicate that, across all of the trees considered in the random forest, the price the company charges for car seats at each site (Price) and the quality of the shelving location (ShelveLoc) are by far the two most important variables.

Q.9

library(ISLR)
data(OJ)

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

# randomly sample 800 of the 1,070 OJ rows for the training set
inTrain2 = sample(1070,800)
train2 = OJ[inTrain2,]
test2 = OJ[-inTrain2,]

(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

# fit a tree
tree.OJ=rpart(Purchase ~., data=train2, method="class", control=rpart.control(minsplit=15, cp=0.01))

summary(tree.OJ, cp=1)
## Call:
## rpart(formula = Purchase ~ ., data = train2, method = "class", 
##     control = rpart.control(minsplit = 15, cp = 0.01))
##   n= 800 
## 
##          CP nsplit rel error    xerror       xstd
## 1 0.5096154      0 1.0000000 1.0000000 0.04421683
## 2 0.0224359      1 0.4903846 0.5128205 0.03626189
## 3 0.0100000      4 0.4230769 0.5064103 0.03609080
## 
## Variable importance
##        LoyalCH  ListPriceDiff      PriceDiff    SalePriceMM        PriceMM 
##             60             10              6              5              5 
##        PriceCH    SalePriceCH        StoreID WeekofPurchase      PctDiscMM 
##              3              2              2              2              1 
##         DiscMM      SpecialMM 
##              1              1 
## 
## Node number 1: 800 observations
##   predicted class=CH  expected loss=0.39  P(node) =1
##     class counts:   488   312
##    probabilities: 0.610 0.390

It has 5 terminal nodes (4 splits).

The most important indicator of Purchase appears to be customer brand loyalty for Citrus Hill (LoyalCH).

Training Error Rate

pred_mat1=predict(tree.OJ,train2, type = "class")
TrainErr = mean(pred_mat1 != train2$Purchase)
TrainErr
## [1] 0.165

This matches the summary output above: relative error 0.4231 × root-node error 0.39 ≈ 0.165.

(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

tree.OJ
## n= 800 
## 
## node), split, n, loss, yval, (yprob)
##       * denotes terminal node
## 
##  1) root 800 312 CH (0.61000000 0.39000000)  
##    2) LoyalCH>=0.48285 497  81 CH (0.83702213 0.16297787)  
##      4) LoyalCH>=0.740621 264  11 CH (0.95833333 0.04166667) *
##      5) LoyalCH< 0.740621 233  70 CH (0.69957082 0.30042918)  
##       10) ListPriceDiff>=0.235 139  19 CH (0.86330935 0.13669065) *
##       11) ListPriceDiff< 0.235 94  43 MM (0.45744681 0.54255319)  
##         22) PriceDiff>=0.015 43  15 CH (0.65116279 0.34883721) *
##         23) PriceDiff< 0.015 51  15 MM (0.29411765 0.70588235) *
##    3) LoyalCH< 0.48285 303  72 MM (0.23762376 0.76237624) *

Take terminal node 23 as an example: it holds the 51 training observations with LoyalCH between 0.48 and 0.74, ListPriceDiff < 0.235, and PriceDiff < 0.015. The predicted class is MM, with 15 of the 51 observations misclassified, so about 71% of the customers in this node buy Minute Maid.

(d) Create a plot of the tree, and interpret the results.

library(rattle)

fancyRpartPlot(tree.OJ, main = "Tree Plot of Orange Juice Data", sub="", cex=0.6, palettes=c("Blues")) 

The most important indicator of Purchase appears to be customer brand loyalty for Citrus Hill (LoyalCH), since the first split separates customers with LoyalCH >= 0.48 from those with lower loyalty.

(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

pred_mat2=predict(tree.OJ, test2, type = "class")

# confusion matrix
table(pred_mat2, test2$Purchase)
##          
## pred_mat2  CH  MM
##        CH 139  21
##        MM  26  84

Test Error Rate

TestErr = mean(pred_mat2 != test2$Purchase)
TestErr
## [1] 0.1740741
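As a cross-check, the same rate can be read directly off the confusion matrix above: the off-diagonal counts divided by the number of test observations.

(21 + 26) / (139 + 21 + 26 + 84)
## [1] 0.1740741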

(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

library(tree)

# cv.tree() expects a tree-package fit, so the classification tree is refit here with tree()
tree.cv.OJ=cv.tree(tree(Purchase ~ ., data = train2), FUN = prune.tree, K = 10)
tree.cv.OJ
## $size
## [1] 7 6 5 4 3 2 1
## 
## $dev
## [1]  681.6301  688.3083  693.0921  695.5025  728.5934  793.6519 1071.8759
## 
## $k
## [1]      -Inf  12.22040  13.79713  33.47481  44.30497  65.62322 295.80418
## 
## $method
## [1] "deviance"
## 
## attr(,"class")
## [1] "prune"         "tree.sequence"

(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.

plot(tree.cv.OJ$size, tree.cv.OJ$dev, type = "b", xlab = "Tree size", ylab = "Cross-Validated Deviance", col="purple", pch=16)

(h) Which tree size corresponds to the lowest cross-validated classification error rate?

tree.cv.OJ$size[which.min(tree.cv.OJ$dev)]
## [1] 7

The unpruned tree of size 7 has the lowest cross-validated error, so cross-validation does not point to a pruned tree here.

(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

OJ.prune = prune.misclass(tree(Purchase ~ ., data = train2), best = 5)
plot(OJ.prune, main="Pruned Tree", col = "limegreen")
text(OJ.prune, pretty=0, col = "darkgreen")

(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?

Training Error Rate of Pruned Tree

pred_mat3=predict(OJ.prune,train2, type = "class")
TrainErr2 = mean(pred_mat3 != train2$Purchase)
TrainErr2
## [1] 0.165

Training Error Rate of Unpruned Tree

TrainErr
## [1] 0.165

Neither is higher: the training error rates are identical (0.165 for both trees).

(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?

Test Error Rate of Pruned Tree

pred_mat4=predict(OJ.prune,test2, type = "class")
TestErr2 = mean(pred_mat4 != test2$Purchase)
TestErr2
## [1] 0.1740741

Test Error Rate of Unpruned Tree

TestErr
## [1] 0.1740741

Neither is higher: the test error rates are also identical (0.1740741 for both trees).