Assignment 7

3. Consider the Gini index, classification error, and cross-entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of pm1. The x-axis should display pm1, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.

pm1 = seq(0, 1, 0.01)

# Gini index: 2 * pm1 * (1 - pm1)
gini_index = 2 * pm1 * (1 - pm1)
# Cross-entropy (natural log); evaluates to NaN at pm1 = 0 and 1, which matplot simply drops
entropy = -(pm1 * log(pm1) + (1 - pm1) * log(1 - pm1))
# Classification error: 1 - max(pm1, 1 - pm1)
classification_err = 1 - pmax(pm1, 1 - pm1)

matplot(pm1, cbind(entropy, gini_index, classification_err), type = "l", lty = 1,
        col = c("orange", "blue", "darkgreen"), xlab = "pm1", ylab = "Value")
legend("topright", lty = 1, col = c("orange", "blue", "darkgreen"),
       legend = c("Cross-entropy", "Gini index", "Classification error rate"))

9. This problem involves the OJ data set which is part of the ISLR package.

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

library(ISLR)    # OJ data set
library(caret)   # createDataPartition()
library(tree)    # tree(), cv.tree(), prune.misclass()

set.seed(456)
data("OJ")

# Stratified split on Purchase; p = 799/1070 targets roughly 800 training rows
trainrows = createDataPartition(OJ$Purchase, p = (799/1070), list = FALSE)

traind = OJ[trainrows, ]
testd = OJ[-trainrows, ]
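
Because createDataPartition() stratifies on Purchase, the split with p = 799/1070 only approximately targets 800 rows. A minimal base-R alternative that draws exactly 800 observations as the exercise asks (unstratified; the names train.idx, traind.alt, and testd.alt are my own and this is not the split used above):

set.seed(456)
train.idx = sample(1:nrow(OJ), 800)   # exactly 800 training rows
traind.alt = OJ[train.idx, ]
testd.alt = OJ[-train.idx, ]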

(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

tree.oj = tree(Purchase~., traind)
summary(tree.oj)
## 
## Classification tree:
## tree(formula = Purchase ~ ., data = traind)
## Variables actually used in tree construction:
## [1] "LoyalCH"   "PriceDiff"
## Number of terminal nodes:  8 
## Residual mean deviance:  0.7763 = 614.8 / 792 
## Misclassification error rate: 0.1825 = 146 / 800

The fitted model is a tree with 8 terminal nodes, built using only two of the predictors, LoyalCH and PriceDiff. Its training misclassification error rate is 18.25% (146 of the 800 training observations).
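
As a quick check, the training error reported by summary() could also be computed directly from predictions on the training set (a small sketch; the confusion matrix line is just for inspection and train.preds is my own name):

train.preds = predict(tree.oj, newdata = traind, type = "class")  # fitted classes on training data
table(traind$Purchase, train.preds)                               # training confusion matrix
mean(train.preds != traind$Purchase)                              # should match the 0.1825 from summary()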

(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

tree.oj
## node), split, n, deviance, yval, (yprob)
##       * denotes terminal node
## 
##  1) root 800 1070.000 CH ( 0.61000 0.39000 )  
##    2) LoyalCH < 0.5036 360  432.800 MM ( 0.28889 0.71111 )  
##      4) LoyalCH < 0.277977 173  146.400 MM ( 0.15029 0.84971 )  
##        8) LoyalCH < 0.0356415 54    9.959 MM ( 0.01852 0.98148 ) *
##        9) LoyalCH > 0.0356415 119  122.300 MM ( 0.21008 0.78992 ) *
##      5) LoyalCH > 0.277977 187  254.100 MM ( 0.41711 0.58289 )  
##       10) PriceDiff < 0.065 75   75.060 MM ( 0.20000 0.80000 ) *
##       11) PriceDiff > 0.065 112  153.500 CH ( 0.56250 0.43750 ) *
##    3) LoyalCH > 0.5036 440  335.400 CH ( 0.87273 0.12727 )  
##      6) LoyalCH < 0.705699 142  168.900 CH ( 0.71831 0.28169 )  
##       12) PriceDiff < 0.265 79  108.500 CH ( 0.55696 0.44304 ) *
##       13) PriceDiff > 0.265 63   34.930 CH ( 0.92063 0.07937 ) *
##      7) LoyalCH > 0.705699 298  124.700 CH ( 0.94631 0.05369 )  
##       14) PriceDiff < -0.39 13   17.320 CH ( 0.61538 0.38462 ) *
##       15) PriceDiff > -0.39 285   93.170 CH ( 0.96140 0.03860 ) *

Terminal node 14 contains the training observations with LoyalCH > 0.705699 and PriceDiff < -0.39. There are 13 such observations, the node's deviance is 17.32, and its predicted class is CH, with roughly 61.5% of the observations in the node actually being CH and 38.5% MM.
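
The same numbers can be pulled programmatically from the fitted object's frame, whose row names are the node numbers (a small sketch relying on the tree package's internal frame layout):

tree.oj$frame["14", ]   # n, deviance, predicted class, and class proportions for node 14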

(d) Create a plot of the tree, and interpret the results.

plot(tree.oj)
text(tree.oj, pretty = 1)

The plot lays the decision tree out in an easy-to-read way. The root split is on LoyalCH; observations are then split on LoyalCH again before the final splits, which are mostly on PriceDiff (plus one more on LoyalCH). LoyalCH does the lion's share of the predicting, but PriceDiff is important for separating MM from CH among customers with intermediate brand loyalty.

(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

preds = predict(tree.oj, newdata = testd, type = 'class' )
table(testd$Purchase, preds)
##     preds
##       CH  MM
##   CH 157   8
##   MM  37  68
mean(preds != testd$Purchase)
## [1] 0.1666667

Predicting on the test data gives a test error rate of 16.67%: 45 of the 270 test observations (8 actual CH predicted as MM, and 37 actual MM predicted as CH) are misclassified.
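
Equivalently, the error rate can be read off the confusion matrix as one minus the share of correctly classified observations on its diagonal (a small sketch; cm is just a temporary name for the table above):

cm = table(testd$Purchase, preds)   # rows = actual, columns = predicted
1 - sum(diag(cm)) / sum(cm)         # (8 + 37) / 270 = 0.1667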

(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

set.seed(456)
# 10-fold cross-validation; with the default FUN, cv.tree() scores candidate trees by deviance
cvtree.oj = cv.tree(tree.oj, K = 10)
cvtree.oj
## $size
## [1] 8 7 6 5 4 3 2 1
## 
## $dev
## [1]  760.4799  710.2332  710.2621  747.6609  747.6609  768.8216  778.4092
## [8] 1072.7824
## 
## $k
## [1]      -Inf  14.12318  14.21512  25.43116  25.50346  32.32585  41.86952
## [8] 301.73730
## 
## $method
## [1] "deviance"
## 
## attr(,"class")
## [1] "prune"         "tree.sequence"

(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.

plot(cvtree.oj$size, cvtree.oj$dev, type = "b", xlab = "Tree size", ylab = "CV deviance")

(h) Which tree size corresponds to the lowest cross-validated classification error rate?

cvtree.oj$size[which.min(cvtree.oj$dev)]
## [1] 7

Sizes 7 and 6 have essentially the same (lowest) cross-validated deviance (710.23 vs. 710.26), so I choose 6 as the optimal size since it is the simpler model. Note that cv.tree() was run with its default settings here, so the quantity being minimized is deviance rather than the classification error rate the question asks about; a misclassification-based alternative is sketched below.
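
The tree package can also cross-validate on the misclassification rate directly by passing FUN = prune.misclass, which matches the wording of (g) and (h) more closely. A minimal sketch of that variant (cvtree.miss is my own name, and this is not the run that produced the output above):

set.seed(456)
cvtree.miss = cv.tree(tree.oj, FUN = prune.misclass, K = 10)  # $dev now counts CV misclassifications
plot(cvtree.miss$size, cvtree.miss$dev, type = "b",
     xlab = "Tree size", ylab = "CV misclassification count")
cvtree.miss$size[which.min(cvtree.miss$dev)]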

(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

prune.oj = prune.misclass(tree.oj, best = 6)
plot(prune.oj)
text(prune.oj, pretty = 1)

prune.oj = prune.misclass(tree.oj, best = 4)
plot(prune.oj)
text(prune.oj, pretty = 1)

Pruning to either 5 or 6 terminal nodes did not return a tree different from the original 8-node tree, so I used best = 4 for the comparison.
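
This behaviour can be checked by calling prune.misclass() without a best argument, which returns the full cost-complexity pruning sequence; only the sizes listed there can actually be produced (a small sketch; seq.oj is my own name):

seq.oj = prune.misclass(tree.oj)   # full pruning sequence, no 'best' requested
seq.oj$size                        # candidate tree sizes under misclassification pruning
seq.oj$dev                         # training misclassifications at each candidate size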

(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?

summary(prune.oj)
## 
## Classification tree:
## snip.tree(tree = tree.oj, nodes = 3:4)
## Variables actually used in tree construction:
## [1] "LoyalCH"   "PriceDiff"
## Number of terminal nodes:  4 
## Residual mean deviance:  0.8925 = 710.4 / 796 
## Misclassification error rate: 0.1825 = 146 / 800

Even though the pruned tree has fewer nodes, the leaves of each collapsed subtree all predicted the same class as the node they were collapsed into, so the pruned tree assigns the same class to every training observation as the unpruned tree does. The training error rate is therefore identical for both trees (18.25%), and neither is higher.
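
To make the comparison explicit, the training error of each tree could be computed directly (a small sketch; both values should match the 0.1825 reported by summary()):

mean(predict(tree.oj, newdata = traind, type = "class") != traind$Purchase)   # unpruned tree, training error
mean(predict(prune.oj, newdata = traind, type = "class") != traind$Purchase)  # pruned (4-node) tree, training error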

(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?

preds.unpruned = predict(tree.oj, newdata = testd, type = 'class')

mean(preds.unpruned != testd$Purchase)
## [1] 0.1666667
preds.pruned = predict(prune.oj, newdata = testd, type = 'class')

mean(preds.pruned != testd$Purchase)
## [1] 0.1666667

Because the pruned tree makes exactly the same predictions as the unpruned tree, the test error rates are also identical (16.67%), so neither is higher. I wrote out the code anyway to show how a difference would be detected if one existed.