3. Consider the Gini index, classification error, and cross-entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of pm1. The x-axis should display pm1, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.
pm1 = seq(0, 1, 0.01)
# Gini index, cross-entropy (natural log), and classification error for two classes
gini_index = 2 * pm1 * (1 - pm1)
entropy = -(pm1 * log(pm1) + (1 - pm1) * log(1 - pm1))  # NaN at pm1 = 0 and 1; those points are simply not drawn
classification_err = 1 - pmax(pm1, 1 - pm1)
matplot(pm1, cbind(entropy, gini_index, classification_err), pch = 20,
        col = c("orange", "blue", "darkgreen"), ylab = "Value")
legend("topright", pch = 20, col = c("orange", "blue", "darkgreen"),
       legend = c("Cross-entropy", "Gini index", "Classification error rate"))

8. In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.
(a) Split the data set into a training set and a test set.
data("Carseats")
trainrows = createDataPartition(Carseats$Sales, p = 0.8, list = FALSE)
traind = Carseats[trainrows,]
testd = Carseats[-trainrows,]
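A quick sanity check that the split is roughly 80/20 of the 400 Carseats rows:
nrow(traind)   # roughly 320 observations for training
nrow(testd)    # roughly 80 observations held out for testing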
(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?
set.seed(456)
tree.carseats = tree(Sales ~ ., traind)
summary(tree.carseats)
##
## Regression tree:
## tree(formula = Sales ~ ., data = traind)
## Variables actually used in tree construction:
## [1] "ShelveLoc" "Price" "Age" "Advertising" "Income"
## [6] "CompPrice"
## Number of terminal nodes: 20
## Residual mean deviance: 2.38 = 716.3 / 301
## Distribution of residuals:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -6.47100 -0.90400 -0.07129 0.00000 0.94600 4.10800
plot(tree.carseats)
text(tree.carseats, pretty = 1, srt = 90)

preds = predict(tree.carseats, newdata = testd)
mean((preds - testd$Sales)^2)
## [1] 4.809271
The model created with the tree function has 20 terminal nodes (leaves), each of which gives a predicted Sales value for one region of the predictor space. The splits are made on ShelveLoc, Price, Age, Advertising, Income, and CompPrice. The tree is first split on shelving location (Bad/Medium versus Good); this can be tricky to read in the plot because the text was rotated to keep the lower leaves legible. The test MSE is about 4.81.
(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?
set.seed(456)
cv.carseats = cv.tree(tree.carseats, K = 5)
plot(cv.carseats$size, cv.carseats$dev, type = "b")

prune.carseats = prune.tree(tree.carseats, best = 10)
plot(prune.carseats)
text(prune.carseats, pretty=1)

preds = predict(prune.carseats, newdata = testd)
mean((preds - testd$Sales)^2)
## [1] 4.996586
The complexity of the model has decreased, but the test MSE has increased slightly, from about 4.81 for the full tree to about 5.00 for the pruned tree, so pruning does not improve the test MSE here.
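The cut at best = 10 above was read off the plot; the CV-minimizing size can also be extracted programmatically and used for pruning. A sketch (the object names best.size and prune.best are ours):
# Subtree size with the lowest cross-validated deviance
best.size = cv.carseats$size[which.min(cv.carseats$dev)]
best.size
# Prune to that size and recompute the test MSE
prune.best = prune.tree(tree.carseats, best = best.size)
mean((predict(prune.best, newdata = testd) - testd$Sales)^2)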
(d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.
set.seed(456)
bag.carseats = randomForest(Sales ~ ., data = Carseats, subset = trainrows, mtry = ncol(Carseats) - 1, importance = TRUE)
preds = predict(bag.carseats, newdata = testd)
mean((preds - testd$Sales)^2)
## [1] 2.202808
importance(bag.carseats)
## %IncMSE IncNodePurity
## CompPrice 32.3771981 260.40403
## Income 9.9178785 130.09126
## Advertising 24.4926440 188.08162
## Population -2.6160140 70.93482
## Price 71.6154878 728.02591
## ShelveLoc 79.7456785 759.48725
## Age 28.3865073 273.54064
## Education -1.3471112 58.34743
## Urban 0.5736216 11.66990
## US 2.6362673 11.62332
This is the lowest test MSE so far, at about 2.20. Using the importance() function, we can see that ShelveLoc, Price, CompPrice, and Age are the most important variables. This is also reflected in the decision trees created earlier, where those variables appear in the major splits.
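For a visual version of the same ranking, the randomForest package's varImpPlot() can be used; a minimal sketch:
varImpPlot(bag.carseats)   # dot charts of %IncMSE and IncNodePurity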
(e) Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables aremost important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained.
set.seed(456)
rf.carseats = randomForest(Sales~., data = traind, mtry = 10, importance = TRUE)
preds = predict(rf.carseats, newdata = testd)
mean((preds - testd$Sales)^2)
## [1] 2.202808
importance(rf.carseats)
## %IncMSE IncNodePurity
## CompPrice 32.3771981 260.40403
## Income 9.9178785 130.09126
## Advertising 24.4926440 188.08162
## Population -2.6160140 70.93482
## Price 71.6154878 728.02591
## ShelveLoc 79.7456785 759.48725
## Age 28.3865073 273.54064
## Education -1.3471112 58.34743
## Urban 0.5736216 11.66990
## US 2.6362673 11.62332
The test MSE is again about 2.20. The most important variables here are ShelveLoc, Price, and Age. An m of 10 is used because that is the number of predictors, so every variable is considered at each split and this fit is equivalent to the bagged model from part (d), which is why the results match. Refitting with a smaller m based on the highly rated variables from the previous part gives a test MSE of about 2.45; in this case, reducing m from 10 to 4 increases the test MSE by roughly 0.25 (a sketch of that refit appears below).
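The reduced-m refit described above is not shown in the original code; a sketch of what it might look like, assuming mtry = 4 (the object names rf.carseats4 and preds4 are ours):
set.seed(456)
# Random forest considering only m = 4 randomly chosen predictors at each split
rf.carseats4 = randomForest(Sales ~ ., data = traind, mtry = 4, importance = TRUE)
preds4 = predict(rf.carseats4, newdata = testd)
mean((preds4 - testd$Sales)^2)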
9. This problem involves the OJ data set which is part of the ISLR package.
(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.
set.seed(456)
data("OJ")
trainrows = createDataPartition(OJ$Purchase, p = (799/1070), list = FALSE)
traind = OJ[trainrows,]
testd = OJ[-trainrows,]
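createDataPartition() targets 800 rows here via p = 799/1070 while stratifying on Purchase. A base-R alternative that draws exactly 800 training rows, shown only for reference (the object name trainrows.alt is ours and is not used below):
set.seed(456)
trainrows.alt = sample(seq_len(nrow(OJ)), 800)   # exactly 800 training indices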
(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?
tree.oj = tree(Purchase~., traind)
summary(tree.oj)
##
## Classification tree:
## tree(formula = Purchase ~ ., data = traind)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff"
## Number of terminal nodes: 8
## Residual mean deviance: 0.7763 = 614.8 / 792
## Misclassification error rate: 0.1825 = 146 / 800
The fitted classification tree has 8 terminal nodes and uses only LoyalCH and PriceDiff in its splits. The training misclassification error rate is 146/800 = 18.25%.
(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.
tree.oj
## node), split, n, deviance, yval, (yprob)
## * denotes terminal node
##
## 1) root 800 1070.000 CH ( 0.61000 0.39000 )
## 2) LoyalCH < 0.5036 360 432.800 MM ( 0.28889 0.71111 )
## 4) LoyalCH < 0.277977 173 146.400 MM ( 0.15029 0.84971 )
## 8) LoyalCH < 0.0356415 54 9.959 MM ( 0.01852 0.98148 ) *
## 9) LoyalCH > 0.0356415 119 122.300 MM ( 0.21008 0.78992 ) *
## 5) LoyalCH > 0.277977 187 254.100 MM ( 0.41711 0.58289 )
## 10) PriceDiff < 0.065 75 75.060 MM ( 0.20000 0.80000 ) *
## 11) PriceDiff > 0.065 112 153.500 CH ( 0.56250 0.43750 ) *
## 3) LoyalCH > 0.5036 440 335.400 CH ( 0.87273 0.12727 )
## 6) LoyalCH < 0.705699 142 168.900 CH ( 0.71831 0.28169 )
## 12) PriceDiff < 0.265 79 108.500 CH ( 0.55696 0.44304 ) *
## 13) PriceDiff > 0.265 63 34.930 CH ( 0.92063 0.07937 ) *
## 7) LoyalCH > 0.705699 298 124.700 CH ( 0.94631 0.05369 )
## 14) PriceDiff < -0.39 13 17.320 CH ( 0.61538 0.38462 ) *
## 15) PriceDiff > -0.39 285 93.170 CH ( 0.96140 0.03860 ) *
Terminal node 14 contains the training observations with LoyalCH > 0.705699 and PriceDiff < -0.39. There are 13 observations from the training data in this node, with a deviance of 17.32, and the predicted class is CH, since about 61.5% of the observations in the node purchased CH (the remaining 38.5% purchased MM).
(d) Create a plot of the tree, and interpret the results.
plot(tree.oj)
text(tree.oj, pretty = 1)
The decision tree is now laid out in an easy-to-read way. The first split is on LoyalCH, and depending on that value an observation is split on LoyalCH again before reaching the final tier of LoyalCH and PriceDiff splits. LoyalCH does the lion's share of the predicting, but PriceDiff is important for separating MM from CH among customers with intermediate loyalty.
(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?
preds = predict(tree.oj, newdata = testd, type = 'class' )
table(testd$Purchase, preds)
## preds
## CH MM
## CH 157 8
## MM 37 68
mean(preds != testd$Purchase)
## [1] 0.1666667
Creating predictions for the test data gives a test error rate of (8 + 37)/270 ≈ 16.67%; that is, about one in six test observations is classified as CH when it was actually MM, or vice versa.
(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.
set.seed(456)
cvtree.oj = cv.tree(tree.oj, K = 10)
cvtree.oj
## $size
## [1] 8 7 6 5 4 3 2 1
##
## $dev
## [1] 760.4799 710.2332 710.2621 747.6609 747.6609 768.8216 778.4092
## [8] 1072.7824
##
## $k
## [1] -Inf 14.12318 14.21512 25.43116 25.50346 32.32585 41.86952
## [8] 301.73730
##
## $method
## [1] "deviance"
##
## attr(,"class")
## [1] "prune" "tree.sequence"
(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.
plot(cvtree.oj$size, cvtree.oj$dev, type = "b")

cvtree.oj$size[which.min(cvtree.oj$dev)]
## [1] 7
Sizes 7 and 6 have essentially the same (lowest) cross-validated deviance, and which.min() returns 7, but since 6 gives a simpler model, that will be the chosen size.
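The plot above shows cross-validated deviance; to cross-validate on the misclassification rate directly, as part (g) asks, cv.tree() can be given FUN = prune.misclass. A sketch (the object name cvmisclass.oj is ours):
set.seed(456)
cvmisclass.oj = cv.tree(tree.oj, FUN = prune.misclass, K = 10)
# With prune.misclass, $dev holds CV misclassification counts rather than deviance
plot(cvmisclass.oj$size, cvmisclass.oj$dev, type = "b")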
(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.
prune.oj = prune.misclass(tree.oj, best = 6)
plot(prune.oj)
text(prune.oj, pretty = 1)

prune.oj = prune.misclass(tree.oj, best = 4)
plot(prune.oj)
text(prune.oj, pretty = 1)
Neither a prune to 5 nor to 6 terminal nodes returned a tree different from the one originally created, so a pruned tree with 4 terminal nodes is used for the comparison instead (a check of the sizes actually returned is sketched below).
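To verify that claim, the number of terminal nodes actually returned for each requested size can be counted from the pruned tree's frame; a sketch:
# Leaves in the tree returned for each requested best size
sapply(c(4, 5, 6, 8), function(b) sum(prune.misclass(tree.oj, best = b)$frame$var == "<leaf>"))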
(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?
summary(prune.oj)
##
## Classification tree:
## snip.tree(tree = tree.oj, nodes = 3:4)
## Variables actually used in tree construction:
## [1] "LoyalCH" "PriceDiff"
## Number of terminal nodes: 4
## Residual mean deviance: 0.8925 = 710.4 / 796
## Misclassification error rate: 0.1825 = 146 / 800
Even though the pruned tree has fewer terminal nodes, the retained leaves predict the same classes as the unpruned tree, so the training misclassification error rate is identical for both: 146/800 = 18.25%. Neither is higher, although the pruned tree's residual mean deviance is larger (0.8925 versus 0.7763).
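For an explicit comparison rather than reading the rates off summary(), the training errors of both trees can be computed directly; a sketch (the object names are ours):
train.preds.unpruned = predict(tree.oj, newdata = traind, type = "class")
train.preds.pruned = predict(prune.oj, newdata = traind, type = "class")
mean(train.preds.unpruned != traind$Purchase)   # unpruned training error rate
mean(train.preds.pruned != traind$Purchase)     # pruned training error rate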
(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?
preds.unpruned = predict(tree.oj, newdata = testd, type = 'class' )
mean(preds.unpruned != testd$Purchase)
## [1] 0.1666667
preds.pruned = predict(prune.oj, newdata = testd, type = 'class')
mean(preds.pruned != testd$Purchase)
## [1] 0.1666667
Since the pruned tree makes the same predictions as the unpruned tree, the error rates on the test data are also identical, so neither is higher. The code is included anyway to show how the two test error rates would be compared if the trees did differ.