Problem 3

Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of p̂m1. The x-axis should display p̂m1, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy. Hint: In a setting with two classes, p̂m1 = 1 − p̂m2. You could make this plot by hand, but it will be much easier to make in R.
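
Writing p = p̂m1 (so that p̂m2 = 1 − p), the three quantities computed in the code below reduce to the standard two-class forms:

Gini index: G = 2p(1 − p)
Entropy: D = −p log(p) − (1 − p) log(1 − p)
Classification error: E = 1 − max(p, 1 − p)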

p = seq(0, 1, 0.01)
gini = 2 * p * (1 - p)                            # two-class Gini index
entropy = -(p * log(p) + (1 - p) * log(1 - p))    # NaN at p = 0 and p = 1 (0 * log(0)); those endpoints are simply not drawn
class.err = 1 - pmax(p, 1 - p)                    # classification error
matplot(p, cbind(gini, entropy, class.err), type = "l", lty = 1,
        col = c("red", "green", "blue"),
        xlab = expression(hat(p)[m1]), ylab = "value")
legend(x = 'bottom', legend = c('gini', 'entropy', 'classification error'),
       col = c("red", "green", "blue"), lty = 1, text.width = 0.25)

Problem 8

In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.

library(ISLR)
## Warning: package 'ISLR' was built under R version 3.6.3
set.seed(1)
summary(Carseats)
##      Sales          CompPrice       Income        Advertising    
##  Min.   : 0.000   Min.   : 77   Min.   : 21.00   Min.   : 0.000  
##  1st Qu.: 5.390   1st Qu.:115   1st Qu.: 42.75   1st Qu.: 0.000  
##  Median : 7.490   Median :125   Median : 69.00   Median : 5.000  
##  Mean   : 7.496   Mean   :125   Mean   : 68.66   Mean   : 6.635  
##  3rd Qu.: 9.320   3rd Qu.:135   3rd Qu.: 91.00   3rd Qu.:12.000  
##  Max.   :16.270   Max.   :175   Max.   :120.00   Max.   :29.000  
##    Population        Price        ShelveLoc        Age       
##  Min.   : 10.0   Min.   : 24.0   Bad   : 96   Min.   :25.00  
##  1st Qu.:139.0   1st Qu.:100.0   Good  : 85   1st Qu.:39.75  
##  Median :272.0   Median :117.0   Medium:219   Median :54.50  
##  Mean   :264.8   Mean   :115.8                Mean   :53.32  
##  3rd Qu.:398.5   3rd Qu.:131.0                3rd Qu.:66.00  
##  Max.   :509.0   Max.   :191.0                Max.   :80.00  
##    Education    Urban       US     
##  Min.   :10.0   No :118   No :142  
##  1st Qu.:12.0   Yes:282   Yes:258  
##  Median :14.0                      
##  Mean   :13.9                      
##  3rd Qu.:16.0                      
##  Max.   :18.0
str(Carseats)
## 'data.frame':    400 obs. of  11 variables:
##  $ Sales      : num  9.5 11.22 10.06 7.4 4.15 ...
##  $ CompPrice  : num  138 111 113 117 141 124 115 136 132 132 ...
##  $ Income     : num  73 48 35 100 64 113 105 81 110 113 ...
##  $ Advertising: num  11 16 10 4 3 13 0 15 0 0 ...
##  $ Population : num  276 260 269 466 340 501 45 425 108 131 ...
##  $ Price      : num  120 83 80 97 128 72 108 120 124 124 ...
##  $ ShelveLoc  : Factor w/ 3 levels "Bad","Good","Medium": 1 2 3 3 1 1 3 2 3 3 ...
##  $ Age        : num  42 65 59 55 38 78 71 67 76 76 ...
##  $ Education  : num  17 10 12 14 13 16 15 10 10 17 ...
##  $ Urban      : Factor w/ 2 levels "No","Yes": 2 2 2 2 2 1 2 2 1 1 ...
##  $ US         : Factor w/ 2 levels "No","Yes": 2 2 2 2 1 2 1 2 1 2 ...

(a) Split the data set into a training set and a test set.

set.seed(123)
inTrain=sample(1:nrow(Carseats), 0.75*nrow(Carseats))
carseat.train <- Carseats[inTrain,]
carseat.test <- Carseats[-inTrain,]
dim(carseat.train)
## [1] 300  11
dim(carseat.test)
## [1] 100  11

(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?

This regression tree returns a Test MSE of 3.893676. According to the summary below, the tree uses the variables ShelveLoc, Price, CompPrice, Age and Advertising. It appears from the tree plot below that the most important variable is ShelveLoc, which initially splits the data based on whether a location is considered Good or not, followed by Price, which splits the data in the second tier.

library(tree)
## Warning: package 'tree' was built under R version 3.6.3
carseat.tree=tree(Sales~., data=carseat.train)
summary(carseat.tree)
## 
## Regression tree:
## tree(formula = Sales ~ ., data = carseat.train)
## Variables actually used in tree construction:
## [1] "ShelveLoc"   "Price"       "CompPrice"   "Age"         "Advertising"
## Number of terminal nodes:  19 
## Residual mean deviance:  2.359 = 662.9 / 281 
## Distribution of residuals:
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## -4.1290 -0.9993  0.0563  0.0000  0.9134  5.3040
plot(carseat.tree)
text(carseat.tree, pretty=0)

carseat.tree
## node), split, n, deviance, yval
##       * denotes terminal node
## 
##   1) root 300 2412.000  7.522  
##     2) ShelveLoc: Bad,Medium 235 1370.000  6.777  
##       4) Price < 105.5 79  430.600  8.193  
##         8) CompPrice < 123.5 56  272.600  7.497  
##          16) Price < 92.5 27   97.640  8.711  
##            32) ShelveLoc: Bad 10   25.790  7.149 *
##            33) ShelveLoc: Medium 17   33.090  9.630 *
##          17) Price > 92.5 29   98.190  6.367  
##            34) Age < 43 5    9.507  8.582 *
##            35) Age > 43 24   59.040  5.906 *
##         9) CompPrice > 123.5 23   64.820  9.887 *
##       5) Price > 105.5 156  701.000  6.060  
##        10) ShelveLoc: Bad 46  164.400  4.780  
##          20) CompPrice < 148 41  132.400  4.513  
##            40) Price < 143.5 35   96.940  4.836 *
##            41) Price > 143.5 6   10.530  2.632 *
##          21) CompPrice > 148 5    5.316  6.964 *
##        11) ShelveLoc: Medium 110  429.600  6.595  
##          22) CompPrice < 124.5 40  132.900  5.478  
##            44) Price < 135.5 34   82.330  5.883 *
##            45) Price > 135.5 6   13.530  3.187 *
##          23) CompPrice > 124.5 70  218.300  7.233  
##            46) Price < 127 31   62.110  8.092 *
##            47) Price > 127 39  115.200  6.551  
##              94) CompPrice < 147.5 29   74.240  6.073  
##               188) Advertising < 13.5 24   39.860  5.630 *
##               189) Advertising > 13.5 5    7.041  8.200 *
##              95) CompPrice > 147.5 10   15.070  7.938 *
##     3) ShelveLoc: Good 65  439.800 10.220  
##       6) Price < 109.5 23   66.440 12.420 *
##       7) Price > 109.5 42  200.300  9.009  
##        14) Advertising < 13.5 36  134.100  8.555  
##          28) Price < 156.5 31   93.010  8.933  
##            56) CompPrice < 133.5 22   37.870  8.160 *
##            57) CompPrice > 133.5 9    9.962 10.820 *
##          29) Price > 156.5 5    9.347  6.216 *
##        15) Advertising > 13.5 6   14.300 11.730 *
tree.preds=predict(carseat.tree, newdata=carseat.test)
mean((tree.preds-carseat.test$Sales)^2)
## [1] 3.893676

(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?

The Test MSE prior to pruning was 3.893676. Pruning the tree improves the Test MSE only very slightly to 3.892908.

carseat.cv.tree = cv.tree(carseat.tree)
carseat.cv.tree
## $size
##  [1] 19 18 16 15 14 13 11 10  9  8  7  6  5  4  3  2  1
## 
## $dev
##  [1] 1541.477 1527.935 1509.825 1517.322 1535.306 1530.867 1534.266
##  [8] 1534.266 1537.162 1515.482 1570.532 1570.532 1605.395 1586.251
## [15] 1723.024 1852.035 2418.791
## 
## $k
##  [1]      -Inf  24.88800  26.60495  26.76306  29.63532  37.07647  38.47564
##  [8]  38.75598  41.00933  51.83414  76.81405  78.39425  93.13437 106.90886
## [15] 173.10852 238.64847 602.31650
## 
## $method
## [1] "deviance"
## 
## attr(,"class")
## [1] "prune"         "tree.sequence"
#from the cv results above (and the plot below), the minimum deviance occurs at size = 16
plot(carseat.cv.tree$size, carseat.cv.tree$dev, type = "b")

tree.min=which.min(carseat.cv.tree$dev)
tree.min
## [1] 3
carseat.cv.tree$dev
##  [1] 1541.477 1527.935 1509.825 1517.322 1535.306 1530.867 1534.266
##  [8] 1534.266 1537.162 1515.482 1570.532 1570.532 1605.395 1586.251
## [15] 1723.024 1852.035 2418.791
carseat.cv.tree$size
##  [1] 19 18 16 15 14 13 11 10  9  8  7  6  5  4  3  2  1
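# which.min() returned an index (3) into $dev, not a tree size; mapping it back
# to a size makes the choice of best = 16 explicit (size[3] = 16 for this cv run)
carseat.cv.tree$size[tree.min]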
#pruning the tree with best=16
carseat.prune.tree = prune.tree(carseat.tree, best = 16)
plot(carseat.prune.tree)
text(carseat.prune.tree, pretty = 0)

prune.pred = predict(carseat.prune.tree, newdata = carseat.test)
mean((prune.pred - carseat.test$Sales)^2)
## [1] 3.892908

(d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.
The bagging approach obtains a Test MSE of 2.253202, an improvement over both the unpruned and pruned single trees. Setting mtry = 10 (all ten predictors considered at every split) is what makes randomForest() perform bagging here. The most important variables are ShelveLoc and Price.

#bagging
library(randomForest)
## Warning: package 'randomForest' was built under R version 3.6.3
## randomForest 4.6-14
## Type rfNews() to see new features/changes/bug fixes.
set.seed(123)
car.bag=randomForest(Sales~.,data=carseat.train, mtry=10, importance=TRUE)
car.bag
## 
## Call:
##  randomForest(formula = Sales ~ ., data = carseat.train, mtry = 10,      importance = TRUE) 
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 10
## 
##           Mean of squared residuals: 2.482232
##                     % Var explained: 69.13
importance(car.bag)
##                %IncMSE IncNodePurity
## CompPrice   37.1354164    281.825832
## Income      10.1378593    128.955945
## Advertising 21.5736372    151.157032
## Population  -0.1362661     71.539003
## Price       72.3036488    698.000814
## ShelveLoc   77.3615535    736.724942
## Age         20.2693017    187.902438
## Education    1.9415069     68.272112
## Urban        0.4779681     10.140546
## US           0.5347757      8.054079
bag.pred=predict(car.bag,newdata=carseat.test)
mean((bag.pred-carseat.test$Sales)^2)
## [1] 2.253202
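
The same ranking can also be viewed graphically with varImpPlot() from the randomForest package; because the model was fit with importance = TRUE, it should draw one panel per importance measure (%IncMSE and IncNodePurity):

varImpPlot(car.bag, main = "Variable importance (bagging)")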

(e) Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained.

With the random forest model, we obtain a Test MSE of 2.698657. This is higher than the Test MSE obtained with the bagging method. Bagging considered all 10 predictors at each split, whereas the random forest used the default of only 3 predictors at each split. The importance() function shows that the two most important variables under the random forest are the same as under bagging, ShelveLoc and Price. A sketch exploring the effect of m over a grid of values follows the output below.

set.seed(123)
car.rf=randomForest(Sales~., data=carseat.train)
car.rf
## 
## Call:
##  randomForest(formula = Sales ~ ., data = carseat.train) 
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 3
## 
##           Mean of squared residuals: 2.800753
##                     % Var explained: 65.17
importance(car.rf)
##             IncNodePurity
## CompPrice       242.51537
## Income          182.84952
## Advertising     187.74748
## Population      134.21978
## Price           546.89278
## ShelveLoc       562.96792
## Age             271.17906
## Education       110.34937
## Urban            20.60962
## US               24.55086
rf.pred= predict(car.rf, newdata = carseat.test)
mean((rf.pred - carseat.test$Sales)^2)
## [1] 2.698657
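
As a rough illustration of the effect of m, one could refit the forest over a grid of mtry values on the same train/test split and track the Test MSE (a sketch only; mtry.vals and test.mse are my own object names, and the exact curve will depend on the seed):

# test MSE as a function of m, the number of variables considered at each split
set.seed(123)
mtry.vals = 1:10
test.mse = sapply(mtry.vals, function(m) {
  fit = randomForest(Sales ~ ., data = carseat.train, mtry = m, ntree = 500)
  mean((predict(fit, newdata = carseat.test) - carseat.test$Sales)^2)
})
plot(mtry.vals, test.mse, type = "b", xlab = "m (variables tried at each split)", ylab = "Test MSE")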

Problem 9

This problem involves the OJ data set which is part of the ISLR package.

library(ISLR)
summary(OJ)
##  Purchase WeekofPurchase     StoreID        PriceCH         PriceMM     
##  CH:653   Min.   :227.0   Min.   :1.00   Min.   :1.690   Min.   :1.690  
##  MM:417   1st Qu.:240.0   1st Qu.:2.00   1st Qu.:1.790   1st Qu.:1.990  
##           Median :257.0   Median :3.00   Median :1.860   Median :2.090  
##           Mean   :254.4   Mean   :3.96   Mean   :1.867   Mean   :2.085  
##           3rd Qu.:268.0   3rd Qu.:7.00   3rd Qu.:1.990   3rd Qu.:2.180  
##           Max.   :278.0   Max.   :7.00   Max.   :2.090   Max.   :2.290  
##      DiscCH            DiscMM         SpecialCH        SpecialMM     
##  Min.   :0.00000   Min.   :0.0000   Min.   :0.0000   Min.   :0.0000  
##  1st Qu.:0.00000   1st Qu.:0.0000   1st Qu.:0.0000   1st Qu.:0.0000  
##  Median :0.00000   Median :0.0000   Median :0.0000   Median :0.0000  
##  Mean   :0.05186   Mean   :0.1234   Mean   :0.1477   Mean   :0.1617  
##  3rd Qu.:0.00000   3rd Qu.:0.2300   3rd Qu.:0.0000   3rd Qu.:0.0000  
##  Max.   :0.50000   Max.   :0.8000   Max.   :1.0000   Max.   :1.0000  
##     LoyalCH          SalePriceMM     SalePriceCH      PriceDiff      
##  Min.   :0.000011   Min.   :1.190   Min.   :1.390   Min.   :-0.6700  
##  1st Qu.:0.325257   1st Qu.:1.690   1st Qu.:1.750   1st Qu.: 0.0000  
##  Median :0.600000   Median :2.090   Median :1.860   Median : 0.2300  
##  Mean   :0.565782   Mean   :1.962   Mean   :1.816   Mean   : 0.1465  
##  3rd Qu.:0.850873   3rd Qu.:2.130   3rd Qu.:1.890   3rd Qu.: 0.3200  
##  Max.   :0.999947   Max.   :2.290   Max.   :2.090   Max.   : 0.6400  
##  Store7      PctDiscMM        PctDiscCH       ListPriceDiff  
##  No :714   Min.   :0.0000   Min.   :0.00000   Min.   :0.000  
##  Yes:356   1st Qu.:0.0000   1st Qu.:0.00000   1st Qu.:0.140  
##            Median :0.0000   Median :0.00000   Median :0.240  
##            Mean   :0.0593   Mean   :0.02731   Mean   :0.218  
##            3rd Qu.:0.1127   3rd Qu.:0.00000   3rd Qu.:0.300  
##            Max.   :0.4020   Max.   :0.25269   Max.   :0.440  
##      STORE      
##  Min.   :0.000  
##  1st Qu.:0.000  
##  Median :2.000  
##  Mean   :1.631  
##  3rd Qu.:3.000  
##  Max.   :4.000
str(OJ)
## 'data.frame':    1070 obs. of  18 variables:
##  $ Purchase      : Factor w/ 2 levels "CH","MM": 1 1 1 2 1 1 1 1 1 1 ...
##  $ WeekofPurchase: num  237 239 245 227 228 230 232 234 235 238 ...
##  $ StoreID       : num  1 1 1 1 7 7 7 7 7 7 ...
##  $ PriceCH       : num  1.75 1.75 1.86 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
##  $ PriceMM       : num  1.99 1.99 2.09 1.69 1.69 1.99 1.99 1.99 1.99 1.99 ...
##  $ DiscCH        : num  0 0 0.17 0 0 0 0 0 0 0 ...
##  $ DiscMM        : num  0 0.3 0 0 0 0 0.4 0.4 0.4 0.4 ...
##  $ SpecialCH     : num  0 0 0 0 0 0 1 1 0 0 ...
##  $ SpecialMM     : num  0 1 0 0 0 1 1 0 0 0 ...
##  $ LoyalCH       : num  0.5 0.6 0.68 0.4 0.957 ...
##  $ SalePriceMM   : num  1.99 1.69 2.09 1.69 1.69 1.99 1.59 1.59 1.59 1.59 ...
##  $ SalePriceCH   : num  1.75 1.75 1.69 1.69 1.69 1.69 1.69 1.75 1.75 1.75 ...
##  $ PriceDiff     : num  0.24 -0.06 0.4 0 0 0.3 -0.1 -0.16 -0.16 -0.16 ...
##  $ Store7        : Factor w/ 2 levels "No","Yes": 1 1 1 1 2 2 2 2 2 2 ...
##  $ PctDiscMM     : num  0 0.151 0 0 0 ...
##  $ PctDiscCH     : num  0 0 0.0914 0 0 ...
##  $ ListPriceDiff : num  0.24 0.24 0.23 0 0 0.3 0.3 0.24 0.24 0.24 ...
##  $ STORE         : num  1 1 1 1 0 0 0 0 0 0 ...

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

set.seed(123)
inTrain=sample(1:nrow(OJ), 800)
oj.train = OJ[inTrain,]
oj.test = OJ[-inTrain,]
dim(oj.train)
## [1] 800  18
dim(oj.test)
## [1] 270  18

(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

In this classification tree, as seen in the summary below, only two of the variables, LoyalCH and PriceDiff, are actually used in the splits, and the tree has 8 terminal nodes. The training (misclassification) error rate for this model is 16.5%.

#classification tree
oj.tree=tree(Purchase~., data=oj.train)
summary(oj.tree)
## 
## Classification tree:
## tree(formula = Purchase ~ ., data = oj.train)
## Variables actually used in tree construction:
## [1] "LoyalCH"   "PriceDiff"
## Number of terminal nodes:  8 
## Residual mean deviance:  0.7625 = 603.9 / 792 
## Misclassification error rate: 0.165 = 132 / 800

(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

Terminal nodes are denoted by lines that end with an asterisk. If we look at terminal node 6, we can see that this node is reached by the first split, LoyalCH > 0.5036, followed by a second split, PriceDiff < -0.39. There are 27 observations in this node, and its predicted class is MM. Of these 27 observations, about 29.6% have a true class of CH and about 70.4% have a true class of MM.

oj.tree
## node), split, n, deviance, yval, (yprob)
##       * denotes terminal node
## 
##  1) root 800 1071.00 CH ( 0.60875 0.39125 )  
##    2) LoyalCH < 0.5036 350  415.10 MM ( 0.28000 0.72000 )  
##      4) LoyalCH < 0.276142 170  131.00 MM ( 0.12941 0.87059 )  
##        8) LoyalCH < 0.0356415 56   10.03 MM ( 0.01786 0.98214 ) *
##        9) LoyalCH > 0.0356415 114  108.90 MM ( 0.18421 0.81579 ) *
##      5) LoyalCH > 0.276142 180  245.20 MM ( 0.42222 0.57778 )  
##       10) PriceDiff < 0.05 74   74.61 MM ( 0.20270 0.79730 ) *
##       11) PriceDiff > 0.05 106  144.50 CH ( 0.57547 0.42453 ) *
##    3) LoyalCH > 0.5036 450  357.10 CH ( 0.86444 0.13556 )  
##      6) PriceDiff < -0.39 27   32.82 MM ( 0.29630 0.70370 ) *
##      7) PriceDiff > -0.39 423  273.70 CH ( 0.90071 0.09929 )  
##       14) LoyalCH < 0.705326 130  135.50 CH ( 0.78462 0.21538 )  
##         28) PriceDiff < 0.145 43   58.47 CH ( 0.58140 0.41860 ) *
##         29) PriceDiff > 0.145 87   62.07 CH ( 0.88506 0.11494 ) *
##       15) LoyalCH > 0.705326 293  112.50 CH ( 0.95222 0.04778 ) *

(d) Create a plot of the tree, and interpret the results.
The tree classifies observations as either CH or MM. The two variables used to make these classifications are LoyalCH and PriceDiff. The first split is on whether LoyalCH is greater or less than 0.5036, with the left side being less and the right side greater. The left side of the tree is split again on LoyalCH at 0.276142, while the right side's next split is on PriceDiff at -0.39. The left side of the tree is split once more on both LoyalCH and PriceDiff, while the right side has two more splits on one of its branches. There are a total of 8 terminal nodes, four classified as CH and four as MM. Relative to the first split, most of the terminal nodes on the left side are classified as MM, while the majority of nodes on the right side are CH.

plot(oj.tree)
text(oj.tree, pretty = 0)

(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?
The test error rate for this model is 18.5%.

oj.pred=predict(oj.tree, newdata=oj.test, type="class")
table(oj.pred,oj.test$Purchase)
##        
## oj.pred  CH  MM
##      CH 150  34
##      MM  16  70
(34+16)/270
## [1] 0.1851852
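
Equivalently, the test error rate can be computed directly from the predictions instead of hand-tallying the confusion matrix; this should reproduce the 18.5% above:

mean(oj.pred != oj.test$Purchase)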

(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

set.seed(123)
oj.cv.tree = cv.tree(oj.tree, FUN = prune.misclass)
oj.cv.tree
## $size
## [1] 8 5 3 2 1
## 
## $dev
## [1] 139 139 157 167 313
## 
## $k
## [1] -Inf    0    8   11  154
## 
## $method
## [1] "misclass"
## 
## attr(,"class")
## [1] "prune"         "tree.sequence"

(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.

plot(oj.cv.tree$size, oj.cv.tree$dev, type = "b", xlab = "Tree Size", ylab = "Cross-Validation Error Rate")

(h) Which tree size corresponds to the lowest cross-validated classification error rate?

Based on the plot and the summary of oj.cv.tree above, tree sizes of 5 and 8 have the same lowest cross-validated error of 139. I will choose a tree size of 5; one way to select it programmatically is sketched below.
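
A small sketch of selecting the smallest size that attains the minimum cross-validated error from the cv object above (sizes 8 and 5 tie at 139, so this returns 5):

min(oj.cv.tree$size[oj.cv.tree$dev == min(oj.cv.tree$dev)])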

(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

oj.prune.tree = prune.misclass(oj.tree, best = 5)
summary(oj.prune.tree)
## 
## Classification tree:
## snip.tree(tree = oj.tree, nodes = c(4L, 7L))
## Variables actually used in tree construction:
## [1] "LoyalCH"   "PriceDiff"
## Number of terminal nodes:  5 
## Residual mean deviance:  0.826 = 656.6 / 795 
## Misclassification error rate: 0.165 = 132 / 800
plot(oj.prune.tree)
text(oj.prune.tree, pretty = 0)

(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?
The training error rates of the pruned and unpruned trees are identical at 16.5%, and both use the same two variables. The key difference between the models is the number of terminal nodes: the unpruned tree has 8, whereas the pruned tree has only 5. A quick recomputation of both training error rates is sketched below.
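
A minimal sketch recomputing both training error rates directly from predictions on the training set; each should match the 0.165 reported in the summaries:

mean(predict(oj.tree, newdata = oj.train, type = "class") != oj.train$Purchase)
mean(predict(oj.prune.tree, newdata = oj.train, type = "class") != oj.train$Purchase)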

(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?
There is no difference between the test error rates of the pruned and unpruned trees; both are 18.5%.

oj.prune.tree.pred = predict(oj.prune.tree, newdata = oj.test, type = "class")
table(oj.prune.tree.pred, oj.test$Purchase)
##                   
## oj.prune.tree.pred  CH  MM
##                 CH 150  34
##                 MM  16  70
(34+16)/270
## [1] 0.1851852