Chapter 08 (page 332): 3, 8, 9

Problem 3

Consider the Gini index, classification error, and entropy in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of $\hat{p}_{m1}$. The x-axis should display $\hat{p}_{m1}$, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.

p <- seq(0, 1, 0.001)
gini.index <- 2 * p * (1 - p)                           # Gini: 2 p (1 - p) for two classes
class.error <- 1 - pmax(p, 1 - p)                       # misclassification error
cross.entropy <- -(p * log(p) + (1 - p) * log(1 - p))   # NaN at p = 0, 1 (dropped by matplot)
matplot(p, cbind(gini.index, class.error, cross.entropy), type = "l", lty = 1,
        col = c("#fde0dd", "#fa9fb5", "#c51b8a"),
        xlab = expression(hat(p)[m1]), ylab = "Impurity measure")
legend("topleft", c("Gini index", "Classification error", "Entropy"),
       col = c("#fde0dd", "#fa9fb5", "#c51b8a"), lty = 1)
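
At p = 0 and p = 1 the expression p * log(p) evaluates to NaN in R, so the endpoints of the entropy curve are silently dropped. A minimal sketch that adopts the usual convention 0 log 0 = 0 (the helper names xlogx and safe.entropy are my own):

xlogx <- function(x) ifelse(x == 0, 0, x * log(x))      # define 0 * log(0) as 0
safe.entropy <- function(p) -(xlogx(p) + xlogx(1 - p))  # entropy on the closed interval [0, 1]
safe.entropy(c(0, 0.5, 1))                              # 0, log(2) ≈ 0.693, 0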

Problem 8

In the lab, a classification tree was applied to the Carseats data set after converting Sales into a qualitative response variable. Now we will seek to predict Sales using regression trees and related approaches, treating the response as a quantitative variable.
(a) Split the data set into a training set and a test set.

library(ISLR)
attach(Carseats)
set.seed(123)
train <- sample(1:nrow(Carseats), nrow(Carseats) / 2)
Car.train <- Carseats[train, ]
Car.test <- Carseats[-train,]
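
As a quick sanity check on the split (Carseats has 400 rows, so each half should contain 200 observations):

dim(Car.train)   # 200 rows, 11 columns
dim(Car.test)    # 200 rows, 11 columns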

(b) Fit a regression tree to the training set. Plot the tree, and interpret the results. What test MSE do you obtain?

library(tree)

reg.tree <- tree(Sales ~ ., data = Carseats, subset = train)
summary(reg.tree)

Regression tree:
tree(formula = Sales ~ ., data = Carseats, subset = train)
Variables actually used in tree construction:
[1] "ShelveLoc"   "Price"       "Advertising" "Population"  "Age"        
[6] "CompPrice"  
Number of terminal nodes:  15 
Residual mean deviance:  2.624 = 485.4 / 185 
Distribution of residuals:
     Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
-3.863000 -1.166000  0.000105  0.000000  1.070000  4.177000 
# plot the tree
plot(reg.tree)
text(reg.tree, pretty = 0)

# test MSE
yhat <- predict(reg.tree, newdata = Car.test)
mean((yhat - Car.test$Sales)^2)
[1] 4.427724

Notice that the output of summary() indicates that only 6 of the 10 predictors have been used in constructing the tree. The test MSE is about 4.43.

(c) Use cross-validation in order to determine the optimal level of tree complexity. Does pruning the tree improve the test MSE?

set.seed(1)
cv.car <- cv.tree(reg.tree)
plot(cv.car$size, cv.car$dev, type = "b", xlab = "Tree size", ylab = "CV deviance")

cv.car
$size
 [1] 15 14 13 12 11 10  9  8  7  6  5  4  3  2  1

$dev
 [1] 1004.865 1037.569 1036.574 1044.420 1117.076 1129.020 1127.200 1141.886
 [9] 1163.688 1167.178 1258.156 1253.592 1193.044 1231.505 1568.735

$k
 [1]      -Inf  18.02620  21.27734  25.61440  29.30120  33.75840  34.39053
 [8]  39.98526  44.49292  47.11262  63.49470  78.31714 106.92311 164.25364
[15] 358.55742

$method
[1] "deviance"

attr(,"class")
[1] "prune"         "tree.sequence"

In this case, cross-validation selects the most complex tree (15 terminal nodes). If we nonetheless wish to prune the tree, we can do so with the prune.tree() function:

prune.car <- prune.tree(reg.tree, best = 8)
plot(prune.car)
text(prune.car, pretty = 0)

yhat <- predict(prune.car, newdata = Car.test)
mean((yhat - Car.test$Sales)^2)
[1] 5.314472

Pruning to 8 terminal nodes raises the test MSE from about 4.43 to 5.31, so pruning does not improve the test MSE here.
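
To see how the test MSE behaves across the whole pruning sequence rather than at a single size, one could sweep the best argument over the candidate sizes. A minimal sketch (the names sizes and test.mse are my own):

# test MSE for each subtree in the pruning sequence
sizes <- 2:15                       # prune.tree() needs at least two terminal nodes
test.mse <- sapply(sizes, function(s) {
  pruned <- prune.tree(reg.tree, best = s)
  mean((predict(pruned, newdata = Car.test) - Car.test$Sales)^2)
})
plot(sizes, test.mse, type = "b", xlab = "Terminal nodes", ylab = "Test MSE")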

(d) Use the bagging approach in order to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important.

library(randomForest)
set.seed(1)
# bagging is a random forest with mtry equal to the number of predictors (10 here)
bag.car <- randomForest(Sales ~ ., data = Car.train, mtry = 10, importance = TRUE)
yhat.bag <- predict(bag.car, newdata = Car.test)
mean((yhat.bag - Car.test$Sales)^2)
[1] 2.539716
varImpPlot(bag.car)
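
Bagging lowers the test MSE to about 2.54. The exercise also asks for the importance() table itself, which complements the plot above:

importance(bag.car)   # %IncMSE and IncNodePurity for each predictor

On this data the importance measures typically single out Price and ShelveLoc as the most influential predictors of Sales.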

(e) Use random forests to analyze this data. What test MSE do you obtain? Use the importance() function to determine which variables are most important. Describe the effect of m, the number of variables considered at each split, on the error rate obtained.

library(randomForest)
set.seed(1)
# random forest: mtry = 3 is close to the p/3 default for regression
rf.car <- randomForest(Sales ~ ., data = Car.train, mtry = 3, importance = TRUE)
yhat.rf <- predict(rf.car, newdata = Car.test)
mean((yhat.rf - Car.test$Sales)^2)
[1] 3.164063
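
With m = 3 the test MSE is about 3.16, worse than bagging's 2.54 on this split; here the decorrelation gained by restricting m does not offset the loss of strong splits, and importance(rf.car) again reports the requested variable-importance measures. To describe the effect of m directly, a minimal sketch that sweeps mtry from 1 to 10 (the name mse.by.m is my own):

# test MSE as a function of m, the number of predictors tried at each split
mse.by.m <- sapply(1:10, function(m) {
  fit <- randomForest(Sales ~ ., data = Car.train, mtry = m)
  mean((predict(fit, newdata = Car.test) - Car.test$Sales)^2)
})
plot(1:10, mse.by.m, type = "b", xlab = "m (mtry)", ylab = "Test MSE")

Typically the error drops sharply as m increases from 1 and then levels off, with the better values of m toward the upper end for this data.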

Problem 9

This problem involves the OJ data set, which is part of the ISLR package.

(a) Create a training set containing a random sample of 800 observations, and a test set containing the remaining observations.

library(ISLR)
set.seed(1)
train <- sample(nrow(OJ), 800)
OJ.train <- OJ[train, ]
OJ.test <- OJ[-train, ]

(b) Fit a tree to the training data, with Purchase as the response and the other variables as predictors. Use the summary() function to produce summary statistics about the tree, and describe the results obtained. What is the training error rate? How many terminal nodes does the tree have?

OJ.tree <- tree(Purchase ~ ., data = OJ.train)
summary(OJ.tree)

Classification tree:
tree(formula = Purchase ~ ., data = OJ.train)
Variables actually used in tree construction:
[1] "LoyalCH"       "PriceDiff"     "SpecialCH"     "ListPriceDiff"
Number of terminal nodes:  8 
Residual mean deviance:  0.7305 = 578.6 / 792 
Misclassification error rate: 0.165 = 132 / 800 

The tree has 8 terminal nodes, uses only four of the predictors (LoyalCH, PriceDiff, SpecialCH, ListPriceDiff), and attains a training error rate of 16.5%.
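
As a sanity check, the training error rate can be recomputed directly from the fitted tree:

train.pred <- predict(OJ.tree, OJ.train, type = "class")
mean(train.pred != OJ.train$Purchase)   # 132 / 800 = 0.165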

(c) Type in the name of the tree object in order to get a detailed text output. Pick one of the terminal nodes, and interpret the information displayed.

OJ.tree
node), split, n, deviance, yval, (yprob)
      * denotes terminal node

 1) root 800 1064.00 CH ( 0.61750 0.38250 )  
   2) LoyalCH < 0.508643 350  409.30 MM ( 0.27143 0.72857 )  
     4) LoyalCH < 0.264232 166  122.10 MM ( 0.12048 0.87952 )  
       8) LoyalCH < 0.0356415 57   10.07 MM ( 0.01754 0.98246 ) *
       9) LoyalCH > 0.0356415 109  100.90 MM ( 0.17431 0.82569 ) *
     5) LoyalCH > 0.264232 184  248.80 MM ( 0.40761 0.59239 )  
      10) PriceDiff < 0.195 83   91.66 MM ( 0.24096 0.75904 )  
        20) SpecialCH < 0.5 70   60.89 MM ( 0.15714 0.84286 ) *
        21) SpecialCH > 0.5 13   16.05 CH ( 0.69231 0.30769 ) *
      11) PriceDiff > 0.195 101  139.20 CH ( 0.54455 0.45545 ) *
   3) LoyalCH > 0.508643 450  318.10 CH ( 0.88667 0.11333 )  
     6) LoyalCH < 0.764572 172  188.90 CH ( 0.76163 0.23837 )  
      12) ListPriceDiff < 0.235 70   95.61 CH ( 0.57143 0.42857 ) *
      13) ListPriceDiff > 0.235 102   69.76 CH ( 0.89216 0.10784 ) *
     7) LoyalCH > 0.764572 278   86.14 CH ( 0.96403 0.03597 ) *

Consider terminal node 8, reached via LoyalCH < 0.0356: it contains 57 training observations, has a deviance of 10.07, and predicts MM. About 98.2% of the observations in this node purchased MM, so customers with very low Citrus Hill loyalty almost always buy Minute Maid.

(d) Create a plot of the tree, and interpret the results.

plot(OJ.tree)
text(OJ.tree, pretty = 0)

The tree is dominated by LoyalCH: the first two levels of splits are all on brand loyalty, with PriceDiff, SpecialCH, and ListPriceDiff entering only further down. Highly loyal Citrus Hill customers (LoyalCH > 0.76) are predicted to buy CH almost regardless of price.

(e) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?

tree.pred <- predict(OJ.tree, newdata = OJ.test, type = "class")
table(tree.pred, OJ.test$Purchase)
         
tree.pred  CH  MM
       CH 147  49
       MM  12  62
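
From the table, the test error rate is (12 + 49) / 270 ≈ 22.6%. It can also be computed directly:

mean(tree.pred != OJ.test$Purchase)   # ≈ 0.226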

(f) Apply the cv.tree() function to the training set in order to determine the optimal tree size.

cv.OJ <- cv.tree(OJ.tree, FUN = prune.misclass)
cv.OJ
$size
[1] 8 5 2 1

$dev
[1] 145 145 155 306

$k
[1]       -Inf   0.000000   4.666667 160.000000

$method
[1] "misclass"

attr(,"class")
[1] "prune"         "tree.sequence"

Trees of size 8 and size 5 tie for the lowest cross-validated error (145 misclassifications out of 800), so the smallest optimal tree size is 5.

(g) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.

# with FUN = prune.misclass, $dev holds CV misclassification counts, so divide by n for a rate
plot(cv.OJ$size, cv.OJ$dev / nrow(OJ.train), type = "b",
     xlab = "Tree size", ylab = "CV classification error rate")

(i) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.

Since sizes 5 and 8 tie in cross-validated error, we prune to the smaller tree with five terminal nodes:

prune.OJ <- prune.misclass(OJ.tree, best = 5)
plot(prune.OJ)
text(prune.OJ, pretty = 0)

(j) Compare the training error rates between the pruned and unpruned trees. Which is higher?

summary(OJ.tree)

Classification tree:
tree(formula = Purchase ~ ., data = OJ.train)
Variables actually used in tree construction:
[1] "LoyalCH"       "PriceDiff"     "SpecialCH"     "ListPriceDiff"
Number of terminal nodes:  8 
Residual mean deviance:  0.7305 = 578.6 / 792 
Misclassification error rate: 0.165 = 132 / 800 
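
The comparison also needs the pruned tree's training error, which summary() on the pruned fit reports (output not reproduced here):

summary(prune.OJ)   # reports the pruned tree's training misclassification rate

Because pruning collapses splits that reduced training error, the pruned tree's training error rate is at least as high as the unpruned tree's 16.5%.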

(k) Compare the test error rates between the pruned and unpruned trees. Which is higher?

tree.pred <- predict(prune.OJ, newdata = OJ.test, type = "class")
table(tree.pred, OJ.test$Purchase)
         
tree.pred  CH  MM
       CH 147  49
       MM  12  62
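
The pruned tree produces the same confusion matrix as the unpruned tree, so both have a test error rate of (12 + 49) / 270 ≈ 22.6%; neither is higher. A direct check:

mean(tree.pred != OJ.test$Purchase)   # ≈ 0.226 for the pruned tree as well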
LS0tCnRpdGxlOiAiSFc3IERNIgphdXRob3I6ICJFLiBGYWxjb24iCmRhdGU6ICI1LzMvMjAyMCIKb3V0cHV0OgogIGh0bWxfbm90ZWJvb2s6CiAgICB0b2M6IHllcwogICAgdG9jX2Zsb2F0OiB5ZXMKICBodG1sX2RvY3VtZW50OgogICAgZGZfcHJpbnQ6IHBhZ2VkCiAgICB0b2M6IHllcwotLS0KCgojIyMgQ2hhcHRlciAwOCAocGFnZSAgMzMyKTogMywgOCwgOQoKIyMgUHJvYmxlbSAzCioqQ29uc2lkZXIgdGhlIEdpbmkgaW5kZXgsIGNsYXNzaWZpY2F0aW9uIGVycm9yLCBhbmQgZW50cm9weSBpbiBhIHNpbXBsZSBjbGFzc2lmaWNhdGlvbiBzZXR0aW5nIHdpdGggdHdvIGNsYXNzZXMuIENyZWF0ZSBhIHNpbmdsZSBwbG90IHRoYXQgZGlzcGxheXMgZWFjaCBvZiB0aGVzZSBxdWFudGl0aWVzIGFzIGEgZnVuY3Rpb24gb2Ygy4ZwbTEuIFRoZSB4YXhpcyBzaG91bGQgZGlzcGxheSDLhnBtMSwgcmFuZ2luZyBmcm9tIDAgdG8gMSwgYW5kIHRoZSB5LWF4aXMgc2hvdWxkIGRpc3BsYXkgdGhlIHZhbHVlIG9mIHRoZSBHaW5pIGluZGV4LCBjbGFzc2lmaWNhdGlvbiBlcnJvciwgYW5kIGVudHJvcHkuKiogIApgYGB7cn0KcCA8LSBzZXEoMCwgMSwgMC4wMDEpCmdpbmkuaW5kZXggPC0gMiAqIHAgKiAoMSAtIHApCmNsYXNzLmVycm9yIDwtIDEgLSBwbWF4KHAsIDEgLSBwKQpjcm9zcy5lbnRyb3B5IDwtIC0gKHAgKiBsb2cocCkgKyAoMSAtIHApICogbG9nKDEgLSBwKSkKbWF0cGxvdChwLCBjYmluZChnaW5pLmluZGV4LCBjbGFzcy5lcnJvciwgY3Jvc3MuZW50cm9weSksIGNvbCA9IGMoIiNmZGUwZGQiLCAiI2ZhOWZiNSIsICIjYzUxYjhhIikpCmBgYAoKIyMgUHJvYmxlbSA4CioqSW4gdGhlIGxhYiwgYSBjbGFzc2lmaWNhdGlvbiB0cmVlIHdhcyBhcHBsaWVkIHRvIHRoZSBDYXJzZWF0cyBkYXRhIHNldCBhZnRlciBjb252ZXJ0aW5nIFNhbGVzIGludG8gYSBxdWFsaXRhdGl2ZSByZXNwb25zZSB2YXJpYWJsZS4gTm93IHdlIHdpbGwgc2VlayB0byBwcmVkaWN0IFNhbGVzIHVzaW5nIHJlZ3Jlc3Npb24gdHJlZXMgYW5kIHJlbGF0ZWQgYXBwcm9hY2hlcywgdHJlYXRpbmcgdGhlIHJlc3BvbnNlIGFzIGEgcXVhbnRpdGF0aXZlIHZhcmlhYmxlLioqICAKKiooYSkgU3BsaXQgdGhlIGRhdGEgc2V0IGludG8gYSB0cmFpbmluZyBzZXQgYW5kIGEgdGVzdCBzZXQuKioKYGBge3J9CmxpYnJhcnkoSVNMUikKYXR0YWNoKENhcnNlYXRzKQpzZXQuc2VlZCgxMjMpCnRyYWluIDwtIHNhbXBsZSgxOm5yb3coQ2Fyc2VhdHMpLCBucm93KENhcnNlYXRzKSAvIDIpCkNhci50cmFpbiA8LSBDYXJzZWF0c1t0cmFpbiwgXQpDYXIudGVzdCA8LSBDYXJzZWF0c1stdHJhaW4sXQpgYGAKCioqKGIpIEZpdCBhIHJlZ3Jlc3Npb24gdHJlZSB0byB0aGUgdHJhaW5pbmcgc2V0LiBQbG90IHRoZSB0cmVlLCBhbmQgaW50ZXJwcmV0IHRoZSByZXN1bHRzLiBXaGF0IHRlc3QgTVNFIGRvIHlvdSBvYnRhaW4/KiogIApgYGB7cn0KbGlicmFyeSh0cmVlKQoKcmVnLnRyZWUgPSB0cmVlKFNhbGVzfi4sZGF0YSA9IENhcnNlYXRzLCBzdWJzZXQ9dHJhaW4pCgpzdW1tYXJ5KHJlZy50cmVlKQoKIyBwbG90IHRoZSB0cmVlCnBsb3QocmVnLnRyZWUpCnRleHQocmVnLnRyZWUsIHByZXR0eSA9MCkKCiMgTVNFCnloYXQgPSBwcmVkaWN0KHJlZy50cmVlLG5ld2RhdGEgPSBDYXIudGVzdCkKbWVhbigoeWhhdCAtIENhci50ZXN0JFNhbGVzKV4yKQpgYGAKTm90aWNlIHRoYXQgdGhlIG91dHB1dCBvZiBzdW1tYXJ5KCkgaW5kaWNhdGVzIHRoYXQgb25seSA2IG9mIHRoZSB2YXJpYWJsZXMgaGF2ZSBiZWVuIHVzZWQgaW4gY29uc3RydWN0aW5nIHRoZSB0cmVlLiAgCk1TRSBvZiA0LjQyNzcyNCAgCgoqKihjKSBVc2UgY3Jvc3MtdmFsaWRhdGlvbiBpbiBvcmRlciB0byBkZXRlcm1pbmUgdGhlIG9wdGltYWwgbGV2ZWwgb2YgdHJlZSBjb21wbGV4aXR5LiBEb2VzIHBydW5pbmcgdGhlIHRyZWUgaW1wcm92ZSB0aGUgdGVzdCBNU0U/KiogIApgYGB7cn0Kc2V0LnNlZWQoMSkKY3YuY2FyIDwtIGN2LnRyZWUocmVnLnRyZWUpCnBsb3QoY3YuY2FyJHNpemUsIGN2LmNhciRkZXYsIHR5cGUgPSAiYiIpCmN2LmNhcgpgYGAKSW4gdGhpcyBjYXNlLCB0aGUgbW9zdCBjb21wbGV4IHRyZWUgaXMgc2VsZWN0ZWQgYnkgY3Jvc3MtdmFsaWRhdGlvbi4KSG93ZXZlciwgaWYgd2Ugd2lzaCB0byBwcnVuZSB0aGUgdHJlZSwgd2UgY291bGQgZG8gc28gYXMgZm9sbG93cywgdXNpbmcgdGhlCnBydW5lLnRyZWUoKSBmdW5jdGlvbjogIAoKYGBge3J9CnBydW5lLmNhciA8LSBwcnVuZS50cmVlKHJlZy50cmVlLCBiZXN0ID0gOCkKcGxvdChwcnVuZS5jYXIpCnRleHQocHJ1bmUuY2FyLHByZXR0eT0wKQoKeWhhdDwtcHJlZGljdChwcnVuZS5jYXIsIG5ld2RhdGE9IENhci50ZXN0KQptZWFuKCh5aGF0LUNhci50ZXN0JFNhbGVzKV4yKQpgYGAKKiooZCkgVXNlIHRoZSBiYWdnaW5nIGFwcHJvYWNoIGluIG9yZGVyIHRvIGFuYWx5emUgdGhpcyBkYXRhLiBXaGF0IHRlc3QgTVNFIGRvIHlvdSBvYnRhaW4/IFVzZSB0aGUgaW1wb3J0YW5jZSgpIGZ1bmN0aW9uIHRvIGRldGVybWluZSB3aGljaCB2YXJpYWJsZXMgYXJlIG1vc3QgaW1wb3J0YW50LioqICAKYGBge3J9CmxpYnJhcnkocmFuZG9tRm9yZXN0KQpzZXQuc2VlZCgxKQpiYWcuY2FyID0gcmFuZG9tRm9yZXN0KFNhbGVzfi4sZGF
0YT1DYXIudHJhaW4sbXRyeSA9IDEwLCBpbXBvcnRhbmNlID0gVFJVRSkKeWhhdC5iYWcgPSBwcmVkaWN0KGJhZy5jYXIsbmV3ZGF0YT1DYXIudGVzdCkKbWVhbigoeWhhdC5iYWctQ2FyLnRlc3QkU2FsZXMpXjIpCnZhckltcFBsb3QoYmFnLmNhcikKYGBgCgoqKihlKSBVc2UgcmFuZG9tIGZvcmVzdHMgdG8gYW5hbHl6ZSB0aGlzIGRhdGEuIFdoYXQgdGVzdCBNU0UgZG8geW91IG9idGFpbj8gVXNlIHRoZSBpbXBvcnRhbmNlKCkgZnVuY3Rpb24gdG8gZGV0ZXJtaW5lIHdoaWNoIHZhcmlhYmxlcyBhcmUgbW9zdCBpbXBvcnRhbnQuIERlc2NyaWJlIHRoZSBlZmZlY3Qgb2YgbSwgdGhlIG51bWJlciBvZiB2YXJpYWJsZXMgY29uc2lkZXJlZCBhdCBlYWNoIHNwbGl0LCBvbiB0aGUgZXJyb3IgcmF0ZSBvYnRhaW5lZC4qKiAgCmBgYHtyfQpsaWJyYXJ5KHJhbmRvbUZvcmVzdCkKc2V0LnNlZWQoMSkKcmYuY2FyID0gcmFuZG9tRm9yZXN0KFNhbGVzfi4sZGF0YT1DYXIudHJhaW4sbXRyeSA9IDMsIGltcG9ydGFuY2UgPSBUUlVFKQp5aGF0LnJmID0gcHJlZGljdChyZi5jYXIsbmV3ZGF0YT1DYXIudGVzdCkKbWVhbigoeWhhdC5yZi1DYXIudGVzdCRTYWxlcyleMikKYGBgCgojIyBQcm9ibGVtIDkKKiooYSkgQ3JlYXRlIGEgdHJhaW5pbmcgc2V0IGNvbnRhaW5pbmcgYSByYW5kb20gc2FtcGxlIG9mIDgwMCBvYnNlcnZhdGlvbnMsIGFuZCBhIHRlc3Qgc2V0IGNvbnRhaW5pbmcgdGhlIHJlbWFpbmluZyBvYnNlcnZhdGlvbnMuKioKYGBge3J9CmxpYnJhcnkoSVNMUikKc2V0LnNlZWQoMSkKdHJhaW4gPSBzYW1wbGUoZGltKE9KKVsxXSw4MDApCk9KLnRyYWluID0gT0pbdHJhaW4sXQpPSi50ZXN0ID0gT0pbLXRyYWluLF0KYGBgCgoqKihiKSBGaXQgYSB0cmVlIHRvIHRoZSB0cmFpbmluZyBkYXRhLCB3aXRoIFB1cmNoYXNlIGFzIHRoZSByZXNwb25zZSBhbmQgdGhlIG90aGVyIHZhcmlhYmxlcyBhcyBwcmVkaWN0b3JzLiBVc2UgdGhlIHN1bW1hcnkoKSBmdW5jdGlvbiB0byBwcm9kdWNlIHN1bW1hcnkgc3RhdGlzdGljcyBhYm91dCB0aGUgdHJlZSwgYW5kIGRlc2NyaWJlIHRoZSByZXN1bHRzIG9idGFpbmVkLiBXaGF0IGlzIHRoZSB0cmFpbmluZyBlcnJvciByYXRlPyBIb3cgbWFueSB0ZXJtaW5hbCBub2RlcyBkb2VzIHRoZSB0cmVlIGhhdmU/KiogIApgYGB7cn0KT0oudHJlZSA9IHRyZWUoUHVyY2hhc2V+LiwgZGF0YT1PSi50cmFpbikKc3VtbWFyeShPSi50cmVlKQpgYGAKCgoqKihjKSBUeXBlIGluIHRoZSBuYW1lIG9mIHRoZSB0cmVlIG9iamVjdCBpbiBvcmRlciB0byBnZXQgYSBkZXRhaWxlZCB0ZXh0IG91dHB1dC4gUGljayBvbmUgb2YgdGhlIHRlcm1pbmFsIG5vZGVzLCBhbmQgaW50ZXJwcmV0IHRoZSBpbmZvcm1hdGlvbiBkaXNwbGF5ZWQuKiogIApgYGB7cn0KT0oudHJlZQpgYGAKCioqKGQpIENyZWF0ZSBhIHBsb3Qgb2YgdGhlIHRyZWUsIGFuZCBpbnRlcnByZXQgdGhlIHJlc3VsdHMuKiogIApgYGB7cn0KcGxvdChPSi50cmVlKQp0ZXh0KE9KLnRyZWUscHJldHR5PVRSVUUpCmBgYAoKKiooZSkgUHJlZGljdCB0aGUgcmVzcG9uc2Ugb24gdGhlIHRlc3QgZGF0YSwgYW5kIHByb2R1Y2UgYSBjb25mdXNpb24gbWF0cml4IGNvbXBhcmluZyB0aGUgdGVzdCBsYWJlbHMgdG8gdGhlIHByZWRpY3RlZCB0ZXN0IGxhYmVscy4gV2hhdCBpcyB0aGUgdGVzdCBlcnJvciByYXRlPyoqICAKYGBge3J9CnRyZWUucHJlZCA9IHByZWRpY3QoT0oudHJlZSwgbmV3ZGF0YSA9IE9KLnRlc3QsIHR5cGUgPSAiY2xhc3MiKQp0YWJsZSh0cmVlLnByZWQsT0oudGVzdCRQdXJjaGFzZSkKYGBgCgoqKihmKSBBcHBseSB0aGUgY3YudHJlZSgpIGZ1bmN0aW9uIHRvIHRoZSB0cmFpbmluZyBzZXQgaW4gb3JkZXIgdG8gZGV0ZXJtaW5lIHRoZSBvcHRpbWFsIHRyZWUgc2l6ZS4qKiAgCmBgYHtyfQpjdi5PSiA9IGN2LnRyZWUoT0oudHJlZSwgRlVOID0gcHJ1bmUubWlzY2xhc3MpCmN2Lk9KCmBgYAoKKiooZykgUHJvZHVjZSBhIHBsb3Qgd2l0aCB0cmVlIHNpemUgb24gdGhlIHgtYXhpcyBhbmQgY3Jvc3MtdmFsaWRhdGVkIGNsYXNzaWZpY2F0aW9uIGVycm9yIHJhdGUgb24gdGhlIHktYXhpcy4qKiAgCmBgYHtyfQpwbG90KGN2Lk9KJHNpemUsY3YuT0okZGV2LHR5cGU9J2InLCB4bGFiID0gIlRyZWUgc2l6ZSIsIHlsYWIgPSAiRGV2aWFuY2UiKQoKYGBgCgoKKiooaSkgUHJvZHVjZSBhIHBydW5lZCB0cmVlIGNvcnJlc3BvbmRpbmcgdG8gdGhlIG9wdGltYWwgdHJlZSBzaXplIG9idGFpbmVkIHVzaW5nIGNyb3NzLXZhbGlkYXRpb24uIElmIGNyb3NzLXZhbGlkYXRpb24gZG9lcyBub3QgbGVhZCB0byBzZWxlY3Rpb24gb2YgYSBwcnVuZWQgdHJlZSwgdGhlbiBjcmVhdGUgYSBwcnVuZWQgdHJlZSB3aXRoIGZpdmUgdGVybWluYWwgbm9kZXMuKiogIApgYGB7cn0KcHJ1bmUuT0ogPSBwcnVuZS5taXNjbGFzcyhPSi50cmVlLCBiZXN0PTUpCnBsb3QocHJ1bmUuT0opCnRleHQocHJ1bmUuT0oscHJldHR5PTApCmBgYAoKKiooaikgQ29tcGFyZSB0aGUgdHJhaW5pbmcgZXJyb3IgcmF0ZXMgYmV0d2VlbiB0aGUgcHJ1bmVkIGFuZCB1bnBydW5lZCB0cmVlcy4gV2hpY2ggaXMgaGlnaGVyPyoqICAKYGBge3J9CnN1bW1hcnkoT0oudHJlZSkKYGBgCgoqKihrKSBDb21wYXJlIHRoZSB0ZXN0IGVycm9yIHJhdGVzIGJldHdlZW4gdGhlIH
BydW5lZCBhbmQgdW5wcnVuZWQgdHJlZXMuIFdoaWNoIGlzIGhpZ2hlcj8qKiAgCmBgYHtyfQp0cmVlLnByZWQgPSBwcmVkaWN0KHBydW5lLk9KLCBuZXdkYXRhID0gT0oudGVzdCwgdHlwZSA9ICJjbGFzcyIpCnRhYmxlKHRyZWUucHJlZCxPSi50ZXN0JFB1cmNoYXNlKQpgYGAK