8.3.1 Fitting Classification Trees

2. Construct the Classification Tree

We now use the tree() function to fit a classification tree in order to predict High using all variables but Sales. The syntax of the tree() function is quite similar to that of the lm() function.
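A minimal sketch of this step, assuming the Carseats data from the ISLR package with High created by thresholding Sales at 8, as in the ISLR lab (the object names are illustrative):

```r
library(tree)
library(ISLR)

# Recode Sales as a binary response: "Yes" if Sales exceeds 8 (thousand units)
High <- as.factor(ifelse(Carseats$Sales <= 8, "No", "Yes"))
Carseats <- data.frame(Carseats, High)

# Fit a classification tree predicting High from every variable except Sales
tree.carseats <- tree(High ~ . - Sales, data = Carseats)
```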

The summary() function lists the variables that are used as internal nodes in the tree, the number of terminal (leaf) nodes, and the training error rate. In this case, the training error rate is 9%. The residual mean deviance reported is simply the deviance divided by n - |T0|, where n is the number of observations and |T0| is the number of terminal nodes; in this case, 400 - 27 = 373.
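Continuing the sketch above:

```r
summary(tree.carseats)
# Output lists: the variables actually used as internal nodes, the number of
# terminal nodes, the residual mean deviance, and the misclassification
# (training) error rate
```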



We use the plot() function to display the tree structure and the text() function to display the node labels. The argument pretty = 0 instructs R to include the category names for any qualitative predictors, rather than simply displaying a letter for each category.
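```r
plot(tree.carseats)              # draw the branch structure
text(tree.carseats, pretty = 0)  # label nodes; pretty = 0 keeps full category names
```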

The most important indicator of Sales appears to be shelving location, and the first branch differentiates Good locations (to the right) from Bad and Medium locations.


If we just type the name of the tree object, R prints output corresponding to each branch of the tree. R displays the split criterion, the number of observations in that branch, the deviance, and the overall prediction for the branch (Yes or No). Branches that lead to terminal nodes are indicated using asterisks.
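Typing the object's name prints this text representation:

```r
tree.carseats
# Each line shows the split criterion (e.g. Price < 92.5), the number of
# observations in the branch, the deviance, the overall prediction (Yes/No),
# and the class proportions; asterisks mark terminal nodes
```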



3. Estimate the Test Error

In this step, we split the observations into a training set and a test set, build the tree using the training set, and evaluate its performance on the test data. The predict() function can be used for this purpose. The argument type = "class" instructs R to return the actual class prediction. This approach leads to correct predictions for 77% of the test observations.
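A sketch of this validation split (the seed and the 200-observation training set follow the ISLR lab and are otherwise arbitrary):

```r
set.seed(2)
train <- sample(1:nrow(Carseats), 200)   # half of the 400 observations
Carseats.test <- Carseats[-train, ]
High.test <- High[-train]

# Refit the tree on the training set only
tree.carseats <- tree(High ~ . - Sales, data = Carseats, subset = train)

# type = "class" returns predicted class labels rather than probabilities
tree.pred <- predict(tree.carseats, Carseats.test, type = "class")
table(tree.pred, High.test)              # confusion matrix
# Accuracy = (correct Yes + correct No) / number of test observations
```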



4. Perform Cross-validation

The cv.tree() function performs cross-validation to determine the optimal level of tree complexity; cost-complexity pruning is used to select a sequence of trees for consideration. We use the argument FUN = prune.misclass to indicate that we want the classification error rate to guide the cross-validation and pruning process, rather than the default for the cv.tree() function, which is deviance.

The cv.tree() function reports the number of terminal nodes of each tree considered (size), the corresponding error rate (dev), and the value of the cost-complexity parameter used (k). Here dev corresponds to the cross-validation error rate. The tree with nine terminal nodes results in the lowest cross-validation error rate, with 66 cross-validation errors.
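A sketch (the seed is illustrative):

```r
set.seed(3)
cv.carseats <- cv.tree(tree.carseats, FUN = prune.misclass)
cv.carseats
# $size: number of terminal nodes of each tree considered
# $dev:  number of cross-validation errors at each size (with FUN = prune.misclass)
# $k:    corresponding values of the cost-complexity parameter
```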


5. Plot the Cross-validation Results and Predict

We plot the error rate as a function of both size and k.
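```r
par(mfrow = c(1, 2))
plot(cv.carseats$size, cv.carseats$dev, type = "b")  # error vs. number of leaves
plot(cv.carseats$k, cv.carseats$dev, type = "b")     # error vs. cost-complexity k
```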


We now apply the prune.misclass() function in order to prune the tree to obtain its nine-node variant.
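For example (the object name is illustrative):

```r
prune.carseats <- prune.misclass(tree.carseats, best = 9)  # keep 9 terminal nodes
plot(prune.carseats)
text(prune.carseats, pretty = 0)
```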

We then apply the predict() function: now 77.5% of the test observations are correctly classified, so the pruning process has improved the classification accuracy. If we change the value of best to another number, however, we obtain a pruned tree with lower classification accuracy.
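```r
tree.pred <- predict(prune.carseats, Carseats.test, type = "class")
table(tree.pred, High.test)   # fraction correct should now be about 77.5%
```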

8.3.2 Fitting Regression Trees

2. Plot the Tree

The tree predicts a median house price of approximately $45,400 for larger homes (rm > 7.553, where rm is the average number of rooms per dwelling).
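A sketch of fitting and plotting the regression tree, assuming the Boston data from the MASS package and a 50/50 train/test split, as in the ISLR lab:

```r
library(MASS)

set.seed(1)
train <- sample(1:nrow(Boston), nrow(Boston) / 2)

# Fit a regression tree for median home value (medv, in $1000s)
tree.boston <- tree(medv ~ ., data = Boston, subset = train)
summary(tree.boston)

plot(tree.boston)
text(tree.boston, pretty = 0)   # leaf labels are predicted medv values
```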


3. Cross-Validation

Now we use the cv.tree() function to see whether pruning the tree will improve performance.
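```r
cv.boston <- cv.tree(tree.boston)   # deviance-guided CV is the default
plot(cv.boston$size, cv.boston$dev, type = "b")
```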

In this case, the most complex tree is selected by cross-validation. However, if we wish to prune the tree, we could do so as follows, using the prune.tree() function.
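For instance, to prune back to five terminal nodes (the value of best here is illustrative):

```r
prune.boston <- prune.tree(tree.boston, best = 5)
plot(prune.boston)
text(prune.boston, pretty = 0)
```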

In keeping with the cross-validation results, we use the unpruned tree to make predictions on the test set.
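```r
yhat <- predict(tree.boston, newdata = Boston[-train, ])
boston.test <- Boston[-train, "medv"]

plot(yhat, boston.test)   # predicted vs. actual
abline(0, 1)              # reference line for perfect predictions

mean((yhat - boston.test)^2)        # test set MSE
sqrt(mean((yhat - boston.test)^2))  # RMSE, in $1000s
```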

The test set MSE associated with the regression tree is 35.29. The square root of the MSE is therefore around 5.94, indicating that this model leads to test predictions that are within approximately $5,940 of the true median home value for the suburb.