One of the earliest applications of the predictive analytics methods we have studied in this class was automatic letter recognition, which post office machines use to sort mail. In this problem, we will build a model that uses statistics of images of four letters in the Roman alphabet – A, B, P, and R – to predict which letter a particular image corresponds to.
Note that this is a multiclass classification problem. We have mostly focused on binary classification problems (e.g., predicting whether an individual voted or not, whether the Supreme Court will affirm or reverse a case, whether or not a person is at risk for a certain disease, etc.). In this problem, there are more than two possible classes for each observation, like in the D2Hawkeye lecture.
The file letters_ABPR.csv contains 3116 observations, each corresponding to an image of one of the four letters A, B, P, and R. The images came from 20 different fonts and were then randomly distorted to produce the final images; each distorted image is represented as a collection of pixels, each of which is “on” or “off”. For each distorted image, we have certain statistics of the image in terms of these pixels, as well as which of the four letters the image represents. This data comes from the UCI Machine Learning Repository.
This dataset contains 17 variables: the letter each image represents, plus 16 numerical statistics computed from the image’s pixels.
Let’s warm up by attempting to predict just whether a letter is B or not. To begin, load the file letters_ABPR.csv into R, and call it letters. Then, create a new variable isB in the dataframe, which takes the value “TRUE” if the observation corresponds to the letter B, and “FALSE” if it does not. You can do this by typing the following command into your R console:
```r
# Load the data
letters = read.csv("letters_ABPR.csv")

# Create the binary outcome isB and convert it to a factor
letters$isB = as.factor(letters$letter == "B")
```

Now split the data set into a training set and a testing set, putting 50% of the data in the training set. Set the seed to 1000 before making the split. The first argument to sample.split should be the dependent variable letters$isB. Remember that the TRUE values from sample.split should go in the training set.
```r
# Split the data (sample.split is in the caTools package)
library(caTools)
set.seed(1000)
spl = sample.split(letters$isB, SplitRatio = 0.5)
train = subset(letters, spl == TRUE)
test = subset(letters, spl == FALSE)

# Tabulate isB in the test set (kable is in the knitr package)
library(knitr)
z = table(test$isB)
kable(z)
```

| Var1 | Freq |
|---|---|
| FALSE | 1175 |
| TRUE | 383 |

A reasonable baseline predicts the most frequent outcome, “not B”, for every test observation:

```r
# Compute baseline accuracy: always predict "not B"
z[1]/sum(z)
##    FALSE
## 0.754172
```

Accuracy = 0.754172.
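As an optional sanity check (not required by the problem), we can verify that sample.split balanced the outcome variable: the proportion of B’s should be nearly identical in the training and testing sets.

```r
# Proportion of B vs. not-B in each set; sample.split keeps these balanced
prop.table(table(train$isB))
prop.table(table(test$isB))
```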
Now build a classification tree to predict whether a letter is a B or not, using the training set to build your model. Remember to remove the variable “letter” from the model, as it is directly related to what we are trying to predict! To remove just one variable, you can either write out the other variables, or remember what we did in the Billboards problem in Week 3 and use the following notation:
```r
# CART model (rpart package)
library(rpart)
CARTb = rpart(isB ~ . - letter, data = train, method = "class")

# Make predictions on the test set
predictions = predict(CARTb, newdata = test, type = "class")

# Confusion matrix: actual outcomes (rows) vs. predictions (columns)
z = table(test$isB, predictions)
kable(z)
```

|  | FALSE | TRUE |
|---|---|---|
| FALSE | 1118 | 57 |
| TRUE | 43 | 340 |

```r
# Compute accuracy: correct predictions on the diagonal over the total
sum(diag(z))/sum(z)
## [1] 0.9358151
```

Accuracy = 0.9358151.
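To see which image statistics the tree actually splits on, one option is to plot it. This is an optional sketch and assumes the rpart.plot package is installed; it is not needed for the accuracy computation above.

```r
# Optional: visualize the fitted classification tree
library(rpart.plot)
prp(CARTb)
```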
Now, build a random forest model to predict whether the letter is a B or not (the isB variable) using the training set. You should use all of the other variables as independent variables, except letter (since it helped us define what we are trying to predict!). Use the default settings for ntree and nodesize (don’t include these arguments at all). Right before building the model, set the seed to 1000. (NOTE: You might get a slightly different answer on this problem, even if you set the random seed. This has to do with your operating system and the implementation of the random forest algorithm.)
```r
# Random forest model (randomForest package)
library(randomForest)
set.seed(1000)
RFb = randomForest(isB ~ . - letter, data = train)

# Make predictions on the test set
predictions = predict(RFb, newdata = test)

# Confusion matrix
z = table(test$isB, predictions)
kable(z)
```

|  | FALSE | TRUE |
|---|---|---|
| FALSE | 1163 | 12 |
| TRUE | 9 | 374 |

```r
# Compute accuracy
sum(diag(z))/sum(z)
## [1] 0.9865212
```

Accuracy = 0.9865212 (your number may differ slightly, as noted above).
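Part of why the random forest outperforms a single tree is that it aggregates many trees grown on bootstrapped samples. To see which pixel statistics matter most to the forest, we can look at variable importance; this is an optional sketch using functions from the randomForest package itself:

```r
# Mean decrease in Gini impurity for each predictor, and a dot plot of it
importance(RFb)
varImpPlot(RFb)
```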
Let us now move on to the problem we were originally interested in: predicting which of the four letters A, B, P, or R a given image corresponds to.
As we saw in the D2Hawkeye lecture, building a multiclass classification CART model in R is no harder than building the models for binary classification problems. Fortunately, building a random forest model is just as easy.
The variable in our data frame which we will be trying to predict is “letter”. Start by converting letter in the original data set (letters) to a factor by running the following command in R:
```r
# Convert letter to a factor
letters$letter = as.factor(letters$letter)
```

Now, generate new training and testing sets of the letters data frame using letters$letter as the first input to the sample.split function. Before splitting, set your seed to 2000. Again put 50% of the data in the training set. (Why do we need to split the data again? Remember that sample.split balances the outcome variable in the training and testing sets. With a new outcome variable, we want to re-generate our split.)
```r
# Split the data again, balancing on the new outcome variable
set.seed(2000)
spl = sample.split(letters$letter, SplitRatio = 0.5)
train2 = subset(letters, spl == TRUE)
test2 = subset(letters, spl == FALSE)

# Tabulate the letters in the test set
z = table(test2$letter)
kable(z)
```

| Var1 | Freq |
|---|---|
| A | 395 |
| B | 383 |
| P | 401 |
| R | 379 |

The baseline model predicts the most frequent letter in the test set (P, with 401 observations) for every observation:

```r
# Compute baseline accuracy: always predict the most frequent letter
max(z)/sum(z)
## [1] 0.2573813
```

Accuracy = 0.2573813.
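Because sample.split balances on letters$letter, each of the four letters should account for roughly a quarter of both sets. An optional check, mirroring the one we did for isB:

```r
# Each letter should appear with roughly equal frequency in both sets
prop.table(table(train2$letter))
prop.table(table(test2$letter))
```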
Now build a classification tree to predict “letter”, using the training set to build your model. You should use all of the other variables as independent variables, except “isB”, since it is related to what we are trying to predict! Just use the default parameters in your CART model. Add the argument method=“class” since this is a classification problem. Even though we have multiple classes here, nothing changes in how we build the model from the binary case.
```r
# CART model for the multiclass problem
CARTletter = rpart(letter ~ . - isB, data = train2, method = "class")

# Make predictions on the test set
predictLetter = predict(CARTletter, newdata = test2, type = "class")

# Confusion matrix: actual letters (rows) vs. predicted letters (columns)
z = table(test2$letter, predictLetter)
kable(z)
```

|  | A | B | P | R |
|---|---|---|---|---|
| A | 348 | 4 | 0 | 43 |
| B | 8 | 318 | 12 | 45 |
| P | 2 | 21 | 363 | 15 |
| R | 10 | 24 | 5 | 340 |

```r
# Compute accuracy
sum(diag(z))/sum(z)
## [1] 0.8786906
```

Accuracy = 0.8786906.
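The overall accuracy hides variation across letters. Reading the confusion matrix row by row, we can compute how often each letter is classified correctly; this optional sketch reuses the matrix z from above:

```r
# Per-letter accuracy: correct counts on the diagonal over each row total
diag(z)/rowSums(z)
```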
Now build a random forest model on the training data, using the same independent variables as in the previous problem – again, don’t forget to remove the isB variable. Just use the default parameter values for ntree and nodesize (you don’t need to include these arguments at all). Set the seed to 1000 right before building your model. (Remember that you might get a slightly different result even if you set the random seed.)
```r
# Random forest model for the multiclass problem
set.seed(1000)
RFletter = randomForest(letter ~ . - isB, data = train2)

# Make predictions on the test set
predictLetter = predict(RFletter, newdata = test2)

# Confusion matrix
z = table(test2$letter, predictLetter)
kable(z)
```

|  | A | B | P | R |
|---|---|---|---|---|
| A | 391 | 0 | 3 | 1 |
| B | 0 | 380 | 1 | 2 |
| P | 0 | 6 | 394 | 1 |
| R | 3 | 14 | 0 | 362 |

```r
# Compute accuracy
sum(diag(z))/sum(z)
## [1] 0.9801027
```

Accuracy = 0.9801027.
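The same per-letter breakdown (an optional sketch, reusing the random forest’s confusion matrix z from above) shows that the forest improves on CART for every one of the four letters:

```r
# Per-letter accuracy for the random forest, for comparison with CART
diag(z)/rowSums(z)
```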