It can be useful to be able to classify new “test” documents using already classified “training” documents. A common example is using a corpus of labeled spam and ham (non-spam) e-mails to predict whether or not a new document is spam.
For this project, you can start with a spam/ham data-set, then predict the class of new documents (either withheld from the training data-set or from another source such as your own spam folder). One example corpus: https://spamassassin.apache.org/old/publiccorpus/
The tm package will be used to create a corpus of data which will serve as the source of features and observations for the analysis. This will then be converted into a document-term matrix. Finally, the caret package will be used for the model fitting, validation, and testing.
The process of building a ham/spam filter is an oft-used pedagogical tool when teaching predictive modeling. Therefore, there is a multitude of information available on-line and in texts, of which we availed ourselves.
It should be noted that one of the more common packages in recent use for text mining, the RTextTools package, was recently removed from CRAN; personal communication by one of us with the author (who is now building the news feed at LinkedIn) confirmed that the package is abandonware.
Lastly, we understand that the object of this exercise is not to build an excellent predictor but to demonstrate the necessary knowledge required to build classification algorithms.
A document-term matrix (DTM) is the model matrix used in natural language processing (NLP). Its rows represent the documents in the corpus and its columns represent the selected terms or tokens which are treated as features. The value in each cell depends on the weighting scheme selected. The simplest is term frequency (tf), which is just the number of times the word is found in that document. A more sophisticated weighting scheme is term frequency–inverse document frequency (tf-idf). This measure increases with the frequency of the term, but offsets it by the number of documents in which the term appears. This lowers the predictive power of words that naturally appear very often in all kinds of documents, and so do not shed much light on the type of document. This problem is also addressed by removing words so common as to have no predictive power at all, like “and” or “the”. These are often called stop words.
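To make the two weighting schemes concrete, here is a minimal sketch on a three-document toy corpus (not the project data); weightTf and weightTfIdf are tm's built-in weighting functions.

library(tm)
toy_corp <- VCorpus(VectorSource(c("free money offer now",
                                   "meeting agenda and budget money",
                                   "free free offer now now")))
# term-frequency weighting: raw counts of each term per document
dtm_tf <- DocumentTermMatrix(toy_corp, control = list(weighting = weightTf))
# tf-idf weighting: counts down-weighted by how many documents contain the term
dtm_tfidf <- DocumentTermMatrix(toy_corp, control = list(weighting = weightTfIdf))
inspect(dtm_tf)
inspect(dtm_tfidf)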
In the following document, all user-created variables will be in snake_case and all user-created functions will be in CamelCase. Unfortunately, the tm package uses camelCase for its functions. wE aPoLoGIze fOr anY IncoNVenIence.
# allows us to repeat analysis with same outcomes
set.seed(12)
# Enable parallel processing to speed up code
library(doParallel) # library to enable parallel processing to leverage multiple CPUs & cores
num_cores <- detectCores() - 1
#registerDoParallel(cores=num_cores)
cl <- makeCluster(num_cores, type="FORK")
#cl <- makePSOCKcluster(6L)
registerDoParallel(cl)
library(tm) # tool to facilitate building corpus of data
library(SnowballC) # tools to find word stems
library(caret) # tools to run machine learning
library(wordcloud) # tool to help build visual wordclouds
library(tidyverse)
The files were downloaded from the link above, and the spam_2 and easy_ham sets were selected for analysis. These were unzipped so that each email is its own file in the directory.
# Get a list of all the spam file names (each file is a single email message)
s_files <- list.files("./Data/spam_2", full.names = TRUE)
s_len <- length(s_files)
# Get a list of all the ham files names (each file is a single email message)
h_files <- list.files("./Data/easy_ham", full.names = TRUE)
h_len <- length(h_files)
We loaded `r s_len` spam email messages and `r h_len` ham (non-spam) email messages. The first thing to note is that we have an unbalanced data set with more good email messages (ham) than spam. This may affect our choice of models and/or force us to take extra steps to accommodate the difference in set sizes.
We will be focusing on email content, and not the meta information or doing reverse DNS lookups. Therefore, it makes sense to remove the email headers. According to the most recent RFC about email, RFC 5322, Section 2.2, the header should not contain any purely blank lines. Therefore, it is a very reasonable approach to look for the first blank line and only start ingesting the email from the next line. That is what is searched for by the regex pattern "^$" in the function below.
In the headers, some information that could be used to enhance a model might include: the Subject line, the sender’s email address domain name (e.g. @gmail.com, @companyname.com, etc.), whether the sender’s email domain matches the sender’s SMTP server domain name, the hour (UTC) when the email was sent, the origin country (based on SMTP server name or IP address lookup), and potentially information about the originating domain name (e.g. when the domain was registered). If this were a critical project, we could also download RBLs (real-time black lists) and use that information to provide additional pattern matching.
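As an illustration (not part of our pipeline), a hedged sketch of extracting one such header feature, the sender's domain, from a made-up From: line using stringr:

library(stringr)
hdr <- c("From: Alice Example <alice@example.com>", "Subject: quarterly numbers")
# keep only the From: line, then pull the text after the "@"
from_line <- hdr[str_detect(hdr, "^From: ")]
sender_domain <- str_extract(from_line, "(?<=@)[A-Za-z0-9.-]+")  # "example.com"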
The readLines function reads each line of a file into a separate element of a character vector. To turn this into a single character string, the paste function is used with the appropriate sep and collapse values. The class of the document is passed as a parameter to the BuildCorpus function.
#' Build a corpus from a list of file names
#'
#' @param files List of documents to load.
#' @param class The class to be applied to the loaded documents
#' @return A tm VCorpus with the class stored as document metadata
BuildCorpus <- function(files, class) {
  # loop through the files and process each one as we go
  for (i in seq_along(files)) {
    raw_text <- readLines(files[i])
    em_length <- length(raw_text)
    # extract the Subject line (if present) and clean it
    subject_line <- str_extract(raw_text, "^Subject: (.*)$")
    subject_line <- subject_line[!is.na(subject_line)]
    subject_line <- iconv(subject_line, to="UTF-8")
    # scrub / clean up the subject line text
    subject_line <- gsub("[^0-9A-Za-z///' ]", "", subject_line, ignore.case = TRUE, useBytes = TRUE)
    subject_line <- tolower(subject_line)
    subject_line <- str_replace_all(subject_line, "(\\[)|(\\])|(re )|(subject )", "")
    # extract the email body content (everything after the first blank line)
    body_start <- min(grep("^$", raw_text, fixed = FALSE, useBytes = TRUE)) + 1L
    em_body <- paste(raw_text[body_start:em_length], sep="", collapse=" ")
    em_body <- iconv(em_body, to="UTF-8")
    # make the text lower case
    em_body <- tolower(em_body)
    # remove HTML tags and entities
    em_body <- str_replace_all(em_body, "(<[^>]*>)", "")
    em_body <- str_replace_all(em_body, "(&.*;)", "")
    # remove any URLs
    em_body <- str_replace_all(em_body, "http(s)?:(.*) ", " ")
    # remove non-alpha characters (keep lower case letters and apostrophes for contractions)
    em_body <- str_replace_all(em_body, "[^a-z///' ]", "")
    em_body <- str_replace_all(em_body, "''|' ", "")
    # the subject line may carry important information, so concatenate it to the top of the email body
    em_body <- paste(c(subject_line, em_body), sep="", collapse=" ")
    if (i == 1L) {
      ret_Corpus <- VCorpus(VectorSource(em_body))
    } else {
      tmp_Corpus <- VCorpus(VectorSource(em_body))
      ret_Corpus <- c(ret_Corpus, tmp_Corpus)
    }
  }
  meta(ret_Corpus, tag = "class", type = "indexed") <- class
  return(ret_Corpus)
}
h_corp_raw <- BuildCorpus(h_files, "ham")
s_corp_raw <- BuildCorpus(s_files, "spam")
We used many of the default cleaning tools in the tm package to perform standard adjustments like lower-casing, removing numbers, etc. We made two non-native adjustments. First we stripped out anything that looked like a URL. This needed to be done prior to removing punctuation, of course. We also added a few words to the removal list which we think have little predictive power due to their overuse. We considered removing all punctuation, but decided to leave both intra-word contractions and internal punctuation.
Lastly, we used the SnowballC package to stem the document. This process tries to identify common roots shared by similar words and then treat them as one. For example:
wordStem(c('run', 'running', 'ran', 'runt'), language = 'porter')
## [1] "run" "run" "ran" "runt"
The complete cleaning rules are in the CleanCorpus function.
# https://stackoverflow.com/questions/47410866/r-inspect-document-term-matrix-results-in-error-repeated-indices-currently-not
#' Scrub the text in a corpus
#' @param corpus A text corpus prepared by tm
#' @return A sanitized corpus
CleanCorpus <- function(corpus){
  overused_words <- c("ok", 'okay', 'day', "might", "bye", "hello", "hi",
                      "dear", "thank", "you", "please", "sorry")
  # lower case everything
  corpus <- tm_map(corpus, content_transformer(tolower))
  # remove any HTML markup
  removeHTMLTags <- function(x) {gsub("(<[^>]*>)", "", x)}
  corpus <- tm_map(corpus, content_transformer(removeHTMLTags))
  # remove any URLs
  StripURL <- function(x) {gsub("(http[^ ]*)|(www\\.[^ ]*)", "", x)}
  corpus <- tm_map(corpus, content_transformer(StripURL))
  # remove anything that is not a simple letter
  KeepAlpha <- function(x) {gsub("[^a-z///-///' ]", "", x, ignore.case = TRUE, useBytes = TRUE)}
  corpus <- tm_map(corpus, content_transformer(KeepAlpha))
  # remove any numbers
  corpus <- tm_map(corpus, removeNumbers)
  # remove punctuation
  corpus <- tm_map(corpus, removePunctuation,
                   preserve_intra_word_contractions = TRUE,
                   preserve_intra_word_dashes = TRUE)
  # remove any stop words and a few overused words
  corpus <- tm_map(corpus, removeWords, stopwords("english"))
  corpus <- tm_map(corpus, removeWords, overused_words)
  # remove extra white space
  corpus <- tm_map(corpus, stripWhitespace)
  # use the SnowballC stemming algorithm to find the root stem of similar words
  corpus <- tm_map(corpus, stemDocument)
  return(corpus)
}
Even with a cleaned corpus, the overwhelming majority of the terms are rare. There are two ways to address sparsity of terms in the tm package. The first is to generate a list of words that appear at least \(k\) times in the corpus. This is done using the findFreqTerms command. Then the document-term matrix (DTM) can be built using only those words.
The second way is to build the DTM with all words, and then remove the words that don’t appear in at least \(p\%\) of documents. This is done using the removeSparseTerms function in tm. Both methods make manual inspection of more than one line of the matrix impossible. The matrix is stored sparsely as a triplet, and once terms are removed, it becomes impossible for R to print properly.
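A minimal sketch of both approaches side by side, assuming a cleaned corpus clean_corp and its full document-term matrix dtm_all (illustrative names, not objects built elsewhere in this document):

library(tm)
# build-up: keep only terms that occur at least 100 times across the corpus
keep_terms <- findFreqTerms(dtm_all, lowfreq = 100)
dtm_dict <- DocumentTermMatrix(clean_corp, control = list(dictionary = keep_terms))
# prune-down: drop terms absent from more than 95% of documents
dtm_dense <- removeSparseTerms(dtm_all, sparse = 0.95)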
The removeSparseTerms approach is intuitively more appealing, as it measures frequency by document rather than across documents. However, applying it to three separate corpora would leave the validation and testing sets with different word lists than the training set. Therefore, the build-up (dictionary) method will be used, with the word list taken from the terms that remain after the infrequent terms are removed.
However, before we do that, we need to discuss…
Hastie & Tibshirani, in their seminal work ESL, suggest breaking ones data into three parts: 50% training, 25% validation, and 25% testing. Confusingly, some literature uses “test” for the validation set and “holdout” for the test set. Regardless, the idea is that you train your model on 50% of the data, and use 25% of the data (the validation set) to refine any hyper-parameters of the model. You do this for each model, and then once all the models are tuned as best possible, they are compared with each other by their performance on the heretofore unused testing/holdout set. The SplitSample function was used to split the data at the start.
#' Split a sample into Training, Validation and Test groups. Return a vector with the group label for each
#' sample using the provided probabilities. Note: training, validation and test should be non-negative and
#' not all zero.
#' @param n The total number of samples in the set
#' @param training Desired training set size (proportion)
#' @param validation Desired validation set size (proportion)
#' @param test Desired test set size (proportion)
#' @return A character vector of group labels, or FALSE if the proportions are invalid
SplitSample <- function(n, training=0.5, validation=0.25, test=0.25) {
  if((training >= 0 && validation >= 0 && test >= 0) &&
     ((training + validation + test) > 0) &&
     ((training + validation + test) <= 1.0 )) {
    n_split <- sample(x = c("train", "validate", "test"), size = n,
                      replace = TRUE, prob = c(training, validation, test))
  } else {
    n_split <- FALSE
  }
  return(n_split)
}
# build vectors that identify which group each sample will be placed (training, validation or test)
h_split <- SplitSample(h_len)
s_split <- SplitSample(s_len)
Note that with machine learning, another popular approach is to set up K-fold cross-validation. With this approach, we create a Training/Testing split as shown above, train a model, then repeat the process with different random Training/Testing splits. By iterating (typically 5-10 times), we ensure that every observation has a chance of being included during Training or Testing and can appear in any split group. We then average the performance metrics and use that to evaluate the model. This helps reduce the bias that might be introduced by random chance with just a single Training/Testing split.
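A minimal sketch of what that looks like with caret's trainControl; the fold and repeat counts here are illustrative, not the settings used later in this document:

library(caret)
# 5-fold cross-validation, repeated 3 times with different random fold assignments
cv_ctrl <- trainControl(method = "repeatedcv", number = 5L, repeats = 3L)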
If there is a limited number of samples to work with, thus limiting the information available during the training phase, it is common to compromise and use a 70%/30% or 80%/20% Training to Testing split and skip the third Validation set. If observations are scarce, bootstrapping is one method for generating additional data and works well if the known samples provide sufficient representation of the expected distribution of possible values or data points.
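For reference, a sketch of the simpler stratified 70/30 split described above, assuming a factor vector labels of document classes (a hypothetical name, not an object defined elsewhere):

library(caret)
set.seed(12)
train_idx <- createDataPartition(labels, p = 0.7, list = FALSE)  # 70% of rows, stratified by class
train_labels <- labels[train_idx]
test_labels  <- labels[-train_idx]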
When we may have multiple rows from the same source, there is the possibility of leakage between the training and test/validation sets, such that the model performs better on the validation and/or test sets than expected. We are not going to consider this now, but a more rigorous model would tag each row with the sender’s email address and/or IP address and use groupKFold() or some similar technique to ensure all rows from a given sender are kept together in the same data set (training, validation or test). See https://topepo.github.io/caret/data-splitting.html for more information. Note that this approach can lead to complexity … for further discussion, see https://towardsdatascience.com/the-story-of-a-bad-train-test-split-3343fcc33d2c.
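A minimal sketch of that grouping idea with caret::groupKFold, using made-up sender addresses; the resulting index list plugs straight into trainControl:

library(caret)
senders <- c("a@x.com", "a@x.com", "b@y.org", "c@z.net", "b@y.org", "c@z.net")
grp_folds <- groupKFold(senders, k = 3L)  # all rows from one sender stay in the same fold
grouped_ctrl <- trainControl(method = "cv", index = grp_folds)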
As both training and validation are part of the model construction, we feel that the term list can be built from the combination of the two. The terms in the testing/holdout set will not be seen prior to testing. We will restrict the word list to words that appear in at least 100 of the combined 2922 documents. In a real-world scenario, email messages may contain new terms not seen during the training steps. By excluding terms that appear only in the holdout set, we better simulate a real-world implementation where new words appear that were not available during model training.
# pull all terms from the training sets (both ham and spam)
raw_train <- c(h_corp_raw[h_split == "train"],
               s_corp_raw[s_split == "train"])
# pull all terms from the validation sets (both ham and spam)
raw_val <- c(h_corp_raw[h_split == "validate"],
             s_corp_raw[s_split == "validate"])
# pull all terms from the test sets (both ham and spam)
raw_test <- c(h_corp_raw[h_split == "test"],
              s_corp_raw[s_split == "test"])
# combine the training and validation terms into a master list
raw_term_corp <- c(raw_train, raw_val)
clean_term_corp <- CleanCorpus(raw_term_corp)
dtm_terms <- DocumentTermMatrix(clean_term_corp, control = list(bounds = list(global = c(100L, Inf))))
freq_terms <- Terms(dtm_terms)
Here are the top 20 stemmed terms out of the 273 terms we will use in the dictionary:
ft <- colSums(as.matrix(dtm_terms))
ft_df <- data.frame(term = names(ft), count = as.integer(ft))
knitr::kable(head(ft_df[order(ft, decreasing = TRUE), ], n = 20L),
row.names = FALSE)
| term | count |
|---|---|
|  | 1802 |
| will | 1624 |
| use | 1406 |
| can | 1351 |
| get | 1340 |
| just | 996 |
|  | 986 |
| one | 970 |
| list | 957 |
| messag | 928 |
| time | 924 |
| work | 921 |
| free | 889 |
| make | 842 |
| like | 835 |
| now | 789 |
| peopl | 781 |
| new | 740 |
| receiv | 716 |
| click | 628 |
Here is a histogram of word frequency using the Freedman-Diaconis rule for binwidth.
bw_fd <- 2 * IQR(ft_df$count) / (dim(ft_df)[[1]]) ^ (1/3)
ggplot(ft_df, aes(x = count)) + geom_histogram(binwidth = bw_fd) + xlab("Term count")
Finally, a wordcloud of the stemmed terms appearing at least 250 times:
wordcloud(ft_df$term,ft_df$count, scale = c(3, 0.6), min.freq = 250L,
colors = brewer.pal(5, "Dark2"), random.color = TRUE,
random.order = TRUE, rot.per = 0, fixed.asp = FALSE)
# sample is to randomize the observations
clean_train <- sample(CleanCorpus(raw_train))
clean_train_type <- unlist(meta(clean_train, tag = "class"))
attributes(clean_train_type) <- NULL
dtm_train <- DocumentTermMatrix(clean_train,
control = list(dictionary = freq_terms))
dtm_train
## <<DocumentTermMatrix (documents: 1943, terms: 273)>>
## Non-/sparse entries: 37044/493395
## Sparsity : 93%
## Maximal term length: 20
## Weighting : term frequency (tf)
Compare the above with the sparsity of the cleaned training corpus without the limiting dictionary:
dtm_train_S <- DocumentTermMatrix(clean_train)
dtm_train_S
## <<DocumentTermMatrix (documents: 1943, terms: 18284)>>
## Non-/sparse entries: 105872/35419940
## Sparsity : 100%
## Maximal term length: 441
## Weighting : term frequency (tf)
clean_val <- sample(CleanCorpus(raw_val))
clean_val_type <- unlist(meta(clean_val, tag = "class"))
attributes(clean_val_type) <- NULL
dtm_val <- DocumentTermMatrix(clean_val,
control = list(dictionary = freq_terms))
clean_test <- sample(CleanCorpus(raw_test))
clean_test_type <- unlist(meta(clean_test, tag = "class"))
attributes(clean_test_type) <- NULL
dtm_test <- DocumentTermMatrix(clean_test,
control = list(dictionary = freq_terms))
The caret package requires its input to be a numeric matrix. As the DTM is a special form of sparse matrix, we need to convert it to something caret understands. The response vector must be a factor for classification, which is why the three clean_*_type vectors are converted to factors below.
train_m <- as.matrix(dtm_train)
clean_train_type <- factor(clean_train_type, levels = c("spam", "ham"))
val_m <- as.matrix(dtm_val)
clean_val_type <- factor(clean_val_type, levels = c("spam", "ham"))
test_m <- as.matrix(dtm_test)
clean_test_type <- factor(clean_test_type, levels = c("spam", "ham"))
Now we can train the models. The general process is to use the caret package on the training set to pick the “best” model given the supplied control, pre-processing, or other [hyper-]parameters; this may include some level of internal validation. As the caret package serves as an umbrella for over 230 model types living in different packages, we may select a less-sophisticated version of a family if it reduces code complexity and migraine propensity. Forgive us as well if we don’t explain every family and every selection. The model matrices created above are what will be passed to caret.
Experimentation was done with many of the tuning parameters. However, most increases in accuracy came at an inordinate expense of time. Therefore, for the purposes of this exercise, many of the more advantageous options will be limited. For example, cross-validation will be limited to single-pass ten-fold. In production, one should be more rigorous, of course.
Usually AUC, the area under the ROC curve, is used for classification problems. However, for imbalanced data sets it is suggested to use one of precision, recall, or F1 instead.
In our case, the data set is imbalanced, and the cost of a false positive (classifying ham as spam) is greater than a false negative. Originally, we selected precision as the metric, as hitting the “junk” button for something in your inbox is less annoying than having your boss’s email sit in your junk folder.
However, as we trained models, we found some fascinating results. In one of the random forest models, the algorithm found a better model with one less false positive, at the expense of 61 more false negatives. Therefore, we decided to redo the tests using the balanced F1 as the optimization metric.
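To make the metrics concrete, a small illustration with made-up counts: precision ignores false negatives entirely, while F1, the harmonic mean of precision and recall, penalizes both error types.

tp <- 300; fp <- 20; fn <- 50                                # made-up confusion-matrix counts
precision <- tp / (tp + fp)                                  # 0.9375 (unaffected by fn)
recall    <- tp / (tp + fn)                                  # ~0.857
f1 <- 2 * precision * recall / (precision + recall)          # ~0.896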
This is the classic, good old logistic regression in R. There are no hyper/tuning parameters, so the only comparison is between methods of cross-validation.
# 10-fold CV
tr_ctrl <- trainControl(method = "cv", number = 10L, classProbs = TRUE,
summaryFunction = prSummary)
LogR1 <- train(x = train_m, y = clean_train_type, method = "glm",
family = "binomial", trControl = tr_ctrl, metric = "F", model=TRUE)
LogR1
## Generalized Linear Model
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1748, 1749, 1748, 1750, 1749, 1749, ...
## Resampling results:
##
## AUC Precision Recall F
## NaN 0.832728 0.8845135 0.8512476
LogR1v <- predict(LogR1, val_m)
confusionMatrix(LogR1v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 299 50
## ham 53 545
##
## Accuracy : 0.8912
## 95% CI : (0.8696, 0.9104)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.7667
##
## Mcnemar's Test P-Value : 0.8438
##
## Precision : 0.8567
## Recall : 0.8494
## F1 : 0.8531
## Prevalence : 0.3717
## Detection Rate : 0.3157
## Detection Prevalence : 0.3685
## Balanced Accuracy : 0.8827
##
## 'Positive' Class : spam
##
# Monte-Carlo cross-validation (leave-group-out) using 75/25 splits and 10 repetitions
tr_ctrl <- trainControl(method = "LGOCV", number = 10L, p = 0.75,
classProbs = TRUE, summaryFunction = prSummary)
LogR2 <- train(x = train_m, y = clean_train_type, method = "glm",
family = "binomial", trControl = tr_ctrl, metric = "F", model=TRUE)
LogR2
## Generalized Linear Model
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Repeated Train/Test Splits Estimated (10 reps, 75%)
## Summary of sample sizes: 1458, 1458, 1458, 1458, 1458, 1458, ...
## Resampling results:
##
## AUC Precision Recall F
## NaN 0.8312224 0.8450867 0.8355922
LogR2v <- predict(LogR2, val_m)
confusionMatrix(LogR2v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 299 50
## ham 53 545
##
## Accuracy : 0.8912
## 95% CI : (0.8696, 0.9104)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.7667
##
## Mcnemar's Test P-Value : 0.8438
##
## Precision : 0.8567
## Recall : 0.8494
## F1 : 0.8531
## Prevalence : 0.3717
## Detection Rate : 0.3157
## Detection Prevalence : 0.3685
## Balanced Accuracy : 0.8827
##
## 'Positive' Class : spam
##
Both versions performed the same on the validation set. As the first has a slightly better F-score, we will select that one.
Which terms had the most influence on ham/spam classification using Logistic Regression?
# estimate variable importance
importance <- varImp(LogR2)
# summarize importance
print(importance)
## glm variable importance
##
## only 20 most important variables shown (out of 273)
##
## Overall
## compani 100.0000
## new 58.5875
## like 36.8953
## subject 36.8953
## phone 0.3419
## list 0.3171
## lot 0.2861
## news 0.2845
## look 0.2753
## world 0.2709
## got 0.2641
## work 0.2596
## make 0.2414
## want 0.2363
## welcom 0.2314
## come 0.2077
## system 0.2050
## version 0.1977
## anoth 0.1934
## point 0.1823
The ranger package is used as the random forest engine due to its being optimized for higher dimensions.
tr_ctrl <- trainControl(method = "cv", number = 10L, classProbs = TRUE,
summaryFunction = prSummary)
RF1 <- train(x = train_m, y = clean_train_type, method = 'ranger', importance = 'impurity',
trControl = tr_ctrl, metric = "F", tuneLength = 5L)
RF1
## Random Forest
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1748, 1748, 1749, 1749, 1748, 1749, ...
## Resampling results across tuning parameters:
##
## mtry splitrule AUC Precision Recall F
## 2 gini NaN 0.9788718 0.8013665 0.8809090
## 2 extratrees NaN 0.9817402 0.7768944 0.8670101
## 69 gini NaN 0.9086450 0.8978468 0.9026036
## 69 extratrees NaN 0.9124903 0.9050311 0.9079492
## 137 gini NaN 0.8978854 0.8935818 0.8951045
## 137 extratrees NaN 0.9057635 0.9007660 0.9024300
## 205 gini NaN 0.8884988 0.8964596 0.8917135
## 205 extratrees NaN 0.8960593 0.8993168 0.8969889
## 273 gini NaN 0.8820752 0.8950104 0.8877914
## 273 extratrees NaN 0.8965541 0.9007660 0.8978900
##
## Tuning parameter 'min.node.size' was held constant at a value of 1
## F was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 69, splitrule =
## extratrees and min.node.size = 1.
RF1v <- predict(RF1, newdata=val_m)
confusionMatrix(RF1v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 313 28
## ham 39 567
##
## Accuracy : 0.9293
## 95% CI : (0.911, 0.9448)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.8476
##
## Mcnemar's Test P-Value : 0.2218
##
## Precision : 0.9179
## Recall : 0.8892
## F1 : 0.9033
## Prevalence : 0.3717
## Detection Rate : 0.3305
## Detection Prevalence : 0.3601
## Balanced Accuracy : 0.9211
##
## 'Positive' Class : spam
##
Let’s do a bit wider search among tuning parameters.
rf_grid <- expand.grid(mtry = seq(8, 48, 4),
splitrule = c('gini', 'extratrees'),
min.node.size = c(1L, 10L))
RF2 <- train(x = train_m, y = clean_train_type, method = 'ranger', importance = 'impurity',
trControl = tr_ctrl, metric = "F", tuneGrid = rf_grid)
RF2
## Random Forest
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1748, 1749, 1749, 1749, 1748, 1749, ...
## Resampling results across tuning parameters:
##
## mtry splitrule min.node.size AUC Precision Recall F
## 8 gini 1 NaN 0.9560836 0.8805797 0.9160525
## 8 gini 10 NaN 0.9515864 0.8791304 0.9131949
## 8 extratrees 1 NaN 0.9721391 0.8834576 0.9251636
## 8 extratrees 10 NaN 0.9705734 0.8790476 0.9218051
## 12 gini 1 NaN 0.9401049 0.8834783 0.9102516
## 12 gini 10 NaN 0.9506126 0.8878054 0.9174724
## 12 extratrees 1 NaN 0.9577979 0.8906211 0.9224052
## 12 extratrees 10 NaN 0.9578437 0.8878054 0.9207585
## 16 gini 1 NaN 0.9357096 0.8849068 0.9091086
## 16 gini 10 NaN 0.9372691 0.8849068 0.9098376
## 16 extratrees 1 NaN 0.9431809 0.8906418 0.9155021
## 16 extratrees 10 NaN 0.9534962 0.8892754 0.9195344
## 20 gini 1 NaN 0.9292837 0.8920911 0.9098575
## 20 gini 10 NaN 0.9307878 0.8892340 0.9090334
## 20 extratrees 1 NaN 0.9448985 0.8921325 0.9171801
## 20 extratrees 10 NaN 0.9451897 0.8935818 0.9179955
## 24 gini 1 NaN 0.9251642 0.8934990 0.9086789
## 24 gini 10 NaN 0.9280280 0.8877433 0.9069372
## 24 extratrees 1 NaN 0.9380455 0.8921325 0.9138337
## 24 extratrees 10 NaN 0.9427532 0.8935404 0.9166868
## 28 gini 1 NaN 0.9154855 0.8920911 0.9032447
## 28 gini 10 NaN 0.9210724 0.8920704 0.9059209
## 28 extratrees 1 NaN 0.9330126 0.9007453 0.9159902
## 28 extratrees 10 NaN 0.9398179 0.8935611 0.9153825
## 32 gini 1 NaN 0.9146543 0.8949482 0.9042545
## 32 gini 10 NaN 0.9172048 0.8920497 0.9039964
## 32 extratrees 1 NaN 0.9338833 0.8964182 0.9142360
## 32 extratrees 10 NaN 0.9380676 0.8906625 0.9131151
## 36 gini 1 NaN 0.9156898 0.8963561 0.9055929
## 36 gini 10 NaN 0.9171576 0.8906211 0.9032405
## 36 extratrees 1 NaN 0.9300194 0.9050311 0.9169262
## 36 extratrees 10 NaN 0.9374517 0.8992754 0.9173512
## 40 gini 1 NaN 0.9054875 0.8963561 0.9004662
## 40 gini 10 NaN 0.9171086 0.8934990 0.9047672
## 40 extratrees 1 NaN 0.9237390 0.8949689 0.9087357
## 40 extratrees 10 NaN 0.9341807 0.8963975 0.9142381
## 44 gini 1 NaN 0.9105831 0.8949275 0.9022434
## 44 gini 10 NaN 0.9127428 0.8920704 0.9019550
## 44 extratrees 1 NaN 0.9256656 0.9007453 0.9126551
## 44 extratrees 10 NaN 0.9324693 0.8964182 0.9136360
## 48 gini 1 NaN 0.9089833 0.8949275 0.9015684
## 48 gini 10 NaN 0.9110386 0.8963768 0.9031351
## 48 extratrees 1 NaN 0.9256536 0.9007039 0.9125853
## 48 extratrees 10 NaN 0.9264533 0.8920911 0.9085885
##
## F was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 8, splitrule =
## extratrees and min.node.size = 1.
RF2v <- predict(RF2, val_m)
confusionMatrix(RF2v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 305 10
## ham 47 585
##
## Accuracy : 0.9398
## 95% CI : (0.9227, 0.9541)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.8683
##
## Mcnemar's Test P-Value : 1.858e-06
##
## Precision : 0.9683
## Recall : 0.8665
## F1 : 0.9145
## Prevalence : 0.3717
## Detection Rate : 0.3221
## Detection Prevalence : 0.3326
## Balanced Accuracy : 0.9248
##
## 'Positive' Class : spam
##
Interestingly, the first model performed better on the validation set despite performing more poorly on the training set. Possibly an example of overfitting.
Which terms had the most influence on ham/spam classification using Random Forest?
# estimate variable importance
importance <- varImp(RF2)
# summarize importance
print(importance)
## ranger variable importance
##
## only 20 most important variables shown (out of 273)
##
## Overall
## click 100.00
## url 87.88
## wrote 46.93
## remov 32.66
## free 28.51
## visit 27.96
## receiv 22.60
## contenttransferencod 20.69
## email 18.88
## offer 18.86
## guarante 18.61
## credit 18.52
## inform 15.03
## contenttyp 13.90
## use 13.55
## repli 13.23
## fill 12.23
## onlin 11.90
## unsubscrib 11.46
## sep 10.97
tr_ctrl <- trainControl(method = "cv", number = 10L, classProbs = TRUE,
summaryFunction = prSummary)
NB1 <- train(x = train_m, y = clean_train_type, method = "nb",
trControl = tr_ctrl, metric = "F")
NB1
## Naive Bayes
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1748, 1748, 1748, 1750, 1749, 1749, ...
## Resampling results across tuning parameters:
##
## usekernel AUC Precision Recall F
## FALSE NaN NaN NaN NaN
## TRUE NaN 0.9466667 0.03451346 0.06611488
##
## Tuning parameter 'fL' was held constant at a value of 0
## Tuning
## parameter 'adjust' was held constant at a value of 1
## F was used to select the optimal model using the largest value.
## The final values used for the model were fL = 0, usekernel = TRUE
## and adjust = 1.
NB1v <- predict(NB1, val_m)
confusionMatrix(NB1v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 14 0
## ham 338 595
##
## Accuracy : 0.6431
## 95% CI : (0.6116, 0.6736)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : 0.1821
##
## Kappa : 0.0495
##
## Mcnemar's Test P-Value : <2e-16
##
## Precision : 1.00000
## Recall : 0.03977
## F1 : 0.07650
## Prevalence : 0.37170
## Detection Rate : 0.01478
## Detection Prevalence : 0.01478
## Balanced Accuracy : 0.51989
##
## 'Positive' Class : spam
##
This model performs terribly. Naive Bayes is known to be very sensitive to class imbalances. Let’s implement up-sampling and a wider search.
tr_ctrl <- trainControl(method = "cv", number = 10L, classProbs = TRUE,
summaryFunction = prSummary, sampling = 'up')
nb_grid <- expand.grid(usekernel = TRUE,
fL = seq(0.25, 0.75, 0.05),
adjust = 1)
NB2 <- train(x = train_m, y = clean_train_type, method = "nb",
trControl = tr_ctrl, metric = "F", tuneGrid = nb_grid)
NB2
## Naive Bayes
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1749, 1748, 1749, 1749, 1748, 1748, ...
## Addtional sampling using up-sampling
##
## Resampling results across tuning parameters:
##
## fL AUC Precision Recall F
## 0.25 NaN 0.9541667 0.1166046 0.2037892
## 0.30 NaN 0.9666667 0.1137267 0.1994249
## 0.35 NaN 0.9088578 0.1051760 0.1841190
## 0.40 NaN 0.9333333 0.1081366 0.1881287
## 0.45 NaN 0.9394737 0.1194824 0.2057451
## 0.50 NaN 0.9155043 0.1412008 0.2309263
## 0.55 NaN 0.9173993 0.1238716 0.2092754
## 0.60 NaN 0.9375000 0.1109524 0.1908032
## 0.65 NaN 0.9416667 0.1153209 0.2004130
## 0.70 NaN 0.9266667 0.1123395 0.1935542
## 0.75 NaN 0.9604167 0.1166874 0.2039317
##
## Tuning parameter 'usekernel' was held constant at a value of TRUE
##
## Tuning parameter 'adjust' was held constant at a value of 1
## F was used to select the optimal model using the largest value.
## The final values used for the model were fL = 0.5, usekernel = TRUE
## and adjust = 1.
NB2v <- predict(NB2, val_m)
confusionMatrix(NB2v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 23 0
## ham 329 595
##
## Accuracy : 0.6526
## 95% CI : (0.6213, 0.6829)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : 0.06465
##
## Kappa : 0.0808
##
## Mcnemar's Test P-Value : < 2e-16
##
## Precision : 1.00000
## Recall : 0.06534
## F1 : 0.12267
## Prevalence : 0.37170
## Detection Rate : 0.02429
## Detection Prevalence : 0.02429
## Balanced Accuracy : 0.53267
##
## 'Positive' Class : spam
##
Results are still miserable. Naive Bayes also assumes independence between all features; with English text, words/terms are likely to be correlated, violating the core assumption of Naive Bayes. Since our current terms also include some leakage of HTML tags and attributes, there are going to be correlations between the terms we have selected. Naive Bayes would probably perform significantly better if we stripped all HTML terms and made a pass at reducing features by looking for correlations.
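One hedged sketch of that reduction pass, using caret::findCorrelation on the training matrix built above; it assumes any zero-variance columns have already been dropped, otherwise cor() will produce NAs:

library(caret)
term_cor <- cor(train_m)                               # pairwise correlations between term counts
drop_cols <- findCorrelation(term_cor, cutoff = 0.75)  # columns to remove to break high correlations
train_m_reduced <- if (length(drop_cols) > 0) train_m[, -drop_cols, drop = FALSE] else train_m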
Which terms had the most influence on ham/spam classification using Naive Bayes?
# estimate variable importance
importance <- varImp(NB2)
# summarize importance
print(importance)
## ROC curve variable importance
##
## only 20 most important variables shown (out of 273)
##
## Importance
## click 100.00
## email 89.99
## wrote 74.99
## remov 71.61
## receiv 68.86
## free 61.91
## will 58.12
## url 57.92
## inform 50.18
## busi 44.29
## address 39.52
## offer 39.47
## money 34.79
## repli 34.78
## contenttransferencod 33.53
## month 31.43
## now 31.20
## contenttyp 30.01
## send 29.89
## form 29.85
tr_ctrl <- trainControl(method = "cv", number = 10L, classProbs = TRUE,
summaryFunction = prSummary)
NN1 <- train(x = train_m, y = clean_train_type, method = "nnet", trace = FALSE,
trControl = tr_ctrl, metric = "F", tuneLength=5L, maxit = 250L)
NN1
## Neural Network
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1748, 1749, 1750, 1749, 1748, 1749, ...
## Resampling results across tuning parameters:
##
## size decay AUC Precision Recall F
## 1 0e+00 NaN 0.8561658 0.8735197 0.8605048
## 1 1e-04 NaN 0.8755199 0.8721118 0.8711027
## 1 1e-03 NaN 0.8923953 0.8778468 0.8832447
## 1 1e-02 NaN 0.8916082 0.8705590 0.8800400
## 1 1e-01 NaN 0.9033437 0.8720704 0.8857136
## 3 0e+00 NaN 0.8804541 0.8835818 0.8807774
## 3 1e-04 NaN 0.8903736 0.8719876 0.8801829
## 3 1e-03 NaN 0.8993386 0.8706418 0.8829952
## 3 1e-02 NaN 0.9003442 0.8619462 0.8795136
## 3 1e-01 NaN 0.9075439 0.8806625 0.8926490
## 5 0e+00 NaN NaN NaN NaN
## 5 1e-04 NaN NaN NaN NaN
## 5 1e-03 NaN NaN NaN NaN
## 5 1e-02 NaN NaN NaN NaN
## 5 1e-01 NaN NaN NaN NaN
## 7 0e+00 NaN NaN NaN NaN
## 7 1e-04 NaN NaN NaN NaN
## 7 1e-03 NaN NaN NaN NaN
## 7 1e-02 NaN NaN NaN NaN
## 7 1e-01 NaN NaN NaN NaN
## 9 0e+00 NaN NaN NaN NaN
## 9 1e-04 NaN NaN NaN NaN
## 9 1e-03 NaN NaN NaN NaN
## 9 1e-02 NaN NaN NaN NaN
## 9 1e-01 NaN NaN NaN NaN
##
## F was used to select the optimal model using the largest value.
## The final values used for the model were size = 3 and decay = 0.1.
NN1v <- predict(NN1, val_m)
confusionMatrix(NN1v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 312 30
## ham 40 565
##
## Accuracy : 0.9261
## 95% CI : (0.9075, 0.9419)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.8408
##
## Mcnemar's Test P-Value : 0.2821
##
## Precision : 0.9123
## Recall : 0.8864
## F1 : 0.8991
## Prevalence : 0.3717
## Detection Rate : 0.3295
## Detection Prevalence : 0.3611
## Balanced Accuracy : 0.9180
##
## 'Positive' Class : spam
##
Some light tuning:
nn_grid <- expand.grid(size = 1L, decay = c(0.99, seq(0.95, 0.05, -0.05), 0.01))
NN2 <- train(x = train_m, y = clean_train_type, method = "nnet", trace = FALSE,
trControl = tr_ctrl, metric = "F", tuneGrid = nn_grid,
maxit = 250L)
NN2
## Neural Network
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1749, 1749, 1748, 1749, 1749, 1748, ...
## Resampling results across tuning parameters:
##
## decay AUC Precision Recall F
## 0.01 NaN 0.8901851 0.8848447 0.8850931
## 0.05 NaN 0.8907265 0.8805383 0.8841007
## 0.10 NaN 0.9118240 0.8790683 0.8941087
## 0.15 NaN 0.9102083 0.8747826 0.8911191
## 0.20 NaN 0.9105165 0.8805590 0.8942741
## 0.25 NaN 0.9156290 0.8819462 0.8976071
## 0.30 NaN 0.9207427 0.8805176 0.8994246
## 0.35 NaN 0.9187761 0.8848861 0.9005812
## 0.40 NaN 0.9245709 0.8863147 0.9039597
## 0.45 NaN 0.9189044 0.8848654 0.9004538
## 0.50 NaN 0.9265699 0.8819876 0.9028062
## 0.55 NaN 0.9243469 0.8848654 0.9030637
## 0.60 NaN 0.9254402 0.8848654 0.9037857
## 0.65 NaN 0.9227472 0.8849068 0.9023392
## 0.70 NaN 0.9284551 0.8877640 0.9066254
## 0.75 NaN 0.9293481 0.8820083 0.9041183
## 0.80 NaN 0.9336852 0.8848861 0.9076063
## 0.85 NaN 0.9351193 0.8848861 0.9082099
## 0.90 NaN 0.9365496 0.8877847 0.9104182
## 0.95 NaN 0.9388448 0.8834576 0.9093721
## 0.99 NaN 0.9377932 0.8820290 0.9080236
##
## Tuning parameter 'size' was held constant at a value of 1
## F was used to select the optimal model using the largest value.
## The final values used for the model were size = 1 and decay = 0.9.
NN2v <- predict(NN2, val_m)
confusionMatrix(NN2v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 312 18
## ham 40 577
##
## Accuracy : 0.9388
## 95% CI : (0.9215, 0.9532)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.8672
##
## Mcnemar's Test P-Value : 0.005826
##
## Precision : 0.9455
## Recall : 0.8864
## F1 : 0.9150
## Prevalence : 0.3717
## Detection Rate : 0.3295
## Detection Prevalence : 0.3485
## Balanced Accuracy : 0.9281
##
## 'Positive' Class : spam
##
The second model performed better on both the training resamples and the validation set, so we will use it.
Which terms had the most influence on ham/spam classification using a Neural Network?
# estimate variable importance
importance <- varImp(NN2)
# summarize importance
print(importance)
## nnet variable importance
##
## only 20 most important variables shown (out of 273)
##
## Overall
## url 100.00
## click 80.71
## wrote 67.61
## write 41.78
## use 39.23
## two 38.81
## spam 38.66
## visit 37.63
## guarante 36.31
## someth 35.82
## minut 35.67
## file 35.16
## origin 34.74
## price 34.09
## home 33.78
## credit 33.53
## old 33.21
## contenttransferencod 33.13
## satalk 33.10
## seem 32.31
tr_ctrl <- trainControl(method = "cv", number = 10L, classProbs = TRUE,
summaryFunction = prSummary)
GBM1 <- train(x = train_m, y = clean_train_type, method = "gbm", verbose = FALSE,
trControl = tr_ctrl, tuneLength = 5L, metric = "F")
GBM1v <- predict(GBM1, val_m)
confusionMatrix(GBM1v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 307 16
## ham 45 579
##
## Accuracy : 0.9356
## 95% CI : (0.918, 0.9504)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.8597
##
## Mcnemar's Test P-Value : 0.000337
##
## Precision : 0.9505
## Recall : 0.8722
## F1 : 0.9096
## Prevalence : 0.3717
## Detection Rate : 0.3242
## Detection Prevalence : 0.3411
## Balanced Accuracy : 0.9226
##
## 'Positive' Class : spam
##
This model looks really good. Let’s throw a little extra fine-tuning in. After running a wide-scale grid, the best option is selected below, so that the entire grid doesn’t have to rerun every time.
gbm_grid <- expand.grid(n.trees = 400L,
interaction.depth = 7L,
shrinkage = 0.1,
n.minobsinnode = 10L)
GBM2 <- train(x = train_m, y = clean_train_type, method = "gbm", verbose = FALSE,
trControl = tr_ctrl, tuneGrid = gbm_grid, metric = "F")
GBM2
## Stochastic Gradient Boosting
##
## 1943 samples
## 273 predictor
## 2 classes: 'spam', 'ham'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold)
## Summary of sample sizes: 1748, 1749, 1749, 1749, 1749, 1748, ...
## Resampling results:
##
## AUC Precision Recall F
## NaN 0.9312035 0.8934783 0.9114161
##
## Tuning parameter 'n.trees' was held constant at a value of 400
## Tuning parameter 'interaction.depth' was held constant at a value of 7
## Tuning parameter 'shrinkage' was held constant at a value of 0.1
##
## Tuning parameter 'n.minobsinnode' was held constant at a value of 10
GBM2v <- predict(GBM2, val_m)
confusionMatrix(GBM2v, clean_val_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 313 19
## ham 39 576
##
## Accuracy : 0.9388
## 95% CI : (0.9215, 0.9532)
## No Information Rate : 0.6283
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.8673
##
## Mcnemar's Test P-Value : 0.0126
##
## Precision : 0.9428
## Recall : 0.8892
## F1 : 0.9152
## Prevalence : 0.3717
## Detection Rate : 0.3305
## Detection Prevalence : 0.3506
## Balanced Accuracy : 0.9286
##
## 'Positive' Class : spam
##
The second model performed better.
Which terms had the most influence on ham/spam classification using Gradient Boosted Machines?
# estimate variable importance
summary(GBM2)
## var rel.inf
## click click 2.161020e+01
## wrote wrote 7.605447e+00
## contenttransferencod contenttransferencod 5.853304e+00
## url url 5.754147e+00
## email email 4.467909e+00
## free free 4.467148e+00
## credit credit 3.519458e+00
## use use 2.539157e+00
## inform inform 2.489711e+00
## remov remov 2.211366e+00
## visit visit 2.100716e+00
## tri tri 1.860417e+00
## will will 1.572677e+00
## receiv receiv 1.572550e+00
## money money 1.503986e+00
## repli repli 1.471839e+00
## guarante guarante 1.361171e+00
## satalk satalk 1.300317e+00
## file file 1.038014e+00
## think think 9.985085e-01
## spam spam 9.400972e-01
## write write 9.023706e-01
## month month 8.311448e-01
## price price 7.122357e-01
## just just 6.271718e-01
## run run 6.203517e-01
## onlin onlin 6.137794e-01
## life life 6.110274e-01
## origin origin 6.020475e-01
## compani compani 6.014813e-01
## home home 5.901168e-01
## offer offer 5.830631e-01
## said said 5.816172e-01
## seem seem 5.670010e-01
## post post 5.120565e-01
## someth someth 4.929206e-01
## set set 4.652284e-01
## two two 4.232338e-01
## messag messag 4.145107e-01
## old old 3.606918e-01
## busi busi 3.580901e-01
## base base 3.271115e-01
## minut minut 3.242680e-01
## say say 3.127809e-01
## date date 3.071246e-01
## linux linux 2.945055e-01
## list list 2.847406e-01
## problem problem 2.783897e-01
## like like 2.603574e-01
## contact contact 2.534431e-01
## internet internet 2.439998e-01
## mail mail 2.239890e-01
## can can 2.170459e-01
## market market 2.152287e-01
## group group 2.129763e-01
## ask ask 1.942057e-01
## page page 1.912279e-01
## first first 1.798282e-01
## get get 1.742528e-01
## know know 1.637295e-01
## form form 1.606384e-01
## still still 1.548644e-01
## well well 1.511234e-01
## sure sure 1.493177e-01
## increas increas 1.490209e-01
## world world 1.460991e-01
## user user 1.460639e-01
## big big 1.452180e-01
## also also 1.437661e-01
## anyth anyth 1.398074e-01
## order order 1.396753e-01
## cours cours 1.352443e-01
## want want 1.318634e-01
## els els 1.284044e-01
## one one 1.279265e-01
## help help 1.272265e-01
## time time 1.262518e-01
## bit bit 1.253737e-01
## send send 1.246751e-01
## wed wed 1.217600e-01
## peopl peopl 1.214480e-01
## comput comput 1.191559e-01
## bill bill 1.179737e-01
## provid provid 1.176690e-01
## sponsor sponsor 1.126738e-01
## per per 1.099541e-01
## sinc sinc 1.066307e-01
## new new 1.066102e-01
## test test 1.024831e-01
## result result 1.018704e-01
## anyon anyon 9.788953e-02
## phone phone 9.235779e-02
## got got 9.101675e-02
## start start 8.667044e-02
## fix fix 8.482784e-02
## futur futur 8.133557e-02
## end end 7.921247e-02
## look look 7.707685e-02
## welcom welcom 7.643967e-02
## place place 7.452474e-02
## differ differ 7.254353e-02
## now now 7.049492e-02
## window window 6.760719e-02
## info info 6.740470e-02
## find find 6.701832e-02
## friend friend 6.688898e-02
## multipart multipart 6.494659e-02
## anoth anoth 6.448710e-02
## servic servic 6.240642e-02
## day day 6.110203e-02
## high high 6.087286e-02
## server server 6.019775e-02
## show show 5.851082e-02
## subject subject 5.774622e-02
## special special 5.629601e-02
## site site 5.081442e-02
## live live 5.065205e-02
## make make 4.785880e-02
## come come 4.521090e-02
## network network 4.467296e-02
## much much 4.426474e-02
## septemb septemb 4.316123e-02
## found found 4.167335e-02
## idea idea 4.141606e-02
## work work 4.045378e-02
## thank thank 3.962084e-02
## program program 3.946947e-02
## around around 3.886292e-02
## import import 3.808581e-02
## read read 3.799456e-02
## system system 3.789746e-02
## million million 3.788443e-02
## call call 3.567298e-02
## name name 3.445449e-02
## noth noth 3.418957e-02
## may may 3.295285e-02
## thing thing 3.089898e-02
## talk talk 3.051746e-02
## case case 2.996170e-02
## regard regard 2.959185e-02
## keep keep 2.944816e-02
## textplain textplain 2.931279e-02
## seen seen 2.910696e-02
## see see 2.809688e-02
## stuff stuff 2.793521e-02
## back back 2.746908e-02
## bythinkgeek bythinkgeek 2.734656e-02
## account account 2.673125e-02
## part part 2.644753e-02
## possibl possibl 2.617820e-02
## give give 2.613689e-02
## pay pay 2.521518e-02
## version version 2.438420e-02
## need need 2.377230e-02
## someon someon 2.337084e-02
## sfnet sfnet 2.230258e-02
## news news 2.219106e-02
## question question 2.206843e-02
## communic communic 1.964057e-02
## forward forward 1.922125e-02
## mean mean 1.917130e-02
## let let 1.867871e-02
## report report 1.828012e-02
## today today 1.805068e-02
## take take 1.797569e-02
## number number 1.782216e-02
## instal instal 1.781596e-02
## custom custom 1.695841e-02
## process process 1.677739e-02
## real real 1.665066e-02
## easi easi 1.652082e-02
## power power 1.647496e-02
## best best 1.517644e-02
## week week 1.403640e-02
## mayb mayb 1.401136e-02
## heaven heaven 1.378781e-02
## point point 1.340291e-02
## sell sell 1.329788e-02
## data data 1.164278e-02
## packag packag 1.056673e-02
## mani mani 1.024633e-02
## ever ever 9.829212e-03
## better better 9.589975e-03
## simpli simpli 9.007904e-03
## rate rate 8.473232e-03
## person person 8.184903e-03
## year year 7.749456e-03
## next next 6.107026e-03
## issu issu 5.850099e-03
## put put 5.577113e-03
## wait wait 4.659619e-03
## code code 4.548406e-03
## last last 4.533431e-03
## web web 4.368925e-03
## word word 4.303451e-03
## build build 3.927266e-03
## realli realli 3.865272e-03
## contenttyp contenttyp 3.644155e-03
## product product 3.050829e-03
## interest interest 2.844313e-03
## save save 2.808408e-03
## direct direct 2.790693e-03
## includ includ 2.428964e-03
## good good 2.324983e-03
## lot lot 2.007823e-03
## kind kind 1.987814e-03
## geek geek 1.386487e-03
## error error 1.213979e-03
## right right 1.170147e-03
## yes yes 9.860478e-04
## releas releas 6.735418e-04
## abl abl 0.000000e+00
## access access 0.000000e+00
## actual actual 0.000000e+00
## add add 0.000000e+00
## address address 0.000000e+00
## allow allow 0.000000e+00
## alway alway 0.000000e+00
## aug aug 0.000000e+00
## avail avail 0.000000e+00
## bad bad 0.000000e+00
## chang chang 0.000000e+00
## check check 0.000000e+00
## complet complet 0.000000e+00
## cost cost 0.000000e+00
## current current 0.000000e+00
## done done 0.000000e+00
## either either 0.000000e+00
## enough enough 0.000000e+00
## even even 0.000000e+00
## everi everi 0.000000e+00
## everyth everyth 0.000000e+00
## fact fact 0.000000e+00
## feel feel 0.000000e+00
## fill fill 0.000000e+00
## follow follow 0.000000e+00
## format format 0.000000e+00
## full full 0.000000e+00
## great great 0.000000e+00
## happen happen 0.000000e+00
## hour hour 0.000000e+00
## howev howev 0.000000e+00
## instead instead 0.000000e+00
## least least 0.000000e+00
## less less 0.000000e+00
## line line 0.000000e+00
## link link 0.000000e+00
## long long 0.000000e+00
## made made 0.000000e+00
## manag manag 0.000000e+00
## mime mime 0.000000e+00
## must must 0.000000e+00
## never never 0.000000e+00
## note note 0.000000e+00
## probabl probabl 0.000000e+00
## quick quick 0.000000e+00
## reason reason 0.000000e+00
## relat relat 0.000000e+00
## requir requir 0.000000e+00
## sent sent 0.000000e+00
## sep sep 0.000000e+00
## softwar softwar 0.000000e+00
## sourc sourc 0.000000e+00
## state state 0.000000e+00
## support support 0.000000e+00
## tell tell 0.000000e+00
## though though 0.000000e+00
## type type 0.000000e+00
## unsubscrib unsubscrib 0.000000e+00
## way way 0.000000e+00
## wish wish 0.000000e+00
## within within 0.000000e+00
## without without 0.000000e+00
With over 230 possible models, there are many more options to train, like XGBoost, Neural Networks, Bayesian Regression, Support Vector Machines, etc. We don’t need to exhaust the possibilities here.
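For illustration, a hedged sketch of what one more family would look like under the same caret umbrella, a linear support vector machine via the kernlab back-end (method "svmLinear"); we did not run or tune this:

SVM1 <- train(x = train_m, y = clean_train_type, method = "svmLinear",
              trControl = tr_ctrl, metric = "F", tuneLength = 3L)
SVM1v <- predict(SVM1, val_m)
confusionMatrix(SVM1v, clean_val_type, mode = "prec_recall", positive = "spam")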
The best models in the above categories will now be compared against the testing/holdout set:
LogRt <- predict(LogR1, test_m)
RFt <- predict(RF1, newdata=test_m)
NNt <- predict(NN2, test_m)
NBt <- predict(NB2, test_m) # For laughs
GBMt <- predict(GBM2, test_m)
confusionMatrix(LogRt, clean_test_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 292 60
## ham 57 597
##
## Accuracy : 0.8837
## 95% CI : (0.8623, 0.9029)
## No Information Rate : 0.6531
## P-Value [Acc > NIR] : <2e-16
##
## Kappa : 0.7439
##
## Mcnemar's Test P-Value : 0.8533
##
## Precision : 0.8295
## Recall : 0.8367
## F1 : 0.8331
## Prevalence : 0.3469
## Detection Rate : 0.2903
## Detection Prevalence : 0.3499
## Balanced Accuracy : 0.8727
##
## 'Positive' Class : spam
##
confusionMatrix(RFt, clean_test_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 302 20
## ham 47 637
##
## Accuracy : 0.9334
## 95% CI : (0.9162, 0.948)
## No Information Rate : 0.6531
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.8503
##
## Mcnemar's Test P-Value : 0.001491
##
## Precision : 0.9379
## Recall : 0.8653
## F1 : 0.9001
## Prevalence : 0.3469
## Detection Rate : 0.3002
## Detection Prevalence : 0.3201
## Balanced Accuracy : 0.9174
##
## 'Positive' Class : spam
##
confusionMatrix(NNt, clean_test_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 302 20
## ham 47 637
##
## Accuracy : 0.9334
## 95% CI : (0.9162, 0.948)
## No Information Rate : 0.6531
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.8503
##
## Mcnemar's Test P-Value : 0.001491
##
## Precision : 0.9379
## Recall : 0.8653
## F1 : 0.9001
## Prevalence : 0.3469
## Detection Rate : 0.3002
## Detection Prevalence : 0.3201
## Balanced Accuracy : 0.9174
##
## 'Positive' Class : spam
##
confusionMatrix(NBt, clean_test_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 34 3
## ham 315 654
##
## Accuracy : 0.6839
## 95% CI : (0.6542, 0.7126)
## No Information Rate : 0.6531
## P-Value [Acc > NIR] : 0.02111
##
## Kappa : 0.1175
##
## Mcnemar's Test P-Value : < 2e-16
##
## Precision : 0.91892
## Recall : 0.09742
## F1 : 0.17617
## Prevalence : 0.34692
## Detection Rate : 0.03380
## Detection Prevalence : 0.03678
## Balanced Accuracy : 0.54643
##
## 'Positive' Class : spam
##
confusionMatrix(GBMt, clean_test_type, mode = "prec_recall", positive = "spam")
## Confusion Matrix and Statistics
##
## Reference
## Prediction spam ham
## spam 300 9
## ham 49 648
##
## Accuracy : 0.9423
## 95% CI : (0.9261, 0.9559)
## No Information Rate : 0.6531
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.8693
##
## Mcnemar's Test P-Value : 3.04e-07
##
## Precision : 0.9709
## Recall : 0.8596
## F1 : 0.9119
## Prevalence : 0.3469
## Detection Rate : 0.2982
## Detection Prevalence : 0.3072
## Balanced Accuracy : 0.9230
##
## 'Positive' Class : spam
##
From these models, the logistic regression fared worst, misclassifying 60 good emails as spam and letting 57 spam messages through as ham. Setting aside naive Bayes (included for laughs), the remaining models all did quite well, but the winner is the gradient boosted machine, with the highest F-score and the fewest miscategorized emails of any type.
With our initial pass on this project, we did NOT remove HTML from the email messages, and as a consequence HTML tags, attribute names, and values became “words” or “terms” used by our models to help resolve SPAM vs HAM. Interestingly, those models performed significantly better, and the HTML terms ended up being among the most important features used by the models. After seeing this, we modified our email cleaning to actively remove HTML markup. Model performance dropped roughly 10% across all models without HTML. This suggests that the very presence of HTML markup in the corpus is a feature associated with, and predictive of, SPAM.
The email corpus is from the early 2000s, a time when most email clients did NOT use HTML markup by default, so most HAM would NOT have included much, if any, HTML. SPAM, on the other hand, often included HTML links and images intended to draw the recipient to a website or email address where they could buy something.
While the presence of HTML was an indicator of SPAM in the early 2000s, we suspect that models trained with HTML would perform poorly today, as most email clients routinely use HTML markup for text formatting, shared links, and images. For this reason, we chose to remove the HTML and train the models on only the email text, as that might hold up better over time.
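If we wanted to keep that signal without keeping the tags themselves, a hedged alternative (not part of our pipeline) would be a single boolean feature flagging whether a raw message contains anything that looks like an HTML tag:

library(stringr)
# TRUE if any line of the raw message looks like it contains an HTML tag
HasHTML <- function(raw_lines) any(str_detect(raw_lines, "<[A-Za-z][^>]*>"))
HasHTML(c("Hello there", "<html><body>Buy now!</body></html>"))  # TRUE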
Note that while we tried to remove HTML markup, when we inspect the terms we still see some words that look suspiciously like HTML, for example “contenttyp”. This suggests some leakage of HTML that we missed during scrubbing.
If you inspect the terms, you may note missing trailing characters. This is not a bug, but rather part of the word stem approach to simplifying the word list by finding similar words. For example, “run”, “running”, “runs”, “runner” all have the same base “run”. The SnowballC package drops the endings so all the variants collapse to the same word root.
If we really wanted to expand this project, additional features beyond the word list might include the header-derived attributes discussed earlier: the subject line, the sender’s email domain, whether that domain matches the SMTP server domain, the hour the email was sent, the origin country, and RBL lookups.
Since email language and markup change over time, and spammers are constantly changing their email to get past spam filters, any model built to separate HAM from SPAM will probably need to be retrained on an ongoing basis.
sessionInfo()
## R version 3.6.1 (2019-07-05)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 18.04.2 LTS
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/openblas/libblas.so.3
## LAPACK: /usr/lib/x86_64-linux-gnu/libopenblasp-r0.2.20.so
##
## locale:
## [1] LC_CTYPE=C LC_NUMERIC=C
## [3] LC_TIME=C LC_COLLATE=C
## [5] LC_MONETARY=C LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] parallel stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] forcats_0.4.0 stringr_1.4.0 dplyr_0.8.3
## [4] purrr_0.3.3 readr_1.3.1 tidyr_1.0.0
## [7] tibble_2.1.3 tidyverse_1.2.1 wordcloud_2.6
## [10] RColorBrewer_1.1-2 caret_6.0-84 ggplot2_3.2.1
## [13] lattice_0.20-38 SnowballC_0.6.0 tm_0.7-6
## [16] NLP_0.2-0 doParallel_1.0.15 iterators_1.0.12
## [19] foreach_1.4.7
##
## loaded via a namespace (and not attached):
## [1] colorspace_1.4-1 class_7.3-15 rstudioapi_0.10
## [4] prodlim_2019.10.13 lubridate_1.7.4 ranger_0.11.2
## [7] xml2_1.2.2 codetools_0.2-16 splines_3.6.1
## [10] knitr_1.25 zeallot_0.1.0 jsonlite_1.6
## [13] broom_0.5.2 shiny_1.4.0 compiler_3.6.1
## [16] httr_1.4.1 backports_1.1.5 assertthat_0.2.1
## [19] Matrix_1.2-17 fastmap_1.0.1 lazyeval_0.2.2
## [22] cli_1.1.0 later_1.0.0 htmltools_0.4.0
## [25] tools_3.6.1 gtable_0.3.0 glue_1.3.1
## [28] reshape2_1.4.3 Rcpp_1.0.2 slam_0.1-46
## [31] cellranger_1.1.0 vctrs_0.2.0 nlme_3.1-141
## [34] timeDate_3043.102 gower_0.2.1 xfun_0.10
## [37] rvest_0.3.4 mime_0.7 miniUI_0.1.1.1
## [40] lifecycle_0.1.0 MASS_7.3-51.4 MLmetrics_1.1.1
## [43] scales_1.0.0 ipred_0.9-9 hms_0.5.2
## [46] promises_1.1.0 yaml_2.2.0 gridExtra_2.3
## [49] rpart_4.1-15 stringi_1.4.3 highr_0.8
## [52] klaR_0.6-14 e1071_1.7-2 lava_1.6.6
## [55] rlang_0.4.1 pkgconfig_2.0.3 evaluate_0.14
## [58] recipes_0.1.7 labeling_0.3 tidyselect_0.2.5
## [61] gbm_2.1.5 plyr_1.8.4 magrittr_1.5
## [64] R6_2.4.0 generics_0.0.2 combinat_0.0-8
## [67] pillar_1.4.2 haven_2.1.1 withr_2.1.2
## [70] survival_2.44-1.1 nnet_7.3-12 modelr_0.1.5
## [73] crayon_1.3.4 questionr_0.7.0 rmarkdown_1.16
## [76] grid_3.6.1 readxl_1.3.1 data.table_1.12.6
## [79] ModelMetrics_1.2.2 digest_0.6.22 xtable_1.8-4
## [82] httpuv_1.5.2 stats4_3.6.1 munsell_0.5.0
stopCluster(cl)