You will need to have NLTK installed, along with the stopwords corpus downloaded.
# RUN THIS CELL IF YOU NEED
# TO DOWNLOAD NLTK WITH CONDA
# Uncomment the code below and run:
# !conda install nltk #This installs nltk
# import nltk # Imports the library
# nltk.download() #Download the necessary datasets
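If you only need the stopwords corpus used later in this section, nltk.download also accepts the name of a specific dataset:
# Alternative: download just the stopwords corpus
# import nltk
# nltk.download('stopwords')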
The dataset can be found here:
https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
The file contains a collection of more than 5,000 SMS phone messages. You can review the readme file for more info.
Use rstrip() plus a list comprehension to get a list of all the lines of text messages:
> import nltk
> messages = [line.rstrip() for line in
> open('smsspamcollection/SMSSpamCollection')]
> print(len(messages))
5574
> messages[50]
'ham\tWhat you thinked about me. First time you saw me in class.'
A collection of texts is also sometimes called a “corpus”. Let’s print the first ten messages and number them using enumerate:
> for message_no, message in enumerate(messages[:10]):
+ print(message_no, message)
+ print('\n')
0 ham Go until jurong point, crazy.. Available only in bugis n great world la e buffet... Cine there got amore wat...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup final tkts 21st May 2005. Text FA to 87121 to receive entry question(std txt rate)T&C's apply 08452810075over18's
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives around here though
5 spam FreeMsg Hey there darling it's been 3 week's now and no word back! I'd like some fun you up for it still? Tb ok! XxX std chgs to send, £1.50 to rcv
6 ham Even my brother is not like to speak with me. They treat me like aids patent.
7 ham As per your request 'Melle Melle (Oru Minnaminunginte Nurungu Vettam)' has been set as your callertune for all Callers. Press *9 to copy your friends Callertune
8 spam WINNER!! As a valued network customer you have been selected to receivea £900 prize reward! To claim call 09061701461. Claim code KL341. Valid 12 hours only.
9 spam Had your mobile 11 months or more? U R entitled to Update to the latest colour mobiles with camera for Free! Call The Mobile Update Co FREE on 08002986030
Due to the spacing we can tell that this is a TSV (“tab-separated values”) file, where the first column is a label noting whether the given message is a normal message (commonly known as “ham”) or “spam”. The second column is the message itself. (The numbers aren’t part of the file; they are just from the enumerate call.)
Using these labeled ham and spam examples, we’ll train a machine learning model to learn to discriminate between ham/spam automatically. Then, with a trained model, we’ll be able to classify arbitrary unlabeled messages as ham or spam.
From the official SciKit Learn documentation, we can visualize our process:
> knitr::include_graphics("https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/_images/plot_ML_flow_chart_1.png")
> knitr::include_graphics("https://ogrisel.github.io/scikit-learn.org/sklearn-tutorial/_images/plot_ML_flow_chart_3.png")
Instead of parsing TSV manually using Python, we can just take advantage of pandas.
> import pandas as pd
We’ll use read_csv and make note of the sep argument. We can also specify the desired column names by passing in a list to the names argument.
> messages = pd.read_csv('smsspamcollection/SMSSpamCollection',
> sep='\t',names=["label", "message"])
> messages.head()
label message
0 ham Go until jurong point, crazy.. Available only ...
1 ham Ok lar... Joking wif u oni...
2 spam Free entry in 2 a wkly comp to win FA Cup fina...
3 ham U dun say so early hor... U c already then say...
4 ham Nah I don't think he goes to usf, he lives aro...
> messages.describe()
label message
count 5572 5572
unique 2 5169
top ham Sorry, I'll call later
freq 4825 30
We can use groupby to run describe by label:
> messages.groupby('label').describe()
message
count unique top freq
label
ham 4825 4516 Sorry, I'll call later 30
spam 747 653 Please call our customer service representativ... 4
As we continue our analysis we want to start thinking about the features that we’ll be using. This goes along with the general idea of feature engineering. Feature engineering is a very large part of spam detection.
Let’s make a new column to detect how long the text messages are:
> messages['length'] = messages['message'].apply(len)
+ messages.head()
label message length
0 ham Go until jurong point, crazy.. Available only ... 111
1 ham Ok lar... Joking wif u oni... 29
2 spam Free entry in 2 a wkly comp to win FA Cup fina... 155
3 ham U dun say so early hor... U c already then say... 49
4 ham Nah I don't think he goes to usf, he lives aro... 61
> import matplotlib.pyplot as plt
+ import seaborn as sns
> plt.figure(figsize=(6,4))
+ sns.set_style('darkgrid')
+ messages['length'].plot(bins=50, kind='hist');
+ plt.show()
The x-axis goes to nearly 1000, so there is a really long message.
> messages.length.describe()
count 5572.000000
mean 80.489950
std 59.942907
min 2.000000
25% 36.000000
50% 62.000000
75% 122.000000
max 910.000000
Name: length, dtype: float64
The longest message is 910 characters; let's look at it:
> messages[messages['length'] == 910]['message'].iloc[0]
"For me the love should start with attraction.i should feel that I need her every time around me.she should be the first thing which comes in my thoughts.I would start the day and end it with her.she should be there every time I dream.love will be then when my every breath has her name.my life should happen around her.my life will be named to her.I would cry for her.will give all my happiness and take all her sorrows.I will be ready to fight with anyone for her.I will be in love when I will be doing the craziest things for her.love will be when I don't have to proove anyone that my girl is the most beautiful lady on the whole planet.I will always be singing praises for her.love will be when I start up making chicken curry and end up makiing sambar.life will be the most beautiful then.will get every morning and thank god for the day because she is with me.I would like to say a lot..will tell later.."
Let’s focus back on the idea of trying to see if message length is a distinguishing feature between ham and spam:
> messages.hist(column='length', by='label',
+ bins=50,figsize=(12,4));
+ plt.show()
Spam messages tend to have more characters.
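We can back that observation up numerically by grouping the length column by label (a quick check using the columns we already have):
> # Average message length per label; spam should show a noticeably higher mean
+ messages.groupby('label')['length'].mean()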
The data is all in text format (strings), which is an issue. The classification algorithms will need a numerical feature vector in order to perform the classification task. There are actually many methods to convert a corpus to a vector. The simplest is the bag-of-words approach, where each unique word in a text will be represented by one number.
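As a quick illustration of the idea (a minimal sketch, separate from the pipeline we build below), counting the words in a single string already gives a bag-of-words representation:
> from collections import Counter
+
+ # Each unique word maps to how many times it occurs;
+ # word order is thrown away, hence a "bag" of words.
+ Counter('the cat sat on the mat'.split())
+ # Counter({'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1})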
As a first step, let’s write a function that will split a message into its individual words and return a list. We’ll also remove very common words (‘the’, ‘a’, etc.). To do this we will take advantage of the NLTK library. It’s the de facto standard Python library for processing text and has a lot of useful features. We’ll only use some of the basic ones here.
Let’s create a function that will process the string in the message column. Then we can just use apply() in pandas to process all the text in the DataFrame.
First, remove punctuation. We can take advantage of Python’s built-in string library to get a quick list of all the possible punctuation:
> import string
+
+ mess = 'Sample message! Notice: it has punctuation.'
+
+ # Check characters to see if they are in punctuation
+ nopunc = [char for char in mess if char not in string.punctuation]
+
+ # Join the characters again to form the string.
+ nopunc = ''.join(nopunc)
+ nopunc
'Sample message Notice it has punctuation'
Now let’s see how to remove stopwords. We can import a list of English stopwords from NLTK (check the documentation for more languages and info).
> from nltk.corpus import stopwords
+ # Show some stop words
+ stopwords.words('english')[0:10]
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're"]
> nopunc.split()
['Sample', 'message', 'Notice', 'it', 'has', 'punctuation']
> # Now just remove any stopwords
+ clean_mess = [word for word in nopunc.split() if
+ word.lower() not in stopwords.words('english')]
> clean_mess
['Sample', 'message', 'Notice', 'punctuation']
Now let’s put both of these together in a function to apply it to our DataFrame later on:
> def text_process(mess):
+ """
+ Takes in a string of text, then performs the following:
+ 1. Remove all punctuation
+ 2. Remove all stopwords
+ 3. Returns a list of the cleaned text
+ """
+ # Check characters to see if they are in punctuation
+ nopunc = [char for char in mess if
+ char not in string.punctuation]
+
+ # Join the characters again to form the string.
+ nopunc = ''.join(nopunc)
+
+ # Now just remove any stopwords
+ return [word for word in nopunc.split()
+ if word.lower() not in stopwords.words('english')]
Here is the original DataFrame again:
> messages.head()
label message length
0 ham Go until jurong point, crazy.. Available only ... 111
1 ham Ok lar... Joking wif u oni... 29
2 spam Free entry in 2 a wkly comp to win FA Cup fina... 155
3 ham U dun say so early hor... U c already then say... 49
4 ham Nah I don't think he goes to usf, he lives aro... 61
Now let’s “tokenize” these messages. Tokenization is just the term used to describe the process of converting normal text strings into a list of tokens (the words that we actually want).
> # Check to make sure its working
+ messages['message'].head(5).apply(text_process)
0 [Go, jurong, point, crazy, Available, bugis, n...
1 [Ok, lar, Joking, wif, u, oni]
2 [Free, entry, 2, wkly, comp, win, FA, Cup, fin...
3 [U, dun, say, early, hor, U, c, already, say]
4 [Nah, dont, think, goes, usf, lives, around, t...
Name: message, dtype: object
There are a lot of ways to continue normalizing this text, such as stemming or distinguishing by part of speech. NLTK has lots of built-in tools and great documentation for many of these methods. Sometimes they don’t work well for text messages, due to the way a lot of people tend to use abbreviations or shorthand. For example:
'Nah dawg, IDK! Wut time u headin to da club?'
versus
'No dog, I don't know! What time are you heading to the club?'
Some text normalization methods will have trouble with this type of shorthand, but there are more advanced methods covered in the NLTK book online.
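For instance, a minimal sketch of stemming with NLTK's PorterStemmer (which we won't apply here) shows both what stemming does and why shorthand like 'u' slips through unchanged:
> from nltk.stem import PorterStemmer
+
+ stemmer = PorterStemmer()
+ # Reduce words to their stems; irregular forms and shorthand pass through as-is
+ [stemmer.stem(word) for word in ['running', 'runs', 'ran', 'u']]
+ # ['run', 'run', 'ran', 'u']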
Each vector will have as many dimensions as there are unique words in the SMS corpus. We will first use SciKit Learn’s CountVectorizer. This model will convert a collection of text documents to a matrix of token counts.
We can imagine this as a two-dimensional matrix, where one dimension is the entire vocabulary (one row per word) and the other dimension is the actual documents, in this case one column per text message. (In practice, scikit-learn stores the transpose: one row per message and one column per word, as the shape we print later will show.)
For example:
|              | Message 1 | Message 2 | … | Message N |
|--------------|-----------|-----------|---|-----------|
| Word 1 Count | 0 | 1 | … | 0 |
| Word 2 Count | 0 | 0 | … | 0 |
| …            | 1 | 2 | … | 0 |
| Word N Count | 0 | 1 | … | 1 |
Since there are so many messages, we can expect a lot of zero counts for the presence of that word in that document. Because of this, SciKit Learn will output a Sparse Matrix.
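To make this concrete, here is a minimal sketch of CountVectorizer on a toy three-document corpus (illustrative only; the real transformer we fit below uses our own text_process analyzer):
> from sklearn.feature_extraction.text import CountVectorizer
+
+ # A tiny corpus of three "documents"
+ toy_corpus = ['dogs bark', 'cats meow', 'dogs and cats']
+ toy_vec = CountVectorizer().fit(toy_corpus)
+
+ print(toy_vec.vocabulary_)                       # maps each word to a column index
+ print(toy_vec.transform(toy_corpus).toarray())   # one row per document, one column per word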
> from sklearn.feature_extraction.text import CountVectorizer
There are a lot of arguments and parameters that can be passed to the CountVectorizer. In this case we will just specify the analyzer to be our own previously defined function:
> bow_transformer = CountVectorizer(
+ analyzer=text_process).fit(messages['message'])
+
+ # Print total number of vocab words
+ print(len(bow_transformer.vocabulary_))
11425
Let’s take one text message and get its bag-of-words counts as a vector, putting to use our new bow_transformer:
> message4 = messages['message'][3]
+ print(message4)
U dun say so early hor... U c already then say...
Now let’s see its vector representation:
> bow4 = bow_transformer.transform([message4])
+ print(bow4)
(0, 4068) 2
(0, 4629) 1
(0, 5261) 1
(0, 6204) 1
(0, 6222) 1
(0, 7186) 1
(0, 9554) 2
> print(bow4.shape)
(1, 11425)
This means that there are seven unique words in message number 4 (after removing common stop words). Two of them appear twice, the rest only once. Let’s go ahead and confirm which ones appear twice:
> print(bow_transformer.get_feature_names()[4068])
U
> print(bow_transformer.get_feature_names()[9554])
say
Now we can use .transform on our bag-of-words (bow) transformer object to transform the entire DataFrame of messages. Let’s go ahead and see how the bag-of-words counts for the entire SMS corpus form a large, sparse matrix:
> messages_bow = bow_transformer.transform(messages['message'])
> print('Shape of Sparse Matrix: ', messages_bow.shape)
Shape of Sparse Matrix: (5572, 11425)
> print('Amount of Non-Zero occurrences: ', messages_bow.nnz)
Amount of Non-Zero occurrences: 50548
> sparsity = (100.0 * messages_bow.nnz /
+ (messages_bow.shape[0] *
+ messages_bow.shape[1]))
+ print('sparsity: {}'.format(sparsity))
+ #non zero/total
sparsity: 0.07940295412668218
After the counting, the term weighting and normalization can be done with TF-IDF, using scikit-learn’s TfidfTransformer.
TF-IDF stands for term frequency–inverse document frequency, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document’s relevance given a user query.
One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking functions are variants of this simple model.
Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), i.e., the number of times a word appears in a document, divided by the total number of words in that document; the second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the specific term appears.
TF: Term Frequency measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear many more times in long documents than in short ones. Thus, the term frequency is often divided by the document length (i.e., the total number of terms in the document) as a way of normalization:
TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).
IDF: Inverse Document Frequency measures how important a term is. While computing TF, all terms are considered equally important. However, it is known that certain terms, such as “is”, “of”, and “that”, may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following:
IDF(t) = log_e(Total number of documents / Number of documents with term t in it).
See below for a simple example.
Consider a document containing 100 words wherein the word cat appears 3 times.
The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4 (a base-10 logarithm, which keeps the numbers round). Thus, the tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12.
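That arithmetic is simple enough to sketch in a couple of lines (illustrative numbers taken from the example above):
> import math
+
+ # Worked example from above: 'cat' appears 3 times in a 100-word document,
+ # and in 1,000 out of 10,000,000 documents (base-10 log, as in the example).
+ tf = 3 / 100                            # term frequency = 0.03
+ idf = math.log10(10_000_000 / 1_000)    # inverse document frequency = 4.0
+ print(tf * idf)                         # tf-idf weight = 0.12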
Let’s go ahead and see how we can do this in SciKit Learn:
> from sklearn.feature_extraction.text import TfidfTransformer
+
+ tfidf_transformer = TfidfTransformer().fit(messages_bow)
+ tfidf4 = tfidf_transformer.transform(bow4)
+ print(tfidf4)
(0, 9554) 0.5385626262927564
(0, 7186) 0.4389365653379857
(0, 6222) 0.3187216892949149
(0, 6204) 0.29953799723697416
(0, 5261) 0.29729957405868723
(0, 4629) 0.26619801906087187
(0, 4068) 0.40832589933384067
Let’s check the IDF (inverse document frequency) of the word "u" versus the word "university":
> print(tfidf_transformer.
+ idf_[bow_transformer.vocabulary_['u']])
3.2800524267409408
> print(tfidf_transformer.
+ idf_[bow_transformer.vocabulary_['university']])
8.527076498901426
To transform the entire bag-of-words corpus into a TF-IDF corpus at once:
> messages_tfidf = tfidf_transformer.transform(messages_bow)
+ print(messages_tfidf.shape)
(5572, 11425)
There are many ways the data can be preprocessed and vectorized. These steps involve feature engineering and building a “pipeline”.
With messages represented as vectors, we can finally train our spam/ham classifier. Now we can use almost any classification algorithm. The Naive Bayes classifier is a good starting choice: it is fast to train and tends to work well on high-dimensional, sparse word-count features like ours.
We’ll be using scikit-learn here, choosing the Naive Bayes classifier to start with:
> from sklearn.naive_bayes import MultinomialNB
+ spam_detect_model = MultinomialNB().fit(
+ messages_tfidf, messages['label'])
Let’s try classifying our single random message and checking how we do:
> print('predicted:', spam_detect_model.predict(tfidf4)[0])
predicted: ham
> print('expected:', messages.label[3])
expected: ham
Now we want to determine how well our model will do overall on the entire dataset. Let’s begin by getting all the predictions:
> all_predictions = spam_detect_model.predict(messages_tfidf)
+ print(all_predictions)
['ham' 'ham' 'spam' ... 'ham' 'ham' 'ham']
We can use SciKit Learn’s built-in classification report, which returns precision, recall, f1-score, and a column for support (the number of true occurrences of each label).
> knitr::include_graphics("https://upload.wikimedia.org/wikipedia/commons/thumb/2/26/Precisionrecall.svg/700px-Precisionrecall.svg.png")
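To recap what precision and recall measure, here is a minimal sketch on a toy set of labels (hypothetical values, not our data):
> from sklearn.metrics import precision_score, recall_score
+
+ # Four true labels vs. four predictions (hypothetical toy values)
+ y_true = ['spam', 'spam', 'ham', 'ham']
+ y_pred = ['spam', 'ham', 'ham', 'ham']
+
+ # Precision: of the messages predicted 'spam', how many really were spam?
+ print(precision_score(y_true, y_pred, pos_label='spam'))   # 1.0
+ # Recall: of the messages that really were spam, how many did we catch?
+ print(recall_score(y_true, y_pred, pos_label='spam'))      # 0.5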
> from sklearn.metrics import classification_report
+ print (classification_report(messages['label'],
+ all_predictions))
precision recall f1-score support
ham 0.98 1.00 0.99 4825
spam 1.00 0.85 0.92 747
accuracy 0.98 5572
macro avg 0.99 0.92 0.95 5572
weighted avg 0.98 0.98 0.98 5572
Above we evaluated accuracy on the same data we used for training. You should never actually evaluate on the same dataset you train on!
Such evaluation tells us nothing about the true predictive power of our model. If we simply remembered each example during training, the accuracy on training data would trivially be 100%, even though we wouldn’t be able to classify any new messages.
A proper way is to split the data into a training set and a test set, where the model only ever sees the training data during model fitting and parameter tuning. The test data is never used in any way during training; that way, our final evaluation on the test data is representative of true predictive performance.
> from sklearn.model_selection import train_test_split
+
+ msg_train, msg_test, label_train, label_test = \
+ train_test_split(messages['message'], messages['label'], test_size=0.2)
+
+ print(len(msg_train), len(msg_test), len(msg_train) + len(msg_test))
4457 1115 5572
The test size is 20% of the entire dataset (1,115 messages out of 5,572), and the training set is the rest (4,457 out of 5,572). Note that the default split would have been 25/75, since test_size defaults to 0.25.
Let’s run our model again and then predict on the test set. We will use SciKit Learn’s pipeline capabilities to store our entire workflow as a single pipeline. This will allow us to set up all the transformations that we will do to the data for future use. Let’s see an example of how it works:
> from sklearn.pipeline import Pipeline
+
+ pipeline = Pipeline([
+ ('bow', CountVectorizer(analyzer=text_process)),
+ # strings to token integer counts
+ ('tfidf', TfidfTransformer()),
+ # integer counts to weighted TF-IDF scores
+ ('classifier', MultinomialNB()),
+ # train on TF-IDF vectors w/ Naive Bayes classifier
+ ])
Now we can directly pass message text data and the pipeline will do our pre-processing for us. We can treat it as a model/estimator API:
> pipeline.fit(msg_train,label_train)
Pipeline(memory=None,
steps=[('bow',
CountVectorizer(analyzer=<function text_process at 0x00000000386DE0D8>,
binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8',
input='content', lowercase=True, max_df=1.0,
max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None,
stop_words=None, strip_accents=None,
token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=None, vocabulary=None)),
('tfidf',
TfidfTransformer(norm='l2', smooth_idf=True,
sublinear_tf=False, use_idf=True)),
('classifier',
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True))],
verbose=False)
> predictions = pipeline.predict(msg_test)
> print(classification_report(predictions,label_test))
precision recall f1-score support
ham 1.00 0.95 0.97 1023
spam 0.65 1.00 0.79 92
accuracy 0.96 1115
macro avg 0.82 0.98 0.88 1115
weighted avg 0.97 0.96 0.96 1115
Now we have a classification report for our model on a true testing set.
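Because the pipeline bundles all the preprocessing, we can also hand it raw, unlabeled text directly. For example (a hypothetical message, purely illustrative):
> # Classify a brand-new, unlabeled message (hypothetical example text)
+ pipeline.predict(['WINNER!! Claim your free prize now, call 09061701461'])
+ # most likely comes back labelled 'spam'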
Check out the links below for more info on Natural Language Processing: