setwd("C:/!!Machine_Learning/!_ALGORITMA/!01_Algoritma_Jul_Dec2020/11_Capstone ML NN/Data Capstone")

0.1 Introduction

As the final step of the machine learning course, every student must complete one case as their machine learning capstone project. In this article, we walk through one such capstone project: “Spam SMS Classification”.

1 Datasets

The train dataset is used to train and evaluate the model, while the test dataset is reserved for the final evaluation. The final evaluation requires you to submit your predictions for the test dataset to the leaderboard in order to obtain the final model score (more details are provided below). The data scheme: the train set (data-train.csv) comes with spam/ham labels, while the test set (data-test.csv) is unlabeled.

2 Case Study

The SMS dataset was collected by the team for educational purposes. It is a real SMS dataset with a spam/ham label for each message.

SMS: “I didn’t get your message!”

Someone might contact you the old-school way, by SMS, and you might even skip the message because the amount of spam in your inbox is just too much. An SMS is labeled as spam when users report it as unwanted. Can we build a spam classifier?

The problem above urges us to classify whether a text message is SPAM or HAM based on its content.

For this case study, there are two SMS dataset sources:

  1. SMS Train Dataset
  2. SMS Test Dataset

3 A. Data Preprocess and Exploratory Data Analysis

3.1 1. Text data preprocessing

3.2 1.1. Load the library and data

Load the following packages:

library(lubridate) #Make Dealing with Dates a Little Easier
library(tidyverse) #The tidyverse package is designed to make it easy to install and load core packages from the tidyverse in a single command -> ggplot2, dplyr, tidyr, readr, purrr, tibble, stringr, forcats
library(readr) #Read Rectangular Text Data
library(caret) #Classification and Regression Training
library(dplyr) #A Grammar of Data Manipulation
library(e1071) #Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071)
library(ROCR) #Visualizing the Performance of Scoring Classifiers
library(SnowballC) #Snowball Stemmers Based on the C 'libstemmer' UTF-8 Library 

3.2.1 1.2. Input Data

In this phase, we read the source SMS data. The read_csv() function from the readr package can speed up reading; the code below uses read.csv() with UTF-8 encoding.

3.2.2 1.2.1. Data format checking / mutate

  • datetime –> datetime
  • status –> factor
sms_datatrain <- read.csv("4. sms-cl-spam/data/data-train.csv",
                          stringsAsFactors = FALSE,
                          encoding = "UTF-8") %>% 
  mutate(status   = as.factor(status),
         datetime = as_datetime(datetime),
         hourly   = hour(datetime))

glimpse(sms_datatrain)
#> Rows: 2,004
#> Columns: 4
#> $ datetime <dttm> 2017-02-15 14:48:00, 2017-02-15 15:24:00, 2017-02-15 16:0...
#> $ text     <chr> "Telegram code 53784", "Rezeki Nomplok Dompetku Pengiriman...
#> $ status   <fct> ham, spam, ham, ham, ham, ham, ham, spam, spam, spam, spam...
#> $ hourly   <int> 14, 15, 16, 16, 18, 18, 18, 10, 11, 18, 18, 9, 9, 11, 13, ...

3.2.3 1.3. Observe spam text patterns using a wordcloud

Take 20 sample texts from the ‘spam’ category, then observe which words or phrases could serve as indicators (predictors) of spam text. We can also use a wordcloud to look at the most frequent words in the overall collection of words (the “bag of words”).

To see the spam word pattern through the bag of words:

library(wordcloud)

sms_datatrain %>% 
  filter(status == "spam") %>%
  head(20) %>% 
  pull(text)     # take the text column as a character vector
#>  [1] "Rezeki Nomplok Dompetku Pengiriman Uang! Kirim uang di Alfamart & dptkan hadiah jutaan rupiah setiap hari.Periode s.d. 28Feb17.Info: http://bit.ly/dmpurna MFI1" 
#>  [2] "YEAY! Free Ice Tea atau Cashback up to 30% dg transaksi  di AH Resto! Hanya untuk pengguna TCASH TAP. S&K Berlaku. Info tsel.me/tappromo"                        
#>  [3] "Voting your Offer. Disc 40%, 1 crispy chicken+1 spicy chicken+ nasi+lotteria tea Rp.26rb. Tukar SMS ini di LOTTERIA terdekat. Berlaku hari ini. SKB. Promo *606#"
#>  [4] "Ayo bergabung dgn Freedom Postpaid! Makin rame makin seru, ajak teman & keluarga diskonnya lebih besar. Daftar di http://im3.do/uxU PAI1"                        
#>  [5] "Nikmati kemudahan mewujudkan impian kamu dan pasangan utk masa depan yg lebih cerah. Cek Dana Bantuan Sahabat di DOMPETKU! Info: http://bit.ly/dmpdbs MFI3"      
#>  [6] "Gratis 1 bulan Spotify Premium khusus FreedomCombo. Bisa bebas dengar musik,bikin playlist sepuasnya tanpa iklan dgn Spotify Premium. Aktifkan di *123*123# CVI1"
#>  [7] "Masukan Username & Password ini di aplikasi Spotify. Username:085722688068 Password:1khF1SpC Mohon lgsg ganti alamat email & password di aplikasi Spotify."      
#>  [8] "TIPS HEMAT DATA: pakai resolusi 480p ketika menonton video online, temukan di setting layanan video yg kamu nikmati. Ini akan membuat pemakaian data lbh hemat"  
#>  [9] "Ayam (syp/paha bwh),Nasi,Ades Rp.18.181. Add on CD Bebi Glenn Rp.22.727. Tkr SMS hari ini di CFC CIBUBUR JUNC. Selama persediaan msh ada. Promo*606#"            
#> [10] "YEAY! Kejutan cashback & freebies dg TCASH TAP! Terus #pakeTCASH, cek HP kamu & dapatkan kejutannya. S&K berlaku. Info cek tsel.me/yeay"                         
#> [11] "Beli EXTRA kuota 1GB harga DISKON cuma Rp10rb. Ketik YA9 kirim ke 929 sd. 20/02/17"                                                                              
#> [12] "Hari Senin saatnya Nonton Hemat hanya 25ribu di Cinema XXI dgn TCASH TAP! Dptkan stiker TCASH TAP di GraPARI terdekat. Info tsel.me/tappromo"                    
#> [13] "Disc 50% setiap Senin, 10% di hari lainnya di Coffee Bean dgn TCASH TAP! Dapatkan stiker TCASH TAP di graPARI terdekat. Info tsel.me/tappromo"                   
#> [14] "Beli EXTRA kuota 1GB harga DISKON cuma Rp10rb. Ketik YA9 kirim ke 929 sd. 20/02/17"                                                                              
#> [15] "Pelanggan 085722688068, Ada yang Ngajak Kamu Chatting. Hubungi *858*11# untuk baca. Pesan akan dihapus dalam 5 menit. Silakan hubungi *858*11# sekarang."        
#> [16] "Dapatkan hasil investasi yg bersaing dgn deposito, segera maksimalkan hasil Investasi Anda dgn fitur Auto-Invest di DOMPETKU! Info: http://bit.ly/bnp2211 MFI2"  
#> [17] "Harga spesial Rp 75ribu atau Disc 44% utk tiket Jungle Land. Khusus pengguna TCASH TAP! S&K Berlaku. Info tsel.me/tappromo"                                      
#> [18] "Cashback 50% setiap di Haagen-Dasz Selasa&Kamis / 10% hari lainnya khusus dengan TCASH TAP! S&K berlaku. Info tsel.me/tappromo"                                  
#> [19] "Raih kesempatan mendapat Smartphone keren, cukup download iflix skrg & tonton film sebanyak2nya. Unduh iflix di im3.do/iflix Info kunjungi website kami. CVI2"   
#> [20] "Hanya dengan Isi ulang akumulasi 150rb sebelum 24 Febuari 2017, Dapatkan GRATIS 3GB berlaku 30 hari. Mau? Ayo buruan isi ulang sekarang juga."
sms_datatrain %>% 
  filter(status == "spam") %>% 
  pull(text) %>% 
  wordcloud(max.words = 250, scale = c(2, 0.4),
            random.order = FALSE, colors = brewer.pal(8, "BrBG"))

Words that potentially indicate that a text is spam: kuota, pulsa, paket, sms, internetan, rezeki, hadiah, disc, promo, diskon, gratis, bonus.

3.3 1.4. Report the distribution of total hourly frequency for each status

First, see the hourly SMS distribution using a histogram.

library(ggplot2)

hist(sms_datatrain$hourly, breaks = 30) # histogram of the numeric hourly variable

Next, see the hourly SMS distribution per status using a box plot.

# box plot per status, with the individual points overlaid
ggplot(data = sms_datatrain,  mapping = aes(x = hourly, y = status)) +
  geom_boxplot() +
  geom_point()

Finally, use a bar chart (geom_col) so we can see the hourly proportion and composition of SPAM/HAM SMS.

sms_datatrain_freq <- as.data.frame(table(sms_datatrain$status,
                                 sms_datatrain$hourly))

ggplot(data = sms_datatrain_freq, mapping = aes(x = Freq, y = Var2)) +
  geom_col(mapping = aes(fill = Var1), position = "fill") +
  labs(x = "Spam & Ham count Proportion",
       y = "Hour",
       fill = "",
       title = "Proportion of Spam & Ham SMS",
       subtitle = "Publish Hour vs Spam/Ham") +
  scale_fill_brewer(palette = "Set2") +
  theme_minimal() +
  theme(legend.position = "top")

ggplot(data = sms_datatrain_freq, mapping = aes(x = Freq, y = Var2)) +
  geom_col(mapping = aes(fill = Var1), position = "dodge") +
  labs(x = "Spam & Ham counts",
       y = "Hour",
       fill = "",
       title = "No. of Spam & Ham SMS",
       subtitle = "Publish Hour vs Spam/Ham") +
  scale_fill_brewer(palette = "Set2") +
  theme_minimal() +
  theme(legend.position = "right")

Based on the data visualization using the wordcloud, box plot, and bar charts, we can conclude:

  • Most SPAM SMS came from telco provider promotions (e.g. kuota, paket, sms, gratis)
  • SPAM SMS were sent early in the day
  • SMS broadcasting activity peaked from morning until afternoon

3.4 2. Data Preprocessing & Text transformation

Before starting model development, we first need to clean the text data. The text is converted to corpus format and then cleaned.

Corpus is a collection of documents. In this case, one document is equivalent to one SMS observation. In one SMS there can be one or more sentences.

Summary: in general, the steps commonly performed for text cleansing are:

  1. Case-folding, to change all words to lower case
  2. Remove numbers, to strip all digits
  3. Remove stopwords, to delete words that appear often in the corpus and are usually not meaningful
  4. Remove punctuation, to replace certain characters with spaces
  5. Stemming, to cut words down to their base words
  6. Remove white space, to remove excess white space, because the subsequent tokenizing step splits words on the space character (" ")

3.4.1 2.1. Text to Corpus

One of the packages we can use for text mining is tm. Converting a text vector to a corpus can be done with the VCorpus() function.

library(tm)

# convert the text vector to corpus format
sms.corpus_train <- VCorpus(VectorSource(sms_datatrain$text))
sms.corpus_train
#> <<VCorpus>>
#> Metadata:  corpus specific: 0, document level (indexed): 0
#> Content:  documents: 2004
nrow(sms_datatrain)
#> [1] 2004

Now let’s inspect the content of an example (the 9th) SMS:

sms.corpus_train[[9]]$content
#> [1] "Voting your Offer. Disc 40%, 1 crispy chicken+1 spicy chicken+ nasi+lotteria tea Rp.26rb. Tukar SMS ini di LOTTERIA terdekat. Berlaku hari ini. SKB. Promo *606#"

3.4.2 2.2. Text Cleansing: Case-folding, Remove numbers, Remove stopwords

  • Text cleansing for the training data
library(stopwords)
# 1. case-folding: convert all text to lowercase
sms.corpus_train <- tm_map(sms.corpus_train, content_transformer(tolower))

# 2. remove numbers: strip all digits
sms.corpus_train <- tm_map(sms.corpus_train, removeNumbers)

# 3. remove stopwords: drop words that appear often in the corpus and are usually not meaningful
sms.corpus_train <- tm_map(sms.corpus_train, removeWords, stopwords("english"))

# check the 9th document
sms.corpus_train[[9]]$content
#> [1] "voting  offer. disc %,  crispy chicken+ spicy chicken+ nasi+lotteria tea rp.rb. tukar sms ini di lotteria terdekat. berlaku hari ini. skb. promo *#"

3.4.3 2.3. Transformer - removePunctuation function

We create a transformer function that can replace a certain character with a space (" "):

transformer <- content_transformer(FUN = function(x, pattern){
  gsub(x = x, 
       pattern = pattern, 
       replacement = " ") 
})

We can also use the removePunctuation function, which removes the punctuation characters: ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~

# replace ".", "/", "@", "-" with a white space
sms.corpus_train <- 
    VCorpus(VectorSource(sms_datatrain$text)) %>% 
    tm_map(content_transformer(tolower)) %>% 
    tm_map(removeNumbers) %>% 
    tm_map(removeWords, stopwords("english")) %>% 
    tm_map( transformer, "/") %>% 
    tm_map( transformer, "@") %>% 
    tm_map( transformer, "-") %>% 
    tm_map( transformer, "\\.") %>%  # \\. matches "." at the start, middle, or end of a sentence
    tm_map(transformer, "<[^>]+>.") %>%  # remove HTML-like tags
    tm_map(transformer, "@\\S+.") %>%  # remove mentions
    tm_map( transformer, "*#") %>%  # remove hashtags
    tm_map(transformer, "http[^[:space:]]*.") %>%  # remove URLs
    tm_map(transformer, "&amp;.") %>%   # remove &amp; entities
    tm_map(transformer, "[[:punct:]].") %>%  # remove remaining punctuation sequences (e.g. "...")
    tm_map(transformer, "[^[:alpha:][:space:]].") %>%  # remove all remaining non-letter characters
    tm_map(removePunctuation) # remove any remaining punctuation (from the tm package)

# check the 9th document
sms.corpus_train[[9]]$content
#> [1] "voting  offer  disc    crispy chicken spicy chicken nasi otteria tea rp rb  tukar sms ini di lotteria terdekat  berlaku hari ini  skb  promo  "
sms.corpus_train[[5]]$content
#> [1] "apakah anda mencoba mengakses akun anda dari perangkat lain jika ya mohon klik tautan ini   api gojek co id customers device oken e  e bac dc dalam  jam ke depan  jika tidak mohon abaikan pesan ini"

3.4.4 2.4. Data Stemming

Next we do stemming, cutting each word down to its root word. For example, walking, walked, and walks become walk.

library(SnowballC)
# install the SnowballC package first if needed
# stemming
sms.corpus_train <- tm::tm_map(sms.corpus_train, stemDocument)

# check the 9th document
sms.corpus_train[[9]]$content
#> [1] "vote offer disc crispi chicken spici chicken nasi otteria tea rp rb tukar sms ini di lotteria terdekat berlaku hari ini skb promo"

3.4.5 2.5. White space - Text Processing

Finally, we remove excess white space, because the next tokenizing step will split word by word on the space character (" ").

# remove white space
sms.corpus_train <- tm_map(sms.corpus_train, stripWhitespace)

sms.corpus_train[[9]]$content
#> [1] "vote offer disc crispi chicken spici chicken nasi otteria tea rp rb tukar sms ini di lotteria terdekat berlaku hari ini skb promo"
sms.corpus_train %>% 
  wordcloud(max.words = 200, scale = c(2, 0.5),
            random.order = FALSE, colors = brewer.pal(8, "BrBG"))
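
3.5 3. Tokenization (DocumentTermMatrix)

The tokenization step that creates sms_train.dtm (used by the train-test split in the next section) does not appear in the original write-up; a minimal sketch, assuming the same DocumentTermMatrix() call that the submission section later applies to the test corpus:

# tokenize the cleaned training corpus:
# one row per SMS document, one column per term, cells = term frequency
sms_train.dtm <- DocumentTermMatrix(x = sms.corpus_train)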

3.6 4. Random Data Split & Bernoulli Converter

3.6.1 4.1. Random data split for train & test validation

To have a robust prediction model, the development data is split into two parts:

  • Data train, the base data for model development
  • Data test for validation, to test the model trained on the data train

Split the data into sms_train_val and sms_test_val with a 75%-25% ratio.

RNGkind(sample.kind = "Rounding")
set.seed(100)

# train-test splitting
index      <- sample(nrow(sms_train.dtm), nrow(sms_train.dtm)*0.75)

# sms_train.dtm = DocumentTermMatrix without labels
sms_train_val <- sms_train.dtm[index,] 
sms_test_val  <- sms_train.dtm[-index,]

The labels for the prediction target:

# labels for train and test, stored in the sms_datatrain data frame
label_train_val <- sms_datatrain[index, 'status']
label_test_val  <- sms_datatrain[-index, 'status']

To check the composition/proportion of the target class in label_train_val and label_test_val:

prop.table(table(label_train_val))
#> label_train_val
#>      ham     spam 
#> 0.584165 0.415835
prop.table(table(label_test_val))
#> label_test_val
#>       ham      spam 
#> 0.5668663 0.4331337

3.6.2 4.2. Further Data Preprocessing

3.6.3 4.2.1. Remove Infrequent Words

Check the dimensions of sms_train_val and sms_test_val, which will be used for model development:

dim(sms_train_val)
#> [1] 1503 2877
dim(sms_test_val)
#> [1]  501 2877

We have a very large number of predictors: 2,877. Let’s reduce the noise in our data by keeping only the words that appear sufficiently often (for example, at least 10 times across all SMS; here lowfreq = 1 is used). Use the findFreqTerms() function:

# frequent terms from the training DTM
sms_freq_val <- findFreqTerms(sms_train_val, lowfreq = 1) 
length(sms_freq_val)
#> [1] 2511

Note: the choice lowfreq = 1 is not absolute and can be changed for feature selection. Keep in mind: the bigger lowfreq is, the fewer terms we keep as features/predictors.
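
For instance, a quick way to see how the number of retained terms shrinks as lowfreq grows (a sketch on the same sms_train_val DTM, not part of the original analysis):

# number of terms kept at several lowfreq thresholds
sapply(c(1, 5, 10, 20),
       function(k) length(findFreqTerms(sms_train_val, lowfreq = k)))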

Let’s subset the sms_train data to only the words that appear in sms_freq_val:

sms_train_Val <- sms_train_val[, sms_freq_val] # terms are in the columns
inspect(sms_train_val)
#> <<DocumentTermMatrix (documents: 1503, terms: 2877)>>
#> Non-/sparse entries: 16826/4307305
#> Sparsity           : 100%
#> Maximal term length: 22
#> Weighting          : term frequency (tf)
#> Sample             :
#>       Terms
#> Docs   anda atau dgn info kamu kuota paket pulsa saya sms
#>   1400    1    1   0    0    0     2     1     1    0   1
#>   196     0    0   0    0    0     0     0     0    0   0
#>   197     0    0   0    0    0     0     0     0    0   1
#>   225     0    0   0    0    0     0     0     0    0   0
#>   239     0    1   0    0    0     0     0     0    4   0
#>   29      0    1   0    0    0     0     0     0    4   0
#>   378     0    0   0    0    0     0     0     0    0   0
#>   403     0    0   0    0    0     0     0     0    0   0
#>   409     0    0   0    0    0     0     0     0    0   0
#>   410     1    0   1    2    0     0     2     2    0   7

3.6.4 4.2.2. Bernoulli Converter

The values in the sms_train matrix are still frequencies. For the probability calculation, each frequency is converted into whether the term appears (1) or not (0). One way is to use a Bernoulli converter.

  • If frequency > 0, then value 1 (appear)
  • If frequency == 0, then value 0 (not appear)
bernoulli_conv <- function(x){
  # ifelse arguments: condition, value if TRUE, value if FALSE
  x <- as.factor(ifelse(x > 0, 1, 0)) 
  return(x)
}

# test the function
bernoulli_conv(c(3,0,0,1,4,0))
#> [1] 1 0 0 1 1 0
#> Levels: 0 1

Next, apply bernoulli_conv to sms_train_val and sms_test_val:

sms_train_bn_val <- apply(X = sms_train_val, MARGIN = 2, FUN = bernoulli_conv)
sms_test_bn_val  <- apply(X = sms_test_val, MARGIN = 2, FUN = bernoulli_conv)
dim(sms_train_bn_val)
#> [1] 1503 2877
dim(sms_test_bn_val)
#> [1]  501 2877
  • MARGIN = 1 -> apply FUN by row
  • MARGIN = 2 -> apply FUN by column, because we want to keep the DocumentTermMatrix shape as-is (see the toy illustration below)
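
As a toy illustration of the MARGIN argument (a small base-R example, not part of the analysis):

m <- matrix(1:6, nrow = 2)       # 2 x 3 toy matrix
apply(m, MARGIN = 1, FUN = sum)  # by row: one sum per row -> length 2
apply(m, MARGIN = 2, FUN = sum)  # by column: one sum per column -> length 3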

Check the result:

sms_train_bn_val[25:35, 35:50]
#>       Terms
#> Docs   ahlinya ahmad air aja ajaa ajak ajeng akan akhir akhirnya akj aks aksi
#>   832  "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1995 "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1524 "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1744 "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1086 "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   549  "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   964  "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1832 "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   688  "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1881 "0"     "0"   "0" "0" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>   1370 "0"     "0"   "0" "1" "0"  "0"  "0"   "0"  "0"   "0"      "0" "0" "0" 
#>       Terms
#> Docs   aktif aktifin aktifkan
#>   832  "0"   "0"     "0"     
#>   1995 "0"   "0"     "0"     
#>   1524 "0"   "0"     "0"     
#>   1744 "0"   "0"     "0"     
#>   1086 "0"   "0"     "0"     
#>   549  "0"   "0"     "0"     
#>   964  "0"   "0"     "0"     
#>   1832 "0"   "0"     "0"     
#>   688  "0"   "0"     "0"     
#>   1881 "0"   "0"     "0"     
#>   1370 "0"   "0"     "0"

4 B. Model Development and Evaluation

For this project, we use the Naive Bayes model from the e1071 package; the Bernoulli-converted sms_train_bn_val is ready as input.

4.1 1. Naive Bayes Model

The Naive Bayes model is used to classify the text:

library(e1071)

# train
naive_spam <- naiveBayes(x = sms_train_bn_val, # predictors in matrix form
                         y = label_train_val)  # label / target variable

head(naive_spam$tables)
#> $aagc
#>                aagc
#> label_train_val           0           1
#>            ham  0.998861048 0.001138952
#>            spam 1.000000000 0.000000000
#> 
#> $abaikan
#>                abaikan
#> label_train_val          0          1
#>            ham  0.98974943 0.01025057
#>            spam 0.99840000 0.00160000
#> 
#> $abi
#>                abi
#> label_train_val      0      1
#>            ham  1.0000 0.0000
#>            spam 0.9968 0.0032
#> 
#> $abu
#>                abu
#> label_train_val           0           1
#>            ham  0.998861048 0.001138952
#>            spam 1.000000000 0.000000000
#> 
#> $abung
#>                abung
#> label_train_val 0
#>            ham  1
#>            spam 1
#> 
#> $acara
#>                acara
#> label_train_val           0           1
#>            ham  0.998861048 0.001138952
#>            spam 1.000000000 0.000000000
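
Note that several conditional probabilities above are exactly zero (for example, aagc never appears in spam), so the likelihood product collapses to zero whenever such a term occurs. A common remedy is Laplace (add-one) smoothing via the laplace argument of naiveBayes(); a minimal sketch of this optional refinement (naive_spam_laplace is a hypothetical name; the submitted model below keeps the unsmoothed naive_spam):

# add-one smoothing so that no term has a zero conditional probability
naive_spam_laplace <- naiveBayes(x = sms_train_bn_val,
                                 y = label_train_val,
                                 laplace = 1)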

4.2 2. Tokens (words) used for training the model

inspect(sms_train_val)
#> <<DocumentTermMatrix (documents: 1503, terms: 2877)>>
#> Non-/sparse entries: 16826/4307305
#> Sparsity           : 100%
#> Maximal term length: 22
#> Weighting          : term frequency (tf)
#> Sample             :
#>       Terms
#> Docs   anda atau dgn info kamu kuota paket pulsa saya sms
#>   1400    1    1   0    0    0     2     1     1    0   1
#>   196     0    0   0    0    0     0     0     0    0   0
#>   197     0    0   0    0    0     0     0     0    0   1
#>   225     0    0   0    0    0     0     0     0    0   0
#>   239     0    1   0    0    0     0     0     0    4   0
#>   29      0    1   0    0    0     0     0     0    4   0
#>   378     0    0   0    0    0     0     0     0    0   0
#>   403     0    0   0    0    0     0     0     0    0   0
#>   409     0    0   0    0    0     0     0     0    0   0
#>   410     1    0   1    2    0     0     2     2    0   7

4.3 3. Model Prediction

Predict the target class in sms_test_bn_val. Save it to the sms_pred_class object, which will be used to evaluate the confusion matrix.

# predict
dim(sms_test_bn_val)
#> [1]  501 2877
sms_pred_class <- predict(object = naive_spam,       # the naive bayes model
                          newdata = sms_test_bn_val, # testing data
                          type = "class")            # predict the class

To check the composition of the spam/ham predictions:

head(sms_pred_class)
#> [1] ham  spam spam ham  spam spam
#> Levels: ham spam
table(sms_pred_class)
#> sms_pred_class
#>  ham spam 
#>  273  228
prop.table(table(sms_pred_class))
#> sms_pred_class
#>       ham      spam 
#> 0.5449102 0.4550898

4.4 4. Model Evaluation

4.5 4.1. Confusion Matrix

Evaluate the naive_spam model using the confusion matrix and its metrics from library(caret):

library(caret)

sms_pred_test_val_cm <- confusionMatrix(data = sms_pred_class, # predicted labels
                reference = label_test_val, # actual labels
                positive = "spam") # positive class: spam

sms_pred_test_val_cm
#> Confusion Matrix and Statistics
#> 
#>           Reference
#> Prediction ham spam
#>       ham  264    9
#>       spam  20  208
#>                                           
#>                Accuracy : 0.9421          
#>                  95% CI : (0.9179, 0.9609)
#>     No Information Rate : 0.5669          
#>     P-Value [Acc > NIR] : < 2e-16         
#>                                           
#>                   Kappa : 0.8828          
#>                                           
#>  Mcnemar's Test P-Value : 0.06332         
#>                                           
#>             Sensitivity : 0.9585          
#>             Specificity : 0.9296          
#>          Pos Pred Value : 0.9123          
#>          Neg Pred Value : 0.9670          
#>              Prevalence : 0.4331          
#>          Detection Rate : 0.4152          
#>    Detection Prevalence : 0.4551          
#>       Balanced Accuracy : 0.9441          
#>                                           
#>        'Positive' Class : spam            
#> 

Which cases do we want to minimize?

  • False negative: the SMS is actually spam, but the model predicts ham (consequently, spam SMS enters our inbox).
  • False positive: the SMS is actually ham, but it is predicted to be spam (as a result, ham SMS goes to our spam folder).

Minimize: false positives.

  1. Which metric do we pay attention to, and why?
  • Precision: a high precision value means the model selects spam SMS as selectively as possible, reducing the chance of an important (ham) SMS being flagged as spam.
  2. According to this metric, is our model performing well enough?
  • Pos Pred Value = 0.9123, so our model’s performance is very good (see the quick check below).
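
Precision and recall can also be read directly from the caret confusion-matrix object; a quick check, using the sms_pred_test_val_cm computed above:

# extract precision and recall from the confusion-matrix object
sms_pred_test_val_cm$byClass[c("Precision", "Recall")]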

4.6 4.2. 💻 ROC and AUC

ROC and AUC as model evaluation tools:

  1. ROC is a curve depicting TPR vs FPR at each threshold, while AUC is the area under the ROC curve.
  2. The more convex the ROC curve (that is, the closer the AUC is to one), the better the model separates the positive and negative classes.

4.6.1 4.2.1. Prepare a prediction result

First, prepare the prediction result in the form of probabilities for sms_test_bn_val, and save it to an object named sms_pred_prob:

sms_pred_prob <- predict(naive_spam, sms_test_bn_val, type = "raw")
head(sms_pred_prob)
#>               ham         spam
#> [1,] 1.000000e+00 4.174301e-17
#> [2,] 4.828383e-14 1.000000e+00
#> [3,] 1.158057e-05 9.999884e-01
#> [4,] 9.996457e-01 3.542589e-04
#> [5,] 2.466969e-04 9.997533e-01
#> [6,] 1.471589e-01 8.528411e-01

4.6.2 4.2.2. Prepare ROC data

Prepare the ROC data to make things easier, saving it to an object named data_sms_roc:

data_sms_roc <- data.frame(pred_prob = sms_pred_prob[,'spam'],
                           actual_label = ifelse(label_test_val == 'spam', 1, 0))
head(data_sms_roc)

4.6.3 4.2.3. Prepare a ROC Curve

Create the ROC curve. Save the resulting prediction() object under the name sms_roc:

library(ROCR)


sms_roc <- prediction(predictions = data_sms_roc$pred_prob,
                      labels = data_sms_roc$actual_label)

plot(performance(sms_roc, "tpr", "fpr"))
abline(0, 1, lty = 2) # diagonal line: the performance of random guessing

4.6.4 4.2.4. Calculate the AUC

Calculate the AUC value:

sms_auc <- performance(sms_roc, measure = "auc")
sms_auc@y.values
#> [[1]]
#> [1] 0.9841955

The AUC value is 0.9841955, so we can conclude that the Naive Bayes spam classifier performs very well at separating the positive class (spam) from the negative class (ham).

5 📝 SUMMARY / CONCLUSION

  1. ROC and AUC as model evaluation tools:

    1. ROC is a curve depicting TPR vs FPR for each threshold, while AUC is the area under the ROC curve.
    2. The more convex the ROC curve (that is, the AUC approaches the value of one), the better the model is at separating positive and negative classes.
  2. Naive Bayes is often used for text classification cases, because its computation is fast even with a large number of word features.

  3. The workflow for text mining is:

  1. Read data and wrangling
  2. Data exploration
  3. Text cleansing
  4. Tokenization (DocumentTermMatrix)
  5. Cross-validation
  6. Further preprocessing: i. feature selection with findFreqTerms() ii. Bernoulli converter
  7. Train model and prediction
  8. Model evaluation

For this capstone project, SMS SPAM classification with a Naive Bayes model: based on the validation results and the ROC & AUC, we can conclude that the model is highly predictive. With an AUC of 0.9841955, the Naive Bayes spam classifier separates the positive class (spam) from the negative class (ham) very well.


6 SUBMISSION

7 A. Predict on Data Test

Prepare the data test: read and transform it in the same way as the training data during model development.

sms_datatest <- read.csv("4. sms-cl-spam/data/data-test.csv",
                         stringsAsFactors = FALSE,
                         encoding = "UTF-8") %>% 
  mutate(status = as.factor(status))

transformer <- content_transformer(FUN = function(x, pattern){
  gsub(x = x, 
       pattern = pattern, 
       replacement = " ") 
})

sms.corpus_test <- 
    VCorpus(VectorSource(sms_datatest$text)) %>% 
    tm_map(content_transformer(tolower)) %>% 
    tm_map(removeNumbers) %>% 
    tm_map(removeWords, stopwords("english")) %>% 
    tm_map( transformer, "/") %>% 
    tm_map( transformer, "@") %>% 
    tm_map( transformer, "-") %>% 
    tm_map( transformer, "\\.") %>%  # \\. matches "." at the start, middle, or end of a sentence
    tm_map(transformer, "<[^>]+>.") %>%  # remove HTML-like tags
    tm_map(transformer, "@\\S+.") %>%  # remove mentions
    tm_map( transformer, "*#") %>%  # remove hashtags
    tm_map(transformer, "http[^[:space:]]*.") %>%  # remove URLs
    tm_map(transformer, "&amp;.") %>%   # remove &amp; entities
    tm_map(transformer, "[[:punct:]].") %>%  # remove remaining punctuation sequences (e.g. "...")
    tm_map(transformer, "[^[:alpha:][:space:]].") %>%  # remove all remaining non-letter characters
    tm_map(removePunctuation) %>% 
    tm::tm_map(stemDocument) %>% 
    tm_map(stripWhitespace)

glimpse(sms_datatest)
#> Rows: 283
#> Columns: 3
#> $ datetime <chr> "2018-03-01T00:32:00Z", "2018-03-01T08:57:00Z", "2018-03-0...
#> $ text     <chr> "Km baru saja akses Apps Sehari-hari terpopuler.Nikmati ak...
#> $ status   <fct> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA...
# check the 9th document
sms.corpus_test[[9]]$content
#> [1] "preorder samsung galaxi s cashback s d rpribu disc s d rpribu bln dgn kartukredit di samsung store tertentu s d mar s info"
sms_test.dtm  <- DocumentTermMatrix(x = sms.corpus_test)
inspect(sms_test.dtm)
#> <<DocumentTermMatrix (documents: 283, terms: 808)>>
#> Non-/sparse entries: 4021/224643
#> Sparsity           : 98%
#> Maximal term length: 22
#> Weighting          : term frequency (tf)
#> Sample             :
#>      Terms
#> Docs  anda axi diblokir info jam kamu ketik registrasi sms ulang
#>   134    0   0        0    1   0    1     0          0   2     0
#>   180    1   0        0    0   0    0     1          0   1     0
#>   200    0   0        0    0   0    0     0          0   0     0
#>   208    1   0        0    0   0    0     1          1   1     2
#>   209    1   0        0    0   0    0     1          1   1     2
#>   212    1   0        0    0   0    0     1          1   1     2
#>   64     2   0        0    0   0    0     0          0   1     0
#>   82     2   0        0    0   0    0     0          0   1     0
#>   84     2   0        0    0   0    0     0          0   1     0
#>   88     2   0        0    0   0    0     0          0   1     0
index_test <- sample(nrow(sms_test.dtm))
label_test_test <- sms_datatest[index_test, 'status'] # all NA: the test set is unlabeled

sms_test_test <- sms_test.dtm
prop.table(table(label_test_test))
#> numeric(0)
dim(label_test_test)
#> NULL
dim(sms_test_test)
#> [1] 283 808
sms_freq_test <- findFreqTerms(sms_test_test, lowfreq = 1) 
length(sms_freq_test)
#> [1] 808
sms_test_test1 <- sms_test_test[, sms_freq_test] # terms are in the columns
inspect(sms_test_test1)
#> <<DocumentTermMatrix (documents: 283, terms: 808)>>
#> Non-/sparse entries: 4021/224643
#> Sparsity           : 98%
#> Maximal term length: 22
#> Weighting          : term frequency (tf)
#> Sample             :
#>      Terms
#> Docs  anda axi diblokir info jam kamu ketik registrasi sms ulang
#>   134    0   0        0    1   0    1     0          0   2     0
#>   180    1   0        0    0   0    0     1          0   1     0
#>   200    0   0        0    0   0    0     0          0   0     0
#>   208    1   0        0    0   0    0     1          1   1     2
#>   209    1   0        0    0   0    0     1          1   1     2
#>   212    1   0        0    0   0    0     1          1   1     2
#>   64     2   0        0    0   0    0     0          0   1     0
#>   82     2   0        0    0   0    0     0          0   1     0
#>   84     2   0        0    0   0    0     0          0   1     0
#>   88     2   0        0    0   0    0     0          0   1     0
dim(sms_test_test1)
#> [1] 283 808
# re-define the same Bernoulli converter used for the training data
bernoulli_conv <- function(x){
  # ifelse arguments: condition, value if TRUE, value if FALSE
  x <- as.factor(ifelse(x > 0, 1, 0)) 
  return(x)
}

sms_test_bn_test <- apply(X = sms_test_test1, MARGIN = 2, FUN = bernoulli_conv)

dim(sms_test_bn_test)
#> [1] 283 808
sms_test_bn_test_df <- data.frame(sms_test_bn_test)
pred_test <- predict(object = naive_spam,           # the naive bayes model
                     newdata = sms_test_bn_test_df, # testing data
                     type = "class")                # predict the class

pred_test_df  <- data.frame(pred_test)

dim(sms_test_bn_test_df)
#> [1] 283 808
table(pred_test_df)
#> pred_test_df
#>  ham spam 
#>   88  195
head(pred_test)
#> [1] spam spam spam spam ham  spam
#> Levels: ham spam
table(pred_test)
#> pred_test
#>  ham spam 
#>   88  195
prop.table(table(pred_test))
#> pred_test
#>       ham      spam 
#> 0.3109541 0.6890459
library(wordcloud)

wordcloud(sms_datatest$text, max.words = 200, scale = c(2, 0.5), random.order = FALSE)

8 B. Create Submission Data

submission <- sms_test_bn_test_df %>% 
  mutate(datetime = sms_datatest$datetime,
         text = sms_datatest$text
         ) %>% 
  mutate(status = pred_test) %>% 
  select(datetime,status)

table(submission$status)
#> 
#>  ham spam 
#>   88  195
prop.table(table(submission$status))
#> 
#>       ham      spam 
#> 0.3109541 0.6890459
# save data
write.csv(submission, "submission-dwi_susiyanto.csv", row.names = F)

# check the first 3 rows
head(submission, 3)
nrow(submission)
#> [1] 283