To build a model that predicts the next word, I first have to understand the distribution of, and relationships between, the words, tokens, and phrases in the text. In this exercise we are given three (3) types of text, drawn from blogs, news, and Twitter, to study and analyse. We have to carry out a thorough exploratory analysis of the data and understand the basic distribution of words and the relationships between words in the corpora.
This report contains tables and graphs showing the variation in the frequencies of single words (unigrams) and word pairs and triples (bigrams and trigrams) in the data.
The following software packages were loaded in RStudio:
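(The original package list did not survive knitting; the set below is inferred from the functions used later in this report.)

library(stringi)   # stri_stats_latex() for word counts
library(tm)        # VCorpus(), tm_map() and the cleaning transformations
library(SnowballC) # stemming backend for tm's stemDocument()
library(qdap)      # sent_detect() for sentence splitting
library(RWeka)     # NGramTokenizer() and Weka_control() for n-grams
library(ggplot2)   # bar plots of the top n-grams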
blogs = readLines("~/Capstone/final/en_US/en_US.blogs.txt", encoding = "UTF-8", skipNul = TRUE)
news = readLines("~/Capstone/final/en_US/en_US.news.txt", encoding = "UTF-8", skipNul = TRUE)
## Warning in readLines("~/Capstone/final/en_US/en_US.news.txt", encoding
## = "UTF-8", : incomplete final line found on '~/Capstone/final/en_US/
## en_US.news.txt'
twitter = readLines("~/Capstone/final/en_US/en_US.twitter.txt", encoding = "UTF-8", skipNul = TRUE)

# Sample a smaller number of lines from the 3 texts
# (rbinom() draws line indices, which concentrate around the middle of each file)
set.seed(123)
blogs <- blogs[rbinom(length(blogs) * .003, length(blogs), .5)]
news <- news[rbinom(length(news) * .030, length(news), .5)]
twitter <- twitter[rbinom(length(twitter) * .004, length(twitter), .5)]
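(Note: because rbinom(n, size, 0.5) draws indices that cluster around the middle of each file, this sample is not uniform over lines. A uniform alternative would use sample(), as sketched below; this is not what the report actually ran, and re-running with it would change the sampled word counts reported later.)

set.seed(123)
blogs <- sample(blogs, floor(length(blogs) * .003))
news <- sample(news, floor(length(news) * .030))
twitter <- sample(twitter, floor(length(twitter) * .004))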
The following code was used to give the word counts; stri_stats_latex() was run on the original texts before sampling and again on the smaller sampled documents:

stri_stats_latex(blogs)
stri_stats_latex(news)
stri_stats_latex(twitter)
The word counts of the original texts and the sampled texts are given in the following table:
##   doc.text original sampled
## 1    blogs 37570839  109560
## 2     news  2651432   76958
## 3  twitter 30451170  122711
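(For reference, the word counts come from the "Words" entry of the named vector returned by stri_stats_latex(). A sketch of how the table could be assembled, where original_blogs, original_news and original_twitter are assumed to be copies of the texts kept from before sampling:)

original <- c(stri_stats_latex(original_blogs)[["Words"]],
              stri_stats_latex(original_news)[["Words"]],
              stri_stats_latex(original_twitter)[["Words"]])
sampled <- c(stri_stats_latex(blogs)[["Words"]],
             stri_stats_latex(news)[["Words"]],
             stri_stats_latex(twitter)[["Words"]])
data.frame(doc.text = c("blogs", "news", "twitter"), original = original, sampled = sampled)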
In the following section we set up a corpus of the sampled data for cleaning and analysis. We also show five examples of the cleaned text without stop words.
bnttext = c(blogs, news, twitter)
bnttext_sent = sent_detect(bnttext, language = "en", model = NULL)
rm(blogs, news, twitter)

corpus <- VCorpus(VectorSource(bnttext_sent))

# Drop non-UTF-8 characters and set lower case
CleanCorpora <- function(corpus) {
  corpus <- tm_map(corpus, content_transformer(function(x) iconv(enc2utf8(x), sub = "byte")))
  corpus <- tm_map(corpus, content_transformer(function(x) iconv(x, 'latin1', 'ASCII', sub = '')))
  corpus <- tm_map(corpus, content_transformer(tolower))
  corpus
}
corpus <- CleanCorpora(corpus)

corpus <- tm_map(corpus, removeNumbers)

remover <- content_transformer(function(x, pattern) gsub(pattern, ' ', x))
corpus <- tm_map(corpus, remover, '[@][a-zA-Z0-9_]{1,15}') # remove Twitter usernames
corpus <- tm_map(corpus, remover, 'Ã|½Ã|¸¥')               # remove mis-encoded characters
corpus <- tm_map(corpus, remover, 'í ½í¸¥')                # remove a mis-encoded emoji
corpus <- tm_map(corpus, stripWhitespace)
corpus <- tm_map(corpus, removePunctuation)

# removeWords expects a character vector, so pass the lines directly;
# wrapping them in VectorSource() is what triggered the original error
bad_words <- readLines("~/Capstone/Terms-to-Block.csv")
corpus <- tm_map(corpus, removeWords, bad_words)
## Error in rank(x, ties.method = "min", na.last = "keep"): unimplemented type 'list' in 'greater'
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stemDocument, language = "english") # stemming the words

fulldata <- data.frame(text = unlist(sapply(corpus, `[`, "content")), stringsAsFactors = FALSE)
fulldata[1:5, 1]
## [1] " friend fill empti worri fear"
## [2] " trek jungl wild anim torrid desert travers danger eventu get hors gallop mountain love villag fill music song love stay carri blind snow fight wolv bite bitter wind"
## [3] " speci man grown one stage poorer longer possess strength interpret creat fiction produc nihilist"
## [4] " nihilist man judg world world exist"
## [5] "accord view exist mean patho vain nihilist pathosat time patho inconsist part nihilist"
The size of the cleaned corpus without stop words is 28.9 MB:

## 28.9 MB

The following code tokenizes the cleaned text into unigrams, bigrams, and trigrams, tabulates their frequencies, and keeps the 15 most frequent of each:

one_grams <- NGramTokenizer(fulldata$text, Weka_control(min = 1, max = 1))
bi_grams <- NGramTokenizer(fulldata$text, Weka_control(min = 2, max = 2, delimiters = " \\r\\n\\t.,;:\"()?!"))
tri_grams <- NGramTokenizer(fulldata$text, Weka_control(min = 3, max = 3, delimiters = " \\r\\n\\t.,;:\"()?!"))

one_gramsDF <- data.frame(table(one_grams))
bi_gramsDF <- data.frame(table(bi_grams))
tri_gramsDF <- data.frame(table(tri_grams))

# Sort by frequency and keep the top 15 of each
unigrams_sorted <- one_gramsDF[order(one_gramsDF$Freq, decreasing = TRUE), ]
bigrams_sorted <- bi_gramsDF[order(bi_gramsDF$Freq, decreasing = TRUE), ]
trigrams_sorted <- tri_gramsDF[order(tri_gramsDF$Freq, decreasing = TRUE), ]

top15unigram <- unigrams_sorted[1:15, ]
top15bigram <- bigrams_sorted[1:15, ]
top15trigram <- trigrams_sorted[1:15, ]

If we keep the stop words, the cleaned examples are more readable:
## [1] "our friend fill with empti worri and fear"
## [2] "the trek through jungl with wild anim the torrid desert that have to be travers with all their danger eventu they get hors gallop up mountain and down into love villag fill with music and song he would have love to stay there but they had to carri on through blind snow fight wolv and the bite bitter wind"
## [3] "this same speci of man grown one stage poorer no longer possess the strength to interpret to creat fiction produc nihilist"
## [4] "a nihilist is a man who judg of the world as it is that it ought not to be and of the world as it ought to be that it doe not exist"
## [5] "accord to this view our exist has no mean the patho of in vain is the nihilist pathosat the same time as patho an inconsist on the part of the nihilist"
In this case the size of the corpus is slightly larger:

## 29.1 MB

The following three plots show the top 15 unigrams, bigrams, and trigrams:
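(The plot images themselves did not survive knitting. A sketch of how each plot could be produced from the top-15 tables, using the unigram table as the pattern; the bigram and trigram plots are analogous:)

library(ggplot2)
# Bar plot of the 15 most frequent unigrams, most frequent at the top
ggplot(top15unigram, aes(x = reorder(one_grams, Freq), y = Freq)) +
  geom_bar(stat = "identity") +
  coord_flip() +
  labs(title = "Top 15 unigrams", x = "Unigram", y = "Frequency")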

The original dataset is very large: it contains more than 70 million words across blogs.txt, news.txt, and twitter.txt. It took a long time to download and would take far longer to analyse, so a sample of about 300 thousand words drawn from the three original texts was used in the exploratory analysis to avoid a very long runtime.
The study also compared the results of including and excluding stop words in the sampled corpus. Since the goal is a predictive model for normal everyday language, stop words should not be excluded from the corpus used to build the final predictive model.
The five examples from the two differently cleaned sets of sampled data also show that the text-mining and cleaning algorithm still needs improvement.
First, I will need to divide the sampled dataset into a training set and a testing set, for the initial study and the subsequent testing of the predictive model.
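(A minimal sketch of such a split, assuming an 80/20 partition of the cleaned sentences held in fulldata:)

set.seed(123)
train_idx <- sample(seq_len(nrow(fulldata)), size = floor(0.8 * nrow(fulldata)))
training <- fulldata[train_idx, , drop = FALSE]
testing <- fulldata[-train_idx, , drop = FALSE]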
As mentioned earlier, I should keep stop words and fine-tune the cleaning process before arriving at the final model.
The final product will be a Shiny app that takes a phrase (multiple words) as input and, when the user clicks Submit, predicts the next word.
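(A minimal sketch of the planned interface; predict_next_word() is a hypothetical placeholder for the eventual prediction function:)

library(shiny)

ui <- fluidPage(
  textInput("phrase", "Enter a phrase:"),
  actionButton("submit", "Submit"),
  textOutput("prediction")
)

server <- function(input, output) {
  output$prediction <- renderText({
    input$submit                              # re-evaluate only when Submit is clicked
    isolate(predict_next_word(input$phrase))  # predict_next_word() is hypothetical
  })
}

shinyApp(ui = ui, server = server)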
I will try both a backoff model and an interpolated smoothing (Kneser-Ney) model for predicting the next word, and then decide which model to adopt. I may also try the "stupid backoff" implementation. I would appreciate feedback on which of these models has proven accurate and efficient in terms of memory usage and response time.
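(For illustration, a minimal sketch of stupid backoff over the frequency tables built above; predict_next() is my own illustration, and a full implementation would also apply the 0.4 discount when ranking candidates from different n-gram orders against each other:)

predict_next <- function(w1, w2) {
  # Highest order first: trigrams beginning "w1 w2 ..." (tables are already frequency-sorted)
  tri <- as.character(trigrams_sorted$tri_grams)
  hit <- tri[grepl(paste0("^", w1, " ", w2, " "), tri)][1]
  if (!is.na(hit)) return(tail(strsplit(hit, " ")[[1]], 1))
  # Back off to bigrams beginning "w2 ..."; in full stupid backoff this score is discounted by 0.4
  bi <- as.character(bigrams_sorted$bi_grams)
  hit <- bi[grepl(paste0("^", w2, " "), bi)][1]
  if (!is.na(hit)) return(tail(strsplit(hit, " ")[[1]], 1))
  # Last resort: the most frequent unigram
  as.character(unigrams_sorted$one_grams[1])
}

predict_next("love", "villag")  # stemmed tokens, matching the cleaned corpus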
The list of terms to block was downloaded from the following website: http://www.frontgatemedia.com/a-list-of-723-bad-words-to-blacklist-and-how-to-use-facebooks-moderation-tool/