Introduction

The goal of this project is just to display that you’ve gotten used to working with the data and that you are on track to create your prediction algorithm. Please submit a report on R Pubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise and explain only the major features of the data you have identified and briefly summarize your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data scientist manager. You should make use of tables and plots to illustrate important summaries of the data set. The motivation for this project is to:
1. Demonstrate that you’ve downloaded the data and have successfully loaded it in.
2. Create a basic report of summary statistics about the data sets.
3. Report any interesting findings that you amassed so far.
4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

Download and Unzip Data

# download and unzip the Coursera SwiftKey corpus if it is not already present
if (!file.exists("myFile.zip")){
  download.file("https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip",
                "myFile.zip", quiet = TRUE, mode = "wb")
  unzip("myFile.zip")
}

The extracted folder contains four language sub-folders; we will focus on the US English one. That sub-folder contains three types of text: blogs, news and tweets.
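As a quick sanity check, the extracted English files can be listed from R (a minimal sketch, assuming the archive unpacked into the final/ directory used below):

# list the US English text files that were extracted from the archive
list.files("final/en_US")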

Load Data

# file paths for the three US English sources
blogs_file   <- "final/en_US/en_US.blogs.txt"
twitter_file <- "final/en_US/en_US.twitter.txt"
news_file    <- "final/en_US/en_US.news.txt"

# read each file line by line, keeping UTF-8 encoding and skipping embedded nulls
blogs   <- readLines(blogs_file, warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
twitter <- readLines(twitter_file, warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
news    <- readLines(news_file, warn = FALSE, encoding = "UTF-8", skipNul = TRUE)

Summary Table of Statistics

library(stringi)
library(data.table)
# line, word and character counts for each source
data.table(Category = c("Blogs", "News", "Twitter"),
           Number_lines = c(length(blogs), length(news), length(twitter)),
           Number_words = c(sum(stri_count_words(blogs)), sum(stri_count_words(news)), sum(stri_count_words(twitter))),
           Number_Characters = c(sum(nchar(blogs)), sum(nchar(news)), sum(nchar(twitter))))
##    Category Number_lines Number_words Number_Characters
## 1:    Blogs       899288     37546239         206824505
## 2:     News        77259      2674536          15639408
## 3:  Twitter      2360148     30093413         162096241

Random Sample Data

set.seed(123)
# take a 1% random sample of each source and combine them for exploration
twitter_sample <- sample(twitter, length(twitter) * 0.01, replace = FALSE)
blogs_sample   <- sample(blogs, length(blogs) * 0.01, replace = FALSE)
news_sample    <- sample(news, length(news) * 0.01, replace = FALSE)
data_sample    <- c(twitter_sample, blogs_sample, news_sample)

Clean Data

library(tm)
library(dplyr)
data_cleaner <- function(sample){
     # build a corpus from the character vector
     mycorpus <- VCorpus(VectorSource(sample))
     toSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
     # lower-case the text, replace dashes and equal signs with spaces, and remove
     # punctuation, numbers, English stop words and extra whitespace
     corpus_clean <- mycorpus %>%
          tm_map(content_transformer(tolower)) %>%
          tm_map(toSpace, "—") %>%
          tm_map(toSpace, "=") %>%
          tm_map(removePunctuation) %>%
          tm_map(removeNumbers) %>%
          tm_map(removeWords, stopwords("english")) %>%
          tm_map(stripWhitespace)
     
     # flatten the cleaned corpus back into a data frame of text lines
     corpus_clean_df <- data.frame(text = unlist(sapply(corpus_clean, `[`, "content")),
                                   stringsAsFactors = FALSE)
     return(corpus_clean_df)
}

Tokenization

library(RWeka)
# split the cleaned text into n-grams of length n and return them as a data frame
ngram_tokenizer <- function(corpus_clean, n) {
     ngram <- data.frame(token = NGramTokenizer(corpus_clean$text,
                                                Weka_control(min = n, max = n)),
                         stringsAsFactors = FALSE)
     return(ngram)
}

Plot

library(ggplot2)
myplot <- function(sample, n, cl, cat) {
     corpus_cleaned <- data_cleaner(sample)
     ngram <- ngram_tokenizer(corpus_cleaned, n)
     # count each token and keep the 15 most frequent ones
     t <- dplyr::count(ngram, token, name = "freq", sort = TRUE)
     t <- head(t, 15)
     
     gram <- switch(n, "Unigram", "Bigram", "Trigram")
     
     ggplot(t, aes(x = reorder(token, -freq), y = freq)) +
          geom_bar(stat = "identity", fill = cl) +
          theme(axis.text.x = element_text(angle = 45, hjust = 1)) +
          xlab("Gram") +
          ylab("Frequency") +
          ggtitle(paste(gram, cat))
}
g1 <- myplot(blogs_sample, 1, "red", "blogs")
g2 <- myplot(news_sample, 1, "blue", "news")
g3 <- myplot(twitter_sample, 1, "green", "twitter")

g4 <- myplot(blogs_sample, 2, "red", "blogs")
g5 <- myplot(news_sample, 2, "blue", "news")
g6 <- myplot(twitter_sample, 2, "green", "twitter")

g7 <- myplot(blogs_sample, 3, "red", "blogs")
g8 <- myplot(news_sample, 3, "blue", "news")
g9 <- myplot(twitter_sample, 3, "green", "twitter")

library(gridExtra)
grid.arrange(g1, g2, g3, g4, g5, g6, g7, g8, g9, nrow = 3)

Conclusion and Next Steps

The exploratory analysis above demonstrates that n-gram frequencies differ across the three sources: blogs, news and Twitter. However, our goal is a word-prediction model that works in any context, so we will pool the data from all three sources to train the model. The n-gram data frames created above will be used to estimate the probability of the next word occurring.
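As an illustration of that idea, a minimal sketch (the helper below and its column names are illustrative, not the final model) could turn unigram and bigram frequency tables into conditional probabilities with the maximum-likelihood estimate count(w1 w2) / count(w1):

# Illustrative sketch only: estimate P(next word | previous word) from
# unigram and bigram frequency tables that have columns token and freq.
bigram_prob <- function(unigrams, bigrams) {
     parts  <- strsplit(as.character(bigrams$token), " ", fixed = TRUE)
     first  <- sapply(parts, `[`, 1)
     second <- sapply(parts, `[`, 2)
     # maximum-likelihood estimate: count(w1 w2) / count(w1)
     prob <- bigrams$freq / unigrams$freq[match(first, as.character(unigrams$token))]
     data.frame(previous = first, word = second, prob = prob,
                stringsAsFactors = FALSE)
}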

The next step is to build a simple, fast Shiny app that lets the user type an input string. The model will extract the last n-gram from that input and return the next word with the highest probability; this prediction model will then be integrated into the Shiny app.
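A minimal sketch of that lookup, assuming a bigram probability table like the one above (the function name and columns are again illustrative):

# Illustrative sketch only: return the most probable next word for an input
# string, given a table with columns previous, word and prob.
predict_next_word <- function(input, bigram_table) {
     words <- unlist(strsplit(tolower(input), "\\s+"))
     last_word <- tail(words, 1)
     candidates <- bigram_table[bigram_table$previous == last_word, ]
     if (nrow(candidates) == 0) return(NA_character_)  # would back off to a shorter n-gram here
     candidates$word[which.max(candidates$prob)]
}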

We will also create a five-slide presentation summarizing our findings.