Introduction

The goal of this project is simply to demonstrate that you have become comfortable working with the data and that you are on track to create your prediction algorithm. Please submit a report on RPubs that explains your exploratory analysis and your goals for the eventual app and algorithm.

This document should be concise and explain only the major features of the data you have identified and briefly summarize your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data scientist manager. You should make use of tables and plots to illustrate important summaries of the data set.

The motivation for this project is to:
1. Demonstrate that you’ve downloaded the data and have successfully loaded it in.
2. Create a basic report of summary statistics about the data sets.
3. Report any interesting findings that you have amassed so far.
4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

Exploratory Analysis

setwd("~/Desktop/Exercises/Course10/Course_Project")
library(tidyverse)
library(lubridate)
library(ggplot2)
library(gridExtra)
library(stringi)
library(wordcloud)
library(tm)
#library(RWeka)

1. Demonstrate that you’ve downloaded the data and have successfully loaded it in.

if(!file.exists("./final/en_US/en_US.news.txt")){
  url <- 'https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip'
  if(!dir.exists("./data")) dir.create("./data")   # make sure the download folder exists
  download.file(url, destfile = "./data/Coursera-SwiftKey.zip", mode = "wb")
  unzip("./data/Coursera-SwiftKey.zip")            # extracts into ./final/en_US/
}

We take a look at the three English (en_US) data sets:

con1 <- file("./final/en_US/en_US.news.txt", open="r")
US_news <- readLines(con1, skipNul = TRUE)
close(con1)

con2 <- file("./final/en_US/en_US.blogs.txt", open="r")
US_blogs <- readLines(con2, skipNul = TRUE)
close(con2)

# skipNul avoids warnings about embedded nul characters, notably in the Twitter file
con3 <- file("./final/en_US/en_US.twitter.txt", open="r")
US_twitter <- readLines(con3, skipNul = TRUE)
close(con3)

2. Create a basic report of summary statistics about the data sets.

Basic file statistics:

file_stat <- function(file_name, lines){
    # File size in MB, plus line, character and word counts
    file_size <- file.info(file_name)$size / 1024^2
    line_num <- stri_stats_general(lines)[['Lines']]
    char_num <- stri_stats_general(lines)[['Chars']]
    word_num <- sum(sapply(strsplit(lines, "\\s+"), length))
    return(c(file_name, format(round(file_size, 2), nsmall = 2),
             line_num, char_num, word_num))
}

US_news_stat <- file_stat("./final/en_US/en_US.news.txt", US_news)
US_blogs_stat <- file_stat("./final/en_US/en_US.blogs.txt", US_blogs)
US_twitter_stat <- file_stat("./final/en_US/en_US.twitter.txt", US_twitter)

file_summary <- data.frame(matrix(c(US_news_stat, US_blogs_stat, US_twitter_stat), nrow = 3, byrow = TRUE))
colnames(file_summary) <- c("File_names", "Size(MB)", "Line_Count", "Char_Count", "Words_Count")
print(file_summary)
##                        File_names Size(MB) Line_Count Char_Count Words_Count
## 1    ./final/en_US/en_US.news.txt   196.28    1010242  203223154    34372814
## 2   ./final/en_US/en_US.blogs.txt   200.42     899288  206824382    37334149
## 3 ./final/en_US/en_US.twitter.txt   159.36    2360148  162096031    30373565
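
As a quick cross-check on the word counts above (which simply split on whitespace), stringi also provides stri_count_words(); it uses slightly different word-boundary rules, so its totals are expected to differ a little from the table. A minimal sketch:

# Optional cross-check of the word counts using stringi's own word counter
sapply(list(news = US_news, blogs = US_blogs, twitter = US_twitter),
       function(x) sum(stri_count_words(x)))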

3. Report any interesting findings that you have amassed so far.

  1. We can take a look at the top 10 most frequently used words by randomly sampling 1,000 lines from each of the three original data sets:

makeCorpus <- function(lines){
    # Sample 1000 lines and build a tm corpus from them
    set.seed(12345)
    lines <- sample(lines, size = 1000)
    VCorpus(VectorSource(lines))
}

cleanCorpus <- function(corpus){
    # Replace a few separator characters with spaces before further cleaning
    toSpace <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
    corpus <- tm_map(corpus, toSpace, "/")
    corpus <- tm_map(corpus, toSpace, "@")
    corpus <- tm_map(corpus, toSpace, "\\|")

    # Lower-case, then strip punctuation, numbers, extra whitespace and English stop words
    corpus <- tm_map(corpus, content_transformer(tolower))
    corpus <- tm_map(corpus, removePunctuation)
    corpus <- tm_map(corpus, removeNumbers)
    corpus <- tm_map(corpus, stripWhitespace)
    corpus <- tm_map(corpus, removeWords, stopwords("english"))
    return(corpus)
}
  
Top_wording <- function(corpus, n){
    # Build a term-document matrix and return the n most frequent terms
    term_sparse <- TermDocumentMatrix(corpus)
    term_matrix <- as.matrix(term_sparse)
    v <- sort(rowSums(term_matrix), decreasing = TRUE)
    df <- data.frame(word = names(v), freq = v)
    head(df, n)
}
news_Corp <- makeCorpus(US_news)
news_Corp <- cleanCorpus(news_Corp)
blogs_Corp <- makeCorpus(US_blogs)
blogs_Corp <- cleanCorpus(blogs_Corp)
twitter_Corp <- makeCorpus(US_twitter)
twitter_Corp <- cleanCorpus(twitter_Corp)

news_top10<-Top_wording(news_Corp,10)
p1<-ggplot(data=news_top10, aes(x=reorder(word,-freq),y=freq))+
  geom_bar(stat="identity")+
  labs(x="Word", y="Frequency" )+
  theme(axis.text.x = element_text(angle=45, hjust=1))+
  ggtitle("US News Top 10 Frequently Used Words")

blogs_top10<-Top_wording(blogs_Corp,10)
p2<-ggplot(data=blogs_top10, aes(x=reorder(word,-freq),y=freq))+
  geom_bar(stat="identity")+
  labs(x="Word", y="Frequency" )+
  theme(axis.text.x = element_text(angle=45, hjust=1))+
  ggtitle("US Blogs Top 10 Frequently Used Words")

twitter_top10<-Top_wording(twitter_Corp,10)
p3<-ggplot(data=twitter_top10, aes(x=reorder(word,-freq),y=freq))+
  geom_bar(stat="identity")+
  labs(x="Word", y="Frequency" )+
  theme(axis.text.x = element_text(angle=45, hjust=1))+
  ggtitle("US Twitter Top 10 Frequently Used Words")

grid.arrange(p1, p2, p3, nrow = 1, widths = c(6, 6, 6))

From the plots we can see that “said”, “will”, and “can” are among the most frequent words in the US news sample; “one”, “just”, and “like” in the US blogs sample; and “just”, “like”, and “love” in the US Twitter sample. Note that these rankings are computed after removing English stop words, so very common function words such as “the” and “and” do not appear.

  2. We can also generate word clouds for the three samples:

set.seed(12345)
news<-Top_wording(news_Corp,1000)
wordcloud(words = news$word, freq = news$freq, min.freq = 1,
          max.words=200, random.order=FALSE, rot.per=0.35, 
          colors=brewer.pal(8, "Dark2"))

blogs<-Top_wording(blogs_Corp,1000)
wordcloud(words = blogs$word, freq = blogs$freq, min.freq = 1,
          max.words=200, random.order=FALSE, rot.per=0.35, 
          colors=brewer.pal(8, "Dark2"))

twitter<-Top_wording(twitter_Corp,1000)
wordcloud(words = twitter$word, freq = twitter$freq, min.freq = 1,
          max.words=200, random.order=FALSE, rot.per=0.35, 
          colors=brewer.pal(8, "Dark2"))

As can be seen from the word clouds, “said” is the most frequently used word in the US news sample, “one”, “just”, and “like” stand out in the US blogs sample, and “just”, “like”, and “thanks” are frequent in the US Twitter sample, which is consistent with the bar plots above.
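
The same tooling can be extended from single words to word pairs. The RWeka package was left commented out in the setup above; assuming it (and a Java runtime) is installed, a bigram frequency table for the news sample could be built as in the sketch below (not run here).

# Sketch only: bigram frequencies for the sampled news corpus (requires RWeka/Java)
library(RWeka)
BigramTokenizer <- function(x) NGramTokenizer(x, Weka_control(min = 2, max = 2))
news_tdm2 <- TermDocumentMatrix(news_Corp, control = list(tokenize = BigramTokenizer))
news_bigrams <- sort(rowSums(as.matrix(news_tdm2)), decreasing = TRUE)
head(news_bigrams, 10)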

4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

Next, a predictive model will be built and deployed in a Shiny app. The objective of the model is to predict the next word after the user types the beginning of a sentence. A short illustration of one possible approach follows.
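
As a rough, purely illustrative sketch (the helper names tokenize, build_bigrams, and predict_next are made up for this example, and the final model will likely use larger samples, higher-order n-grams, and a backoff/smoothing strategy), a naive bigram “most frequent follower” predictor could look like this:

# Illustrative sketch only: a naive bigram "most frequent follower" predictor
tokenize <- function(text){
    # Lower-case and split on anything that is not a letter or apostrophe
    words <- unlist(strsplit(tolower(text), "[^a-z']+"))
    words[words != ""]
}

build_bigrams <- function(lines){
    # Consecutive word pairs (ignores sentence/line boundaries for simplicity)
    w <- tokenize(lines)
    data.frame(first = head(w, -1), second = tail(w, -1), stringsAsFactors = FALSE)
}

predict_next <- function(bigrams, phrase, n = 3){
    # Return the n words that most often follow the last word of the phrase
    last_word <- tail(tokenize(phrase), 1)
    followers <- bigrams$second[bigrams$first == last_word]
    if(length(followers) == 0) return(character(0))
    head(names(sort(table(followers), decreasing = TRUE)), n)
}

set.seed(12345)
blog_bigrams <- build_bigrams(sample(US_blogs, 1000))
predict_next(blog_bigrams, "one of the")

The Shiny app would wrap a refined version of predict_next() behind a text input box, so that suggested next words update as the user types.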