1 Assignment Requirement

The goal of this project is simply to demonstrate that you have become familiar with the data and that you are on track to create your prediction algorithm. Please submit a report on RPubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise, explain only the major features of the data you have identified, and briefly summarize your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data-scientist manager. You should make use of tables and plots to illustrate important summaries of the data set.

The motivation for this project is to:

  1. Demonstrate that you’ve downloaded the data and have successfully loaded it in.
  2. Create a basic report of summary statistics about the data sets.
  3. Report any interesting findings you have amassed so far.
  4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

2 Download and Load Data

# To avoid downloading the huge dataset again and again, it is downloaded and
# unzipped only when the data directory is missing.

# Get current working dir
maindir <- getwd()
subdir  <- "data"

# Download file and unzip only when the data directory is not present
chkdir <- dir.exists(file.path(maindir, subdir))
if (!chkdir) {
        # Create the data directory and make it the working directory
        dir.create(file.path(maindir, subdir), showWarnings = FALSE)
        setwd(file.path(maindir, subdir))
        
        # specify the source and destination of the download
        dest.file <- "Coursera-SwiftKey.zip"
        src.file <- "http://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip"
        
        # execute the download
        download.file(src.file, dest.file)
        # extract the files from the zip file
        unzip(dest.file)
        }
# Define file paths
filepath.blog <- "final/en_US/en_US.blogs.txt"
filepath.twit <- "final/en_US/en_US.twitter.txt"
filepath.news <- "final/en_US/en_US.news.txt"
# Set the data as the working directory
setwd(file.path(maindir, subdir))

# Load the datasets into memory
blogs   <- readLines(filepath.blog, encoding="UTF-8")
twitter <- readLines(filepath.twit, encoding="UTF-8")
news    <- readLines(filepath.news, encoding="UTF-8")

3 Summary statistics about the dataset

3.1 Analysing lines and characters

# library for character string analysis
library(stringi)
# library for plotting
library(ggplot2)

# Analysing lines and characters
stri_stats_general( blogs )
##       Lines LinesNEmpty       Chars CharsNWhite 
##      899288      899288   206824382   170389539
stri_stats_general( twitter )
##       Lines LinesNEmpty       Chars CharsNWhite 
##     2360148     2360148   162096031   134082634
stri_stats_general( news )
##       Lines LinesNEmpty       Chars CharsNWhite 
##     1010242     1010242   203223154   169860866
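
For an easier side-by-side comparison, the three sets of statistics can also be combined into a single table (a small sketch that reuses the objects already in memory):

# Combine the general statistics of the three sources into one table
rbind(blogs   = stri_stats_general(blogs),
      twitter = stri_stats_general(twitter),
      news    = stri_stats_general(news))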

3.2 Summary of the corpus

3.2.1 Blogs

summary(blogs)
##    Length     Class      Mode 
##    899288 character character
blog_words <- stri_count_words(blogs); summary(blog_words); qplot(blog_words)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    0.00    9.00   28.00   41.75   60.00 6726.00
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.

3.2.2 Twitter

summary(twitter)
##    Length     Class      Mode 
##   2360148 character character
twit_words <- stri_count_words(twitter); summary( twit_words); qplot(twit_words)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00    7.00   12.00   12.75   18.00   47.00
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.

3.2.3 News

summary(news)
##    Length     Class      Mode 
##   1010242 character character
news_words <- stri_count_words(news); summary(news_words); qplot(news_words)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00   19.00   32.00   34.41   46.00 1796.00
## stat_bin: binwidth defaulted to range/30. Use 'binwidth = x' to adjust this.
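
The binwidth messages above can be silenced, and the long right tail of the word counts handled, by setting an explicit binwidth and capping the x axis. Below is a sketch for the news word counts; the cut-off of 200 words is purely illustrative, and the same idea applies to the blogs and Twitter histograms:

# Explicit binwidth and a capped x axis make the bulk of the distribution visible
qplot(news_words, binwidth = 5) + xlim(0, 200)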

4 Further analysis of the Corpus

4.1 Sampling the corpus for further analysis

# Set the seed so the sampling of the corpus is reproducible
set.seed(1000)
# Sampling of blogs
sampleBlogs <- blogs[sample(1:length(blogs), 1000)]; head(sampleBlogs, 3)
## [1] "Even beyond that, though, religion permeates our culture, our language, our traditions, our public rituals, our history, and yes, our political debate. More than anything else -- more than political party, more than political history, more than any cultural icon whether it be Shakespeare, Star Wars or John Wayne — Christian religion is at the core of what America believes in and relates to. Progressives ignore or dismiss religion at our peril: we will never get to a majority political coalition in this country without understanding religion and the people who believe in it."
## [2] "Da da da daan da, do lots."                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          
## [3] "The first step that I’d like you to focus on before you start making any kind of decisions is What is important to you? I’m going to tell you the order of importance that I personally have in my mind. Faith, family, Business. That’s the order that I live by."
# Sampling of twitter
sampleTwitter <- twitter[sample(1:length(twitter), 1000)]; head(sampleTwitter,3)
## [1] "I think people always believe what they want to believe, don't you? - Lillian Hellman"
## [2] "are you still on ? If so what is your store name ?"                                   
## [3] "Glad to have back on twitter :) made my night."
# Sampling of news
sampleNews <- news[sample(1:length(news), 1000)]; head(sampleNews, 3)
## [1] "Mr. Brennan, building on earlier remarks by lawyers for the State and Defense departments, and Attorney General Eric Holder, emphasized that drones offer the government a remarkable ability to \"distinguish more effectively between an al Qaeda terrorist and innocent civilians.\" He also said the U.S. doesn't engage in the strikes casually, but has a rigorous review process."
## [2] "\"Nothing changes. It will slow down for a couple of weeks, then it starts up again,\" said Elie Saade, one of the New Brunwick-licensed taxi drivers in municipal court Tuesday."                                                                                                                                                                                                       
## [3] "The Question: OK, But who is in charge?"
# Consolidating the sample files
sampleData <- c(sampleTwitter,sampleNews,sampleBlogs)
# Removing the original datasets from memory
rm(blogs); rm(twitter); rm(news)
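
Because the original data sets are removed from memory, the consolidated sample can optionally be written to disk so that later runs do not have to re-read the large source files (a sketch; the file name is illustrative):

# Optionally persist the consolidated sample for later runs
saveRDS(sampleData, "sampleData.rds")
# sampleData <- readRDS("sampleData.rds")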

4.2 Cleaning the Corpus

library(stringi)
library(tm)
## Loading required package: NLP
## 
## Attaching package: 'NLP'
## 
## The following object is masked from 'package:ggplot2':
## 
##     annotate
myCorpus <- Corpus(VectorSource(sampleData))
toSpace  <- content_transformer(function(x, pattern) gsub(pattern, " ", x))
myCorpus <- tm_map(myCorpus, toSpace,"\"|/|@|\\|")
myCorpus <- tm_map(myCorpus, content_transformer(tolower))
myCorpus <- tm_map(myCorpus, removeNumbers)
myCorpus <- tm_map(myCorpus, stripWhitespace)

# Remove stop words
myCorpus <- tm_map(myCorpus, removeWords, stopwords('english'))
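
Further cleaning steps planned for the final model (see section 5) could be added to the same pipeline in the same way. The sketch below is left commented out so it does not change the results reported here, and the profanity word list is purely illustrative:

# Possible further cleaning (not applied in this report)
# myCorpus  <- tm_map(myCorpus, removePunctuation)
# profanity <- readLines("profanity.txt")   # illustrative word list, not supplied
# myCorpus  <- tm_map(myCorpus, removeWords, profanity)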

4.3 Creating ngrams

library(RWeka)
myCorpusDF <- data.frame(text = unlist(sapply(myCorpus, `[`, "content")), 
                         stringsAsFactors = FALSE)

findNGrams <- function(corp, grams, top) {
        ngram <- NGramTokenizer(corp, Weka_control(min = grams, max = grams,
                                                   delimiters = " \\r\\n\\t.,;:\"()?!"))
        ngram <- data.frame(table(ngram))
        ngram <- ngram[order(ngram$Freq, decreasing = TRUE),][1:top,]
        colnames(ngram) <- c("Words","Count")
        ngram
        }

monoGrams   <- findNGrams(myCorpusDF, 1, 100)
biGrams     <- findNGrams(myCorpusDF, 2, 100)
triGrams    <- findNGrams(myCorpusDF, 3, 100)
quadriGrams <- findNGrams(myCorpusDF, 4, 100)
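
The resulting frequency tables can be inspected directly before plotting, for example the five most frequent word pairs (output not shown here):

# Peek at the most frequent bi-grams
head(biGrams, 5)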

4.4 Plotting nGrams

library(ggplot2)
# number of ngrams to show in the graph
n <- 20
# Plotting of the various nGrams
ggplot(monoGrams[1:n,], aes(Words, Count))   + geom_bar(stat = "identity") + 
        coord_flip()

ggplot(biGrams[1:n,], aes(Words, Count))     + geom_bar(stat = "identity") + 
        coord_flip()

ggplot(triGrams[1:n,], aes(Words, Count))    + geom_bar(stat = "identity") + 
        coord_flip()

ggplot(quadriGrams[1:n,], aes(Words, Count)) + geom_bar(stat = "identity") + 
        coord_flip()
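
In the plots above the bars follow the default alphabetical factor order. Re-ordering the Words factor by Count gives a more readable ranking; a sketch for the bi-grams:

# Order the bars by frequency instead of alphabetically
ggplot(biGrams[1:n,], aes(reorder(Words, Count), Count)) + 
        geom_bar(stat = "identity") + 
        coord_flip() + 
        xlab("Words")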

5 The Next Step

The next steps in the project are:

  1. Continuing to clean the corpus to increase the accuracy of the model
  2. Refining the sampling process for getting a good ngram representation without using the entire corpus
  3. Building the final prediction model and testing it

For predicting the current word, the application will use the mono-gram frequency data; for predicting the next word, it will use the bi-gram, tri-gram and quadri-gram frequency data.
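
As a very rough illustration of how these tables could drive the prediction, the sketch below looks for the longest n-gram whose leading words match the end of the input phrase and backs off to shorter n-grams when no match is found. The function name and the simple pattern matching are illustrative only and are not the final algorithm (for example, regular-expression metacharacters in the input are not handled):

# Illustrative back-off look-up over the frequency tables from section 4.3
predictNextWord <- function(phrase) {
        words  <- unlist(strsplit(tolower(phrase), "\\s+"))
        tables <- list(quadriGrams, triGrams, biGrams)
        prefixLengths <- c(3, 2, 1)   # leading words each table is matched on
        for (i in seq_along(tables)) {
                n <- prefixLengths[i]
                if (length(words) < n) next
                prefix <- paste(tail(words, n), collapse = " ")
                hits   <- tables[[i]][grepl(paste0("^", prefix, " "), tables[[i]]$Words), ]
                if (nrow(hits) > 0) {
                        best <- as.character(hits$Words[which.max(hits$Count)])
                        return(tail(unlist(strsplit(best, " ")), 1))
                }
        }
        # fall back to the most frequent single word
        as.character(monoGrams$Words[1])
}

# Example call (output depends on the sampled data)
predictNextWord("thanks for the")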