Introduction

The goal of this project is to show that you've become familiar with the data and that you are on track to create your prediction algorithm. Please submit a report on RPubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise, explain only the major features of the data you have identified, and briefly summarize your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data-scientist manager. You should make use of tables and plots to illustrate important summaries of the data set. The motivation for this project is to:

  1. Demonstrate that you've downloaded the data and have successfully loaded it in.
  2. Create a basic report of summary statistics about the data sets.
  3. Report any interesting findings that you have amassed so far.
  4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

Data collection

The source data used in this project are files taken from the set of corpora provided by HC Corpora (http://www.corpora.heliohost.org). Complete details about the data can be found at http://www.corpora.heliohost.org/aboutcorpus.html.

The data set can be obtained from Coursera at https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip. The files are classified by language and consist mostly of plain text.

setwd("~/course/capstone")
# Download and unzip source files
#download.file("https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip","Coursera-SwiftKey.zip")
#unzip("Coursera-SwiftKey.zip")

Raw Data Summary

The table below lists basic statistics for each file: file size on disk, object size in memory, number of lines, number of words, and maximum line length.

library(plyr)
library(xtable)

corp.data <- lapply(sprintf("%s/%s", "./final/en_US", c("en_US.blogs.txt", "en_US.news.txt", "en_US.twitter.txt")), function(p) {
    message(p)
    system(paste("wc -l", p))
    # skipNul avoids warnings from embedded nul characters in these files
    list(path=p, content=readLines(p, encoding="UTF-8", skipNul=TRUE))
})

corp.data.stats <- ldply(corp.data, function(ds) {
    # Line and word counts via the Unix 'wc' utility (requires a Unix-like system)
    wc.l <- as.integer(sub("^ *([0-9]+).*$", "\\1", system(paste("wc -l", ds$path), intern=TRUE)))
    wc.w <- as.integer(sub("^ *([0-9]+).*$", "\\1", system(paste("wc -w", ds$path), intern=TRUE)))
    data.frame(path=ds$path,
               file.size=round(file.info(ds$path)[, "size"]/(2^20), 1),
               obj.size=round(object.size(ds$content)[1]/(2^20), 1),
               wc.l=wc.l,
               max.length=max(nchar(ds$content)),
               wc.w=wc.w)
})

colnames(corp.data.stats) <- c("File", "File Size (MB)", "Object Size (MB)", "Lines", "Max length", "Words")
print(xtable(corp.data.stats), type="html", include.rownames=FALSE)
File                              File Size (MB)  Object Size (MB)  Max length
./final/en_US/en_US.blogs.txt              200.4            248.5       40833
./final/en_US/en_US.news.txt               196.3            249.6       11384
./final/en_US/en_US.twitter.txt            159.4            301.4         140
rm(list=ls())
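
Note that the line and word counts above rely on the Unix wc utility via system(), which is not portable. A minimal, purely base-R sketch of the same counts (an alternative shown for illustration, not what this report uses; the function name count.stats is illustrative only):

# Line and word counts in base R (portable alternative to system("wc ..."))
count.stats <- function(path) {
        lines <- readLines(path, encoding = "UTF-8", skipNul = TRUE)
        data.frame(path = path,
                   lines = length(lines),
                   words = sum(lengths(strsplit(lines, "\\s+"))),
                   max.length = max(nchar(lines)))
}
# Example: count.stats("./final/en_US/en_US.twitter.txt")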

Data Processing

Data Sampling

As the input data from all three files is quite large, only a sample of the data is used for this exploratory analysis.

# Read the full files and draw a random sample of 1000 lines from each source
twitter <- readLines("./final/en_US/en_US.twitter.txt", encoding="UTF-8", skipNul=TRUE)
blogs   <- readLines("./final/en_US/en_US.blogs.txt", encoding="UTF-8", skipNul=TRUE)
news    <- readLines("./final/en_US/en_US.news.txt", encoding="UTF-8", skipNul=TRUE)
set.seed(2708)
data.sample <- c(twitter[sample(1:length(twitter), 1000)],
                 blogs[sample(1:length(blogs), 1000)],
                 news[sample(1:length(news), 1000)])
rm(twitter); rm(blogs); rm(news)
#head(data.sample)

Data Cleaning

The tm package's Corpus was used to clean the data: converting text to lower case, removing punctuation, numbers, and stop words, and stripping extra white space.

library(tm)
corp <- Corpus(VectorSource(data.sample))
# Replace special characters (including quotes) with a space; the hyphen is placed
# last in the character class so it is treated literally rather than as a range
corp <- tm_map(corp, content_transformer(function(x, pattern) gsub(pattern, " ", x)), "[][!#$%()*,.:;<=>@^_|~{}\"'-]")
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removeNumbers)
corp <- tm_map(corp, removePunctuation)
corp <- tm_map(corp, stripWhitespace)
corp <- tm_map(corp, removeWords, c('the', 'this', stopwords('english')))
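
To spot-check the effect of these transformations, one can look at a few cleaned documents; a minimal sketch, assuming the corp object from the chunk above:

# Inspect the first three cleaned documents
lapply(corp[1:3], as.character)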

Creating N-Grams

The next step is to parse the words, create n-grams, and perform exploratory analysis of their frequencies.

#library(rJava)
#.jinit(parameters="-Xmx12g")
library(RWeka)
# Flatten the cleaned corpus into a plain character data frame for tokenization
corpDF <- data.frame(text = sapply(corp, as.character), stringsAsFactors = FALSE)
# Generic function that returns the 'top' most frequent n-grams of a given order
findNGrams <- function(corp, grams, top) {
        # Tokenize into n-grams and tabulate their frequencies
        ngram <- NGramTokenizer(corp, Weka_control(min = grams, max = grams, delimiters = " \\r\\n\\t.,;:\"()?!"))
        ngram <- data.frame(table(ngram))
        # Keep only the most frequent n-grams, sorted by count
        ngram <- ngram[order(ngram$Freq, decreasing = TRUE),][1:top,]
        colnames(ngram) <- c("Words","Count")
        ngram
}
# Create the top 50 uni-, bi- and tri-grams
monoGrams   <- findNGrams(corpDF$text, 1, 50)
biGrams     <- findNGrams(corpDF$text, 2, 50)
triGrams    <- findNGrams(corpDF$text, 3, 50)

N-gram Analysis

library(ggplot2)
#library(SnowballC)
library(wordcloud2) 

# number of ngrams to show in the graph
n <- 20
# Plotting of the various nGrams
ggplot(monoGrams[1:n,], aes(reorder(Words, Count), Count)) + geom_bar(stat = "identity") + xlab("Words") + ggtitle("1-gram") + theme(plot.title = element_text(hjust = 0.5)) + coord_flip()

#wordcloud(monoGrams[,1], max.words = n, random.order = FALSE)
wordcloud2(monoGrams)
ggplot(biGrams[1:n,], aes(reorder(Words, Count), Count)) + geom_bar(stat = "identity") + xlab("Words") + ggtitle("2-gram") + theme(plot.title = element_text(hjust = 0.5)) + coord_flip()

#wordcloud(biGrams[,1], max.words = n, random.order = FALSE)
wordcloud2(biGrams)
ggplot(triGrams[1:n,], aes(reorder(Words, Count), Count)) + geom_bar(stat = "identity") + xlab("Words") + ggtitle("3-gram") + theme(plot.title = element_text(hjust = 0.5)) + coord_flip()

#wordcloud(triGrams[,1], max.words = n, random.order = FALSE)
wordcloud2(triGrams)
#rm(list=ls())

Next Steps

  1. Improving the data quality by adding more criteria to the cleaning step
  2. Refining the sampling process to get a good n-gram representation and building a better prediction model (see the sketch after this list)
  3. Tuning the code for faster performance with a high volume of data
  4. Building a Shiny app that exposes the prediction algorithm through a simple text-input interface
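
As a rough illustration of the planned prediction approach (a sketch only, not the final model), the snippet below uses the biGrams and triGrams tables built above in a simple frequency-based backoff: look up the last two words of the input phrase in the tri-gram table and fall back to the bi-gram table when there is no match. The helper name predictNextWord is hypothetical.

# Minimal frequency-based backoff sketch (hypothetical helper, not the final model).
# Assumes biGrams and triGrams as created above, with columns Words and Count.
predictNextWord <- function(phrase, triGrams, biGrams, top = 3) {
        words <- unlist(strsplit(tolower(phrase), "\\s+"))
        last2 <- paste(tail(words, 2), collapse = " ")
        last1 <- tail(words, 1)
        # Candidate tri-grams whose first two words match the end of the phrase
        hits <- triGrams[grepl(paste0("^", last2, " "), triGrams$Words), ]
        if (nrow(hits) == 0) {
                # Back off to bi-grams whose first word matches the last word of the phrase
                hits <- biGrams[grepl(paste0("^", last1, " "), biGrams$Words), ]
        }
        if (nrow(hits) == 0) return(character(0))
        # Return the final word of the most frequent matching n-grams
        candidates <- head(hits[order(hits$Count, decreasing = TRUE), "Words"], top)
        sapply(strsplit(as.character(candidates), " "), tail, 1)
}
# Example usage (after building the n-gram tables above):
# predictNextWord("thanks for the", triGrams, biGrams)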