Outline

The goal of this milestone report is to show that I have become familiar with the data and that I am on track to create the prediction algorithm. The report, published on RPubs (http://rpubs.com/), explains my exploratory analysis and my goals for the eventual app and algorithm. It is meant to be concise, highlight only the major features of the data identified so far, and briefly summarize my plans for the prediction algorithm and Shiny app in a way that is understandable to a non-data-scientist manager, using tables and plots to illustrate important summaries of the data set. The motivation for this project is to:

1. Demonstrate that the data has been downloaded and successfully loaded.
2. Create a basic report of summary statistics about the data sets.
3. Report any interesting findings amassed so far.
4. Get feedback on the plans for creating a prediction algorithm and Shiny app.

Loading the R libraries

Sys.setenv(JAVA_HOME="C:/Program Files/Java/jre1.8.0_121/")

library(RWeka)
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(stringi)
library(tm)
## Loading required package: NLP
library(ggplot2)
## 
## Attaching package: 'ggplot2'
## The following object is masked from 'package:NLP':
## 
##     annotate
library(stringr)
library(wordcloud)
## Loading required package: RColorBrewer

Load data

First, I download the data (if it is not already present) and read in the three English-language files:

if(!file.exists("Coursera-SwiftKey.zip")){
  download.file("https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip", "Coursera-SwiftKey.zip")
  unzip("Coursera-SwiftKey.zip")
}
blogs <- readLines("final/en_US/en_US.blogs.txt", warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
news <- readLines("final/en_US/en_US.news.txt", warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
twitter <- readLines("final/en_US/en_US.twitter.txt", warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
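
Before sampling, it is useful to get a sense of the scale of each source. The code below is a quick sketch of how the basic summary statistics (file size, line count, and word count via stri_count_words) could be tabulated, assuming the three objects loaded above and the default extraction paths:

# Basic summary statistics for the three English-language sources
data.frame(
  source  = c("blogs", "news", "twitter"),
  size_MB = round(c(file.size("final/en_US/en_US.blogs.txt"),
                    file.size("final/en_US/en_US.news.txt"),
                    file.size("final/en_US/en_US.twitter.txt")) / 1024^2, 1),
  lines   = c(length(blogs), length(news), length(twitter)),
  words   = c(sum(stri_count_words(blogs)),
              sum(stri_count_words(news)),
              sum(stri_count_words(twitter)))
)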

Clean and sample data

set.seed(88)

# draw a ~0.1% random sample from each source
blogs_sample   <- sample(blogs,   round(length(blogs)   * 0.001))
twitter_sample <- sample(twitter, round(length(twitter) * 0.001))
news_sample    <- sample(news,    round(length(news)    * 0.001))

sample_data <- c(blogs_sample, twitter_sample, news_sample)

Replacing some non-UTF-8 characters

# remove all non-graphical characters
sample_data <- str_replace_all(sample_data, "[^[:graph:]]", " ")

# convert to ASCII, dropping emojis and other non-ASCII characters
# (sub = "" keeps the line instead of turning it into NA)
sample_data <- iconv(sample_data, "UTF-8", "ASCII", sub = "")

Build a Corpus

The next step is to build a corpus and clean the text in preparation for the n-gram analysis.

corpus <- VCorpus(VectorSource(sample_data))
# content_transformer() keeps the documents as PlainTextDocuments
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
# stem after punctuation and numbers are removed so word endings are detected correctly
corpus <- tm_map(corpus, stemDocument)
corpus <- tm_map(corpus, stripWhitespace)
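
As a quick sanity check on the cleaning steps, a raw sampled line can be compared with its cleaned counterpart (an illustrative check, not part of the pipeline):

# compare the first raw sampled line with the corresponding cleaned document
sample_data[1]
as.character(corpus[[1]])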

Exploratory Analysis

wordcloud(corpus, max.words = 100, random.order = FALSE, rot.per = 0.35,
          use.r.layout = FALSE, colors = brewer.pal(8, "Dark2"))
title("Wordcloud: 100 Most Frequently Used Words")

Word Frequency

The main goal of this section is to analyze the n-grams (unigrams, bigrams, and trigrams) extracted from the corpus.

nGram <- function(ng, data, lowfreq) {
  # Tokenizer splits a string into an n-gram (a stream of terms or tokens) with min and max grams.
  ngramtokenizer <- function(x) NGramTokenizer(x, Weka_control(min = ng, max = ng))  
  # construct a term-document matrix 
  ngram_tdm <- TermDocumentMatrix(data, control = list(tokenize = ngramtokenizer))
  # find the frequent terms in a term-doc matrix with a lower frequency bound 
  freq <- findFreqTerms(ngram_tdm, lowfreq=lowfreq)
  # tabulating frequency of the frequent terms in the form of data frame
  ngram_freq <- rowSums(as.matrix(ngram_tdm[freq,]))
  ngram_freq <- data.frame(ngramTerm=names(ngram_freq), Frequency=ngram_freq)
  ngram_freq <- ngram_freq[order(-ngram_freq$Frequency),]
  rownames(ngram_freq) <- c(1:nrow(ngram_freq))
  return(ngram_freq)
}

Unigram Analysis

ngram <- nGram(1, corpus, 100)

ggplot(data=ngram, 
       aes(x=reorder(ngramTerm, Frequency), y=Frequency)) +
  geom_bar(stat = "identity", fill="green") + 
  coord_flip() + 
  xlab("Words or Terms") + ylab("Frequency") +
  labs(title = "Unigram - Most Frequently Used Words")

Bigram Analysis

ngram <- nGram(2, corpus, 70)

ggplot(data=ngram, 
       aes(x=reorder(ngramTerm, Frequency), y=Frequency)) +
  geom_bar(stat = "identity", fill="green") + 
  coord_flip() + 
  xlab("Words or Terms") + ylab("Frequency") +
  labs(title = "Bigram - Most Frequently Used Words")

Trigram Analysis

ngram <- nGram(3, corpus, 12)

ggplot(data=ngram, 
       aes(x=reorder(ngramTerm, Frequency), y=Frequency)) +
  geom_bar(stat = "identity", fill="green") + 
  coord_flip() + 
  xlab("Words or Terms") + ylab("Frequency") +
  labs(title = "Trigram - Most Frequently Used Words")

Further Development Plan

My model will use the n-gram frequency tables to offer the three most probable next words, based on the words the user has already typed. These frequency tables will feed a Shiny app in which the user enters a phrase and the suggested next words are displayed.
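
To make the idea concrete, the sketch below illustrates one possible lookup strategy using the frequency tables produced by nGram(): take the last one or two words typed, look them up as a prefix in the trigram table, and back off to the bigram table when no match is found. The helper name predict_next and the lowered frequency thresholds are illustrative assumptions, not the final implementation.

# Illustrative sketch only: simple trigram-to-bigram backoff lookup
# using the data frames returned by nGram() (columns ngramTerm, Frequency).
bigram_freq  <- nGram(2, corpus, 2)   # lower thresholds (assumed) for wider coverage
trigram_freq <- nGram(3, corpus, 2)

predict_next <- function(phrase, trigrams, bigrams, n = 3) {
  words <- tolower(unlist(strsplit(phrase, "\\s+")))
  words <- tail(words, 2)

  # try the trigram table first: match the last two words as a prefix
  if (length(words) == 2) {
    prefix <- paste(words, collapse = " ")
    hits <- trigrams[startsWith(as.character(trigrams$ngramTerm), paste0(prefix, " ")), ]
    if (nrow(hits) > 0) return(head(word(as.character(hits$ngramTerm), 3), n))
  }

  # back off to the bigram table: match the last word as a prefix
  prefix <- tail(words, 1)
  hits <- bigrams[startsWith(as.character(bigrams$ngramTerm), paste0(prefix, " ")), ]
  head(word(as.character(hits$ngramTerm), 2), n)
}

predict_next("thanks for the", trigram_freq, bigram_freq)

Because the tables returned by nGram() are already sorted by decreasing frequency, taking the first n matches yields the most probable continuations.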