Outline

The goal of this project is simply to demonstrate that you have become comfortable working with the data and that you are on track to create your prediction algorithm. Please submit a report on R Pubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise, explain only the major features of the data you have identified, and briefly summarize your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data-scientist manager. You should make use of tables and plots to illustrate important summaries of the data set. The motivation for this project is to:

1. Demonstrate that you've downloaded the data and have successfully loaded it in.
2. Create a basic report of summary statistics about the data sets.
3. Report any interesting findings that you have amassed so far.
4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

Loading the R libraries

#Point rJava/RWeka to the local Java installation before loading RWeka
Sys.setenv(JAVA_HOME="C:/Program Files/Java/jre1.8.0_121/")

library(RWeka)
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(stringi)
library(tm)
## Loading required package: NLP
library(ggplot2)
## 
## Attaching package: 'ggplot2'
## The following object is masked from 'package:NLP':
## 
##     annotate
library(stringr)
library(wordcloud)
## Loading required package: RColorBrewer

Load data

First of all, I download the data (if it is not already present) and import the three English text files:

if(!file.exists("Coursera-SwiftKey.zip")){
  download.file("https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip", "Coursera-SwiftKey.zip")
  unzip("Coursera-SwiftKey.zip")
}
#skipNul = TRUE prevents readLines() from stopping early on embedded nulls
blogs <- readLines("final/en_US/en_US.blogs.txt", warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
news <- readLines("final/en_US/en_US.news.txt", warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
twitter <- readLines("final/en_US/en_US.twitter.txt", warn = FALSE, encoding = "UTF-8", skipNul = TRUE)
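
Before sampling, a quick table of summary statistics for the three sources (a minimal sketch using stringi, which is loaded above; stats is a helper data frame introduced here for illustration):

#Summary statistics: file size, line count and word count per source
stats <- data.frame(
  source  = c("blogs", "news", "twitter"),
  size_MB = round(file.size(c("final/en_US/en_US.blogs.txt",
                              "final/en_US/en_US.news.txt",
                              "final/en_US/en_US.twitter.txt")) / 1024^2, 1),
  lines   = c(length(blogs), length(news), length(twitter)),
  words   = c(sum(stri_count_words(blogs)),
              sum(stri_count_words(news)),
              sum(stri_count_words(twitter)))
)
stats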

Clean and sample data

set.seed(123456)

#Draw a uniform random sample of roughly 0.1% of the lines of each source
blogs_sample <- sample(blogs, round(length(blogs) * 0.001))
twitter_sample <- sample(twitter, round(length(twitter) * 0.001))
news_sample <- sample(news, round(length(news) * 0.001))

sample_data <- c(blogs_sample, twitter_sample, news_sample)

Next, I replace some non-UTF-8 characters to clean the sampled data and improve the analysis:

#Replace all non-graphical characters with spaces
sample_data <- str_replace_all(sample_data, "[^[:graph:]]", " ")

#Drop non-ASCII characters such as emojis; sub = "" removes the offending
#characters instead of turning whole lines into NA
sample_data <- iconv(sample_data, "UTF-8", "ASCII", sub = "")

Build a Corpus

The next step is to build a corpus and clean the text, in preparation for building the n-grams:

corpus <- VCorpus(VectorSource(sample_data))
#tolower is not a tm transformation, so it must be wrapped in
#content_transformer() to keep the corpus structure intact
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, stripWhitespace)
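
To sanity-check the cleaning, we can inspect one document of the corpus (a quick look, not part of the pipeline):

#Inspect the first cleaned document
as.character(corpus[[1]])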

Exploratory Analysis

Tokenizing the corpus

#Tokenizing for the uni-grams (term-document matrix; terms are rows)
tdm <- TermDocumentMatrix(corpus)
tdm <- as.matrix(tdm)

#NGramTokenizer() expects character input, so flatten the corpus first
corpus_text <- unlist(lapply(corpus, as.character))

#Tokenizing for the bi-grams
bigram <- NGramTokenizer(corpus_text, Weka_control(min = 2, max = 2))

#Tokenizing for the tri-grams
trigram <- NGramTokenizer(corpus_text, Weka_control(min = 3, max = 3))

Word Cloud of the Most Frequent Words

frequency <- rowSums(tdm)
freq <- sort(frequency, decreasing = TRUE)[1:100]
words <- names(freq)
wordcloud(words, freq)

barplot(head(freq, 10), main = "Top 10 Most Frequent Words", cex.main = 2)
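
A related summary, built on the frequency vector computed above, is dictionary coverage: how many unique words are needed to cover a given share of all word occurrences in the sample. This matters because it bounds how large the prediction model's lookup tables must be (a minimal sketch):

#Cumulative share of word occurrences covered by the most frequent words
cover <- cumsum(sort(frequency, decreasing = TRUE)) / sum(frequency)
c(words_for_50pct = which(cover >= 0.5)[1],
  words_for_90pct = which(cover >= 0.9)[1])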

#10 Most Frequent Bi-grams
bigram_freq <- data.frame(table(bigram))
bigram_ord <- bigram_freq[order(bigram_freq$Freq, decreasing = TRUE), ]
bigram_top <- head(bigram_ord, 10)
#as.character() ensures the n-grams are displayed as text labels
barplot(bigram_top$Freq, names.arg = as.character(bigram_top$bigram), border = NA, las = 2, main = "10 Most Frequent Bi-grams", cex.main = 2)

#10 Most Frequent Tri-grams
trigram_freq <- data.frame(table(trigram))
trigram_ord <- trigram_freq[order(trigram_freq$Freq, decreasing = TRUE), ]
trigram_top <- head(trigram_ord, 10)
barplot(trigram_top$Freq, names.arg = as.character(trigram_top$trigram), border = NA, las = 2, main = "10 Most Frequent Tri-grams", cex.main = 2)

Further Development Plan

My prediction model will use the n-gram frequency tables to offer the three most probable next words, based on the words the user has already typed.
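
As an illustration of the idea, here is a minimal sketch of the intended lookup. predict_next is a hypothetical helper introduced only for illustration; the real implementation will need smoothing and proper backoff across the uni-, bi- and tri-gram tables:

#Hypothetical sketch: given the last two typed words, look up the trigram
#frequency table and return the three most frequent continuations
predict_next <- function(w1, w2, trigrams = trigram_ord, n = 3) {
  prefix <- paste(w1, w2, "")   # e.g. "one of "
  matches <- trigrams[startsWith(as.character(trigrams$trigram), prefix), ]
  if (nrow(matches) == 0) {
    return(head(names(freq), n))   # fall back to the most frequent unigrams
  }
  #Extract the third word of each of the top-n matching trigrams
  vapply(strsplit(as.character(head(matches$trigram, n)), " "),
         `[`, character(1), 3)
}

predict_next("one", "of")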