Milestone Report

The goal of this project is to show that you have become familiar with the data and that you are on track to create your prediction algorithm. Please submit a report on RPubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise, explaining only the major features of the data you have identified and briefly summarizing your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data-scientist manager. You should make use of tables and plots to illustrate important summaries of the data set. The motivation for this project is to:

  1. Demonstrate that you’ve downloaded the data and have successfully loaded it in.
  2. Create a basic report of summary statistics about the data sets.
  3. Report any interesting findings that you have amassed so far.
  4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

Download the Dataset

# Libraries used for reporting, plotting, tokenization, and text mining.
library(knitr)      # tables (kable)
library(ggplot2)    # static plots
library(plotly)     # interactive plots
library(RWeka)      # n-gram tokenization
library(stringr)    # string and token counting
library(tm)         # text mining / corpus handling
library(wordcloud)  # word clouds
fileUrl <- "https://d396qusza40orc.cloudfront.net/dsscapstone/dataset/Coursera-SwiftKey.zip"
directoryPath <- "D:/Downloads/Coursera-SwiftKey"
zipPath <- "D:/Downloads/Coursera-SwiftKey.zip"

# Download and unzip the dataset only if it is not already present locally.
if (!file.exists(directoryPath)) {
  dir.create(directoryPath)
  download.file(fileUrl, destfile = zipPath)
  unzip(zipPath, exdir = directoryPath)
}
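
As an optional sanity check, the extracted English files can be listed. The call below is illustrative and assumes the archive unpacked into the usual final/en_US folder referenced in the next section.

# Optional check: the archive extracts into final/<locale> folders;
# the English files used below live in final/en_US.
list.files(file.path(directoryPath, "final", "en_US"))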

Summary statistics of the Dataset files

blogsFilePath <- "D:/Downloads/Coursera-SwiftKey/final/en_US/en_US.blogs.txt"
newsFilePath <- "D:/Downloads/Coursera-SwiftKey/final/en_US/en_US.news.txt"
twitterFilePath <- "D:/Downloads/Coursera-SwiftKey/final/en_US/en_US.twitter.txt"

files <- list(blogsFilePath, newsFilePath, twitterFilePath)

size <- sapply(files, function(fileName) utils:::format.object_size(file.info(fileName)[["size"]], "auto"))
lines <- sapply(files, function(fileName) length(readLines(fileName)))
# str_count() with its default (empty) pattern counts characters, so this
# column reports character counts rather than word counts.
chars <- sapply(files, function(fileName) sum(str_count(readLines(fileName))))

file <- c("en_US.blogs.txt", "en_US.news.txt", "en_US.twitter.txt")
summaryDF <- data.frame(file, size, lines, chars)

kable(summaryDF)
file                 size       lines      chars
en_US.blogs.txt      200.4 Mb    899288    208361438
en_US.news.txt       196.3 Mb     77259     15683765
en_US.twitter.txt    159.4 Mb   2360148    162384825
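
Actual word counts (as opposed to the character counts above) could be obtained by passing an explicit token pattern to str_count(). The sketch below is illustrative only; it treats whitespace-delimited tokens as words, which is an assumed, simplified definition.

# Illustrative sketch: count whitespace-delimited tokens as an approximate
# word count ("\\S+" is one possible definition of a word, not the only one).
words <- sapply(files, function(fileName) {
  sum(str_count(readLines(fileName), "\\S+"))
})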
# The "size" column is a formatted string (e.g. "200.4 Mb"), so extract the
# numeric value before plotting; otherwise the bar heights would reflect
# factor levels rather than actual file sizes.
summaryDF$sizeMB <- as.numeric(str_extract(summaryDF$size, "[0-9.]+"))

p1 <- ggplot(summaryDF, aes(x=file, y=sizeMB)) + 
  geom_bar(stat = "identity", aes(fill = file), show.legend = FALSE) +
  labs(title="File size", x="File", y="Size (MB)")

p2 <- ggplot(summaryDF, aes(x=file, y=lines)) + 
  geom_bar(stat = "identity", aes(fill = file), show.legend = FALSE) +
  labs(title="Lines per file", x="File", y="Lines")

p3 <- ggplot(summaryDF, aes(x=file, y=chars)) + 
  geom_bar(stat = "identity", aes(fill = file), show.legend = FALSE) +
  labs(title="Characters per file", x="File", y="Characters")

ggplotly(p1)
ggplotly(p2)
ggplotly(p3)

Interesting findings

The data sets are quite large: each file is roughly 160–200 MB, and the Twitter file alone contains about 2.4 million lines. Loading and processing everything at once would be slow and memory-hungry, so a sampling step will be needed before building the model; a sketch of how a random sample of lines could be drawn follows.
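
A minimal sketch of that sampling step is shown below; the 1% rate, the seed, and the sampleLines() helper are illustrative assumptions rather than final choices.

# Illustrative sampling sketch: keep each line with probability `rate`.
# The 1% rate, the seed, and the helper name are placeholders for illustration.
set.seed(1234)
sampleLines <- function(filePath, rate = 0.01) {
  allLines <- readLines(filePath, skipNul = TRUE)
  allLines[rbinom(length(allLines), size = 1, prob = rate) == 1]
}

blogsSample   <- sampleLines(blogsFilePath)
newsSample    <- sampleLines(newsFilePath)
twitterSample <- sampleLines(twitterFilePath)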

Plans for creating a prediction algorithm and Shiny app

The following procedure will be followed to create the prediction algorithm and the Shiny app: